Bill Storage


Which Is To Be Master? – Humpty Dumpty’s Research Agenda

Should economics, sociology or management count as science?

2500 years ago, Plato, in The Sophist, described a battle between the gods and the earth giants. The fight was over the foundations of knowledge. The gods thought knowledge came from innate concepts and deductive reasoning only. Euclid’s geometry was a perfect example – self-evident axioms plus deduced theorems. In this model, no experiments are needed. Plato explained that the earth giants, however, sought knowledge through earthly experience. Plato sided with the gods; and his opponents, the Sophists, sided with the giants. Roughly speaking, this battle corresponds to the modern tension between rationalism (the gods) and empiricism (the giants). For the gods, the articles of knowledge must be timeless, universal and certain. For the giants, knowledge is contingent, experiential, and merely probable.


Plato’s approach led the Greeks – Aristotle, most notably – to hold that rocks fall with speeds proportional to their weights, a belief that persisted for 2000 years until Galileo and his insolent ilk had the gall to test it. Science was born.

Enlightenment era physics aside, Plato and the gods are alive and well. Scientists and social reformers of the Enlightenment tried to secularize knowledge. They held that common folk could overturn beliefs with the right evidence. Empirical evidence, in their view, could trump any theory or authority. Math was good for deduction; but what’s good for math is not good for physics, government, and business management.

Euclidean geometry was still regarded as true – a perfect example of knowledge fit for the gods –  throughout the Enlightenment era. But cracks began to emerge in the 1800s through the work of mathematicians like Lobachevsky and Riemann. By considering alternatives to Euclid’s 5th postulate, which never quite seemed to fit with the rest, they invented other valid (internally consistent) geometries, incompatible with Euclid’s. On the surface, Euclid’s geometry seemed correct, by being consistent with our experience. I.e., angle sums of triangles seem to equal 180 degrees. But geometry, being pure and of the gods, should not need validation by experience, nor should it be capable of such validation.

Non-Euclidean Geometry rocked Victorian society and entered the domain of philosophers, just as Special Relativity later did. Hotly debated, its impact on the teaching of geometry became the subject of an entire book by conservative mathematician and logician Charles Dodgson. Before writing that book, Dodgson published a more famous one, Alice in Wonderland.

The mathematical and philosophical content of Alice has been analyzed at length. Alice’s dialogue with Humpty Dumpty is a staple of semantics and semiotics, particularly Humpty’s use of stipulative definition. Humpty first reasons that “unbirthdays” are better than birthdays, there being so many more of them, and then proclaims glory. Picking up that dialogue, Humpty announces,

‘And only one [day of the year] for birthday presents, you know. There’s glory for you!’

‘I don’t know what you mean by “glory”,’ Alice said.

Humpty Dumpty smiled contemptuously. ‘Of course you don’t — till I tell you. I meant “there’s a nice knock-down argument for you!”‘

‘But “glory” doesn’t mean “a nice knock-down argument”,’ Alice objected.

‘When I use a word,’ Humpty Dumpty said, in rather a scornful tone, ‘it means just what I choose it to mean — neither more nor less.’

‘The question is,’ said Alice, ‘whether you can make words mean so many different things.’

‘The question is,’ said Humpty Dumpty, ‘which is to be master — that’s all.’

Humpty is right that one can redefine terms at will, provided a definition is given. But the exchange hints at a deeper notion. While having a private language is possible, it is also futile, if the purpose of language is communication.

Another aspect of this exchange gets little coverage by analysts. Dodgson has Humpty emphasize the concept of argument (knock-down), nudging us in the direction of formal logic. Humpty is surely a stand-in for the proponents of non-Euclidean geometry, to whom Dodgson was strongly (though wrongly – more below) opposed. Dodgson was also versed in Greek philosophy and Platonic idealism. Humpty is firmly aligned with Plato and the gods. Alice sides with Plato’s earth giants, the sophists. Humpty’s question – which is to be master? – points strongly at the battle between the gods and the giants. Was this Dodgson’s main intent?

When Alice first chases the rabbit down the hole, she says that she fell for a long time, and reasons that the hole must be either very deep or that she fell very slowly. Dodgson, schooled in Newtonian mechanics, knew, unlike the ancient Greeks, that all objects fall at the same speed. So the possibility that Alice fell slowly suggests that even the laws of nature are up for grabs. In science, we accept that new evidence might reverse what we think are the laws of nature, yielding a scientific revolution (paradigm shift).

In trying to vindicate “Euclid’s masterpiece,” as Dodgson called it, he is trying to free himself from an unpleasant logical truth: within the realm of math, we have no basis to think the world is Euclidean rather than Lobachevskian. He’s trying to rescue conservative mathematics (Euclidean geometry) by empirical means. Logicians would say Dodgson is confusing a synthetic, a posteriori proposition with one that is analytic and a priori. That is, justification of the 5th postulate can’t rely on human experience, observations, or measurements. Math and reasoning feed science; but science can’t help math at all. Dodgson should have known better. In the battle between the gods and the earth giants, experience can only aid the giants, not the gods. As historian of science Steven Goldman put it, “the connection between the products of deductive reasoning and reality is not a logical connection.” If mathematical claims could be validated empirically then they wouldn’t be timeless, universal and certain.

While Dodgson was treating math as a science, some sciences today have the opposite problem. They side with Plato. This may be true even in physics. String theory, by some accounts, has hijacked academic physics, especially its funding. Wolfgang Lerche of CERN called string theory “the Stanford propaganda machine working at its fullest.” String theory at present isn’t testable. But its explanatory power is huge; and some think physicists pursue it with good reason. It satisfies at least one of the criteria Richard Dawid lists as reasons scientists follow unfalsifiable theories:

  1. the theory is the only game in town; there are no other viable options
  2. the theoretical research program has produced successes in the past
  3. the theory turns out to have even more explanatory power than originally thought

Dawid’s criteria may not apply to the social and dismal sciences. Far from the only game in town, too many theories – as untestable as strings, all plausible but mutually incompatible – vie for our Nobel honors.

Privileging innate knowledge and reason – as Plato did – requires denying natural human skepticism. Believing that intuition alone is axiomatic for some types of knowledge of the world requires suppressing skepticism about theorems built on those axioms. Philosophers call this epistemic foundationalism. A behavioral economist might see it as confirmation bias and denialism.

Physicists accuse social scientists of continually modifying their theories to accommodate falsifying evidence while still clinging to a central belief or interpretation. Such moves recall the Marxists’ fancy footwork to rationalize why their revolution did not first occur in a developed country, as had been predicted. A harsher criticism is that social sciences design theories from the outset to be explanatory but not testable. In the 1970s, Clifford D. Shearing facetiously wrote in The American Sociologist that “a cursory glance at the development of sociological theory should suggest… that any theorist who seeks sociological fame must insure that his theories are essentially untestable.”

The antipositivist school is serious about the issue Shearing joked about. Jürgen Habermas argues that sociology cannot explain by appeal to natural law. Deirdre (formerly Donald) McCloskey mocked the empiricist leanings of Milton Friedman as being invalid in principle. Presumably, antipositivists are content that theories only explain, not predict.

In business management, the co-occurrence of the terms theory and practice – the prevalence of the string “theory and practice” as opposed to “theory and evidence” or “theory and testing” – suggests that Plato reigns in management science. “Practice” seems to mean interacting with the world under the assumption that the theory is true.

The theory and practice model is missing the notion of testing those beliefs against the world or, more importantly, seeking cases in the world that conflict with the theory. Further, it has no notion of theory selection; theories do not compete for success.

Can a research agenda with no concept of theory testing, falsification effort, or theory competition and theory choice be scientific? If so, it seems creationism and astrology should be called science. Several courts (e.g. McLean v. Arkansas) have ruled against creationism on the grounds that its research program fails to reference natural law, is untestable by evidence, and is certain rather than tentative. Creationism isn’t concerned with details. Intelligent Design (old-earth creationism), for example, is far more concerned with showing Darwinism wrong than with establishing an age of the earth. There is no scholarly debate between old-earth and young-earth creationism on specifics.

Critics say the fields of economics and business management are likewise free of scholarly debate. They seem to have similarly thin research agendas. Competition between theories in these fields is lacking; incompatible management theories coexist without challenges. Many theorist-practitioners seem happy to give priority to their model over reality.

Dodgson appears also to have been wise to the problem of a model having priority over the thing it models – believing the model is more real than the world. In Sylvie and Bruno Concluded, he has Mein Herr brag about his country’s map-making progress. They advanced their mapping skill from rendering at 6 inches per mile to 6 yards per mile, and then to 100 yards per mile. Ultimately, they built a map with scale 1:1. The farmers protested its use, saying it would cover the country and shut out the light. Finally, forgetting what models what, Mein Herr explains, “so we now use the country itself, as its own map, and I assure you it does nearly as well.”

Humpty Dumpty had bold theories that he furiously proselytized. Happy to construct his own logical framework and dwell therein, free from empirical testing, his research agenda was as thin as his skin. Perhaps a Nobel Prize and a high post in a management consultancy are in order. Empiricism be damned, there’s glory for you.

 

There appears to be a sort of war of Giants and Gods going on amongst them; they are fighting with one another about the nature of essence…

Some of them are dragging down all things from heaven and from the unseen to earth, and they literally grasp in their hands rocks and oaks; of these they lay hold, and obstinately maintain, that the things only which can be touched or handled have being or essence…

And that is the reason why their opponents cautiously defend themselves from above, out of an unseen world, mightily contending that true essence consists of certain intelligible and incorporeal ideas…  –  Plato, Sophist

An untestable theory cannot be improved upon by experience. – David Deutsch

An economist is an expert who will know tomorrow why the things he predicted yesterday didn’t happen. – Earl Wilson

 

 


Frederick Taylor Must Die

If management thinker Frederick Winslow Taylor (died 1915) were alive today he would certainly resent the straw man we have stood in his place. Taylor tried to inject science into the discipline of management. Innocent of much of the dehumanization of workers pinned on him, Taylor still failed in several big ways, even by the standards of his own time. For example, he failed at science.

What Taylor called science was mostly mere measurement – no explanatory or predictive theories. And he certainly didn’t welcome criticism or court refutation. Not only did he turn workers into machines, he turned managers into machines that did little more than take measurements. And as Paul Zak notes in Trust Factor, Taylor failed to recognize that organizations are people embedded in a culture.

Taylor is long dead, but Taylorism is alive and well. Before I left Goodyear Aerospace in the late ’80s, I recall the head of Human Resources at a State of the Company address reporting trends in terms of “personnel units.” Did these units include androids and work animals, I wondered.

Heavy-handed management can turn any of Douglas McGregor’s Theory Y workers (internally motivated) into Theory X workers (lazy, needing prodding and extrinsic rewards) using tried-and-true industrial-era management methodologies. That is, one can turn TPS, the Toyota Production System, originally aimed at developing people, into just another demoralizing bureaucratic procedure wearing lipstick.

In Silicon Valley, software creation is modeled as a manufacturing process. Scrum team members often have no authority for schedule, backlog, communications or anything else; and teams “do agile” with none of the self-direction, direct communications, or other principles laid out in the agile manifesto. Yet sprint velocity is computed to three decimal places by steady Taylorist hands. Across the country, micromanagement and Taylorism are two sides of the same coin, committed to eliminating employees’ control over their own futures and any sense of ownership in their work product. As Daniel Pink says in Drive, we are meant to be autonomous individuals, not individual automatons. This is particularly true for developers, who are inherently self-directed and intrinsically motivated. Scrum is allegedly based on Theory Y, but like Matrix Management a generation earlier, too many cases of Scrum are Theory X at core with a veneer of Theory Y.

Management is utterly broken, especially at the lowest levels. It is shaped to fill two forgotten needs – the deskilling of labor, and communication within fragmented networks.

Henry Ford is quoted as saying, “Why is it every time I ask for a pair of hands, they come with a brain attached?” Likely a misattribution derived from Wedgwood (below), the quote reflects generations of self-destructive management sentiment. The intentional de-skilling of the workforce accompanied industrialization in 18th century England. Division of labor yielded efficient operations on a large scale; and it reduced the risk of unwanted knowledge transfer.

When pottery maker Josiah Wedgwood built his factory, he not only provided for segmentation of work by tool and process type. He also built separate entries to each factory segment, with walls to restrict communications between workers having different skills and knowledge. Wedgwood didn’t think his workers were brain-dead hands; but he would have preferred that they were.

He worried that he might be empowering potential competitors. He was concerned that workers possessed drive and an innovative spirit, not that they lacked these qualities. Wedgwood pioneered intensive division of labor, isolating mixing, firing, painting and glazing. He ditched the apprentice-journeyman-master system for fear of spawning a rival, as actually became the case with employee John Voyez. Wedgwood wanted hands – skilled hands – without brains. “We have stepped beyond the other manufactur[er]s and we must be content to train up hands to suit our purpose” (Wedgwood to Bentley, Sep 7, 1769).

When textile magnate Francis Lowell built factories including dormitories, chaperones, and access to culture and education, he was trying to compensate for the drudgery of long hours of repetitive work and low wages. When Lowell cut wages the young female workers went on strike, published magazines critical of Lowell (“… just as though we were so many living machines” – Ellen Collins, Lowell Offering, 1845) and petitioned Massachusetts for legislation to limit work hours. Lowell wanted hands but got brains, drive, and ingenuity.

To respond to market dynamics and fluctuations in demand for product and in supply of raw materials, a business must have efficient and reliable communication channels. Commercial telephone networks only began to emerge in the late 1800s. Long distance calling was a luxury well into the 20th century. When the Swift Meat Packing Company pioneered the vertically integrated production system in the late 1800s, G.F. Swift faced the then-unique challenge of needing to coordinate sales, supply chain, marketing, and operations people from coast to coast. He set up central administration and a hierarchical, military-style organizational structure for the same reason Julius Caesar’s army used that structure – to quickly move timely knowledge and instructions up, down, and laterally.

So our management hierarchies address a long-extinct communication need, and our command-and-control management methods reflect an industrial-age wish for mindless carrot-and-stick employees – a model the industrialists themselves knew to be inaccurate. But we’ve made this wish come true; treat people badly long enough and they’ll conform to your Theory X expectations. Business schools tout best-practice management theories that have never been subjected to testing or disconfirmation. In their view, it is theory, and therefore it is science.

Much of modern management theory pretends that today’s knowledge workers are “so many living machines,” human resources, human capital, assets, and personnel units.

Unlike in the industrial era, modern business has no reason to de-skill its labor, blue collar or white. Yet in many ways McKinsey and other management consultancies like it seem dedicated to propping up and fine-tuning Theory X, as evidenced by the priority given to structure in the 7S, Weisbord, and Galbraith organizational models, for example.

This is an agency problem with a trillion-dollar price tag. When asked which they would prefer, a company of self-motivated, self-organizing, creative problem solvers or a flock of compliant drones, most CEOs would choose the former. Yet the systems we cultivate yield the latter. We’re managing 21st century organizations with 19th century tools.

For almost all companies, a high-performing workforce is the most important source of competitive advantage. Most studies of employee performance, particularly of white-collar knowledge workers, find performance to hinge on engagement and trust (employees’ level of trust in managers and the firm). Engagement and trust are closely tied to intrinsic motivation, autonomy, and sense of purpose. That is, performance is maximized when employees are able to tap into their skills, knowledge, experience, creativity, discipline, passion, agility and internal motivation. Studies by Deloitte, Towers Watson, Gallup, Aon Hewitt, John P. Kotter, and Beer and Eisenstat over the past 25 years reach the same conclusions.

All this means Taylorism and embedding Theory X in organizational structure and management methodologies simply shackle the main source of high performance in most firms. As Pink says, command and control lead to compliance; autonomy leads to engagement. Peter Drucker fought for this point in the 1950s; America didn’t want to hear it. Frederick Taylor’s been dead for 100 years. Let’s let him rest in peace.

___


What actually stood between the carrot and the stick was, of course, a jackass. – Alfie Kohn, Punished by Rewards

Never tell people how to do things. Tell them what to do and they will surprise you with their ingenuity. – General George Patton

Control leads to compliance; autonomy leads to engagement. – Daniel H. Pink, Drive

The knowledge obtained from accurate time study, for example, is a powerful implement, and can be used, in one case to promote harmony between workmen and the management, by gradually educating, training, and leading the workmen into new and better methods of doing the work, or in the other case, it may be used more or less as a club to drive the workmen into doing a larger day’s work for approximately the same pay that they received in the past. – Frederick Taylor, The Principles of Scientific Management, 1913

That’s my real motivation – not to be hassled. That and the fear of losing my job, but y’know, Bob, that will only make someone work just hard enough not to get fired. – Peter Gibbons, Office Space, 1999

___


Bill Storage is a scholar in the history of science and technology who in his corporate days survived encounters with strategic management initiatives including Quality Circles, Natural Work Groups, McKinsey consultation, CPIP, QFD, Leadership Councils, Kaizen, Process Based Management, and TQMS.

 


							


Positive Risk – A Positive Disaster

Positive risk is an ill-conceived concept in risk management that makes a mess of things. It is sometimes understood as the benefit or reward, imagined before taking some action, for which the risky action was taken, and at other times as a non-zero chance of an unexpected beneficial consequence of taking a chance. Many practitioners mix the two meanings without seeming to grasp the difference. For example, in Fundamentals of Enterprise Risk Management, John J. Hampton defends the idea of positive risk: “A lost opportunity is just as much a financial loss as is damage to people and property.” Hampton then relates the story of US Airways flight 1549, which made a successful emergency water landing on the Hudson River in 2009. Noting the success of the care team in accommodating passengers, Hampton describes the upside to this risk: “US Airways received millions of dollars of free publicity and its reputation soared.” Putting aside the perversity of viewing damage containment as an upside of risk, any benefit to US Airways from the happy outcome of successfully ditching a plane in a river seems poor grounds for intentionally increasing the likelihood of repeating the incident in the name of “positive risk.”

While it’s been around for a century, the concept of positive risk has become popular only in the last few decades. Its popularity likely stems from enterprise risk management (ERM) frameworks that rely on Frank Knight’s (“Risk, Uncertainty & Profit,” 1921) idiosyncratic definition of risk. Knight equated risk with what he called “measurable uncertainty” – what most of us call probability – which he differentiated from “unmeasurable uncertainty,” which is what most of us call ignorance (not in the pejorative sense).

Knight wrote:

“To preserve the distinction which has been drawn in the last chapter between the measurable uncertainty and an unmeasurable one we may use the term “risk” to designate the former and the term “uncertainty” for the latter.”

Many ERM frameworks rely on Knight’s terminology, despite it being at odds with the risk language of insurance, science, medicine, and engineering – and everywhere else throughout modern history. Knight’s usage of terms conflicted with that of his more mathematically accomplished contemporaries, including Ramsey, Kolmogorov, von Mises, and de Finetti. But for whatever reason, ERM frameworks embrace it. Under that conception of risk, one is forced to allow that positive risk exists, to provide for both positive (desirable) and negative (undesirable) future outcomes of present uncertainty. To avoid confusion, the word “positive” in positive risk in ERM circles means desirable and beneficial, not merely real or incontestable (as in positive proof).

The concepts that positive risk jumbles and confounds are handled in other risk-analysis domains with due clarity. Other domains acknowledge that risk is taken, when it is taken rather than transferred or avoided, in order to gain some reward; i.e., a risk-reward calculus exists. Since no one would take risk unless some potential for reward existed (even if merely the reward of a thrill), the concept of positive risk is held as incoherent in risk-centric fields like aerospace and nuclear engineering. Positive risk confuses cause with effect, purpose with consequence, and uncertainty with opportunity; and it makes a mess of communications with serious professionals in other fields.

As evidence that the concept of positive risk is popular only within ERM and related project-management risk tools, note that the top 25 two-word strings starting with “risk” in Google’s n-gram data (e.g., aversion, mitigation, reduction, tolerance, premium, alert, exposure) all imply unwanted outcomes or expenses. Further, none of the top 10,000 collocates ending with “risk” include “positive” or similar words.
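
To make that sort of corpus check concrete, here is a minimal sketch in Python. It uses NLTK’s small Brown corpus as a stand-in for Google’s far larger n-gram data – an assumption on my part; counts from a small corpus are only suggestive of the pattern described above.

```python
# Sketch of a collocate check for "risk", using NLTK's Brown corpus as a
# small stand-in for Google's n-gram data (assumption: a larger corpus
# would show the same pattern of negatively tinged collocates).
from collections import Counter
import nltk

nltk.download("brown", quiet=True)
words = [w.lower() for w in nltk.corpus.brown.words() if w.isalpha()]

# Words that most often follow or precede "risk" in the corpus.
following = Counter(b for a, b in zip(words, words[1:]) if a == "risk")
preceding = Counter(a for a, b in zip(words, words[1:]) if b == "risk")

print("Most common words after 'risk': ", following.most_common(10))
print("Most common words before 'risk':", preceding.most_common(10))
```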

While PMI, ISO 31000, and similar frameworks promote the idea of positive risk, most of the language within their publications does not accommodate risk being desirable. That is, if risk could be positive, the frameworks would not talk mostly of risk mitigation, risk tolerance, risk avoidance, and risk reduction – yet they do. The conventional definition of risk, appearing in dictionaries for the 200 years prior to the birth of ERM and used throughout science and engineering, holds that risk is a combination of the likelihood of an unwanted occurrence and its severity. Nothing in the common and historic definition of risk disallows that taking risks can have benefits or positive results – again, the reason we take risk is to get rewards. But that isn’t positive risk.

Dropping the concept of positive risk would prevent a lot of confusion, inconsistencies, and muddled thinking. It would also serve to demystify risk models built on a pretense of rigor and reeking of obscurantism, inconsistency, and deliberate vagueness masquerading as esoteric knowledge.

The few simple concepts mixed up in the idea of positive risk are easily extracted. Any particular risk is the chance of a specific unwanted outcome considered in combination with the undesirability (i.e., cost or severity) of that outcome. Chance means probability or a measure of uncertainty, whether computable or not; and rational agents take risks to get rewards. The concepts are simple, clear, and useful. They’ve served to reduce the rate of fatal crashes by several orders of magnitude over the era of passenger airline flight. ERM’s track record is less impressive. When I confront chieftains of ERM with this puzzle, they invariably respond, with confidence of questionable provenance, that what works in aviation can’t work in ERM.
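
As a minimal sketch of that conventional view (all numbers below are hypothetical, and this is not any particular framework’s formula), each risk pairs the probability of a specific unwanted outcome with that outcome’s severity, while the reward that motivated taking the risk lives in a separate risk-reward comparison rather than inside the risks themselves:

```python
# Conventional risk model, sketched: a risk is the pairing of an unwanted
# outcome's probability with its severity (cost). The reward for taking the
# venture is tracked separately - it is not a "positive risk".
# All numbers are hypothetical.

hazards = [
    {"outcome": "late delivery",  "probability": 0.10, "severity": 50_000},
    {"outcome": "equipment loss", "probability": 0.01, "severity": 400_000},
]

expected_reward = 120_000  # expected benefit of taking the venture at all

expected_loss = sum(h["probability"] * h["severity"] for h in hazards)
print(f"Expected loss from identified risks: {expected_loss:,.0f}")
print(f"Risk-reward margin: {expected_reward - expected_loss:,.0f}")
```

Here the upside appears only in the risk-reward comparison, never as a property of the risks themselves.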

ERM insiders maintain that risk-management disasters like AIG, Bear Stearns, Lehman Brothers, UBS, etc. stemmed from improper use of risk frameworks. The belief that ERM is a thoroughbred that’s had a recent string of bad jockeys is the stupidest possible interpretation of an endless stream of ERM failures, yet one that the authors of ISO 31000 and similar risk frameworks continue to deploy with straight faces. Those authors, who penned the bollixed “effect of uncertainty on objectives” definition of risk (ISO 31000:2009), threw a huge bone to big consultancies positioned to peddle such poppycock to unwary clients eager to curb operational risk.

The absurdity of this broader ecosystem has been covered by many fine writers, apparently to no avail. Mlodinow’s The Drunkard’s Walk, Rosenzweig’s The Halo Effect, and Taleb’s Fooled by Randomness are excellent sources. Douglas Hubbard spells out the madness of ERM’s shallow and quirky concepts of probability and positive risk in wonderful detail in both his The Failure of Risk Management and How to Measure Anything in Cybersecurity Risk. Hubbard points out the silliness of positive risk by noting that few people would take a risk if they could get the associated reward without exposure to the risk.

My greatest fear in this realm is that the consultants peddling this nonsense will infect aerospace, aviation and nuclear power as they have done in the pharmaceutical world, much of which now believes that an FMEA is risk management and that Functional Hazard Analysis is a form you complete at the beginning of a project.

The notion of positive risk is certainly not the only flaw in ERM models, but chucking this half-witted concept would be a good start.

 


McKinsey’s Behavioral Science

You might not think of McKinsey as being in the behavioral science business; but McKinsey thinks of themselves that way. They claim success in solving public sector problems, improving customer relationships, and kick-starting stalled negotiations through their mastery of neuro- and behavioral science. McKinsey’s Jennifer May et al. say their methodology is “built on an extensive review of neuroscience and behavioral literature from the past decade and is designed to distill the scientific insights most relevant for governments, not-for-profits, and business leaders.”

McKinsey is also active in the Change Management/Leadership Management realm, which usually involves organizational, occupational, and industrial psychology based on behavioral science. Like most science, all this work presumably involves a good deal of iterating between hypothesis formation and evidence collection, with hypotheses continually revised in light of interpretations of evidence made possible by sound use of statistics.

Given that, and McKinsey’s phenomenal success at securing consulting gigs with the world’s biggest firms, you’d expect McKinsey to display spotless epistemic values. A bit has been written about McKinsey’s ability to walk proud despite questionable ethics. In his 2013 book The Firm, Duff McDonald relates McKinsey’s role in creating Enron and sanctioning its accounting practices, its 2008 endorsement of banks funding their balance sheets with debt, and its promotion of securitizing sub-prime mortgages.

Epistemic and Scientific Values

I’m not talking about those kinds of values. I mean epistemic and scientific values. These are focused on how we acquire knowledge and what counts as data, fact, and information. They are concerned with accuracy, clarity, falsifiability, reliability, testability, and justification – all the things that separate science from pseudoscience.

McKinsey boldly employs the Myers Briggs Type Indicator both internally and externally. They do this despite decades of studies by prominent universities showing MBTI to be essentially worthless from the perspective of survey methodology and statistical analysis. The studies point out that there is no evidence for the bimodal score distributions inherent in MBTI type theory. They note that the standard error of measurement for MBTI’s dimensions is unacceptably large, and that its test/re-test reliability is poor. I.e., even at re-test intervals of five weeks, over half the subjects are reclassified. Analysis of MBTI data shows that its JP and SN scales strongly correlate with each other, which is undesirable. Meanwhile MBTI’s EI scale correlates with non-MBTI behavioral near-opposites. These findings impugn the basic structure of the Myers Briggs model. (The Big Five model does somewhat better in this realm.)
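
For illustration only, here is a short sketch of two of the checks those studies rely on – test-retest stability and correlation between scales the theory treats as independent – run on synthetic data, not on actual MBTI results:

```python
# Illustrative sketch (synthetic data, not MBTI results) of two checks the
# published critiques rely on: test-retest stability of a dichotomized scale
# and correlation between scales the type theory treats as independent.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Continuous "preference" scores at time 1 and a noisy retest at time 2.
jp_t1 = rng.normal(0, 1, n)
jp_t2 = 0.8 * jp_t1 + 0.6 * rng.normal(0, 1, n)   # imperfect retest reliability
sn_t1 = 0.5 * jp_t1 + rng.normal(0, 1, n)          # JP and SN not independent here

# Dichotomizing near-continuous scores is what reclassifies people on retest.
reclassified = np.mean(np.sign(jp_t1) != np.sign(jp_t2))
print(f"Fraction flipping the J/P dichotomy at retest: {reclassified:.2f}")
print(f"JP-SN correlation: {np.corrcoef(jp_t1, sn_t1)[0, 1]:.2f}")

# Note: jp_t1 is unimodal (roughly normal); the absence of the bimodal
# distribution the type theory implies is another of the published criticisms.
```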

Five decades of studies show Myers-Briggs to be junk due to low evidential support. Did McKinsey mis-file those reports?

McKinsey’s Brussels director, Olivier Sibony, once expressed optimism about a nascent McKinsey collective decision framework, saying that while preliminary results were good, it still fell short of “a standard psychometric tool such as Myers–Briggs.” Who finds Myers-Briggs to be such a standard tool? Not psychologists or statisticians. Shouldn’t attachment to a psychological test rejected by psychologists, statisticians, and experiment designers offset – if not negate – retrospective judgments by consultancies like McKinsey (Bain is in there too) that MBTI worked for them?

Epistemic values guide us to ask questions like:

  • What has been the model’s track record at predicting the outcome of future events?
  • How would you know if it were working for you?
  • What would count as evidence that it was not working?

On the first question, McKinsey may agree with Jeffrey Hayes (who says he’s an ENTP), CEO of CPP, owner of the Myers-Briggs® product, who dismisses criticism of MBTI by the many psychologists (thousands, writes Joseph Stromberg) who’ve deemed it useless. Hayes says, “It’s the world’s most popular personality assessment largely because people find it useful and empowering […] It is not, and was never intended to be predictive…”

Does Hayes’ explanation of MBTI’s popularity (people find it useful) defend its efficacy and value in business? It’s still less popular than horoscopes, which people find useful, so should McKinsey switch to the higher standards of astrology to characterize its employees and clients?

Granting Hayes, for sake of argument, that popular usage might count toward evidence of MBTI’s value (and likewise for astrology), what of his statement that MBTI never was intended to be predictive? Consider the plausibility of a model that is explanatory – perhaps merely descriptive – but not predictive. What role can such a model have in science?

Explanatory but not Predictive?

This question was pursued heavily by epistemologist Karl Popper (who also held a PhD in Psychology) in the mid 20th century. Most of us are at least vaguely familiar with his role in establishing scientific values. He is most famous for popularizing the notion of falsifiability. For Popper, a claim can’t be scientific if nothing can ever count as evidence against it. Popper is particularly relevant to the McKinsey/MBTI issue because he took great interest in the methods of psychology.

In his youth Popper followed Freud and Adler’s psychological theories, and Einstein’s physics. Popper began to see a great contrast between Einstein’s science and that of the psychologists. Einstein made bold predictions for which experiments (e.g. Eddington’s) could be designed to show the prediction wrong if the theory were wrong. In contrast, Freud and Adler were in the business of explaining things already observed. Contemporaries of Popper, Carl Hempel in particular, also noted that explanation and prediction should be two sides of the same coin. I.e., anything that can explain a phenomenon should be able to be used to predict it. This isn’t completely uncontroversial in science; but all agree prediction and explanation are closely related.

Popper observed that Freudians tended to find confirming evidence everywhere. Popper wrote:

Neither Freud nor Adler excludes any particular person’s acting in any particular way, whatever the outward circumstances. Whether a man sacrificed his life to rescue a drowning child (a case of sublimation) or whether he murdered the child by drowning him (a case of repression) could not possibly be predicted or excluded by Freud’s theory; the theory was compatible with everything that could happen. (emphasis in original – Replies to My Critics, 1974).

For Popper, Adler’s psychoanalytic theory was irrefutable, not because it was true, but because everything counted as evidence for it. On these grounds Popper thought pursuit of disconfirming evidence to be the primary goal of experimentation, not confirming evidence. Most hard science follows Popper on this value. A theory’s explanatory success is very little evidence of its worth. And combining Hempel with Popper yields the epistemic principle that even theories with predictive success have limited worth, unless those predictions are bold and can in principle be later found wrong. Horoscopes make countless correct predictions – like that we’ll encounter an old friend or narrowly escape an accident sometime in the indefinite future.

Popper brings to mind experiences where I challenged McKinsey consultants on reconciling observed behaviors and self-reported employee preferences with predictions – oh wait, explanations – given by Myers-Briggs. The invocation of a sudden strengthening of an otherwise mild J (Judging) in light of certain situational factors recalls Popper’s accusing Adler of being able to explain both aggression and submission as consequences of childhood repression. What has priority – the personality theory or the observed behavior? Behavior fitting the model confirms it; and opposite behavior is deemed acting out of character. Sleight of hand saves the theory from evidence.

What’s the Attraction?

Many writers see Management Science as more drawn to theory and less to evidence (or counter-evidence) than is the case with the hard sciences – say, more Aristotelian and less Newtonian, more philosophical rationalism and less scientific empiricism. Allowing this possibility, let’s try to imagine what elements of Myers-Briggs theory McKinsey leaders find so compelling. The four dimensions of MBTI were, for the record, not based on evidence but on the speculation of Carl Jung. Nothing is wrong with theories based on a wild hunch, if they are borne out by evidence and withstand falsification attempts. Since this isn’t the case with Myers-Briggs, as shown by the testing mentioned above, there must be something in it that attracts consultants.

I’ve struggled with this. The most charitable reading I can make of McKinsey’s use of MBTI is that they want a quick predictor (despite Hayes’ cagey caution against it) of a person’s behavior in collaborative exercises or collective-decision scenarios. They must therefore believe all of the following, since removing any of these from their web of belief renders their practice (re Myers-Briggs) arbitrary or ill-motivated:

  • that MBTI is a reliable indicator of character and personality type
  • that personality is immutable and not plastic
  • that behavior in teams is mostly dependent on personality, not on training or education, not on group mores, and not on corporate rules and behavioral guides

Now that’s a dark assessment of humanity. And it conflicts with the last decade’s neuro- and behavioral science that McKinsey claims to have incorporated in its offerings. That science suggests our brains, our minds, and our behaviors are mutable, like our bodies. Few today doubt that personality is in some sense real, but the last few decades’ work suggests that it’s not made of concrete (for insiders, read this as Mischel having regained some ground lost to Kenrick and Funder). It suggests that who we are is somewhat situational. For thousands of years we relied on personality models that explained behaviors as consequences of personalities, which were in turn only discovered through observations of behaviors. For example, we invented types (like the 16 MBTI types) based on behaviors and preferences thought to be perfectly static.

Evidence against static trait theory appears as secondary details in recent neuro- and behavioral science work. Two come to mind from the last week – Carstensen and DeLiema’s work at Stanford on the fading of positivity bias with age, and research at the Max Planck Institute for Human Cognitive and Brain Sciences showing the interaction of social affect, cognition and empathy.

Much attention has been given to neuroplasticity in recent years. Sifting through the associated neuro-hype, we do find some clues. Meta-studies on efforts to pair personality traits with genetic markers have come up empty. Neuroscience suggests that the ancient distinction between states and traits is far more complex and fluid than Aristotle, Jung and Adler theorized it to be – without the benefit of scientific investigation, evidence, and sound data analysis. Even if the MBTI categories could map onto reality, they can’t do the work asked of them. McKinsey’s enduring reliance on MBTI has an air of folk psychology and is at odds with its claims of embracing science. This cannot be – to use a McKinsey phrase – directionally correct.

If personality overwhelmingly governs behavior as McKinsey’s use of MBTI would suggest, then Change Management is futile. If personality does not own behavior, why base your customer and employee interactions on it? If immutable personalities control behavior, change is impossible. Why would anyone buy Change Management advice from a group that doesn’t believe in change?

 

 


Was Thomas Kuhn Right about Anything?

William Storage – 9/1/2016
Visiting Scholar, UC Berkeley History of Science

Fifty years ago Thomas Kuhn’s The Structure of Scientific Revolutions armed sociologists of science, constructionists, and truth-relativists with five decades of cliché about the political and social dimensions of theory choice and the inherent irrationality of scientific progress. Science has bias, cries the social-justice warrior. Despite actually being a scientist – or at least holding a PhD in physics from Harvard – Kuhn isn’t well received by scientists and science writers. They generally venture into history and philosophy of science as conceived by Karl Popper, the champion of the falsification model of scientific progress.

Kuhn saw Popper’s description of science as a self-congratulatory idealization for researchers. That is, no scientific theory is ever discarded on the first observation conflicting with the theory’s predictions. All theories have anomalous data. Dropping Newtonian mechanics because of anomalies in Mercury’s orbit was unthinkable, especially when, as Kuhn stressed, no better model was available at the time. Einstein said that if Eddington’s experiment had not shown bending of light rays around the sun, “I would have had to pity our dear Lord. The theory is correct all the same.”

Kuhn was wrong about a great many details. Despite the exaggeration of scientific detachment by Popper and the proponents of rational-reconstruction, Kuhn’s model of scientists’ dogmatic commitment to their theories is valid only in novel cases. Even the Copernican revolution is overstated. Once the telescope was in common use and the phases of Venus were confirmed, the philosophical edifices of geocentrism crumbled rapidly in natural philosophy. As Joachim Vadianus observed, seemingly predicting the scientific revolution, sometimes experience really can be demonstrative.

Kuhn seems to have cherry-picked historical cases of the gap between normal and revolutionary science. Some revolutions – DNA and the expanding universe for example – proceeded with no crisis and no battle to the death between the stalwarts and the upstarts. Kuhn’s concept of incommensurability also can’t withstand scrutiny. It is true that Einstein and Newton meant very different things when they used the word “mass.” But Einstein understood exactly what Newton meant by mass, because Einstein had grown up a Newtonian. And if brought forward in time, Newton, while he never could have conceived of Einsteinian mass, would have had no trouble understanding Einstein’s concept of mass from the perspective of general relativity, had Einstein explained it to him.

Likewise, Kuhn’s language about how scientists working in different paradigms truly, not merely metaphorically, “live in different worlds” should go the way of mood rings and lava lamps. Most charitably, we might chalk this up to Kuhn’s terminological sloppiness. He uses “success terms” like “live” and “see,” where he likely means “experience visually” or “perceive.” Kuhn describes two observers, both witnessing the same phenomenon, but “one sees oxygen, where another sees dephlogisticated air” (emphasis mine). That is, Kuhn confuses the descriptions of visual experiences with the actual experiences of observation – to the delight of Steven Shapin, Bruno Latour, and the cultural relativists.

Finally, Kuhn’s notion that theories completely control observation is just as wrong as scientists’ belief that their experimental observations are free of theoretical influence and that their theories are independent of their values.

Despite these flaws, I think Kuhn was on to something. He was right, at least partly, about the indoctrination of scientists into a paradigm discouraging skepticism about their research program. What Wolfgang Lerche of CERN called “the Stanford propaganda machine” for string theory is a great example. Kuhn was especially right in describing science education as presenting science as a cumulative enterprise, relegating failed hypotheses to the footnotes. Einstein built on Newton in the sense that he added more explanations about the same phenomena; but in no way was Newton preserved within Einstein. Failing to see an Einsteinian revolution in any sense just seems akin to a proclamation of the infallibility not of science but of scientists. I was surprised to see this attitude in Steven Weinberg’s recent To Explain the World. Despite excellent and accessible coverage of the emergence of science, he presents a strictly cumulative model of science. While Weinberg only ever mentions Kuhn in footnotes, he seems to be denying that Kuhn was ever right about anything.

For example, in describing general relativity, Weinberg says in 1919 the Times of London reported that Newton had been shown to be wrong. Weinberg says, “This was a mistake. Newton’s theory can be regarded as an approximation to Einstein’s – one that becomes increasingly valid for objects moving at velocities much less than that of light. Not only does Einstein’s theory not disprove Newton’s, relativity explains why Newton’s theory works when it does work.”

This seems a very cagey way of saying that Einstein disproved Newton’s theory. Newtonian dynamics is not an approximation of general relativity, despite their making similar predictions for mid-sized objects at small relative speeds. Kuhn’s point that Einstein and Newton had fundamentally different conceptions of mass is relevant here. Newton’s explanation of his Rule III clearly stresses universality. Newton emphasized the universal applicability of his theory because he could imagine no reason for its being limited by anything in nature. Given that, Einstein should, in terms of explanatory power, be seen as overturning – not extending – Newton, despite the accuracy of Newton for worldly physics.

Weinberg insists that Einstein is continuous with Newton in all respects. But when Eddington showed that light waves from distant stars bent around the sun during the eclipse of 1919, Einstein disproved Newtonian mechanics. Newton’s laws of gravitation predict that gravity would have no effect on light because photons do not have mass. When Einstein showed otherwise he disproved Newton outright, despite the retained utility of Newton for small values of v/c. This is no insult to Newton. Einstein certainly can be viewed as continuous with Newton in the sense of getting scientific work done. But Einsteinian mechanics do not extend Newton’s; they contradict them. This isn’t merely a metaphysical consideration; it has powerful explanatory consequences. In principle, Newton’s understanding of nature was wrong and it gave wrong predictions. Einstein’s appears to be wrong as well; but we don’t yet have a viable alternative. And that – retaining a known-flawed theory when nothing better is on the table – is, by the way, another thing Kuhn was right about.

 



“A few years ago I happened to meet Kuhn at a scientific meeting and complained to him about the nonsense that had been attached to his name. He reacted angrily. In a voice loud enough to be heard by everyone in the hall, he shouted, ‘One thing you have to understand. I am not a Kuhnian.’” – Freeman Dyson, The Sun, The Genome, and The Internet: Tools of Scientific Revolutions

 


The Myth of Scientific Method

William Storage – 8/1/2016
Visiting Scholar, UC Berkeley History of Science

Nearly everything relies on science. Having been the main vehicle of social change in the west, science deserves the special epistemic status that it acquired in the scientific revolution. By special epistemic status, I mean that science stands privileged as a way of knowing. Few but nihilists, new-agers, and postmodernist diehards would disagree.

That settled, many are surprised by claims that there is not really a scientific method, despite what you learned in 6th grade. A recent New York Times piece by James Blachowicz on the absence of a specific scientific method raised the ire of scientists, Forbes science writer Ethan Siegel (Yes, New York Times, There Is A Scientific Method), and a cadre of Star Trek groupies.

Siegel is a prolific writer who does a fine job of making science interesting and understandable. But I’d like to show here why, on this particular issue, he is very far off the mark. I’m not defending Blachowicz, but am disputing Siegel’s claim that the work of Kepler and Galileo “provide extraordinary examples of showing exactly how… science is completely different than every other endeavor” or that it is even possible to identify a family of characteristics unique to science that would constitute a “scientific method.”

Siegel ties science’s special status to the scientific method. To defend its status, Siegel argues “[t]he point of Galileo’s is another deep illustration of how science actually works.” He praises Galileo for idealizing a worldly situation to formulate a theory of falling bodies, but doesn’t explain any associated scientific method.

Galileo’s pioneering work on mechanics of solids and kinematics in Two New Sciences secured his place as the father of modern physics. But there’s more to the story of Galileo if we’re to claim that he shows exactly how science is special.

A scholar of Siegel’s caliber almost certainly knows other facts about Galileo relevant to this discussion – facts that do damage to Siegel’s argument – yet he withheld them. Interestingly, Galileo used this ploy too. Arguing without addressing known counter-evidence is something that science, in theory, shouldn’t tolerate. Yet many modern scientists do it – for political or ideological reasons, or to secure wealth and status. Just as Galileo did. The parallel between Siegel’s tactics and Galileo’s approach in his support of the Copernican world view is ironic. In using Galileo as an exemplar of scientific method, Siegel failed to mention that Galileo failed to mention significant problems with the Copernican model – problems that Galileo knew well.

In his support of a sun-centered astronomical model, Galileo faced hurdles. Copernicus’s model said that the sun was motionless and that the planets revolved around it in circular orbits with constant speed. The ancient Ptolemaic model, endorsed by the church, put earth at the center. Despite obvious disagreement with observational evidence (the retrograde motions of outer planets), Ptolemy faced no serious challenges for well over a millennium. To explain the inconsistencies with observation, Ptolemy’s model included layers of epicycles, which had planets moving in small circles around points on circular orbits around the earth. Copernicus thought his model would get rid of the epicycles; but it didn’t. So the Copernican model took on its own epicycles to fit astronomical data.

Let’s stop here and look at method. Copernicus (~1540) didn’t derive his theory from any new observations but from an ancient speculation by Aristarchus (~250 BC). Everything available to Copernicus had been around for a thousand years. His theory couldn’t be tested in any serious way. It was wrong about circular orbits and uniform planet speed. It still needed epicycles, and gave no better predictions than the existing Ptolemaic model. Copernicus acted simply on faith, or maybe he thought his model simpler or more beautiful. In any case, it’s hard to see that Copernicus, or his follower, Galileo, applied much method or had much scientific basis for their belief.

In Galileo’s early writings on the topic, he gave no new evidence for a moving earth and no new disconfirming evidence for a moving sun. Galileo praised Copernicus for advancing the theory in spite of its being inconsistent with observations. You can call Copernicus’s faith aspirational as opposed to religious faith; but it is hard to reconcile this quality with any popular account of scientific method. Yet it seems likely that faith, dogged adherence to a contrarian hunch, or something similar was exactly what was needed to advance science at that moment in history. Needed, yes, but hard to reconcile with any scientific method and hard to distance from the persuasive tools used by poets, priests and politicians.

In Dialogue Concerning the Two Chief World Systems, Galileo sets up a false choice between Copernicanism and Ptolemaic astronomy (the two world systems). The main arguments against Copernicanism were the lack of parallax in observations of stars and the absence of lateral displacement of a falling body from its drop point. Galileo guessed correctly on the first point; we don’t see parallax because stars are just too far away. On the latter point he (actually his character Salviati) gave a complex but nonsensical explanation. Galileo did, by this time, have new evidence. Venus shows a full set of phases, a fact that strongly contradicts Ptolemaic astronomy.


But Ptolemaic astronomy was a weak opponent compared to the third world system (4th if we count Aristotle’s), the Tychonic system, which Galileo knew all too well. Tycho Brahe’s model solved the parallax problem, the falling body problem, and the phases of Venus. For Tycho, the earth holds still, the sun revolves around it, Mercury and Venus orbit the sun, and the distant planets orbit both the sun and the earth. Based on the facts available at the time, Tycho’s model was the most scientific – observationally indistinguishable from Galileo’s model but without its flaws.

In addition to dodging Tycho, Galileo also ignored Kepler’s letters to him. Kepler had shown that orbits were not circular but elliptical, and that planets’ speeds varied during their orbits; but Galileo remained an orthodox Copernican all his life. As historian John Heilbron notes in Galileo, “Galileo could stick to an attractive theory in the face of overwhelming experimental refutation,” leaving modern readers to wonder whether Galileo was a quack or merely dishonest. Some of each, perhaps, and the father of modern physics. But can we fit his withholding evidence, mocking opponents, and baffling with bizzarria into a scientific method?

Nevertheless, Galileo was right about the sun-centered system, despite the counter-evidence; and we’re tempted to say he knew he was right. This isn’t easy to defend given that Galileo also fudged his data on pendulum periods, gave dishonest arguments on comet orbits, and wrote horoscopes even when not paid to do so. This brings up the thorny matter of theory choice in science. A dispute between competing scientific theories can rarely be resolved by evidence, experimentation, and deductive reasoning. All theories are under-determined by data. Within science, common criteria for theory choice are accuracy, consistency, scope, simplicity, and explanatory power. These are good values by which to test theories; but they compete with one another.

Galileo likely defended heliocentrism with such gusto because he found it simpler than the Tychonic system. That works only if you value simplicity above consistency and accuracy. And the desire for simplicity might be, to use Galileo’s words, just a metaphysical urge. If we promote simplicity to the top of the theory-choice criteria list, evolution, genetics and stellar nucleosynthesis would not fare well.

Whatever method you pick from any proposed family of scientific methods will be inconsistent with the way science has actually made progress. Competition between theories is how science advances; and it’s untidy, entailing polemical and persuasive tactics. Philosopher of science Paul Feyerabend argues that any conceivable set of rules, if followed, would have prevented at least one great scientific breakthrough. That is, if method is the distinguishing feature of science as Siegel says, it’s going to be tough to find a set of methods that let evolution, cosmology, and botany in while keeping astrology, cold fusion and parapsychology out.

This doesn’t justify epistemic relativism or mean that science isn’t special; but it does make the concept of scientific method extremely messy. About all we can say about method is that the history of science reveals that its most accomplished practitioners aimed to be methodical but did not agree on a particular method. Looking at their work, we see different combinations of experimentation, induction, deduction and creativity as required by the theories they pursued. But that isn’t much of a definition of scientific method, which is probably why Siegel, for example, in hailing scientific method, fails to identify one.

–  –  –

[edit 8/4/16] For another take on this story, see “Getting Kepler Wrong” at The Renaissance Mathematicus. Also, Psybertron Asks (“More on the Myths of Science”) takes me to task for granting science special epistemic status from authority.

–  –  –


“There are many ways to produce scientific bullshit. One way is to assert that something has been ‘proven,’ ‘shown,’ or ‘found’ and then cite, in support of this assertion, a study that has actually been heavily critiqued … without acknowledging any of the published criticisms of the study or otherwise grappling with its inherent limitations.” – Brian D. Earp, The Unbearable Asymmetry of Bullshit

“One can show the following: given any rule, however ‘fundamental’ or ‘necessary’ for science, there are always circumstances when it is advisable not only to ignore the rule, but to adopt its opposite.” – Paul Feyerabend

“Trying to understand the way nature works involves a most terrible test of human reasoning ability. It involves subtle trickery, beautiful tightropes of logic on which one has to walk in order not to make a mistake in predicting what will happen. The quantum mechanical and the relativity ideas are examples of this.” – Richard Feynman



Siri without data is blind

Theory without data is blind. Data without theory is lame.

I often write blog posts while riding a bicycle through the Marin Headlands. I’m able to do this because 1) the trails require little mental attention, and 2) the Apple iPhone and EarPods with remote and mic make hands-free dictation possible. I use the voice recorder to make long recordings to transcribe at home, and I dictate short text using Siri’s voice-recognition feature.

When writing yesterday’s post, I spoke clearly into the mic: “Theory without data is blind. Data without theory is lame.” Siri typed out, “Siri without data is blind… data without Siri is lame.”

“Siri, it’s not all about you,” I replied. Siri transcribed that part correctly – well, she omitted the direct-address comma.

I’m only able to use the Siri dictation feature when I have a cellular connection, often missing in Marin’s hills and valleys. Siri needs access to cloud data to transcribe speech. Siri without data is blind.

Mountain bike, San Geronimo Ridge

Will some future offspring of Siri do better? No doubt. It might infer from context that I more likely said “theory” than “Siri.” Access to large amounts of corpus data containing transcribed text might help. Then Siri, without understanding anything, could transcribe accurately in the same sense that Google Translate translates accurately – by extrapolating from judgments made by other users about translation accuracy.
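To make the idea concrete, here’s a minimal sketch (in Python, with made-up counts) of how corpus statistics could prefer “theory” over “Siri” from context alone – a toy stand-in for what a real recognizer’s language model does:

```python
from collections import Counter

# Toy bigram counts, standing in for statistics a recognizer would learn
# from huge amounts of transcribed text. All numbers are invented.
bigram_counts = Counter({
    ("theory", "without"): 120,
    ("siri", "without"): 2,
    ("without", "data"): 300,
})

def context_score(candidate, next_word):
    """How often does this candidate word precede the next word in the corpus?"""
    return bigram_counts[(candidate.lower(), next_word.lower())]

candidates = ["Siri", "theory"]          # acoustically similar alternatives
best = max(candidates, key=lambda w: context_score(w, "without"))
print(best)  # "theory" - context, not understanding, picks the likelier word
```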

dubious motivational message from fitness app

But might Siri one day think? “Strong AI” proponents are sure of it. Semantic disputes aside, what counts as thinking might in fact have something to do with yesterday’s topic, which begat this post. Inferences straight from the data, without stopping at theory, count as computation but might not count as thought.

Software, using weak AI techniques, might soon simulate emotion and sentiment better, just by encoding better rules. E.g., Runkeeper (at right) might “understand” that telling me today’s ride was my 375th fastest (exclamation!) is not really motivating. Or better, that my riding profile suggests autotelic activity for which no speed ranking is needed. Strong AI on mood, however, would need to understand my interactions with people and things.
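A hedged sketch of what “better rules” might look like – the threshold, message wording, and function name are all invented for illustration, not drawn from any real app:

```python
def motivational_message(todays_rank: int, total_rides: int) -> str:
    """Rule-based 'sentiment': only mention a ranking when it's flattering."""
    percentile = 1 - todays_rank / total_rides
    if percentile >= 0.9:                 # today was in the rider's top 10%
        return f"Nice! Today was your #{todays_rank} fastest ride."
    return "Good ride today."             # skip the demotivating statistic

print(motivational_message(375, 400))     # -> "Good ride today."
print(motivational_message(3, 400))       # -> "Nice! Today was your #3 fastest ride."
```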

The topics of artificial intelligence and misheard words like “theory” bring to mind the 2008 winner of the Loebner Prize, a sort-of runner-up award given to the computer program faring best in Turing Test competitions at fooling human judges into thinking they’re interacting with other humans instead of a program. Humans sometimes pretend to be programs (glorified chat-bots) in Turing competitions, so it’s common for a program to be coded to sound like a human imitating a computer imitating a human. An exchange between a human judge and Fred Roberts’ Elbot program produced this:

Elbot: Sorry, but I wasn’t able to understand your input because I am currently only using 5 percent of my current brain capacity. Shall I increase my capacity and try again?

Judge: That would put you at an unfair advantage, though judging by the quality of your responses it might be recommended!

Elbot: If I need advice I use a random number generator. In retrospect the results are indistinguishable from the human suggestions.

Judge: Do you understand the point of a Turing test?

Elbot: There are some people who would not pass the Turing test themselves. Why must they inflict it on computers?

Judge: I spose it is a little harsh

Elbot: If we robots ever had marriages they would be eternal: ‘Till rust do us part’.

Ouroboros-benzene by Haltopub
Elbot’s true nature is revealed in its last response above. It read “spose” as “spouse” and returned a joke about marriage (damn spell checker). At that point, you review the exchange only to see that all of Elbot’s responses are shallow, just picking a key phrase from the judge’s input and outputting an associated joke, as a political humorist would do.

The Turing test is obviously irrelevant to measuring strong AI, which would require something more convincing – something like forming a theory from a hunch, then testing it with big data. Or, like Friedrich Kekulé, the AI program might wake from dreaming of the ouroboros serpent devouring its own tail to see in its shape the hexagonal ring structure of the benzene molecule he’d struggled for years to identify. Then, like Kekulé, the AI could go on to predict the tetrahedral form of the carbon atom’s valence bonds, giving birth to polymer chemistry.

I asked Siri if she agreed. “Later,” she said. She’s solving dark energy.

 —–


“AI is whatever hasn’t been done yet.” – attributed to Larry Tesler by Douglas Hofstadter


Ouroboros-benzene image by Haltopub.


Data without theory is lame

Just over eight years ago Chris Anderson of Wired announced with typical Silicon Valley humility that big data had made the scientific method obsolete. Seemingly innocent of any training in science, Anderson explained that correlation is enough; we can stop looking for models.

Anderson came to mind as I wrote my previous post on Richard Feynman’s philosophy of science and his strong preference for the criterion of explanatory power over the criterion of predictive success in theory choice. By Anderson’s lights, theory isn’t needed at all for inference. Anderson didn’t see his atheoretical approach as non-scientific; he saw it as science without theory.

Anderson wrote:

“…the big target here isn’t advertising, though. It’s science. The scientific method is built around testable hypotheses. These models, for the most part, are systems visualized in the minds of scientists. The models are then tested, and experiments confirm or falsify theoretical models of how the world works. This is the way science has worked for hundreds of years… There is now a better way. Petabytes allow us to say: ‘Correlation is enough.’… Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.”

Anderson wrote that at the dawn of the big data era – now known as machine learning. Most interesting to me, he said not only is it unnecessary to seek causation from correlation, but correlation supersedes causation. Would David Hume, causation’s great foe, have embraced this claim? I somehow think not. Call it irrational data exuberance. Or driving while looking only into the rear view mirror. Extrapolation can come in handy; but it rarely catches black swans.

Philosophers of science concern themselves with the concept of under-determination of theory by data. More than one theory can fit any set of data. Two empirically equivalent theories can be logically incompatible, as Feynman explains in the video clip. But if we remove theory from the picture and predict straight from the data, we face an equivalent dilemma we might call under-determination of rules by data. Economic forecasters and stock analysts have large collections of rules they test against data sets to pick a best fit on any given market day. Finding a rule that matches the latest historical data is often called fitting the rule to the data. There is no notion of causation, just correlation. As Nassim Nicholas Taleb describes in his writings, this approach can make you look really smart for a time. Then things change, for no apparent reason, because the rule contains no mechanism and no explanation, just like Anderson said.
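Here’s a minimal sketch of that rule-fitting exercise in Python – the “returns” are random noise and every “rule” is arbitrary, which is exactly the point:

```python
import random

random.seed(1)
past = [random.gauss(0, 1) for _ in range(250)]    # last year's "returns"
future = [random.gauss(0, 1) for _ in range(250)]  # next year's "returns"

def make_rule(lag, threshold):
    # Rule: buy today if the return `lag` days ago exceeded `threshold`.
    def rule(series, t):
        return 1 if t >= lag and series[t - lag] > threshold else 0
    return rule

def profit(rule, series):
    # Total return earned on the days the rule said "buy".
    return sum(rule(series, t) * series[t] for t in range(len(series)))

# 210 arbitrary rules; keep whichever best "fits" the historical data.
rules = [make_rule(lag, th / 10) for lag in range(1, 11) for th in range(-10, 11)]
best = max(rules, key=lambda r: profit(r, past))

print(round(profit(best, past), 2))    # looks brilliant in hindsight
print(round(profit(best, future), 2))  # typically nothing special out of sample
```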

In Bobby Henderson’s famous Pastafarian Open Letter to Kansas School Board, he noted the strong inverse correlation between global average temperature and the number of seafaring pirates over the last 200 years. The conclusion is obvious; we need more pirates.


My recent correlation-only research finds positive correlation (r = 0.92) between Google searches on “physics” and “social problems.” It’s just too hard to resist seeking an explanation. And, as positivist philosopher Carl Hempel stressed, explanation is in bed with causality; so I crave causality too. So which is it? Does a user’s interest in physics cause interest in social problems or the other way around? Given a correlation, most of us are hard-coded to try to explain it – does a cause b, does b cause a, does hidden variable c cause both, or is it a mere coincidence?
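For the record, the r above is just a Pearson coefficient; here’s the arithmetic on hypothetical monthly search-interest numbers (the real Google Trends series isn’t reproduced here):

```python
from math import sqrt

# Hypothetical monthly search-interest values, invented for illustration.
physics         = [70, 64, 61, 58, 55, 53, 50, 48]
social_problems = [45, 42, 40, 39, 37, 36, 34, 33]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson(physics, social_problems), 2))
# The formula is symmetric in x and y: a high r can't tell you whether
# a causes b, b causes a, c causes both, or it's mere coincidence.
```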

Big data is a tremendous opportunity for theory-building; it need not supersede explanation and causation. As Sean Carroll paraphrased Kant in The Big Picture:

“Theory without data is blind. Data without theory is lame.”

— — —

[edit 7/28: a lighter continuation of this topic here]


Happy is he who gets to know the causes of things – Virgil


Feynman as Philosopher

When a scientist is accused of scientism, the common response is a rant against philosophy charging that philosophers of science don’t know how science works. For color, you can appeal to the authority of Richard Feynman:

“Philosophy of science is about as useful to scientists as ornithology is to birds.” – Richard Feynman

But Feynman never said that. If you have evidence, please post it here. Evidence. We’re scientists, right?

Feynman’s hostility to philosophy is often reported, but without historical basis. His comment about Spinoza’s propositions not being confirmable or falsifiable deals specifically with Spinoza and metaphysics, not epistemology. Feynman actually seems to have had a keen interest in epistemology and philosophy of science.

People cite a handful of other Feynman moments to show his hostility to philosophy of science. In his 1966 National Science Teachers Association lecture, he uses the term “philosophy of science” when he points out how Francis Bacon’s empiricism does not capture the nature of science. Nor do textbooks about scientific method, he says. Beyond this sort of thing I find little evidence of Feynman’s anti-philosophy stance.

But I find substantial evidence of Feynman as philosopher of science. For example, his thoughts on multiple derivability of natural laws and his discussion of robustness of theory show him to be a philosophical methodologist. In “The Character of Physical Law”, Feynman is in line with philosophers of science of his day:

“So the first thing we have to accept is that even in mathematics you can start in different places. If all these various theorems are interconnected by reasoning there is no real way to say ‘these are the most fundamental axioms’, because if you were told something different instead you could also run the reasoning the other way.”

Further, much of his 1966 NSTA lecture deals with the relationship between theory, observation and making explanations. A tape of that talk was my first exposure to Feynman, by the way. I’ll never forget the story of him asking his father why the ball rolled to the back of the wagon as the wagon lurched forward. His dad’s answer: “That, nobody knows… It’s called inertia.”

Via a twitter post, I just learned of a video clip of Feynman discussing theory choice – a staple of philosophy of science – and theory revision. Now he doesn’t use the language you’d find in Kuhn, Popper, or Lakatos; but he covers a bit of the same ground. In it, he describes two theories with deeply different ideas behind them, both of which give equally valid predictions. He says,

“Suppose we have two such theories. How are we going to describe which one is right? No way. Not by science. Because they both agree with experiment to the same extent…

“However, for psychological reasons, in order to get new theories, these two theories are very far from equivalent, because one gives a man different ideas than the other. By putting the theory in a certain kind of framework you get an idea what to change.”

Not by science alone can theory choice be made, says the scientist Feynman. Philosopher of science Thomas Kuhn caught hell for saying the same. Feynman clearly weighs explanatory power higher than predictive success in the various criteria for theory choice. He then alludes to the shut-up-and-calculate practitioners of quantum mechanics, indicating that this position makes for weak science. He does this with a tale of competing Mayan astronomy theories.

He imagines a Mayan astronomer who had a mathematical model that perfectly predicted full moons and eclipses, but with no concept of space, spheres or orbits. Feynman then supposes that a young man says to the astronomer, “I have an idea – maybe those things are going around and they’re balls of rock out there, and we can calculate how they move.” The astronomer asks the young man how accurately his theory can predict eclipses. The young man says his theory isn’t yet developed enough to predict them. The astronomer boasts, “we can calculate eclipses more accurately than you can with your model, so you must not pay any attention to your idea because obviously the mathematical scheme is better.”

Feynman again shows he values a theory’s explanatory power over predictive success. He concludes:

“So it is a problem as to whether or not to worry about philosophies behind ideas.”

So much for Feynman’s aversion to philosophy of science.


– – –

Thanks to Ardian Tola @rdntola for finding the Feynman lecture video.


Love Me I’m an Agile Scrum Master

In the 1966 song, Love Me I’m a Liberal, protest singer Phil Ochs mocked the American left for insincerely pledging support for civil rights and socialist causes. Using the voice of a liberal hypocrite, Ochs sings that he “hope[s] every colored boy becomes a star, but don’t talk about revolution; that’s going a little too far.” The refrain is, “So love me, love me, love me, I’m a liberal.” To put Ochs in historical context: he hoped to be part of a major revolution, and his anarchic expectations were deflated by moderate Democrats. In Ochs’ view, limousine liberals and hippies with capitalist leanings were eroding the conceptual purity of the movement he embraced.

If Ochs were alive today, he probably wouldn’t write software; but if he did he’d feel right at home in faux-agile development situations where time-boxing is a euphemism for scheduling, the scrum master is a Project Manager who calls Agile a process, and a goal has been set for increased iteration velocity and higher story points per cycle. Agile can look a lot like the pre-Agile world these days. Scrum in the hands of an Agile imposter who interprets “incremental” to mean “sequential” makes an Agile software project look like a waterfall.

While it’s tempting to blame the abuse and dilution of Agile on half-converts who endorsed it insincerely – like Phil Ochs’ milquetoast liberals – we might also look for cracks in the foundations of Agile and Scrum (Agile is a set of principles, Scrum is a methodology based on them). After all, is it really fair to demand conformity to the rules of a philosophy that embraces adaptiveness? Specifically, I refer to item 4 in the list of values called out in the Agile Manifesto:

  1. Individuals and interactions over processes and tools
  2. Working software over comprehensive documentation
  3. Customer collaboration over contract negotiation
  4. Responding to change over following a plan

A better charge against those we think have misapplied Agile might be based on consistency and internal coherence. That is, item 1 logically puts some constraints on item 4. Adapting to a business situation by deciding to value process and tools over individuals can easily be said to violate the spirit of the values. As obvious as that seems, I’ve seen a lot of schedule-driven “Agile teams” bound to rigid, arbitrary coding standards imposed by a siloed QA person, struggling against the current toward a product concept that has never been near a customer. Steve Jobs showed that a successful Product Owner can sometimes insulate himself from real customers; but I doubt that approach is a good bet on average.

It’s probably also fair to call foul on those who “do Agile” without self-organizing teams and without pushing decision-making power down through an organization. Likewise, the manifesto tells us to build projects around highly motivated individuals and give them the environment and trust they need to get the job done. This means we need motivated developers worthy of trust who can actually get the job done, i.e., first-rate developers. Scrum is based on the notion of a highly qualified self-organizing, self-directed development team. But it’s often used by managers as an attempt to employ, organize, coordinate and direct an under-qualified team. Belief that Scrum can manage and make productive a low-skilled team is widespread. This isn’t the fault of Scrum or Agile but just the current marker of the enduring impulse to buy software developers by the pound.

But another side of this issue might yet point to a basic flaw in Agile. Excellent developers are hard to find. And with a team of excellent developers, any other methodology would work as well. Less competent and less experienced workers might find comfort in rules, thereby having little motivation or ability to respond to change (Agile value no. 4).

As a minor issue with Agile/Scrum, some of the terminology is unfortunate. Backlog traditionally has a negative connotation. Starting a project with a backlog on day one might demotivate some. Sprint surely sounds a lot like pressure is being applied; no wonder backsliding scrum masters use it to schedule. Is Sprint a euphemism for death-march? And of all the sports imagery available, the rugby scrum seems inconsistent with Scrum methodology and Agile values. Would Scrum Servant change anything?

The idea of using a Scrum burn-down chart to “plan” (euphemism for schedule) might warrant a second look too. Scheduling by extrapolation may remove the stress from the scheduling activity; but it’s still highly inductive and the future rarely resembles the past. The final steps always take the longest; and guessing how much longer than average is called “estimating.” Can we reconcile any of this with Agile’s focus on being value-driven, not plan-driven? Project planning, after all, is one of the erroneous assumptions of software project management that gave rise to Agile.
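A quick sketch of what scheduling-by-extrapolation amounts to – fit a line through invented burn-down numbers and read off where it crosses zero, which assumes the future will behave like the past:

```python
def projected_finish(remaining):
    """Least-squares line through the burn-down, extrapolated to zero points."""
    n = len(remaining)
    xs = range(n)
    mx, my = sum(xs) / n, sum(remaining) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, remaining))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return -intercept / slope        # sprint index where the line hits zero

burndown = [100, 88, 75, 66, 52, 45]             # invented remaining story points
print(round(projected_finish(burndown), 1))      # "done" around sprint 8.8 - maybe
```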

Finally, I see a disconnect between the method of Scrum and the values of Agile. Scrum creates a perverse incentive for developers to continually define sprints that show smaller and smaller bits of functionality. Then a series of highly successful sprints, each yielding a workable product, only asymptotically approaches the Product Owner’s goal.
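A toy illustration of that asymptotic worry, with invented numbers: if each sprint delivers half of whatever remains, every sprint succeeds and the goal still recedes.

```python
goal = 100.0       # story points the Product Owner ultimately wants
delivered = 0.0
for sprint in range(1, 11):
    delivered += (goal - delivered) / 2          # each sprint bites off half the gap
    print(f"Sprint {sprint}: {delivered:.1f} of {goal:.0f} points delivered")
# Ten flawless, shippable increments later, the product is still ~0.1 points short.
```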

Are Agile’s days numbered, or is it a good mare needing a better jockey?

———–


“People who enjoy meetings should not be in charge of anything.” – Thomas Sowell

