Bill Storage


Science vs Philosophy Again

Scientists, for the most part, make lousy philosophers.

Yesterday I made a brief post on the hostility to philosophy expressed by scientists and engineers. A thoughtful reply by philosopher of science Tom Hickey left me thinking more about the topic.

Scientists are known for being hostile to philosophy and for being lousy at philosophy when they practice it inadvertently. Scientists tend to do a lousy job even at analytic philosophy, the realm most applicable to science (what counts as good thinking, evidence and proof), not merely lousy when they rhapsodize on ethics.

But the science-vs.-philosophy rift is a late 20th century phenomenon. Bohr, Einstein, and Ramsey were philosophy-friendly. This doesn’t mean they did philosophy well. Many scientists, before the rift between science (“natural philosophy,” as it was known) and philosophy, were deeply interested in logic, ethics and metaphysics. Even the most influential scientists have poor track records in philosophy – Pythagoras (if he existed), Kepler, Leibniz and Newton, for example. Einstein’s naïve social and economic philosophy might be excused for being far from his core competency, though the charge of ultracrepidarianism might still apply. More importantly, Einstein’s dogged refusal to budge on causality (“I find the idea quite intolerable that an electron exposed to radiation should choose of its own free will…”) showed methodological – if not epistemic – flaws. Still, Einstein took an interest in conventionalism, positivism and the nuances of theory choice. He believed that his interest in philosophy enabled his scientific creativity:

“I fully agree with you about the significance and educational value of methodology as well as history and philosophy of science. So many people today – and even professional scientists – seem to me like somebody who has seen thousands of trees but has never seen a forest. A knowledge of the historic and philosophical background gives that kind of independence from prejudices of his generation from which most scientists are suffering. This independence created by philosophical insight is – in my opinion – the mark of distinction between a mere artisan or specialist and a real seeker after truth.” – (Einstein letter to Robert Thornton, Dec. 1944)

So why the current hostility? Hawking pronounced philosophy dead in his recent book, then went on to do a good deal of philosophical thinking around string theory, apparently unaware that he was reenacting philosophical work done long ago. Some of Hawking’s philosophy, at least, is well thought out.

Not all philosophy done by scientists fares so well. Richard Dawkins makes analytic philosophers cringe; and his excursions into the intersection of science and religion are dripping with self-refutation.

The philosophy of David Deutsch is more perplexing. I recommend his The Beginning of Infinity for its breadth of ideas, for some novel outlooks, for some captivating views on ethics and esthetics, and – out of the blue – for giving Jared Diamond the thrashing I think he deserves. That said, Deutsch’s dogmatism is infuriating. He invents a straw man he names inductivism. He observes that “since inductivism is false, empiricism is as well.” Deutsch misses the point that empiricism (which he calls a misconception) is something scientists lean slightly more or slightly less toward. He thinks there are card-carrying empiricists who need to be outed. Odd as the notion of scientists subscribing to a named philosophical position might appear, Deutsch does seem to be a true Popperian. He ignores the problem of choosing between alternative non-falsified theories and the matter of the theory-ladenness of negative observations. Despite this, and despite Kuhn’s arguments, Popper remains on a pedestal for Deutsch. (Don’t get me wrong; there is much good in Popper.) He goes on to dismiss relativism, justificationism and instrumentalism (“a project for preventing progress in understanding the entities beyond our direct experience”) as “misconceptions.” Boom. Case closed. Read the book anyway.

So much for philosophy-hostile scientists and philosophy-friendly scientists who do bad philosophy. What about friendly scientists who do philosophy proud? For this I’ll nominate Sean Carroll. In addition to treating the common ground between physics and philosophy with great finesse in The Big Picture, Carroll, in interviews and on his blog, tries to set things right. He says that “shut up and calculate” isn’t good enough, characterizing lazy critiques of philosophy as either totally dopey, frustratingly annoying, or deeply depressing. Carroll says the universe is a strange place, and that he welcomes all the help he can get in figuring it out.

 



Rμν – (1/2)Rgμν = 8πGTμν. This is the equation that a physicist would think of if you said “Einstein’s equation”; that E = mc² business is a minor thing – Sean Carroll, From Eternity to Here

Up until early 20th century philosophers had material contributions to make to the physical sciences – Neil deGrasse Tyson

 


The P Word

Philosophy can get you into trouble.

I don’t get many responses to blog posts; and for some reason, most of those I get come as email. A good number of those I have received fall into two categories – proclamations and condemnations of philosophy.

The former consist of a final word offered on a matter that I wrote about as having two sides and warranting some investigation. The respondents, whose signatures always include a three-letter suffix, set me straight, apparently discounting the possibility of an opposing PhD. Regarding argumentum ad verecundiam, John Locke’s 1689 Essay Concerning Human Understanding is apparently passé in an era where nonscientists feel no shame about their scientific illiteracy and “my scientist can beat up your scientist” passes for argument. For one blog post where I questioned whether fault tree analysis was, as commonly claimed, a deductive process, I received two emails in perfect opposition, both suitably credentialed but unimpressively defended.

More surprising is hostility to endorsement of philosophy in general or philosophy of science (as in my last post). It seems that for most scientists, engineers and Silicon Valley tech folk, “philosophy” conjures up guys in wool sportscoats with elbow patches wondering what to doubt next, or French neoliberals congratulating themselves on having simultaneously confuted Freud, Marx, Mao, Hamilton, Rawls and Cato the Elder.

When I invoke philosophy here I’m talking about how to think well, not how to live right. And philosophy of science is a thing (hint: Google); I didn’t make it up. Philosophy of science is not about ethics. It has to do with the fact that most of us agree that science yields useful knowledge, but we don’t all agree about what makes good scientific thinking – i.e., what counts as evidence, what truth and proof mean, and being honest about what questions science can’t answer.

Philosophy is not, as some still maintain, a framework or ground on which science rests. The failure of logical positivism in the 1960s ended that notion. But the failure of positivism did not render science immune to philosophy. Willard Van Orman Quine is known for having put the nail in the coffin of logical positivism. Quine introduced a phrase I discussed in my last post – underdetermination of theory by data – in his 1951 “Two Dogmas of Empiricism,” often called the most important philosophical article of the 20th century. Quine’s article isn’t about ethics; it’s about scientific method. As Quine later said in Ontological Relativity and Other Essays (1969):

I see philosophy not as groundwork for science, but as continuous with science. I see philosophy and science as in the same boat – a boat which we can rebuild only at sea while staying afloat in it. There is no external vantage point, no first philosophy. All scientific findings, all scientific conjectures that are at present plausible, are therefore in my view as welcome for use in philosophy as elsewhere.

Philosophy helps us to know what science is. But then, what is philosophy, you might ask. If so, you’re halfway there.


Philosophy is the art of asking questions that come naturally to children, using methods that come naturally to lawyers. – David Hills in Jeffrey Kasser’s The Philosophy of Science lectures

The aim of philosophy, abstractly formulated, is to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term. – Wilfrid Sellars, “Philosophy and the Scientific Image of Man,” 1962

This familiar desk manifests its presence by resisting my pressures and by deflecting light to my eyes. – WVO Quine, Word and Object, 1960

 


Andrei’s Anthropic Abduction

No space aliens here. This post deals with the question of whether Stanford physicist Andrei Linde’s work deserves to be called science or whether it falls in the realm of pseudoscience some call “not even wrong.” While debated among scientists, this question isn’t really in the domain of science but of philosophy. If the use of the term abduction to describe a form of reasoning isn’t familiar, please read my previous post.

Linde is a key figure in the family of theories of cosmic origins called inflation. Inflation holds that in a period lasting roughly 10^-30 seconds the cosmos expanded by at least 100 orders of magnitude. Quantum fluctuations in the then-tiny inflationary region became the gravitational seeds that formed the galaxies and galaxy clusters we now observe. Proponents of the theory hold that it is the best explanation for the universal homogeneity and isotropy of matter, the minuscule temperature anisotropies of the cosmic microwave background radiation, the geometrical flatness of the universe, and the absence of magnetic monopoles. Inflation requires that the universe be incredibly homogeneous, isotropic and close to Euclidean geometry – but not perfectly so. Its perturbations should be Gaussian and adiabatic, and it requires a vacuum energy that is nonzero yet extremely close to zero.
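
To get a feel for that number – my own back-of-the-envelope arithmetic, not anything in Linde’s papers – cosmologists usually quote inflation in e-folds, the number N such that the scale factor grows by a factor of e^N. A hundred orders of magnitude works out to roughly 230 e-folds, compared with the ~60 e-folds usually cited as the minimum needed to solve the horizon and flatness problems.

```latex
% Expansion by 100 orders of magnitude, expressed in e-folds N
\frac{a_\mathrm{end}}{a_\mathrm{start}} = 10^{100} = e^{N}
\quad\Longrightarrow\quad
N = 100 \ln 10 \approx 230
\qquad \text{(compare } e^{60} \approx 10^{26} \text{)}
```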

In Linde’s model of inflation, the rapidly expanding regions branch off from other expanding regions and occasionally enter a non-inflating phase. But the generation rate of inflationary regions is much higher than the rate at which inflation terminates within regions. Therefore the volume of the inflating part of space is always much larger than the part where inflation has stopped. Terminology varies; for the sake of clarity I’ll use multiverse to describe all of space and universe (sometimes Linde uses bubble universe) to describe each separate inflating region or region where inflation has stopped, as is the case where we live. Each universe in this multiverse can have radically different laws of physics (more accurately, different physical constants and properties). Finally, note that this multiverse scenario has nothing to do with the more popular parallel-universe consequences of the Many Worlds interpretation of quantum mechanics. Linde’s work is particularly interesting for examining the scientific-realist and empiricist leanings of living scientists. Chaotic inflation is unappealing to empiricists, but less maligned than string theory.

Linde gave a fun, engaging hour-long intro to his version of inflation in a talk to the SETI Institute in 2012 (above). He presents the theory briefly, using the imagery of fractals, and then gives a long defense of anthropic reasoning. Anthropic arguments may, at first glance, appear to be mere tautologies, but scrutiny shows something more subtle and complex. Linde once said, “those who dislike anthropic principles are simply in denial.” In the SETI talk he jocularly explains the anthropic response to the apparent fine tuning of the universe. Finally, he gives a philosophical justification for his theory, explicitly rejecting empiricists’ demands that all predictions be falsifiable, and making inference to the best explanation primary with a justification of “best” by process of elimination.

Curiously, anthropic reasoning, often reviled when applied to universe-sized entities, is readily accepted on smaller scales. Roger Penrose is dubious, saying such reasoning “tends to be invoked by theorists whenever they do not have a good enough theory to explain the observed facts.” But the earth and the life on it in some ways seem a bag of unlikely coincidences. Life requires water, and the earth is just the right distance from the sun to allow liquid water. It’s no surprise that we don’t find ourselves on Venus, because it has no water. If the overwhelming majority of planets in the universe are uninhabitable, the apparent coincidence that we find ourselves on one that is habitable evaporates. If the overwhelming majority of universes don’t support star formation because of incompatible vacuum energy, the apparent fine tuning of that value here is demystified.

On vacuum energy, Linde says in the SETI talk, “that’s why the energy of the universe is so tiny, because if non-tiny, we would not be talking about it.”

Addressing his empiricist critics he says:

“Is it physics or metaphysics? Can it be experimentally tested? … This theory provides the only known explanation of numerous anthropic coincidences (extremely small vacuum energy, strange masses of elementary particles, etc.). In this sense it was already tested… When you have eliminated the impossible, whatever remains, however improbable, must be the truth.

Mass of the neutron is just slightly larger than mass of the proton. Neutrons decay. If protons were just slightly heavier than neutrons, the protons would decay and you’d have a totally different universe where we would not be able to live… Protons are 2000 times heavier than electrons. If electrons were twice as heavy as we find them to be, we wouldn’t be able to live here… What is so special about it? What is so special about it is us. We would be unable to exist in the part of the universe where the electron has a different mass.”

Noting that the American judicial system is based on inference to the best explanation, Linde then uses humor and points to the name of a philosopher on a screen that we don’t see. Presumably the name is either Charles Peirce or Gilbert Harman. He offers a justification that while not completely watertight, is pretty good:

“The multiverse is as of now the only existing explanation of experimental fact [mass of electron]. So when people say we cannot travel that far [beyond the observable universe] and therefore the multiverse theory cannot be tested, it’s already tested experimentally by our own existence. But you may say, ‘what we want is to make a prediction and then check it experimentally.’ My answer to that is that this is not how the American court system works. For example a person killed his wife. They do not repeat the experiment. They do not give him a new wife and a knife, etc. What they do is they use the method suggested by this philosopher. They just try to eliminate impossible options. And once they eliminate them either the guy goes free or the guy goes dead, or a mistake – sorry… So everything is possible. It is not necessary to repeat the experiment and check what is going to happen with the universe if it’s cooked up differently. If what we have provides the only explanation of what we see, that’s already something.”

Linde then goes farther down the road of anthropic reasoning than I’ve seen others do, responding to famous quotes by Einstein and Wigner, following with a much less famous retort to Wigner by Israel Gelfand. The Gelfand quote, echoing Kant, gives a hint as to where Linde is heading:

The most incomprehensible thing about the universe is its comprehensibility – Albert Einstein

The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift, which we neither understand nor deserve. (The Unreasonable Effectiveness of Mathematics) – Eugene Wigner

There is only one thing which is more unreasonable than the unreasonable effectiveness of mathematics in physics, and this is the unreasonable ineffectiveness of mathematics in biology. – Israel Gelfand

Linde says Einstein and Wigner’s puzzles are easily explained. If a universe obeys discoverable laws, it can be considered as an undeserved gift of God to physicists and mathematicians. But elsewhere, in a universe that is a mess, you cannot make any predictions and your mathematicians and physicists would be totally useless. Linde emphasizes, “Universes that do not produce observers do not produce physicists.” No one in such a universe would contemplate the effectiveness of mathematics. In a universe of high density the interactions would be so swift and strong that once you record anything, a millisecond later it would be gone. Your calculations would be instantly negated. In these universes mathematics and physics are ineffective. But we can only live in universes, says Linde, where natural selection is possible and where predictions are possible. Linde says, as humans, we need to make predictions at every step of our lives. He then jokes, “If we would be in a universe where predictions are impossible, we wouldn’t be there.” Einstein can only live in the kind of universe where Einstein can ask why the universe is so comprehensible.

Nobel Laureate Steven Weinberg finds the multiverse an intriguing idea with some good theoretical support, but on reading that Andrei Linde was willing to bet his life on it and that Martin Rees was willing to bet the life of his dog, Weinberg offered, “I have just enough confidence about the multiverse to bet the lives of both Andrei Linde and Martin Rees’s dog.”

Neutrinos and the Higgs field were predicted decades before they were observed. Most would agree that the detection of neutrinos is direct enough that the term “observation” is justified. The Higgs boson, in comparison, was only inferred as the best explanation of the decay patterns of high energy hadron collisions. Could the interpretation of disturbances in the cosmic background radiation as “bruises” caused by collisions between adjacent bubble universes and our own count as confirming evidence of Linde’s model? Could anything else – in practice or in principle – confirm or falsify chaotic inflation? Particle physics isn’t my day job. Let me know if my understanding of the science is wrong or if you have a different view of Linde’s philosophical stance.

– – –


There will never be a Newton of the blade of grass – Immanuel Kant

Physics is mathematical not because we know so much about the physical world, but because we know so little; it is only its mathematical properties that we can discover. – Bertrand Russell

As we look out into the universe and identify the many accidents of physics and astronomy that have worked together to our benefit, it almost seems as if the universe must in some sense have known we were coming. – Freeman Dyson

 


Of Mice, Maids and Explanatory Theories

One night in early 1981, theoretical physicist Andrei Linde woke his wife in the middle of the night and said, “I think that I know how the universe was born.”

That summer he wrote a paper on this topic, rushing to get it published in an international journal. But in cold war Russia, it took months for the government censors to approve everything that crossed the border. That October Linde was able to give a talk on his theory, which he called new inflation, at a conference on quantum gravity in Moscow attended by Stephen Hawking and similar luminaries. The next day Hawking gave a talk, in English, on Alan Guth’s earlier independent work on cosmic inflation, and Linde received the task of translating Hawking’s talk into Russian in real time. Linde didn’t know in advance what Hawking would be saying. Hawking’s talk explained Guth’s theory and then went on to explain why Linde’s theory was incorrect. So Linde had the painful experience of unfolding several arguments against his own work before an audience of Russian scientists who controlled his budget and career. At the end of Hawking’s talk, Linde offered to explain why Hawking was wrong. Hawking agreed to listen, then agreed that Linde was right. They became friends and, as Linde explains it, “we were off to the races.”


Linde’s idea was a new twist on a family of theories about cosmic inflation, a family that encompasses big bang theory. In Linde’s refinement, cosmic inflation continued while the scalar field slowly rolled down its potential. If that isn’t familiar science, don’t worry; I’ll post some links. Linde later reworked his new inflation theory, arriving at what is now known as chaotic inflation. The theory has, as a necessary consequence, a nearly infinite number of parallel universes. To clarify, parallel universes are not a theory of Linde’s, per se. They fall out of a theory designed to explain phenomena we observe, such as the cosmic background radiation.

So the theory involves entities (universes) that are not only unobservable in practice, but unobservable in principle too. Can a theory that makes untestable claims and posits unobservable entities fairly be called scientific? Even if some of its consequences are observable and falsifiable? And how can we justify the judgment we reach on that question? To answer these questions we need some background from the philosophy of science.

The Scientific Method
A simplistic view of scientific method involves theories that make predictions about the world, testing the theories by experimentation and observation, and then discarding or refining theories that fail to predict the outcomes of experiments or make wrong predictions. At some point in the life of a theory or family of theories, one might judge that a law of nature has been uncovered. Such a law might be, for example, that all copper conducts electricity or that force equals mass times acceleration. Another use for theories is to explain things. For example, the patient complains of shortness of breath only on cold days and the doctor judges the cause to be episodic bronchial constriction rather than asthma. Here we’re relying on the tight link between explanation and cause. More on that below.

Notice that science, so characterized, doesn’t prove anything, but merely gives evidence for something. Proof uses deduction. It works for geometry, syllogisms and affirming the antecedent, but not for science. You remember the rules. All rocks are mortal. Socrates is a rock. Therefore, Socrates is mortal. The conclusion about Socrates, in this case, follows from his membership in the set of all rocks, about which it is given that they are mortal. Substitute men or any other set, class, or category for rocks and the argument remains valid.

Science relies on inferences that are inductive. They typically take the following form: All observed X have been Y. The next observed X will be Y. Or a simpler version: All observed X have been Y. Therefore all X are Y. Real science does a better job. It eliminates many claims about future observations of X even before any non-Y instances of X have been found. It was unreasonable to claim that all swans were white even before Australian black swans were discovered. We know how fickle color is in birds. Another example of scientific induction is the conductive power of copper mentioned above. All observed copper conducts electricity. The next piece of copper found will also conduct.
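
Schematically – my notation, not anything from the post – the two inference patterns contrast like this:

```latex
% Deduction (valid): all R are M; s is R; therefore s is M
\forall x\,\bigl(R(x) \rightarrow M(x)\bigr),\;\; R(s) \;\vdash\; M(s)

% Enumerative induction: observed instances to a generalization;
% plausible at best, since the conclusion does not deductively follow
Y(x_1),\, Y(x_2),\, \ldots,\, Y(x_n) \;\leadsto\; \forall x\, Y(x)
```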

In the mid-1700s philosopher David Hume penned a challenge to inductive thought that is still debated. Hume noted that induction assumes the uniformity of nature, something for which there can be no proof. Says Hume, we can easily imagine a universe that is not uniform – one where everything is haphazard and unpredictable. Such universes are of particular interest in Andrei Linde’s theory. Proving universal uniformity of nature would vindicate induction, said Hume, but no such proof is possible. One might be tempted to argue that nature has always been uniform until now so it is reasonable that it will continue to be so. Using induction to demonstrate the uniformity of nature – in order to vindicate induction in the first place – is obviously circular.

Despite the logical weakness of inductive reasoning, science relies on it. We beef up our induction with scientific explanations. This brings up the matter of what makes a scientific explanation good. It’s tempting to jump to the conclusion that a good explanation is one that reveals the cause of an observed effect. But, as troublemaker David Hume also showed, causality is never really observed directly – only chronology is. Analytic philosophers, logicians, and many quantum physicists are in fact very leery of causality. Carl Hempel, in the 1950s, worked hard on an alternative account of scientific explanation, the Covering-law model. It ultimately proved flawed. I’ll spare you the details.

Hempel also noted a symmetry between explanation and prediction. He claimed that the very laws of nature and experimental observations used to explain a phenomenon could have also been used to predict that phenomenon, had it not already been observed. While valid in many cases, in the years following Hempel’s valiant efforts, it became clear that significant exceptions existed for all of Hempel’s claims of symmetry in scientific explanation. So in most cases we’re really left with no option for explanation other than causality.

Beyond deduction and the simple more-of-the-same type of induction, we’ve been circling around another form of reasoning thought by some to derive from induction but argued by others to be more fundamental than the above-described induction. This is abductive reasoning or inference to the best explanation (synonymous for our purposes though differentiated by some). Inference to the best explanation requires that a theory not merely necessitate the observations but explain them. For this I’ll use an example from Samir Okasha of the University of Bristol.

Who moved the cheese?
The cheese disappeared from the cupboard last night, except for a few crumbs. The family were woken by scratching noises coming from the kitchen. How do we explain the phenomenon of the missing cheese? Sherlock Holmes would likely claim he deduced that a mouse had crawled up the cupboard and taken the cheese. But no deduction is involved. Nor is induction as described above. Sherlock would actually be inferring, from the available evidence, that among the possible explanations for these observations, the mouse was the best one. The cheese could have vanished through a non-uniformity of nature, or the maid may have stolen it; but Holmes judged the mouse explanation best.
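
To make the contrast with deduction vivid, here is a deliberately silly sketch (mine, not Okasha’s, and not a serious model of abduction): score each candidate hypothesis by how much of the evidence it accounts for, weighted by a rough prior plausibility, and pick the winner.

```python
# Toy illustration of inference to the best explanation for the
# missing-cheese case. Hypotheses, coverage sets, and plausibility
# numbers are invented; real theory choice is far messier.

observations = {"cheese missing", "crumbs left", "scratching at night"}

# For each hypothesis: which observations it accounts for, and a rough
# prior plausibility on background knowledge (0..1, made up).
hypotheses = {
    "a mouse took it":       ({"cheese missing", "crumbs left", "scratching at night"}, 0.6),
    "the maid stole it":     ({"cheese missing"}, 0.3),
    "nature wasn't uniform": ({"cheese missing", "crumbs left", "scratching at night"}, 0.001),
}

def score(name):
    explained, plausibility = hypotheses[name]
    coverage = len(observations & explained) / len(observations)
    return coverage * plausibility  # crude stand-in for "best"

print(max(hypotheses, key=score))  # -> a mouse took it
```

The made-up plausibility weights do all the real work here, which is exactly the “what counts as best” problem raised a couple of paragraphs below.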

Inference to the best explanation is particularly important when science deals with unobservable entities. Electrons are the poster children for unobservable entities that most scientists describe as real. Other entities useful in scientific theories are given less credence. Scientists postulate such entities as components of a theory; and many such theories enjoy great predictive success. The best explanation of their predictive success is often that the postulated unobservables are in fact real. Likewise, the theory’s explanatory success, while relying on unobservables, argues that the theory is valid. The no miracles argument maintains that if the unobservable entities are actually not present in the world, then the successfully predicted phenomena would be unexplained miracles. Neutrinos were once unobservable, as were quarks and Higgs bosons. Note that “observable” here is used loosely. Some might prefer “detectable”; but that distinction opens another can of philosophical worms.

More worms emerge when we attempt to define “best” in this usage. Experimenters will have different criteria as to what makes an explanation good. For some simplicity is best, for others loveliness or probability. This can of worms might be called the problem of theory choice. For another time, maybe.

For a more current example consider the Higgs boson. Before its recent discovery, physicists didn’t infer that all Higgs particles would have a mass of 126 GeV from prior observations of other Higgs particles having that mass, since there had been no observations of the Higgs at all. Nor did they use any other form of simple induction. They inferred that the Higgs must exist as the best explanation of other observations, and that if the Higgs did exist, it would have a mass in that range. Bingo – and it did.

The school of thought most suspicious of unobservable entities is called empiricism. In contrast, those at peace with deep use of inference to the best explanation are dubbed scientific realists. Those leaning toward empiricism (few would identify fully with either label) cite two classic epistemological complaints against scientific realism: underdetermination of theory by data and pessimistic meta-induction. All theories are, to some degree, vulnerable to competing theories that explain the same observations – perhaps equally well (underdetermination). Empiricists feel that the degree of explanatory inference entailed in string theory and in some of Andrei Linde’s work leaves those theories dangerously underdetermined. The pessimistic meta-induction argument, in simplest form, says that science has been wrong about unobservables many times in the past and therefore, by induction, is probably wrong this time. In summary, empiricists assert that inference to the best explanation wanders too far beyond solid evidential grounds and leads to metaphysical speculation. Andrei Linde, though he doesn’t state so explicitly, sees inference to the best explanation as scientifically rational and essential to a mature theory of universal inflation.

With that background, painful as it might be, I’ll be able to explain my thoughts on Andrei Linde’s view of the world, and to analyze his defense of his theory and its unobservables in my next post.

– – –

Biographical material on Andrei Linde from the essay, “A balloon producing balloons producing balloons,” in The Universe, edited by John Brockman, and
Autobiography of Andrei Linde for the Kavli foundation

 

 


My Grandfather’s Science

Velociraptor by Ben Townsend

The pace of technology is breathtaking. For that reason we’re tempted to believe our own time to be the best of times, the worst, the most wise and most foolish, most hopeful and most desperate, etc. And so we insist that our own science and technology be received, for better or worse, in the superlative degree of comparison only. For technology this may be valid. For science, technology’s foundation, perhaps not. Some perspective is humbling.

This may not be your grandfather’s Buick – or his science. This post contemplates my grandfather’s science – the mind-blowing range of scientific progress during his life – which may dwarf the scientific progress of the next century. In terms of altering the way we view ourselves and our relationship to the world, the first half of the 20th century dramatically outpaced the second half.

My grandfather was born in 1898 and lived through nine decades of the 20th century. That is, he saw the first manned airplane flight and the first man on the moon. He also witnessed scientific discoveries that literally changed worldviews.

My grandfather was fascinated by the Mount Wilson observatory. The reason was the role it had played in one of the several scientific discoveries of his youth that rocked not only scientists’ view of nature but everyone’s view of themselves and of reality. These were cosmological blockbusters with metaphysical side effects.

When my grandfather was a teen, the universe was the Milky Way. The Milky Way was all the stars we could see; and it included some cloudy areas called nebulae. Edwin Hubble studied these nebulae when he arrived at Mount Wilson in 1919. Using the brand new Hooker Telescope at Mt. Wilson, Hubble located Cepheid variables in several nebulae. Cepheids are the “standard candle” stars that allow astronomers to measure their distance from earth. Hubble studied the Andromeda Nebula, as it was then known. He concluded that this nebula was not glowing gas in the Milky Way, but was a separate galaxy far away. Really far.
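
In equation form – standard astronomy rather than anything spelled out in the post – the standard-candle trick works like this: the Leavitt period–luminosity relation gives a Cepheid’s absolute magnitude M from its pulsation period P, and the distance modulus converts the gap between M and the apparent magnitude m into a distance d.

```latex
% Period–luminosity relation (coefficients a, b left schematic)
M = a \,\log_{10} P + b

% Distance modulus, with d in parsecs
m - M = 5 \log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right)
\quad\Longrightarrow\quad
d = 10^{\,(m - M + 5)/5}\ \mathrm{pc}
```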

In one leap, the universe grew from our little galaxy to about 100,000,000 light years across. That huge number had been previously argued but was ruled out in the “Great Debate” between Shapley and Curtis in April 1920. Against earlier arguments that Andromeda was a galaxy, Harvard University’s Harlow Shapley had convinced most scientists that Andromeda was just some glowing gas. Assuming galaxies of roughly the same size, Shapley noted that Andromeda would have to be 100 million light years away to subtend the angular size we observe. Most scientists simply could not fathom a universe that big. By 1925 Hubble and his telescope on Mt. Wilson had fixed all that.

Over the next few decades Hubble’s observations showed galaxies far more distant than Andromeda – millions of them. Stranger yet, they showed that the universe was expanding, something that even Albert Einstein did not want to accept.

The big expanding universe so impressed my grandfather that he put Mt. Wilson on his bucket list. His first trip to California in 1981 included a visit there. Nothing known to us today comes close to the cosmological, philosophical and psychological weight of learning, as a steady-state Milky Way believer, that there was a beginning of time and that space is stretching. Well, nothing except the chaotic inflation theory also proposed during my grandfather’s life. The Hubble-era universe grew by three orders of magnitude. Inflation theory asks us to accept hundreds of orders of magnitude more. Popular media doesn’t push chaotic inflation, despite its mind-blowing implications. This could stem from our lacking the high school math necessary to grasp inflation theory’s staggering numbers. The Big Bang and Cosmic Inflation will be tough acts for the 21st century to follow.

Another conceptual hurdle for the early 20th century was evolution. Yes, everyone knows that Darwin wrote in the mid-1800s; but many are unaware of the low status the theory of evolution had in biology at the turn of the century. Biologists accepted that life derived from a common origin, but the mechanism Darwin proposed seemed impossible. In the late 1800s the thermodynamic calculations of Lord Kelvin (William Thomson, an old-earth creationist) conflicted with Darwin’s model of the emergence of biological diversity. Thomson’s 50-million-year-old earth couldn’t begin to accommodate prokaryotes, velociraptors and hominids. Additionally, Darwin didn’t have a discrete (Mendelian) theory of inheritance to allow retention of advantageous traits. The “blending theory of inheritance” then in vogue let such features regress toward the previous mean.

Darwinian evolution was rescued in the early 1900s by the discovery of radioactive decay. In 1913 Arthur Holmes, using radioactive decay as a marker, showed that certain rocks on earth were about two billion years old. Evolution now had time to work. At about the same time, Mendel’s 1865 paper was rediscovered. Following Mendel, William Bateson proposed the term genetics in 1905, and Wilhelm Johannsen the word gene in 1909, to describe the mechanism of inheritance. By the 1920s, Darwinian evolution and genetic theory were two sides of the same coin. In just over a decade, 20th century thinkers let scientific knowledge change their self-image and their relationship to the world. The universe was big, the earth was old, and apes were our cousins.

Another “quantum leap” our recent ancestors had to make was quantum physics. It’s odd that we say “quantum leap” to mean a big jump. Quanta are extremely small, as are the quantum jumps of electrons. Max Planck kicked off the concept of quanta in 1900. It got a big boost in 1905 from Einstein. Everyone knows that Einstein revolutionized science with the idea of relativity in 1905. But that same year – in his spare time – he also published papers on Brownian motion and the photoelectric effect (illuminated metals give off electrons). In explaining Brownian motion, Einstein argued that atoms are real, not just a convenient model for chemistry calculations as was commonly held. In some ways the last topic, the photoelectric effect, was the most profound. As many had done with atoms, Planck considered quanta a convenient fiction. Einstein’s work on the photoelectric effect, for which he later received the Nobel Prize, made quanta real. This was the start of quantum physics.

Relativity told us that light bends and that matter warps space. This was weird stuff, but at least it spared most of the previous century’s theories – things like the atomic theory of matter and electromagnetism. Quantum physics uprooted everything. It overturned the conceptual framework of previous science and even took a bite out of basic rationality. It told us that reality at small scales is nothing like what we perceive. It said that everything – light, perhaps even time and space – is ultimately discrete, not continuous; nature is digital. Future events can affect the past and the ball can pass through the wall. Beyond the weird stuff, quantum physics makes accurate and practical predictions. It also makes your iPhone work. My grandfather didn’t have one, but his transistor radio was quantum-powered.

Technology’s current heyday is built on the science breakthroughs of a century earlier. If that seems like a stretch, consider the following. Planck invented the quantum in 1900, Einstein the photon in 1905, and von Lieben the vacuum tube in 1906. Schwarzschild predicted black holes in 1916, a few years before Hubble found foreign galaxies. Georges Lemaitre proposed a Big Bang in 1927, Dirac antimatter in 1928, and Chadwick the neutron in 1932. Ruska invented the electron microscope the following year, two years before plastic was invented. In 1942 Fermi tested controlled nuclear reactions. Avery identified DNA as the carrier of genes in 1944; Crick and Watson found the double helix in 1953. In 1958 Kilby invented the integrated circuit. Two years later Maiman had a working laser, just before the Soviets put a man in orbit. Gell-Mann invented quarks in 1964. Recombinant DNA, neutron stars, and interplanetary probes soon followed. My grandfather, born in the 1800s, lived to see all of this, along with personal computers, cell phones and GPS. He liked science and so should you, your kids and your school board.

While recent decades have seen marvelous inventions and cool gadgets, conceptual breakthroughs like those my grandfather witnessed are increasingly rare. It’s time to pay the fiddler. Science education is in crisis. Less than half of New York City’s high schools offer a class in physics and only a third of US high school students take a physics class. Women, African Americans and Latinos are grossly underrepresented in the hard sciences.

Political and social science don’t count. Learn physics, kids. Then teach it to your parents.


The Road to Holacracy

In 1960 South Korea’s GDP per capita was at the level of the poorest African and Asian nations. Four decades later, Korea ranked high among the G-20 major economies. Many factors, including a US-assisted education system and a carefully planned export-oriented economic strategy, made this possible. By some accounts the influence of Peter Drucker also played a key role, as attested by the prominent Korean businessman who changed his first name to “Mr. Drucker.” Unlike General Motors in the US, South Korean businesses embraced Drucker’s concept of the self-governing organization.

Drucker proposed this concept in The Future of Industrial Man and further developed it in his 1946 Concept of the Corporation, which GM’s CEO Alfred Sloan, despite Drucker’s general praise of GM, saw as a betrayal. Sloan would hear nothing of flattened hierarchies and decentralization.

Drucker was shocked by Sloan’s reaction to his book. With the emergence of large corporations, Drucker saw autonomous teams and empowered employees who would assume managerial responsibilities as the ultimate efficiency booster. He sought to establish trust and “create meaning” for employees, seeing this as key to what we now call “engagement.”

In the 1960s, Douglas McGregor of MIT used the term Theory Y to label the contrarian notion that democracy in the workforce encourages workers to approach tasks without direct supervision, again leading to fuller engagement and higher productivity.

Neither GM nor any other big US firm welcomed self-management for the rest of the 20th century. Its ideals may have sounded overly socialistic to CEOs of the cold war era. A few consultancies promoted related concepts – shop-floor autonomy, skepticism of bureaucracy, and a focus on intrinsic employee rewards – in the 1980s, e.g., Peters and Waterman’s In Search of Excellence. Later poor performance by firms celebrated in Excellence (e.g., Wang and NCR) may have further discredited concepts like worker autonomy.

Recently, Daniel Pink’s popular Drive argued that self-management and worker autonomy lead to a sense of purpose and engagement, which motivate more than rank in a hierarchy and higher wages. Despite the cases made by these champions of flatter organizations, the approach that helped Korea become an economic power got few followers in the west.

In 2014 Zappos adopted holacracy, an organizational structure promoted by Brian J. Robertson, which is often called a flat organization. Following a big increase in turnover rate at Zappos, many concluded that holacracy left workers confused and, with no ladder to climb, flatly unmotivated. Tony Hsieh, Zappos’s CEO, denies that holacracy was the cause. Hsieh implemented holacracy because in his view, self-managed structures promote innovation while hierarchies stifle it; large companies tend to stagnate.

There’s a great deal of confusion about holacracy, and whether it in fact can accurately be called a flat structure. A closer look at holacracy helps clear this up.

To begin, note that holacracy.org itself states that work “is more structured with Holacracy than [with] conventional management.” Holacracy does not advocate a flat structure or a simple democracy. Authority, rather than being delegated down a hierarchy, is granted to roles – potentially ephemeral ones – tied to specific tasks.
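
A toy data model of that idea – my illustration, not Holacracy’s actual constitution or any real software – makes the point concrete: authority is found by matching a task to a role’s accountabilities, not by walking a reporting chain.

```python
# Toy model of "authority attaches to roles, not people" (illustrative only).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Role:
    name: str                        # e.g. "Release Coordinator"
    purpose: str
    accountabilities: List[str]
    filled_by: Optional[str] = None  # person currently filling the role

@dataclass
class Circle:
    name: str
    roles: List[Role] = field(default_factory=list)

    def authority_for(self, task: str) -> List[Role]:
        # Authority is located by the task, not by a manager above it.
        return [r for r in self.roles if task in r.accountabilities]

ops = Circle("Operations", [
    Role("Release Coordinator", "Ship on cadence", ["approve release"], "Ana"),
    Role("Support Lead", "Keep customers unblocked", ["triage tickets"], "Raj"),
])
print([r.name for r in ops.authority_for("approve release")])  # ['Release Coordinator']
```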

Much of the confusion around holacracy likely stems from Robertson’s articulation of its purpose and usage. His 2015 book, Holacracy: The New Management System for a Rapidly Changing World, is wordy and abstruse to the point of self-obfuscation. Its use of metaphors drawn from biology and natural processes suggests an envy for scientific status. There’s plenty of theory, with little evidential support. Robertson never mentions Drucker’s work on self-governance or his concept of management by objectives. He never references Theory Y, John Case’s open-book management concept, Evan’s lattice structure, or any other relevant precedent for holacracy. Nor does he address any pre-existing argument against holacracy, e.g., contingency theory. But a weak book doesn’t mean a weak concept.

Holacracy.org’s statement of principles is crisp, and it will surely appeal to anyone who has done time in the lower tiers of a corporate hierarchy. It envisions a corporate republic rather than a pure democracy; i.e., authority is distributed across teams, and decisions are made locally at the lowest level possible. More importantly, governance is based on a constitution, through which holacracy aims to curb tyranny of the majority and factionalism, and to ensure that everyone is bound to the same rule set.

Unfortunately, Holacracy’s constitution is bloated, arcane, and far too brittle to support the weight of a large corporation. Several times longer than the US constitution and laden with idiosyncratic usage of common terms, it reads like a California tax code authored by L Ron Hubbard. It also seems to be the work of a single author rather than a constitutional congress. But again, a weak implementation does not impugn the underlying principles. Further, we cannot blame the concept for its mischaracterization by an unmindful tech press as being a flat and structureless process.

Holacracy is right about the perils of flat structures (inability to allocate resources, failure to resolve disputes, and the formation of factions) and about the faults of silos (demotivation, the principal-agent problem, and oppressive managers). But with a dense and rigid constitution and a purely inward focus (no attention to customers) it is a flawed version 1.0 product. It, or something like it – perhaps without the superfluous neologism – will be needed to handle imminent workforce changes. We are facing an engagement crisis, with 80% of the millennial workforce reporting a sense of disengagement and an inability to exploit their skills at work. Millennials, says the Pew Research Center, resist paying dues, expect more autonomy while being comfortable in teams, resent taking orders, and expect to make an impact. With productivity tied to worker engagement, and millennial engagement hinging on autonomy, empowerment and trust, some of the silos need to come down. A constitutional system embodying self-governance seems like a good place to start.



Multidisciplinary

In college, fellow cave explorer Ron Simmons found that the harnesses made for rock climbing performed very poorly underground. The cave environment shredded the seams of the harnesses from which we hung hundreds of feet off the ground in the underworld of remote southern Mexico. The conflicting goals of minimizing equipment expenses and avoiding death from equipment failure awakened our innovative spirit.

Bill Storage

We wondered if we could build a better caving harness ourselves. Having access to UVA’s Instron testing machine, Ron hand-stitched some webbing junctions to compare the tensile characteristics of nylon and polyester topstitching thread. His experiments showed too much variation from irregularities in his stitching, so he bought a Singer industrial sewing machine. At that time Ron had no idea how to sew. But he mastered the machine and built fabulous caving harnesses. Ron later developed and manufactured hardware for ropework and specialized gear for cave diving. Curiosity about earth’s last great exploration frontier propelled our cross-disciplinary innovation. Curiosity, imagination and restlessness drive multidisciplinarity.

Soon we all owned sewing machines, making not only harnesses but wetsuits and nylon clothing. We wrote mapping programs to reduce our survey data and invented loop-closure algorithms to optimally distribute errors across a 40-mile cave survey (a sketch of the idea appears below). We learned geomorphology to predict the locations of yet undiscovered caves. Ron was unhappy with the flimsy commercial photo strobe equipment we used underground, so he learned metalworking and the electrical circuitry needed to develop the indestructible strobe equipment with which he shot the above photo of me.
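
For the curious, here is a minimal sketch of the loop-closure idea promised above – my simplification, not the algorithm we actually used. A traverse that returns to its starting station should sum to zero displacement, so the residual misclosure is distributed back over the shots, here in proportion to shot length, in the spirit of a classic compass-rule adjustment. A real network of many interconnected loops calls for a least-squares adjustment, but the principle is the same.

```python
# Compass-rule style loop closure (illustrative sketch).
# Each shot is a (dx, dy, dz) displacement between survey stations.
from math import sqrt

def close_loop(shots):
    # A loop that returns to its start should sum to zero; whatever
    # remains is the misclosure to be distributed over the shots.
    misclosure = [sum(component) for component in zip(*shots)]
    lengths = [sqrt(dx*dx + dy*dy + dz*dz) for dx, dy, dz in shots]
    total = sum(lengths)
    adjusted = []
    for (dx, dy, dz), length in zip(shots, lengths):
        w = length / total  # longer shots absorb more of the error
        adjusted.append((dx - w * misclosure[0],
                         dy - w * misclosure[1],
                         dz - w * misclosure[2]))
    return adjusted

# Example: a small loop with 0.3 m of misclosure in x.
loop = [(10.0, 0.0, 0.0), (0.0, 10.0, -1.0), (-9.7, -10.0, 1.0)]
print([round(sum(c), 6) for c in zip(*close_loop(loop))])  # sums back to ~[0, 0, 0]
```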

Fellow caver Bill Stone pushed multidisciplinarity further. Unhappy with conventional scuba gear for underwater caving, Bill invented a multiple-redundant-processor, gas-scrubbing rebreather apparatus that allowed 12-hour dives on a tiny “pony tank” oxygen cylinder. This device evolved into the Cis-Lunar Primary Life Support System later praised by the Apollo 11 crew. Bill’s firm, Stone Aerospace, later developed autonomous underwater vehicles under NASA Astrobiology contracts, for which I conducted probabilistic risk analyses. If there is life beneath the ice of Jupiter’s moon Europa, we’ll need robots like this to find it.

Artemis

My years as a cave explorer and a decade as a systems engineer in aerospace left me comfortable crossing disciplinary boundaries. I enjoy testing the tools of one domain on the problems of another. The Multidisciplinarian is a hobby blog where I experiment with that approach. I’ve tried to use the perspective of History of Science on current issues in Technology (e.g.) and the tools of Science and Philosophy on Business Management and Politics (e.g.).

Terms like interdisciplinary and multidisciplinary get a fair bit of press in tech circles. Their usage speaks to the realization that while intense specialization and deep expertise are essential for research, they are the wrong tools for product design, knowledge transfer, addressing customer needs, and everything else related to society’s consumption of the fruits of research and invention.

These terms are generally shunned by academia for several reasons. One reason is the abuse of the terms in fringe social sciences of the 80s and 90s. Another is that the university system, since the time of Aristotle’s Lyceum, has consisted of silos in which specialists compete for top position. Academic status derives from research, and research usually means specialization. Academic turf protection and the research grant system also contribute. As Gina Kolata noted in a recent NY Times piece, the reward system of funding agencies discourages dialog between disciplines. Disappointing results in cancer research are often cited as an example of sectoral research silos impeding integrative problem solving.

Besides the many examples of silo inefficiencies, we have a long history of breakthroughs made possible by individuals who mastered several skills and integrated them. Galileo, Gutenberg, Franklin and Watt were not mere polymaths. They were polymaths who did something more powerful than putting specialists together in a room. They put ideas together in a mind.

On this view, specialization may be necessary to implement a solution but is insufficient for conceiving of that solution. Lockheed Martin does not design aircraft by putting aerodynamicists, propulsion experts, and stress analysts together in a think tank. It puts them together, along with countless other specialists, and a cadre of integrators, i.e., systems engineers, for whom excessive disciplinary specialization would be an obstacle. Bill Stone has deep knowledge in several sciences, but his ARTEMIS project, a prototype of a vehicle that could one day discover life beneath an ice-covered moon of Jupiter, succeeded because of his having learned to integrate and synthesize.

A famous example from another field is the case of the derivation of the double-helix model of DNA by Watson and Crick. Their advantage in the field, mostly regarded as a weakness before their discovery, was their failure – unlike all their rivals – to specialize in a discipline. This lack of specialization allowed them to move conceptually between disciplines, fusing separate ideas from Avery, Chargaff and Wilkins, thereby scooping front runner Linus Pauling.

Dev Patnaik, leader of Jump Associates, is a strong advocate of the conscious blending of different domains to discover opportunities that can’t be seen through a single lens. When I spoke with Dev at a recent innovation competition our conversation somehow drifted from refrigeration in Nairobi to Ludwig Wittgenstein. Realizing that, we shared a good laugh. Dev expresses pride for having hired MBA-sculptors, psychologist-filmmakers and the like. In a Fast Company piece, Dev suggested that beyond multidisciplinary teams, we need multidisciplinary people.

The silos that stifle innovation come in many forms, including company departments, academic disciplines, government agencies, and social institutions. The smarts needed to solve a problem are often at a great distance from the problem itself. Successful integration requires breaking down both institutional and epistemological barriers.

I recently overheard professor Olaf Groth speaking to a group of MBA students at Hult International Business School. Discussing the Internet of Things, Olaf told the group, “remember – innovation doesn’t go up, it goes across.” I’m not sure what context he had in mind, but it’s a great point regardless. The statement applies equally well to cognitive divides, academic disciplinary boundaries, and corporate silos.

Olaf’s statement reminded me of a very concrete example of a missed opportunity for cross-discipline, cross-division action at Gillette. Gillette acquired both Oral-B, the old-school toothbrush maker, and Braun, the electric appliance maker, in 1984. It then acquired Duracell in 1996. But five years later, Gillette had not found a way into the lucrative battery-powered electric toothbrush market – despite having all the relevant technologies in house, but in different silos. They finally released the CrossAction (ironic name) brush in 2002, but it was inferior to well-established Colgate and P&G products. Innovation initiatives at Gillette were stymied by the usual suspects – principal-agent problems, misuse of financial tools in evaluating new product lines, misuse of platform-based planning, and holding new products to the same metrics as established ones. All that, plus the fact that the divisions weren’t encouraged to look across. The three units were adjacent in a list of divisions and product lines in Gillette’s Strategic Report.

Multidisciplinarity (or interdisciplinarity, if you prefer) clearly requires more than a simple combination of academic knowledge and professional skills. Innovation and solving new problems require integrating and synthesizing different repositories of knowledge to frame problems in a real-world context rather than through the lens of a single discipline. This shouldn’t be so hard. After all, we entered the world free of disciplinary boundaries, and we know that fervent curiosity can dissolve them.

……

The average student emerges at the end of the Ph.D. program, already middle-aged, overspecialized, poorly prepared for the world outside, and almost unemployable except in a narrow area of specialization. Large numbers of students for whom the program is inappropriate are trapped in it, because the Ph.D. has become a union card required for entry into the scientific job market. – Freeman Dyson

Science is the organized skepticism in the reliability of expert opinion. – Richard Feynman

Curiosity is one of the permanent and certain characteristics of a vigorous intellect. – Samuel Johnson

The exhortation to defer to experts is underpinned by the premise that their specialist knowledge entitles them to a higher moral status than the rest of us. – Frank Furedi

It is a miracle that curiosity survives formal education. – Albert Einstein

An expert is one who knows more and more about less and less until he knows absolutely everything about nothing. – Nicholas Murray Butler

A specialist is someone who does everything else worse. – Ruggiero Ricci

 

Ron Simmons, 1954-2007

 



Leaders and Managers in Startups

The distinction between leaders and managers has been worn to the bone in the popular press, though with little agreement on what leadership is and whether leaders can be managers or vice versa. Further, a cult of leadership seems to exalt the most sadistic behaviors of charismatic leaders with no attention to the key characteristics ascribed to leaders in most leader-manager dichotomies. Despite imprecision and ambiguity, a coarse distinction between leadership and management sheds powerful light on the needs of startups, as well as offering some advice and cautions about the composition of founder teams in startups.

Common distinctions between managers and leaders include a mix of behaviors and traits, e.g.:

Managers

  • Process and execution-oriented
  • Risk averse
  • Allocates resources
  • Bottom-line focus
  • Command and control
  • Schedule-driven

 Leaders

  • Risk tolerant
  • Innovative
  • Visionary
  • Thinks long-term
  • Charismatic
  • Intuitive

The cult of leadership often also paints some leaders as dictatorial, authoritative and inflexible, seeing these characteristics as an acceptable price for innovative vision. Likewise, the startup culture often views management as being wholly irrelevant to startups. Warren Bennis, in Learning to Lead, gives neither concept priority, but holds that they are profoundly different. For Bennis, managers do things right and leaders do the right thing. Peter Drucker, from 1946 on, saw leadership mostly as another attribute of good management but acknowledged a difference. He characterized good managers as leaders and bad managers as functionaries. Drucker saw a common problem in large corporations; they’re over-managed and under-led. He defined leader simply as someone with followers. He thought trust was the only means by which people chose to follow a leader.

Accepting that the above distinctions are useful for discussion, it’s arguable that in early-stage startups leadership trumps management, simply because at that stage a startup needs innovation and risk tolerance to get off the ground. Any schedules or bottom-line considerations in the early days of a startup rest on rough approximations at best. That said, for startups targeting more demanding, heavily regulated sectors – financial services and healthcare, for example – the domain knowledge and organizational maturity of experienced managers can be paramount.

Over the past 15 years I’ve watched a handful of startups face the challenges and benefits of functional, experience, and cognitive diversity. Some of this was firsthand – once as a board director, once on an advisory board, and twice as an owner. I also have close friends with direct experience in founding teams composed partly of tech innovators and partly of early-retired managers from large firms. My thoughts below flow from observing these startups. 

Failure is an option. Perfect is a verb.

Silicon Valley’s “fail early, fail often” mantra is misunderstood and misused. For some it is an excuse for recklessness with investors’ money. Others chant the mantra with bad counter-inductive logic; i.e., believing that exhausting all routes to failure will necessarily result in success. Despite the hype, the fail-early perspective has value that experienced managers often miss. A look at the experience profile of corporate managers shows why.

Managers are used to having things go according to plan. That doesn’t happen in startups, and managers in startups are vulnerable to over-committing to an initial plan. The leader/manager distinction has some power here. You cannot manage an army into battle; you can only lead one. And yes, startups are in battle.

For a manager, planning, scheduling, estimating and budgeting traditionally involve a great deal of historical data with low variability. This is more true in the design/manufacture world than for managers who oversee product development (see Donald Reinertsen’s works for more on this distinction). But startups are much more like product development or R&D than they are like manufacturing. In manufacturing, spreadsheets and projections tend to be mostly right. In startups they are today’s best guess, which must be continually revised. Discovery-driven planning, as promoted by MacMillan and McGrath, might be a good starting point. If “fail early” rubs you the wrong way, understand it to mean disproving erroneous assumptions early, before you cast them in stone, only to have the market point them out to you.

Managers, having joined a startup, may tend to treat wild guesses, once entered into a spreadsheet, as facts, or may be overly confident in predictions derived from them. This is particularly critical for startups with complex enterprise products – just the kind of startup where corporate experience is most likely to be attractive. Such startups are prone to high costs and long development cycles. The financing Valley of Death claims many victims who budget against an optimistic release schedule and revenue forecast. It’s a reckless move with few possible escape routes, often resulting in desperate attempts to create a veneer of success on which to base another seed round.

In startups, planning must be more about prioritizing than about scheduling. Startups must treat development plans as hypotheses to be continually refined. As various generals have said, essential as battle plans are, none has ever survived contact with the enemy. The Lean Startup’s build-measure-learn concept – which is just an abbreviated statement of the hypothetico-deductive interpretation of scientific method – is a good guide, but one that may require a mindset shift for most managers.

Zero defects

For Philip Crosby, Zero Defects was not a motivational program. It was to be taken literally: everyone should do things right the first time. That mindset, better embodied in W. Edwards Deming’s statistical process control methodology, is great for manufacturing, as is obvious from the results of his work with Japanese industry in the 1950s. Whether that mindset was useful to white-collar workers in America, in the form of the Deming System and later Six Sigma (e.g., at Motorola, GE, and Ford), is hotly debated. Qualpro, which authored a competing quality program, reported a while back that 91% of large firms with Six Sigma programs have trailed the S&P 500 after implementing them. Some say the program was effective for its initial purpose but doesn’t scale to today’s needs.

Whatever its efficacy, most experienced managers have been schooled in Zero Defects or something similar. Its focus on process excellence – emphasizing precision, consistency, and detailed analysis – seems at odds with the innovation, adaptability, and accommodation of failure we see in successful startups.

A focus on doing it right the first time in a startup leads to excessively detailed plans built on unreliable estimates, and to unwarranted confidence in those estimates.

Motivation and hierarchy

Corporate managers are used to having clearly defined goals and plenty of resources. Startups have neither. This impacts team dynamics.

Successful startup members, biographers tell us, are self-motivated. They share a vision and are closely aligned; their personal goals match the startup’s goals. In most corporations, managers control, direct, and supervise employees whose interests are not closely aligned with those of the corporation. Corporate motivational tools, applied to startups, reek of insincerity and demotivate teams. Uncritical enthusiasm is dangerous in a startup, especially for the enthusiasts. It can blind crusaders to fatal flaws in a product, business model, marketing plan or strategy. Aspirational faith is essential, but hope is not a strategy.

An ex-manager in a CEO leadership role might also unduly don the cloak of management by viewing a small startup team of investing founders as employees. It leads to factions, resentment, and distraction from the shared objective.

Startup teamwork requires clear communications and transparency. Clinkle’s Lucas Duplan notwithstanding, I think former corporate managers are far more likely to try to filter and control communications in a startup than those without that experience. Managing communications and information flow maintains order in a corporation; it creates distrust in a startup. Leading requires followers who trust you, says Drucker.

High degrees of autonomy and responsibility in startups invariably lead to disagreements. Some organizational psychologists say conflict is a tool. While that may be pushing it, most would agree that conflict signals an opportunity to work swiftly toward a more common understanding of problem definitions and solutions. In the traditional manager/leader distinction, leaders put conflict front and center, seeing it as a valuable indicator of an unmet organizational need. Managers, using a corporate approach, may try to take care of things behind the scenes or one-on-one, thereby preventing loss of productivity among those least engaged in the conflict. But neutralizing dissenting voices in the name of alignment likely suppresses exactly the conversation that needs to occur. Make conflict constructive rather than suppressing it.

Strategy

I’m wary of ascribing wisdom to hoodie-wearing Ferrari drivers; nevertheless, I’ve cringed to see mature businessmen make strategic blunders that no hipster CEO would make. This says nothing about intellect or maturity, but much about experience and skills acquired through immersion in startupland. I’ll give a few examples.

Believing that seed funding increases your chance of an A round: Most young leaders of startups know that while the amount of seed funding has grown steadily and dramatically in recent years, the number of A rounds has not. By some measures it has decreased.

Accepting VC money in a seed round: This is a risky move with almost no upside. It broadcasts a message of lukewarm interest by a high-profile investor. When it’s time for an A round, every other potential investor will be asking why the VC who gave you seed money has not invested further. Even if the VC who supplied seed funding entertains an A round, this will likely result in a lower valuation than would result from a competitive process.

Looking like a manager, not a leader: Especially when seeking funding, touting your Six Sigma or process improvement training, a focus on organizational design, or your supervisory skills will raise a big red flag.

Overspending too early: Managers are used to having resources. They often spend too early and give away too much equity for minor early contributions.

Lack of focus/no target customer: Thinking you can be all things to all customers in all markets if you just add more features and relationships is a mistake few hackers would make. Again, former executives are used to having resources and living in a world where cost overruns aren’t fatal.

“Selling” to investors: VCs are highly skilled at detecting hype. Good ones bet more on the jockey than the horse. You want them as a partner, not a customer; so don’t treat them like one.

___


Stop Orbit Change Denial Now

April 1, 2016.

Just like you, I grew up knowing that, unless we destroy it, the earth would be around for another five billion years. At least I thought I knew we had a comfortable window to find a new home. That’s what the astronomical establishment led us to believe. Well, it’s not true. There is a very real possibility that long before the sun goes red giant on us, the instability of the multi-body gravitational dynamics at work in the solar system will wreak havoc. Some computer models show such deadly dynamism in as little as a few hundred million years.

One outcome is that Jupiter will pull Mercury off course so that it will cross Venus’s orbit and collide with the earth. “To call this catastrophic is a gross understatement,” says Berkeley astronomer Ken Croswell. Gravitational instability might also hurl Mars from the solar system, thereby warping Earth’s orbit so badly that our planet will be ripped to shreds. If you can imagine nothing worse, hang on to your helmet. In another model, the earth itself is heaved out of orbit and we’re on a cosmic one-way journey into the blackness of interstellar space for eternity. Hasta la vista, baby.

Knowledge of the risk of orbit change isn’t new; awareness is another story. The knowledge goes right back to Isaac Newton. In 1687 Newton concluded that in a two-body system, each body attracts the other with a force (which we do not understand, but call gravity) that is proportional to the product of their masses and inversely proportional to the square of the distance between them. That is, he gave a mathematical justification for what Kepler had merely inferred from observing the movements of the planets. Newton then proposed that every body in the universe attracts every other body according to the same rule. He called it the universal law of gravitation. Newton’s law predicted how bodies would behave if only gravitational forces acted upon them. This cannot be tested in the real world, as there are no such bodies. Bodies in the universe are also affected by electromagnetism and the nuclear forces. Thus no one can test Newton’s theory precisely.
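For reference, the rule in modern notation (my restatement, using the standard symbols rather than Newton’s own), with G the gravitational constant and r the distance between masses m_1 and m_2:

\[ F = G\,\frac{m_1 m_2}{r^2} \]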

Ignoring the other forces of nature, Newton’s law plus simple math allows us to predict the future position of a two-body system given their properties at a specific time. Newton also noted, in Book 3 of his Principia, that predicting the future of a three-body system was an entirely different problem. Many set out to solve the so-called three-body (or generalized n-body) problem. Finally, over two hundred years later, Henri Poincaré, after first wrongly believing he had worked it out – and forfeiting the prize offered by King Oscar of Sweden for a solution – gave mathematical evidence that there can be no analytical solution to the n-body problem. The problem is in the realm of what today is called chaos theory. Even with powerful computers, rounding errors in the numbers used to calculate future paths of planets prevent conclusive results. The butterfly effect takes hold. In a computer planetary model, changing the mass of Mercury by a billionth of a percent might mean the difference between its ultimately being pulled into the sun and its colliding with Venus.
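To make that sensitivity concrete, here is a minimal sketch – a toy, not a model of the real solar system – that integrates three mutually gravitating bodies twice, the second time with one starting coordinate nudged by a part in a billion, and then reports how far apart the two runs end up. Every number in it (the masses, the classic three-body starting positions, the step size, the softening term) is an illustrative assumption, and the crude fixed-step integrator is nothing like the methods professional dynamicists use; the point is only that a microscopic change in inputs can grow into a macroscopic disagreement in outputs.

import math

SOFTENING = 1e-3  # keeps the toy integrator from blowing up during close encounters

def accelerations(pos, masses):
    """Pairwise inverse-square accelerations in the plane (G = 1)."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            r2 = dx * dx + dy * dy + SOFTENING ** 2
            inv_r3 = 1.0 / (r2 * math.sqrt(r2))
            acc[i][0] += masses[j] * dx * inv_r3
            acc[i][1] += masses[j] * dy * inv_r3
    return acc

def simulate(pos, vel, masses, dt=0.001, steps=40000):
    """Velocity-Verlet integration; returns final positions."""
    pos = [list(p) for p in pos]
    vel = [list(v) for v in vel]
    acc = accelerations(pos, masses)
    for _ in range(steps):
        for i in range(len(pos)):
            vel[i][0] += 0.5 * dt * acc[i][0]
            vel[i][1] += 0.5 * dt * acc[i][1]
            pos[i][0] += dt * vel[i][0]
            pos[i][1] += dt * vel[i][1]
        acc = accelerations(pos, masses)
        for i in range(len(pos)):
            vel[i][0] += 0.5 * dt * acc[i][0]
            vel[i][1] += 0.5 * dt * acc[i][1]
    return pos

masses = [3.0, 4.0, 5.0]                                  # made-up masses
start = [[1.0, 3.0], [-2.0, -1.0], [1.0, -1.0]]           # bodies released from rest
rest = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]

baseline = simulate(start, rest, masses)

# Nudge one body's starting x-coordinate by one part in a billion and rerun.
nudged = [[1.0 + 1e-9, 3.0], [-2.0, -1.0], [1.0, -1.0]]
perturbed = simulate(nudged, rest, masses)

for i in range(3):
    dx = baseline[i][0] - perturbed[i][0]
    dy = baseline[i][1] - perturbed[i][1]
    print(f"body {i}: final positions differ by {math.hypot(dx, dy):.3e}")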

Too many mainstream astronomers are utterly silent on the issue of potential earth orbit change. Given that the issue of instability has been known since Poincaré, why is academia silent on the matter? Even Carl Sagan, whom I trusted in my youth, seems party to the conspiracy. In Episode 9 of Cosmos, he told us:

“Some 5 billion years from now, there will be a last perfect day on Earth. Then the sun will slowly change and the earth will die. There is only so much hydrogen fuel in the sun, and when it’s almost all converted to helium the solar interior will continue its original collapse… life will be extinguished, the oceans will evaporate and boil, and gush away to space. The sun will become a bloated red giant star filling the sky, enveloping and devouring the planets Mercury and Venus, and probably the earth as well. The inner planets will be inside the sun. But perhaps by then our descendants will have ventured somewhere else.”

He goes on to explain that we are built of star stuff, dodging the whole matter of orbital instability. But there is simply no mechanistic predictability in the solar system to ensure the earth will still be orbiting when the sun goes red-giant. As astronomer Caleb Scharf says, “the notion of the clockwork nature of the heavens now counts as one of the greatest illusions of science.” Scharf is one of the bold scientists who’s broken with the military-industrial-astronomical complex to spread the truth about earth orbit change.

But for most astronomers, there is a clear denial of the potential of earth orbit change and the resulting doomsday; and this has to stop. Let’s stand with science. It’s time to expose orbit change deniers. Add your name to the list, and join the team to call them out, one by one.


Can Science Survive?

In my last post I ended with the question of whether science in the pure sense can withstand science in the corporate, institutional, and academic senses. Here’s a bit more on the matter.

Ronald Reagan, pandering to a church group in Dallas, famously said about evolution, “Well, it is a theory. It is a scientific theory only.” (George Bush, often “quoted” as saying this, did not.) Reagan was likely ignorant of the distinction between two uses of the word “theory.” On the street, a theory is an unsettled conjecture. In science a theory – gravitation, for example – is a body of ideas that explains observations and makes predictions. Reagan’s statement fueled years of appeals to teach creationism in public schools under titles like creation science and intelligent design. While the push for creation science is usually pinned on southern evangelicals, it was UC Berkeley law professor Phillip E. Johnson who brought us intelligent design.

Arkansas was a forerunner in mandating equal time for creation science. But its Act 590 of 1981 (Balanced Treatment for Creation-Science and Evolution-Science Act) was shut down a year later by McLean v. Arkansas Board of Education. Judge William Overton made philosophy of science proud with his set of demarcation criteria. Science, said Overton:

  • is guided by natural law
  • is explanatory by reference to natural law
  • is testable against the empirical world
  • holds tentative conclusions
  • is falsifiable

For earlier thoughts on each of Overton’s five points, see, respectively, Isaac Newton, Adelard of Bath, Francis Bacon, Thomas Huxley, and Karl Popper.

In the late 20th century, religious fundamentalists were just one facet of hostility toward science. Science was also under attack on the political and social fronts, as well as on an intellectual, or epistemic, front.

President Eisenhower, on leaving office in 1961, gave his famous “military-industrial complex” speech warning of the “danger that public policy could itself become the captive of a scientific-technological elite.” At about the same time, the growing anti-establishment movements – perhaps centered around Vietnam war protests – vilified science for selling out to corrupt politicians, military leaders and corporations. The ethics of science and scientists were under attack.

Also at the same time, independently, an intellectual critique of science emerged claiming that scientific knowledge necessarily contained hidden values and judgments not based in either objective observation (see Francis Bacon) or logical deduction (see Rene Descartes). French philosophers and literary critics Michel Foucault and Jacques Derrida argued – nontrivially in my view – that objectivity and value-neutrality simply cannot exist; all knowledge has embedded ideology and cultural bias. Sociologists of science (the “strong program”) were quick to agree.

This intellectual opposition to the methodological validity of science, spurred by the political hostility to the content of science, ultimately erupted as the science wars of the 1990s. To many observers, two battles yielded a decisive victory for science against its critics. The first was the publication of Higher Superstition by Gross and Levitt in 1994. The second was a hoax in which Alan Sokal submitted to a journal of cultural studies a paper claiming, amid other postmodern nonsense, that quantum gravity is a social construct. After it was accepted and published, Sokal revealed the hoax and wrote a book denouncing sociology of science and postmodernism.

Sadly, Sokal’s book, while full of entertaining examples of the worst of postmodern critique of science, really defeats only the most feeble of science’s enemies, revealing a poor grasp of some of the subtler and more valid criticisms of science. For example, the postmodernists’ point that experimentation is not exactly the same thing as observation has real consequences, something that many earlier scientists themselves – like Robert Boyle and John Herschel – had wrestled with. Likewise, Higher Superstition, in my view, falls far below what we expect from Gross and Levitt. They deal Bruno Latour a well-deserved thrashing for claiming that science is a completely irrational process, and for the metaphysical conceit of holding that his own ideas on scientific behavior are fact while scientists’ claims about nature are not. But beyond that, Gross and Levitt reveal surprisingly poor knowledge of history and philosophy of science. They think Feyerabend is anti-science, they grossly misread Rorty, and they waste time on a lot of straw men.

Following closely on the postmodern critique of science were the sociologists pursuing the social science of science. Their finding: it is not objectivity or method that delivers the outcome of science; in fact it is the interests of all scientists – except social scientists – that govern the output of scientific inquiry. This branch of Science and Technology Studies (STS), led by David Bloor at Edinburgh in the late 70s, overplayed both the underdetermination of theory by evidence and the concept of value-laden theories. These sociologists also failed to see the irony of claiming a privileged position on the untenability of privileged positions in science – i.e., that it is an absolute truth that there are no absolute truths.

While the postmodern critique of science and facile politics in STS seem to be having a minor revival, the threats to real science from sociology, literary criticism and anthropology (I don’t mean that all sociology and anthropology are non-scientific) are small. But more subtle and possibly more ruinous threats to science may exist, and they come partly from within.

Modern threats to science seem more related to Eisenhower’s concerns than to the postmodernists’. While Ike worried about the influence the US military had over corporations and universities (see the highly nuanced history of James Conant, Harvard president and chair of the National Defense Research Committee), his concern was not with the validity of scientific knowledge but with the influence of values and biases on both the subjects of research and the conclusions reached. Science, when biased enough, becomes bad science, even when scientists don’t fudge the data.

Pharmaceutical research is the present poster child of biased science. Accusations take the form of claims that GlaxoSmithKline knew that Helicobacter pylori caused ulcers – not stress and spicy food – but concealed that knowledge to preserve sales of the blockbuster drugs, Zantac and Tagamet. Analysis of those claims over the past twenty years shows them to be largely unsupported. But it seems naïve to deny that years of pharmaceutical companies’ mailings may have contributed to the premature dismissal by MDs and researchers of the possibility that bacteria could in fact thrive in the stomach’s acid environment. But while Big Pharma may have some tidying up to do, its opponents need to learn what a virus is and how vaccines work.

Pharmaceutical firms generally admit that bias – unconscious, and of the selection and confirmation sort (motivated reasoning) – is a problem. Amgen scientists recently tried to reproduce results considered landmarks in basic cancer research, to study why clinical trials in oncology have such a high failure rate. They reported in Nature that they were able to reproduce the original results in only six of 53 studies. A similar team at Bayer reported that only about 25% of published preclinical studies could be reproduced. That the big players publish analyses of bias in their own field suggests that the concept of self-correction in science is at least somewhat valid, even in cut-throat corporate science.

Some see another source of bad pharmaceutical science in the almost religious adherence to the 5% (±1.96 sigma) definition of statistical significance, probably traceable to R.A. Fisher’s 1926 The Arrangement of Field Experiments. The 5% false-positive probability criterion is arbitrary, but it is institutionalized. It can be seen as a classic case of subjectivity being perceived as objectivity because of arbitrary precision. Repeat any experiment often enough and you’ll eventually get a statistically significant result, even where no real effect exists. Pharma firms now aim to prevent such bias by participating in a registration process that requires researchers to publish findings, good, bad or inconclusive.
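As a small illustration of that arithmetic – a sketch with made-up numbers, not a description of any particular study – the simulation below runs thousands of experiments in which the null hypothesis is true and applies the conventional 1.96-sigma cutoff. Roughly one experiment in twenty comes out “significant,” so a lab that keeps re-running a null experiment is almost guaranteed an impressive-looking result eventually. The sample size, seed, and repetition counts are arbitrary choices for the demo.

import random
import statistics

random.seed(42)

def one_null_experiment(n=100):
    """Draw n observations whose true mean is zero and test whether the
    sample mean looks 'significant' at the conventional 1.96-sigma cutoff."""
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    mean = statistics.fmean(sample)
    stderr = statistics.stdev(sample) / n ** 0.5
    return abs(mean / stderr) > 1.96  # True = a false positive

trials = 10000
false_positives = sum(one_null_experiment() for _ in range(trials))
print(f"false-positive rate: {false_positives / trials:.3f}")  # close to 0.05

# Chance that k repetitions of a null experiment yield at least one
# "significant" result, assuming independent repetitions: 1 - 0.95^k.
for k in (1, 5, 10, 20, 50):
    print(f"{k:3d} repetitions -> {1 - 0.95 ** k:.0%} chance of a 'significant' finding")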

Academic research should take note. As is often reported, the dependence of tenure and academic prestige on publishing has taken a toll (“publish or perish”). Publishers like dramatic and conclusive findings, so there’s a strong incentive to publish impressive results – too strong. Competitive pressure on second-tier publishers leads them to publish poor or even fraudulent study results and to select lax reviewers, incapable of or unwilling to dispute authors. Karl Popper’s falsification model of scientific behavior, in this scenario, is a poor match for actual behavior in science. The situation has led to hoaxes like Sokal’s, but within – rather than across – disciplines. The 2014 publication of the nonsensical “Fuzzy, Homogeneous Configurations,” by Marge Simpson and Edna Krabappel (cartoon character names), in the Journal of Computational Intelligence and Electronic Systems is a popular example. Following Alan Sokal’s line of argument, should we declare the discipline of computational intelligence to be pseudoscience on this evidence?

Note that here we’re really using Bruno Latour’s definition of science – what scientists and related parties do with a body of knowledge in a network, rather than simply the body of knowledge. Should scientists be held responsible for what corporations and politicians do with their knowledge? It’s complicated. When does flawed science become bad science? It’s hard to draw the line, but does that mean no line needs to be drawn?

Environmental science, I would argue, is some of the worst science passing for genuine these days. Most of it exists to fill political and ideological roles. The Bush administration pressured scientists to suppress communications on climate change and to remove the terms “global warming” and “climate change” from publications. In 2005 Rick Piltz resigned from the U.S. Climate Change Science Program, claiming that Bush appointee Philip Cooney had personally altered US climate change documents to lessen the strength of their conclusions. In a later congressional hearing, Cooney confirmed having done this. Was this bad science, or just bad politics? Was it bad science for those whose conclusions had been altered not to blow the whistle?

The science of climate advocacy looks equally bad. Lack of scientific rigor in the IPCC is appalling – for reasons far deeper than the hockey stick debate. Given that the IPCC started with the assertion that climate change is anthropogenic and then sought confirming evidence, it is not surprising that the evidence it has accumulated supports the assertion. Compelling climate models, like that of Rick Muller at UC Berkeley, have since given strong support for anthropogenic warming. That supports the anthropogenic-warming hypothesis, but it lends no support to the IPCC’s scientific practices. Unjustified belief, true or false, is not science.

Climate change advocates, many of whom are credentialed scientists, are particularly prone to mixing bad science with bad philosophy, as when evidence for anthropogenic warming is presented as confirming the hypothesis that wind and solar power will reverse global warming. Stanford’s Mark Jacobson, a pernicious proponent of such activism, does immeasurable damage to his own stated cause with his descent into the renewables fantasy.

Finally, both major climate factions stoop to tying their entire positions to the proposition that climate change has been measured (or not). That is, both sides are in implicit agreement that if no climate change has occurred, then the whole matter of anthropogenic climate-change risk can be put to bed. As a risk man observing the risk vector’s probability/severity axes – and as someone who buys fire insurance though he has a brick house – I think our science dollars might be better spent on mitigation efforts that stand a chance of being effective rather than on 1) winning a debate about temperature change in recent years, or 2) appeasing romantic ideologues with “alternative” energy schemes.

Science survived Abe Lincoln (rain follows the plow), Ronald Reagan (evolution just a theory) and George Bush (coercion of scientists). It will survive Barack Obama (persecution of deniers) and Jerry Brown and Al Gore (science vs. pronouncements). It will survive big pharma, cold fusion, superluminal neutrinos, Mark Jacobson, Brian Greene, and the Stanford propaganda machine. Science will survive bad science because bad science is part of science, and always has been. As Paul Feyerabend noted, Galileo routinely used propaganda, unfair rhetoric, and arguments he knew were invalid to advance his worldview.

Theory on which no evidence can bear is religion. Theory that is indifferent to evidence is often politics. Granting Bloor, for the sake of argument, that all theory is value-laden, and granting Kuhn, for the sake of argument, that all observation is theory-laden, science still seems to have an uncanny knack for getting the world right. Planes fly, quantum tunneling makes DVD players work, and vaccines prevent polio. The self-corrective nature of science appears to withstand cranks, frauds, presidents, CEOs, generals and professors. As Carl Sagan often said, science should withstand vigorous skepticism. Further, science requires skepticism and should welcome it, both from within and from irksome sociologists.


XKCD cartoon courtesy of xkcd.com

 
