Physics for Venture Capitalists

VCs stress that they’re not in the business of evaluating technology. Few failures of startups are due to bad tech. Leo Polovets at Susa Ventures says technical diligence is a waste of time because few startups have significant technical risk. Success hinges on knowing customers’ needs, efficiently addressing those needs, hiring well, minding customer acquisition, and having a clue about management and governance.

In the dot-com era, I did tech diligence for Internet Capital Group. They invested in everything I said no to. Every one of those startups failed, likely for business management reasons. Had bad management not killed them, their bad tech would have in many cases. Are things different now?

Polovets is surely right in the domain of software. But hardware is making a comeback, even in Silicon Valley. A key difference between diligence on hardware and software startups is that software technology barely relies on the laws of nature. Hardware does. Hardware is dependent on science in a way software isn’t.

Silicon Valley’s love affairs with innovation and design thinking (the former being a retrospective judgment after market success, the latter mostly marketing jargon) lead tech enthusiasts and investors to believe that we can do anything given enough creativity. Creativity can, on rare occasions, give us new laws of nature: Isaac Newton and Albert Einstein did it. But their creativity was different in kind from that of the Wright Brothers and Elon Musk. Innovators of that sort don’t change the laws of nature; they are very tightly bound by them.

You see the impact of innovation overdose in the responses to any caution about overoptimism in technology. Warp drive has to be real, right? It was already imagined back when William Shatner could do somersaults.

When the Solar Impulse aircraft achieved 400 miles non-stop, enthusiasts demanded solar passenger planes. Solar Impulse has the wingspan of an A380 (800 passengers) but weighs less than my car. When the Washington Post made the mildly understated point that solar powered planes were a long way from carrying passengers, an indignant reader scorned their pessimism: “I can see the WP headline from 1903: ‘Wright Flyer still a long way from carrying passengers’. Nothing like a good dose of negativity.”

Another reader responded, noting that theoretical limits would give a large airliner coated with cells maybe 30 kilowatts of sun power, but it takes about 100 megawatts to get off the runway. Another enthusiast, clearly innocent of physics, said he disagreed with this answer because it addressed current technology and “best case.” Here we see a disconnect between two understandings of best case: one points to hard limits imposed by nature, the other to soft limits imposed by manufacturing and current engineering know-how.
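
A rough way to see the gap, with every number below an assumed round figure (wing area, insolation, thrust, and liftoff speed are illustrative, not data from the article or the Post):

    # Order-of-magnitude check: best-case solar harvest vs. power needed at liftoff.
    # All inputs are assumed round numbers for illustration.
    wing_area_m2 = 845.0         # assumed: roughly A380-class wing area
    solar_flux_w_m2 = 1000.0     # assumed: peak noon insolation
    cell_limit = 0.337           # Shockley-Queisser single-junction limit

    solar_power_w = wing_area_m2 * solar_flux_w_m2 * cell_limit

    takeoff_thrust_n = 1.3e6     # assumed: four large turbofans, ~330 kN each
    liftoff_speed_m_s = 80.0     # assumed: ~155 knots
    takeoff_power_w = takeoff_thrust_n * liftoff_speed_m_s   # thrust x speed

    print(solar_power_w / 1e3)       # ~285 kW best case, clouds and weight ignored
    print(takeoff_power_w / 1e6)     # ~104 MW of propulsive power at liftoff
    print(takeoff_power_w / solar_power_w)   # short by a factor of a few hundred

Exact figures depend on how much area you count and which efficiency you grant, but even at the theoretical cell limit the shortfall is two to three orders of magnitude – which was the reader’s point.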

What’s a law of nature?

“Law of nature” doesn’t have a tight definition. But in science it usually means generalities drawn from a very large body of evidence. Laws in this sense must be universal, omnipotent, and absolute – true everywhere for all time, no exceptions. Laws of nature don’t happen to be true; they have to be true (see footnote*). They are true in both main philosophical senses of “true”: correspondence and coherence. To the best of our ability, they correspond with reality from a god’s-eye perspective; and they cohere, in the sense that each gets along with every other law of nature, allowing a coherent picture of how the universe works. The laws are interdependent.

Now we’ve gotten laws wrong in the past, so our current laws may someday be overturned too. But such scientific disruptions are rare indeed – a big one in 1687 (Newton) and another in 1905 (Einstein). Lesser laws rely on – and are consistent with – greater ones. The laws of physics erect barriers to engineering advancement. Betting on new laws of physics – as cold fusion and free-energy investors have done – is a very long shot.

As an example of what flows from laws of nature, most gasoline engines (Otto cycle) have a top theoretical efficiency of about 47%. No innovative engineering prowess can do better. Material and temperature limitations reduce that further. All metals melt at some temperature, and laws of physics tell us we’ll find no new stable elements for building engines – even in distant galaxies. Moore’s law, by the way, is not in any sense a law in the way laws of nature are laws.
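
Where the Otto ceiling comes from, in one line – a sketch with assumed illustrative inputs (the ~47% figure corresponds to a compression ratio around 8 and a heat-capacity ratio closer to that of real combustion gases than of cold air):

    # Ideal Otto-cycle efficiency: eta = 1 - r**(1 - gamma)
    # r = compression ratio, gamma = heat-capacity ratio of the working gas.
    def otto_efficiency(r, gamma):
        return 1.0 - r ** (1.0 - gamma)

    print(otto_efficiency(8.0, 1.3))   # ~0.46: near the ~47% ceiling cited above
    print(otto_efficiency(10.0, 1.4))  # ~0.60: the looser cold-air-standard bound

In the formula, only compression ratio and gas properties matter; knock and materials cap the former, physics the latter – which is the point of the paragraph above.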

The Betz limit tells us that no windmill will ever convert more than 59.3% of the wind’s kinetic energy into electricity – not here, not on Jupiter, not with curvy carbon nanotube blades, not coated with dilithium crystals. This limit doesn’t come from measurement; it comes from deduction and the laws of nature. The Shockley-Queisser limit tells us no single-layer photovoltaic cell will ever convert more than 33.7% of the solar energy hitting it into electricity. Gaia be damned, but we’re stuck with physics, and physics trumps design thinking.
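
For the curious, the Betz number isn’t a measured record waiting to be broken; it drops out of momentum theory for an ideal actuator disk. A minimal sketch (the induction-factor sweep is just brute force):

    # Betz: power coefficient of an ideal wind turbine vs. axial induction factor a.
    # Momentum theory gives Cp(a) = 4*a*(1-a)**2; no choice of a beats 16/27.
    def power_coefficient(a):
        return 4.0 * a * (1.0 - a) ** 2

    best = max((power_coefficient(a / 1000.0), a / 1000.0) for a in range(1, 1000))
    print(best)   # ~(0.593, 0.333): 59.3% at a = 1/3, regardless of blade design

The Shockley-Queisser 33.7% figure has the same flavor: it follows from the photon spectrum and semiconductor band gaps, not from any shortcoming of today’s factories.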

So while funding would grind to a halt if investors dove into the details of pn-junctions in chalcopyrite semiconductors, they probably should be cautious of startups whose claims a Physics 101 student could show to flout a fundamental law of nature. That is, unless they’re fixing to jump in early, ride the hype cycle to the peak of expectation, and then bail out before the other investors catch on. They’d never do that, right?

Solyndra’s sales figures

In Solyndra’s abundant autopsies we read that those crooks duped the DoE about sales volume and profits. An instant Wall Street darling, Solyndra was named one of the 50 most innovative companies by Technology Review. The later scandal coverage never mentioned that the idea of cylindrical containers of photovoltaic cells with spaces between them was a dubious means of maximizing incident rays. Yes, some cells in a properly arranged array of tubes would always be perpendicular to the sun (duh), but the surface area of the cells within, say, 30 degrees of perpendicular to the sun is necessarily (not even physics, just geometry) only one sixth of the tube’s surface (2 * 30 / 360). The fact that the roof-facing part of the tubes catches some reflected light relies on there being space between the tubes – space that obviously isn’t catching those photons directly. A two-layer tube grabs a few more stray photons, but… Sure, the DoE should have been more suspicious of Solyndra’s bogus bookkeeping; but there’s another lesson in this $2B Silicon Valley sinkhole. Their tech was bullshit.
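
A toy geometry check of that ratio argument (unit dimensions, arbitrary; this is the paragraph’s arithmetic, not Solyndra’s actual specs):

    import math

    # A cylinder of diameter d and length L intercepts direct sun over a projected
    # area of d*L, yet its cells are spread over the full curved surface, pi*d*L.
    # And only 60 of 360 degrees of that circumference sit within 30 degrees of
    # perpendicular to the sun at any given moment.
    d, L = 1.0, 1.0
    projected_area = d * L
    cell_area = math.pi * d * L
    frac_near_normal = 2 * 30 / 360

    print(projected_area / cell_area)   # ~0.32: direct-light benefit per unit of cell area
    print(frac_near_normal)             # ~0.17: the one-sixth figure in the text

Whatever the tubes gain from roof-reflected light, they give up in the spaces between them; the geometry was never on Solyndra’s side.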

The story at Abound Solar was surprisingly similar, though more focused on bad engineering than bad science. Claims about energy, given a long history of swindlers, always warrant technical diligence. Upfront Ventures recently led a $20M B round for uBeam, maker of an ultrasonic charging system. Its high-frequency sound vibrations travel across the room to a receiver that can run your iPhone or, someday, as one presentation reported, your flat screen TV, from a distance of four meters. Mark Cuban and Marissa Mayer took the plunge.

Now we can’t totally rule out uBeam’s claims, but simple physics screams out a warning. High-frequency sound is attenuated rapidly in air. And even if it weren’t, a point-source emitter (likely a good model for the uBeam transmitter) obeys the inverse-square law (Johannes Kepler worked it out for light in 1604). At four meters, the signal is one sixteenth as strong as at one meter. Up close it would fry your brains. Maybe they track the target and focus a beam on it (sounds expensive). But in any case, sound-pressure-level regulations limit transmitter strength. It’s hard to imagine extracting more than a watt or so from across the room. Had Upfront hired a college kid for a few days, they might have spent more wisely and spared uBeam’s CEO the embarrassment of stepping down last summer after missing every target.
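
The inverse-square part is a one-liner; a sketch (the point-source model is the assumption above, and atmospheric absorption of ultrasound – very roughly of order a dB per meter at tens of kHz, worse at higher frequencies, an assumed ballpark rather than a uBeam spec – only piles on additional loss):

    # Inverse-square falloff for a small, roughly point-like emitter:
    # intensity scales as 1/r^2 relative to a reference distance.
    def relative_intensity(r_m, r_ref_m=1.0):
        return (r_ref_m / r_m) ** 2

    print(relative_intensity(4.0))   # 0.0625: one sixteenth of the 1-meter intensity

So delivering a given power at four meters takes sixteen times the radiated power needed at one meter, before absorption, aiming losses, and SPL limits are counted.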

Even b-school criticism of Theranos focuses on the firm’s culture of secrecy, Holmes’ poor management practices, and bad hiring, skirting the fact that every med student knew that a drop of blood doesn’t contain enough of the relevant cells to give accurate results.

Homework: Water don’t flow uphill

Now I’m not saying all VCs, MBAs, and private equity folk should study much physics. But they should probably know as much physics as I know about convertible notes. They should know that laws of nature exist, and that diligence is due for bold science/technology claims. Start here:

Newton’s 2nd law:

  • Roughly speaking, force = mass times acceleration. F = ma.
  • Important for cars.
  • Practical, though perhaps unintuitive, application: slow down on I-280 when it’s raining (see the sketch below).
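
A sketch of what that I-280 advice cashes out to, with assumed ballpark friction coefficients (not measurements): braking force is friction, friction is proportional to the normal force, so stopping distance grows with the square of speed and inversely with grip.

    # Friction-limited stopping distance: d = v^2 / (2 * mu * g), straight from
    # F = ma with friction (mu * m * g) as the only braking force.
    def stopping_distance_m(speed_mph, mu, g=9.81):
        v = speed_mph * 0.44704            # mph -> m/s
        return v * v / (2.0 * mu * g)

    print(stopping_distance_m(70, 0.8))    # dry pavement (assumed mu): ~62 m
    print(stopping_distance_m(70, 0.4))    # wet pavement (assumed mu): ~125 m

Halve the grip and the stopping distance doubles, at any speed.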

2nd Law of Thermodynamics:

  • The entropy of an isolated system never decreases; no real process is perfectly reversible. More understandable versions came from Lord Kelvin and Rudolf Clausius.
  • Kelvin: You can’t get any mechanical effect from anything by cooling it below the temperature of its surroundings.
  • Clausius: Without adding energy, heat can never pass from a cold thing to a hot thing.
  • Practical application: in an insulated room, leaving the refrigerator door open will raise the room’s temperature.
  • American frontier version (Locomotive Engineering Vol XXII, 1899): “Water don’t flow uphill.”

_ __________ _


“If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations – then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation – well, these experimentalists do bungle things sometimes. But if your theory is found to be against the Second Law of Thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.”
 – Arthur Eddington


*footnote: Critics might point out that the distinction between laws of physics (must be true) and mere facts (happen to be true) of physics seems vague, and that this vagueness robs any real meaning from the concept of laws of physics. Who decides what has to be true instead of what happens to be true? “All copper in the universe conducts electricity” seems like a law. “All trees in my yard are oak” does not. How arrogant was Newton to move from observing that f=ma in our little solar system to his proclamation that force equals mass times acceleration in all possible worlds. All laws of science (and all scientific progress) seem to rely on the logical fallacy of affirming the consequent. This wasn’t lost on the ancient anti-sophist Greeks (Plato), the cleverest of the early Christian converts (Saint Jerome), and perceptive postmodernists (Derrida). David Hume’s 1739 A Treatise of Human Nature methodically destroyed the idea that there is any rational basis for the kind of inductive inference on which science is based. But… Hume was no relativist or nihilist. He appears to hold, as Plato did in Theaetetus, that global relativism is self-undermining. In 1951, WVO Quine eloquently exposed the logical flaws of scientific thinking in Two Dogmas of Empiricism, finding real problems with distinctions between truths grounded in meaning and truths grounded in fact. Unpacking that a bit, Quine would say that it is pointless to ask whether f=ma is a law of nature or just a deep empirical observation. He showed that we can combine two statements that appear to be laws in a way that yields a statement that has to be merely a fact. Finally, from Thomas Kuhn’s perspective, deciding which generalized observation becomes a law is entirely a social process. Postmodernist and Strong Program adherents then note that this process is governed by local community norms. Cultural relativism follows, and ultimately decays into pure subjectivism: each of us has facts that are true for us but not for each other. Scientists and engineers have found that relativism and subjectivism aren’t so useful for inventing vaccines and making airplanes fly. Despite the epistemological failings, laws of nature work pretty well, they say.

 

 

 


A Bayesian folly of J Richard Gott

Don’t get me wrong. J Richard Gott is one of the coolest people alive. Gott does astrophysics at Princeton and makes a good argument that time travel is indeed possible via cosmic strings. He’s likely way smarter than I, and he’s from down home. But I find big holes in his Copernicus Method, for which he first achieved fame.

Gott conceived his Copernicus Method for estimating the lifetime of any phenomenon when he visited the Berlin Wall in 1969. Wondering how long it would stand, Gott figured that, assuming there was nothing special about his visit, a best guess was that he happened upon the wall 50% of the way through its lifetime. Gott saw this as an application of the Copernican principle: nothing is special about our particular place (or time) in the universe. As Gott saw it, the wall would likely come down eight years later (1977), since it had been standing for eight years in 1969. That’s not exactly how Gott did the math, but it’s the gist of it.

I have my doubts about the Copernican principle – in applications from cosmology to social theory – but that’s not my beef with Gott’s judgment of the wall. Had Gott thrown a blindfolded dart at a world map to select his travel destination, I’d buy it. But anyone who woke up at the Berlin Wall in 1969 did not arrive there by a random process. The wall was certainly among the top 1000 interesting spots on earth in 1969. Chance alone didn’t lead him there. The wall was still news. Gott should have concluded that he saw the wall somewhere in the first half of its life, not at its midpoint.

Finding yourself at the grand opening of a Brooklyn pizza shop, it’s downright cruel to predict that it will last one more day. That’s a misapplication of the Copernican principle, unless you ended up there by rolling dice to pick the time you’d parachute in from the space station. More likely you saw Vini’s post on Facebook last night.

Gott’s calculation boils down to Bayes Theorem applied to a power-law distribution with an uninformative prior expectation. I.e., you have zero relevant knowledge. But from a Bayesian perspective, few situations warrant an uninformative prior. Surely he knew something of the wall and its peer group. Walls erected by totalitarian world powers tend to endure (Great Wall of China, Hadrian’s Wall, the Aurelian Wall), but mean wall age isn’t the key piece of information. The distribution of wall ages is. And though I don’t think he stated it explicitly, Gott clearly judged wall longevity to be scale-invariant. So the math is good, provided he had no knowledge of this particular wall in Berlin.
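
For reference, a minimal sketch of the arithmetic behind Gott’s rule – the “delta t” argument – using only the wall’s age at his 1969 visit as input (the interval formula is the standard one for a uniformly random arrival time):

    # Gott's delta-t argument: if your arrival is uniformly random within a
    # phenomenon's lifetime, then with confidence c its remaining lifetime lies in
    # [t_past*(1-c)/(1+c), t_past*(1+c)/(1-c)]. The even-money (median) guess for
    # the remaining lifetime is t_past itself.
    def gott_interval(t_past_years, confidence):
        lo = t_past_years * (1.0 - confidence) / (1.0 + confidence)
        hi = t_past_years * (1.0 + confidence) / (1.0 - confidence)
        return lo, hi

    print(gott_interval(8, 0.50))   # Berlin Wall in 1969: ~2.7 to 24 more years
    print(gott_interval(8, 0.95))   # the famous 95% version: ~0.2 to 312 more years

The eight-more-years (1977) figure in the text is the median of that distribution; all of it hangs on the arrival time really being random.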

But he did. He knew its provenance; it was Soviet. Believing the wall would last eight more years was the same as believing the Soviet Union would last eight more years. So without any prior expectation about the Soviet Union, Gott should have judged the wall would come down when the USSR came down. Running that question through the Copernicus Method would have yielded the wall falling in the year 2016, not 1977 (i.e., 1969 + 47, the age of the USSR in 1969). But unless Gott was less informed than most, his prior expectation about the Soviet Union wasn’t uninformative either. The regime showed no signs of weakening in 1969 and no one, including George Kennan, Richard Pipes, and Gorbachev’s pals, saw it coming. Given the power-law distribution, a proper Bayesian estimate would have put the fall well after 2016.

With any prior knowledge at all, the Copernican principle does not apply. Gott’s prediction was off by only a dozen years. He got lucky.


Physics for Frisco motorheads

San Francisco police are highly tolerant. A few are tolerant in the way giant sloths are tolerant. Most are tolerant because SF ties their hands from all but babysitting the homeless. Excellent at tolerating heroin use on Market Street, they’re also proficient at tolerating vehicular crime, from sailing through red lights (23 fatalities downtown last year) to minor stuff like illegal – oops, undocumented – car mods.

For a progressive burg, SF has a lot of muscle cars. Oddly, many of the car nuts in San Francisco use the term “Frisco,” against local norms.

Back in the ’70s, in my small Ohio town, the losers drove muscle cars to high school. A very few of these cars had amazing acceleration. A variant of the ’65 Pontiac Catalina could do zero to 60 in 4 1/2 seconds. A Tesla might leave it in the dust, but that was something back then. While the Catalina’s handling was awful, it could admirably smoke the starting line. Unlike the Catalina, most muscle cars of the ’60s and ’70s – including the curvaceous ’75 Corvette – were total crap, even for accelerating. My witless schoolmates lacked any grasp of the simple physics that could explain how and why their cars were crap. I longed to leave those barbarians and move to someplace civilized. I ended up in San Francisco.

Those Ohio simpletons strutted their beaters’ ability to squeal tires from a dead stop. They did this often, in case any of us might forget just how fast their foot could pound the pedal. Wimpy crates couldn’t burn rubber like that. So their cars must be pretty badass, they thought. Their tires would squeal with the tenderest touch of the pedal. Awesome power, right?

Actually, it meant a badly unbalanced vehicle design combined with a pedal linkage yielding a sharply nonlinear relationship between gas-pedal position and throttle-plate position. This abomination of engineering attracted 17-year-old bubbas cocksure that hot chicks dig the smell of burning rubber. See figure A.

Fig. A

This hypothetical, badly designed car has a feeble but weighty 100 hp engine and rear-wheel drive. Its rear tires will squeal at the drop of a hat even though the car is gutless. Its center of gravity – the point where all its weight can be treated as concentrated – is too far forward, leaving too little load on the rear wheels.

Friction, which allows you to accelerate, is proportional to the normal force, i.e. the force of the ground pushing up on the tires. That is, the traction capacity of a tire contacting the road is proportional to the weight on the tire. Under hard acceleration, the reaction to the frictional force at the rear contact patches transfers weight rearward, increasing the normal force there – carried to the extreme, that’s a wheelie. This car will never do a wheelie. It lacks the torque, even if the meathead driving it floors it before dumping the clutch.

Figure A is an exaggeration of what was going on in the heaps driven by my classmates.

Above, I noted that the traction capacity of a tire contacting the road is proportional to the weight on the tire. The constant of proportionality is called the coefficient of friction. From this we get F = uN, meaning frictional force equals the coefficient of friction (“u”) times the normal force, which is, roughly speaking, the weight pushing on the tire.

For ordinary tires on smooth, dry pavement the coefficient of friction tops out at roughly 1.0. That means a car’s maximum possible acceleration would be about 1 g: 32.2 feet per second per second. Calculating a 0-60 time based on 1 g yields 2.73 seconds. Hot cars can momentarily beat that, because sticky tires sink into small depressions in the pavement, like a pinion engaging a rack (a round gear on a linear gear).

Here’s how Isaac Newton, who was into hot cars, viewed the 0-60-at-1-g problem:

  • Acceleration is change in speed over time: a = Δv/t.
  • Acceleration due to gravity (a body falling in a vacuum) is 32.2 feet per second per second.
  • 5280 feet in a mile. 60 seconds in a minute.
  • 60 mph is a mile a minute: 5280/60 ft/sec = 88 ft/sec.
  • a = Δv/t. Solve for t: t = Δv/a. Δv = 88 ft/sec. a = 32.2 ft/sec/sec. t = Δv/a = 88/32.2 (ft/sec) / (ft/sec squared) = 2.73 sec. Voila. (The same arithmetic appears in code below.)
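
Here it is with the friction cap made explicit (the 0.7 street-tire value is an assumed illustration, not a measurement):

    # 0-60 time when acceleration is friction-limited: a = mu * g, t = delta_v / a.
    MPH_TO_FT_PER_S = 5280.0 / 3600.0      # 1 mph = 1.467 ft/sec
    G_FT_PER_S2 = 32.2                     # acceleration due to gravity

    def zero_to_sixty_seconds(mu):
        a = mu * G_FT_PER_S2               # best possible acceleration for this grip
        return 60.0 * MPH_TO_FT_PER_S / a

    print(zero_to_sixty_seconds(1.0))      # ~2.73 s at a full 1 g
    print(zero_to_sixty_seconds(0.7))      # ~3.9 s on ordinary street tires (assumed mu)

Note that horsepower never appears: once the tires are the limit, more engine just means more tire smoke, which was precisely my classmates’ situation.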

The early 428 Shelby Mustangs were amazing, even by today’s acceleration standards, though they were likely still awful to steer. In contrast to the noble Shelbys,  some late ’60s – early ’70s Mustangs with inline-six 3-liter engines topped out at just over 100 hp. Ford even sold a V8 version of the Mustang with a pitiful 140 hp engine. Shame, Lee Iacocca. It could do zero to 60 in around 13 seconds. Really.

Those cars had terrible handling because their suspensions were lousy and because of subtle aspects of weight distribution (extra credit: see polar moment of inertia).

If you can’t have power, at least have noise. To make your car or bike really loud, do this simple trick. Insert a stack of washers or some nuts between the muffler and exhaust pipe to leave a big gap, thereby effectively disconnecting the muffler. This worked back in 1974 and, despite civic awareness and modern sensitivity to air and noise pollution, it still works great today. For more hearing damage, custom “exhaust” systems, especially for bikes (cops have deep chopper envy and will look the other way when your hog sets off car alarms), can help you exceed 105 dB SPL. Every girl’s eye will be on you, bud. Hubba hubba. See figure B.

Fig. B

I get a bit of nostalgia when I hear those marvels of engineering from the ’60s and ’70s on Market Street nightly, at Fisherman’s Wharf, and even in my neighborhood. Our police can endure that kind of racket because they’re well-paid to tolerate it. Wish I were similarly compensated. I sometimes think of this at 4 am on Sundays, even with my windows closed.

I visited the old country, Ohio, last year. There were no squealing tires and few painfully loud motors on the street. Maybe the motorheads evolved. Maybe the cops aren’t paid enough to tolerate them. Ohio was nice to visit, but the deplorable intolerance was stifling.


Representative Omar’s arithmetic

Women can’t do math. Hypatia of Alexandria and Émilie du Châtelet notwithstanding, this was asserted for thousands of years by men who controlled access to education. With men in charge it was a self-fulfilling prophecy. Women now represent the majority of college students and earn about 40% of math degrees. That’s progress.

Last week Marco Rubio caught hell for taking Ilhan Omar’s statement about double standards and unfair terrorism risk assessment out of context. The quoted fragment was: “I would say our country should be more fearful of white men across our country because they are actually causing most of the deaths within this country…”

Most news coverage of the Rubio story (e.g. Vox) notes that Omar did not mean that everyone should be afraid of white men as a group, but that, e.g., “violence by right-wing extremists, who are overwhelmingly white and male, really is a bigger problem in the United States today than jihadism.”

Let’s look at the numbers. Wikipedia, following the curious date-range choice of the US GAO, notes: “of the 85 violent extremist incidents that resulted in death since September 12, 2001, far-right violent extremist groups were responsible for 62 (73 percent) while radical Islamist violent extremists were responsible for 23 (27 percent).” Note that those are incident counts, not death counts. The fatality counts were 106 (47%) for white extremists and 119 (53%) for jihadists. Counting fatalities instead of incidents reverses the sense of the numbers.

Pushing the terminus post quem back one day adds the 2,977 9-11 fatalities to the category of deaths from jihadists. That makes 3% of fatalities from right wing extremists and 97% from radical Islamist extremists. Pushing the start date further back to 1/1/1990, again using Wikipedia numbers, would include the Oklahoma City bombing (white extremists, 168 dead), nine deaths from jihadists, and 14 other deaths from white wackos, including two radical Christian antisemites and professor Ted Kaczynski. So the numbers since 1990 show 92% of US terrorism deaths from jihadists and 8% from white extremists.
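
The arithmetic, using only the counts cited above (nothing new added; this just reproduces the percentages):

    # Deaths since 9/12/2001, per the GAO-derived counts above.
    white, jihadist = 106, 119
    print(white / (white + jihadist))                 # ~0.47

    # Move the start date back one day to include 9/11 itself.
    jihadist_incl_911 = jihadist + 2977
    print(white / (white + jihadist_incl_911))        # ~0.03

    # Push the start date back to 1/1/1990: Oklahoma City plus 14 other deaths
    # from white extremists, and nine more deaths from jihadists.
    white_1990 = white + 168 + 14
    jihadist_1990 = jihadist_incl_911 + 9
    total = white_1990 + jihadist_1990
    print(white_1990 / total, jihadist_1990 / total)  # ~0.08 and ~0.92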

Barring any ridiculous adverse selection of date range (in the 3rd week of April, 1995, 100% of US terrorism deaths involved white extremists), Omar is very, very wrong in her data. The jihadist death toll dwarfs that from white extremists.

But that’s not the most egregious error in her logic – and that of most politicians armed with numbers and a cause. The flagrant abuse of data is what Kahneman and Tversky termed base-rate neglect. Omar, in discussing profiling (sampling a population subset), is arguing about frequencies while citing raw incident counts. The base rate (an informative prior, to Bayesians) is crucial. Even if white extremists caused most terrorism deaths – as she claimed – deaths attributable to white men (terrorists of all flavors are overwhelmingly male) would have to outnumber those attributable to Muslims by something like a hundred to one for her profiling argument to hold, because the base rate of being Muslim in the US is only about one percent.
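
A sketch of the base-rate point, with population shares and death counts that are purely illustrative assumptions (not the data above): identical raw counts translate into wildly different per-capita rates when one group is many times the size of the other.

    # Profiling arguments turn on per-capita rates, not raw counts.
    US_POPULATION = 327e6
    SHARE_WHITE_MEN = 0.30       # assumed round figure
    SHARE_MUSLIM = 0.01          # assumed, roughly one percent

    def deaths_per_million(deaths, group_share):
        return deaths / (US_POPULATION * group_share / 1e6)

    # Equal hypothetical death counts for both groups:
    print(deaths_per_million(300, SHARE_WHITE_MEN))   # ~3 deaths per million
    print(deaths_per_million(300, SHARE_MUSLIM))      # ~92 deaths per million

To flip that comparison, the count attributed to the much larger group has to exceed the other by roughly the ratio of the group sizes – the factor Omar’s argument silently needs and never gets.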

The press overwhelmingly worked Rubio over for his vicious smear. 38 of the first 40 Google search results on “Omar Rubio” favored Omar. One favored Rubio and one was an IMDb link to an actor named Omar Rubio. None of the news pieces, including the one friendly to Rubio, mentioned Omar’s bad facts (bad data) or her bad analysis thereof (bad math). Even if she were right about the data – and she is terribly wrong – she’d still be wrong about the statistics.

I disagree with Trump about Omar. She should not go back to Somalia. She should go back to school.


Which Is To Be Master? – Humpty Dumpty’s Research Agenda

Should economics, sociology or management count as science?

Some 2,400 years ago, Plato, in The Sophist, described a battle between the gods and the earth giants. The fight was over the foundations of knowledge. The gods thought knowledge came from innate concepts and deductive reasoning only. Euclid’s geometry was a perfect example – self-evident axioms plus deduced theorems. In this model, no experiments are needed. Plato explained that the earth giants, however, sought knowledge through earthly experience. Plato sided with the gods; and his opponents, the Sophists, sided with the giants. Roughly speaking, this battle corresponds to the modern tension between rationalism (the gods) and empiricism (the giants). For the gods, the articles of knowledge must be timeless, universal and certain. For the giants, knowledge is contingent, experiential, and merely probable.


Plato’s approach led the Greeks – Aristotle, most notably – to hold that rocks fall with speeds proportional to their weights, a belief that persisted for 2000 years until Galileo and his insolent ilk had the gall to test it. Science was born.

Enlightenment era physics aside, Plato and the gods are alive and well. Scientists and social reformers of the Enlightenment tried to secularize knowledge. They held that common folk could overturn beliefs with the right evidence. Empirical evidence, in their view, could trump any theory or authority. Math was good for deduction; but what’s good for math is not good for physics, government, and business management.

Euclidean geometry was still regarded as true – a perfect example of knowledge fit for the gods –  throughout the Enlightenment era. But cracks began to emerge in the 1800s through the work of mathematicians like Lobachevsky and Riemann. By considering alternatives to Euclid’s 5th postulate, which never quite seemed to fit with the rest, they invented other valid (internally consistent) geometries, incompatible with Euclid’s. On the surface, Euclid’s geometry seemed correct, by being consistent with our experience. I.e., angle sums of triangles seem to equal 180 degrees. But geometry, being pure and of the gods, should not need validation by experience, nor should it be capable of such validation.

Non-Euclidean Geometry rocked Victorian society and entered the domain of philosophers, just as Special Relativity later did. Hotly debated, its impact on the teaching of geometry became the subject of an entire book by conservative mathematician and logician Charles Dodgson. Before writing that book, Dodgson published a more famous one, Alice in Wonderland.

The mathematical and philosophical content of Alice has been analyzed at length. Alice’s dialogue with Humpty Dumpty is a staple of semantics and semiotics, particularly Humpty’s use of stipulative definition. Humpty first reasons that “unbirthdays” are better than birthdays, there being so many more of them, and then proclaims glory. Picking up that dialogue, Humpty announces,

‘And only one [day of the year] for birthday presents, you know. There’s glory for you!’

‘I don’t know what you mean by “glory”,’ Alice said.

Humpty Dumpty smiled contemptuously. ‘Of course you don’t — till I tell you. I meant “there’s a nice knock-down argument for you!”’

‘But “glory” doesn’t mean “a nice knock-down argument”,’ Alice objected.

‘When I use a word,’ Humpty Dumpty said, in rather a scornful tone, ‘it means just what I choose it to mean — neither more nor less.’

‘The question is,’ said Alice, ‘whether you can make words mean so many different things.’

‘The question is,’ said Humpty Dumpty, ‘which is to be master — that’s all.’

Humpty is right that one can redefine terms at will, provided a definition is given. But the exchange hints at a deeper notion. While having a private language is possible, it is also futile, if the purpose of language is communication.

Another aspect of this exchange gets little coverage by analysts. Dodgson has Humpty emphasize the concept of argument (knock-down), nudging us in the direction of formal logic. Humpty is surely a stand-in for the proponents of non-Euclidean geometry, against whom Dodgson is strongly (though wrongly – more below) opposed. Dodgson was also versed in Greek philosophy and Platonic idealism. Humpty is firmly aligned with Plato and the gods. Alice sides with Plato’s earth giants, the sophists. Humpty’s question, which is to be master?, points strongly at the battle between the gods and the giants. Was this Dodgson’s main intent?

When Alice first chases the rabbit down the hole, she says that she fell for a long time, and reasons that the hole must be either very deep or that she fell very slowly. Dodgson, schooled in Newtonian mechanics, knew, unlike the ancient Greeks, that all objects fall at the same rate. So the possibility that Alice fell slowly suggests that even the laws of nature are up for grabs. In science, we accept that new evidence might reverse what we think are the laws of nature, yielding a scientific revolution (paradigm shift).

In trying to vindicate “Euclid’s masterpiece,” as Dodgson called it, he is trying to free himself from an unpleasant logical truth: within the realm of math, we have no basis to think the world is Euclidean rather than Lobachevskian. He’s trying to rescue conservative mathematics (Euclidean geometry) by empirical means. Logicians would say Dodgson is confusing a synthetic and a posteriori proposition with one that is analytic and a priori. That is, justification of the 5th postulate can’t rely on human experience, observations, or measurements. Math and reasoning feed science; but science can’t help math at all. Dodgson should know better. In the battle between the gods and the earth giants, experience can only aid the giants, not the gods. As historian of science Steven Goldman put it, “the connection between the products of deductive reasoning and reality is not a logical connection.” If mathematical claims could be validated empirically then they wouldn’t be timeless, universal and certain.

While Dodgson was treating math as a science, some sciences today have the opposite problem. They side with Plato. This may be true even in physics. String theory, by some accounts, has hijacked academic physics, especially its funding. Wolfgang Lerche of CERN called string theory the Stanford propaganda machine working at its fullest. String theory at present isn’t testable. But its explanatory power is huge; and some think physicists pursue it with good reason. It satisfies at least one of the criteria Richard Dawid lists as reasons scientists follow unfalsifiable theories:

  1. the theory is the only game in town; there are no other viable options
  2. the theoretical research program has produced successes in the past
  3. the theory turns out to have even more explanatory power than originally thought

Dawid’s criteria may not apply to the social and dismal sciences. Far from the only game in town, too many theories – as untestable as strings, all plausible but mutually incompatible – vie for our Nobel honors.

Privileging innate knowledge and reason – as Plato did – requires denying natural human skepticism. Believing that intuition alone is axiomatic for some types of knowledge of the world requires suppressing skepticism about theorems built on those axioms. Philosophers call this epistemic foundationalism. A behavioral economist might see it as confirmation bias and denialism.

Physicists accuse social scientists of continually modifying their theories to accommodate falsifying evidence, still clinging to a central belief or interpretation. These recall the Marxists’ fancy footwork to rationalize their revolution not first occurring in a developed country, as was predicted. A harsher criticism is that social sciences design theories from the outset to be explanatory but not testable. In the 70s, Clifford D Shearing facetiously wrote in The American Sociologist that “a cursory glance at the development of sociological theory should suggest… that any theorist who seeks sociological fame must insure that his theories are essentially untestable.”

The Antipositivist school is serious about the issue Shearing joked about. Jurgen Habermas argues that sociology cannot explain by appeal to natural law. Deirdre (Donald) McCloskey mocked the empiricist leanings of Milton Friedman as being invalid in principle. Presumably, antipositivists are content that theories only explain, not predict.

In business management, the pairing of the terms theory and practice – the prevalence of the phrase “theory and practice” as opposed to “theory and evidence” or “theory and testing” – suggests that Plato reigns in management science. “Practice” seems to mean interacting with the world under the assumption that the theory is true.

The theory and practice model is missing the notion of testing those beliefs against the world or, more importantly, seeking cases in the world that conflict with the theory. Further, it has no notion of theory selection; theories do not compete for success.

Can a research agenda with no concept of theory testing, falsification effort, or theory competition and theory choice be scientific? If so, it seems creationism and astrology should be called science. Several courts (e.g. McLean vs. Arkansas) have ruled against creationism on the grounds that its research program fails to reference natural law, is untestable by evidence, and is certain rather than tentative. Creationism isn’t concerned with details. Intelligent Design (old-earth creationism), for example, is far more concerned with showing Darwinism wrong than with establishing an age of the earth. There is no scholarly debate between old-earth and young-earth creationism on specifics.

Critics say the fields of economics and business management are likewise free of scholarly debate. They seem to have similarly thin research agendas. Competition between theories in these fields is lacking; incompatible management theories coexist without challenges. Many theorist/practitioners seem happy to give priority to their model over reality.

Dodgson appears also to have been wise to the problem of a model having priority over the thing it models – believing the model is more real than the world. In Sylvie and Bruno Concluded, he has Mein Herr brag about his country’s map-making progress. They advanced their mapping skill from rendering at 6 inches per mile to 6 yards per mile, and then to 100 yards per mile. Ultimately, they built a map with scale 1:1. The farmers protested its use, saying it would cover the country and shut out the light. Finally, forgetting what models what, Mein Herr explains, “so we now use the country itself, as its own map, and I assure you it does nearly as well.”

Humpty Dumpty had bold theories that he furiously proselytized. Happy to construct his own logical framework and dwell therein, free from empirical testing, his research agenda was as thin as his skin. Perhaps a Nobel Prize and a high post in a management consultancy are in order. Empiricism be damned, there’s glory for you.

 

There appears to be a sort of war of Giants and Gods going on amongst them; they are fighting with one another about the nature of essence…

Some of them are dragging down all things from heaven and from the unseen to earth, and they literally grasp in their hands rocks and oaks; of these they lay hold, and obstinately maintain, that the things only which can be touched or handled have being or essence…

And that is the reason why their opponents cautiously defend themselves from above, out of an unseen world, mightily contending that true essence consists of certain intelligible and incorporeal ideas…  –  Plato, Sophist

An untestable theory cannot be improved upon by experience. – David Deutsch

An economist is an expert who will know tomorrow why the things he predicted yesterday didn’t happen. – Earl Wilson

 

 


Frederick Taylor Must Die

If management thinker Frederick Winslow Taylor (died 1915) were alive today he would certainly resent the straw man we have stood in his place. Taylor tried to inject science into the discipline of management. Innocent of much of the dehumanization of workers pinned on him, Taylor still failed in several big ways, even by the standards of his own time. For example, he failed at science.

What Taylor called science was mostly mere measurement – no explanatory or predictive theories. And he certainly didn’t welcome criticism or court refutation. Not only did he turn workers into machines, he turned managers into machines that did little more than take measurements. And as Paul Zak notes in Trust Factor, Taylor failed to recognize that organizations are people embedded in a culture.

Taylor is long dead, but Taylorism is alive and well. Before I left Goodyear Aerospace in the late ’80s, I recall the head of Human Resources at a State of the Company address reporting trends in terms of “personnel units.” Did these units include androids and work animals, I wondered.

Heavy-handed management can turn any of Douglas McGregor’s Theory Y workers (internally motivated) into Theory X workers (lazy, needing to be prodded, moved only by extrinsic rewards) using tried and true industrial-era management methodologies. That is, one can turn TPS, the Toyota Production System, originally aimed at developing people, into just another demoralizing bureaucratic procedure wearing lipstick.

In Silicon Valley, software creation is modeled as a manufacturing process. Scrum team members often have no authority for schedule, backlog, communications or anything else; and teams “do agile” with none of the self-direction, direct communications, or other principles laid out in the agile manifesto. Yet sprint velocity is computed to three decimal places by steady Taylorist hands. Across the country, micromanagement and Taylorism are two sides of the same coin, committed to eliminating employees’ control over their own futures and any sense of ownership in their work product. As Daniel Pink says in Drive, we are meant to be autonomous individuals, not individual automatons. This is particularly true for developers, who are inherently self-directed and intrinsically motivated. Scrum is allegedly based on Theory Y, but like Matrix Management a generation earlier, too many cases of Scrum are Theory X at core with a veneer of Theory Y.

Management is utterly broken, especially at the lowest levels. It is shaped to fill two forgotten needs – the deskilling of labor, and communication within fragmented networks.

Henry Ford is quoted as saying, “Why is it every time I ask for a pair of hands, they come with a brain attached?” Likely a misattribution derived from Wedgwood (below), the quote reflects generations of self-destructive management sentiment. The intentional de-skilling of the workforce accompanied industrialization in 18th century England. Division of labor yielded efficient operations on a large scale; and it reduced the risk of unwanted knowledge transfer.

When pottery maker Josiah Wedgwood built his factory, he not only provided for segmentation of work by tool and process type. He also built separate entries to each factory segment, with walls to restrict communications between workers having different skills and knowledge. Wedgwood didn’t think his workers were brain-dead hands; but he would have preferred that they were.

He worried that he might be empowering potential competitors. He was concerned that workers possessed drive and an innovative spirit, not that they lacked these qualities. Wedgwood pioneered intensive division of labor, isolating mixing, firing, painting and glazing. He ditched the apprentice-journeyman-master system for fear of spawning a rival, as actually became the case with employee John Voyez. Wedgwood wanted hands – skilled hands – without brains. “We have stepped beyond the other manufactur[er]s and we must be content to train up hands to suit our purpose” (Wedgwood to Bentley, Sep 7, 1769).

When textile magnate Francis Lowell built factories including dormitories, chaperones, and access to culture and education, he was trying to compensate for the drudgery of long hours of repetitive work and low wages. When Lowell cut wages the young female workers went on strike, published magazines critical of Lowell (“… just as though we were so many living machines” – Ellen Collins, Lowell Offering, 1845) and petitioned Massachusetts for legislation to limit work hours. Lowell wanted hands but got brains, drive, and ingenuity.

To respond to market dynamics and fluctuations in demand for product and in supply of raw materials, a business must have efficient and reliable communication channels. Commercial telephone networks only began to emerge in the late 1800s. Long distance calling was a luxury well into the 20th century. When the Swift Meat Packing Company pioneered the vertically integrated production system in the 1880s, G.F. Swift faced the then-unique challenge of needing to coordinate sales, supply chain, marketing, and operations people from coast to coast. He set up central administration and a hierarchical, military-style organizational structure for the same reason Julius Caesar’s army used that structure – to quickly move timely knowledge and instructions up, down, and laterally.

So our management hierarchies address a long-extinct communication need, and our command/control management methods reflect an industrial age wish for mindless carrot-stick employees – a model the industrialists themselves knew to be inaccurate. But we’ve made this wish come true; treat people badly long enough and they’ll conform to your Theory X expectations. Business schools tout best-practice management theories that have never been subjected to testing or disconfirmation. In their view, it is theory, and therefore it is science.

Much of modern management theory pretends that today’s knowledge workers are “so many living machines,” human resources, human capital, assets, and personnel units.

Unlike in the industrial era, modern business has no reason to de-skill its labor, blue collar or white. Yet in many ways McKinsey and other management consultancies seem dedicated to propping up and fine-tuning Theory X, as evidenced by the priority given to structure in the 7S, Weisbord, and Galbraith organizational models, for example.

This is an agency problem with a trillion dollar price tag. When asked which they would prefer, a company of self-motivated, self-organizing, creative problem solvers or a flock of compliant drones, most CEOs would choose the former. Yet the systems we cultivate yield the latter. We’re managing 21st century organizations with 19th century tools.

For almost all companies, a high-performing workforce is the most important source of competitive advantage. Most studies of employee performance, particularly of white-collar knowledge workers, find that performance hinges on engagement and trust (employees’ trust in their managers and the firm). Engagement and trust are closely tied to intrinsic motivation, autonomy, and sense of purpose. That is, performance is maximized when workers are able to tap into their skills, knowledge, experience, creativity, discipline, passion, agility and internal motivation. Studies by Deloitte, Towers Watson, Gallup, Aon Hewitt, John P Kotter, and Beer and Eisenstat over the past 25 years reach the same conclusions.

All this means Taylorism and embedding Theory X in organizational structure and management methodologies simply shackle the main source of high performance in most firms. As Pink says, command and control lead to compliance; autonomy leads to engagement. Peter Drucker fought for this point in the 1950s; America didn’t want to hear it. Frederick Taylor’s been dead for 100 years. Let’s let him rest in peace.

___


What actually stood between the carrot and the stick was, of course, a jackass. – Alfie Kohn, Punished by Rewards

Never tell people how to do things. Tell them what to do and they will surprise you with their ingenuity. – General George Patton

Control leads to compliance; autonomy leads to engagement. – Daniel H. Pink, Drive

The knowledge obtained from accurate time study, for example, is a powerful implement, and can be used, in one case to promote harmony between workmen and the management, by gradually educating, training, and leading the workmen into new and better methods of doing the work, or in the other case, it may be used more or less as a club to drive the workmen into doing a larger day’s work for approximately the same pay that they received in the past. – Frederick Taylor, The Principles of Scientific Management, 1913

That’s my real motivation – not to be hassled. That and the fear of losing my job, but y’know, Bob, that will only make someone work just hard enough not to get fired. – Peter Gibbons, Office Space, 1999

___


Bill Storage is a scholar in the history of science and technology who in his corporate days survived encounters with strategic management initiatives including Quality Circles, Natural Work Groups, McKinsey consultation, CPIP, QFD, Leadership Councils, Kaizen, Process Based Management, and TQMS.

 


			


Positive Risk – A Positive Disaster

Positive risk is an ill-conceived concept in risk management that makes a mess of things. It’s sometimes understood to be the benefit or reward, imagined before taking some action, for which the risky action was taken, and other times understood to mean a non-zero chance of an unexpected beneficial consequence of taking a chance. Many practitioners mix the two meanings without seeming to grasp the difference. For example, in Fundamentals of Enterprise Risk Management John J Hampton defends the idea of positive risk: “A lost opportunity is just as much a financial loss as is damage to people and property.”  Hampton then relates the story of US Airways flight 1549, which made a successful emergency water landing on the Hudson River in 2009. Noting the success of the care team in accommodating passengers, Hampton describes the upside to this risk: “US Airways received millions of dollars of free publicity and its reputation soared.” Putting aside the perversity of viewing damage containment as an upside of risk, any benefit to US Airways from the happy outcome of successfully ditching a plane in a river seems poor grounds for intentionally increasing the likelihood of repeating the incident because of “positive risk.”

While it’s been around for a century, the concept of positive risk has become popular only in the last few decades. Its popularity likely stems from enterprise risk management (ERM) frameworks that rely on Frank Knight’s (“Risk, Uncertainty & Profit,” 1921) idiosyncratic definition of risk. Knight equated risk with what he called “measurable uncertainty” – what most of us call probability –  which he differentiated from “unmeasurable uncertainty,” which is what most of us call ignorance (not in the pejorative sense).

Knight wrote:

“To preserve the distinction which has been drawn in the last chapter between the measurable uncertainty and an unmeasurable one we may use the term “risk” to designate the former and the term “uncertainty” for the latter.”

Many ERM frameworks rely on Knight’s terminology, despite it being at odds with the risk language of insurance, science, medicine, and engineering – and everywhere else throughout modern history. Knight’s usage of terms conflicted with that of his more mathematically accomplished contemporaries including Ramsey, Kolmogorov, von Mises, and de Finetti. But for whatever reason, ERM frameworks embrace it. Under that conception of risk, one is forced to allow that positive risk exists to provide for positive (desirable) and negative (undesirable) future outcomes of present uncertainty. To avoid confusion, the word “positive” in positive risk, in ERM circles, means desirable and beneficial, and not merely real or incontestable (as in positive proof).

The concepts that positive risk jumbles and confounds are handled in other risk-analysis domains with due clarity. Other domains acknowledge that risk is taken, when it is taken rather than being transferred or avoided, in order to gain some reward; i.e., a risk-reward calculus exists. Since no one would take risk unless some potential for reward existed (even if merely the reward of a thrill), the concept of positive risk is held as incoherent in risk-centric fields like aerospace and nuclear engineering. Positive risk confuses cause with effect, purpose with consequence, and uncertainty with opportunity; and it makes a mess of communications with serious professionals in other fields.

As evidence that only within ERM and related project-management risk tools is the concept of positive risk popular, note that the top 25 two-word strings starting with “risk” in Google’s data (e.g., aversion, mitigation, reduction, tolerance, premium, alert, exposure) all imply unwanted outcomes or expenses. Further, none of the top 10,000 collocates ending with “risk” include “positive” or similar words.

While the PMI and ISO 31000 and similar frameworks promote the idea of positive risk, most of the language within their publications does not accommodate risk being desirable. That is, if risk can be positive, the frameworks would not talk mostly of risk mitigation, risk tolerance, risk-avoidance, and risk reduction – yet they do. The conventional definition of risk appearing in dictionaries for the 200 years prior to the birth of ERM, used throughout science and engineering, holds that risk is a combination of the likelihood of an unwanted occurrence and its severity. Nothing in the common and historic definition of risk disallows that taking risks can have benefits or positive results – again, the reason we take risk is to get rewards. But that isn’t positive risk.

Dropping the concept of positive risk would prevent a lot of confusion, inconsistencies, and muddled thinking. It would also serve to demystify risk models built on a pretense of rigor and reeking of obscurantism, inconsistency, and deliberate vagueness masquerading as esoteric knowledge.

The few simple concepts mixed up in the idea of positive risk are easily extracted. Any particular risk is the chance of a specific unwanted outcome considered in combination with the undesirability (i.e. cost or severity) of that outcome. Chance means probability or a measure of uncertainty, whether computable or not; and rational agents take risks to get rewards. The concepts are simple, clear, and useful. They’ve served to reduce the rate of fatal crashes by many orders of magnitude in the era of passenger airline flight. ERM’s track record is less impressive. When I confront chieftains of ERM with this puzzle, they invariably respond, with confidence of questionable provenance, that what works in aviation can’t work in ERM.
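
To make the separation concrete, a toy sketch with made-up numbers and a deliberately crude expected-value rule (real analyses weight severity nonlinearly and fold in risk tolerance): reward is why the risk gets taken; the risk itself is still chance-times-severity of the unwanted outcome. Nothing in it needs a notion of “positive risk.”

    # Risk as conventionally defined: likelihood of an unwanted outcome combined
    # with its severity. Reward is modeled separately, not as "positive risk."
    def expected_loss(p_unwanted, severity):
        return p_unwanted * severity

    def worth_taking(p_unwanted, severity, expected_reward):
        # Crude rule: take the risk only if reward outweighs expected loss.
        return expected_reward > expected_loss(p_unwanted, severity)

    print(worth_taking(p_unwanted=0.01, severity=1_000_000, expected_reward=50_000))  # True
    print(worth_taking(p_unwanted=0.10, severity=1_000_000, expected_reward=50_000))  # False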

ERM insiders maintain that risk-management disasters like AIG, Bear Stearns, Lehman Brothers, UBS, etc. stemmed from improper use of risk frameworks. The belief that ERM is a thoroughbred who’s had a recent string of bad jockeys is the stupidest possible interpretation of an endless stream of ERM failures, yet one that the authors of ISO 31000 and risk frameworks continue to deploy with straight faces. Those authors, who penned the bollixed “effect of uncertainty on objectives” definition of risk (ISO 31000, 2009), threw a huge bone to big consultancies positioned to peddle such poppycock to unwary clients eager to curb operational risk.

The absurdity of this broader ecosystem has been covered by many fine writers, apparently to no avail. Mlodinow’s The Drunkard’s Walk, Rosenzweig’s The Halo Effect, and Taleb’s Fooled by Randomness are excellent sources. Douglas Hubbard spells out the madness of ERM’s shallow and quirky concepts of probability and positive risk in wonderful detail in both his The Failure of Risk Management and How to Measure Anything in Cybersecurity Risk. Hubbard points out the silliness of positive risk by noting that few people would take a risk if they could get the associated reward without exposure to the risk.

My greatest fear in this realm is that the consultants peddling this nonsense will infect aerospace, aviation and nuclear power as they have done in the pharmaceutical world, much of which now believes that an FMEA is risk management and that Functional Hazard Analysis is a form you complete at the beginning of a project.

The notion of positive risk is certainly not the only flaw in ERM models, but chucking this half-witted concept would be a good start.

 
