The classic formulation of the trolley-problem thought experiment goes something like this:
A runaway trolley hurtles toward five tied-up people on the main track. You see a lever that controls the switch. Pull it and the trolley diverts to a side track, saving the five but killing one person tied up on that side track. Your choices:
- Do nothing and let the trolley kill the five on the main track.
- Pull the lever, diverting the trolley onto the side track causing it to kill one person.
At this point the Ethics 101 class debates the issue and dives down the rabbit hole of deontology, virtue ethics, and consequentialism. That’s probably what Philippa Foot, who created the problem, expected. Meanwhile, engineers probably figure that the ethicists mean cable cars (below right), not trolleys (streetcars, left), since cable cars run on steep hills and rely on a single, crude mechanical brake while trolleys tend to stick to flatlands. But I digress.
Many trolley problem variants exist. The first twist usually thrust upon trolley-problem rookies was called “the fat man variant” back in the mid 1970s when it first appeared. I’m not sure what it’s called now.
The same trolley and five people, but you’re on a bridge over the tracks, and you can block it with a very heavy object. You see a very fat man next to you. Your only timely option is to push him over the bridge and onto the track, which will certainly kill him and will certainly save the five. To push or not to push.
Ethicists debate the moral distinction between the two versions, focusing on intentionality, double-effect reasoning etc. Here I leave the trolley problems in the competent hands of said ethicists.
Psychologists and behavioral economists, however, do not leave them there. They appropriate the trolley problems as an apparatus for contrasting emotion-based and reason-based cognitive subsystems. At other times it becomes all about the framing effect, one of the countless cognitive biases said to afflict the subset of souls having no psych education. This bias is cited as the reason most people fail to see the two trolley problems as morally equivalent.
The degree of epistemological presumptuousness displayed by the behavioral economist here is mind-boggling. (Baby, you don’t know my mind…, as an old Doc Watson song goes.) Just because it’s a thought experiment doesn’t mean it’s immune to the rules of good experimental design. The fat-man variant is radically different from the original trolley formulation – radically different in what the cognizing subject imagines upon hearing or reading the problem statement. The first scenario is at least plausible in the real world; the second isn’t remotely.
First off, pulling the lever is about as binary as it gets: it’s either in position A or position B and any middle choice is excluded outright. One can perhaps imagine a real-world switch sticking in the middle, causing an electrical short, but that possibility is remote from the minds of all but reliability engineers, who, without cracking open MIL-HDBK-217, know the likelihood of that failure mode to be around one per 10 million operations.
Pushing someone, a very heavy someone, over the railing of a bridge is a complex action, introducing all sorts of uncertainty. Of course the bridge has a railing; you’ve never seen one that didn’t. There’s a good chance the fat man’s center of gravity is lower than the top of the railing, because railings are designed to keep people from toppling over them. That means you can’t merely push him over; you’d have to lift him until his CG clears the top of the railing. But he’s heavy, not particularly passive, and stronger than you are. You can’t just push him into the railing expecting it to break, either. Bridge railings are robust. Experience has told you this your entire life. You know it even if you know nothing of civil engineering or pedestrian bridge safety codes. And if the term center of gravity (CG) is foreign to you, by age six you had grounded intuitions about the concept, along with moment of inertia and fulcrums.
Assume you believe you can somehow overcome the railing obstacle. Trolleys weigh about 100,000 pounds. The problem statement said the trolley is hurtling toward five people. That sounds like 10 miles per hour at minimum. Your intuitive sense of momentum (mass times velocity) and your intuitive sense of what it takes to decelerate the hurtling mass (Newton’s 2nd law, f = ma) simply don’t line up with the devious psychologist’s claim that the heavy person’s death will save five lives. The experimenter’s saying it – even in a thought experiment – doesn’t make it so, or even make it plausible. Your rational subsystem, whether thinking fast or slow, screams out that the chance of success with this plan is tiny. So you’re very likely to needlessly kill your bridge mate, and then watch five victims get squashed all by yourself.
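The intuition can be made concrete with rough numbers. A sketch using the trolley figures from the text; the man’s weight and the stopping distance are my assumptions:

```python
# Back-of-envelope check on the fat-man variant.
# From the text: trolley ~100,000 lb, "hurtling" at ~10 mph.
# My assumptions: the man weighs ~300 lb; his body stops the trolley over ~1 m.
LB_TO_KG = 0.4536
MPH_TO_MPS = 0.447

m_trolley = 100_000 * LB_TO_KG        # ~45,400 kg
v = 10 * MPH_TO_MPS                   # ~4.5 m/s

momentum = m_trolley * v              # kg*m/s
kinetic_energy = 0.5 * m_trolley * v ** 2   # joules

# Average force the body must exert to absorb that energy over ~1 m:
stop_distance = 1.0
avg_force = kinetic_energy / stop_distance  # newtons

print(f"trolley momentum:     {momentum:,.0f} kg*m/s")
print(f"kinetic energy:       {kinetic_energy:,.0f} J")
print(f"force to stop in 1 m: {avg_force:,.0f} N (~{avg_force / 4.448:,.0f} lbf)")
```

Even spread across a human body, a force on the order of a hundred thousand pounds dwarfs anything an obstacle of that size can supply before the trolley simply carries it along.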
The test subjects’ failure to see moral equivalence between the two trolley problems speaks to their rationality, not their cognitive bias. They know an absurd hypothetical when they see one. What looks like humanity’s logical ineptitude to so many behavioral economists appears to the engineers as humanity’s cultivated pragmatism and an intuitive grasp of physics, factor-relevance evaluation, and probability.
There’s book smart, and then there’s street smart, or trolley-tracks smart, as it were.
Posted in Sustainable Energy on August 15, 2019
“Alienation from nature and indifference toward natural processes is the greatest threat leading to destruction of the environment.”
For years this statement appeared at the top of an ecology awareness campaign in western national parks. Despite sounding like Heidegger and Marx, I liked it. I especially liked the fact that it addressed natural processes (how things work) rather than another appeal for empathy to charismatic species.
At the same time – early 1990s – WNYC played a radio spot making a similar point about indifference. Mr. Brooklyn asked Mr. Bronx if he knew what happened after you flushed the toilet. Bronx said this was the stupidest question he’d ever heard. Why would anyone care?
The idea of reducing indifference toward natural processes through education seemed more productive to me than promoting environmental guilt.
Wow did I get that wrong. Advance 25 years and step into a Green Tech summit in Palo Alto. A sold-out crowd of young entrepreneurs and enthusiasts brims with passion about energy and the environment. Indifference is not our problem here. But unlike the followers of Stewart Brand (Whole Earth Catalog, 1968-72), whose concern for ecology led them to dig deep into science, this Palo Alto crowd is pure passion with pitifully little physics. And it’s a big crowd, credentialed in disruptive innovation, sustainability, and social entrepreneurship.
As Brand implies when describing all the harm done by well-intentioned environmentalists, impassioned ignorance does far more damage than indifference does.
At one greentech event in 2015, a young woman in business casual assured me that utility-scale energy storage was 18 to 24 months away. This may have seemed a distant future to a recent graduate. But having followed battery tech a bit, I said no way, offering that no such technology existed or was on the horizon. With the cost-no-object mindset of an idealist unburdened by tax payments, she fired back that we could do it right now if we cared enough. So where was the disconnect between her and me?
I offered my side. I explained that as the fraction of base load provided by intermittent renewables increases, the incremental cost of lithium-ion storage rises exponentially. That is, you need exponentially more storage, unused in summer, to deal with load fluctuations on the cloudiest of winter days as you bring more renewables online. Analyses at the time estimated that a renewables-only California would entail 40 million megawatt-hours of surplus summer generation. Per the CAEC, we were able to store 150 thousand megawatt-hours of energy, and only because we get 15% of our energy from hydroelectric. Those big dams the greens ache to tear down provide 100% of our energy storage capacity and half the renewable energy we brag about. (A few battery arrays have been built since this 2015 conversation.)
Estimates at that time, I told her, put the associated battery-aided renewable production cost in the range of $1,600/MWh, compared to $30/MWh for natural gas, per the EIA. An MIT report later concluded that a US 12-hour intermittency buffer would cost $2.5 trillion. Now that’s a mere $20,000 for each household, but it can’t begin to handle weather conditions like last January’s, when more than half of the US was below freezing for days on end. That 12-hour buffer would take about 10.5 million Tesla Powerpacks (as at Mira Loma, 210 kWh each) totaling 470 billion lithium-ion cells. That’s 27 billion pounds of battery packs. Assuming a 10-year life, the amount of non-recyclable rare-earth material involved is hard to consider green. I told her that could also mean candles, blankets, and no Hulu in January.
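The back-of-envelope above can be reproduced in a few lines. All figures come from the text except the roughly 125 million US households behind the per-household line, which is my assumption:

```python
# Reproducing the 12-hour US intermittency-buffer arithmetic from the text.
packs = 10.5e6                 # Tesla Powerpacks (Mira Loma vintage)
kwh_per_pack = 210
total_kwh = packs * kwh_per_pack        # ~2.2 billion kWh of buffer

cells = 470e9                  # lithium-ion cells, from the text
pounds = 27e9                  # total pack weight, from the text
cost = 2.5e12                  # MIT's buffer-cost estimate
households = 125e6             # my assumption, roughly US household count

print(f"buffer capacity:   {total_kwh / 1e9:.1f} billion kWh")
print(f"cells per pack:    {cells / packs:,.0f}")
print(f"pounds per pack:   {pounds / packs:,.0f}")
print(f"cost per household: ${cost / households:,.0f}")
```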
Her reply: “Have you ever heard of Mark Jacobson?”
Her heart was in the right place. Her head was someplace else. I tried to find it. She believed Jacobson’s message because of his authority. I named some equally credentialed opponents, including Brook, Caldeira, Clack, Davies, Dodge, Gilbraith, Kammen, and Wang. I said I could send her a great big list. She then said, in essence, that she held him to be authoritative because she liked his message. I told her that I believe the Bible because the truthful Bible says it is true. She smiled and slipped off to the fruit tray.
For those who don’t know Jacobson, he’s a Stanford professor and champion of a 100% renewable model. In 2017 he filed a $10M suit against the National Academy of Sciences for publishing a peer-reviewed paper, authored by 21 scientists, challenging his claims. Jacobson sought to censor those threatening his monopoly on the eyes and ears of these green-energy devotees. Echoing my experience at greentech events, Greentech Media wrote in covering Jacobson’s suit, “It’s a common claim from advocates: We know we can create a 100 percent renewable grid, because Stanford Professor Mark Jacobson said we can.” Jacobson later dropped the suit. His poor science shows in his repeated use of quirky claims targeting naive environmentalists. He wrote that 33% of yearly averaged wind power was calculated to be usable at the same reliability as a coal-fired power plant. I have yet to find an engineer able to parse that statement. To eliminate nuclear power as a green contender, Jacobson includes carbon emissions from burning cities caused by nuclear war, which he figures occurs on a 30-year cycle. My critique from before I knew he sues his critics is here.
When I attend those greentech events, often featuring biofuels, composting, local farming, and last-mile distribution of goods, I encourage people to think first about the energy. Literal energy – mechanical, thermal, electrical and gravitational: ergs, foot-pounds, joules, kilowatt-hours and calories. Energy to move things, the energy content of things, and energy conversion efficiency. Then to do the story-problem math they learned in sixth grade. Two examples:
1. Cooking oil, like gasoline, holds about 31,000 calories per gallon. 70% of restaurant food waste is water. Assume the rest is oil and you get roughly 9,000 calories per gallon, or about 1,100 calories per pound. Assume the recycle truck gets 10 miles per gallon and drives 100 miles around town to gather 50 pounds of waste from each of 50 restaurants. With 312 gallons of food waste (2,500 lb ÷ 8 lb/gal ≈ 312 gal), does the truck make ecological sense in the simplest, calorie-accounting sense? It burns 310,000 calories of gas to reclaim 312 × 9,000 = 2.8 million calories of waste. Neglecting processing costs, that’s an 8X net return on the calorie investment. Recycling urban restaurant waste makes a lot of sense.
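Example 1 as sixth-grade story-problem code (figures from the text, including the stated 8 lb/gal density and 30% non-water assumptions):

```python
# Calorie accounting for the cooking-oil recycling truck.
CAL_PER_GAL_GAS = 31_000
truck_mpg, route_miles = 10, 100
gas_burned_cal = (route_miles / truck_mpg) * CAL_PER_GAL_GAS   # 310,000 cal

restaurants, lb_each = 50, 50
waste_lb = restaurants * lb_each           # 2,500 lb collected
waste_gal = waste_lb / 8                   # ~312 gal at 8 lb/gal
reclaimed_cal = waste_gal * 9_000          # ~2.8 million cal (30% of 31,000, rounded)

net_return = (reclaimed_cal - gas_burned_cal) / gas_burned_cal
print(f"burned {gas_burned_cal:,.0f} cal of gas")
print(f"reclaimed {reclaimed_cal:,.0f} cal of waste oil")
print(f"net return: {net_return:.0f}x")
```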
2. Let’s look at the local-farming movement. Local in San Francisco means food grown near Sacramento, 90 miles away. If the farmers market involves 50 vendors, each driving a pickup truck with 250 pounds of goods, that’s 9,000 miles of driving (a 180-mile round trip each) at 20 miles per gallon: 450 gallons of gasoline for 12,500 pounds of food. We can say that the 12,500 pounds of food “contains” 450 gallons of embedded gasoline energy (no need to calculate calories – we can just as well use gallons of gas as an energy unit). So the embedded gallons per pound is 450/12,500 = 0.036 for the farmers-market food. Note that the vendor count drops out of this calculation: use 100 vendors and get the same result.
Safeway says 40% of its produce comes from the same local sources. Their semi truck gets 5 mpg but carries 50,000 pounds of food, and travels 180 miles (one round trip), not 9,000. If carrying only Sacramento goods, Safeway’s round trip would deliver 50,000 pounds using 36 gallons. That’s 0.0007 gallons of gas per pound. Safeway is 51 times (0.036/0.0007) more fuel-efficient at delivering local food than the farmers market is.
That makes local produce seem not so green – in the carbon sense. But what about the 60% of Safeway food that is not local? Let’s fly it in from Mexico on a Boeing 777. Use 2,200 gallons per hour and a 220,000-pound payload flying 1,800 miles at 550 mph. That’s a 3.27-hour flight, burning 7,200 gallons of fuel. That means 7,200/220,000 = 0.033 gallons per pound of food. On this back of the envelope, flying food from southern Mexico is carbon-friendlier than the farmers market.
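All three delivery modes reduce to one ratio: gallons of fuel per pound of food. A sketch with the numbers from the text:

```python
def gal_per_lb(miles, mpg, pounds):
    """Embedded gallons of fuel per pound of food delivered."""
    return (miles / mpg) / pounds

farmers_market = gal_per_lb(miles=50 * 180, mpg=20, pounds=12_500)
safeway_semi   = gal_per_lb(miles=180, mpg=5, pounds=50_000)
# 777: 550 mph / 2,200 gal/hr = 0.25 miles per gallon
b777_mexico    = gal_per_lb(miles=1_800, mpg=550 / 2_200, pounds=220_000)

print(f"farmers market:  {farmers_market:.4f} gal/lb")   # ~0.036
print(f"Safeway semi:    {safeway_semi:.4f} gal/lb")     # ~0.0007
print(f"777 from Mexico: {b777_mexico:.4f} gal/lb")      # ~0.033
print(f"Safeway vs farmers market: {farmers_market / safeway_semi:.0f}x")
```

The unrounded ratio comes out to 50×; the 51× in the text comes from rounding Safeway’s figure to 0.0007 first.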
In any case, my point isn’t the specific outcome but for social entrepreneurs to do the math instead of getting their energy policy from a protesting pawn of a political party or some high priest of eco-dogma.
“I daresay the environmental movement has done more harm with its opposition to genetic engineering than with any other thing we’ve been wrong about. We’ve starved people, hindered science, hurt the natural environment, and denied our own practitioners a crucial tool.” – Stewart Brand, Whole Earth Discipline
Not that names mean much, but how many of them, I wondered, could identify the California Black Oaks or the Desert Willows on the grounds outside?
VCs stress that they’re not in the business of evaluating technology. Few failures of startups are due to bad tech. Leo Polovets at Susa Ventures says technical diligence is a waste of time because few startups have significant technical risk. Success hinges on knowing customers’ needs, efficiently addressing those needs, hiring well, minding customer acquisition, and having a clue about management and governance.
In the dot-com era, I did tech diligence for Internet Capital Group. They invested in everything I said no to. Every one of those startups failed, likely for business management reasons. Had bad management not killed them, their bad tech would have in many cases. Are things different now?
Polovets is surely right in the domain of software. But hardware is making a comeback, even in Silicon Valley. A key difference between diligence on hardware and software startups is that software technology barely relies on the laws of nature. Hardware does. Hardware is dependent on science in a way software isn’t.
Silicon Valley’s love affairs with innovation and design thinking (the former being a retrospective judgement after market success, the latter mostly marketing jargon) leads tech enthusiasts and investors to believe that we can do anything given enough creativity. Creativity can in fact come up with new laws of nature. Isaac Newton and Albert Einstein did it. Their creativity was different in kind from that of the Wright Brothers and Elon Musk. Those innovators don’t change laws of nature; they are very tightly bound by them.
You see the impact of innovation overdose in responses to anything cautioning against overoptimism in technology. Warp drive has to be real, right? It was already imagined back when William Shatner could do somersaults.
When the Solar Impulse aircraft achieved 400 miles non-stop, enthusiasts demanded solar passenger planes. Solar Impulse has the wingspan of an A380 (800 passengers) but weighs less than my car. When the Washington Post made the mildly understated point that solar-powered planes were a long way from carrying passengers, an indignant reader scorned their pessimism: “I can see the WP headline from 1903: ‘Wright Flyer still a long way from carrying passengers’. Nothing like a good dose of negativity.”
Another reader responded, noting that theoretical limits would give a large airliner coated with cells maybe 30 kilowatts of sun power, but it takes about 100 megawatts to get off the runway. Another enthusiast, clearly innocent of physics, said he disagreed with this answer because it addressed current technology and “best case.” Here we see a disconnect between two understandings of best case, one pointing to hard limits imposed by nature, the other to soft limits imposed by manufacturing and limits of current engineering know-how.
What’s a law of nature?
Law of nature doesn’t have a tight definition. But in science it usually means generalities drawn from a very large body of evidence. Laws in this sense must be universal, omnipotent, and absolute – true everywhere for all time, no exceptions. Laws of nature don’t happen to be true; they have to be true (see footnote*). They are true in both main philosophical senses of “true”: correspondence and coherence. To the best of our ability, they correspond with reality from a god’s-eye perspective; and they cohere, in the sense that each gets along with every other law of nature, allowing a coherent picture of how the universe works. The laws are interdependent.
Now we’ve gotten laws wrong in the past, so our current laws may someday be overturned too. But such scientific disruptions are rare indeed – a big one in 1687 (Newton) and another in 1905 (Einstein). Lesser laws rely on – and are consistent with – greater ones. The laws of physics erect barriers to engineering advancement. Betting on new laws of physics – as cold fusion and free-energy investors have done – is a very long shot.
As an example of what flows from laws of nature, most gasoline engines (Otto cycle) have a top theoretical efficiency of about 47%. No innovative engineering prowess can do better. Material and temperature limitations reduce that further. All metals melt at some temperature, and laws of physics tell us we’ll find no new stable elements for building engines – even in distant galaxies. Moore’s law, by the way, is not in any sense a law in the way laws of nature are laws.
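That ceiling traces to the ideal air-standard Otto-cycle relation η = 1 − 1/r^(γ−1), where r is the compression ratio and γ the gas’s heat-capacity ratio. A sketch; the r and γ values below are my illustrative assumptions (γ = 1.4 is ideal air; real combustion gas behaves closer to γ ≈ 1.3, which lands near the ~47% figure):

```python
def otto_efficiency(r, gamma):
    """Ideal air-standard Otto-cycle efficiency: 1 - 1/r**(gamma - 1)."""
    return 1 - r ** (1 - gamma)

# gamma = 1.4 is the ideal-air value; ~1.3 is closer to real combustion gas.
for gamma in (1.4, 1.3):
    for r in (8, 10):
        print(f"gamma={gamma}, r={r}: {otto_efficiency(r, gamma):.1%}")
```

No engineering cleverness moves this number; only a higher compression ratio or a different working gas does, and knock and materials cap those.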
The Betz limit tells us that no windmill will ever convert more than 59.3% of the wind’s kinetic energy into electricity – not here, not on Jupiter, not with curvy carbon-nanotube blades, not coated with dilithium crystals. This limit doesn’t come from measurement; it comes from deduction and the laws of nature. The Shockley-Queisser limit tells us no single-layer photovoltaic cell will ever convert more than 33.7% of the solar energy hitting it into electricity. Gaia be damned, but we’re stuck with physics, and physics trumps design thinking.
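The Betz figure is pure deduction from one-dimensional actuator-disk theory: if the rotor slows the wind by an induction factor a, the power coefficient is Cp(a) = 4a(1 − a)², maximized at a = 1/3, giving 16/27. A quick check:

```python
def cp(a):
    # Power coefficient from one-dimensional actuator-disk theory
    return 4 * a * (1 - a) ** 2

# Grid-search the induction factor; the maximum sits at a = 1/3.
best_a = max((k / 1000 for k in range(1, 1000)), key=cp)
print(f"optimal a = {best_a:.3f}, Cp = {cp(best_a):.4f}")  # a ~ 1/3, Cp ~ 0.5926
print(f"16/27     = {16 / 27:.4f}")
```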
So while funding would grind to a halt if investors dove into the details of pn-junctions in chalcopyrite semiconductors, they probably should be cautious of startups that, as judged by a Physics 101 student, are found to flout any fundamental laws of nature. That is, unless they’re fixing to jump in early, ride the hype cycle to the peak of expectation, and then bail out before the other investors catch on. They’d never do that, right?
Solyndra’s sales figures
In Solyndra’s abundant autopsies we read that those crooks duped the DoE about sales volume and profits. An instant Wall Street darling, Solyndra was named one of the 50 most innovative companies by Technology Review. Later, the Solyndra scandal coverage never mentioned that the idea of cylindrical containers of photovoltaic cells with spaces between them was a dubious means of maximizing incident rays. Yes, some cells in a properly arranged array of tubes would always be perpendicular to the sun (duh), but the surface area of the cells within, say, 30 degrees of perpendicular to the sun is necessarily (not even physics, just geometry) only one sixth of those on the tube (2 × 30 / 360). The fact that the roof-facing part of the tubes catches some reflected light relies on there being space between the tubes, which obviously aren’t catching those photons directly. A two-layer tube grabs a few more stray photons, but… Sure, the DoE should have been more suspicious of Solyndra’s bogus bookkeeping; but there’s another lesson in this $2B Silicon Valley sinkhole. Their tech was bullshit.
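The geometry claim is easy to check. The fraction of a tube’s cells within 30 degrees of perpendicular is just an arc fraction; and (my framing, not the original coverage’s) a tube of diameter d intercepts direct sun over a projected width of only d while carrying πd of cells:

```python
import math

# Arc within +/-30 degrees of the sun-perpendicular line, out of 360 degrees.
window = (2 * 30) / 360
print(f"cells within 30 deg of perpendicular: {window:.3f}")   # 1/6 ~ 0.167

# Direct-flux utilization per unit of cell area, tube vs. flat panel:
# a tube of diameter d intercepts width d but carries pi*d of cell surface.
print(f"tube cell utilization vs flat panel:  {1 / math.pi:.3f}")  # ~0.318
```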
The story at Abound Solar was surprisingly similar, though more focused on bad engineering than bad science. Claims about energy, given a long history of swindlers, always warrant technical diligence. Upfront Ventures recently led a $20M B round for uBeam, maker of an ultrasonic charging system. Its high-frequency sound vibrations travel across the room to a receiver that can run your iPhone or, someday, as one presentation reported, your flat-screen TV, from a distance of four meters. Mark Cuban and Marissa Mayer took the plunge.
Now we can’t totally rule out uBeam’s claims, but simple physics screams out a warning. High frequency sound waves diffuse rapidly in air. And even if they didn’t, a point-source emitter (likely a good model for the uBeam transmitter) obeys the inverse-square law (see Johannes Kepler, 1596). At four meters, the signal is one sixteenth as strong as at one meter. Up close it would fry your brains. Maybe they track the target and focus a beam on it (sounds expensive). But in any case, sound-pressure-level regulations limit transmitter strength. It’s hard to imagine extracting more than a watt or so from across the room. Had Upfront hired a college kid for a few days, they might have spent more wisely and spared uBeam’s CEO the embarrassment of stepping down last summer after missing every target.
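The inverse-square arithmetic, for the skeptical (point-source assumption as in the text):

```python
def relative_intensity(d_meters, d_ref=1.0):
    """Inverse-square falloff for an unfocused point-source emitter."""
    return (d_ref / d_meters) ** 2

print(relative_intensity(4))  # 0.0625, i.e. one sixteenth of the 1 m level
```

To deliver one watt at four meters, the emitter must radiate sixteen times that in the near field, which is exactly the fry-your-brains problem.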
Even b-school criticism of Theranos focuses on the firm’s culture of secrecy, Holmes’ poor management practices, and bad hiring, skirting the fact that every med student knew that a drop of blood doesn’t contain enough of the relevant cells to give accurate results.
Homework: Water don’t flow uphill
Now I’m not saying all VCs, MBAs, and private-equity folk should study much physics. But they should probably know as much physics as I know about convertible notes. They should know that laws of nature exist, and that diligence is due for bold science and technology claims. Start here:
Newton’s 2nd law:
- Roughly speaking, force = mass times acceleration. F = ma.
- Important for cars. More here.
- Practical, though perhaps unintuitive, application: slow down on I-280 when it’s raining.
2nd Law of Thermodynamics:
- The entropy of an isolated system never decreases; no real process is perfectly reversible. More understandable versions came from Lord Kelvin and Rudolf Clausius.
- Kelvin: You can’t get any mechanical effect from anything by cooling it below the temperature of its surroundings.
- Clausius: Without adding energy, heat can never pass from a cold thing to a hot thing.
- Practical application: in an insulated room, leaving the refrigerator door open will raise the room’s temperature.
- American frontier version (Locomotive Engineering Vol XXII, 1899): “Water don’t flow uphill.”
_ __________ _
“If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations – then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation – well, these experimentalists do bungle things sometimes. But if your theory is found to be against the Second Law of Thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.” – Arthur Eddington
*footnote: Critics might point out that the distinction between laws of physics (must be true) and mere facts (happen to be true) of physics seems vague, and that this vagueness robs any real meaning from the concept of laws of physics. Who decides what has to be true instead of what happens to be true? All copper in the universe conducts electricity seems like a law. All trees in my yard are oak does not. How arrogant was Newton to move from observing that f=ma in our little solar system to his proclamation that force equals mass times acceleration in all possible worlds. All laws of science (and all scientific progress) seem to rely on the logical fallacy of affirming the consequent. This wasn’t lost on the ancient anti-sophist Greeks (Plato), the cleverest of the early Christian converts (Saint Jerome) and perceptive postmodernists (Derrida). David Hume’s 1739 A Treatise of Human Nature methodically destroyed the idea that there is any rational basis for the kind of inductive inference on which science is based. But… Hume was no relativist or nihilist. He appears to hold, as Plato did in Theaetetus, that global relativism is self-undermining. In 1951, W.V.O. Quine eloquently exposed the logical flaws of scientific thinking in Two Dogmas of Empiricism, finding real problems with distinctions between truths grounded in meaning and truths grounded in fact. Unpacking that a bit, Quine would say that it is pointless to ask whether f=ma is a law of nature or just a deep empirical observation. He showed that we can combine two statements appearing to be laws together in a way that yielded a statement that had to be merely a fact. Finally, from Thomas Kuhn’s perspective, deciding which generalized observation becomes a law is entirely a social process. Postmodernists and Strong Program adherents then note that this process is governed by local community norms.
Cultural relativism follows, and ultimately decays into pure subjectivism: each of us has facts that are true for us but not for each other. Scientists and engineers have found that relativism and subjectivism aren’t so useful for inventing vaccines and making airplanes fly. Despite the epistemological failings, laws of nature work pretty well, they say.
Don’t get me wrong. J Richard Gott is one of the coolest people alive. Gott does astrophysics at Princeton and makes a good argument that time travel is indeed possible via cosmic strings. He’s likely way smarter than I, and he’s from down home. But I find big holes in his Copernicus Method, for which he first achieved fame.
Gott conceived his Copernicus Method for estimating the lifetime of any phenomenon when he visited the Berlin Wall in 1969. Wondering how long it would stand, Gott figured that, assuming there was nothing special about his visit, a best guess was that he happened upon the wall 50% of the way through its lifetime. Gott saw this as an application of the Copernican principle: nothing is special about our particular place (or time) in the universe. As Gott saw it, the wall would likely come down eight years later (1977), since it had been standing for eight years in 1969. That’s not exactly how Gott did the math, but it’s the gist of it.
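What Gott’s published version actually gives is confidence intervals rather than a point estimate: observed at a uniformly random moment, a phenomenon’s remaining lifetime lies between (1−c)/(1+c) and (1+c)/(1−c) times its age, with confidence c. A sketch of that delta-t arithmetic, using the wall’s 1969 age of eight years from the text:

```python
def gott_interval(age, confidence):
    """Delta-t argument: bounds on remaining lifetime at a confidence level."""
    r = (1 - confidence) / (1 + confidence)   # 1/3 at 50%, 1/39 at 95%
    return age * r, age / r

wall_age = 8  # years old when Gott visited in 1969
for c in (0.5, 0.95):
    lo, hi = gott_interval(wall_age, c)
    print(f"{c:.0%} confidence: wall falls between {1969 + lo:.1f} and {1969 + hi:.0f}")
```

The actual fall, 1989, lands inside the 50% band, which is the kind of luck discussed below.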
I have my doubts about the Copernican principle – in applications from cosmology to social theory – but that’s not my beef with Gott’s judgment of the wall. Had Gott thrown a blindfolded dart at a world map to select his travel destination I’d buy it. But anyone who woke up at the Berlin Wall in 1969 did not arrive there by a random process. The wall was certainly in the top 1000 interesting spots on earth in 1969. Chance alone didn’t lead him there. The wall was still news. Gott should have concluded that he saw the wall in the first half of its life, not at its midpoint.
Finding yourself at the grand opening of Brooklyn pizza shop, it’s downright cruel to predict that it will last one more day. That’s a misapplication of the Copernican principle, unless you ended up there by rolling dice to pick the time you’d parachute in from the space station. More likely you saw Vini’s post on Facebook last night.
Gott’s calculation boils down to Bayes’ theorem applied to a power-law distribution with an uninformative prior expectation – i.e., you have zero relevant knowledge. But from a Bayesian perspective, few situations warrant an uninformative prior. Surely he knew something of the wall and its peer group. Walls erected by totalitarian world powers tend to endure (Great Wall of China, Hadrian’s Wall, the Aurelian Wall), but mean wall age isn’t the key piece of information; the distribution of wall ages is. And though I don’t think he stated it explicitly, Gott clearly judged wall longevity to be scale-invariant. So the math is good, provided he had no knowledge of this particular wall in Berlin.
But he did. He knew its provenance; it was Soviet. Believing the wall would last eight more years was the same as believing the Soviet Union would last eight more years. So without any prior expectation about the Soviet Union, Gott should have judged the wall would come down when the USSR came down. Running that question through the Copernican Method would have yielded the wall falling in the year 2016, not 1977 (i.e., 1969 + 47, the age of the USSR in 1969). But unless Gott was less informed than most, his prior expectation about the Soviet Union wasn’t uninformative either. The regime showed no signs of weakening in 1969 and no one, including George Kennan, Richard Pipes, and Gorbachev’s pals, saw it coming. Given the power-law distribution, some time well after 2016 would have been a proper Bayesian credence.
With any prior knowledge at all, the Copernican principle does not apply. Gott’s prediction was off by only 14 years. He got lucky.
Posted in Engineering on July 28, 2019
San Francisco police are highly tolerant. A few are tolerant in the way giant sloths are tolerant. Most are tolerant because SF ties their hands from all but babysitting the homeless. Excellent at tolerating heroin use on Market Street, they’re also proficient at tolerating vehicular crime, from sailing through red lights (23 fatalities downtown last year) to minor stuff like illegal – oops, undocumented – car mods.
For a progressive burg, SF has a lot of muscle cars. Oddly, many of the car nuts in San Francisco use the term “Frisco,” against local norms.
Back in the ’70s, in my small Ohio town, the losers drove muscle cars to high school. A very few of these cars had amazing acceleration. A variant of the ’65 Pontiac Catalina could do zero to 60 in 4.5 seconds. A Tesla might leave it in the dust, but that was something back then. While the Catalina’s handling was awful, it could admirably smoke the starting line. Unlike the Catalina, most muscle cars of the ’60s and ’70s – including the curvaceous ’75 Corvette – were total crap, even at accelerating. My witless schoolmates lacked any grasp of the simple physics that could explain how and why their cars were crap. I longed to leave those barbarians and move to someplace civilized. I ended up in San Francisco.
Those Ohio simpletons strutted their beaters’ ability to squeal tires from a dead stop. They did this often, in case any of us might forget just how fast their foot could pound the pedal. Wimpy crates couldn’t burn rubber like that. So their cars must be pretty badass, they thought. Their tires would squeal with the tenderest touch of the pedal. Awesome power, right?
Actually, it meant a badly unbalanced vehicle design combined with a gas-pedal-position vs. fuel-delivery curve yielding a nonlinear relationship between pedal position and throttle plate position. This abomination of engineering attracted 17-year-old bubbas cocksure that hot chicks dig the smell of burning rubber. See figure A.
This hypothetical, badly-designed car has a feeble but weighty 100 hp engine and rear-wheel drive. Its rear tires will squeal at the drop of a hat even though the car is gutless. Its center of gravity, where its weight would be if you concentrated all its weight into a point, is too far forward. Too little load on the rear wheels.
Friction, which allows you to accelerate, is proportional to the normal force, i.e. the force of the ground pushing up on the tires. That is, the traction capacity of a tire contacting the road is proportional to the weight on the tire. With a better distribution of weight, the torque resulting from the frictional force at the rear wheels would increase the normal force there, resulting in the tendency to do a wheelie. This car will never do a wheelie. It lacks the torque, even if the meathead driving it floors it before dumping the clutch.
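The load-transfer effect can be sketched numerically. Under acceleration, load on the rear axle grows by roughly W·(a/g)·h/L, where h is the center-of-gravity height and L is the wheelbase. A minimal sketch in Python, with made-up numbers for a hypothetical nose-heavy car (none of these figures describe a real vehicle):

```python
# Rear-axle load under acceleration (all numbers illustrative, not real specs).
# Load transfer = W * (a/g) * h / L, with h = center-of-gravity height,
# L = wheelbase, and acceleration expressed in g's.

def rear_axle_load(weight_lb, static_rear_fraction, accel_g, cg_height_ft, wheelbase_ft):
    """Static rear-axle load plus dynamic transfer during acceleration."""
    static = weight_lb * static_rear_fraction
    transfer = weight_lb * accel_g * cg_height_ft / wheelbase_ft
    return static + transfer

# Nose-heavy hypothetical: 3500 lb with only 40% of weight on the drive wheels.
static = 3500 * 0.4                                   # 1400 lb at rest
loaded = rear_axle_load(3500, 0.4, 0.5, 1.8, 9.0)     # pulling 0.5 g
print(round(static), round(loaded))                   # -> 1400 1750
```

With a coefficient of friction of 1.0, traction capacity is proportional to that rear load, so the forward-biased car wastes much of its engine’s effort spinning lightly loaded tires.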
Figure A is an exaggeration of what was going on in the heaps driven by my classmates.
Above, I noted that the traction capacity of a tire contacting the road is proportional to the weight on the tire. The constant of proportionality is called the coefficient of friction. From this we get F = uN, meaning frictional force equals the coefficient of friction (“u”) times the normal force, which is, roughly speaking, the weight pushing on the tire.
The maximum possible coefficient of friction on smooth surfaces is 1.0. That means a car’s maximum possible acceleration would be 1g: 32.2 feet per second per second. Calculating a 0-60 time based on 1g yields 2.73 seconds. Hot cars can momentarily exceed that acceleration, because tires sink into small depressions in pavement, like a pinion engaging a rack (a round gear on a linear gear).
Here’s how Isaac Newton, who was into hot cars, viewed the 0-60-at-1-g problem:
- Acceleration is change in speed over time: a = Δv/t.
- Acceleration due to gravity (a body falling in a vacuum) is 32.2 feet per second per second.
- 5280 feet in a mile. 60 seconds in a minute.
- 60 mph = one mile per minute = 5280/60 ft/sec = 88 ft/sec.
- Solve a = Δv/t for t: t = Δv/a = (88 ft/sec) / (32.2 ft/sec²) = 2.73 sec. Voila.
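Newton’s back-of-the-envelope above is easy to check in a few lines (assuming a constant 1g, which no real car sustains for the whole run):

```python
# 0-60 mph time at a constant 1 g, following the derivation above.
G = 32.2                           # ft/sec^2, acceleration of gravity
dv = 60 * 5280 / 3600              # 60 mph converted to ft/sec
t = dv / G                         # from a = dv/t, so t = dv/a
print(round(dv, 1), round(t, 2))   # -> 88.0 2.73
```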
The early 428 Shelby Mustangs were amazing, even by today’s acceleration standards, though they were likely still awful to steer. In contrast to the noble Shelbys, some late ’60s – early ’70s Mustangs with inline-six 3-liter engines topped out at just over 100 hp. Ford even sold a V8 version of the Mustang with a pitiful 140 hp engine. Shame, Lee Iacocca. It could do zero to 60 in around 13 seconds. Really.
Those cars had terrible handling because their suspensions were lousy and because of subtle aspects of weight distribution (extra credit: see polar moment of inertia).
If you can’t have power, at least have noise. To make your car or bike really loud, do this simple trick. Insert a stack of washers or some nuts between the muffler and exhaust pipe to leave a big gap, thereby effectively disconnecting the muffler. This worked back in 1974 and, despite civic awareness and modern sensitivity to air and noise pollution, it still works great today. For more hearing damage, custom “exhaust” systems, especially for bikes (cops have deep chopper envy and will look the other way when your hog sets off car alarms), can help you exceed 105 dB SPL. Every girl’s eye will be on you, bud. Hubba hubba. See figure B.
I get a bit of nostalgia when I hear those marvels of engineering from the ’60s and ’70s on Market Street nightly, at Fisherman’s Wharf, and even in my neighborhood. Our police can endure that kind of racket because they’re well-paid to tolerate it. Wish I were similarly compensated. I sometimes think of this at 4 am on Sunday mornings, even with my windows closed.
I visited the old country, Ohio, last year. There were no squealing tires and few painfully loud motors on the street. Maybe the motorheads evolved. Maybe the cops aren’t paid enough to tolerate them. Ohio was nice to visit, but the deplorable intolerance was stifling.
Posted in Probability and Risk on July 28, 2019
Women can’t do math. Hypatia of Alexandria and Émilie du Châtelet notwithstanding, this was asserted for thousands of years by men who controlled access to education. With men in charge it was a self-fulfilling prophecy. Women now represent the majority of college students and about 40% of math degrees. That’s progress.
Last week Marco Rubio caught hell for taking Ilhan Omar’s statement about double standards and unfair terrorism risk assessment out of context. The quoted fragment was: “I would say our country should be more fearful of white men across our country because they are actually causing most of the deaths within this country…”
Most news coverage of the Rubio story (e.g. Vox) notes that Omar did not mean that everyone should be afraid of white men as a group, but that, e.g., “violence by right-wing extremists, who are overwhelmingly white and male, really is a bigger problem in the United States today than jihadism.”
Let’s look at the numbers. Wikipedia, following the curious date-range choice of the US GAO, notes: “of the 85 violent extremist incidents that resulted in death since September 12, 2001, far-right politics violent extremist groups were responsible for 62 (73 percent) while radical Islamist violent extremists were responsible for 23 (27 percent).” Note that those are incident counts, not death counts. The fatality counts were 106 (47%) for white extremists and 119 (53%) for jihadists. Counting fatalities instead of incidents reverses the sense of the numbers.
Pushing the terminus post quem back one day adds the 2,977 9-11 fatalities to the category of deaths from jihadists. That makes 3% of fatalities from right wing extremists and 97% from radical Islamist extremists. Pushing the start date further back to 1/1/1990, again using Wikipedia numbers, would include the Oklahoma City bombing (white extremists, 168 dead), nine deaths from jihadists, and 14 other deaths from white wackos, including two radical Christian antisemites and professor Ted Kaczynski. So the numbers since 1990 show 92% of US terrorism deaths from jihadists and 8% from white extremists.
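The date-range sensitivity is plain arithmetic. Using only the counts quoted above, a few lines reproduce all three splits:

```python
# Fatality shares under three start dates, using the counts cited above.
def shares(white, jihadist):
    total = white + jihadist
    return round(100 * white / total), round(100 * jihadist / total)

# Since 9/12/2001 (the GAO window): white 106, jihadist 119
print(shares(106, 119))                           # -> (47, 53)

# Since 9/11/2001: the 2,977 9-11 deaths join the jihadist column
print(shares(106, 119 + 2977))                    # -> (3, 97)

# Since 1/1/1990: add Oklahoma City (168) plus 14 other deaths to the
# white-extremist column and 9 more to the jihadist column
print(shares(106 + 168 + 14, 119 + 2977 + 9))     # -> (8, 92)
```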
Barring any ridiculous adverse selection of date range (in the 3rd week of April, 1995, 100% of US terrorism deaths involved white extremists), Omar is very, very wrong in her data. The jihadist death toll dwarfs that from white extremists.
But that’s not the most egregious error in her logic – and that of most politicians armed with numbers and a cause. The flagrant abuse of data is what Kahneman and Tversky termed base-rate neglect. Omar, in discussing profiling (sampling a population subset), is arguing about frequencies while citing raw incident counts. The base rate (an informative prior, to Bayesians) is crucial. Even if white extremists caused most terrorism deaths – as she claimed – there would have to be about one hundred times more deaths from white men (terrorists of all flavors are overwhelmingly male) than from Muslims for her profiling argument to hold, because the base rate of being Muslim in the US is about one percent.
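Base-rate neglect is easy to make concrete: what matters for profiling is not raw deaths per group but deaths per member of each group. A toy sketch with round, purely hypothetical counts (only the one-percent base rate comes from the text above):

```python
# Toy illustration of base-rate neglect. All death counts are hypothetical;
# raw totals can favor the larger group even when its per-capita rate
# is far lower.
us_population = 330_000_000

groups = {
    # name: (share of population, attributed deaths - counts invented)
    "white men": (0.30, 200),
    "Muslims":   (0.01, 100),
}

for name, (share, deaths) in groups.items():
    per_million = 1e6 * deaths / (us_population * share)
    print(name, round(per_million, 2))

# Here "white men" account for twice the raw deaths, yet the per-capita
# rate of the one-percent group is ~15x higher. Ignoring the base rate
# flips the comparison - exactly the error in the profiling argument.
```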
The press overwhelmingly worked Rubio over for his vicious smear. 38 of the first 40 Google search results on “Omar Rubio” favored Omar. One favored Rubio and one was an IMDb link to an actor named Omar Rubio. None of the news pieces, including the one friendly to Rubio, mentioned Omar’s bad facts (bad data) or her bad analysis thereof (bad math). Even if she were right about the data – and she is terribly wrong – she’d still be wrong about the statistics.
I disagree with Trump about Omar. She should not go back to Somalia. She should go back to school.
Should economics, sociology or management count as science?
2500 years ago, Plato, in The Sophist, described a battle between the gods and the earth giants. The fight was over the foundations of knowledge. The gods thought knowledge came from innate concepts and deductive reasoning only. Euclid’s geometry was a perfect example – self-evident axioms plus deduced theorems. In this model, no experiments are needed. Plato explained that the earth giants, however, sought knowledge through earthly experience. Plato sided with the gods; and his opponents, the Sophists, sided with the giants. Roughly speaking, this battle corresponds to the modern tension between rationalism (the gods) and empiricism (the giants). For the gods, the articles of knowledge must be timeless, universal and certain. For the giants, knowledge is contingent, experiential, and merely probable.
Plato’s approach led the Greeks – Aristotle, most notably – to hold that rocks fall with speeds proportional to their weights, a belief that persisted for 2000 years until Galileo and his insolent ilk had the gall to test it. Science was born.
Enlightenment era physics aside, Plato and the gods are alive and well. Scientists and social reformers of the Enlightenment tried to secularize knowledge. They held that common folk could overturn beliefs with the right evidence. Empirical evidence, in their view, could trump any theory or authority. Math was good for deduction; but what’s good for math is not good for physics, government, and business management.
Euclidean geometry was still regarded as true – a perfect example of knowledge fit for the gods – throughout the Enlightenment era. But cracks began to emerge in the 1800s through the work of mathematicians like Lobachevsky and Riemann. By considering alternatives to Euclid’s 5th postulate, which never quite seemed to fit with the rest, they invented other valid (internally consistent) geometries, incompatible with Euclid’s. On the surface, Euclid’s geometry seemed correct, being consistent with our experience – the angle sum of a triangle seems to equal 180 degrees. But geometry, being pure and of the gods, should not need validation by experience, nor should it be capable of such validation.
Non-Euclidean Geometry rocked Victorian society and entered the domain of philosophers, just as Special Relativity later did. Hotly debated, its impact on the teaching of geometry became the subject of an entire book by conservative mathematician and logician Charles Dodgson. Before writing that book, Dodgson published a more famous one, Alice in Wonderland.
The mathematical and philosophical content of Alice has been analyzed at length. Alice’s dialogue with Humpty Dumpty is a staple of semantics and semiotics, particularly Humpty’s use of stipulative definition. Humpty first reasons that “unbirthdays” are better than birthdays, there being so many more of them, and then proclaims glory. Picking up that dialogue, Humpty announces,
‘And only one [day of the year] for birthday presents, you know. There’s glory for you!’
‘I don’t know what you mean by “glory”,’ Alice said.
Humpty Dumpty smiled contemptuously. ‘Of course you don’t — till I tell you. I meant “there’s a nice knock-down argument for you!”‘
‘But “glory” doesn’t mean “a nice knock-down argument”,’ Alice objected.
‘When I use a word,’ Humpty Dumpty said, in rather a scornful tone, ‘it means just what I choose it to mean — neither more nor less.’
‘The question is,’ said Alice, ‘whether you can make words mean so many different things.’
‘The question is,’ said Humpty Dumpty, ‘which is to be master — that’s all.’
Humpty is right that one can redefine terms at will, provided a definition is given. But the exchange hints at a deeper notion. While having a private language is possible, it is also futile, if the purpose of language is communication.
Another aspect of this exchange gets little coverage by analysts. Dodgson has Humpty emphasize the concept of argument (knock-down), nudging us in the direction of formal logic. Humpty is surely a stand-in for the proponents of non-Euclidean geometry, against whom Dodgson is strongly (though wrongly – more below) opposed. Dodgson was also versed in Greek philosophy and Platonic idealism. Humpty is firmly aligned with Plato and the gods. Alice sides with Plato’s earth giants, the sophists. Humpty’s question, which is to be master?, points strongly at the battle between the gods and the giants. Was this Dodgson’s main intent?
When Alice first chases the rabbit down the hole, she says that she fell for a long time, and reasons that the hole must be either very deep or that she fell very slowly. Dodgson, schooled in Newtonian mechanics, knew, unlike the ancient Greeks, that all objects fall at the same speed. So the possibility that Alice fell slowly suggests that even the laws of nature are up for grabs. In science, we accept that new evidence might reverse what we think are the laws of nature, yielding a scientific revolution (paradigm shift).
In trying to vindicate “Euclid’s masterpiece,” as Dodgson called it, he is trying to free himself from an unpleasant logical truth: within the realm of math, we have no basis to think the world is Euclidean rather than Lobachevskian. He’s trying to rescue conservative mathematics (Euclidean geometry) by empirical means. Logicians would say Dodgson is confusing a synthetic and a posteriori proposition with one that is analytic and a priori. That is, justification of the 5th postulate can’t rely on human experience, observations, or measurements. Math and reasoning feed science; but science can’t help math at all. Dodgson should know better. In the battle between the gods and the earth giants, experience can only aid the giants, not the gods. As historian of science Steven Goldman put it, “the connection between the products of deductive reasoning and reality is not a logical connection.” If mathematical claims could be validated empirically then they wouldn’t be timeless, universal and certain.
While Dodgson was treating math as a science, some sciences today have the opposite problem. They side with Plato. This may be true even in physics. String theory, by some accounts, has hijacked academic physics, especially its funding. Wolfgang Lerche of CERN called string theory the Stanford propaganda machine working at its fullest. String theory at present isn’t testable. But its explanatory power is huge; and some think physicists pursue it with good reason. It satisfies at least one of the criteria Richard Dawid lists as reasons scientists follow unfalsifiable theories:
- the theory is the only game in town; there are no other viable options
- the theoretical research program has produced successes in the past
- the theory turns out to have even more explanatory power than originally thought
Dawid’s criteria may not apply to the social and dismal sciences. Far from the only game in town, too many theories – as untestable as strings, all plausible but mutually incompatible – vie for our Nobel honors.
Privileging innate knowledge and reason – as Plato did – requires denying natural human skepticism. Believing that intuition alone is axiomatic for some types of knowledge of the world requires suppressing skepticism about theorems built on those axioms. Philosophers call this epistemic foundationalism. A behavioral economist might see it as confirmation bias and denialism.
Physicists accuse social scientists of continually modifying their theories to accommodate falsifying evidence, still clinging to a central belief or interpretation. These recall the Marxists’ fancy footwork to rationalize their revolution not first occurring in a developed country, as was predicted. A harsher criticism is that social sciences design theories from the outset to be explanatory but not testable. In the ’70s, Clifford D. Shearing facetiously wrote in The American Sociologist that “a cursory glance at the development of sociological theory should suggest… that any theorist who seeks sociological fame must insure that his theories are essentially untestable.”
The Antipositivist school is serious about the issue Shearing joked about. Jurgen Habermas argues that sociology cannot explain by appeal to natural law. Deirdre (Donald) McCloskey mocked the empiricist leanings of Milton Friedman as being invalid in principle. Presumably, antipositivists are content that theories only explain, not predict.
In business management, the co-occurrence of the terms theory and practice and the usage of the string “theory and practice” as opposed to “theory and evidence” or “theory and testing” suggests that Plato reigns in management science. “Practice” seems to mean interacting with the world under the assumption that the theory is true.
The theory and practice model is missing the notion of testing those beliefs against the world or, more importantly, seeking cases in the world that conflict with the theory. Further, it has no notion of theory selection; theories do not compete for success.
Can a research agenda with no concept of theory testing, falsification effort, or theory competition and theory choice be scientific? If so, it seems creationism and astrology should be called science. Several courts (e.g. McLean vs. Arkansas) have ruled against creationism on the grounds that its research program fails to reference natural law, is untestable by evidence, and is certain rather than tentative. Creationism isn’t concerned with details. Intelligent Design (old-earth creationism), for example, is far more concerned with showing Darwinism wrong than with establishing an age of the earth. There is no scholarly debate between old-earth and young-earth creationism on specifics.
Critics say the fields of economics and business management are likewise free of scholarly debate. They seem to have similarly thin research agendas. Competition between theories in these fields is lacking; incompatible management theories coexist without challenges. Many theorist/practitioners seem happy to give priority to their model over reality.
Dodgson appears also to have been wise to the problem of a model having priority over the thing it models – believing the model is more real than the world. In Sylvie and Bruno Concluded, he has Mein Herr brag about his country’s map-making progress. They advanced their mapping skill from rendering at 6 inches per mile to 6 yards per mile, and then to 100 yards per mile. Ultimately, they built a map with scale 1:1. The farmers protested its use, saying it would cover the country and shut out the light. Finally, forgetting what models what, Mein Herr explains, “so we now use the country itself, as its own map, and I assure you it does nearly as well.”
Humpty Dumpty had bold theories that he furiously proselytized. Happy to construct his own logical framework and dwell therein, free from empirical testing, his research agenda was as thin as his skin. Perhaps a Nobel Prize and a high post in a management consultancy are in order. Empiricism be damned, there’s glory for you.
There appears to be a sort of war of Giants and Gods going on amongst them; they are fighting with one another about the nature of essence…
Some of them are dragging down all things from heaven and from the unseen to earth, and they literally grasp in their hands rocks and oaks; of these they lay hold, and obstinately maintain, that the things only which can be touched or handled have being or essence…
And that is the reason why their opponents cautiously defend themselves from above, out of an unseen world, mightily contending that true essence consists of certain intelligible and incorporeal ideas… – Plato, Sophist
An untestable theory cannot be improved upon by experience. – David Deutsch
An economist is an expert who will know tomorrow why the things he predicted yesterday didn’t happen. – Earl Wilson
Posted in Management Science on February 19, 2018
If management thinker Frederick Winslow Taylor (died 1915) were alive today he would certainly resent the straw man we have stood in his place. Taylor tried to inject science into the discipline of management. Innocent of much of the dehumanization of workers pinned on him, Taylor still failed in several big ways, even by the standards of his own time. For example, he failed at science.
What Taylor called science was mostly mere measurement – no explanatory or predictive theories. And he certainly didn’t welcome criticism or court refutation. Not only did he turn workers into machines, he turned managers into machines that did little more than take measurements. And as Paul Zak notes in Trust Factor, Taylor failed to recognize that organizations are people embedded in a culture.
Taylor is long dead, but Taylorism is alive and well. Before I left Goodyear Aerospace in the late ’80s, I recall the head of Human Resources at a State of the Company address reporting trends in terms of “personnel units.” Did these units include androids and work animals, I wondered.
Heavy-handed management can turn any of Douglas McGregor’s Theory Y workers (internally motivated) into Theory X workers (lazy, needing to be prodded with extrinsic rewards) using tried-and-true industrial-era management methodologies. That is, one can turn TPS, the Toyota Production System, originally aimed at developing people, into just another demoralizing bureaucratic procedure wearing lipstick.
In Silicon Valley, software creation is modeled as a manufacturing process. Scrum team members often have no authority for schedule, backlog, communications or anything else; and teams “do agile” with none of the self-direction, direct communications, or other principles laid out in the agile manifesto. Yet sprint velocity is computed to three decimal places by steady Taylorist hands. Across the country, micromanagement and Taylorism are two sides of the same coin, committed to eliminating employees’ control over their own futures and any sense of ownership in their work product. As Daniel Pink says in Drive, we are meant to be autonomous individuals, not individual automatons. This is particularly true for developers, who are inherently self-directed and intrinsically motivated. Scrum is allegedly based on Theory Y, but like Matrix Management a generation earlier, too many cases of Scrum are Theory X at core with a veneer of Theory Y.
Management is utterly broken, especially at the lowest levels. It is shaped to fill two forgotten needs – the deskilling of labor, and communication within fragmented networks.
Henry Ford is quoted as saying, “Why is it every time I ask for a pair of hands, they come with a brain attached?” Likely a misattribution derived from Wedgwood (below), the quote reflects generations of self-destructive management sentiment. The intentional de-skilling of the workforce accompanied industrialization in 18th century England. Division of labor yielded efficient operations on a large scale; and it reduced the risk of unwanted knowledge transfer.
When pottery maker Josiah Wedgwood built his factory, he not only provided for segmentation of work by tool and process type. He also built separate entries to each factory segment, with walls to restrict communications between workers having different skills and knowledge. Wedgwood didn’t think his workers were brain-dead hands; but he would have preferred that they were.
He worried that he might be empowering potential competitors. He was concerned that workers possessed drive and an innovative spirit, not that they lacked these qualities. Wedgwood pioneered intensive division of labor, isolating mixing, firing, painting and glazing. He ditched the apprentice-journeyman-master system for fear of spawning a rival, as actually became the case with employee John Voyez. Wedgwood wanted hands – skilled hands – without brains. “We have stepped beyond the other manufactur[er]s and we must be content to train up hands to suit our purpose” (Wedgwood to Bentley, Sep 7, 1769).
When textile magnate Francis Lowell built factories including dormitories, chaperones, and access to culture and education, he was trying to compensate for the drudgery of long hours of repetitive work and low wages. When Lowell cut wages the young female workers went on strike, published magazines critical of Lowell (“… just as though we were so many living machines” – Ellen Collins, Lowell Offering, 1845) and petitioned Massachusetts for legislation to limit work hours. Lowell wanted hands but got brains, drive, and ingenuity.
To respond to market dynamics and fluctuations in demand for product and in supply of raw materials, a business must have efficient and reliable communication channels. Commercial telephone networks only began to emerge in the late 1800s. Long distance calling was a luxury well into the 20th century. When the Swift Meat Packing Company pioneered the vertically integrated production system around 1915, G.F. Swift faced the then-unique challenge of needing to coordinate sales, supply chain, marketing, and operations people from coast to coast. He set up central administration and a hierarchical, military-style organizational structure for the same reason Julius Caesar’s army used that structure – to quickly move timely knowledge and instructions up, down, and laterally.
So our management hierarchies address a long-extinct communication need and our command/control management methods reflect an industrial age wish for mindless carrot-stick employees – a model the industrialists themselves knew to be inaccurate. But we’ve made this wish come true; treat people badly long enough and they’ll conform to your Theory X expectations. Business schools tout best-practice management theories that have never been subjected to testing or disconfirmation. In their view, it is theory, and therefore it’s science.
Much of modern management theory pretends that today’s knowledge workers are “so many living machines,” human resources, human capital, assets, and personnel units.
Unlike in the industrial era, modern business has no reason to de-skill its labor, blue collar or white. Yet in many ways McKinsey and other management consultancies seem dedicated to propping up and fine-tuning Theory X, as evidenced by the priority given to structure in the 7S, Weisbord, and Galbraith organizational models, for example.
This is an agency problem with a trillion dollar price tag. When asked which they would prefer, a company of self-motivated, self-organizing, creative problem solvers or a flock of compliant drones, most CEOs would choose the former. Yet the systems we cultivate yield the latter. We’re managing 21st century organizations with 19th century tools.
For almost all companies, a high-performing workforce is the most important source of competitive advantage. Most studies of employee performance, particularly of white-collar knowledge workers, find performance to hinge on engagement and trust (employees’ trust in their managers and the firm). Engagement and trust are closely tied to intrinsic motivation, autonomy, and sense of purpose. That is, performance is maximized when workers are able to tap into their skills, knowledge, experience, creativity, discipline, passion, agility and internal motivation. Studies by Deloitte, Towers Watson, Gallup, Aon Hewitt, John P. Kotter, and Beer and Eisenstat over the past 25 years reach the same conclusions.
All this means Taylorism and embedding Theory X in organizational structure and management methodologies simply shackle the main source of high performance in most firms. As Pink says, command and control lead to compliance; autonomy leads to engagement. Peter Drucker fought for this point in the 1950s; America didn’t want to hear it. Frederick Taylor’s been dead for 100 years. Let’s let him rest in peace.
What actually stood between the carrot and the stick was, of course, a jackass. – Alfie Kohn, Punished by Rewards
Never tell people how to do things. Tell them what to do and they will surprise you with their ingenuity. – General George Patton
Control leads to compliance; autonomy leads to engagement. – Daniel H. Pink, Drive
The knowledge obtained from accurate time study, for example, is a powerful implement, and can be used, in one case to promote harmony between workmen and the management, by gradually educating, training, and leading the workmen into new and better methods of doing the work, or in the other case, it may be used more or less as a club to drive the workmen into doing a larger day’s work for approximately the same pay that they received in the past. – Frederick Taylor, The Principles of Scientific Management, 1913
That’s my real motivation – not to be hassled. That and the fear of losing my job, but y’know, Bob, that will only make someone work just hard enough not to get fired. – Peter Gibbons, Office Space, 1999
Bill Storage is a scholar in the history of science and technology who in his corporate days survived encounters with strategic management initiatives including Quality Circles, Natural Work Groups, McKinsey consultation, CPIP, QFD, Leadership Councils, Kaizen, Process Based Management, and TQMS.
Posted in Risk Management on January 2, 2018
Positive risk is an ill-conceived concept in risk management that makes a mess of things. It’s sometimes understood to be the benefit or reward, imagined before taking some action, for which the risky action was taken, and other times understood to mean a non-zero chance of an unexpected beneficial consequence of taking a chance. Many practitioners mix the two meanings without seeming to grasp the difference. For example, in Fundamentals of Enterprise Risk Management John J Hampton defends the idea of positive risk: “A lost opportunity is just as much a financial loss as is damage to people and property.” Hampton then relates the story of US Airways flight 1549, which made a successful emergency water landing on the Hudson River in 2009. Noting the success of the care team in accommodating passengers, Hampton describes the upside to this risk: “US Airways received millions of dollars of free publicity and its reputation soared.” Putting aside the perversity of viewing damage containment as an upside of risk, any benefit to US Airways from the happy outcome of successfully ditching a plane in a river seems poor grounds for intentionally increasing the likelihood of repeating the incident because of “positive risk.”
While it’s been around for a century, the concept of positive risk has become popular only in the last few decades. Its popularity likely stems from enterprise risk management (ERM) frameworks that rely on Frank Knight’s (“Risk, Uncertainty & Profit,” 1921) idiosyncratic definition of risk. Knight equated risk with what he called “measurable uncertainty” – what most of us call probability – which he differentiated from “unmeasurable uncertainty,” which is what most of us call ignorance (not in the pejorative sense).
“To preserve the distinction which has been drawn in the last chapter between the measurable uncertainty and an unmeasurable one we may use the term “risk” to designate the former and the term “uncertainty” for the latter.”
Many ERM frameworks rely on Knight’s terminology, despite it being at odds with the risk language of insurance, science, medicine, and engineering – and everywhere else throughout modern history. Knight’s usage of terms conflicted with that of his more mathematically accomplished contemporaries, including Ramsey, Kolmogorov, von Mises, and de Finetti. But for whatever reason, ERM frameworks embrace it. Under that conception of risk, one is forced to allow that positive risk exists to provide for positive (desirable) and negative (undesirable) future outcomes of present uncertainty. To avoid confusion: the word “positive” in positive risk in ERM circles means desirable and beneficial, not merely real or incontestable (as in positive proof).
The concepts that positive risk jumbles and confounds are handled in other risk-analysis domains with due clarity. Other domains acknowledge that risk is taken, when it is taken rather than being transferred or avoided, in order to gain some reward; i.e., a risk-reward calculus exists. Since no one would take risk unless some potential for reward existed (even if merely the reward of a thrill), the concept of positive risk is held as incoherent in risk-centric fields like aerospace and nuclear engineering. Positive risk confuses cause with effect, purpose with consequence, and uncertainty with opportunity; and it makes a mess of communications with serious professionals in other fields.
As evidence that the concept of positive risk is popular only within ERM and related project-management risk tools, note that the top 25 two-word strings starting with “risk” in Google’s data (e.g., aversion, mitigation, reduction, tolerance, premium, alert, exposure) all imply unwanted outcomes or expenses. Further, none of the top 10,000 collocates ending with “risk” include “positive” or similar words.
While the PMI, ISO 31000, and similar frameworks promote the idea of positive risk, most of the language within their publications does not accommodate risk being desirable. That is, if risk could be positive, the frameworks would not talk mostly of risk mitigation, risk tolerance, risk avoidance, and risk reduction – yet they do. The conventional definition of risk, appearing in dictionaries for the 200 years prior to the birth of ERM and used throughout science and engineering, holds that risk is a combination of the likelihood of an unwanted occurrence and its severity. Nothing in that common and historic definition disallows that taking risks can have benefits or positive results – again, the reason we take risk is to get rewards. But that isn’t positive risk.
Dropping the concept of positive risk would prevent a lot of confusion, inconsistencies, and muddled thinking. It would also serve to demystify risk models built on a pretense of rigor and reeking of obscurantism, inconsistency, and deliberate vagueness masquerading as esoteric knowledge.
The few simple concepts mixed up in the idea of positive risk are easily extracted. Any particular risk is the chance of a specific unwanted outcome considered in combination with the undesirability (i.e., cost or severity) of that outcome. Chance means probability or a measure of uncertainty, whether computable or not; and rational agents take risks to get rewards. The concepts are simple, clear, and useful. They’ve served to reduce the rate of fatal crashes by many orders of magnitude over the era of passenger airline flight. ERM’s track record is less impressive. When I confront chieftains of ERM with this puzzle, they invariably respond, with confidence of questionable provenance, that what works in aviation can’t work in ERM.
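The conventional calculus described above – chance of an unwanted outcome combined with its severity, weighed against a separate reward – can be sketched in a few lines of Python. All figures here are invented for illustration, not drawn from any real risk register:

```python
# Conventional risk calculus: risk combines the probability of an
# unwanted event with the severity (cost) of that event.
# The numbers below are hypothetical, chosen only for illustration.

def expected_loss(probability: float, severity: float) -> float:
    """Risk as expected loss: chance of the unwanted outcome
    times its cost. Both inputs describe the *unwanted* outcome."""
    return probability * severity

# A risk is taken because a separate reward is expected, not because
# the risk itself is "positive": compare the reward to the expected loss.
reward = 1_000_000                       # payoff if the venture succeeds
risk = expected_loss(0.02, 5_000_000)    # 2% chance of a $5M loss
print(risk)                              # 100000.0
print(reward > risk)                     # True: the risk is worth taking
```

Note that the reward lives in a separate variable: it motivates taking the risk, but it is never a property of the risk itself – which is the whole point.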
ERM insiders maintain that risk-management disasters like AIG, Bear Stearns, Lehman Brothers, UBS, etc. stemmed from improper use of risk frameworks. The belief that ERM is a thoroughbred that’s had a recent string of bad jockeys is the stupidest possible interpretation of an endless stream of ERM failures, yet one that the authors of ISO 31000 and similar risk frameworks continue to deploy with straight faces. Those authors, who penned the bollixed “effect of uncertainty on objectives” definition of risk (ISO 31000, 2009), threw a huge bone to big consultancies positioned to peddle such poppycock to unwary clients eager to curb operational risk.
The absurdity of this broader ecosystem has been covered by many fine writers, apparently to no avail. Mlodinow’s The Drunkard’s Walk, Rosenzweig’s The Halo Effect, and Taleb’s Fooled by Randomness are excellent sources. Douglas Hubbard spells out the madness of ERM’s shallow and quirky concepts of probability and positive risk in wonderful detail in both his The Failure of Risk Management and How to Measure Anything in Cybersecurity Risk. Hubbard points out the silliness of positive risk by noting that few people would take a risk if they could get the associated reward without exposure to the risk.
My greatest fear in this realm is that the consultants peddling this nonsense will infect aerospace, aviation and nuclear power as they have done in the pharmaceutical world, much of which now believes that an FMEA is risk management and that Functional Hazard Analysis is a form you complete at the beginning of a project.
The notion of positive risk is certainly not the only flaw in ERM models, but chucking this half-witted concept would be a good start.
You might not think of McKinsey as being in the behavioral science business; but McKinsey thinks of themselves that way. They claim success in solving public sector problems, improving customer relationships, and kick-starting stalled negotiations through their mastery of neuro- and behavioral science. McKinsey’s Jennifer May et al. say their methodology is “built on an extensive review of neuroscience and behavioral literature from the past decade and is designed to distill the scientific insights most relevant for governments, not-for-profits, and business leaders.”
McKinsey is also active in the Change Management/Leadership Management realm, which usually involves organizational, occupational, and industrial psychology based on behavioral science. Like most science, all this work presumably involves a good deal of iterating over hypothesis formation and evidence collection, with hypotheses continually revised in light of interpretations of evidence made possible by sound use of statistics.
Given that, and McKinsey’s phenomenal success at securing consulting gigs with the world’s biggest firms, you’d think McKinsey would display spotless epistemic values. A bit has been written about McKinsey’s ability to walk proud despite questionable ethics. In his 2013 book The Firm, Duff McDonald relates McKinsey’s role in creating Enron and sanctioning its accounting practices, its 2008 endorsement of banks funding their balance sheets with debt, and its promotion of securitizing sub-prime mortgages.
Epistemic and Scientific Values
I’m not talking about those kinds of values. I mean epistemic and scientific values. These are focused on how we acquire knowledge and what counts as data, fact, and information. They are concerned with accuracy, clarity, falsifiability, reliability, testability, and justification – all the things that separate science from pseudoscience.
McKinsey boldly employs the Myers-Briggs Type Indicator both internally and externally. They do this despite decades of studies by prominent universities showing MBTI to be essentially worthless from the perspective of survey methodology and statistical analysis. The studies point out that there is no evidence for the bimodal score distributions inherent in MBTI’s type theory. They note that the standard errors of measurement for MBTI’s dimensions are unacceptably large, and that its test/re-test reliability is poor: even at re-test intervals of five weeks, over half of subjects are reclassified. Analysis of MBTI data shows that its JP and SN scales strongly correlate with each other, which is undesirable, while its EI scale correlates with non-MBTI behavioral near-opposites. These findings impugn the basic structure of the Myers-Briggs model. (The Big Five model does somewhat better in this realm.)
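A toy simulation shows why dichotomizing a continuous, roughly normal trait produces the retest reclassification those studies report. The distribution and noise parameters below are invented for illustration; they are not MBTI’s actual scoring model:

```python
import random

random.seed(1)

def classify(score: float) -> str:
    # Dichotomize a continuous score at a midpoint cut, MBTI-style.
    return "E" if score >= 0 else "I"

# Assume the underlying trait is unimodal (roughly normal), not the
# bimodal distribution a type theory implies, and that each test
# administration adds independent measurement noise.
n = 100_000
trait_sd, noise_sd = 1.0, 0.7   # hypothetical illustrative values
flips = 0
for _ in range(n):
    trait = random.gauss(0, trait_sd)
    test1 = classify(trait + random.gauss(0, noise_sd))
    test2 = classify(trait + random.gauss(0, noise_sd))
    if test1 != test2:
        flips += 1

# Because most people sit near the cut-point, a substantial share of
# retests change letter even though the underlying trait never moved.
print(f"reclassified on retest: {flips / n:.1%}")
```

Under these assumptions a sizable fraction of simulated subjects flip letter on a single dimension; across four dichotomized dimensions, reclassification of over half the subjects is unsurprising.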
Five decades of studies show Myers-Briggs to be junk due to low evidential support. Did McKinsey mis-file those reports?
McKinsey’s Brussels director, Olivier Sibony, once expressed optimism about a nascent McKinsey collective decision framework, saying that while preliminary results were good, it still fell short of “a standard psychometric tool such as Myers–Briggs.” Who finds Myers-Briggs to be such a standard tool? Not psychologists or statisticians. Shouldn’t attachment to a psychological test rejected by psychologists, statisticians, and experiment designers offset – if not negate – retrospective judgments by consultancies like McKinsey (Bain is in there too) that MBTI worked for them?
Epistemic values guide us to ask questions like:
- What has been the model’s track record at predicting the outcome of future events?
- How would you know if it were working for you?
- What would count as evidence that it was not working?
On the first question, McKinsey may agree with Jeffrey Hayes (who says he’s an ENTP), CEO of CPP, owner of the Myers-Briggs® product, who dismisses criticism of MBTI by the many psychologists (thousands, writes Joseph Stromberg) who’ve deemed it useless. Hayes says, “It’s the world’s most popular personality assessment largely because people find it useful and empowering […] It is not, and was never intended to be predictive…”
Does Hayes’ explanation of MBTI’s popularity (people find it useful) defend its efficacy and value in business? It’s still less popular than horoscopes, which people find useful, so should McKinsey switch to the higher standards of astrology to characterize its employees and clients?
Granting Hayes, for sake of argument, that popular usage might count toward evidence of MBTI’s value (and likewise for astrology), what of his statement that MBTI never was intended to be predictive? Consider the plausibility of a model that is explanatory – perhaps merely descriptive – but not predictive. What role can such a model have in science?
Explanatory but not Predictive?
This question was pursued heavily by epistemologist Karl Popper (who also held a PhD in Psychology) in the mid 20th century. Most of us are at least vaguely familiar with his role in establishing scientific values. He is most famous for popularizing the notion of falsifiability. For Popper, a claim can’t be scientific if nothing can ever count as evidence against it. Popper is particularly relevant to the McKinsey/MBTI issue because he took great interest in the methods of psychology.
In his youth Popper followed Freud and Adler’s psychological theories, and Einstein’s physics. Popper began to see a great contrast between Einstein’s science and that of the psychologists. Einstein made bold predictions for which experiments (e.g., Eddington’s) could be designed to show the prediction wrong if the theory were wrong. In contrast, Freud and Adler were in the business of explaining things already observed. Contemporaries of Popper, Carl Hempel in particular, also noted that explanation and prediction should be two sides of the same coin; i.e., anything that can explain a phenomenon should also be able to predict it. This isn’t completely uncontroversial in science; but all agree prediction and explanation are closely related.
Popper observed that Freudians tended to find confirming evidence everywhere. Popper wrote:
Neither Freud nor Adler excludes any particular person’s acting in any particular way, whatever the outward circumstances. Whether a man sacrificed his life to rescue a drowning child (a case of sublimation) or whether he murdered the child by drowning him (a case of repression) could not possibly be predicted or excluded by Freud’s theory; the theory was compatible with everything that could happen. (emphasis in original – Replies to My Critics, 1974).
For Popper, Adler’s psychoanalytic theory was irrefutable, not because it was true, but because everything counted as evidence for it. On these grounds Popper thought pursuit of disconfirming evidence to be the primary goal of experimentation, not confirming evidence. Most hard science follows Popper on this value. A theory’s explanatory success is very little evidence of its worth. And combining Hempel with Popper yields the epistemic principle that even theories with predictive success have limited worth, unless those predictions are bold and can in principle be later found wrong. Horoscopes make countless correct predictions – like that we’ll encounter an old friend or narrowly escape an accident sometime in the indefinite future.
Popper brings to mind experiences where I challenged McKinsey consultants on reconciling observed behaviors and self-reported employee preferences with predictions – oh wait, explanations – given by Myers-Briggs. The invocation of a sudden strengthening of otherwise mild J (Judging) in light of certain situational factors recalls Popper’s accusing Adler of being able to explain both aggression and submission as consequences of childhood repression. What has priority – the personality theory or the observed behavior? Behavior fitting the model confirms it; and opposite behavior is deemed acting out of character. Sleight of hand saves the theory from evidence.
What’s the Attraction?
Many writers see Management Science as more drawn to theory and less to evidence (or counter-evidence) than is the case with the hard sciences – say, more Aristotelian and less Newtonian, more philosophical rationalism and less scientific empiricism. Allowing this possibility, let’s try to imagine what elements of Myers-Briggs theory McKinsey leaders find so compelling. The four dimensions of MBTI were, for the record, not based on evidence but on the speculation of Carl Jung. Nothing is wrong with theories based on a wild hunch, if they are borne out by evidence and they withstand falsification attempts. Since this isn’t the case with Myers-Briggs, as shown by the testing mentioned above, there must be something in it that attracts consultants.
I’ve struggled with this. The most charitable reading I can make of McKinsey’s use of MBTI is that they want a quick predictor (despite Hayes’ cagey caution against it) of a person’s behavior in collaborative exercises or collective-decision scenarios. They must therefore believe all of the following, since removing any of these from their web of belief renders their practice (re Myers-Briggs) arbitrary or ill-motivated:
- that MBTI is a reliable indicator of character and personality type
- that personality is immutable and not plastic
- that behavior in teams is mostly dependent on personality, not on training or education, not on group mores, and not on corporate rules and behavioral guides
Now that’s a dark assessment of humanity. And it conflicts with the last decade’s neuro- and behavioral science that McKinsey claims to have incorporated in its offerings. That science suggests our brains, our minds, and our behaviors are mutable, like our bodies. Few today doubt that personality is in some sense real, but the last few decades’ work suggests that it’s not made of concrete (for insiders, read this as Mischel having regained some ground lost to Kenrick and Funder). It suggests that who we are is somewhat situational. For thousands of years we relied on personality models that explained behaviors as consequences of personalities, which were in turn only discovered through observations of behaviors. For example, we invented types (like the 16 MBTI types) based on behaviors and preferences thought to be perfectly static.
Evidence against static trait theory appears as secondary details in recent neuro- and behavioral science work. Two come to mind from the last week – Carstensen and DeLiema’s work at Stanford on the fading of positivity bias with age, and research at the Planck Institute for Human Cognitive and Brain Sciences showing the interaction of social affect, cognition and empathy.
Much attention has been given to neuroplasticity in recent years. Sifting through the associated neuro-hype, we do find some clues. Meta-studies on efforts to pair personality traits with genetic markers have come up empty. Neuroscience suggests that the ancient distinction between states and traits is far more complex and fluid than Aristotle, Jung and Adler theorized them to be – without the benefit of scientific investigation, evidence, and sound data analysis. Even if the MBTI categories could map onto reality, they can’t do the work asked of them. McKinsey’s enduring reliance on MBTI has an air of folk psychology and is at odds with its claims of embracing science. This cannot be – to use a McKinsey phrase – directionally correct.
If personality overwhelmingly governs behavior, as McKinsey’s use of MBTI would suggest, then Change Management is futile – immutable personalities make change impossible. If personality does not own behavior, why base your customer and employee interactions on it? Why would anyone buy Change Management advice from a group that doesn’t believe in change?