This is not news about alternative energy. It is alternative news about energy. I.e., it is energy news of the sort that may not fit the agendas of MSNBC, Fox News, and CNN, so it is likely to escape common knowledge.
Duke is asking North Carolina regulators to ease air-quality emission limits for some of Duke’s combustion turbine facilities. The utility attributes the increased air pollution to the increased penetration of solar power. North Carolina ranks second in the nation, behind only California, in the amount of installed solar plants. Duke’s problem shows what happens when basic science collides with operational reality. It turns out that when zero-emission nuclear plants are dialed back to make room for solar, greenhouse-gas-emitting plants must be employed to cover the gap while nuclear plants ramp back up when the sun goes down.
China intends to cooperate with Russia to develop nuclear power and wind power projects in the Arctic region, deepening energy cooperation between the two countries. Ou Xiaoming, chief representative of China State Grid Corporation, indicated that in addition to the cooperative agreement to build nuclear reactors in China, the two countries will also develop Arctic wind energy resources.
The offshore wind industry has a problem with how it forecasts the output of projects, industry leader Ørsted has warned. Denmark’s Ørsted issued a statement alongside its Q3 results explaining that it was downgrading its anticipated internal rate of return for several projects. The underlying issue is an underestimation of wake and blockage effects.
After years of backing away from nuclear power, France suddenly wants to build six huge reactors. The third-generation design produces enough electricity to supply 1.5 million people, and automatically shuts down and cools in the event of an accident.
On October 16, CNNC announced that the Hualong No. 1 reactor had started construction in Zhangzhou, Fujian Province. The project plans six third-generation nuclear power units of million-kilowatt class; the two units in Phase I will feature Hualong No. 1 technology. Zhangzhou Nuclear Power is a large-scale clean energy base planned by the China National Nuclear Corporation in Fujian Province, aiming to explore a new paradigm for nuclear power development.
If a Gen IV reactor gets too hot, it automatically cools on its own. This all happens because of gravity: no pumps, external power, or human intervention is required. Existing nuclear waste becomes impotent through the Gen IV process. Gen IV reactors can also consume traditional fuel, and no weapons-grade material results as a byproduct.
Nakamura, having worked at MIT, Georgia Institute of Technology, NASA, Jet Propulsion Laboratory, California Institute of Technology and Duke University, reports that global mean temperatures before 1980 are based on untrustworthy data, that today’s “global warming science” is built on the work of a few climate modelers who claim to have demonstrated that human-derived CO2 emissions are the cause of recently rising temperatures “and have then simply projected that warming forward.” Every climate researcher thereafter has taken the results of these original models as a given, and we’re even at the stage now where merely testing their validity is regarded as heresy, says Nakamura.
The current annual British subsidy will be about £9 billion, and the grand total for the years 2017 to 2024 will come to nearly £70 billion. Details of the environmental levies and subsidies in the UK.
Fossil-fuel divestment is a waste of time, according to Gates. While it may come as a shock to climate activists who claim that refusing to invest in oil and coal will help the planet, Gates observed, “divestment, to date, probably has reduced about zero tonnes of emissions… It’s not like you’ve capital-starved [the] people making steel and gasoline.”
Climate change denier, climate denial and similar terms peaked in usage, according to Google Trends data, at the last presidential election. Usage today is well below those levels but, based on trends in the last week, is heading for a new high. The obvious meaning of climate change denial would seem to me to be the assertion that either the climate is not changing or that people are not responsible for climate change. But that is clearly not how the term is used.
Patrick Moore, a once influential Greenpeace member, is often called a denier by climate activists. Niall Ferguson says he doesn’t deny anthropogenic climate change, but is attacked as a denier. After a Long Now Foundation talk by Saul Griffith, I heard Saul being accused of being a denier. Even centenarian James Lovelock, the originator of Gaia theory who now believes his former position was alarmist (“I’ve grown up a bit since then”), is called a denier in California green energy events, despite his very explicit denial of being a denier.
Trying to look logically at the spectrum of propositions one might affirm or deny, I come up with the following possible claims. You can no doubt fine-tune these or make them more granular.
- The earth’s climate is changing (typically, average temperature is increasing).
- The earth’s average temperature has increased more rapidly since the industrial revolution.
- Some increase in warming rate is caused by human activity.
- The increase in warming rate is overwhelmingly due to humans (as opposed to, e.g., solar activity and orbital factors).
- Anthropogenic warming poses an imminent threat to human life on earth.
- The status quo (greenhouse gas production) will result in human extinction.
- The status quo poses significant threat (even existential threat) and the proposed renewables policy will mitigate it.
- Nuclear energy is not an acceptable means of reducing greenhouse gas production.
No one with a command of high school math and English could deny claim 1. Nearly everything is changing at some level. We can argue about what constitutes significant change. That’s a matter of definition, of meaning, and of values.
Claim 2 is arguable. It depends on having a good bit of data. We can argue about the sufficiency, accuracy, and interpretation of that noisy data.
Claim 3 relies much more on theory (to establish causation) than on meaning/definitions and facts/measurements, as is the case with claims 1 and 2. Claim 4 is a strong version of claim 3, requiring much more scientific analysis and theorizing.
While informed by claims 1-4, claims 5 and 6 (imminent threat, certain doom) are mostly outside the strict realm of science. They differ on the severity of the threat, and they rely on risk modeling, engineering feasibility analyses, and economics. For example, could we afford to pay for the mitigations that could reverse the effects of continued greenhouse gas release, and is geoengineering feasible? Claim 6 is held by Greta Thunberg (“we are in the beginning of a mass extinction”). Al Gore seems somewhere between 5 and 6.
Claim 7 (renewables can cure climate change) is the belief held by followers of the Green New Deal.
While unrelated to the factual (whether true or false) components of claims 1-4 and the normative components of claims 5-7, claim 8 (fission not an option) seems to be closely aligned with claim 6. Vocal supporters of 6 tend to be proponents of 8. Their connection seems to be on ideological grounds. It seems logically impossible to reconcile holding claims 6 and 8 simultaneously. I.e., neither the probability nor severity components of nuclear risk can exceed claim 6’s probability (certainty) and severity (extinction). Yet they are closely tied. Naomi Oreskes accused James Hansen of being a denier because he endorsed nuclear power.
Beliefs about the claims need not be binary. For each claim, one could hold belief in a range from certitude to slightly possible, as well as unknown or unknowable. Fred Singer, for example, accepts that CO2 alters the climate, but allows that its effect could be cooling rather than warming. Singer’s uncertainty stems from his perception that the empirical data does not jibe with global-warming theory. It’s not that he’s lukewarm; he finds the question presently unknowable. This is a form of denial (see Freedman and McKibben below) that green activists, blissfully free of epistemic humility and doubt, find particularly insidious.
Back to the question of what counts as a denier. I once naively thought that “climate change denier” applies only to claims 1-4. After all, the obvious literal meaning of the words would apply only to claims 1 and 2. We can add 3 and 4 if we allow that those using the term climate denier use it as a short form of “anthropogenic climate-change denier.”
Clearly, this is not the popular usage, however. I am regularly called a denier at green-tech events for arguing against claim 7 (renewables as cure). Whether or not anthropogenic climate change exists, and regardless of the size of the threat, wind and solar cannot power a society anything like the one we live in. I’m an engineer who specialized in thermodynamics and energy conversion; that’s my argument, and I’m happy to debate it.
Green activists’ insistence that we hold claim 8 (no fission) to be certain, in my view, calls their program and motivations into question, for reasons including the above-mentioned logical incompatibility of claims 6 and 8 (certain extinction without change, but fission is too dangerous).
I’ve rarely heard anyone deny claims 1-3 (climate change exists and humans play a role). Not even Marc Morano denies these. I don’t think any of our kids, indoctrinated into green policy at school, have any idea that those they’re taught to call deniers do not actually deny climate change.
In the last year I’ve seen a slight increase in professional scientists who deny claim 4 (overwhelmingly human cause), but the majority of scientists in relevant fields seem to agree with claim 4. Patrick Moore, Caleb Rossiter, Roger A. Pielke and Don Easterbrook seem to deny claim 4. Leighton Steward denies it on the grounds that climate change is the cause of rising CO2 levels, not its effect.
Some of the key targets of climate activism don’t appear to deny the basic claims of climate change. Among these are Judith Curry, Richard Tol, Ivar Giaever, Roy Spencer, Robert M. Carter, Denis Rancourt, John Theon, Scott Armstrong, Patrick Michaels, Will Happer and Philip Stott. Anthony Watts and Matt Ridley are very explicit about accepting claim 4 (mostly human-caused) but denying claims 5 and 6 (significant threat or extinction). William M. Briggs called himself a climate denier, but meant by it that the concept of climate, as understood by most people, is itself invalid.
More and more people who disagree with the greens’ proposed policy implementation are labeled deniers (as with Oreskes calling Hansen a denier because he supports fission). Andrew Freedman seemed to implicitly acknowledge the expanding use of the denier label in a recent Mashable piece, in which he warned of some green opponents who were moving “from outright climate denial to a more subtle, insidious and risky form.” Bill McKibben, particularly immune to the nuances of scientific method and rational argument, called “renewables denial” “at least as ugly” as climate denial.
Opponents argue that the green movement is a religious cult. Arguing over matters of definition has limited value, but greens are prone to apocalyptic rants that would make Jonathan Edwards blush, focus on sin and redemption, condemn heresy, and attempt to legislate right behavior. Last week The Conversation said it was banning not only climate denial but “climate skepticism.” I was amused at an aspect of the religiosity of the greens in both Freedman’s and McKibben’s complaints: each insists that being partially sinful warrants more condemnation than committing the larger sin.
So because you are lukewarm, and neither hot nor cold, I will spit you out of My mouth. – Revelation 3:16 (NAS)
Refusal to debate crackpots is understandable, but Michael Mann’s refusal to debate “deniers” (he refused even to share his data when ordered to do so by the British Columbia Supreme Court) looks increasingly like fear of engaging worthy opponents – through means other than suing them.
On his liberal use of the “denier” accusation, the snippet below provides some levity. In a House committee session, Mann denies calling anyone a denier and says he’s been misrepresented. Judith Curry (the denier) responds, “It’s in your written testimony.” On page 6 of Mann’s testimony, he says “climate science denier Judith Curry,” adding that “I use the term carefully.”
I deny claims 6 through 8. The threat is not existential; renewables won’t fix it; and fission can.
Follow this proud denier on Twitter.
Young people around the world protested for climate action last week. Sixteen-year-old Greta Thunberg implored Congress to “listen to the scientists” about climate change and fix it so her generation can thrive.
OK, let’s listen to them, and assume for sake of argument that we understand “them” to be a large majority of all relevant scientists, and that they say with one voice that humans are materially affecting climate. And let’s take the IPCC’s projection of a 3.4 to 4.8 degree C rise by 2100 in the absence of policy changes. While activists and politicians report that scientific consensus exists, some reputable scientists dispute this; but for sake of discussion assume such consensus exists.
That temperature rise, scientists tell us, would change sea levels and would warm cold regions more than warm regions. “Existential crisis,” said Elizabeth Warren on Tuesday. Would that in fact pose an existential threat? I.e., would it cause human extinction? That question probably falls much more in the realm of engineering than in science. But let’s assume Greta might promote (or demote, depending on whether you prefer expert generalists to expert specialists) engineers to the rank of scientists.
The World Bank 4 Degrees – Turn Down the Heat report is often cited as concluding that uncontrolled human climate impact threatens the human race. It does not. It describes Sub-Saharan Africa food production risk, southeast Asia water scarcity and coastal productivity risk. It speaks of wake-up calls and tipping points, and, lacking the ability to quantify risks, assumes several worst imaginable cases of cascade effects, while rejecting all possibility that innovation and engineering can, for example, mitigate water scarcity problems before they result in health problems. The language and methodology of this report are much closer to the realm of sociology than to that of people we usually call scientists. Should sociology count as science or as philosophy and ethics? I think the latter, and I think the World Bank’s analysis reeks of value-laden theory and theory-laden observations. But for sake of argument let’s grant that climate Armageddon, true danger to survival of the race, is inevitable without major change.
Now given this impending existential crisis, what can the voice of scientists do for us? Those schooled in philosophy, ethics, and the soft sciences might recall the is-ought problem, also known as Hume’s Guillotine, in honor of the first writer to make a big deal of it. The gist of the problem, closely tied to the naturalistic fallacy, is that facts about the world do not and cannot directly cause value judgments. And this holds regardless of whether you conclude that moral truths do or don’t exist. “The rules of morality are not conclusions of our reason,” observed Hume. For a more modern philosophical take on this issue see Simon Blackburn’s Ethics.
Strong statements on the non-superiority of scientists as advisers outside their realm come from scientists like Richard Feynman and Wilfred Trotter (see below).
But let’s assume, for sake of argument, that scientists are the people who can deliver us from climate Armageddon. Put them on a pedestal, like young Greta does. Throw scientism caution to the wind. I believe scientists probably do have more sensible views on the matter than do activists. But if we’re going to do this – put scientists at the helm – we should, as Greta says, listen to those scientists. That means the scientists, not the second-hand dealers in science – the humanities professors, pandering politicians, and journalists with agendas, who have, as Hayek phrased it, absorbed rumors in the corridors of science and appointed themselves as spokesmen for science.
What are these scientists telling us to do about climate change? If you think they’re advising us to equate renewables with green, as young protesters have been taught to do, then you’re listening not to the scientists but to second-hand dealers of misinformed ideology who do not speak for science. How many scientists think that renewables – at any scale that can put a real dent in fossil fuel use – are anything remotely close to green? What scientist thinks utility-scale energy storage can be protested and legislated into existence by 2030? How many scientists think uranium is a fossil fuel?
The greens, whose plans for energy are not remotely green, have set things up so that sincere but uninformed young people like Greta have only one choice – to equate climate change mitigation with what they call renewable energy. Even under Mark Jacobson’s grossly exaggerated claims about the efficiency and feasibility of electricity generation from renewables, Greta and her generation would shudder at the environmental devastation a renewables-only energy plan would yield.
Where is the green cry for people like Greta to learn science and engineering so they can contribute to building the world they want to live in? “Why should we study for a future that is being taken away from us?” asked Greta. One good reason 16-year-olds might do this is that in 2025 they can have an engineering degree and do real work on energy generation and distribution. Climate Armageddon will not happen by 2025.
I feel for Greta, for she’s been made a stage prop in an education-system and political drama that keeps young people ignorant of science and engineering, ensures they receive filtered facts from specific “trustworthy” sources, and keeps them emotionally and politically charged – to buy their votes, to destroy capitalism, to rework political systems along a facile Marxist ideology, to push for open borders and world government, or whatever the reason kids like her are politically preyed upon.
If the greens really believed that climate Armageddon were imminent (and noted that the net renewable contribution to world energy is still less than 1%), they might consider the possibility that gas is far better than coal in the short run, and that nuclear risks are small compared to the human extinction they are absolutely certain is upon us. If the greens’ real concern were energy and the environment, they would encourage Greta to listen to scientists like the Nobel laureate in physics Ivar Giaever, who says climate alarmism is garbage, and then to identify the points on which Giaever is wrong. That’s what real scientists do.
But it isn’t about that, is it? It’s not really about science or even about climate. As Saikat Chakrabarti, former chief of staff for Ocasio-Cortez, admitted: “the interesting thing about the Green New Deal is it wasn’t originally a climate thing at all.” “Because we really think of it as a how-do-you-change-the-entire-economy thing,” he added. To be clear, Greta did not endorse the Green New Deal, but she is their pawn.
Frightened, indoctrinated, science-ignorant kids are really easy to manipulate and exploit. Religions – particularly those that silence dissenters, brand heretics, and preach with righteous indignation of apocalypses that fail to happen – have long understood this. The green religion understands it too.
Go back to school, kids. You can’t protest your way to science. Learn physics, not social studies – if you can – because most of your teachers are puppets and fools. Learn to decide for yourself who you will listen to.
I believe that a scientist looking at nonscientific problems is just as dumb as the next guy — and when he talks about a nonscientific matter, he will sound as naive as anyone untrained in the matter. – Richard Feynman, The Value of Science, 1955.
Nothing is more flatly contradicted by experience than the belief that a man, distinguished in one of the departments of science is more likely to think sensibly about ordinary affairs than anyone else. – Wilfred Trotter, Has the Intellect a Function?, 1941
Most people believe they are better than average drivers. Is this a cognitive bias? Behavioral economists think so: “illusory superiority.” But a rational 40-year-old who has had no traffic accidents might think her car insurance premiums are still extremely high. She may then conclude she is a better-than-average driver, since she’s apparently paying for a lot of other people’s smashups. Are illusory superiority and selective recruitment at work here? Or is this intuitive Bayesianism operating on the available evidence?
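Her reasoning can be framed as a simple Bayesian update. A minimal sketch with made-up numbers (the prior, the per-year accident-free probabilities, and the 20-year horizon are all illustrative assumptions, not data from any study):

```python
# Two hypotheses: she is a better-than-average ("good") driver, or she is not.
# All probabilities below are illustrative assumptions.
prior_good = 0.5                  # uninformative prior: as likely good as not
p_clean_year_good = 0.97          # P(accident-free year | good driver)
p_clean_year_bad = 0.90           # P(accident-free year | not-good driver)
years = 20                        # accident-free years observed

# Likelihood of the evidence under each hypothesis,
# treating years as independent.
like_good = p_clean_year_good ** years
like_bad = p_clean_year_bad ** years

# Bayes' rule: posterior is proportional to prior times likelihood.
posterior_good = (prior_good * like_good) / (
    prior_good * like_good + (1 - prior_good) * like_bad
)
print(round(posterior_good, 2))   # → 0.82
```

On this toy evidence, her belief that she is better than average is not illusory superiority at all; it is roughly what a textbook Bayesian would conclude.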
Bayesian philosophy is based on using a specific rule set for updating one’s belief in light of new evidence. Objective Bayesianism, in particular, if applied strictly, would require us to quantify every belief we hold – our prior credence – with a probability in the range of zero to one and to quantify the value of new evidence. That’s a lot of cognizing, which would lead to a lot more personal bookkeeping than most of us care to do.
As I mentioned last time, Daniel Kahneman and others in his field hold that we are terrible intuitive Bayesians. That is, they believe we’re not very good at doing the equivalent of Bayesian reasoning intuitively (“not Bayesian at all,” said Kahneman and Tversky in Subjective probability: A judgment of representativeness, 1972). But beyond the current wave of books and TED talks framing humans as sacks of cognitive bias (often with government-paternalistic overtones), many experts in social psychology have reached the opposite conclusion.
- Edwards, W. 1968. “Conservatism in human information processing”. In Formal representation of human judgment.
- Peterson, C. R. and L. R. Beach. 1967. “Man as an intuitive statistician”. Psychological Bulletin. 68.
- Piaget, Jean. 1975. The Origin of the Idea of Chance in Children.
- Anderson, J. R. 1990. The Adaptive Character of Thought.
Anderson makes a particularly interesting point. People often have reasonable but wrong understandings of base rates, and official data sources often vary wildly about some base rates. So what critics characterize as poor Bayesian reasoning (e.g., ignoring base rates) is in fact the use of incorrect base rates, not a failure to employ base rates at all.
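Anderson’s point is easy to illustrate with a textbook diagnostic-test calculation (the prevalence figures, sensitivity, and false-positive rate here are invented for illustration). The Bayesian machinery is applied correctly in both cases; only the base rate differs, and that alone accounts for the wrong answer:

```python
def posterior(base_rate, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' rule."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * false_positive_rate
    return true_pos / (true_pos + false_pos)

sens, fpr = 0.99, 0.05                   # illustrative test characteristics

correct = posterior(0.001, sens, fpr)    # true prevalence: 1 in 1000
mistaken = posterior(0.01, sens, fpr)    # believed prevalence: 1 in 100

print(round(correct, 3), round(mistaken, 3))   # → 0.019 0.167
```

The reasoner who believes the 1-in-100 prevalence overestimates the posterior by nearly an order of magnitude while using base rates flawlessly, exactly the failure mode Anderson describes.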
Beyond the simple better-than-average-driver example above, many examples have been given (and ignored by those who see bias everywhere) of intuitive Bayesian reasoning that yields rational but incorrect results. These include not only single judgments, but people’s modification of belief across time – Bayesian updates.
For math-inclined folk seeking less trivial examples, papers like this one from Benoit and Dubra lay this out in detail (If a fraction x of the population believes that they rank in, say, the top half of the distribution with probability at least q > 1/2, then Bayesian rationality immediately implies that xq <= 1/2, not that x <= 1/2 [where q is the subject’s confidence that he is in the top half and x is the fraction who think they’re in the top half]).
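A concrete instance of the Benoit–Dubra bound (my own toy construction, not an example from their paper): suppose exactly half of drivers are top-half by definition, top-half drivers never crash, and bottom-half drivers crash with probability 1/2. A crash-free driver’s rational posterior of being top-half is then q = 2/3, and a fraction x = 3/4 of drivers are crash-free, so three quarters of the population rationally believes, with probability 2/3, that they are above the median – and x·q = 1/2, sitting exactly at the bound. A quick Monte Carlo check:

```python
import random

random.seed(0)
N = 200_000
crash_free = 0
top_and_crash_free = 0

for _ in range(N):
    top_half = random.random() < 0.5          # half the drivers are top-half
    # Top-half drivers never crash; bottom-half crash with probability 0.5.
    crashed = (not top_half) and random.random() < 0.5
    if not crashed:
        crash_free += 1
        if top_half:
            top_and_crash_free += 1

x = crash_free / N                    # fraction who believe they're top-half
q = top_and_crash_free / crash_free   # their shared, fully rational posterior

print(round(x, 2), round(q, 2))       # → 0.75 0.67
assert x * q <= 0.5 + 0.01            # Benoit–Dubra: x·q ≤ 1/2 (plus noise)
```

More than half the population claims, with better-than-even confidence, to be above the median, yet every belief in the simulation is the correct Bayesian posterior given the evidence each driver holds.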
A 2006 paper, Optimal Predictions in Everyday Cognition, by Thomas L. Griffiths and Joshua B. Tenenbaum warrants special attention. It is the best executed study I’ve ever seen in this field, and its findings are astounding – in a good way. They asked subjects to predict the duration or extent of common phenomena such as human lifespans, movie run times, and the box office gross of movies. They then compared the predictions given by participants with calculations from an optimal Bayesian model. They found that, as long as subjects had some everyday experience with the phenomena being predicted (like box office gross, unlike the reign times of Egyptian pharaohs), people predict extremely well.
The results of Griffiths and Tenenbaum showed people to be very competent intuitive Bayesians. Even more interesting, people’s implicit beliefs about data distributions, be they Gaussian (birth weights), Erlang (call-center hold times), or power-law (length of poems), were very consistent with real-world statistics, as was hinted at in Adaptive Character of Thought.
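The optimal-prediction rule they tested against is easy to sketch: given a prior over totals and the observation that a quantity has already reached t, the optimal median guess for the final total is the median of the prior truncated at t. A minimal version with synthetic data (the lifespan distribution parameters are illustrative, not Griffiths and Tenenbaum’s fitted values):

```python
import random
import statistics

random.seed(1)

def optimal_median_prediction(prior_samples, t):
    """Median of the prior distribution conditioned on total >= t."""
    survivors = [x for x in prior_samples if x >= t]
    return statistics.median(survivors)

# Roughly Gaussian prior over human lifespans (illustrative parameters).
lifespans = [random.gauss(75, 12) for _ in range(100_000)]

# Meeting an 18-year-old, predict a near-average lifespan...
print(round(optimal_median_prediction(lifespans, 18)))   # → 75
# ...but meeting an 85-year-old, shift the prediction upward, not merely to 85.
print(round(optimal_median_prediction(lifespans, 85)))   # → 90
```

The striking finding was that ordinary subjects’ guesses tracked this truncated-prior median closely across Gaussian, Erlang, and power-law domains, each of which demands a qualitatively different adjustment rule.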
Looking at the popular material judging people to be lousy Bayesians steeped in bias and systematic error, and far less popular material like that from Griffiths/Tenenbaum, Benoit/Dubra and Anderson, makes me think several phenomena are occurring. To start, as noted in previous posts, those dedicated to uncovering bias (e.g., Kahneman, Ariely) strongly prefer confirming evidence over disconfirming evidence. This bias bias manifests itself both in ignoring cases where humans are good Bayesians reaching right conclusions (as in Griffiths/Tenenbaum and Anderson) and in failing to grant that wrong conclusions don’t necessarily mean bad reasoning (the auto-driver example and the Benoit/Dubra cases).
Further, the pop-science presentation of human bias (Ariely TED talks, e.g.) makes newcomers to the topic feel like they’ve received a privileged view into secret knowledge. This gives the bias meme much stronger legs than the idea that humans are actually amazingly good intuitive Bayesians in most cases. As John Stuart Mill noted 200 years ago, those who despair when others hope are admired as sages while optimists are dismissed as fools. The best, most rigorous analyses in this realm, however, rest strongly with the optimists.
Daniel Kahneman has made great efforts to move psychology in the direction of science, particularly with his pleas for attention to replicability after research fraud around the priming effect came to light. Yet in Thinking, Fast and Slow Kahneman still seems to draw some broad conclusions from a thin mantle of evidentiary icing upon a thick core of pre-formed theory. He concludes that people are bad intuitive Bayesians through flawed methodology and hypotheticals that set things up so that his psychology experiment subjects can’t win. Like many in the field of behavioral economics, he’s inclined to find bias and irrational behavior in situations better explained by the subjects’ simply lacking complete information.
Like Richard Thaler and Dan Ariely, Kahneman sees bias as something deeply ingrained and hard-coded, programming that cannot be unlearned. He associates most innate bias with what he calls System 1, our intuitive, fast-thinking selves. “When called on to judge probability,” Kahneman says, “people actually judge something else and believe they have judged probability.” He agrees with Thaler, who finds “our ability to de-bias people is quite limited.”
But who is the “we” (“our” in that quote), and how is it that “they” (Thaler, Ariely and Kahneman) are sufficiently unbiased to make this judgment? Are those born without the bias gene somehow drawn to the field of psychology, or can a few souls break free through sheer will? If behavioral economists somehow clawed their way out of the pit of bias, can they not throw down a rope for the rest of us?
Take Kahneman’s example of the theater tickets. He compares two situations:
A. A woman has bought two $80 tickets to the theater. When she arrives at the theater, she opens her wallet and discovers that the tickets are missing. $80 tickets are still available at the box office. Will she buy two more tickets to see the play?
B. A woman goes to the theater, intending to buy two tickets that cost $80 each. She arrives at the theater, opens her wallet, and discovers to her dismay that the $160 with which she was going to make the purchase is missing. $80 tickets are still available at the box office. She has a credit card. Will she buy the tickets and just charge them?
Kahneman says that the sunk-cost fallacy, a mental-accounting fallacy, and the framing effect account for the fact that many people view these two situations differently. Cases A and B are functionally equivalent, he says.
Really? Finding that $160 is missing from a wallet would cause most people to say, “darn, where did I misplace that money?” Surely no pickpocket removed the cash and stealthily returned the wallet to her purse. So the cost is unarguably sunk in case A, but reasonable doubt exists in case B: she probably left the cash at home. As with philosophy, many problems in psychology boil down to semantics. And as with the trolley-problem variants, the artificiality of the problem statement is a key factor in the perceived irrationality of subjects’ responses.
By framing effect, Kahneman means that people’s choices are influenced by whether two options are presented with positive or negative connotations. Why is this bias? The subject has assumed that some level of information is embedded in the framer’s problem statement. If the psychologist judges that the subject has given this information too much weight, we might consider demystifying the framing effect by rebranding it the gullibility effect. But at that point it makes sense to question whether framing, in a broader sense, is at work in the thought problems. In presenting such problems and hypothetical situations to subjects, the framers imply a degree of credibility that is then used against those subjects by judging them irrational for accepting the conditions stipulated in the problem statement.
Bayesian philosophy is based on the idea of using a specific rule set for updating a “prior” (meaning prior belief – the degree of credence assigned to a claim or proposition) on the basis of new evidence. A Bayesian would interpret the framing effect, and the related biases Kahneman calls anchoring and priming, as either a logic error in processing the new evidence or a judgment error in the formation of an initial prior. The latter – how we establish initial priors – is probably the most enduring criticism of Bayesian reasoning. More on that issue later, but a Bayesian would say that Kahneman’s subjects need training in the formation of initial and uninformative priors. Humans have been shown to be very trainable in this matter, contrary to the behavioral economists’ conclusion that we are hopelessly bound to innate bias.
One example Kahneman uses to show the framing effect presents different anchors to two separate test groups:
Group 1: Is the height of the tallest redwood more or less than 1200 feet? What is your best guess for the height of the tallest redwood?
Group 2: Is the height of the tallest redwood more or less than 120 feet? What is your best guess for the height of the tallest redwood?
Group 1’s average estimate was 844 feet; Group 2’s was 282 feet. The difference between the two anchors is 1080 feet (1200 – 120); the difference between the two groups’ mean estimates was 562 feet. Kahneman defines the anchoring index as the ratio of the difference in mean estimates to the difference in anchors, and uses it to measure the robustness of the effect. He rules out the possibility that subjects take anchors to be informative, saying that obviously random anchors can be just as effective, citing a 50% anchoring index when German judges rolled loaded dice (allowing only values of 3 or 9 to come up) before sentencing a shoplifter (hypothetically, of course). Kahneman reports that judges rolling a 3 gave 5-month sentences while those rolling a 9 assigned the shoplifter an 8-month sentence (index = 50%).
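The anchoring-index arithmetic in both examples is straightforward to check:

```python
def anchoring_index(mean_high, mean_low, anchor_high, anchor_low):
    """Kahneman's anchoring index: the difference in mean estimates
    divided by the difference in anchors."""
    return (mean_high - mean_low) / (anchor_high - anchor_low)

# Redwood heights: anchors of 1200 ft and 120 ft; mean estimates 844 ft and 282 ft.
redwood = anchoring_index(844, 282, 1200, 120)
print(round(redwood, 2))   # → 0.52

# Dice-rolling judges: anchors of 9 and 3 months; mean sentences 8 and 5 months.
judges = anchoring_index(8, 5, 9, 3)
print(round(judges, 2))    # → 0.5
```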
But the actual study (Englich et al.) cited by Kahneman has some curious aspects, besides the fact that it was entirely hypothetical. The judges found the fictional case briefs to be realistic, but they were not judging from the bench; they were working a thought problem. Englich’s Study 3 (the one Kahneman cites) shows that the standard deviation in sentences was relatively large compared to the difference between sentences assigned by the two groups. More curious is a comparison of Englich’s Study 2 with the Study 3 Kahneman describes in Fast and Slow. Study 2 did not involve throwing dice to create an anchor. Its participants were only told that the prosecutor was demanding either a 3-month or a 9-month sentence, those terms not having originated in any judicial expertise. In Study 2, the difference between mean sentences from judges who received the two anchors was only two months (anchoring index = 33%).
Studies 2 and 3 therefore showed an anchoring index 51% higher for an explicitly random anchor (clearly known to be random by participants) than for an anchor understood by participants to be minimally informative. This suggests either that subjects regard pure chance as more useful than potentially relevant information, or that something is wrong with the experiment, or that something is wrong with Kahneman’s inferences from the evidence. I’ll suggest that the last two are at work, and that Kahneman fails to see that he is preferentially selecting confirming evidence over disconfirming evidence because he assumed his model of innate human bias was true before he examined the evidence. That implies a much older, more basic fallacy might be at work: begging the question, where an argument’s premise assumes the truth of the conclusion.
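The arithmetic behind these index values is simple enough to check directly (a minimal sketch, using only the figures quoted above):

```python
def anchoring_index(est_hi, est_lo, anchor_hi, anchor_lo):
    """Kahneman's anchoring index: the spread in mean estimates
    divided by the spread in anchors."""
    return (est_hi - est_lo) / (anchor_hi - anchor_lo)

# Redwood study: anchors of 1200 ft and 120 ft drew mean guesses
# of 844 ft and 282 ft.
redwood = anchoring_index(844, 282, 1200, 120)   # 562 / 1080 ≈ 0.52

# Englich Study 3 (dice): anchors of 9 and 3 months drew mean
# sentences of 8 and 5 months.
dice = anchoring_index(8, 5, 9, 3)               # 3 / 6 = 0.50

# Study 2's index was 0.33, so the dice anchor scored
# (0.50 - 0.33) / 0.33 ≈ 51% higher than the "informative" one.
```

Nothing deep here, but seeing the ratios side by side makes the oddity plain: the anchor everyone knew was random moved the judges more.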
That fallacy is not an innate bias, however. It’s a rhetorical sin that goes way back. It is eminently curable. Aristotle wrote of it often and committed it slightly less often. The sciences quickly began to learn the antidote – sometimes called the scientific method – during the Enlightenment. Well, some quicker than others.
(2nd post on rational behavior of people too hastily judged irrational)
“These villagers have some really messed-up building practices.”
That’s a common reaction by gringos on first visiting rural Mexico. They see half-completed brick or cinder-block walls, apparently abandoned for a year or more, or exposed rebar sticking up from the roof of a one-story structure. It’s a pretty common sight.
In the 1990s I spent a few months in some pretty remote places in southern Mexico’s Sierra Madre Oriental mountains exploring caves. The indigenous people, Mazatecs and Chinantecs, were corn and coffee growers, depending on elevation and rain, which vary wildly over short distances. I traveled to San Agustin Zaragoza, a few miles from Huautla de Jimenez, home of Maria Sabina and the birthplace of the American psychedelic drug culture. San Agustin was mostly one-room thatched-roof stone houses, a few of brick or block, a few with concrete walls or floors. One had glass windows. Most had electricity, though often a single bulb hanging from a beam. No phones for miles. Several cinder block houses had rebar sticking from their flat roofs.
Talking with the adult Mazatecs of San Agustin wasn’t easy. Few of them spoke Spanish, but all their kids were learning it. Since we were using grade-school kids as translators, and our Spanish was weak to start with, we rarely went deep into politics or philosophy.
Juan Felix’s son Luis told me, after we got to know each other a bit, that when he turned fourteen he’d be heading off to a boarding school. He wanted to go. His dad had explained to Luis that life beyond the mountains of Oaxaca was an option. Education was the way out.
Mazatecs get far more cooperation from their kids than US parents do. This observation isn’t mere noble-savage worship. They consciously create early opportunities for toddlers to collaborate with adults in house and field work. They do this fully aware that the net contribution from young kids is negative; adults have to clean up messes made by honest efforts of preschoolers. But by age 8 or 9, kids don’t shun household duties. The American teenager phenomenon is nonexistent in San Agustin.
Juan Felix was a thinker. I asked Luis to help me ask his dad some questions. What’s up with the protruding rebar, I asked. Follow the incentives, Juan Felix said in essence. Houses under construction are taxed as raw land; completed houses have a higher tax rate. Many of the locals, having been relocated from more productive land now beneath a lowland reservoir, were less than happy with their federal government.
Back then Mexican Marxists patrolled the deeply-rutted mud roads in high-clearance trucks blasting out a bullhorn message that the motives of the Secretariat of Hydraulic Resources had been ethnocidal and that the SHR sought to force the natives into an evil capitalist regime by destroying their cultural identity, etc. Despite being victims of relocation, the San Agustin residents didn’t seem to buy the argument. While there was still communal farming in the region, ejidos were giving way to privately owned land.
A few years later, caver Louise Hose and I traveled to San Juan Zautla, also in the state of Oaxaca, to look for caves. Getting there was a day of four-wheeling followed by a two-day walk over mountainous dirt trails. It was as remote a place as I could find in North America. We stopped overnight in the village of Tecomaltianguisco and discussed our travel plans. We were warned that we might be unwelcome in Zautla.
On approaching Zautla we made enough noise to ensure we weren’t surprising anyone. Zautlans speak Sochiapam Chinantec. Like Mazatec, it is a highly tonal language, so much so that they can conduct full conversations over distance by whistling the tones that would otherwise accompany speech. Knowing that we were being whistled about was unnerving, though had they been talking, we wouldn’t have understood a word of their language any more than we would understand an etic tone of it.
But the Zautla residents welcomed us with open arms, gave us lodging, and fed us, including the fabulous black persimmons they grew there along with coffee. Again communicating through their kids, they told us we were the first brown-hairs who had ever visited Zautla. They guessed that the last outsiders to arrive there were the Catholic Spaniards who had brought the town bell for a tower that was never built. The Zautlans are not Catholic. They showed us the bell. Its inscription included a date in the 1700s. Today there’s a road to Zautla. Census data says that in 2015 the population (1,200) was still 100% indigenous and that there were no landlines, no cell phones, and no internet access.
In Zautla I saw very little exposed rebar, but partially completed block walls were everywhere. I doubted that property-tax assessors spent much time in Zautla, so the tax story didn’t seem to apply. So, through a 10-year-old, I asked the jefe about the construction practices, which to outsiders appeared to reflect terrible planning.
Jefe Miguel laid it out. Despite their remote location, they still purchased most of their construction materials in distant Cuicatlan. Mules carried building materials over the dirt trail that brought us to Zautla. Inflation in Mexico had been running as high as 70% annually, compounding to well over 800% over the preceding decade. Cement, mortar, and cinder block are non-depreciating assets in a high-inflation economy, Miguel told us. Buying construction materials as early as possible makes economic sense. Paying high VAT on the price of materials added insult to inflationary injury. Blocks take up a lot of space, so you don’t want to store them indoors. While theft is uncommon, it’s still a concern. Storing them outdoors is made safer by gluing them down with mortar where the new structure is planned. Of course it’s not ideal, but don’t blame Zautla; blame the monetary tomfoolery of the PRI – Partido Revolucionario Institucional. Zautla economics 101.
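Miguel’s logic is ordinary compound growth: a price level rising at annual rate r multiplies by (1 + r)^n over n years, so materials bought early and mortared in place hold their value while pesos don’t. A minimal sketch (the 70% rate is the figure quoted above; actual Mexican inflation varied year to year):

```python
def cumulative_inflation(annual_rate, years):
    """Total fractional increase in the price level after compounding."""
    return (1 + annual_rate) ** years - 1

# At a sustained 70% annual rate, a decade multiplies prices roughly
# 200-fold -- far past the 800% mark -- so cinder block stockpiled
# early beats holding cash by a wide margin.
decade = cumulative_inflation(0.70, 10)
```

Even at a far lower sustained rate of about 25%, a decade of compounding clears 800%; at the rates Mexico actually saw, the case for buying block before you can afford to build the wall makes itself.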
San Agustin Christmas Eve 1988.
Bernard on fiddle, Jaime on Maria Sabina’s guitar.
San Agustin Zaragoza from the trail to Santa Maria la Asuncion
On the trail from San Agustin to Santa Maria la Asuncion
On the trail from Tecomaltianguisco to San Juan Zautla
San Juan Zautla, Feb. 1992
The karst valley below Zautla
Chinantec boy with ancient tripod bowl
Mountain view from Zautla
The classic formulation of the trolley-problem thought experiment goes something like this:
A runaway trolley hurtles toward five tied-up people on the main track. You see a lever that controls the switch. Pull it and the trolley switches to a side track, saving the five people, but will kill one person tied up on the side track. Your choices:
- Do nothing and let the trolley kill the five on the main track.
- Pull the lever, diverting the trolley onto the side track causing it to kill one person.
At this point the Ethics 101 class debates the issue and dives down the rabbit hole of deontology, virtue ethics, and consequentialism. That’s probably what Philippa Foot, who created the problem, expected. At this point engineers probably figure that the ethicists mean cable-cars (below right), not trolleys (streetcars, left), since the cable cars run on steep hills and rely on a single, crude mechanical brake while trolleys tend to stick to flatlands. But I digress.
Many trolley problem variants exist. The first twist usually thrust upon trolley-problem rookies was called “the fat man variant” back in the mid 1970s when it first appeared. I’m not sure what it’s called now.
The same trolley and five people, but you’re on a bridge over the tracks, and you can block it with a very heavy object. You see a very fat man next to you. Your only timely option is to push him over the bridge and onto the track, which will certainly kill him and will certainly save the five. To push or not to push.
Ethicists debate the moral distinction between the two versions, focusing on intentionality, double-effect reasoning etc. Here I leave the trolley problems in the competent hands of said ethicists.
But psychologists and behavioral economists do not. They appropriate the trolley problems as an apparatus for contrasting emotion-based and reason-based cognitive subsystems. At other times it becomes all about the framing effect, one of the countless cognitive biases afflicting the subset of souls having no psych education. This bias is cited as the reason most people fail to see the two trolley problems as morally equivalent.
The degree of epistemological presumptuousness displayed by the behavioral economist here is mind-boggling. (Baby, you don’t know my mind…, as an old Doc Watson song goes.) Just because it’s a thought experiment doesn’t mean it’s immune to the rules of good design of experiments. The fat-man variant is radically different from the original trolley formulation – radically different in what the cognizing subject imagines upon hearing or reading the problem statement. The first scenario is at least plausible in the real world; the second isn’t remotely.
First off, pulling the lever is about as binary as it gets: it’s either in position A or position B and any middle choice is excluded outright. One can perhaps imagine a real-world switch sticking in the middle, causing an electrical short, but that possibility is remote from the minds of all but reliability engineers, who, without cracking open MIL-HDBK-217, know the likelihood of that failure mode to be around one per 10 million operations.
Pushing someone, a very heavy someone, over the railing of a bridge is a complex action, introducing all sorts of uncertainty. Of course the bridge has a railing; you’ve never seen one that didn’t. There’s a good chance the fat man’s center of gravity is lower than the top of the railing, because railings are designed to keep people from toppling over them. That means you can’t merely push him over; you more or less have to lift him until his CG is higher than the top of the railing. But he’s heavy, not particularly passive, and stronger than you are. You can’t just push him into the railing expecting it to break, either. Bridge railings are robust. Experience has told you this for your entire life. You know it even if you know nothing of civil engineering and pedestrian bridge safety codes. And even if the term center of gravity (CG) is foreign to you, by age six you had grounded intuitions on the concept, along with moment of inertia and fulcrums.
Assume you believe you can somehow overcome the railing obstacle. Trolleys weigh about 100,000 pounds. The problem statement said the trolley is hurtling toward five people. That sounds like 10 miles per hour at minimum. Your intuitive sense of momentum (mass times velocity) and your intuitive sense of what it takes to decelerate the hurtling mass (Newton’s 2nd law, f = ma) simply don’t line up with the devious psychologist’s claim that the heavy person’s death will save five lives. The experimenter’s saying it – even in a thought experiment – doesn’t make it so, or even make it plausible. Your rational subsystem, whether thinking fast or slow, screams out that the chance of success with this plan is tiny. So you’re very likely to needlessly kill your bridge mate, and then watch five victims get squashed all by yourself.
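The scale of the mismatch is easy to make concrete. Treating the pushed body as a perfectly inelastic obstacle and conserving momentum gives a generous upper bound on how much it could slow the trolley (a rough sketch; the body mass and metric conversions are ballpark assumptions, not from any study):

```python
# Rough momentum check: how much can one heavy body slow the trolley?
trolley_mass = 45_000    # kg, roughly the 100,000 lb cited above
trolley_speed = 4.5      # m/s, roughly 10 mph
body_mass = 150          # kg, a very heavy man (assumed figure)

# Perfectly inelastic collision: momentum m1*v1 is conserved, so the
# combined mass rolls on at m1*v1 / (m1 + m2).
speed_after = trolley_mass * trolley_speed / (trolley_mass + body_mass)
fractional_slowdown = 1 - speed_after / trolley_speed

# fractional_slowdown ≈ 0.003: the trolley keeps about 99.7% of its
# speed, which is what the subject's gut already told them.
```

Braking friction after the impact might help at the margin, but the momentum figures alone show why “one body stops the trolley” strains the imagination in a way that “pull the lever” does not.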
The test subjects’ failure to see moral equivalence between the two trolley problems speaks to their rationality, not their cognitive bias. They know an absurd hypothetical when they see one. What looks like humanity’s logical ineptitude to so many behavioral economists appears to the engineers as humanity’s cultivated pragmatism and an intuitive grasp of physics, factor-relevance evaluation, and probability.
There’s book smart, and then there’s street smart, or trolley-tracks smart, as it were.