Bill Storage


The Trouble with Doomsday

Doomsday just isn’t what it used to be. Once the dominion of ancient apologists and their votaries, the final destiny of humankind now consumes probability theorists, physicists, and technology luminaries. I’ll give some thoughts on probabilistic aspects of the doomsday argument after a brief comparison of ancient and modern apocalypticism.

Apocalypse Then

The Israelites were enamored of eschatology. “The Lord is going to lay waste the earth and devastate it,” wrote Isaiah, giving few clues about when the wasting would come. The early Christians anticipated an imminent end of days. Matthew 16:27: “some of those who are standing here will not taste death until they see the Son of Man coming in His kingdom.”

From late antiquity through the middle ages, preoccupation with the Book of Revelation led to conflicting ideas about the finer points of “domesday,” as it was called in Middle English. The first millennium brought a flood of predictions of, well, flood, along with earthquakes, zombies, lakes of fire and more. But a central Christian apocalyptic core was always beneath these varied predictions.

Right up to the Enlightenment, punishment awaited the unrepentant in a final judgment that, despite Matthew’s undue haste, was still thought to arrive any day now. Disputes raged over whether the rapture would precede the tribulation or follow it, the proponents of each view armed with supporting scripture. Polarization! When Christianity began to lose command of its unruly flock in the 1800s, Nietzsche wondered just what a society of non-believers would find to flog itself about. If only he could see us now.

Apocalypse Now

Our modern doomsday riches include options that would turn an ancient doomsayer green. Alas, at this eleventh hour we know nature’s annihilatory whims, including global pandemic, supervolcanoes, asteroids, and killer comets. Still in the Acts of God department, more learned handwringers can sweat about earth orbit instability, gamma ray bursts from nearby supernovae, or even a fluctuation in the Higgs field that evaporates the entire universe.

As Stephen Hawking explained bubble nucleation, the Higgs field might be metastable at energies above a certain value, allowing a region of false vacuum to undergo catastrophic vacuum decay and send a bubble of true vacuum expanding outward at the speed of light. This might have started eons ago, arriving at your doorstep before you finish this paragraph. Harold Camping, eat your heart out.

Hawking also feared extraterrestrial invasion, a view hard to justify with probabilistic analyses. Glorious as such cataclysms are, they lack any element of contrition. Real apocalypticism needs a guilty party.

Thus anthropogenic climate change reigned for two decades with no credible competitors. As self-inflicted catastrophes go, it had something for everyone. Almost everyone. Verily, even Pope Francis, in a covenant that astonished adherents, joined – with strong hand and outstretched arm – leftists like Naomi Oreskes, who shares little else with the Vatican, ideologically speaking.

While Global Warming is still revered, some prophets now extend the hand of fellowship to some budding successor fears, still tied to devilries like capitalism and the snare of scientific curiosity. Bioengineered coronaviruses might be invading as we speak. Careless researchers at the Large Hadron Collider could set off a mini black hole that swallows the earth. So some think anyway.

Nanotechnology now gives some prominent intellects the willies too. My favorite in this realm is Gray Goo, a catastrophic chain of events involving molecular nanobots programmed for self-replication. They will devour all life and raw materials at an ever-increasing rate. How they’ll manage this without melting themselves due to the normal exothermic reactions tied to such processes is beyond me.  Global Warming activists may become jealous, as the very green Prince Charles himself now diverts a portion of the crown’s royal dread to this upstart alternative apocalypse.

My cataclysm bucks are on full-sized Artificial Intelligence though. I stand with chief worriers Bill Gates, Ray Kurzweil, and Elon Musk. Computer robots will invent and program smarter and more ruthless autonomous computer robots on a rampage against humans seen by the robots as obstacles to their important business of building even smarter robots. Game over.

The Mathematics of Doomsday

The Doomsday Argument is a mathematical proposition arising from the Copernican principle – a trivial application of Bayesian reasoning – wherein we assume that, lacking other info, we should find ourselves, roughly speaking, in the middle of the phenomenon of interest. Copernicus didn’t really hold this view, but 20th century thinkers blamed him for it anyway.

Applying the Copernican principle to human life starts with the knowledge that we’ve been around for 200 thousand years, during which 60 billion of us have lived. Copernicans then justify the belief that half the humans that will have ever lived remain to be born. With an expected peak earth population of 12 billion, we might, using this line of calculation, expect the human race to go extinct in a thousand years or less.

Adding a pinch of statistical rigor, some doomsday theorists calculate a 95% probability that the number of humans who will ever live is less than 20 times the number that have lived so far. Positing individual life expectancy of 100 years and 12 billion occupants, the earth will house humans for no more than 10,000 more years.
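The arithmetic behind those figures is simple enough to check. Here is a minimal sketch, assuming the numbers quoted above (60 billion past births, a steady population of 12 billion, 100-year lifespans); the inputs are illustrative, not a forecast.

```python
# Back-of-envelope doomsday arithmetic, using the figures quoted above.
past_births = 60e9          # humans born to date (assumed)
peak_population = 12e9      # assumed steady-state population
lifespan_years = 100        # assumed average lifespan

# Copernican median estimate: as many births still ahead as behind.
median_years_left = past_births / peak_population * lifespan_years
print(f"Median estimate: ~{median_years_left:,.0f} more years")    # ~500 years

# 95% bound: with probability 0.95 our birth rank lies in the last 95% of all
# births, so total births < past_births / 0.05, i.e. 20 times the past births.
future_births_95 = past_births / 0.05 - past_births
years_left_95 = future_births_95 / peak_population * lifespan_years
print(f"95% bound: ~{years_left_95:,.0f} more years")               # ~9,500 years
```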

That’s the gist of the dominant doomsday argument. Notice that it is purely probabilistic. It applies equally to the Second Coming and to Gray Goo. However, its math and logic are both controversial. Further, I’m not sure why its proponents favor population-based estimates over time-based estimates. That is, it took a lot longer than 10,000 years, the proposed P = .95 extinction term, for the race to arrive at our present population. So why not place the current era in the middle of the duration of the human race, thereby giving us another 200,000 years? That’s quite an improvement on the 10,000 year prediction above.

Even granting that improvement, all the above doomsday logic has some curious bugs. If we’re justified in concluding that we’re midway through our reign on earth, then should we also conclude we’re midway through the existence of agriculture and cities? If so, given that cities and agriculture emerged 10,000 years ago, we’re led to predict a future where cities and agriculture disappear in 10,000 years, followed by 190,000 years of post-agriculture hunter-gatherers. Seems unlikely.

Astute Bayesian reasoners might argue that all of the above logic relies – unjustifiably – on an uninformative prior. But we have prior knowledge suggesting we don’t happen to be at some random point in the life of mankind. Unfortunately, we can’t agree on which direction that skews the outcome. My reading of the evidence leads me to conclude we’re among the first in a long line of civilized people. I don’t share Elon Musk’s pessimism about killer AI. And I find Hawking’s extraterrestrial worries as facile as the anti-GMO rantings of the Union of Concerned Scientists. You might read the evidence differently. Others discount the evidence altogether, and are simply swayed by the fashionable pessimism of the day.

Finally, the above doomsday arguments all assume that we, as observers, are randomly selected from the set of all humans who will ever actually exist (past, present, and future), as opposed to being selected from the set of all possible births. That may seem a trivial distinction, but, on close inspection, it becomes profound. The former is analogous to Theory 2 in my previous post, The Trouble with Probability. This particular observer effect, first described by Dennis Dieks in 1992, is called the self-sampling assumption by Nick Bostrom. Considering yourself to be randomly selected from all possible births prior to human extinction is the analog of Theory 3 in my last post. It arose from an equally valid assumption about sampling. That assumption, called self-indication by Bostrom, confounds the above doomsday reasoning as it did the hotel problem in the last post.

The self-indication assumption holds that we should believe that we’re more likely to discover ourselves to be members of larger sets than of smaller sets. As with the hotel room problem discussed last time, self-indication essentially cancels out the self-sampling assumption. We’re more likely to be in a long-lived human race than a short one. In fact, setting aside some secondary effects, we can say that the likelihood of being selected into any set is proportional to the size of the set; and here we are in the only set we know of. Doomsday hasn’t been called off, but it has been postponed indefinitely.
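To make the cancellation concrete, here is a toy calculation under invented numbers: a fair coin decides between a short-lived race (120 billion total births) and a long-lived one (1.2 trillion), and our birth rank is about 60 billion. The hypotheses and figures are mine, chosen only to show how the two assumptions pull in opposite directions.

```python
# Toy comparison of self-sampling vs. self-indication, with invented numbers.
rank = 60e9                                              # our approximate birth rank
hypotheses = {"short race": 120e9, "long race": 1.2e12}  # total births under each
prior = {h: 0.5 for h in hypotheses}                     # the coin flip

def normalize(weights):
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Self-sampling: you are a random draw from the births that actually occur,
# so P(our rank | N total births) = 1/N whenever rank <= N.
ssa = normalize({h: prior[h] * (1 / N) for h, N in hypotheses.items() if rank <= N})

# Self-indication: additionally weight each hypothesis by how many observers
# it contains (a random draw from all possible births).
sia = normalize({h: prior[h] * N * (1 / N) for h, N in hypotheses.items() if rank <= N})

print("self-sampling posterior:", ssa)    # ~0.91 short, ~0.09 long: the doomsday shift
print("self-indication posterior:", sia)  # 0.5 / 0.5: the shift cancels
```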

6 Comments

The Trouble with Probability

The trouble with probability is that no one agrees what it means.

Most people understand probability to be about predicting the future and statistics to be about the frequency of past events. While everyone agrees that probability and statistics should have something to do with each other, no one agrees on what that something is.

Probability got a rough start in the world of math. There was no concept of probability as a discipline until about 1650 – odd, given that gambling had been around for eons. Some of the first serious work on probability was done by Blaise Pascal, who was assigned by a nobleman to divide up the winnings when a dice game ended unexpectedly. Before that, people just figured chance wasn’t receptive to analysis. Aristotle’s idea of knowledge required that it be universal and certain. Probability didn’t fit.

To see how fast the concept of probability can go haywire, consider your chance of getting lung cancer. Most agree that probability is determined by your membership in a reference class for which a historical frequency is known. Exactly which reference class you belong to is always a matter of dispute. How similar to them do you need to be? The more accurately you set the attributes of the reference population, the more you narrow it down. Eventually, you get down to people of your age, weight, gender, ethnicity, location, habits, and genetically determined preference for ice cream flavor. Your reference class then has a size of one – you. At this point your probability is either zero or one, and nothing in between. The historical frequency of cancer within this population (you) cannot predict your future likelihood of cancer. That doesn’t seem like what we wanted to get from probability.

Similarly, in the real world, the probabilities of uncommon events and of events with no historical frequency at all are the subject of keen interest. For some predictions of previously unexperienced events, like an airplane crashing due to simultaneous failure of a certain combination of parts, even though that combination may never have occurred in the past, we can assemble a probability by combining historical frequencies of the relevant parts using Boolean logic. My hero Richard Feynman seemed not to grasp this, oddly.
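As a rough illustration of that kind of Boolean assembly, here is a minimal sketch under invented failure rates: two independent, redundant parts whose joint failure has never been observed, with the combined probability built from each part’s historical rate. The rates and exposure time are assumptions, not real aviation data.

```python
import math

# Invented per-hour failure rates for two redundant parts, and an assumed
# exposure time for one flight.
rate_a = 1e-5            # part A failures per flight hour (assumed)
rate_b = 2e-5            # part B failures per flight hour (assumed)
flight_hours = 10.0      # exposure time per flight (assumed)

# Probability each part fails at least once during the flight (constant rate).
p_a = 1 - math.exp(-rate_a * flight_hours)
p_b = 1 - math.exp(-rate_b * flight_hours)

# Boolean AND of independent events: both must fail on the same flight.
p_both = p_a * p_b
print(f"P(both fail on one flight) ~ {p_both:.1e}")   # ~2e-8, even if that
# combination has never once been seen in service history
```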

For worries like a large city being wiped out by an asteroid, our reasoning becomes more conjectural. But even for asteroids we can learn quite a bit about asteroid impact rates based on the details of craters on the moon, where the craters don’t weather away so fast as they do on earth. You can see that we’re moving progressively away from historical frequencies and becoming more reliant on inductive reasoning, the sort of thing that gave Aristotle the hives.

Finally, there are some events for which historical frequencies provide no useful information. The probability that nanobots will wipe out the human race, for example. In these cases we take a guess, maybe even a completely wild guess, and then, on incrementally getting tiny bits of supporting or nonsupporting evidence, we modify our beliefs. This is the realm of Bayesianism. In these cases when we talk about probability we are really only talking about the degree to which we believe a proposition, conjecture or assertion.
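A minimal sketch of that updating process, with entirely invented numbers: a rough prior guess is revised as each scrap of evidence arrives, each scrap summarized by a likelihood ratio I made up for the example.

```python
# Degree-of-belief updating via Bayes' rule in odds form. All numbers invented.
prior = 0.01                               # initial wild guess the conjecture is true

# Each piece of evidence is summarized by a likelihood ratio:
# P(evidence | conjecture true) / P(evidence | conjecture false).
likelihood_ratios = [3.0, 0.5, 4.0, 2.0]   # some supporting, some undermining

odds = prior / (1 - prior)
for lr in likelihood_ratios:
    odds *= lr                             # Bayes' rule, one scrap at a time
    belief = odds / (1 + odds)
    print(f"evidence with LR={lr}: belief is now {belief:.3f}")
```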

Breaking it down a bit more formally, a handful of related but distinct interpretations of probability emerge. Those include, for example:

Objective chances: The physics of flipping a fair coin tend to result in heads half the time.

Frequentism: Relative frequency across time: of all the coins ever flipped, one half have been heads, so expect more of the same.

Hypothetical frequentism: If you flipped coins forever, the heads/tails ratio would approach 50%.

Bayesian belief: Prior distributions equal belief: before flipping a coin, my personal expectation that it will be heads is equal to that of it being tails.

Objective Bayes: Prior distributions represent neutral knowledge: given only that a fair coin has been flipped, the plausibility of its having fallen heads equals that of its having been tails.

While those all might boil down to the same thing in the trivial case of a coin toss, they can differ mightily for difficult questions.
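To make the contrast slightly more tangible, here is a small sketch that runs the same simulated flips through two of the readings above: a frequentist relative-frequency estimate and a Bayesian posterior under a uniform prior. It illustrates how the numbers are interpreted, not an argument for either camp.

```python
import random
random.seed(1)

# Simulate 100 flips of a fair coin.
flips = [random.random() < 0.5 for _ in range(100)]
heads = sum(flips)

# Frequentist reading: probability of heads is the observed relative frequency.
freq_estimate = heads / len(flips)

# Bayesian reading: start from a uniform Beta(1,1) prior over the coin's bias
# and report the posterior mean degree of belief, (heads + 1) / (flips + 2).
posterior_mean = (heads + 1) / (len(flips) + 2)

print(f"{heads} heads in {len(flips)} flips")
print(f"frequentist estimate:    {freq_estimate:.3f}")
print(f"Bayesian posterior mean: {posterior_mean:.3f}")
```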

People’s ideas of probability differ more than one might think, especially when it becomes personal. To illustrate, I’ll use a problem derived from one that originated either with Nick Bostrom, Stuart Armstrong or Tomas Kopf, and was later popularized by Isaac Arthur. Suppose you wake up in a room after suffering amnesia or a particularly bad night of drinking. You find that you’re part of a strange experiment. You’re told that you’re in one of 100 rooms and that the door of your room is either red or blue. You’re instructed to guess which color it is. Finding a coin in your pocket you figure flipping it is as good a predictor of door color as anything else, regardless of the ratio of red to blue doors, which is unknown to you. Heads red, tails blue.

The experimenter then gives you new info. 90 doors are red and 10 doors are blue. Guess your door color, says the experimenter. Most people think, absent any other data, picking red is a 4 1/2 times better choice than letting a coin flip decide.

Now you learn that the evil experimenter had designed two different branches of experimentation. In Experiment A, ten people would be selected and placed, one each, into rooms 1 through 10. For Experiment B, 100 other people would be placed, one each, in all 100 rooms. You don’t know which room you’re in or which experiment, A or B, was conducted. The experimenter tells you he flipped a coin to choose between Experiment A, heads, and Experiment B, tails. He wants you to guess which experiment, A or B, won his coin toss. Again, you flip your coin to decide, as you have nothing to inform a better guess. You’re flipping a coin to guess the result of his coin flip. Your odds are 50-50. Nothing controversial so far.

Now you receive new information. You are in Room 5. What judgment do you now make about the result of his flip? Some will say that the odds of experiment A versus B were set by the experimenter’s coin flip, and are therefore 50-50. Call this Theory 1.

Others figure that your chance of being in Room 5 under Experiment A is 1 in 10 and under Experiment B is 1 in 100. Therefore it’s ten times more likely that Experiment A was the outcome of the experimenter’s flip. Call this Theory 2.

Still others (Theory 3) note that having been selected into a group of 100 was ten times more likely than having been selected into a group of 10, and on that basis it is ten times more likely that Experiment B was the result of the experimenter’s flip than Experiment A.

My experience with inflicting this problem on victims is that most people schooled in science – though certainly not all – prefer Theories 2 or 3 to Theory 1, suggesting they hold different forms of Bayesian reasoning. But between Theories 2 and 3, war breaks out.

Those preferring Theory 2 think the chance of having been selected into Experiment A (once it became the outcome of the experimenter’s coin flip) is 10 in 110 and the chance of being in Room 5 is 1 in 10, given that Experiment A occurred. Those who hold Theory 3 perceive a 100 in 110 chance of having been selected into Experiment B, once it was selected by the experimenter’s flip, and then a 1 in 100 chance of being in Room 5, given Experiment B. The final probabilities of being in room 5 under Theories 2 and  3 are equal (10/110 x 1/10 equals 1 in 110, vs. 100/110 x 1/100 also equals 1 in 110), but the answer to the question about the outcome of the experimenter’s coin flip having been heads (Experiment A) and tails (Experiment B) remains in dispute.  To my knowledge, there is no basis for settling that dispute. Unlike Martin Gardner’s boy-girl paradox, this dispute does not result from ambiguous phrasing; it seems a true paradox.
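Purely as an arithmetic check of the numbers in the previous paragraph, and emphatically not as a resolution of the dispute, here is a short sketch reproducing both framings.

```python
# The setup as described: a fair coin picks Experiment A (10 people, rooms
# 1-10) or Experiment B (100 people, rooms 1-100); you learn you are in Room 5.

# Theory 2 framing: condition on the coin flip, then on landing in Room 5.
p_room5_given_A = 1 / 10
p_room5_given_B = 1 / 100
posterior_A = (0.5 * p_room5_given_A) / (0.5 * p_room5_given_A + 0.5 * p_room5_given_B)
print(f"Theory 2: P(Experiment A | Room 5) = {posterior_A:.3f}")   # ~0.909

# Theory 3 framing: selection into the 110-person pool was ten times likelier
# under B, after which Room 5 was ten times less likely.
p_selected_A, p_selected_B = 10 / 110, 100 / 110
via_A = p_selected_A * p_room5_given_A     # 1/110
via_B = p_selected_B * p_room5_given_B     # 1/110
print(f"Chance of this observation via A: {via_A:.4f}, via B: {via_B:.4f}")  # equal
```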

The trouble with probability makes it all the more interesting. Is it math, philosophy, or psychology?


How dare we speak of the laws of chance. Is not chance the antithesis of all law? – Joseph Bertrand, Calcul des probabilités, 1889

Though there be no such thing as Chance in the world; our ignorance of the real cause of any event has the same influence on the understanding, and begets a like species of belief or opinion. – David Hume, An Enquiry Concerning Human Understanding, 1748

It is remarkable that a science which began with the consideration of games of chance should have become the most important object of human knowledge. – Pierre-Simon Laplace, Théorie Analytique des Probabilités, 1812

2 Comments

Actively Disengaged?

Over half of employees in America are disengaged from their jobs – 85%, according to a recent Gallup poll. About 15% are actively disengaged – so miserable that they seek to undermine the productivity of everyone else. Gallup, ADP and Towers Watson have been reporting similar numbers for two decades now. It’s an astounding claim that signals a crisis in management and the employee experience. Astounding. And it simply cannot be true.

Think about it. When you shop, eat out, sit in a classroom, meet with an accountant, hire an electrician, negotiate contracts, and talk to tech support, do you get a sense that they truly hate their jobs? They might begrudge their boss. They might be peeved about their pay. But those problems clearly haven’t led to enough employment angst and career choice regret that they are truly disengaged. If they were, they couldn’t hide it. Most workers I encounter at all levels reveal some level of pride in their performance.

According to Bersin and Associates, we spend about a billion dollars per year to cure employee disengagement. And apparently to little effect given the persistence of disengagement reported in these surveys. The disengagement numbers don’t reconcile with our experience in the world. We’ve all seen organizational dysfunction and toxic cultures, but they are easy to recognize; i.e., they stand out from the norm. From a Bayesian logic perspective, we have rich priors about employee sentiments and attitudes, because we see them everywhere every day.

How do research firms reach such wrong conclusions about the state of engagement? That’s not entirely clear, but it probably goes beyond the fact that most of those firms offer consulting services to cure the disengagement problem. Survey researchers have long known that small variations in question wording and order profoundly affect responses (e.g. Hadley Cantril, 1944). In engagement surveys, context and priming likely play a large part.

I’m not saying that companies do a good job of promoting the right people into management; and I’m not denying that Dilbert is alive and well. I’m saying that the evidence suggests that despite these issues, most employees seek mastery of vocation; and they somehow find some degree of purpose in their work.

Successful firms realize that people will achieve mastery on their own if you get out of their way. They’re organized for learning and sensible risk-taking, not for process compliance. They’ve also found ways to align employees’ goals with corporate mission, fostering employees’ sense of purpose in their work.

Mastery seems to emerge naturally, perhaps from intrinsic motivation, when people have a role in setting their goals. In contrast, purpose, most researchers find, requires some level of top-down communications and careful trust building. Management must walk the talk to bring a mission to life.

Long ago I worked on a top secret aircraft project. After waiting a year or so on an SBI clearance, I was surprised to find that despite the standard need-to-know conditions being stipulated, the agency provided a large amount of information about the operational profile and mission of the vehicle that didn’t seem relevant to my work. Sensing that I was baffled by this, the agency’s rep explained that they had found that people were better at keeping secrets when they knew they were trusted and knew that they were a serious part of a serious mission. Never before or since have I felt such a sense of professional purpose.

Being able to see what part you play in the big picture provides purpose. A small investment in the top-down communication of a sincere message regarding purpose and risk-taking can prevent a large investment in rehiring, retraining and searching for the sources of lost productivity.

2 Comments

A short introduction to small data

How many children are abducted each year? Did you know anyone who died in Vietnam?

Wikipedia explains that big data is about correlations, and that small data is either about the causes of effects, or is an inference from big data. None of that captures what I mean by small data.

Most people in my circles instead think small data deals with inferences about populations made from the sparse data from within those populations. For Bayesians, this means making best use of an intuitive informative prior distribution for a model. For wise non-Bayesians, it can mean bullshit detection.

In the early 90’s I taught a course on probabilistic risk analysis in aviation. In class we were discussing how to deal with estimating equipment failure rates where few previous failures were known when Todd, a friend who was attending the class, asked how many kids were abducted each year. I didn’t know. Nor did anyone else. But we all understood where Todd was going with the question.

Todd produced a newspaper clipping citing an evangelist – Billy Graham as I recall – who claimed that 50,000 children a year were abducted in the US. Todd asked if we thought that yielded a reasonable prior distribution.

Seeing this as a sort of Fermi problem, the class kicked it around a bit. How many kids’ pictures are on milk cartons right now, someone asked (Milk Carton Kids – remember, this was pre-internet). We remembered seeing the same few pictures of missing kids on milk cartons for months. None of us knew of anyone in our social circles who had a child abducted. How does that affect your assessment of Billy Graham’s claim?

What other groups of people have 50,000 members I asked. Americans dead in Vietnam, someone said. True, about 50,000 American service men died in Vietnam (including 9000 accidents and 400 suicides, incidentally). Those deaths spanned 20 years. I asked the class if anyone had known someone, at least indirectly, who died in Vietnam (remember, this was the early 90s and most of us had once owned draft cards). Almost every hand went up. Assuming that dead soldiers and our class were roughly randomly selected implied each of our social spheres had about 4000 members (200 million Americans in 1970, divided by 50,000 deaths). That seemed reasonable, given that news of Vietnam deaths propagated through friends-of-friends channels.

Now given that most of us had been one or two degrees’ separation from someone who died in Vietnam, could Graham’s claim possibly be true? No, we reasoned, especially since news of abductions should travel through social circles as freely as Vietnam deaths. And those Vietnam deaths had spanned decades. Graham was claiming 50,000 abductions per year.

Automobile deaths, someone added. Those are certainly randomly distributed across income, class and ethnicity. Yes, and, oddly, they occur at a rate of about 50,000 per year in the US. Anyone know someone who died in a car accident? Every single person in the class did. Yet none of us had been close to an abduction. Abductions would have to be very skewed against aerospace engineers for our car death and abduction experience to be so vastly different given their supposedly equal occurrence rates in the larger population. But the Copernican position that we resided nowhere special in the landscapes of either abductions or automobile deaths had to be mostly valid, given the diversity of age, ethnicity and geography in the class (we spanned 30 years in age, with students from Washington, California and Missouri).
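For the curious, here is a rough reconstruction of the class’s reasoning with the figures from the post plus a couple of assumptions of my own (class size and the window over which you’d recall hearing of an abduction). It’s a Fermi sketch that ignores overlap between social circles, not a careful model.

```python
# Fermi check of the 50,000-abductions-per-year claim, using the post's figures
# plus two invented parameters (class size, years of memory).
population = 200e6                       # US population, per the Vietnam comparison
social_sphere = population / 50_000      # ~4,000 people per sphere (Vietnam check)

class_size = 25                          # assumed number of students
years_of_memory = 10                     # assumed recall window

claimed_abductions_per_year = 50_000
p_person_touched = claimed_abductions_per_year * years_of_memory / population

# Expected abduction cases known, at least indirectly, across the whole class.
expected_known = class_size * social_sphere * p_person_touched
print(f"Expected cases known to the class: ~{expected_known:.0f}")
# Hundreds, under these assumptions, versus the zero the class actually observed.
```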

One way to check the veracity of Graham’s claim would have been to do a bunch of research. That would have been library slow and would  have likely still required extrapolation and assumptions about distributions and the representativeness of whatever data we could dig up. Instead we drew a sound inference from very small data, our own sampling of world events.

We were able to make good judgments about the rate of abduction, which we were now confident was very, very much lower than one per thousand (50,000 abductions per year divided by 50 million kids). Our good judgments stemmed from our having rich priors (prior distributions) because we had sampled a lot of life and a lot of people. We had rich data about deaths from car wrecks and Vietnam, and about how many kids were not abducted in each of our admittedly small circles. Big data gets the headlines, causing many of us to forget just how good small data can be.

 

2 Comments

Countable Infinity – Math or Metaphysics?

Are we too willing to accept things on authority – even in math? Proofs of the irrationality of the square root of two and of the Pythagorean theorem can be confirmed by pure deductive logic. Georg Cantor’s (d. 1918) claims on set size and countable infinity seem to me a much less secure sort of knowledge. High school algebra books (e.g., the classic Dolciani) teach 1-to-1 correspondence between the set of natural numbers and the set of even numbers as if it is a demonstrated truth. This does the student a disservice.

Following Cantor’s line of reasoning is simple enough, but it seems to treat infinity as a number, thereby passing from mathematics into philosophy. More accurately, it treats an abstract metaphysical construct as if it were math. Using Cantor’s own style of reasoning, one can just as easily show the natural and even number sets to be non-corresponding.

Cantor demonstrated a one-to-one correspondence between natural and even numbers by showing their elements can be paired as shown below:

1 <—> 2
2 <—> 4
3 <—> 6

n <—> 2n

This seems a valid demonstration of one-to-one correspondence. It looks like math, but is it? I can with equal validity show the two sets (natural numbers and even numbers) to have a 2-to-1 correspondence. Consider the following pairing. Set 1 on the left is the natural numbers. Set 2 on the right is the even numbers:

1      unpaired
2 <—> 2
3      unpaired
4 <—> 4
5      unpaired

2n -1      unpaired
2n <—> 2n

By removing all the unpaired (odd) elements from set 1, you can then pair each remaining member of set 1 with each element of set 2. It seems arguable that if a one-to-one correspondence exists between part of set 1 and all of set 2, the two whole sets cannot support a 1-to-1 correspondence. By inspection, the set of even numbers is included within the set of natural numbers and obviously not coextensive with it. Therefore Cantor’s argument, based solely on correspondence, works only by promoting one concept – the pairing of terms – while suppressing an equally obvious concept, that of inclusion. Cantor indirectly dismisses this argument against set correspondence by allowing that a set and a proper subset of it can be the same size. That allowance is not math; it is metaphysics.
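For what it’s worth, both pairings are easy to spell out on a finite prefix. A finite check settles nothing about infinite sets, of course, but it makes the two competing correspondences concrete.

```python
# The two pairings above, written out on a finite prefix of each set.
naturals = list(range(1, 11))          # set 1: 1..10
evens = [2 * n for n in naturals]      # set 2: 2..20

# Cantor-style pairing: n <-> 2n; every element of either prefix gets a partner.
one_to_one = list(zip(naturals, evens))
print(one_to_one)                      # [(1, 2), (2, 4), ..., (10, 20)]

# The inclusion view from the post: match equal values, so only the even
# members of set 1 find partners and the odd ones sit unpaired.
matched = [n for n in naturals if n in evens]
unpaired = [n for n in naturals if n not in evens]
print(matched)                         # [2, 4, 6, 8, 10]
print(unpaired)                        # [1, 3, 5, 7, 9]
```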

Digging a bit deeper, Cantor’s use of the 1-to-1 concept (often called bijection) is heavy handed. It requires that such correspondence be established by starting with sets having their members placed in increasing order. Then it requires the first members of each set to be paired with one another, and so on. There is nothing particularly natural about this way of doing things. It got Cantor into enough of a logical corner that he had to revise the concepts of cardinality and ordinality with special, problematic definitions.

Gottlob Frege and Bertrand Russell later patched up Cantor’s definitions. The notion of equipollent sets fell out of this work, along with complications still later addressed by von Neumann and Tarski. It seems to me that Cantor implies – but fails to state outright – that the existence of a simultaneous 2-to-1 correspondence (i.e., group each pair 2n-1 and 2n in set 1 with 2n in set 2) does no damage to the claim that a 1-to-1 correspondence between the two sets makes them equal in size. In other words, Cantor helped himself to an unnaturally restrictive interpretation (i.e., a matter of language, not of math) of 1-to-1 correspondence that favored his agenda. Finally, Cantor slips a broader meaning of equality on us than the strict numerical equality used in the rest of math. This is a sleight of hand. Further, his usage of the term – and concept of – size requires a special definition.

Cantor’s rule set for the pairing of terms and his special definitions are perfectly valid axioms for a mathematical system, but there is nothing within mathematics that justifies these axioms. Believing that the consequences of a system or theory justify its postulates is exactly the same as believing that the usefulness of Euclidean geometry justifies Euclid’s fifth postulate. Euclid knew this wasn’t so, and Proclus tells us Euclid wasn’t alone in that view.

Galileo seems to have had a more grounded sense of the infinite than did Cantor. For Galileo, the concrete concept of mathematical equality does not reconcile with the abstract concept of infinity. Galileo thought concepts like similarity, countability, size, and equality just don’t apply to the infinite. Did the development of calculus create an unwarranted acceptance of infinity as a mathematical entity? Does our understanding that things can approach infinity justify allowing infinities to be measured and compared?

Cantor’s model of infinity is interesting and useful, but it is a shame that it’s taught as being a matter of fact, e.g., “infinity comes in infinitely many different sizes – a fact discovered by Georg Cantor” (Science News, Jan 8, 2008).

On countable infinity we might consider WVO Quine’s position that the line between analytic (a priori) and synthetic (about the world) statements is blurry, and that no claim is immune to empirical falsification. In that light I’d argue that the above demonstration of inequality of the sets of natural and even numbers (inclusion of one within the other) trumps the demonstration of equal size by correspondence.

Mathematicians who state the equal-size concept as a fact discovered by Cantor have overstepped the boundaries of their discipline. Galileo regarded the natural-even set problem as a true paradox. I agree. Does Cantor really resolve this paradox or is he merely manipulating language?

1 Comment

Paul Feyerabend, The Worst Enemy of Science

“How easy it is to lead people by the nose in a rational way.”

A similarly named post I wrote on Paul Feyerabend seven years ago turned out to be my most popular post by far. Seeing it referenced in a few places has made me cringe, and made me face the fact that I failed to make my point. I’ll try to correct that here. I don’t remotely agree with the paper in Nature that called Feyerabend the worst enemy of science, nor do I side with the postmodernists that idolize him. I do find him to be one of the most provocative thinkers of the 20th century, brash, brilliant, and sometimes full of crap.

Feyerabend opened his profound Against Method by telling us to always remember that what he writes in the book does not reflect any deep convictions of his, but that his arguments “merely show how easy it is to lead people by the nose in a rational way.” I.e., he was more telling us what he thought we needed to hear than what he necessarily believed. In his autobiography he wrote that for Against Method he had used older material but had “replaced moderate passages with more outrageous ones.” Those using and abusing Feyerabend today have certainly forgotten what this provocateur, who called himself an entertainer, told us always to remember about him in his writings.


Any who think Feyerabend frivolous should examine the scientific rigor in his analysis of Galileo’s work. Any who find him to be an enemy of science should actually read Against Method instead of reading about him, as quotes pulled from it can be highly misleading as to his intent. My communications with some of his friends after he died in 1994 suggest that while he initially enjoyed ruffling so many feathers with Against Method, he became angered and ultimately depressed over both critical reactions against it and some of the audiences that made weapons of it. In 1991 he wrote, “I often wished I had never written that fucking book.”

I encountered Against Method searching through a library’s card catalog seeking an authority on the scientific method. I learned from Feyerabend that no set of methodological rules fits the great advances and discoveries in science. It’s obvious once you think about it. Pick a specific scientific method – say the hypothetico-deductive model – or any set of rules, and Feyerabend will name a scientific discovery that would not have occurred had the scientist, from Galileo to Feynman, followed that method, or any other.

Part of Feyerabend’s program was to challenge the positivist notion that in real science, empiricism trumps theory. Galileo’s genius, for Feyerabend, was allowing theory to dominate observation. In Dialogue Galileo wrote:

Nor can I ever sufficiently admire the outstanding acumen of those who have taken hold of this opinion and accepted it as true: they have, through sheer force of intellect, done such violence to their own senses as to prefer what reason told them over that which sensible experience plainly showed them to be the contrary.

For Feyerabend, against Popper and the logical positivists of the mid 1900s, Galileo’s case exemplified a need to grant theory priority over evidence. This didn’t sit well with the empiricist leanings of the post-war western world. It didn’t sit well with most scientists or philosophers. Sociologists and literature departments loved it. It became fuel for the fire of relativism sweeping America in the 70s and 80s and for the 1990s social constructivists eager to demote science to just another literary genre.

But in context, and in the spheres for which Against Method was written, many people – including Feyerabend’s peers from 1970 Berkeley, with whom I’ve had many conversations on the topic – conclude that the book’s goading style was a typical Feyerabendian corrective provocation to that era’s positivistic dogma.

Feyerabend distrusts the orthodoxy of social practices of what Thomas Kuhn termed “normal science” – what scientific institutions do in their laboratories. Unlike their friend Imre Lakatos, Feyerabend distrusts any rule-based scientific method at all. Instead, Feyerabend praises scientific innovation and individual creativity. For Feyerabend, science in the mid 1900s had fallen prey to the “tyranny of tightly-knit, highly corroborated, and gracelessly presented theoretical systems.” What would he say if alive today?

As with everything in the philosophy of science in the late 20th century, some of the disagreement between Feyerabend, Kuhn, Popper and Lakatos revolved around miscommunication and sloppy use of language. The best known case of this was Kuhn’s inconsistent use of the term paradigm. But they all (perhaps least so Lakatos) talked past each other by failing to differentiate different meanings of the word science, including:

  1. An approach or set of rules and methods for inquiry about nature
  2. A body of knowledge about nature
  3. An institution, culture, or community of scientists, including academic, government and corporate

Kuhn and Feyerabend in particular vacillated between science as a set of methods and science as an institution. Feyerabend certainly was referring to an institution when he said that science was a threat to democracy and that there must be “a separation of state and science just as there is a separation between state and religious institutions.” Along these lines Feyerabend thought that modern institutional science resembles the church of Galileo’s day more than it resembles Galileo.

On the matter of state control of science, Feyerabend went further than Eisenhower did in his “military industrial complex” speech, even with the understanding that what Eisenhower was describing was a military-academic-industrial complex. Eisenhower worried that a government contract with a university “becomes virtually a substitute for intellectual curiosity.” Feyerabend took this worry further, writing that university research requires conforming to orthodoxy and “a willingness to subordinate one’s ideas to those of a team leader.” Feyerabend disparaged Kuhn’s normal science as dogmatic drudgery that stifles scientific creativity.

A second area of apparent miscommunication about the history/philosophy of science in the mid 1900’s was the descriptive/normative distinction. John Heilbron, who was Kuhn’s grad student when Kuhn wrote Structure of Scientific Revolutions, told me that Kuhn absolutely despised Popper, not merely as a professional rival. Kuhn wanted to destroy Popper’s notion that scientists discard theories on finding disconfirming evidence. But Popper was describing ideally performed science; his intent was clearly normative. Kuhn’s work, said Heilbron (who doesn’t share my admiration for Feyerabend), was intended as normative only for historians of science, not for scientists. True, Kuhn felt that it was pointless to try to distinguish the “is” from the “ought” in science, but this does not mean he thought they were the same thing.

As with Kuhn’s use of paradigm, Feyerabend’s use of the term science risks equivocation. He drifts between methodology and institution to suit the needs of his argument. At times he seems to build a straw man of science in which science insists it creates facts as opposed to building models. Then again, on this matter (fact/truth vs. models as the claims of science) he seems to be more right about the science of 2019 than he was about the science of 1975.

While heavily indebted to Popper, Feyerabend, like Kuhn, grew hostile to Popper’s ideas of demarcation and falsification: “let us look at the standards of the Popperian school, which are still being taken seriously in the more backward regions of knowledge.” He eventually expanded his criticism of Popper’s idea of theory falsification to a categorical rejection of Popper’s demarcation theories and of Popper’s critical rationalism in general. Now from the perspective of half a century later, a good bit of the tension between Popper and both Feyerabend and Kuhn and between Kuhn and Feyerabend seems to have been largely semantic.

For me, Feyerabend seems most relevant today through his examination of science as a threat to democracy. He now seems right in ways that even he didn’t anticipate. He thought it a threat mostly in that science (as an institution) held complete control over what is deemed scientifically important for society. In contrast, people, as individuals or small competing groups, historically have chosen what counts as being socially valuable. In this sense science bullied the citizen, thought Feyerabend. Today I think we see a more extreme example of bullying, in the case of global warming for example, in which government and institutionalized scientists are deciding not only what is important as a scientific agenda but what is important as energy policy and social agenda. Likewise the role that neuroscience plays in primary education tends to get too much of the spotlight in the complex social issues of how education should be conducted. One recalls Lakatos’ concern about Kuhn’s confidence in the authority of “communities.” Lakatos had been imprisoned by Hungary’s Stalinist regime for revisionism. Through that experience he saw Kuhn’s “assent of the relevant community” as not much of a virtue if that community has excessive political power and demands that individual scientists subordinate their ideas to it.

As a tiny tribute to Feyerabend, about whom I’ve noted caution is due in removal of his quotes from their context, I’ll honor his provocative spirit by listing some of my favorite quotes, removed from context, to invite misinterpretation and misappropriation.

“The similarities between science and myth are indeed astonishing.”

“The church at the time of Galileo was much more faithful to reason than Galileo himself, and also took into consideration the ethical and social consequences of Galileo’s doctrine. Its verdict against Galileo was rational and just, and revisionism can be legitimized solely for motives of political opportunism.”

“All methodologies have their limitations and the only ‘rule’ that survives is ‘anything goes’.”

“Revolutions have transformed not only the practices their initiators wanted to change but the very principles by means of which… they carried out the change.”

“Kuhn’s masterpiece played a decisive role. It led to new ideas. Unfortunately it also led to lots of trash.”

“First-world science is one science among many.”

“Progress has always been achieved by probing well-entrenched and well-founded forms of life with unpopular and unfounded values. This is how man gradually freed himself from fear and from the tyranny of unexamined systems.”

“Research in large institutes is not guided by Truth and Reason but by the most rewarding fashion, and the great minds of today increasingly turn to where the money is — which means military matters.”

“The separation of state and church must be complemented by the separation of state and science, that most recent, most aggressive, and most dogmatic religious institution.”

“Without a constant misuse of language, there cannot be any discovery, any progress.”

 

__________________

Photos of Paul Feyerabend courtesy of Grazia Borrini-Feyerabend

 

 


2 Comments

Use and Abuse of Failure Mode & Effects Analysis in Business

On investigating about 80 deaths associated with the drug heparin in 2009, the FDA found that over-sulphated chondroitin with toxic effects had been intentionally substituted for a legitimate ingredient for economic reasons. That is, an unscrupulous supplier sold a counterfeit chemical costing 1% as much as the real thing and it killed people.

This wasn’t unprecedented. Gentamicin, in the late 1980s, was a similar case. Likewise Cefaclor in 1996, and again with diethylene glycol sold as glycerin in 2006.

Adulteration is an obvious failure mode of supply chains and operations for drug makers. Drug firms buying adulterated raw material had presumably conducted failure mode effects analyses at several levels. An early-stage FMEA should have seen the failure mode and assessed its effects, thereby triggering the creation of controls to prevent the process failure. So what went wrong?

The FDA’s reports on the heparin incident didn’t make public any analyses done by the drug makers. But based on the “best practices” specified by standards bodies, consulting firms, and many risk managers, we can make a good guess. Their risk assessments were likely misguided, poorly executed, gutless, and ineffective.

Abuse of FMEA - On Risk Of. Photo by Bill Storage.

Promoters of FMEAs as a means of risk analysis often cite aerospace as a guiding light in matters of risk. Commercial aviation should be the exemplar of risk management. In no other endeavor has mankind made such an inherently dangerous activity so safe as commercial jet flight.

While those in pharmaceutical risk and compliance extol aviation, they mostly stray far from its methods, mindset, and values. This is certainly the case with the FMEA, a tool poorly understood, misapplied, poorly executed, and then blamed for failing to prevent catastrophe.

In the case of heparin, a properly performed FMEA exercise would certainly have identified the failure mode. But FMEA wasn’t even the right tool for identifying that hazard in the first place. A functional hazard analysis (FHA) or Business Impact Analysis (BIA) would have highlighted chemical contamination leading to death of patients, supply disruption, and reputation damage as a top hazard in minutes. I know this for a fact, because I use drug manufacture as an example when teaching classes on FHA. First-day students identify that hazard without being coached.

FHAs can be done very early in the conceptual phase of a project or system design. They need no implementation details. They’re short and sweet, and they yield concerns to address with high priority. Early writers on the topic of FMEA explicitly identified it as something like the opposite of an FHA, the former being “bottom-up,” the latter “top-down.” NASA’s response to the USGS on the suitability of FMEAs for their needs, for example, stressed this point. FMEAs rely strongly on implementation details. They produce a lot of essential but lower-value content (essential because FMEAs help confirm which failure modes can be de-prioritized) when there is an actual device or process design.

So a failure mode of risk management is using FMEAs for purposes other than those for which they were designed. Equating FMEA with risk analysis and risk management is a gross failure mode of management.

If industry somehow stops misusing FMEAs, they then face the hurdle of doing them well. This is a challenge, as the quality of training, guidance, and facilitation of FMEAs has degraded badly over the past twenty years.

FMEAs, as promoted by the Project Management Institute, ISO 31000, and APM PRAM, to name a few, bear little resemblance to those in aviation. I know this, from three decades of risk work in diverse industries, half of it in aerospace. You can see the differences by studying sample FMEAs on the web.

It’s anyone’s guess how  FMEAs went so far astray. Some blame the explosion of enterprise risk management suppliers in the 1990s. ERM, partly rooted in the sound discipline of actuarial science, generally lacks rigor. It was up-sold by consultancies to their existing corporate clients, who assumed those consultancies actually had background in risk science, which they did not.  Studies a decade later by Protiviti and the EIU failed to show any impact on profit or other benefit of ERM initiatives, except for positive self-assessments by executives of the firms.

But bad FMEAs predated the ERM era. Adopted by US automotive industry in the 1970s, sloppy FMEAs justified optimistic warranty claims estimates for accounting purposes. While Toyota was implementing statistical process control to precisely predict the warranty cost of adverse tolerance accumulation, Detroit was pretending that multiplying ordinal scales of probability, severity, and detectability was mathematically or scientifically valid.

Citing inability to quantify failure rates of basic components and assemblies (an odd claim given the abundance of warranty and repair data), auto firms began to assign scores or ranks to failure modes rather than giving probability values between zero and one. This first appears in automotive conference proceedings around 1971. Lacking hard failure rates – if in fact they did – reliability workers could have estimated numeric probability values based on subjective experience or derived them from reliability handbooks then available. Instead they began to assign ranks or scores on a 1 to 10 scale.

In principle there is no difference between guessing a probability of 0.001 (a numerical probability value) and guessing a value of “1” on a 10 scale (either an ordinal number or a probability value mapped to a limited-range score). But in practice there is a big difference.

One difference is that people estimating probability scores in facilitated FMEA sessions usually use grossly different mental mapping processes to get from labels like “extremely likely” or “moderately unlikely” to numerical probabilities. A physicist sees “likely” for a failure mode to mean more than once per million; a drug trial manager interprets it to mean more than 5%. Neither is wrong; but if those two specialists aren’t alert to the difference, when they each judge a failure likely, there will be a dangerous illusion of communication and agreement where none exists.

Further, FMEA participants don’t agree – and often don’t know they don’t agree – on the mapping of their probability estimates into 1-10 scores.

The resultant probability scores or ranks (as opposed to P values between zero and one) are used to generate Risk Priority Numbers (RPNs), which first appeared in the American automotive industry. You won’t find RPN or anything like it in aviation FMEAs, or even in the modern automotive industry. Detroit abandoned them long ago.

RPNs are defined as the arithmetic product of a probability score, a severity score, and a detection (more precisely, the inverse of detectability) score. The explicit thinking here is that risks can be prioritized on the basis of the product of three numbers, each ranging from 1 to 10.

An implicit – but critical, and never addressed by users of RPN – assumption here is that engineers, businesses, regulators and consumers are risk-neutral. Risk neutrality, as conceived in portfolio choice theory, would in this context mean that everyone would be indifferent between two risks of the same RPN, even ones comprising very different probability and severity values. That is, an RPN formed from the scores {2,8,4} would dictate the same risk response as failure modes with RPN scores {8,4,2} and {4,4,4}, since the RPN values (products of the scores) are equal. In the real world this is never true. It is usually very far from true. Most of us are not risk-neutral; we’re risk-averse. That changes things. As a trivial example, banks might have valid reasons for caring more about a single $100M loss than one hundred $1M losses.
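A quick sketch makes the point. The score-to-value mappings below are invented purely for illustration; the punchline is that three failure modes with identical RPNs can carry wildly different expected losses.

```python
# A sketch of why equal RPNs can hide very different risks. The score-to-value
# mappings are invented for illustration; real FMEAs rarely publish them.

prob_of_score = {2: 1e-4, 4: 1e-3, 8: 1e-1}       # assumed probability per year
cost_of_score = {2: 1e6, 4: 5e6, 8: 1e8}          # assumed dollars per occurrence

failure_modes = {
    "A": {"prob": 2, "sev": 8, "det": 4},
    "B": {"prob": 8, "sev": 4, "det": 2},
    "C": {"prob": 4, "sev": 4, "det": 4},
}

for name, fm in failure_modes.items():
    rpn = fm["prob"] * fm["sev"] * fm["det"]       # 64 in every case
    expected_loss = prob_of_score[fm["prob"]] * cost_of_score[fm["sev"]]
    print(f"{name}: RPN={rpn}, expected loss ~ ${expected_loss:,.0f}/year")
# All three share RPN = 64, yet their expected annual losses differ by orders
# of magnitude, and a risk-averse decision maker would rank them differently still.
```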

Beyond the implicit assumption of risk-neutrality, RPN has other problems. As mentioned above, both cognitive and group-dynamics problems arise when FMEA teams attempt to model probabilities as ranks or scores. Similar difficulties arise with scoring the cost of a loss, i.e., the severity component of RPN. Again there is the question of why, if you know the cost of a failure (in dollars, lives lost, or patients not cured), you would convert a valid measurement into a subjective score (granting, for sake of argument, that risk-neutrality is justified). Again the answer is to enter that score into the RPN calculation.

Still more problematic is the detectability value used in RPNs. In a non-trivial system or process, detectability and probability are not independent variables. And there is vagueness around the meaning of detectability. Is it the means by which you know the failure mode has happened, after the fact? Or is there an indication that the failure is about to happen, such that something can be observed, thereby preventing the failure? If the former, detection is irrelevant to risk of failure; if the latter, the detection should be operationalized in the model of the system. That is, if a monitor (e.g., a brake fluid level check) is in a system, the monitor is a component with its own failure modes and exposure times, which impact its probability of failure. This is how aviation risk analysis models such things. But not the Project Management Institute.

A simple summary of the problems with scoring, ranking and RPN is that adding ambiguity to a calculation necessarily reduces precision.

I’ve identified several major differences between the approach to FMEAs used in aviation and the approach of those who claim to be behaving like aerospace. They are not. Aviation risk analysis has reduced risk by a factor of roughly a thousand, based on fatal accident rates since aviation risk methods were developed. I don’t think the PMI sees similar results from its adherents.

A partial summary of failure modes of common FMEA processes includes the following, based on the above:

  • Equating FMEA with risk assessment
  • Confusing FMEA with Hazard Analysis
  • Viewing the FMEA as a Quality (QC) function
  • Insufficient rigor in establishing probability and severity values
  • Unwarranted (and implicit) assumption of risk-neutrality
  • Unsound quantification of risk (RPN)
  • Confusion about the role of detection

The corrective action for most of these should be obvious, including operationalizing a system’s detection methods, using numeric (non-ordinal) probability and cost values (even if estimated) instead of masking ignorance and uncertainty with ranking and scoring, and steering clear of Risk Priority Numbers and the Project Management Institute.

2 Comments

Alternative Energy News Oct 2019

This is not news about alternative energy. It is alternative news about energy. I.e., it is energy news of the sort that may not necessarily fit the agendas of MSNBC, Fox News, and CNN, so it is likely to escape common knowledge.

North Carolina Energy Company Finds Solar Power Actually Increases Pollution

Duke is asking North Carolina regulators to ease air quality emission limits for some of Duke’s combustion turbine facilities. The utility is trying to reduce air pollution it says is due to the increased penetration of solar power. North Carolina ranks second in the nation, behind only California, in the amount of installed solar plants. Duke’s problem shows what happens when basic science collides with operational reality. It turns out that when zero-emission nuclear plants are dialed back to make room for solar, greenhouse gas-emitting plants must be employed to give nuclear plants time to ramp back up when the sun goes down.

China and Russia plan nuclear power projects in the Arctic region

China intends to cooperate with Russia to develop nuclear power and wind power projects in the Arctic region. China and Russia will deepen energy cooperation. Ou Xiaoming, chief representative of China State Grid Corporation, indicated that in addition to the cooperative agreement to build nuclear reactors in China, the two countries will also develop Arctic wind energy resources.

Orsted Lowers Offshore Wind Output Forecasts, Warns of Industry-Wide Problem

The offshore wind industry has a problem with how it forecasts the output of projects, industry leader Ørsted has warned. Denmark’s Ørsted issued a statement alongside its Q3 results explaining that it was downgrading its anticipated internal rate of return for several projects. The underlying issue is an underestimation of wake and blockage effects.

Why France is eyeing nuclear power again

After years of backing away from nuclear power, France suddenly wants to build six huge reactors. The third-generation design produces enough electricity to supply 1.5 million people, and automatically shuts down and cools in the event of an accident.

China Zhangzhou NPP launch construction

On October 16, CNNC announced that construction of a Hualong No. 1 reactor had started in Zhangzhou, Fujian Province. The project plans to build six million-kilowatt-class third-generation nuclear power units. Two units in Phase I will feature Hualong No. 1 technology. In addition, Zhangzhou Nuclear Power is a large-scale clean energy base planned by the China National Nuclear Corporation in Fujian Province, aiming at exploring a new paradigm for nuclear power development.

Thorner: Gen IV Nuclear Energy Is Clean, Efficient and Plentiful

If a Gen IV reactor gets too hot, it automatically cools on its own. This all happens because of gravity—no pumps, external power, or human intervention is required. Existing nuclear waste becomes impotent through the Gen IV process. Gen IV reactors can also consume traditional fuel, and no weapons-grade material byproduct will result.

Mototaka Nakamura breaks ranks with AGW

Nakamura – who has worked at MIT, the Georgia Institute of Technology, NASA, the Jet Propulsion Laboratory, the California Institute of Technology, and Duke University – reports that global mean temperatures before 1980 are based on untrustworthy data, and that today's "global warming science" is built on the work of a few climate modelers who claim to have demonstrated that human-derived CO2 emissions are the cause of recently rising temperatures and "have then simply projected that warming forward." Every climate researcher since has taken the results of those original models as a given, says Nakamura, and we are now at the stage where merely testing their validity is regarded as heresy.

Current Costs of British Renewables Subsidies per Household

The current annual British subsidy is about £9 billion, and the grand total for the years 2017 to 2024 will come to nearly £70 billion. The linked piece gives details of the environmental levies and subsidies in the UK.

Bill Gates: Fossil-Fuel Divestment Has ‘Zero’ Impact On Climate

Fossil-fuel divestment is a waste of time, according to Gates. While it may come as a shock to climate activists who claim that refusing to invest in oil and coal will help the planet, Gates observed, "divestment, to date, probably has reduced about zero tonnes of emissions… It's not like you've capital-starved [the] people making steel and gasoline."


What is a climate denier?

"Climate change denier," "climate denial," and similar terms peaked in usage, according to Google Trends data, at the last presidential election. Usage today is well below those levels but, based on trends in the last week, is heading for a new high. The obvious meaning of climate change denial, it seems to me, would be asserting either that the climate is not changing or that people are not responsible for climate change. But that is clearly not how the term is used.

Patrick Moore, a once influential Greenpeace member, is often called a denier by climate activists. Niall Ferguson says he doesn't deny anthropogenic climate change, but is attacked as a denier. After a Long Now Foundation talk by Saul Griffith, I heard Saul accused of being a denier. Even centenarian James Lovelock, the originator of Gaia theory, who now believes his former position was alarmist ("I've grown up a bit since then"), is called a denier at California green energy events, despite his very explicit denial of being a denier.

Trying to look logically at the spectrum of propositions one might affirm or deny, I come up with the following possible claims. You can no doubt fine-tune these or make them more granular.

  1. The earth's climate is changing (typically, that average temperature is increasing).
  2. The earth’s average temperature has increased more rapidly since the industrial revolution.
  3. Some increase in warming rate is caused by human activity.
  4. The increase in warming rate is overwhelmingly due to humans (as opposed to, e.g., solar activity and orbital factors).
  5. Anthropogenic warming poses imminent threat to human life on earth.
  6. The status quo (greenhouse gas production) will result in human extinction.
  7. The status quo poses significant threat (even existential threat) and the proposed renewables policy will mitigate it.
  8. Nuclear energy is not an acceptable means of reducing greenhouse gas production.

No one with a command of high school math and English could deny claim 1. Nearly everything is changing at some level. We can argue about what constitutes significant change. That’s a matter of definition, of meaning, and of values.

Claim 2 is arguable. It depends on having a good bit of data. We can argue about data sufficiency, accuracy, and the interpretation of noisy measurements.

Claim 3 relies much more on theory (to establish causation) than on meaning/definitions and facts/measurements, as is the case with 1 and 2. Claim 4 is a strong version of claim 3, requiring much more scientific analysis and theorizing.

While informed by claims 1-4, claims 5 and 6 (imminent threat, certain doom) are mostly outside the strict realm of science. They differ on the severity of the threat, and they rely on risk modeling, engineering feasibility analyses, and economics. For example, could we afford to pay for the mitigations that could reverse the effects of continued greenhouse gas release, and is geoengineering feasible? Claim 6 is held by Greta Thunberg ("we are in the beginning of a mass extinction"). Al Gore seems to sit somewhere between 5 and 6.

Claim 7 (renewables can cure climate change) is the belief held by followers of the Green New Deal.

While unrelated to the factual components (true or false) of claims 1-4 and the normative components of claims 5-7, claim 8 (fission is not an option) seems to be closely aligned with claim 6: vocal supporters of 6 tend to be proponents of 8, and the connection appears to be ideological. Yet it seems logically impossible to hold claims 6 and 8 simultaneously, since neither the probability nor the severity component of nuclear risk can exceed claim 6's probability (certainty) and severity (extinction), as the toy comparison below illustrates. Naomi Oreskes nevertheless accused James Hansen of being a denier because he endorsed nuclear power.
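For concreteness, here is a toy expected-loss comparison in Python. The probabilities and severities are invented placeholders chosen only to make the logical point; they are not estimates of actual nuclear risk.

```python
# A toy expected-loss comparison of claim 6 (certain extinction under the
# status quo) with a deliberately pessimistic view of nuclear power.
# All probabilities and severities are invented placeholders, not estimates.

def expected_loss(probability: float, severity: float) -> float:
    """Expected loss = probability of the outcome times its severity."""
    return probability * severity

# Claim 6 as stated: extinction (severity normalized to 1.0) is certain.
status_quo = expected_loss(probability=1.0, severity=1.0)

# Pessimistic nuclear scenario: 10% chance of a worst-case outcome judged
# half as severe as extinction.
nuclear = expected_loss(probability=0.1, severity=0.5)

print(f"status quo expected loss: {status_quo:.2f}")  # 1.00
print(f"nuclear expected loss:    {nuclear:.2f}")     # 0.05

# Anyone holding claim 6 should prefer even this pessimistic nuclear
# scenario; holding claims 6 and 8 together requires judging nuclear risk
# worse than certain extinction.
assert nuclear < status_quo
```

Even a deliberately pessimistic nuclear scenario comes out far ahead of the certain extinction that claim 6 asserts, which is why holding both claims looks incoherent.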

Beliefs about the claims need not be binary. For each claim, one could hold a belief anywhere on a range from certitude to barely possible, or judge the claim unknown or unknowable. Fred Singer, for example, accepts that CO2 alters the climate, but allows that its effect could be cooling rather than warming. Singer's uncertainty stems from his perception that the empirical data does not jibe with global-warming theory. It's not that he's lukewarm; he finds the question presently unknowable. This is a form of denial (see Freedman and McKibben below) that green activists, blissfully free of epistemic humility and doubt, find particularly insidious.

Back to the question of what counts as a denier. I once naively thought that "climate change denier" applied only to claims 1-4. After all, the obvious literal meaning of the words would apply only to claims 1 and 2. We can add 3 and 4 if we allow that those using the term employ it as a short form of "anthropogenic climate-change denier."

Clearly, this is not the popular usage, however. I am regularly called a denier at green-tech events for arguing against claim 7 (renewables as cure). Whether or not anthropogenic climate change exists, and regardless of the size of the threat, wind and solar cannot power a society anything like the one we live in. I'm an engineer, I specialized in thermodynamics and energy conversion, that's my argument, and I'm happy to debate it.

Green activists' insistence that we hold claim 8 (no fission) to be certain, in my view, calls their program and motivations into question, for reasons including the above-mentioned logical incompatibility of claims 6 and 8 (certain extinction without change, yet fission is too dangerous).

I've rarely heard anyone deny claims 1-3 (climate change exists and humans play a role). Not even Marc Morano denies these. I don't think any of our kids, indoctrinated into green policy at school, have any idea that the people they're taught to call deniers do not deny climate change.

In the last year I've seen a slight increase in the number of professional scientists who deny claim 4 (overwhelmingly human cause), but the majority of scientists in relevant fields seem to agree with claim 4. Patrick Moore, Caleb Rossiter, Roger A. Pielke and Don Easterbrook seem to deny claim 4. Leighton Steward denies it on the grounds that climate change is the cause of rising CO2 levels, not its effect.

Some of the key targets of climate activism don't appear to deny the basic claims of climate change. Among these are Judith Curry, Richard Tol, Ivar Giaever, Roy Spencer, Robert M. Carter, Denis Rancourt, John Theon, Scott Armstrong, Patrick Michaels, Will Happer, and Philip Stott. Anthony Watts and Matt Ridley are very explicit about accepting claim 4 (mostly human-caused) but denying claims 5 and 6 (significant threat or extinction). William M. Briggs called himself a climate denier, but meant by it that the concept of climate, as understood by most people, is itself invalid.

More and more people who disagree with the greens' proposed policy implementation are labeled deniers (as with Oreskes calling Hansen a denier because he supports fission). Andrew Freedman seemed to implicitly acknowledge the expanding use of the denier label in a recent Mashable piece, in which he warned of green opponents who were moving "from outright climate denial to a more subtle, insidious and risky form." Bill McKibben, particularly immune to the nuances of scientific method and rational argument, called "renewables denial" "at least as ugly" as climate denial.

Opponents argue that the green movement is a religious cult. Arguing over matters of definition has limited value, but greens are prone to apocalyptic rants that would make Jonathan Edwards blush, a focus on sin and redemption, condemnation of heresy, and attempts to legislate right behavior. Last week The Conversation said it was banning not only climate denial but "climate skepticism." I was amused at an aspect of the religiosity of the greens in both Freedman's and McKibben's complaints: each insists that being partially sinful warrants more condemnation than committing the larger sin.

So because you are lukewarm, and neither hot nor cold, I will spit you out of My mouth. – Revelation 3:16 (NAS)

Refusal to debate crackpots is understandable, but Michael Mann's refusal to debate "deniers" (he refused even to share his data when ordered to do so by the Supreme Court of British Columbia) looks increasingly like fear of engaging worthy opponents – through means other than suing them.

On his liberal use of the "denier" accusation, one exchange provides some levity. In a House committee session, Mann denies calling anyone a denier and says he's been misrepresented. Judith Curry (the denier) responds, "it's in your written testimony." On page 6 of Mann's testimony, he says "climate science denier Judith Curry," adding that "I use the term carefully."

I deny claims 6 through 8. The threat is not existential; renewables won’t fix it; and fission can.

Follow this proud denier on twitter.

 

 


Yes, Greta, let’s listen to the scientists

Young people around the world protested for climate action last week. 16-year-old Greta Thunberg implored Congress to "listen to the scientists" about climate change and fix it so her generation can thrive.

OK, let's listen to them. Assume for sake of argument that we understand "them" to be a large majority of all relevant scientists, and that they say with one voice that humans are materially affecting climate. And let's take the IPCC's projections of a 3.4 to 4.8 degree C rise by 2100 in the absence of policy changes. While activists and politicians report that scientific consensus exists, some reputable scientists dispute this. But for sake of discussion assume such consensus exists.

That temperature rise, scientists tell us, would change sea levels and would warm cold regions more than warm regions. "Existential crisis," said Elizabeth Warren on Tuesday. Would that in fact pose an existential threat? That is, would it cause human extinction? That question probably falls much more in the realm of engineering than in science. But let's assume Greta might promote (or demote, depending on whether you prefer expert generalists to expert specialists) engineers to the rank of scientists.

The World Bank 4 Degrees – Turn Down the Heat report is often cited as concluding that uncontrolled human climate impact threatens the human race. It does not. It describes Sub-Saharan African food production risk, and southeast Asian water scarcity and coastal productivity risk. It speaks of wake-up calls and tipping points, and, lacking the ability to quantify risks, assumes several worst-imaginable cases of cascade effects, while rejecting all possibility that innovation and engineering can, for example, mitigate water scarcity problems before they result in health problems. The language and methodology of this report are much closer to the realm of sociology than to that of people we usually call scientists. Should sociology count as science or as philosophy and ethics? I think the latter, and I think the World Bank's analysis reeks of value-laden theory and theory-laden observations. But for sake of argument let's grant that climate Armageddon, true danger to survival of the race, is inevitable without major change.

Now given this impending existential crisis, what can the voice of scientists do for us? Those schooled in philosophy, ethics, and the soft sciences might recall the is-ought problem, also known as Hume’s Guillotine, in honor of the first writer to make a big deal of it. The gist of the problem, closely tied to the naturalistic fallacy, is that facts about the world do not and cannot directly cause value judgments. And this holds regardless of whether you conclude that moral truths do or don’t exist. “The rules of morality are not conclusions of our reason,” observed Hume. For a more modern philosophical take on this issue see Simon Blackburn’s Ethics.

Strong statements on the non-superiority of scientists as advisers outside their realm come from scientists like Richard Feynman and Wilfred Trotter (see below).

But let’s assume, for sake of argument, that scientists are the people who can deliver us from climate Armageddon. Put them on a pedestal, like young Greta does. Throw scientism caution to the wind. I believe scientists probably do have more sensible views on the matter than do activists. But if we’re going to do this – put scientists at the helm – we should, as Greta says, listen to those scientists. That means the scientists, not the second-hand dealers in science – the humanities professors, pandering politicians, and journalists with agendas, who have, as Hayek phrased it, absorbed rumors in the corridors of science and appointed themselves as spokesmen for science.

What are these scientists telling us to do about climate change? If you think they're advising us to equate renewables with green, as young protesters have been taught to do, then you're listening not to the scientists but to second-hand dealers of misinformed ideology who do not speak for science. How many scientists think that renewables – at any scale that can put a real dent in fossil fuel use – are anything remotely close to green? What scientist thinks utility-scale energy storage can be protested and legislated into existence by 2030? How many scientists think uranium is a fossil fuel?

The greens, whose plans for energy are not remotely green, have set things up so that sincere but uninformed young people like Greta have only one choice – to equate climate change mitigation with what they call renewable energy. Even under Mark Jacobson's grossly exaggerated claims about the efficiency and feasibility of electricity generation from renewables, Greta and her generation would shudder at the environmental devastation a renewables-only energy plan would yield.

Where is the green cry for people like Greta to learn science and engineering so they can contribute to building the world they want to live in? "Why should we study for a future that is being taken away from us?" asked Greta. One good reason 16-year-olds might do this is that by 2025 they could have an engineering degree and do real work on energy generation and distribution. Climate Armageddon will not happen by 2025.

I feel for Greta, for she’s been made a stage prop in an education-system and political drama that keeps young people ignorant of science and engineering, ensures they receive filtered facts from specific “trustworthy” sources, and keeps them emotionally and politically charged – to buy their votes, to destroy capitalism, to rework political systems along a facile Marxist ideology, to push for open borders and world government, or whatever the reason kids like her are politically preyed upon.

If the greens really believed that climate Armageddon were imminent – given that the net renewable contribution to world energy is still less than 1% – they might consider the possibility that gas is far better than coal in the short run, and that nuclear risks are small compared to the human extinction they are absolutely certain is upon us. If the greens' real concern were energy and the environment, they would encourage Greta to listen to scientists like physics Nobel laureate Ivar Giaever, who says climate alarmism is garbage, and then to identify the points on which Giaever is wrong. That's what real scientists do.

But it isn't about that, is it? It's not really about science or even about climate. As Saikat Chakrabarti, former chief of staff for Ocasio-Cortez, admitted: "the interesting thing about the Green New Deal is it wasn't originally a climate thing at all." "Because we really think of it as a how-do-you-change-the-entire-economy thing," he added. To be clear, Greta did not endorse the Green New Deal, but she is the movement's pawn.

Frightened, indoctrinated, science-ignorant kids are really easy to manipulate and exploit. Religions – particularly those that silence dissenters, brand heretics, and preach with righteous indignation of apocalypses that fail to happen – have long understood this. The green religion understands it too.

Go back to school, kids. You can’t protest your way to science. Learn physics, not social studies – if you can – because most of your teachers are puppets and fools. Learn to decide for yourself who you will listen to.




I believe that a scientist looking at nonscientific problems is just as dumb as the next guy — and when he talks about a nonscientific matter, he will sound as naive as anyone untrained in the matter. – Richard Feynman, The Value of Science, 1955.

Nothing is more flatly contradicted by experience than the belief that a man, distinguished in one of the departments of science is more likely to think sensibly about ordinary affairs than anyone else. – Wilfred Trotter, Has the Intellect a Function?, 1941

 
