Archive for category Philosophy
Despising Derrida, Postmodern Scapegoat
Posted by Bill Storage in Philosophy on May 10, 2021
There is a trend in conservative politics to blame postmodernism for everything that is wrong with America today. Meanwhile conservatives say a liberal who claims that anything is wrong with the USA should be exiled to communist Canada. Postmodernism in this context is not about art and architecture. It is a school of philosophy – more accurately, criticism – that not long ago was all the rage in liberal education and seems to have finally reached business and government. Right-wing authors tie it to identity politics, anti-capitalism, anti-enlightenment and wokeness. They’re partly right.
Postmodernism challenged the foundations of knowledge, science in particular, arguing against the univocity of meaning. Postmodernists think that there is no absolute truth, no certain knowledge. That’s not the sort of thing Republicans like to hear.
Deconstruction, an invention of Jacques Derrida, was a major component of heyday postmodernism. Deconstruction dug into the fine points of the relationship between text and meaning, something like extreme hermeneutics (reading between the lines, roughly). Richard Rorty, the leading American postmodernist, argued that there can be no unmediated expression of any non-linguistic entity. And what does that mean?
It means in part that there is no “God’s eye view” or “view from nowhere,” at least none that we have access to. Words cannot really hook onto reality for several reasons. There are always interpreters between words and reality (human interpreters), and between you and me. And we really have no means of knowing whether your interpretation is the same as mine. How could we test such a thing? Only by using words on each other. You own your thoughts but not your words once they’ve left your mouth or your pen. They march on without you. Never trust the teller; trust the tale, said D.H. Lawrence. Derrida took this much farther, exploring “oppositions inside text,” which he argued, mostly convincingly, can be found in any nontrivial text. “There is nothing outside the text,” Derrida proclaimed.
Derrida was politically left but not nearly as left as his conservative enemies pretended. Communists had no use for Derrida. Conservatives outright despised him. Paul Gross and Norman Levitt spent an entire chapter deriding Derrida for some inane statements he made about Einstein and relativity back before Derrida was anyone. In Higher Superstition: The Academic Left and Its Quarrels with Science, they attacked from every angle, making much of Derrida’s association with some famous Nazis. This was a cheap shot having no bearing on the quality of Derrida’s work.
Worse still, Gross and Levitt attacked the solid aspects of postmodern deconstruction:
“The practice of close, exegetical reading, of hermeneutics, is elevated and ennobled by Derrida and his followers. No longer is it seen as a quaint academic hobby-horse for insular specialists, intent on picking the last meat from the bones of Jane Austen and Herman Melville. Rather, it has now become the key to comprehension of the profoundest matters of truth and meaning, the mantic art of understanding humanity and the universe at their very foundation.”
There was, and is, plenty of room between Jane Austen hermeneutics and arrogantly holding that nothing has any meaning except that which the god of deconstruction himself has tapped into. Yes, Derrida the man was an unsavory and pompous ass, and much of his writing was blustering obscurantism. But Derrida-style deconstruction has great value. Ignore Derrida’s quirks, his arrogance, and his political views. Ignore “his” annoying scare quotes and “abuse” of “language.” Embrace his form of deconstruction.
Here’s a simple demo of oppositions inside text. Hebrews 13 tells us to treat people with true brotherly love, not merely out of adherence to religious code. “Continue in brotherly love. Do not neglect to show hospitality to strangers, for by so doing some have entertained angels without knowing it.” The Hebrews author has embedded a less pure motive in his exhortation – a favorable review from potential angels in disguise. Big Angel is watching you.
Can conservatives not separate Derrida from his work, and his good work from his bad? After all, they are the ones who think that objective criteria can objectively separate truth from falsehood, knowledge from mere belief, and good (work) from bad.
Another reason postmodernists say the distinction between truth and falsehood is fatally flawed – as is in fact the whole concept of truth – is the deep form of the “view from nowhere” problem. This is not merely the impossibility of neutrality in journalism. It is the realization that no one can really evaluate a truth claim by testing its correspondence to reality – because we have no unmediated access to the underlying reality. We have only our impressions and experience. If everyone is wearing rose-colored glasses that cannot be removed, we can’t know whether reality is white or is rose-colored. Thus falls the correspondence theory of truth.
Further, the coherence theory of truth is similarly flawed. In this interpretation of truth, a statement is judged likely true if it coheres with a family of other statements accepted as true. There’s an obvious bootstrapping problem here. One can imagine a large, coherent body of false claims. They hang together, like the elements of a tall tale, but aren’t true.
Beyond correspondence and coherence, we’re basically out of defenses for Truth with a capital T. There are a few other theories of truth, but they more or less boil down to variants on these two core interpretations. Richard Rorty, originally an analytic philosopher (the kind that studies math and logic and truth tables), spent a few decades poking all aspects of the truth problem, borrowing a bit from historian of science Thomas Kuhn. Rorty extended what Kuhn had only applied to scientific truth to truth in general. Experiments – and experience in the real world – only provide objective verification of truth claims if your audience (or opponents) agree that it does. For Rorty, this didn’t mean there was no truth out there, but it meant that we don’t have any means of resolving disputes over incompatible truth claims derived from real world experience. Applying Kuhn to general knowledge, Truth is merely the assent of the relevant community. Rorty’s best formulation of this concept was that truth is just a compliment we pay to claims that satisfy our group’s validation criteria. Awesome.
Conservatives cudgeled the avowed socialist Rorty as badly as they did Derrida. Dinesh D’Souza saw Rorty as the antichrist. Of course conservatives hadn’t bothered to actually read Rorty any more than they had bothered to read Derrida. Nor had conservatives ever read a word from Michel Foucault, another postmodern enemy of all things seen as decent by conservatives. Foucault was once a communist. He condoned sex between adults and consenting children. I suspect some religious conservatives secretly agree. He probably had Roman Polanski’s ear. He politicized sexual identity – sort of (see below). He was a moral relativist; there is no good or bad behavior, only what people decide is good for them. Yes, Foucault was a downer and a creep, but some of his ideas on subjectivity were original and compelling.
The conservatives who hid under their beds from Derrida, Rorty and Foucault did so because they relied on the testimony of authorities who otherwise told them what they wanted to hear about postmodernism. Thus they missed out on some of the most original insights about the limitations of what it is possible to know, what counts as sound analytical thinking, and the relationship between the teller and the hearer. Susan Sontag, an early critic of American exceptionalism and a classic limousine liberal, famously condemned interpretation. But she emptied the tub leaving the baby to be found by nuns. Interpretation and deconstruction are useful, though not the trump card the postmodernism founders originally thought they had. They overplayed their hand, but there was something in that hand.
Postmodernists, in their critique of science, thought scientists were incapable of sorting through evidence because of their social bias, their interest, as Marxists like to say. They critically examined science – in a manner they judged to be scientific, oddly enough. They sought to knock science down a peg. No objective truth, remember. Postmodern social scientists found that interest pervaded hard science and affected its conclusions. These social scientists, using scientific methods, were able to sort through interest in the way that other scientists could not sort through evidence. See a problem here? Their findings were polluted by interest.
When a certain flavor of the vegan, steeped in moral relativism, argues that veganism is better for your health, and, by the way, it is good for the planet, and, by the way, animals have rights, and, by the way, veganism is what our group of social activists do…, then I am tempted to deploy some deconstruction. We can’t know motives, some say. Or can we? There is nothing outside the text. Can an argument reasonably flow from multiple independent reasons? We can be pretty sure that some of those reasons were backed into from a conclusion received from the relevant community. Cart and horse are misconfigured.
Conservatives didn’t read Derrida, Foucault and Rorty, and liberals only made it through chapter one. If they had read further they wouldn’t now be parroting material from the first week of Postmodernism 101. They wouldn’t be playing the village postmodernist.
Foucault, patron saint of sexual identity among modern liberal academics, himself offered that to speak of homosexuals as a defined group was historically illiterate. He opined that sexual identity was an absurd basis to form one’s personal identity. They usually skip that part during protest practice. The political left in 2021 exists at the stage of postmodern thought before the great postmodernists, Derrida and crew, realized that the assertion that it is objectively true that nothing is objectively true is more than a bit self-undermining. They missed a boat that sailed 50 years back. Postmodern thought, applied to postmodernism, destroys postmodernism as a program. But today its leading adherents don’t know it. On the death of Postmodernism with a capital P we inherited some good tools and perspectives. But the present postmodern evangelists missed the point where logic flushed the postmodern program down the same drain where objective truth had gone. They are the Sunday Christians, the Cafeteria Catholics of postmodernism.
Richard Rorty, a career socialist, late in his life, using postmodern reasoning, took moral relativism to its logical conclusion. He realized that the implausibility of moral absolutism did not support its replacement by moral relativism. The former could be out without the latter being in. If two tribes hold incommensurable “truths,” it is illogical for either to conclude the other is equally correct. After all, each reached its conclusion based on the evidence and on what that community judged to be sound reasoning. It would be hypocritical or incoherent to be less resolved about a conclusion merely because a group with whom you share no moral or epistemic values concluded otherwise. That reasoning has also escaped the academic left. This was the ironic basis for Rorty’s intellectual defense of ethnocentrism, which got him, once the most prominent philosopher in the world, booted from academic prominence, deleted from libraries, and erased from history.
Rorty’s 1970s socialist side does occasionally get trotted out by The New Yorker to support identity politics whenever needed, despite his explicit rejection of that concept by name. His patriotic side, which emerged from his five-decade pursuit of postmodern thought, gets no coverage in The New Republic or anywhere else. National pride, Rorty said, is to countries what self-respect is to individuals – a necessary condition for self-improvement. Hearing that could put some freshpersons in the campus safe space for a few days. Are the kittens ready?
Postmodern sock puppets, Derrida, Foucault, and Rorty are condemned by conservatives and loved by liberals. Both read into them whatever they want and don’t want to hear. Appropriation? Or interpretation?
Derrida would probably approve. He is “dead.” And he can make no “claim” to the words he “wrote.” There is nothing outside the text.
The Trouble with Doomsday
Posted by Bill Storage in Philosophy, Probability and Risk on February 4, 2020
Doomsday just isn’t what it used to be. Once the domain of ancient apologists and their votaries, the final destiny of humankind now consumes probability theorists, physicists, and technology luminaries. I’ll give some thoughts on probabilistic aspects of the doomsday argument after a brief comparison of ancient and modern apocalypticism.
Apocalypse Then
The Israelites were enamored of eschatology. “The Lord is going to lay waste the earth and devastate it,” wrote Isaiah, giving few clues about when the wasting would come. The early Christians anticipated an imminent end of days. Matthew 16:28: some of those who are standing here will not taste death until they see the Son of Man coming in His kingdom.
From late antiquity through the middle ages, preoccupation with the Book of Revelation led to conflicting ideas about the finer points of “domesday,” as it was called in Middle English. The first millennium brought a flood of predictions of, well, flood, along with earthquakes, zombies, lakes of fire and more. But a central Christian apocalyptic core was always beneath these varied predictions.
Right up to the enlightenment, punishment awaited the unrepentant in a final judgment that, despite Matthew’s undue haste, was still thought to arrive any day now. Disputes raged over whether the rapture would precede the tribulation or would follow it, the proponents of each view armed with supporting scripture. Polarization! When Christianity began to lose command of its unruly flock in the 1800’s, Nietzsche wondered just what a society of non-believers would find to flog itself about. If only he could see us now.
Apocalypse Now
Our modern doomsday riches include options that would turn an ancient doomsayer green. Alas, at this eleventh hour we know nature’s annihilatory whims, including global pandemic, supervolcanoes, asteroids, and killer comets. Still in the Acts of God department, more learned handwringers can sweat about earth orbit instability, gamma ray bursts from nearby supernovae, or even a fluctuation in the Higgs field that evaporates the entire universe.
As Stephen Hawking explained bubble nucleation, the Higgs field might be metastable at energies above a certain value, allowing a region of false vacuum to undergo catastrophic vacuum decay and sending a bubble of true vacuum expanding outward at the speed of light. This might have started eons ago, arriving at your doorstep before you finish this paragraph. Harold Camping, eat your heart out.
Hawking also feared extraterrestrial invasion, a view hard to justify with probabilistic analyses. Glorious as such cataclysms are, they lack any element of contrition. Real apocalypticism needs a guilty party.
Thus anthropogenic climate change reigned for two decades with no credible competitors. As self-inflicted catastrophes go, it had something for everyone. Almost everyone. Verily, even Pope Francis, in a covenant that astonished adherents, joined – with strong hand and outstretched arm – leftists like Naomi Oreskes, who shares little else with the Vatican, ideologically speaking.
While Global Warming is still revered, some prophets now extend the hand of fellowship to some budding successor fears, still tied to devilries like capitalism and the snare of scientific curiosity. Bioengineered coronaviruses might be invading as we speak. Careless researchers at the Large Hadron Collider could set off a mini black hole that swallows the earth. So some think anyway.
Nanotechnology now gives some prominent intellects the willies too. My favorite in this realm is Gray Goo, a catastrophic chain of events involving molecular nanobots programmed for self-replication. They will devour all life and raw materials at an ever-increasing rate. How they’ll manage this without melting themselves due to the normal exothermic reactions tied to such processes is beyond me. Global Warming activists may become jealous, as the very green Prince Charles himself now diverts a portion of the crown’s royal dread to this upstart alternative apocalypse.
My cataclysm bucks are on full-sized Artificial Intelligence though. I stand with chief worriers Bill Gates, Ray Kurzweil, and Elon Musk. Computer robots will invent and program smarter and more ruthless autonomous computer robots on a rampage against humans seen by the robots as obstacles to their important business of building even smarter robots. Game over.
The Mathematics of Doomsday
The Doomsday Argument is a mathematical proposition arising from the Copernican principle – a trivial application of Bayesian reasoning – wherein we assume that, lacking other info, we should find ourselves, roughly speaking, in the middle of the phenomenon of interest. Copernicus didn’t really hold this view, but 20th century thinkers blamed him for it anyway.
Applying the Copernican principle to human life starts with the knowledge that we’ve been around for 200,000 years, during which 60 billion of us have lived. Copernicans then justify the belief that half the humans that will ever have lived remain to be born. With an expected peak earth population of 12 billion, we might, using this line of calculation, expect the human race to go extinct in a thousand years or less.
Adding a pinch of statistical rigor, some doomsday theorists calculate a 95% probability that the total number of humans who will ever live is less than 20 times the number who have lived so far. Positing individual life expectancy of 100 years and 12 billion occupants, the earth will house humans for no more than 10,000 more years.
That’s the gist of the dominant doomsday argument. Notice that it is purely probabilistic. It applies equally to the Second Coming and to Gray Goo. However, its math and logic are both controversial. Further, I’m not sure why its proponents favor population-based estimates over time-based estimates. That is, it took a lot longer than 10,000 years, the proposed P = .95 extinction term, for the race to arrive at our present population. So why not place the current era in the middle of the duration of the human race, thereby giving us another 200,000 years? That’s quite an improvement on the 10,000 year prediction above.
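For the curious, the arithmetic behind both estimates fits on one screen. The sketch below is my own, using only the round numbers quoted above (60 billion born so far, a 12 billion peak population, 100-year lives, a 200,000-year history):

```python
# Back-of-envelope doomsday estimates from the figures quoted above.
past_births = 60e9          # humans born to date
peak_population = 12e9      # assumed steady-state future population
lifespan = 100              # years per lifetime
births_per_year = peak_population / lifespan  # 120 million births/year

# Population-based (Gott-style): with 95% confidence, total births ever
# will be less than 20x the births so far.
total_births_95 = 20 * past_births
future_births_95 = total_births_95 - past_births
years_left_population = future_births_95 / births_per_year

# Time-based: place the present at the midpoint of a 200,000-year history,
# so as many years lie ahead as behind.
years_so_far = 200_000
years_left_time = years_so_far

print(f"Population-based 95% bound: ~{years_left_population:,.0f} years")
print(f"Time-based midpoint guess:  ~{years_left_time:,} years")
```

The two methods disagree by a factor of twenty, which says more about the choice of reference class than about our actual prospects.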
Even granting that improvement, all the above doomsday logic has some curious bugs. If we’re justified in concluding that we’re midway through our reign on earth, then should we also conclude we’re midway through the existence of agriculture and cities? If so, given that cities and agriculture emerged 10,000 years ago, we’re led to predict a future where cities and agriculture disappear in 10,000 years, followed by 190,000 years of post-agriculture hunter-gatherers. Seems unlikely.
Astute Bayesian reasoners might argue that all of the above logic relies – unjustifiably – on an uninformative prior. But we have prior knowledge suggesting we don’t happen to be at some random point in the life of mankind. Unfortunately, we can’t agree on which direction that skews the outcome. My reading of the evidence leads me to conclude we’re among the first in a long line of civilized people. I don’t share Elon Musk’s pessimism about killer AI. And I find Hawking’s extraterrestrial worries as facile as the anti-GMO rantings of the Union of Concerned Scientists. You might read the evidence differently. Others discount the evidence altogether, and are simply swayed by the fashionable pessimism of the day.
Finally, the above doomsday arguments all assume that we, as observers, are randomly selected from the set of all humans who will ever have been born – past, present, and future – as opposed to being selected from all possible births. That may seem a trivial distinction, but, on close inspection, becomes profound. The former is analogous to Theory 2 in my previous post, The Trouble with Probability. This particular observer effect, first described by Dennis Dieks in 1992, is called the self-sampling assumption by Nick Bostrom. Considering yourself to be randomly selected from all possible births prior to human extinction is the analog of Theory 3 in my last post. It arose from an equally valid assumption about sampling. That assumption, called self-indication by Bostrom, confounds the above doomsday reasoning as it did the hotel problem in the last post.
The self-indication assumption holds that we should believe that we’re more likely to discover ourselves to be members of larger sets than of smaller sets. As with the hotel room problem discussed last time, self-indication essentially cancels out the self-sampling assumption. We’re more likely to be in a long-lived human race than a short one. In fact, setting aside some secondary effects, we can say that the likelihood of being selected into any set is proportional to the size of the set; and here we are in the only set we know of. Doomsday hasn’t been called off, but it has been postponed indefinitely.
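To see the cancellation concretely, here is a toy Bayesian calculation of my own. The totals for the “short” and “long” human races are invented for illustration; only the structure of the argument matters:

```python
# Two candidate futures, equal prior belief: a "short" human race with
# 200 billion total births, and a "long" one with 2 trillion.
short_total, long_total = 200e9, 2e12
prior = {"short": 0.5, "long": 0.5}

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

# Self-sampling: the chance of holding any particular birth rank is 1/N
# under a race of N total births -- this likelihood favors the short race.
likelihood = {"short": 1 / short_total, "long": 1 / long_total}

# Self-indication: you are more likely to exist at all in a bigger race,
# in proportion to N -- so reweight the priors by total size.
sia_prior = {"short": prior["short"] * short_total,
             "long": prior["long"] * long_total}

# Self-sampling alone: a strong doomsday shift toward "short".
ssa_post = normalize({k: prior[k] * likelihood[k] for k in prior})

# Self-indication plus self-sampling: the N and 1/N factors cancel,
# returning the original 50/50 prior.
both_post = normalize({k: sia_prior[k] * likelihood[k] for k in prior})

print(ssa_post)   # heavily favors "short"
print(both_post)  # back to 50/50
```

The 1/N self-sampling likelihood and the N-proportional self-indication weighting cancel exactly, which is the sense in which doomsday gets postponed rather than refuted.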
Countable Infinity – Math or Metaphysics?
Posted by Bill Storage in Philosophy, Philosophy of Science on December 18, 2019
Are we too willing to accept things on authority – even in math? Proofs of the irrationality of the square root of two and of the Pythagorean theorem can be confirmed by pure deductive logic. Georg Cantor’s (d. 1918) claims on set size and countable infinity seem to me a much less secure sort of knowledge. High school algebra books (e.g., the classic Dolciani) teach 1-to-1 correspondence between the set of natural numbers and the set of even numbers as if it is a demonstrated truth. This does the student a disservice.
Following Cantor’s line of reasoning is simple enough, but it seems to treat infinity as a number, thereby passing from mathematics into philosophy. More accurately, it treats an abstract metaphysical construct as if it were math. Using Cantor’s own style of reasoning, one can just as easily show the natural and even number sets to be non-corresponding.
Cantor demonstrated a one-to-one correspondence between natural and even numbers by showing their elements can be paired as shown below:
1 <—> 2
2 <—> 4
3 <—> 6
…
n <—> 2n
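The pairing can be checked mechanically on any finite prefix, for whatever that is worth – a quick sketch:

```python
# Cantor's pairing n <-> 2n, verified on a finite prefix: every natural
# number up to N maps to a distinct even number, and every even number
# up to 2N is hit exactly once.
N = 1000
pairing = {n: 2 * n for n in range(1, N + 1)}

evens_hit = set(pairing.values())
assert len(evens_hit) == N                       # injective: no collisions
assert evens_hit == set(range(2, 2 * N + 1, 2))  # onto the evens up to 2N

print(list(pairing.items())[:3])  # [(1, 2), (2, 4), (3, 6)]
```

Of course a finite check proves nothing about the infinite case – which is rather the point of the argument that follows.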
This seems a valid demonstration of one-to-one correspondence. It looks like math, but is it? I can with equal validity show the two sets (natural numbers and even numbers) to have a 2-to-1 correspondence. Consider the following pairing. Set 1 on the left is the natural numbers. Set 2 on the right is the even numbers:
1 unpaired
2 <—> 2
3 unpaired
4 <—> 4
5 unpaired
…
2n -1 unpaired
2n <—> 2n
By removing all the unpaired (odd) elements from set 1, you can then pair each remaining member of set 1 with each element of set 2. It seems arguable that if a one-to-one correspondence exists between part of set 1 and all of set 2, the two whole sets cannot support a 1-to-1 correspondence. By inspection, the set of even numbers is included within the set of natural numbers and obviously not coextensive with it. Therefore Cantor’s argument, based solely on correspondence, works only by promoting one concept – the pairing of terms – while suppressing an equally obvious concept, that of inclusion. Cantor indirectly dismisses this argument against set correspondence by allowing that a set and a proper subset of it can be the same size. That allowance is not math; it is metaphysics.
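A small computational aside, mine rather than Cantor’s: within any finite cutoff, inclusion and counting agree exactly, and the pairing n <—> 2n only succeeds by escaping the cutoff. That is the intuition the inclusion argument trades on:

```python
# Within any finite cutoff, the inclusion intuition holds exactly:
# the evens are a proper subset of the naturals and half their size.
for cutoff in (10, 100, 1000):
    naturals = set(range(1, cutoff + 1))
    evens = {k for k in naturals if k % 2 == 0}
    assert evens < naturals            # proper subset (inclusion)
    assert len(evens) == cutoff // 2   # half the size (counting)
    # Cantor-style pairing n <-> 2n escapes the cutoff: for half the
    # naturals, the paired value lies outside the finite set entirely.
    escaped = sum(1 for n in naturals if 2 * n > cutoff)
    assert escaped == cutoff // 2

print("finite sets: inclusion and counting agree; the pairing leaks out")
```

Only when the cutoff is removed do the two notions come apart, which is where the dispute over whether this is mathematics or metaphysics begins.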
Digging a bit deeper, Cantor’s use of the 1-to-1 concept (often called bijection) is heavy handed. It requires that such correspondence be established by starting with sets having their members placed in increasing order. Then it requires the first members of each set to be paired with one another, and so on. There is nothing particularly natural about this way of doing things. It got Cantor into enough of a logical corner that he had to revise the concepts of cardinality and ordinality with special, problematic definitions.
Gottlob Frege and Bertrand Russell later patched up Cantor’s definitions. The notion of equipollent sets fell out of this work, along with complications still later addressed by von Neumann and Tarski. It seems to me that Cantor implies – but fails to state outright – that the existence of a simultaneous 2-to-1 correspondence (i.e., group each pair 2n-1, 2n in set 1 with each 2n in set 2) does no damage to the claim that 1-to-1 correspondence between the two sets makes them equal in size. In other words, Cantor helped himself to an unnaturally restrictive interpretation (i.e., a matter of language, not of math) of 1-to-1 correspondence that favored his agenda. Finally, Cantor slips a broader meaning of equality on us than the strict numerical equality used in the rest of math. This is a sleight of hand. Further, his usage of the term – and concept of – size requires a special definition.
Cantor’s rule set for the pairing of terms and his special definitions are perfectly valid axioms for a mathematical system, but there is nothing within mathematics that justifies these axioms. Believing that the consequences of a system or theory justify its postulates is exactly the same as believing that the usefulness of Euclidean geometry justifies Euclid’s fifth postulate. Euclid knew this wasn’t so, and Proclus tells us Euclid wasn’t alone in that view.
Galileo seems to have had a more grounded sense of the infinite than did Cantor. For Galileo, the concrete concept of mathematical equality does not reconcile with the abstract concept of infinity. Galileo thought concepts like similarity, countability, size, and equality just don’t apply to the infinite. Did the development of calculus create an unwarranted acceptance of infinity as a mathematical entity? Does our understanding that things can approach infinity justify allowing infinities to be measured and compared?
Cantor’s model of infinity is interesting and useful, but it is a shame that it’s taught as being a matter of fact, e.g., “infinity comes in infinitely many different sizes – a fact discovered by Georg Cantor” (Science News, Jan 8, 2008).
On countable infinity we might consider WVO Quine’s position that the line between analytic (a priori) and synthetic (about the world) statements is blurry, and that no claim is immune to empirical falsification. In that light I’d argue that the above demonstration of inequality of the sets of natural and even numbers (inclusion of one within the other) trumps the demonstration of equal size by correspondence.
Mathematicians who state the equal-size concept as a fact discovered by Cantor have overstepped the boundaries of their discipline. Galileo regarded the natural-even set problem as a true paradox. I agree. Does Cantor really resolve this paradox or is he merely manipulating language?
Let’s just fix the trolley
Posted by Bill Storage in Ethics, Philosophy on August 23, 2019
The classic formulation of the trolley-problem thought experiment goes something like this:
A runaway trolley hurtles toward five tied-up people on the main track. You see a lever that controls the switch. Pull it and the trolley switches to a side track, saving the five people, but will kill one person tied up on the side track. Your choices:
- Do nothing and let the trolley kill the five on the main track.
- Pull the lever, diverting the trolley onto the side track causing it to kill one person.
At this point the Ethics 101 class debates the issue and dives down the rabbit hole of deontology, virtue ethics, and consequentialism. That’s probably what Philippa Foot, who created the problem, expected. Engineers, meanwhile, probably figure that the ethicists mean cable cars, not trolleys (streetcars), since cable cars run on steep hills and rely on a single, crude mechanical brake while trolleys tend to stick to flatlands. But I digress.
Many trolley problem variants exist. The first twist usually thrust upon trolley-problem rookies was called “the fat man variant” back in the mid 1970s when it first appeared. I’m not sure what it’s called now.
The same trolley and five people, but you’re on a bridge over the tracks, and you can block it with a very heavy object. You see a very fat man next to you. Your only timely option is to push him over the bridge and onto the track, which will certainly kill him and will certainly save the five. To push or not to push.
Ethicists debate the moral distinction between the two versions, focusing on intentionality, double-effect reasoning etc. Here I leave the trolley problems in the competent hands of said ethicists.
But psychologists and behavioral economists do not. They appropriate the trolley problems as an apparatus for contrasting emotion-based and reason-based cognitive subsystems. At other times it becomes all about the framing effect, one of the countless cognitive biases afflicting the subset of souls having no psych education. This bias is cited as the reason most people fail to see the two trolley problems as morally equivalent.
The degree of epistemological presumptuousness displayed by the behavioral economist here is mind-boggling. (Baby, you don’t know my mind…, as an old Doc Watson song goes.) Just because it’s a thought experiment doesn’t mean it’s immune to the rules of good design of experiments. The fat-man variant is radically different from the original trolley formulation. It is radically different in what the cognizing subject imagines upon hearing/reading the problem statement. The first scenario is at least plausible in the real world, the second isn’t remotely.
First off, pulling the lever is about as binary as it gets: it’s either in position A or position B and any middle choice is excluded outright. One can perhaps imagine a real-world switch sticking in the middle, causing an electrical short, but that possibility is remote from the minds of all but reliability engineers, who, without cracking open MIL-HDBK-217, know the likelihood of that failure mode to be around one per 10 million operations.
Pushing someone, a very heavy someone, over the railing of the bridge is a complex action, introducing all sorts of uncertainty. Of course the bridge has a railing; you’ve never seen one that didn’t. There’s a good chance the fat man’s center of gravity is lower than the top of the railing because it was designed to keep people from toppling over it. That means you can’t merely push him over; you have to lift him up to the point where his CG is higher than the top of the railing. But he’s heavy, not particularly passive, and stronger than you are. You can’t just push him into the railing expecting it to break either. Bridge railings are robust. Experience has told you this for your entire life. You know it even if you know nothing of civil engineering and pedestrian bridge safety codes. And if the term center of gravity (CG) is foreign to you, by age six you have grounded intuitions on the concept, along with moment of inertia and fulcrums.
Assume you believe you can somehow overcome the railing obstacle. Trolleys weigh about 100,000 pounds. The problem statement said the trolley is hurtling toward five people. That sounds like 10 miles per hour at minimum. Your intuitive sense of momentum (mass times velocity) and your intuitive sense of what it takes to decelerate the hurtling mass (Newton’s 2nd law, f = ma) simply don’t line up with the devious psychologist’s claim that the heavy person’s death will save five lives. The experimenter’s saying it – even in a thought experiment – doesn’t make it so, or even make it plausible. Your rational subsystem, whether thinking fast or slow, screams out that the chance of success with this plan is tiny. So you’re very likely to needlessly kill your bridge mate, and then watch five victims get squashed all by yourself.
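For the skeptical, the back-of-envelope numbers are easy to run. This sketch uses the 100,000-pound trolley and 10 mph figures from above, and assumes a generous 150 kg for the fat man (my figure, not the ethicists’):

```python
# Rough momentum check for the fat-man variant.
LB_TO_KG = 0.4536
MPH_TO_MS = 0.447

trolley_mass = 100_000 * LB_TO_KG   # ~45,400 kg
trolley_speed = 10 * MPH_TO_MS      # ~4.5 m/s
person_mass = 150                   # kg -- a generous "very fat man"

trolley_momentum = trolley_mass * trolley_speed   # ~2e5 kg*m/s

# Even if the body absorbed momentum like a rigid, anchored block
# (it wouldn't), its share of the trolley's mass is tiny:
fraction = person_mass / trolley_mass

print(f"Trolley momentum: {trolley_momentum:,.0f} kg*m/s")
print(f"Person is {fraction:.1%} of trolley mass")
```

A body amounting to a third of one percent of the trolley’s mass is not going to stop it, which is exactly what the test subjects’ intuition reports.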
The test subjects’ failure to see moral equivalence between the two trolley problems speaks to their rationality, not their cognitive bias. They know an absurd hypothetical when they see one. What looks like humanity’s logical ineptitude to so many behavioral economists appears to the engineers as humanity’s cultivated pragmatism and an intuitive grasp of physics, factor-relevance evaluation, and probability.
There’s book smart, and then there’s street smart, or trolley-tracks smart, as it were.