Science, as an enterprise that acquires knowledge (justified true beliefs) in the form of testable predictions through systematic iterations of observation and math-based theory, started around the 17th century, somewhere between Copernicus and Newton. That, we learned in school, was the beginning of the scientific revolution. Historians of science tend to regard this great revolution as the one that never happened. That is, as Floris Cohen puts it, the scientific revolution, once an innovative and inspiring concept, has since turned into a straitjacket. As with the demarcation problem in science (what really counts as science?), picking this revolution’s starting point, identifying any cause for it, and deciding what concepts and technological innovations belong to it are all problematic.
That said, several writers have made good cases explaining why the pace of evolution – if not revolution – of modern science accelerated dramatically in Europe only when it did, why it has continuously gained steam rather than petering out, what its primary driving force was, and what transformations in our view of how nature works came with it. Some thought the Protestant ethic and capitalism set the stage for science. Others thought science couldn’t emerge until the alliance between Christianity and Aristotelianism was dissolved. Moveable type and mass production of books can certainly claim a role, but were they really prerequisites? Some think a critical mass of ancient Greek writings had to have been transferred to western Europe by the Muslims. The humanist literary critics who enabled repair and reconstruction of ancient texts mangled in translation from Greek to Syriac to Persian to Latin and botched by illiterate medieval scribes certainly played a part. If this sounds like a stretch, note that those critics seem to mark the first occurrence of a collective effort by a group spread across a large geographic space using shared standards to reach a peer-reviewed consensus. Sound familiar?
But those reasons given for the scientific revolution all have the feel of post hoc theorizing. Might intellectuals of the day, observing these events, have concluded that a resultant scientific revolution was on the horizon? Francis Bacon comes closest to fitting this bill, but his prediction gave little sense that he was envisioning anything like what really happened.
I’ve wondered why the burst of progress in science – as differentiated from plain know-how, nature-knowledge, art, craft, technique, or engineering knowledge – didn’t happen earlier. Why not just after the period of innovation from about 1100 to 1300 CE in Europe? In this period Jean Buridan invented calculators and almost got the concept of inertia right. Robert Grosseteste hinted at the experiment-theory model of science. Nicole Oresme debunked astrology and gave arguments for a moving earth. But he was the end of this line. After this brief awakening, which also included the invention of banking and the university, progress came to a screeching halt. Some blame the plague, but that can’t be the culprit. Literature of the time barely mentions the plague. Despite the death toll, politics and war went on as usual; but interest in resurrecting ancient Greek knowledge of all sorts tanked.
Why not in the Islamic world in the time of Ali al-Qushji and al-Birjandi? Certainly the mental capacity was there. A layman would have a hard time distinguishing al-Birjandi’s arguments and thought experiments for the earth’s rotation from those of Galileo. But Islamic civilization at the time, for all its scholars, had no institutions for making practical use of such knowledge, and its society would not have tolerated displacement of received wisdom by man-made knowledge.
The most compelling case for civilization having been on the brink of science at an earlier time seems to be the late Republic or early imperial Rome. This may seem a stretch, since Rome is much better known for brute force than for finesse, despite its flying buttresses, cranes, fire engines, central heating and indoor plumbing.
Consider the writings of one Vitruvius, likely Marcus Vitruvius Pollio, in the early reign of Augustus. Vitruvius wrote De Architectura, a ten-volume guide to Roman engineering knowledge. Architectura, in Latin, translates accurately into what we call engineering. Rediscovered and widely published during the European renaissance as a standard text for engineers, Vitruvius’s work contains text that seems to contradict what we were all taught about the emergence of the – or a – scientific method.
Vitruvius is full of surprises. He acknowledges that he is not a scientist (an anachronistic but fitting term) but a collator of Greek learning from several preceding centuries. He describes vanishing point perspective: “…the method of sketching a front with the sides withdrawing into the background, the lines all meeting in the center of a circle.” (See photo below of a fresco in the Oecus at Villa Poppea, Oplontis showing construction lines for vanishing point perspective.) He covers acoustic considerations for theater design, explains central heating technology, and describes the Archimedean water screw used to drain mines. He mentions a steam engine, likely that later described by Hero of Alexandria (aeolipile drawing at right), which turns heat into rotational energy. He describes a heliocentric model passed down from ancient Greeks. To be sure, there is also much that Vitruvius gets wrong about physics. But so does Galileo.
Most of De Architectura is not really science; it could more accurately be called know-how, technology, or engineering knowledge. Yet it’s close. Vitruvius explains the difference between mere machines, which let men do work, and engines, which derive from ingenuity and allow storing energy.
What convinces me most that Vitruvius – and he surely could not have been alone – truly had the concept of modern scientific method within his grasp is his understanding that mathematical proof (“demonstration” in his terms) plus theory plus hands-on practice are needed for real engineering knowledge. Thus he says that what we call science – theory plus math (demonstration) plus observation (practice) – is essential to good engineering.
The engineer should be equipped with knowledge of many branches of study and varied kinds of learning, for it is by his judgement that all work done by the other arts is put to test. This knowledge is the child of practice and theory. Practice is the continuous and regular exercise of employment where manual work is done with any necessary material according to the design of a drawing. Theory, on the other hand, is the ability to demonstrate and explain the productions of dexterity on the principles of proportion.
It follows, therefore, that engineers who have aimed at acquiring manual skill without scholarship have never been able to reach a position of authority to correspond to their pains, while those who relied only upon theories and scholarship were obviously hunting the shadow, not the substance. But those who have a thorough knowledge of both, like men armed at all points, have the sooner attained their object and carried authority with them.
It appears, then, that one who professes himself an engineer should be well versed in both directions. He ought, therefore, to be both naturally gifted and amenable to instruction. Neither natural ability without instruction nor instruction without natural ability can make the perfect artist. Let him be educated, skillful with the pencil, instructed in geometry, know much history, have followed the philosophers with attention, understand music, have some knowledge of medicine, know the opinions of the jurists, and be acquainted with astronomy and the theory of the heavens. – Vitruvius – De Architectura, Book 1
Historians, please correct me if you know otherwise, but I don’t think there’s anything else remotely like this on record before Isaac Newton – anything in writing that shows an understanding of modern scientific method.
So what went wrong in Rome? Many blame Christianity for the demise of knowledge in Rome, but that is not the case here. We can’t know for sure, but the later failure of science in the Islamic world seems to provide a clue: society wasn’t ready. Vitruvius and his ilk may have been ready for science, but after nearly a century of civil war (starting with the Italian Social War), Augustus, the senate, and likely the plebs had seen too much social innovation that went bad. The vision of science, so evident during the European Enlightenment, as the primary driver of social change may have been apparent to influential Romans as well, at a time when social change had lost its luster. As seen in the writings of Cicero and the correspondence between Pliny and Trajan, Rome now regarded social innovation with suspicion if not contempt. Roman society, at least its government and aristocracy, simply couldn’t risk the main byproduct of science – progress.
History is not merely what happened: it is what happened in the context of what might have happened. – Hugh Trevor-Roper – Oxford Valedictorian Address, 1998
The affairs of the Empire of letters are in a situation in which they never were and never will be again; we are passing now from an old world into the new world, and we are working seriously on the first foundation of the sciences. – Robert Desgabets, Oeuvres complètes de Malebranche, 1676
Newton interjected historical remarks which were neither accurate nor fair. These historical lapses are a reminder that history requires every bit as much attention to detail as does science – and the history of science perhaps twice as much. – Carl Benjamin Boyer, The Rainbow: From Myth to Mathematics, 1957
Text and photos © 2015 William Storage
The writings of Stanford’s Mark Jacobson effortlessly blend science and ideology along a continuum to envision an all-renewable energy future for America. His success in doing so reflects a sad state of affairs at the intersection of science, culture, and politics.
Jacobson’s popularity began with his 2009 Scientific American piece, A Plan to Power 100 Percent of the Planet with Renewables. The piece and his recent works argue both that there is a means by which we could transition to renewable-only power and that an all-renewable energy mix is how we should pursue greenhouse gas reduction. They seem to answer several questions, though the questions aren’t stated explicitly:
Is it possible to power 100% of the planet with renewables?
Is it feasible to power 100% of the planet with renewables?
Is it desirable to power 100% of the planet with renewables?
Is a renewable-only portfolio the best means of stopping the increase in atmospheric CO2?
The first question is an engineering problem. The 2nd is an engineering and economic question. The 3rd is economic, social, and political. The 4th is my restating of the 3rd to emphasize an a priori exclusion of non-renewables from the goal of stopping the increase in atmospheric CO2. That objective, implied in the Sci Am article’s title, is explicitly stated in the piece’s opening paragraph:
“In December leaders from around the world will meet in Copenhagen to try to agree on cutting back greenhouse gas emissions for decades to come. The most effective step to implement that goal would be a massive shift away from fossil fuels to clean, renewable energy sources.”
It should be clear to readers that the possibility or technical feasibility of a global 100%-renewable energy portfolio in no way supports the assertion that such a portfolio is the most effective way to implement that goal. Assuming that the most desirable way to cut greenhouse gas emissions is a 100% renewable portfolio, the feasibility of such a portfolio becomes an engineering, economic, and social challenge; but that is not the gist of Jacobson’s works, where the premise and conclusion are intertwined. Questions 1 and 2 would obviously be great topics for a paper, as would questions 3 and 4. Addressing all of them together is a laudable goal – and one that requires clear thinking about evidence and justification. On that requirement, A Plan to Power 100 Percent of the Planet with Renewables fails outright in my view, as do his recent writings.
Major deficiencies in Jacobson’s engineering and economic analyses have been discussed at length, most notably by Brian Wang, William Hannahan, Ted Trainer, Edward Dodge, Nate Gilbraith, Charles Barton, Gene Preston, and Barry W. Brook. The deficiencies they address include wrong facts, adverse selection, and vague language, e.g.:
“In another study, when 19 geographically disperse wind sites in the Midwest, over a region 850 km × 850 km, were hypothetically interconnected, about 33% of yearly averaged wind power was calculated to be usable at the same reliability as a coal-fired power plant.”
Engineers will note that “usable at the same reliability” simply cannot be parsed into an intelligible claim; and if the intent was to say that these sites had the same capacity factor as a coal-fired power plant, the statement is obviously false.
Jacobson’s proposal for New York includes clearing 340 square miles of land to generate 39,000 MW with concentrated solar power facilities. CSP requires flat, sunny, unburdened land, kept free of rain and snow; Jacobson proposes this without addressing the possibility, let alone the feasibility, of meeting those requirements in New York. His NY plan calls for building 140 sq mi of photovoltaic farms, with similar requirements for land quality. He overstates the capacity factor of both wind and photovoltaics in NY, as elsewhere. He calls for 12,500 5 MW offshore wind turbines with no discussion of feasibility in light of bathymetry, shipping, and commercial water route use. Further, his offshore wind turbine plan ignores efficiency reductions due to wind shadowing that would exist at his proposed turbine density. The economic impact, social acceptability, and environmental impact of clearing hundreds of square miles of mostly-wooded land and grading it level (NY is hilly), of erecting another 4,000 onshore turbines, and of installing 12,500 offshore turbines are very real factors – unaddressed by Jacobson – in determining the true feasibility of the proposed solution.
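To make the land-use question concrete, here is a minimal back-of-the-envelope sketch of the nameplate power density the CSP proposal implies. The 39,000 MW and 340 square mile figures are those cited above; everything else is just unit conversion:

```python
# Back-of-the-envelope check of the CSP land figures cited above.
# 39,000 MW on 340 sq mi are the proposal's numbers; the conversion
# factor is exact enough for a sanity check.
SQ_MI_TO_KM2 = 2.59

csp_nameplate_mw = 39_000   # proposed CSP nameplate capacity
csp_land_sq_mi = 340        # proposed cleared land area

land_km2 = csp_land_sq_mi * SQ_MI_TO_KM2
density = csp_nameplate_mw / land_km2   # implied MW of nameplate per km^2
print(f"Implied nameplate density: {density:.0f} MW/km^2")  # prints 44 MW/km^2
```

Existing utility-scale CSP plants come in well under that figure by my reckoning, which suggests the 340-square-mile estimate is optimistic even before New York’s terrain and weather enter the picture.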
The above writers cover many concerns about Jacobson’s work along these lines. Their criticism is aimed at the feasibility of Jacobson’s implementation plan. In my engineering judgment these complaints have considerable merit. But that is not where I want to go here. Instead, I’m intensely concerned about two related issues:
1) the lack of knowledge on the street that Jacobson has credible opponents that dispute his major claims
2) the absence of criticism of Jacobson for doing bad science – bad not because of wrong details but because of poor method and bad values.
By values, I don’t mean ethics, beliefs or preferences. Jacobson and I share social values (cut CO2 emissions) but not scientific values. By scientific values I mean things like accuracy, precision, clarity (e.g., “usable at the same reliability”), testability, and justification – epistemic values focused on reliable knowledge. To clarify, I’m not so naïve as to think scientists and engineers shouldn’t have biases and personal beliefs, that they shouldn’t act on hunches, or that theory and observation are not intertwined. But misrepresenting normative statements as descriptive ones is a kind of bad science against which Bacon and Descartes would have railed; and that is what Jacobson has done. He answered one question (what we should do to level CO2 emissions) while pretending to answer a different one (are renewables sufficient to replace fossil fuels?). This should not pass as science.
Jacobson’s writings are highly quantitative where they oppose fission, and grossly qualitative where they dodge the deficiencies in renewables. This holds particularly true on the matters of the variability of renewables (e.g., large regions of Europe are often simultaneously without wind and sun), the difficulties and inefficiencies of distribution, and the feasibility of energy storage and its inevitable inefficiencies (I mean laws-of-nature inefficiencies, not inefficiencies that can be cured with technology). He states that fission is not carbon-free because fossil fuels are used in its construction and maintenance, while failing to mention that the concrete and other CO2 emitters used in building and maintaining solar and wind power dwarf those of fission.
At times Jacobson’s claims might be called crypto-normative. For example, he says that “Nuclear power results in up to 25 times more carbon emissions than wind energy, when reactor construction and uranium refining and transport are considered.” As stated, the claim is absurd. Applying the principle of charity in argument, I dug down to see what he might have meant to say. Beneath it, he is actually including the CO2 footprint of his estimate of the impact of inevitable nuclear war. So, yes, with a big enough nuclear war included (not uranium refining and transport), the CO2 emissions of nuclear power plus nuclear war could result in up to 25 times more CO2 than wind. But why stop there? We could conceive of a nuclear war (or a non-nuclear war, for that matter) that emitted thousands of times more CO2 than wind power. Speculation about nuclear war risks is a worthwhile topic, but not when buried in the calculation of CO2 footprints. And it has no place in calculating the most effective means to cut greenhouse gases.
How can Jacobson have so many mistakes in his details (all of which favor an all-renewables plan) and engage in such bad science while so few seem to notice? I’m not sure, but I fear that much of science has become the handmaid of politics and naïve ideological activism. I cannot know Jacobson’s motives, but I am certain of the incentives. Opposition to renewables is framed as opposing the need to cut CO2 and worse – like being in the pocket of evil corporations. I experience this personally, both when I attend clean-tech events and when I use this example in Philosophy of Science talks. As a career and popularity move, it’s hard to go wrong by jumping on the renewables-only bandwagon.
At a recent Silicon Valley clean-tech event, I challenged three different attendees on claims they made about renewables. Two of these were related to capacity factors given for solar power on the east coast and one dealt with the imminence (or lack thereof) of utility-scale energy storage technology. All three attendees, independently, in their responses cited Mark Jacobson’s work as justification for their claims. My attempts at reality checks on capacity factors using real-world values in calculations didn’t seem to faze them. Arguments hardly affect the faithful, noted Paul Feyerabend; their beliefs have an entirely different foundation.
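For readers who want to run the same reality check, here is a minimal sketch of the arithmetic. The function names and example numbers are my own illustrations; the underlying relation – annual energy equals nameplate capacity times capacity factor times hours in a year – is standard:

```python
# Capacity-factor reality check: does a claimed annual yield square with
# the nameplate capacity and a plausible capacity factor for the region?
HOURS_PER_YEAR = 8760

def annual_energy_mwh(nameplate_mw, capacity_factor):
    """Annual energy (MWh) implied by nameplate capacity and capacity factor."""
    return nameplate_mw * capacity_factor * HOURS_PER_YEAR

def implied_capacity_factor(claimed_annual_mwh, nameplate_mw):
    """Capacity factor implied by a claimed annual energy yield."""
    return claimed_annual_mwh / (nameplate_mw * HOURS_PER_YEAR)

# Illustrative: a 100 MW solar farm claimed to deliver 300,000 MWh per year
cf = implied_capacity_factor(300_000, 100)
print(f"Implied capacity factor: {cf:.0%}")  # prints "Implied capacity factor: 34%"
```

If the implied capacity factor exceeds what comparable installations in the same region actually achieve, the claim deserves scrutiny – which is all my reality checks at that event amounted to.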
Science was once accused of being the handmaid of religion. Under President Eisenhower, academic science was accused of being a pawn of the military-industrial complex and then took big steps to avoid being one. The money flow is now different, but the incentives for institutional science – where it comes anywhere near policy matters – to conform to fickle societal expectations present a huge obstacle to the honest pursuit of a real CO2 solution.
I’m not sure how to fix the problem demonstrated by the unquestioning acceptance of Jacobson’s work as scientific knowledge. Improvements in STEM education will certainly help. But I doubt that spreading science and engineering education across a broader segment of society will be sufficient. It seems to me we’d benefit more from having engineers and policy makers develop a broader interpretation of the word science – one that includes epistemology and theory of justification. I’ve opined in the past that teaching philosophy of science to engineers would make them much better at engineering. It would also result in better policy makers in a world where technology has become integral to everything. Independent of whether a statement is true or false, every educated person should be able to differentiate a scientific statement from a non-scientific one, should know what constitutes confirming and disconfirming evidence, and should cry foul when a normative claim pretends to be descriptive.
“The separation of state and church must be complemented by the separation of state and science, that most recent, most aggressive, and most dogmatic religious institution.” – Paul Feyerabend – Against Method, 1975.
“I tried hard to balance the needs of the science and the IPCC, which were not always the same.” – Keith Briffa – IPCC email correspondence, 2007.
“A philosopher who has been long attached to a favorite hypothesis, and especially if he have distinguished himself by his ingenuity in discovering or pursuing it, will not, sometimes, be convinced of its falsity by the plainest evidence of fact. Thus both himself, and his followers, are put upon false pursuits, and seem determined to warp the whole course of nature, to suit their manner of conceiving of its operations.” – Joseph Priestley – The History and Present State of Electricity, 1775
A friend of mine teaches design thinking and hosts creativity programs. His second child was born 90 seconds after his first. He says they’re not twins. Go for it…
The story is true, not just an exercise in thinking out of the box. In our first meeting my friend issued this challenge, adding that only one person in his seminars had ever gotten the answer. I did what most people probably do; I entertained some possible but unlikely scenarios that could lead to that outcome. But no, he didn’t impregnate two different women within a few weeks of each other, who then coincidentally gave birth at the same time. Nor was he a sperm donor. Nor is he using the “father” term loosely in a case where his wife had been implanted with fertilized eggs from two different pairs of parents.
I pondered it for a bit, and then felt a tinge of disappointment when it hit me. “Do you have triplets?”, I asked. He smiled and nodded. The incident left me wondering about some other creativity trainers I’ve known. It also got me thinking about the twentieth-century philosophers I praised in my last post. In the early 1900s, young Ludwig Wittgenstein realized that most philosophical problems – certainly those dealing with ideals and universals – simply stem from misunderstandings of the logic of language. Wittgenstein worked in the cold, hard realm of logic we call analytic philosophy. Coincidentally, those fuzzy-thinking French at the far extremes of philosophy during the same era also concluded, through a radically different method, that language is definitely not a transparent medium of thought. Michel Foucault and Jacques Derrida, for all their incoherent output, actually do, in my view, defend this position convincingly. Richard Rorty, in his 1967 introductory essay to The Linguistic Turn, brilliantly compares the similar conclusions reached at the same time by these two disjoint schools of thought.
As we talked about using the triplets puzzle in creativity seminars I wondered if those who solved it might be more gifted in linguistics – or perhaps philosophy of language – than in creative thought. Creativity certainly had little to do with my drilling into the language of the puzzle only after plodding through the paternal possibilities. I was channeling Jacques Derrida, not being creative.
It is only a quirk of language that we don’t think that two triplets are also twins. In fact, I seem to recall that they often are – literally. That is, triplets often comprise a pair of monozygotic twins plus a fraternal sibling. So even by use of standard language, two of his triplets might be twins.
The idea of confusing creative problem solving with creative use of – or analysis of – language reminds me of another scenario that often puzzled me. Tony Buzan, the mind-mapping creativity guru, starts one of his courses off by challenging students to, in a fixed time period, record as many uses of a paper clip as possible. Presumably, creative folk find more than the rest of us. He then issues a 2nd challenge: how many things can you not do with a paper clip? Most people find more non-uses than uses. Tony jokingly suggests that we’re negative thinkers because we produce longer lists for the latter.
He then collects examples of non-uses for paper clips from the class, including that you can’t eat them or use them for transportation. Challenging that group to assert whether they’re sure there’s no possible way to eat a paper clip, someone eventually offers that if the paper clip is ferrous, you could grind it up and eat it as a supplement. Inevitably, a more creative student then realizes that Tony didn’t specify the material from which the paper clip was made. It could be made of dried wheat, and then, of course, you could eat it.
Once again, for me at least, the challenge now focuses on language more than creativity. Is it creative to call a paper-clip-shaped piece of spaghetti a paper clip? Or is it just undisciplined? Or both? I doubt that most audiences would have trouble coming up with culinary solutions when quizzed about what sort of things they could do with a paper-clip-shaped piece of pasta. So I suspect the difference between those who went down the route of non-metal (or non-plastic) paper clips and those who did not may stem from experience and situation more than from innate or learned creative abilities. And, by the way, I can easily drive a paper clip if it has wheels, an engine, and comes from Bugatti, not Buitoni. Cream-colored, or bolognese-red?
Once you become attuned to paradoxes that dissolve under a linguistic lens, you find them everywhere. Even in modern philosophy, a place you might expect practitioners to be vigilant. Experimental philosopher Joshua Knobe comes to mind. He’s famous for the Knobe Effect, as seen in the following story.
The CEO of a company is sitting in his office when his Vice President of R&D comes in and says, “We are thinking of starting a new program. It will help us increase profits, but it will also harm the environment.” The CEO responds that he doesn’t care about harming the environment and just wants to make as much profit as possible. The program is carried out, profits are made and the environment is harmed.
Knobe asks those presented with this story whether the CEO intentionally harmed the environment. 82 percent say he did. Knobe then repeats the story, changing only a single word. “Harm” becomes “help”: “… it will also help the environment.”
Knobe then asks whether, in the second story, the CEO intentionally helped the environment. Only 23 percent of people think so. Some see the asymmetry in responses as a direct challenge to the notion of a one-way flow of judgment from the factual (non-moral) domain to the moral. Spooky and fascinating as that prospect is, I don’t think the Knobe Effect is evidence of it. It’s a language game, Josh – as Wittgenstein would say.
The asymmetry stems not from different bridges (for “harm” and “help”) from fact to moral judgment, but from the semantic asymmetry between “intentionally harm” and “intentionally help.” In context, “intentionally harm” is not simply the negation of “intentionally help.” “Intentional” means different things when applied to help and harm. In popular usage, “intentionally harm” is understood by most people to mean acting with awareness that your action will cause harm, whether as its primary purpose or as a secondary consequence. However, “intentionally help” is understood by most people to mean that your primary purpose was to help, not that helpfulness could be a mere byproduct.
As WVO Quine made clear, meaning does not stem from words – it stems from sentences, at minimum. No word’s meaning is independent of its context. Quine discusses such concepts at length in Pursuit of Truth (1990) and “Ontological Relativity” (1967).
I get a real kick out of Tony Buzan. I’m well aware that most of his claims about the brain are pure quackery. What percentage of your brain do you use…? His mind-map claims (ultimate revolutionary mind power tool) are a bit out to lunch too. But he’s charming; and I know many people who thrive on mind maps and do great things with them (“if that works for you, great…”). Kudos to him for putting the ancient Greek and Roman mnemonists on a pedestal, and for stressing the link between memory training and creativity. More importantly, anyone who champions games, daydreaming, and not acting your age while also pushing rigorous memory training gets my highest praise. Oh, and he designs his own clothes.
I thought hard, and finally I envisaged one thing a paper clip can never be. A paper clip can absolutely never be a non-paper-clip. But can it be a set of non-paper-clips? Or a set of all sets not containing non-paper-clips? Can you picture it?
April 1, 2015.
My neighbor asked me if I thought anything new ever happened in philosophy, or whether, 2500 years after Socrates, all that could be worked out in philosophy had been wrapped up and shipped. Alfred North Whitehead came to mind, who wrote in Process and Reality that the entire European philosophical tradition was merely footnotes to Plato. I don’t know what Whitehead meant by this, or for that matter, by the majority of his metaphysical ramblings. I’m no expert, but for my money most of what’s great in philosophy has happened in the last few centuries – including some real gems in the last few decades.
For me, ancient, eastern, and medieval philosophy is merely a preface to Hume. OK, a few of his predecessors deserve a nod – Peter Abelard, Adelard of Bath, and Francis Bacon. But really, David Hume was the first human honest enough to admit that we can’t really know much about anything worth knowing and that our actions are born of custom, not reason. Hume threw a wrench into the works of causation and induction and stopped them cold. Hume could write clearly and concisely. Try his Treatise some time.
Immanuel Kant, in an attempt to reconcile empiricism with rationalism, fought to rescue us from Hume’s skepticism and failed miserably. Kant, often a tad difficult to grasp (“transcendental idealism” actually can make sense once you get his vocabulary), succeeded in opposing every one of his own positions while paving the way for the great steaming heap of German philosophy that reeks to this day.
The core of that heap is, of course, the domain of GWF Hegel, which the more economical Schopenhauer called “pseudo-philosophy paralyzing all mental powers, stifling all real thinking.”
Don’t take my word (or Schopenhauer‘s) for it. Read Karl Marx’s Critique of Hegel’s Philosophy. On second thought, don’t. Just read Imre Lakatos’s critique of Marx’s critique of Hegel. Better yet, read Paul Feyerabend’s critique of Lakatos’s critique of Marx’s critique. Of Hegel. Now you’re getting the spirit of philosophy. For every philosopher there is an equal and opposite philosopher. For Kant, they were the same person. For Hegel, the opposite and its referent are both all substance and non-being. Or something like that.
Hegel set out to “make philosophy speak German” and succeeded in making German speak gibberish. Through great effort and remapping your vocabulary you can eventually understand Hegel, at which point you realize what an existential waste that effort has been. But not all of what Hegel wrote was gibberish; some of it was facile politics.
Hegel writes – in the most charitable of translations – that reason “is Substance, as well as Infinite Power; its own Infinite Material underlying all the natural and spiritual life which it originates, as also the Infinite Form, – that which sets this Material in motion.”
I side with the logical positivists, who, despite ultimately crashing into Karl Popper’s brick wall, had the noble cause of making philosophy work like science. The positivists, as seen in writings by AJ Ayer and Hans Reichenbach, thought the words of Hegel simply did no intellectual work. Rudolf Carnap relentlessly mocked Heidegger’s “the nothing itself nothings.” It sounds better in the Nazi philosopher’s own German: “Das Nichts nichtet,” and reveals that Reichenbach could have been more sympathetic in his translation by using nihilates instead of nothings. The removal of a sentence from its context was unfair, as you can plainly see when it is returned to its native habitat:
In anxiety occurs a shrinking back before … which is surely not any sort of flight but rather a kind of bewildered calm. This “back before” takes its departure from the nothing. The nothing itself does not attract; it is essentially repelling. But this repulsion is itself as such a parting gesture toward beings that are submerging as a whole. This wholly repelling gesture toward beings that are in retreat as a whole, which is the action of the nothing that oppresses Dasein in anxiety, is the essence of the nothing: nihilation. It is neither an annihilation of beings nor does it spring from a negation. Nihilation will not submit to calculation in terms of annihilation and negation. The nothing itself nihilates.
Heidegger goes on like that for 150 pages.
The positivists found fault with philosophers who argued from their armchairs that Einstein could not have been right. Yes, they really did this; and not all of them opposed Einstein’s science just because it was Jewish. The philosophy of the positivists had some real intellectual heft, despite being wrong, more or less. They were consumed not only by causality and determinism, but by the quest for demarcation – the fine line between science and nonsense. They failed. Popper burst their bubble by pointing out that scientific theory selection relied more on the absence of disconfirming evidence than on the presence of confirming evidence. Positivism fell victim mainly to its own honest efforts. The insider Willard Van Orman Quine (like Popper) put a nail in positivism’s coffin by showing the distinction between analytic and synthetic statements to be untenable. Hilary Putnam, killing the now-dead horse, then showed the distinction between “observational” and “theoretical” to be meaningless. Finally, in 1960, Thomas Kuhn showed up in Berkeley with the bomb that the truth conditions for science do not stand independent of their paradigms. I think often and write occasionally on the highly misappropriated Kuhn. He was wrong in all his details and overall one of the rightest men who ever lived.
Before leaving logical positivism, I must mention another hero from its ranks, Carl Hempel. Hempel is best known, at least in scientific circles, for his wonderful illustration of Hume’s problem of induction known as the Raven Paradox.
But I digress. I mainly intended to say that philosophy for me really starts with Hume and some of his contemporaries, like Adam Smith, William Blackstone, Voltaire, Diderot, Moses Mendelssohn, d’Alembert, and Montesquieu.
And to say that 20th century philosophers have still been busy, and have broken new ground. As favorites I’ll cite Quine, Kuhn and Hempel, mentioned above, along with Ludwig Wittgenstein, Richard Rorty (late works in particular), Hannah Arendt, John Rawls (read about, don’t read – great thinker, tedious writer), Michel Foucault (despite his Hegelian tendencies), Charles Peirce, William James (writes better than his brother), Paul Feyerabend, 7th Circuit Judge Richard Posner, and the distinguished Simon Blackburn, with whom I’ll finish.
One of Thomas Kuhn’s more controversial concepts is that of incommensurability. He maintained that cross-paradigm argument is futile because members of opposing paradigms do not share a sufficiently common language in which to argue. At best, they lob their words across each other’s bows. This brings to mind a story told by Simon Blackburn at a talk I attended a few years back. It recalls Theodorus and Protagoras against Socrates on truth being absolute vs. relative – if you’re into that sort of thing. If not, it’s still good.
Blackburn said that Lord Jeremy Waldron was attending a think tank session on ethics at Princeton, out of obligation, not fondness for such sessions. As Blackburn recounted Waldron’s experience, Waldron sat on a forum in which representatives of the great religions gave presentations.
First the Buddhist talked of the corruption of life by desire, the eight-fold way, and the path of enlightenment, to which all the panelists said “Wow, terrific. If that works for you that’s great” and things of the like.
Then the Hindu holy man talked of the cycles of suffering and birth and rebirth, the teachings of Krishna and the way to release. And the panelists praised his conviction, applauded and cried “Wow, terrific – if it works for you that’s fabulous” and so on.
A Catholic priest then came to the podium, detailing the message of Christ, the promise of salvation, and the path to eternal life. The panel cheered at his great passion, applauded and cried, “Wow, terrific, if that works for you, great.”
And the priest pounded his fist on the podium and shouted, “No! Not a question of whether it works for me! This is the true word of the living God; and if you don’t believe it you’re all damned to Hell!”
The panel cheered and gave a standing ovation, saying: “Wow! Terrific! If that works for you that’s great!”
With some sadness I recently received a Notice of Assignment for the Benefit of Creditors signaling the demise of PureSense Environmental, Inc. PureSense was real green – not green paint.
It’s ironic that PureSense was so little known. Environmental charlatans and quacks continue to get venture capital and government grants for businesses built around absurd “green” products debunkable by anyone with knowledge of high school physics. PureSense was nothing like that. Their down-to-earth (literally) concept provides real-time irrigation and agricultural field management with inexpensive hardware and sophisticated software. Their matrix of sensors records soil moisture, salinity, soil temperature and climate data from crop fields every 15 minutes. Doing this eliminates guesswork, optimizing use of electricity, water, and pesticides. Avoiding over- and under-watering maximizes crop yield while minimizing use of resources. It’s a win-win.
But innovation and farming are strange bedfellows. Apparently, farmers didn’t all jump at the opportunity. I did some crop disease modelling work for PureSense a few years back. Their employees told me that a common response to showing farmers that their neighbors had substantially increased yield using PureSense was along the lines of, “we’re doing ok with what we’ve got…” Perhaps we shouldn’t be surprised. Not too long ago, farmers who experimented too wildly left no progeny.
The ever fascinating Jethro Tull, inventor of the modern seed drill and many other revolutionary farming gadgets in the early 1700s, was flabbergasted at the reluctance of farmers to adopt his tools and methods. Tull wrote on Soil and Civilization, predicting that future people would have easier lives, since “the Produce of Land Will be Increased, and the Usual Expence Lessened” through a scientific (though that word is an anachronism) approach to agriculture.
The editor of the 2nd edition of his Horse-hoeing Husbandry, Or, An Essay on the Principles of Vegetation and Tillage echoed Tull’s astonishment at farmers’ behavior.
How it has happened that a Method of Culture which proposes such advantages to those who shall duly prosecute it, hath been so long neglected in this Country, may be matter of Surprize to such as are not acquainted with the Characters of the Men on whom the Practice thereof depends; but to those who know them thoroughly it can be none. For it is certain that very few of them can be prevailed on to alter their usual Methods upon any consideration; though they are convinced that their continuing therein disables them from paying their Rents, and maintaining their Families.
And, what is still more to be lamented, these People are so much attached to their old Customs, that they are not only averse to alter them themselves, but are moreover industrious to prevent others from succeeding, who attempt to introduce anything new; and indeed have it too generally in their Power, to defeat any Scheme which is not agreeable to their own Notions; seeing it must be executed by the same sort of Hands.
Tull could have predicted PureSense’s demise. I think its employees could have as well. GlassDoor comments suggested that PureSense needed “a more devoted sales staff.” That is likely an understatement given the market. A more creative sales model might be more on the mark. Knowing that farmers, even while wincing at ever-shrinking margins, will cling to their established methods for better or worse, PureSense should perhaps have gotten closer to the culture of farming.
PureSense’s possible failure to tap into farmers’ psyche aside, America’s vulnerability to futuristic technobabble is no doubt a major funding hurdle. You’d think that USDA REAP loan providers and NRCS Conservation Innovation Grants programs would be lining up at their door. But I suspect crop efficiency pales in wow factor compared to a cylindrical tower of solar cells that somehow magically increases the area of sun-facing photovoltaics (hint: Solyndra’s actual efficiency was about 8.5%, a far cry from their claims that got them half a billion from the Obama administration).
Ozzie Zehner nailed this problem in Green Illusions. In his chapter on the alternative-energy fetish, he discusses energy pornographers, the enviro-techno-enthusiasts who jump to spend billions on dubious green tech that yields less benefit than home insulation and proper tire inflation would. Insulation, light rail, and LED lighting aren’t sexy; biofuels, advanced solar, and stratospheric wind turbines are. Jethro Tull would not have been surprised that modern farmers are as resistant to change as those of 18th century Berkshire. But I think he’d be appalled to learn the extent to which modern tech press, business and government line up for physics-defying snake oil while ignoring something as fundamental as agriculture.
As I finished writing this I learned that Jain Irrigation has just acquired the assets of PureSense and has pledged a long-term commitment to the PureSense platform.
Jethro Tull smiles.
In a post on Richard Feynman and philosophy of science, I suggested that engineers would benefit from a class in philosophy of science. A student recently asked if I meant to say that a course in philosophy would make engineers better at engineering – or better philosophers. Better engineers, I said.
Here’s an example from my recent work as an engineer that drives the point home.
I was reviewing an FMEA (Failure Mode Effects Analysis) prepared by a high-priced consultancy and encountered many cases where a critical failure mode had been deemed highly improbable on the basis that the FMEA was for a mature system with no known failures.
How many hours of operation has this system actually seen, I asked. The response indicated about 10 thousand hours total.
I said on that basis we could assume a failure rate of about one per 10,001 hours. The direct cost of the failure was about $1.5 million. Thus the “expected value” (or “mathematical expectation” – the probabilistic cost of the loss) of this failure mode in a 160-hour mission is $24,000, or about $300,000 per year (excluding any secondary effects such as damaged reputation). With that number in mind, I asked the client if they wanted to consider further mitigation by adding monitoring circuitry.
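The arithmetic is easy to reproduce. Here is a minimal sketch – my reconstruction, not the author’s code – where the 2,000 operating hours per year is an assumption inferred from the $300,000 annual figure:

```python
# Expected-loss arithmetic for the unmonitored failure mode.
# Assumptions (mine): ANNUAL_HOURS = 2,000, chosen because it reproduces
# the ~$300,000/year figure quoted in the text.
FAILURE_RATE = 1 / 10_001      # failures per hour, from zero failures in ~10k hours
COST_PER_FAILURE = 1_500_000   # direct cost of one failure, dollars
MISSION_HOURS = 160
ANNUAL_HOURS = 2_000

def expected_loss(exposure_hours: float) -> float:
    """Probabilistic cost of the failure mode over a given exposure time."""
    return FAILURE_RATE * exposure_hours * COST_PER_FAILURE

mission_loss = expected_loss(MISSION_HOURS)   # about $24,000 per mission
annual_loss = expected_loss(ANNUAL_HOURS)     # about $300,000 per year
```

That per-year number is the benchmark against which the cost of added monitoring circuitry would be weighed.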
I was challenged on the failure rate I used. It was, after all, a mature, ten year old system with no recorded failures of this type.
Here’s where the analytic philosophy course those consultants never took would have been useful.
You simply cannot justify calling a failure mode extremely rare based on evidence that it is at least somewhat rare. All unique events – like the massive rotor failure that took out all three hydraulic systems of a DC-10 in Sioux City – were very rare before they happened.
The authors of the FMEA I was reviewing were using unjustifiable inductive reasoning. Philosopher David Hume debugged this thoroughly in his 1739 A Treatise of Human Nature.
Hume concluded that there simply is no rational or deductive basis for induction, the belief that the future will be like the past.
Hume understood that, despite the lack of justification for induction, betting against the sun rising tomorrow was not a good strategy either. But this is a matter of pragmatism, not of rationality. A bet against the sunrise would mean getting behind counter-induction; and there’s no rational justification for that either.
In the case of the failure mode not yet observed, however, there is ample justification for counter-induction. All mechanical parts and all human operations necessarily have nonzero failure or error rates. In the world of failure modeling, the knowledge “known pretty good” does not support the proposition “probably extremely good”, no matter how natural the step between them feels.
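One standard way to quantify how little “known pretty good” buys you – my addition, not something the post invokes – is an upper confidence bound on the failure rate after a run of failure-free hours. With zero observed failures, the exact 95% bound is -ln(0.05)/n, the familiar “rule of three” (roughly 3/n):

```python
import math

def upper_bound_rate(failure_free_hours: float, confidence: float = 0.95) -> float:
    """Upper confidence bound on the hourly failure rate given zero
    observed failures in failure_free_hours of operation."""
    return -math.log(1 - confidence) / failure_free_hours

# 10,000 clean hours justify only "rate is probably below ~3 per 10,000 hours"
bound = upper_bound_rate(10_000)
```

So 10,000 failure-free hours license a claim no stronger than “probably fewer than about one failure per 3,300 hours” – nowhere near “extremely rare” for a $1.5 million failure mode.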
Hume’s problem of induction, despite the efforts of Immanuel Kant and the McKinsey consulting firm, has not been solved.
A fabulously entertaining – in my view – expression of the problem of induction was given by philosopher Carl Hempel in 1965.
Hempel observed that we tend to take each new observation of a black crow as incrementally supporting the inductive conclusion that all crows are black. Deductive logic tells us that if a conditional statement is true, its contrapositive is also true, since the statement and its contrapositive are logically equivalent. Thus if all crows are black then all non-black things are non-crow.
It then follows that if each observation of black crows is evidence that all crows are black (compare: each observation of no failure is evidence that no failure will occur), then each observation of a non-black non-crow is also evidence that all crows are black.
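The logical equivalence doing the work here can be checked by brute force – a toy illustration, not anything from Hempel:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material conditional: p -> q is false only when p is true and q is false."""
    return (not p) or q

# A conditional and its contrapositive agree on every assignment:
# (p -> q) is equivalent to (not q -> not p).
for p, q in product([False, True], repeat=2):
    assert implies(p, q) == implies(not q, not p)

# For the ravens: p = "x is a crow", q = "x is black". "All crows are black"
# and "all non-black things are non-crows" hold in exactly the same worlds,
# which is how the red shirt sneaks in as "evidence".
```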
Following this line, my red shirt is confirming evidence for the proposition that all crows are black. It’s a hard argument to oppose, but it simply does not “feel” right to most people.
Many try to salvage the situation by suggesting that observing that my shirt is red is in fact evidence that all crows are black, but provides only unimaginably small support to that proposition.
But pushing the thing just a bit further destroys even this attempt at rescuing induction from the clutches of analysis.
If my red shirt gives a tiny bit of evidence that all crows are black, it then also gives equal support to the proposition that all crows are white. After all, my red shirt is a non-white non-crow.
This is a slightly abbreviated repost of a piece by the same name that I posted two years ago today. It was reblogged in a few places, including, oddly, the site of a European design school. I was surprised at the high ratio of praise to condemnation this generated. For a thoughtful opposing view, see this piece on the Censemaking blog. Two years later, Design Thinking appears to have the same degree of promise, the same advocates and detractors, and even more misappropriation and co-opting.
Design Thinking is getting a new life. We should bury it instead. Here’s why.
Its Humble Origins
In 1979 Bruce Archer, renowned engineer and professor at the Royal College of Art, wrote in a Design Studies paper,
“There exists a designerly way of thinking and communicating that is both different from scientific and scholarly ways of thinking and communicating, and as powerful as scientific and scholarly methods of inquiry when applied to its own kinds of problems.”
Innocent enough in context, Archer’s statement was likely the impetus for the problematic term, Design Thinking. Archer convincingly argued that design warranted a third fundamental area of education along with science and humanities. The next year Bryan Lawson at University of Sheffield wrote How Designers Think, now in 4th edition. Peter Rowe then authored Design Thinking in the mid 1980s. At that time, design thinking mainly referred to thinking about design and the mental process of designing well. In the mid 1990s, management consultancies, seeking new approaches to sell to clients looking outside their box for a competitive edge, pounced on Design Thinking. Design Thinking then transformed into a conceptual framework, a design-centered management initiative, that deified a narrow subset of people engaged in design – those who defined the shape of products, typically called “designers.”
These designers – again, a subset of those Archer was addressing – think differently, Lawson told us. His point was valid. But many professionals – some of them designers – read much more into his observation. Many readers inferred that designers have a special – almost mystical – way of knowing. Designers, suddenly with guru status, were in demand for advisory roles. Design firms didn’t at all mind becoming management consultancies and being put in the position of advising CEOs not only on product definition but on matters ranging from personnel to market segment analysis. It paid well and designers found the view from atop this new pedestal refreshing. But any value that may have existed from teaching “designerly” ways to paper pushers, bean counters and silo builders deflated as Design Thinking was then reshaped into another n-step improvement process by legacy consulting firms.
If you find my summary overly cynical, consider that Bruce Nussbaum, once one of design thinking’s most vocal advocates, calls design thinking a failed experiment. Don Norman, IDEO fellow and former VP of Apple, calls the idea that designers possess a creative thought process superior to everyone else’s “a myth lacking any evidence.” He sees Design Thinking as now being a public relations term aimed at mystifying an ineffective approach to convince business that designers can add value to problems like healthcare, pollution, and organizational dynamics. It’s a term that needs to die, says Norman. Peter Merholz, president of Adaptive Path, calls BusinessWeek’s recent praise of design thinking “fetishistic.” He facetiously suggests that to fix things, you can simply “apply some right-brained turtleneck-wearing ‘creatives,’ ‘ideating’ tons of concepts … out of whole cloth.”
Analysis and Synthesis Again
Misunderstood science contributed to the early days of Design Thinking in the same way that it informed Systems Thinking. As with Systems Thinking, confusion about the relationship between analysis and synthesis was fundamental to the development of Design Thinking. Recall that in science, synthesis is the process of inferring effects from given causes; whereas analysis is the route by which we seek the causes of observed effects. Loosely speaking, using this first definition of synthesis, analysis is the opposite of synthesis. In broader usage synthesis indicates combining components to form something new that has properties not found in its components (we’ll call this definition 2). I’ll touch on the consequence of conflating the two definitions of synthesis below.
In How Designers Think, Lawson performed a famous experiment on two groups, one of architects and one of scientists, involving combining colored blocks to achieve a specified design, where some of the rules about block combinations were revealed only by experimentation. The architects did better than the scientists in this test. Lawson repeated the experiment with groups of students just entering educational programs for scientists and architects. Both of these groups did worse than their trained counterparts. From this experiment Lawson concluded that the educational experience of the different professions caused the difference in thinking styles, while acknowledging that those more adept at thinking in the abstract might be more inclined toward architecture than science.
Lawson concludes that the scientists tried to maximize the information available to them about the allowed combinations; i.e., they sought to identify the governing rules. In contrast, the architects, he concluded, aimed directly at achieving the desired result, only replacing blocks when rules emerged to show the attempted arrangement unworkable or disallowed. From these conclusions about why the groups behaved the way he observed them to behave, Lawson secondarily concluded that:
The essential difference between these two strategies is that while the scientists focused their attention on discovering the rule, the architects were obsessed with achieving the desired result. The scientists adopted a generally problem-focused strategy and the architects a solution-focused strategy.
Lawson’s work is fascinating, and How Designers Think is still a great read 30 years later; but there are huge leaps of inference in his conclusions summarized above. Further, the choice of language is opportunistic. A simpler reading of the facts (one less reliant on characterizing the participants’ states of mind and on semantics) might be that architects are better at building structures (architecting) than are scientists. A likely cause is that architects are trained to build structures and scientists are not. An experiment involving “design” of a corrosion-resistant steel alloy might well find scientists to be more creative (successful at creating or synthesizing such a result).
Lawson correctly observes that, generally speaking, architects learn about the nature of the problem largely as a result of trying out solutions, whereas scientists set out specifically to study the problem to discover the relevant principles. Presumably, most engineers would fall somewhere between these extremes. While trying out solutions might not be universally applicable (not a good choice for tall buildings, reactors and aircraft) scientists, business managers, and many others too often forget to use the “designerly” approach to challenges – including trying out different solutions early in the game. Further, anyone who has seen corporate analysis-paralysis in action (inaction) can readily see where more architect-style thinking might be useful in many business problems. However, much that has been built on Lawson’s findings cannot bear the weight of real business.
Design – A Remedy for Destructive Science?
In “Designerly Ways of Knowing,” a 1982 paper in Design Studies, Nigel Cross concluded from Lawson’s work that:
These experiments suggest that scientists problem-solve by analysis, whereas designers problem-solve by synthesis.
Cross’s statement – quoted ad nauseam by the worst hucksters of Design Thinking – has several logical problems, especially when removed from its context. First, assuming Lawson’s findings correct, Cross erroneously equates rule discovery (how scientists solve problems) with analysis. Second, it implies that analysis (seeking causes for observed effects) is the opposite not of definition 1 of synthesis above but of definition 2 (building something new out of components). Thus by substitution, the reader infers that building something is the opposite of analyzing something. This position is obviously wrong on logical grounds, yet is deeply ingrained in popular thought and in many introductions to Design Thinking.
The error is due to choice of language, choice of examples, and semantic equivocation. Analysis of composition differs from analysis of function. Further, analysis of composition can be physical or conceptual. The destructive connotation of analysis only applies when value judgment is attached to physical decomposition. You analyze a frog by dissecting it (murderer!). You analyze a clock by disassembling it – no, by tearing it apart. This wording needlessly condemns the concept of analysis from the start. But what if you analyze the compressive strength of stone by building a tower of stone blocks? Or if you analyze trends by building software? How about analyzing electrical components by building a circuit? And what of Lawson’s architects, who analyzed the feasibility of certain arrangements of blocks by using a solution-focused strategy? In these examples analysis appears less villainous.
In its original context, Cross’s analysis-synthesis statement – though technically incorrect – makes a point. We gather that architects aim initially for a satisfactory solution, which they then seek to refine if possible, rather than for methodical discovery of the parameters of the problem. Despite providing fodder for less thoughtful advocates of Design Thinking, Cross advanced the field by making a solid case for the value of design education, defending his position that such education develops skills for solving real-world, ill-defined problems, and promotes visual thinking and iconic modes of cognition. It’s unfortunate that his analysis-synthesis quote has been put to such facile use.
For Archer, Lawson, and Cross, Design Thinking was largely about design, design education, and the insights that good design skills bring, such as welcoming new points of view and fresh insights, challenging implicit constraints, and conscious avoidance of stomping on the creative spirit. But Design Thinking after the mid 1990s set unrealistic goals. It wasn’t just Design Thinking’s reliance on a shaky conception of analysis and synthesis that set it adrift. It was the expansion of scope and the mark left by its corporate usurpers, subjecting the term to endless redefinition and reducing it to jargon. While Tim Brown’s Change by Design does venture fairly far into the realm of corporate renewal, he still tends to keep design on center stage. But in the writings of more ambitious gurus, Design Thinking has strayed far from its roots. For Thomas Lockwood (Design Thinking: Integrating Innovation, Customer Experience, and Brand Value) Design Thinking seems a transformation of consciousness that will not only nourish corporate creativity but will cure societal ills, fix the economy and rescue the environment.
A recent WSJ article explains that Design Thinking “uses close, almost anthropological observation of people to gain insight into problems.” Search Twitter for Design Thinking and you’ll find recent tweets from initiates having discovered this cutting-edge concept: “Kick off your week with a new way of thinking: Design Thinking.” “Supply chain thought leadership through Design Thinking.” “Use design thinking to find the right-fit job.” One advocate proclaims Design Thinking to be the means to overcome emotional resistance to change.
Don Norman is on the mark when he reminds us that radical breakthrough ideas and creative thinking somehow managed to shape history before the advent of Design Thinking. Norman observes, “‘Design Thinking’ is what creative people in all disciplines have always done.” Breakthroughs happen when people find fresh insights, break outmoded rules, and get new perspectives through conscious effort – all without arcane modes of thinking.
Rational Thinking – The Next Old Thing
Design Thinking has lost its focus – and perhaps its mind. The term has been redefined to the point of absurdity. And its overworked referent has drifted from an attitude and guiding principle to yet another hackneyed process in a long line of bankrupt business improvement initiatives, passionately embraced by amnesic devotees for a few months until the next one comes along. This might be the inevitable fate of brands that no one owns (e.g., “Design Thinking”) spawned by innovators, put into the public domain, and hijacked by consultancies that prey on business managers seeking that infusion of quick-transformation magic.
In short, Design Thinking is hopelessly contaminated. There’s too much sleaze in the field. Let’s bury it and get back to basics like good design. Everyone already knows that solution-focus is as essential as problem-focus. Stop arguing the point. If good design doesn’t convince the world that design should be fully integrated into business and society, another over-caffeinated Design Thinking program isn’t likely to do so either.