Bill Storage


Is Clean Energy a Wicked Problem? – Part 2

William Storage           19 Sep 2012
Visiting Scholar, UC Berkeley Science, Technology & Society Center

In the last post I looked at Rittel and Webber’s definition of wicked problem toward determining whether clean energy met that definition. Answering that involves figuring out what we mean by clean energy.

The clean energy problem is closely linked to the issue of climate change, though the two are not identical. The climate change problem is usually taken to mean that, given that anthropogenic warming has occurred and will continue unless greenhouse gas emissions are substantially reduced (note this is a premise I don’t care to argue about here), either geoengineering or dramatic changes to energy production techniques are urgently needed. Clean energy assumes that dramatic changes to energy production techniques are urgently needed to correct man-made climate change, along with other constraints and provisions.

The energy problem also includes the need for a continuous supply of energy for the lifetime of the human race, along with getting that energy to developing nations. That is, even if coal could be made clean, through carbon sequestration or similar, the energy problem would not be solved by burning coal, since coal is in finite supply. We may disagree about the size of that supply, but not about its finitude. Security of supply must be included too. If oil were clean and in near-infinite supply, but only sourced by hostile governments, the design of an energy production system should accommodate that constraint. Terms like green, sustainable, renewable, and alternative are off the table for this discussion. They are too nebulous, ideological, or overloaded. Clean does not necessarily imply renewable. If coal were infinite and clean, it would suffice, as would fusion if it existed. Further, many energy sources today called renewable may not be sufficiently clean for indefinite use, since their energy production densities are too low to supply a significant portion of global demand without major modifications to the earth. More on that in a later post.

Others have put far more thought into defining long term energy requirements than I, so I’ll draw from some experts in the field. Combining David MacKay’s three motivations (Sustainable Energy – Without the Hot Air, p. 5) with The Hartwell Paper’s three overarching objectives yields something along these lines:

  • The energy supply cannot be finite (in practical terms).
  • It must be secure.
  • It cannot change the climate.
  • It must ensure energy access for all.

I’m specifically not including adaptation, and I’m aware that we can quibble over whether universal energy access is a principle, a constraint, or a goal. Still, I think this is a decent working set. The beginning of an attempt to convert these goals into a requirement might look something like this:

A means of providing sufficient energy for the human race to flourish for 10,000 years without significantly altering the surface and atmosphere of the planet in the acquisition of energy (population growth may require extensive modification of the planet, but that’s out of scope here).

You might then attempt to quantify “flourish” and “significantly alter” by coming up with an energy quantity per person, a percentage of earth’s surface devoted to energy production, and an allowable carbon production per unit of energy.
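As a hypothetical illustration of that quantification step, the sketch below checks an assumed per-capita power budget against an assumed land-use constraint and production density. Every number in it is an illustrative assumption of mine, not a figure from any study:

```python
# Back-of-envelope feasibility check of the kind described above.
# All numeric values are illustrative assumptions.

WORLD_POPULATION = 10e9          # assumed future population
POWER_PER_PERSON_W = 2500.0      # assumed per-capita power budget for "flourishing", watts
EARTH_LAND_AREA_M2 = 1.5e14      # approximate land area of Earth, square meters
MAX_LAND_FRACTION = 0.01         # assumed allowance: 1% of land devoted to energy production
PRODUCTION_DENSITY_W_M2 = 5.0    # assumed power density of a low-density scheme, W/m^2

demand_w = WORLD_POPULATION * POWER_PER_PERSON_W
supply_w = EARTH_LAND_AREA_M2 * MAX_LAND_FRACTION * PRODUCTION_DENSITY_W_M2

print(f"demand: {demand_w:.2e} W")
print(f"supply: {supply_w:.2e} W")
print("meets land-use constraint" if supply_w >= demand_w else "fails land-use constraint")
```

With these made-up numbers the scheme fails the constraint, which is exactly the kind of conclusion the earlier point about low production densities anticipates; the argument is over the inputs, not the arithmetic.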

I’m not saying getting agreement on the numbers will be easy or even possible; I’m merely outlining the process toward the goal of deciding how wicked the energy problem is.

With this in mind let’s have a look at Rittel’s properties of wicked problems against the energy problem as summarized above to see which of them apply (Yes or No, below). Refer to yesterday’s post for more detail on each of the 10 properties.

1. No definitive formulation – solving the problem is identical to understanding its nature: No
Understanding the nature of clean energy and even anthropogenic climate change is mostly independent from solving it. The social components of climate change, energy demand and energy production are not mysterious or unpredictable. Economists and scientists have had great success in that area. The vagaries of climate prediction and extent to which climate change is manmade are rather independent of the solutions that might be put in place based on any such predictions and analyses. This one clearly does not apply; clean energy is not wicked based on this criterion of wickedness.

2. No stopping rule: No
Since atmospheric carbon, temperature, population, sea level, disease, starvation, and energy production and consumption are reasonably measurable, there clearly is a stopping rule in place for clean energy.

3. No formal decision rules – better/worse, not true/false: Yes
One might argue that if a set of metrics could be agreed upon, clean energy actually does become true/false, but I don’t think that is fair to Rittel’s intent for this rule.

4a. No ultimate test of solution: No
For the same reasons stated in rule 1, clean energy solutions are reasonably testable.

4b. Unintended consequences: Yes
Leaving geoengineering out of the picture, we’d still need to watch for surprises, especially from low density production schemes that would involve large transformations, e.g., massive solar or wind farms, tide and ocean wave modification, geothermal plants, and carbon sequestration schemes.

5. One-shot operation – no second chance: No
Some concern is warranted over the ramifications of expending all of a government leader’s political capital on short-term measures with trivial contribution toward a solution; but overall, energy initiatives are very tolerant of experimentation and learning by trial. This is especially true on a global scale, even with disasters like Chernobyl and red herrings like fuel cells in the 1990s.

6. No enumerable or exhaustively describable set of potential solutions: No
Nature, physics and economics combine to yield a finite set of policy and technology components to a solution. Yes, there are infinite permutations of the components, but this is always true. In any case, the potential solutions and their elements are enumerable.

7. Unique problem: Yes
Aren’t they all?

8. The problem is a symptom of another problem: Yes
Human breeding habits, materialism, inequitable distribution of wealth, sexy car ads, inefficiency, indifference toward nature, bad science education, the Roman Empire and the Han Dynasty are all problems of which the need for clean energy is symptomatic.

9. Numerous explanations: Yes
Yes, for the same reasons listed in number 8 above. The numerous explanations are in fact relevant, because they could materially affect the solution. For example, realizing that waste and inefficiency are significant can lead to product requirements that result in a lower figure for per-capita energy requirements. Japan has had remarkable success at this.

10. Planner has no right to be wrong: Yes
In the case of clean energy, answering Yes for item 10 seems to be in conflict with answering No for 4a and 5. Repeated readings of Rittel and Webber have not allowed me to see a real difference between this and number 5 above. The difference between them may be more apparent in problems whose scope is urban planning, the original context of Rittel and Webber. Nevertheless, for the sake of charity in argument, I’ll answer Yes here to represent the voice that, in the long haul, we have to get this right or civilization may fail.

So for Rittel’s ten properties, here presented as eleven, we have five No and six Yes responses. On that basis, clean energy can be said to be a half wicked problem. Systems engineers, product managers and designers might say that all engineering and design problems are partly – perhaps equally – wicked. This and other considerations make me wonder whether characterizing a problem as wicked has any practical use.

That will be the topic of my next post. I vow to make it more controversial.

—————————-

Photo: “Nowhere to Run Anymore” by Thomas Hawk on Flickr


Is Clean Energy a Wicked Problem?

Deciding whether clean energy is a wicked problem involves two tasks. One is to define wicked problem and the other is a formulation of the clean energy objective.

Advocates of Design Thinking and Systems Thinking, among others, are fond of the term wicked problem. Popular examples include climate change/clean energy, drug trafficking, homeland security, nuclear energy, natural hazards and healthcare. In the next few posts, I’ll argue that the characterization of clean energy as a wicked problem is, at best, not very useful and, at worst, detrimental to the stated goals of those who use it. I think the clean energy challenge is partly wicked – but only partly – and not for most of the reasons one might guess. In upcoming posts I’ll also argue that to some degree the clean energy problem is made wicked by characterizing it as wicked. There is a Keyser Söze effect (a seemingly omnipotent criminal whose omnipotence derives from his scaremongering) at work here. It demoralizes us and misdirects thinking that could be put to better use solving problems. My previous post, on philosopher Richard Rorty, ends with Rorty’s appeal that if a solution to the problem of climate and energy exists, it is a matter for the engineers. Indeed. Let’s get to work.

The term wicked problem was first used around 1967 in lectures by Horst Rittel of UC Berkeley according to systems guru West Churchman, who first used it in print, in reference to Rittel’s lecture. The context of Rittel’s use of the term was social policy and urban planning. Six years later, Rittel and Melvin Webber defined wicked problems in detail in “Dilemmas in a General Theory of Planning,” published in the journal of the Society for Policy Sciences.

Rittel and Webber list ten distinguishing properties of the planning-type problems they classify as wicked. They note that wicked does not mean that anything in the problem space is ethically deplorable or that malicious intent exists, but that such problems are tricky, malignant, vicious and aggressive.

Both Rittel & Webber and Churchman do, however, go to some length to describe an ethical issue related to wicked problems. This important point is lost in most modern use of the term. The authors indicate that it is usually morally objectionable for a planner to treat a wicked problem as though it were a tame one, or to tame only part of a wicked problem. Churchman says that taming part of a wicked problem, but not the whole, is morally wrong, because doing so can create the illusion of safety where danger exists. He then calls for a new level of maturity and morality in operations research and management science. Churchman urges that his profession not only avoid telling management what it wants to hear, but that operations researchers should not tame parts of wicked problems even if they warn management that only part of a problem was solved. It takes more than a verbal caveat, said Churchman, to convince the management that a solution is incomplete. For the energy/climate problem, it seems to me this aspect of Rittel, Webber, and Churchman’s work may be considerably more important than examining the wickedness of the energy/climate problem. More on that in a later post.

Rittel’s ten distinguishing properties of wicked problems are listed below. These descriptions are excerpted directly from Rittel’s wording with very minor additions and clarifications. I’ve split Rittel’s item number 4 into two parts because I think he inadvertently connects two related but distinct characteristics – solution testability and likelihood of unexpected consequences. I differentiate these because non-function and malfunction (and the likelihood of each) are fundamentally different engineering concerns.

1. There is no definitive formulation of a wicked problem. In order to describe a wicked-problem in sufficient detail, one has to develop an exhaustive inventory of all conceivable solutions ahead of time. The process of solving the problem is identical with the process of understanding its nature.

2. Wicked problems have no stopping rule. You never know whether you’re finished.

3. Solutions to wicked problems are not true-or-false, but better-or-worse. Parties may be equally interested or entitled to judge the solutions, but none has the power to set formal decision rules to determine correctness.

4a. There is no immediate and no ultimate test of a solution to a wicked problem.

4b. Wicked problems are prone to unintended consequences.

5. Every solution to a wicked problem is a “one-shot operation”; because there is no opportunity to learn by trial-and-error, every attempt counts significantly. Every implemented solution is consequential, leaving “traces” that cannot be undone.

6. Wicked problems do not have an enumerable (or an exhaustively describable) set of potential solutions.

7. Every wicked problem is essentially unique. Despite long lists of similarities between a current problem and a previous one, there always might be an additional distinguishing property that is of overriding importance. The conditions in a city constructing a subway may look similar to the conditions in San Francisco, say; but planners would be ill-advised to transfer the San Francisco solutions directly. Differences in commuter habits or residential patterns may far outweigh similarities in subway layout, downtown layout and the rest.

8. Every wicked problem can be considered to be a symptom of another problem. The process of resolving the problem starts with the search for causal explanation of the discrepancy. Removal of that cause poses another problem of which the original problem is a “symptom.”

9. The existence of a discrepancy representing a wicked problem can be explained in numerous ways. The choice of explanation determines the nature of the problem’s resolution. Crime in the streets can be explained by not enough police, by too many criminals, by inadequate laws, too many police, cultural deprivation, deficient opportunity, too many guns, etc.

10. The planner has no right to be wrong. As Karl Popper argues in The Logic of Scientific Discovery, it is a principle of science that solutions to problems are only hypotheses offered for refutation. In the world of planning and wicked problems no such immunity is tolerated.

The definition of wicked problem has remained consistent through its usage. It appears in Design Thinking and climate-change circles often, with substantially the same meaning, usually referencing Rittel and Webber. Given that consistency of usage, we can next take a crack at what we mean when we say we want clean energy. With a useful definition of wicked and a fair formulation of a clean energy objective, we can then look at whether clean energy is a wicked problem and how that characterization might impact planning and design of solutions.

More on that tomorrow.


Richard Rorty: A Matter for the Engineers

William Storage           13 Sep 2012
Visiting Scholar, UC Berkeley Science, Technology & Society Center

Richard Rorty (1931-2007) was arguably the most controversial philosopher in recent history. Unarguably, he was the most entertaining. Profoundly influenced by Thomas Kuhn, Rorty is fascinating and inspirational, even for engineers and scientists.

Rorty’s thought defied classification – literally; encyclopedias struggle to pin philosophical categories to him. He felt that confining yourself to a single category leads to personal stagnation on all levels. An interview excerpt at the end of this post ends with a casual yet weighty statement of his confidence in engineers’ ability to save the world.

Unlike many of his contemporaries, Rorty looked at familiar things in different light – and could explain his position in plain English. I never found much of Heidegger to be coherent, let alone important. No such problem with Dick Rorty.

Rorty could simplify arcane philosophical concepts. He saw similarities where others saw differences, being mostly rejected by schools of thought he drew from. This was especially true for pragmatism. Often accused of hijacking this term, Rorty offered that pragmatism is a vague, ambiguous, and overworked word, but nonetheless, “it names the chief glory of our country’s intellectual tradition.” He was enamored with moral and scientific progress, and often glowed with optimism and hope while his contemporaries brooded in murky, nihilistic dungeons.

Richard Rorty. Photo by Mary Rorty. Used by permission.

Rorty called himself a “Kuhnian” apart from those Kuhnians for whom The Structure of Scientific Revolutions justified moral relativism and epistemic nihilism. Rorty’s critics in the hard sciences – at least those who embrace Kuhn – have gone to great lengths to distance Kuhn from Rorty. Philosophers have done the same, perhaps a bit sore from Rorty’s denigration of analytic philosophy and his insistence that philosophers have no special claim to wisdom. Kyle Cavagnini in the Spring 2012 issue of Stance (“Descriptions of Scientific Revolutions: Rorty’s Failure at Redescribing Scientific Progress”) finds that Rorty tries too hard to make Kuhn a relativist:

“Kuhn’s work provided a new framework in philosophy of science that garnered much attention, leading some of his theories to be adopted outside of the natural sciences. Unfortunately, some of these adoptions have not been faithful to Kuhn’s original theories, and at times just plain erroneous conclusions are drawn that use Kuhn as their justification. These misreadings not only detract from the power of Kuhn’s argument, but also serve to add false support for theories that Kuhn was very much against; Rorty was one such individual.”

Cavagnini may have some valid technical points. But it’s as easy to misread Rorty as to misread Kuhn. As I read Rorty, he derives from Kuhn that the authority of science has no basis beyond scientific consensus. It then follows for Rorty that institutional science and scientists have no basis for a privileged status in acquiring truth. Scientists who know their stuff shouldn’t disagree on this point. Rorty’s position is not cultural constructivism applied to science. He doesn’t remotely imply that one claim of truth – scientific or otherwise – is as good as another. In fact, Rorty explicitly argues against that position as applied to both science and ethics. Rorty then takes ideas he got from Kuhn to places that Kuhn would not have gone, without projecting his philosophical ideas onto Kuhn:

“To say that the study of the history of science, like the study of the rest of history, must be hermeneutical, and to deny (as I, but not Kuhn, would) that there is something extra called ‘rational reconstruction’ which can legitimize current scientific practice, is still not to say that the atoms, wave packages, etc., discovered by the physical scientists are creations of the human spirit.”  – Philosophy and the Mirror of Nature

“I hope to convince the reader that the dialectic within analytical philosophy, which has carried … philosophy of science from Carnap to Kuhn, needs to be carried a few steps further.” – Philosophy and the Mirror of Nature

What Rorty calls “leveling down science” is aimed at the scientism of logical positivists in philosophy – those who try to “science-up” analytic philosophy:

“I tend to view natural science as in the business of controlling and predicting things, and as largely useless for philosophical purposes” – Rorty and Pragmatism: The Philosopher Responds to his Critics

For Rorty, both modern science and modern western ethics can claim superiority over their precursors and competitors. In other words, we are perfectly capable of judging that we’ve made moral and scientific progress without a need for a privileged position of any discipline, and without any basis beyond consensus. This line of thought enabled the political right to accuse Rorty of moral relativism and at the same time the left to accuse him of bigotry and ethnocentrism. Both did vigorously. [note]

You can get a taste of Rorty from the sound and video snippets available on the web, e.g. this clip where he dresses down the standard philosophical theory of truth with an argument that would thrill mathematician Kurt Gödel:

In his 2006 Dewey Lecture in Law and Philosophy at the University of Chicago, he explains his position, neither moral absolutist nor moral relativist (though accused of being both by different factions), in praise of western progress in science and ethics.

Another example of Rorty’s nuanced position is captured on tape in Stanford’s archives of the Entitled Opinions radio program. Host Robert Harrison is an eloquent scholar and announcer, but in a 2005 Entitled Opinions interview, Rorty frustrates Harrison to the point of being tongue-tied. At some point in the discussion Rorty offers that the rest of the world should become more like America. This strikes Harrison as perverse. Harrison asks for clarification, getting a response he finds even more perverse:

Harrison: What do you mean that the rest of the world should become a lot more like America? Would it be desirable to have all the various cultures across the globe Americanize? Would that not entail some sort of loss at least at the level of diversity or certain wisdoms that go back through their own particular traditions? What would be lost in the Americanization or Norwegianization of the world?

Rorty: A great deal would be lost. A great deal was lost when the Roman Empire suppressed a lot of native cultures. A great deal was lost when the Han Empire in China suppressed a lot of native cultures […]. Whenever there’s a rise in a great power a lot of great cultures get suppressed.  That’s the price we pay for history.

Asked if this is not too high a price to pay, Rorty answers that if you could get American-style democracy around the globe, it would be a small price to have paid. Harrison is astounded, if not offended:

Harrison: Well here I’m going to speak in my own proper voice and to really disagree in this sense: that  I think governments and forms of government are the result of a whole host of contingent geographical historical factors whereby western bourgeois liberalism or democracy arose through a whole set of circumstances that played themselves out over time, and I think that [there is in] America a certain set of presumptions that our form of democracy is infinitely exportable … [and] that we can just take this model of American democracy and make it work elsewhere. I think experience has shown us that it’s not that easy.

Rorty: We can’t make it work elsewhere but people coming to our country and finding out how things are done in the democratic west can go back and try to imitate that in their own countries. They’ve often done so with considerable success. I was very impressed on a visit to Guangzhou to see a replica of the statue of Liberty in one of the city parks. It was built by the first generation of Chinese students to visit America when they got back. They built a replica of the Statue of Liberty in order to help to try to explain to the other Chinese what was so great about the country they’d come back from. And remember that a replica of the Statue of Liberty was carried by the students in Tiananmen Square.

Harrison (agitated): Well OK but that’s one way. What if you… Why can’t we go to China and see a beautiful statue of the Buddha or something, and understand equally – have a moment of enlightenment and bring that statue back and say that we have something to learn from this other culture out there. And why is the statue of liberty the final transcend[ant] – you say yourself as a philosopher that you don’t – that there are no absolutes and that part of the misunderstanding in the history of philosophy is that there are no absolutes. It sounds like that for you the Statue of Liberty is an absolute.

Rorty: How about it’s the best thing anybody has come up with so far. It’s done more for humanity than the Buddha ever did. And it gives us something that … [interrupted]

Harrison: How can we know that!?

Rorty: From history.

Harrison: Well, for example, what do we know about the happiness of the Buddhist cultures from the inside?  Can we really know from the outside that we’re happier than they are?

Rorty: I suspect so. We’ve all had experiences in moving around from culture to culture. They’re not closed off entities, opaque to outsiders. You can talk to people raised in lots of different places about how happy they are and what they’d like.

Then it spirals down a bit further. Harrison asks Rorty if he thinks capitalism is a neutral phenomenon. Rorty replies that capitalism is the worst system imaginable except for all the others that have been tried so far. He offers that communism, nationalization of production and state capitalism were utter disasters, adding that private property and private business are the only option left until some genius comes up with a new model.

Harrison then reveals his deep concern over the environment and the free market’s effect on it, suggesting that since the human story is now shown to be embedded in the world of nature, that philosophy might entertain the topic of “life” – specifically, progressing beyond 20th century humanist utopian values in light of climate change and resource usage. Rorty offers that unless we develop fusion energy or similar, we’ve had it just as much as if the terrorists get their hands on nuclear bombs. Rorty says human life and nature are valid concerns, but that he doesn’t see that they give any reason for philosophers to start talking about life, a topic he says philosophy has thus far failed to illuminate.

This irritates Harrison greatly. At one point he curtly addresses Rorty as “my dear Dick.” Rorty’s clarification, his apparent detachment, and his brevity seem to make things worse:

Rorty: “Well suppose that we find out that it’s all going to be wiped out by an asteroid. Would you want philosophers to suddenly start thinking about asteroids? We may well collapse due to the exhaustion of natural resources but what good is it going to do for philosophers to start thinking about natural resources?”

Harrison: “Yeah but Dick there’s a difference between thinking of asteroids, which is something that is outside of human control and which is not submitted to human decision and doesn’t enter into the political sphere, and talking about something which is completely under the governance of human action. I don’t say it’s under the governance of human will, but it is human action which is bringing about the asteroid, if you like. And therefore it’s not a question of waiting around for some kind of natural disaster to happen, because we are the disaster – or one could say that we are the disaster – and that the maximization of wealth for the maximum amount of people is exactly what is putting us on this track toward a disaster.”

Rorty: Well, we’ve accommodated environmental change before. Maybe we can accommodate it again; maybe we can’t. But surely this is a matter for the engineers rather than the philosophers.

A matter for the engineers indeed.

————————————————-

Notes

1) Rorty and politics: The academic left cheered as Rorty shelled Ollie North’s run for the US Senate. As usual, not mincing words, Rorty called North a liar, a claim later repeated by Nancy Reagan. There was little cheering from the right when Rorty later had the academic left in his crosshairs; perhaps they failed to notice. In 1997 Rorty wrote that the academic left must shed its anti-Americanism and its quest for even more abusive names for “The System.” “Outside the academy, Americans still want to feel patriotic,” observed Rorty. “They still want to feel part of a nation which can take control of its destiny and make itself a better place.”

On racism, Rorty observed that the left once promoted equality by saying we were all Americans, regardless of color. By contrast, he said, the contemporary left now “urges that America should not be a melting-pot, because we need to respect one another in our differences.” He chastised the academic left for destroying any hope for a sense of commonality by highlighting differences and preserving otherness. “National pride is to countries what self-respect is to individuals,” wrote Rorty.

For Dinesh D’Souza, patriotism is no substitute for religion. D’Souza still today seems obsessed with Rorty’s having once stated his intent “to arrange things so that students who enter as bigoted, homophobic religious fundamentalists will leave college with views more like our own.” This assault on Christianity lands Rorty on a D’Souza enemy list that includes Sam Harris, Christopher Hitchens, and Richard Dawkins, D’Souza apparently unaware that Rorty’s final understanding of pragmatism included an accommodation of liberal Christianity.

2) See Richard Rorty bibliographical material and photos maintained by the Rorty family on the Stanford web site. 


Kaczynski, Gore, and Cool Headed Logicians

Yesterday I was talking to Robert Scoble about context-aware computing and we ended up on the topic of computer analysis of text. I’ve done some work in this area over the years for ancient text author attribution, cheating detection in college and professional essay exam scenarios, and for sentiment and mood analysis. A technique common to authorship studies is statistical stylometry, which aims to quantify linguistic style. Subtle but persistent differences between texts written by different authors, even when writing about the same topic or in the same genre, often emerge from statistical analysis of their writings.
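As a rough sketch of what statistical stylometry measures, the fragment below profiles a text by the relative frequencies of a few common function words – habits authors exercise largely unconsciously – and compares two texts by the distance between their profiles. The feature list and distance metric here are simplified illustrations of mine, not the method of any particular study:

```python
import re
from collections import Counter

# A handful of function words; real studies use dozens to hundreds of features.
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "for", "it", "not"]

def style_profile(text):
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1
    counts = Counter(words)
    return [counts[w] / total for w in FUNCTION_WORDS]

def style_distance(a, b):
    """Manhattan distance between two style profiles; smaller means more alike."""
    return sum(abs(x - y) for x, y in zip(style_profile(a), style_profile(b)))
```

To attribute a disputed text, one would build profiles from each candidate author’s known corpus and favor the candidate whose profile lies nearest.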

Robert was surprised to hear that Ted Kaczynski, the Unabomber, was caught because of linguistic analysis, not done by computer, but by Kaczynski’s brother and sister-in-law. Contrary to stories circulating in the world of computational linguistics and semantics, computer analysis played no part in getting a search warrant or prosecuting Kaczynski. It could have, but Kaczynski pleaded guilty before his case went to trial. The FBI did hire James Fitzgerald, a forensic linguist, to compare Kaczynski’s writings to the Unabomber’s manifesto, and Fitzgerald’s analysis figured in the pre-trial proceedings.

Analysis of text has uses beyond author attribution. Google’s indexing and search engine relies heavily on discovering the topic and contents of text. Sentiment analysis tries to guess how customers like a product based on their tweets and posts about it. But algorithmic sentiment analysis is horribly unreliable in its present state, failing to distinguish positive and negative sentiments the vast majority of the time. Social media monitoring tools have a long way to go.
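To see how fragile this is, here is a minimal lexicon-based scorer of the kind such tools build on. The word lists are hypothetical, but the weakness – counting words with no grasp of context – is the real one:

```python
import re

# Hypothetical sentiment lexicons; real ones are larger but share the weakness.
POSITIVE = {"great", "love", "awesome", "sick"}   # "sick" is positive only in some subcultures
NEGATIVE = {"terrible", "hate", "broken", "worst"}

def naive_sentiment(text):
    """Positive minus negative word counts; blind to sarcasm and context."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Sarcasm says the opposite of what it means, and a bag of words can't tell.
print(naive_sentiment("oh great, another Monday meeting"))  # scores +1, but reads as a complaint
print(naive_sentiment("this trick is sick"))                # +1 from a student; odd from a retiree
```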

The problem stems from the fact that human speech and writing are highly evolved and complex. Sarcasm is common, and relies on context to reveal that you’re saying the opposite of what you mean. Subcultures have wildly different usage for overloaded terms. Retirees rarely use “toxic” and “sick” as compliments like college students do. Even merely unwinding phrases to determine the referent of a negator is difficult for computers. Sentiment analysis and topic identification rely on nouns and verbs, which are only sometimes useful in authorship studies. Consider the following sentences:

1) The twentieth century has not been kind to the constant human striving for a sense of purpose in life.

2) The Industrial Revolution and its consequences have been a disaster for the human race.

The structure, topic, and sentiment of these sentences are similar. The first is a quote from Al Gore’s 1992 Earth in the Balance. The second is the opening statement of Kaczynski’s 1995 Unabomber manifesto, “Industrial Society and Its Future.” Even using the total corpus of works by Gore and Kaczynski, it would be difficult to guess which author wrote each sentence. However, compare the following paragraphs, one from each of these authors:

1) Modern industrial civilization, as presently organized, is colliding violently with our planet’s ecological system. The ferocity of its assault on the earth is breathtaking, and the horrific consequences are occurring so quickly as to defy our capacity to recognize them, comprehend their global implications, and organize an appropriate and timely response. Isolated pockets of resistance fighters who have experienced this juggernaut at first hand have begun to fight back in inspiring but, in the final analysis, woefully inadequate ways.

2) It is not necessary for the sake of nature to set up some chimerical utopia or any new kind of social order. Nature takes care of itself: It was a spontaneous creation that existed long before any human society, and for countless centuries, many different kinds of human societies coexisted with nature without doing it an excessive amount of damage. Only with the Industrial Revolution did the effect of human society on nature become really devastating.

Again, the topic, mood, and structure are similar. Who wrote which? Lexical analysis immediately identifies paragraph 1 as Gore and paragraph 2 as Kaczynski. Gore uses the word “juggernaut” twice in Earth in the Balance and once in The Assault on Reason. Kaczynski never uses the word in any published writing. Fitzgerald (“Using a forensic linguistic approach to track the Unabomber”, 2004) identified “chimerical,” along with “cool-headed logician,” to be Kaczynski signatures.

Don’t make too much – as some of Gore’s critics do – of the similarity between those two paragraphs. Both writers have advanced degrees from prestigious universities, they share an interest in technology and environment, and are roughly the same age. Reading further in the manifesto reveals a great difference in attitudes. Though algorithms would have a hard time with it, few human readers would identify the following excerpt with Gore (this paragraph caught my eye because of its apparent reference to Thomas Kuhn, discussed a lot here recently – both were professors at UC Berkeley):

Modern leftist philosophers tend to dismiss reason, science, objective reality and to insist that everything is culturally relative. It is true that one can ask serious questions about the foundations of scientific knowledge and about how, if at all, the concept of objective reality can be defined. But it is obvious that modern leftist philosophers are not simply cool-headed logicians systematically analyzing the foundations of knowledge.

David Kaczynski, Ted’s brother, describes his awful realization about similarity between his brother’s language and that used in the recently published manifesto:

“When Linda and I returned from our Paris vacation, the Washington Post published the Unabomber’s manifesto. After I read the first few pages, my jaw literally dropped. One particular phrase disturbed me. It said modern philosophers were not ‘cool-headed logicians.’ Ted had once said I was not a ‘cool-headed logician’, and I had never heard anyone else use that phrase.”

And on that basis, David went to the FBI, who arrested Ted in his cabin. It’s rare that you’re lucky enough to find such highly distinctive terms in authorship studies though. In my statistical stylometry work, I looked for unique or uncommon 2- to 8-word phrases (“rare pairs”, etc.) used only by two people in a larger population, and detected unwanted collaboration by that means. Most of my analysis, and that of experts far more skilled in this field than I, is not concerned with content. Much of it centers on function-word statistics – usage of pronouns, prepositions and conjunctions. Richness of vocabulary, rate of introduction of new words, and vocabulary frequency distribution also come into play. Some recent, sophisticated techniques look at characteristics of zipped text (which obviously does include content), and use Markov chains, principal component analysis and support vector machines.
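The rare-pair search can be sketched in a few lines of Python. This assumes simple whitespace tokenization; real analysis would normalize punctuation and filter out phrases common across the whole population:

```python
from collections import defaultdict

def ngrams(text, n):
    """All n-word phrases in a text, as a set."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def rare_pairs(corpora, n_min=2, n_max=8):
    """Return phrases of n_min..n_max words used by exactly two authors.

    corpora: dict mapping author name -> that author's collected text.
    A minimal sketch of the 'rare pair' idea described above.
    """
    users = defaultdict(set)
    for author, text in corpora.items():
        for n in range(n_min, n_max + 1):
            for phrase in ngrams(text, n):
                users[phrase].add(author)
    return {p: sorted(a) for p, a in users.items() if len(a) == 2}
```

Run against a toy population, a phrase like “cool headed” shared by exactly two writers falls out immediately, while phrases unique to one author, or common to all, are discarded.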

Statistical stylometry has been put to impressive use with startling and unpopular results. For over a century people have been attempting to determine whether Shakespeare wrote everything attributed to him, or whether Francis Bacon helped. More recently D. I. Holmes showed rather conclusively, using hierarchical cluster analysis, that the Book of Mormon and Book of Abraham both arose from the prophetic voice of Joseph Smith himself. Mosteller and Wallace differentiated, using function-word frequency distributions, the writing of Hamilton and Madison in the Federalist Papers. Researchers have also shown clear literary fingerprints in the writings of Jane Austen, Arthur Conan Doyle, Charles Dickens, Rudyard Kipling and Jack London. And for real fun, look into New Testament authorship studies.
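Function-word profiling can be sketched as follows; the ten-word list and nearest-profile rule here are illustrative stand-ins for the much larger word lists and formal statistical models used in the actual studies:

```python
# A toy function-word profile: relative frequencies of common
# pronouns, prepositions, conjunctions and articles, ignoring content.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it", "not", "but"]

def profile(text):
    """Relative frequency of each function word in a text."""
    words = text.lower().split()
    return [words.count(w) / len(words) for w in FUNCTION_WORDS]

def distance(p, q):
    """Euclidean distance between two profiles."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def attribute(sample, known):
    """Nearest-profile attribution: known maps author -> reference text."""
    return min(known, key=lambda a: distance(profile(sample), profile(known[a])))
```

The same distance function is what a hierarchical clustering of whole documents would operate on: merge the closest profiles first and a dendrogram of authorial voices emerges.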

Computer analysis of text is still in its infancy. I look forward to new techniques and new applications for them. Despite false starts and some exaggerated claims, this is good stuff. Given the chance, it certainly would have nailed the Unabomber. Maybe it can even determine what viewers really think of a movie.


Why Rating Systems Sometimes Work

Goodfilms is a Melbourne-based startup that aims to do a better job of recommending movies to you. Their system uses your social network, e.g., Facebook, to show you what your friends are watching, along with two attributes of films, which you rate on a 10-point scale (5 stars in half-star increments). It doesn’t appear that they include a personalized recommendation system based on collaborative filtering or similar.

In today’s Goodfilms blog post, Why Ratings Systems Don’t Work, the authors point to an XKCD cartoon identifying one of the many problems with collecting ratings from users.


The Goodfilms team says the problem with averaged rating values is that they attempt to distill an entire product down to a scalar value; that is, a number along a scale from 1 to some maximum imaginable goodness. They also suggest that histograms aren’t useful, asking how seeing the distribution of ratings for a film might possibly help you judge whether you’d like it.

Goodfilms demonstrates the point using three futuristic films, Blade Runner, Starship Troopers, and Fifth Element. The Goodfilms data shows bimodal distributions for all three films; for each film, the 2-, 3-, and 4-star ratings draw fewer votes than the 1-star and 5-star extremes.

Goodfilms goes on to say that their system gives you better guidance. Their film-quality visualization – rather than a star bar-chart and histogram – is a two-axis scatter plot of the two attributes you rate for films on their site: quality and rewatchability (how much you’d like to watch that film again).

An astute engineer or economist might note that Goodfilms assumes quality and rewatchability to be independent variables, but they clearly are not. The relationship between the two attributes is complex and may vary greatly between film watchers. Regardless of the details of how those two variables interact, they are not independent; few viewers would rate something low in quality and high in rewatchability.

But even if these attributes were independent of each other, films have many other attributes that might be more telling – length, realism, character development, skin exposure, originality, clarity of intent, provocation, explosion count, and an endless list of others. Even if you included 100 such variables (and had a magic visualization tool for such data), you might not capture the sentiment of a crowd of viewers about the film, let alone be able to decide whether you would like it based on that data. Now if you had some deep knowledge of how you, as an individual, compare, in aesthetics, values and mental process, to your Facebook friends and to a larger population of viewers – then we’d really know something, but that kind of analysis is still some distance out.

Goodfilms is correct in concluding that rating systems have their perils; but their solution, while perhaps a step in the right direction, is naive. The problem with rating systems is not that they capture too few attributes of the rated product or present results poorly. The problem lies in soft things. Rating systems tend to deal more with attributes of products than with attributes of the raters of those products. Recommendation systems don’t account for social influence well at all. And there’s the matter of actual preference versus stated preference; we sometimes lie about what we like, even to ourselves.

Social influence, as I’ve noted in past posts, is profound, yet its sources can be difficult to isolate. In rating systems, knowledge of how peers or a broader population have rated what you’re about to rate strongly influences the outcome of ratings. Experiments by Salganik and others on this (discussed in this post) are truly mind boggling, showing that weak information about group sentiment not only exaggerates preferences but greatly destabilizes the system.

The Goodfilms data shows bimodal distributions for all three films; the 1-star and 5-star vote counts exceed each of the 2-, 3-, and 4-star counts. Interestingly, this is much less true for Imdb’s data. So what’s the difference? Goodfilms’ rating counts for these movies range from about 900 to 1800. Imdb has hundreds of thousands of votes for these films.

As described in a previous post (Wisdom and Madness of the Yelp Crowd), many ratings sites for various products have bimodal distributions when rating count is low, but more normally distributed votes as the count increases. It may be that the first people who rate feel the need to exaggerate their preferences to be heard. Any sentiment above the middle tends to get cast as 5 stars; otherwise it’s 1 star. As more votes are cast, one of these extremes becomes dominant and attracts voters. Now just one vote in a crowd, those who rate later aren’t compelled to be extreme, yet are influenced by their knowledge of how others voted. This still results in exaggeration of group preferences (data is left- or right-skewed) through the psychological pressure to conform, but eliminates the bimodal distribution seen in the early phase of rating for a given product. There is also a tendency at Imdb for a film to be rated higher when it’s new than a year later. Bias originating in suggestion from experts surely plays a role in this too; advertising works.
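A toy simulation illustrates the hypothesized dynamic. The early-rater rule and the conformity parameter below are illustrative guesses, not fitted to any rating site’s data:

```python
import random

def simulate_ratings(n_raters=1000, n_early=50, conformity=0.4, seed=0):
    """Toy model: early raters exaggerate to the extremes to be heard;
    later raters are pulled toward the visible running average.
    Returns (vote counts for the early phase, vote counts overall), stars 1..5."""
    rng = random.Random(seed)
    votes = []
    for i in range(n_raters):
        sentiment = rng.uniform(1, 5)         # the rater's true opinion
        if i < n_early:
            vote = 5 if sentiment > 3 else 1  # shout to be heard
        else:
            avg = sum(votes) / len(votes)     # the visible group signal
            vote = round(sentiment + conformity * (avg - sentiment))
            vote = max(1, min(5, vote))
        votes.append(vote)
    early = [votes[:n_early].count(s) for s in range(1, 6)]
    counts = [votes.count(s) for s in range(1, 6)]
    return early, counts
```

The early-phase histogram is strictly bimodal by construction; once conformity kicks in, the middle stars fill out and the distribution skews toward whichever extreme got an early lead.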

In the Imdb data, we see a tiny bit of bimodality. The number of “1” ratings is only slightly higher than the number of “2” ratings (1–10 scale). Based on Imdb data, all three movies are better than average – “average” being not 5.5 (halfway between 1 and 10) but either 6.2, the mean Imdb rating, or 6.4, if you prefer the median.

Imdb publishes the breakdown of ratings based on gender and age (Blade Runner, Starship Troopers, Fifth Element). Starship Troopers has considerably more variation between ratings of those under 18 and those over 30 than do the other two films. Blade Runner is liked more by older audiences than younger ones. That those two facts aren’t surprising suggests that we should be able to do better than recommending products based only on what our friends like (unless you will like something because your friends like it) or based on simple collaborative filtering algorithms (you’ll like it because others who like what you like liked it).

Blade Runner on Imdb

Imdb rating count vs. rating for 3 movies

So far, attempts to predict preferences across categories – furniture you’ll like based on your music preferences – have been rather disastrous. But movie rating systems actually do work. Yes, there are a few gray sheep, who lack preference similarity with the rest of users, but compared to many things, movies are very predictable – if you adjust for rating bias. Without knowledge that Imdb ratings are biased toward good and toward new, you might think a film with an average rating of 6 is better than average, but it isn’t, according to the Imdb community. They rate high.

Algorithms can handle that minor obstacle, even when the bias toward high varies between raters. With minor tweaks of textbook filtering algorithms, I’ve gotten movie predictions to be accurate within about half a star of actual ratings. I tested this by taking the MovieLens database, removing one rating from each user’s data, predicting the missing rating for each user, and then averaging the difference between predicted and actual values. Movie preferences are very predictable. You’re likely to give a film the same rating whether you saw it yesterday or today. And you’re likely to continue liking things liked by those whose taste was similar to yours in the past.
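The bias adjustment can be sketched with a mean-centered prediction, the simplest textbook version of the idea (not the exact algorithm used in my tests):

```python
def predict(user, movie, ratings):
    """Mean-centered prediction: start from the target user's own average,
    then adjust by how other raters of `movie` deviate from their averages.
    This cancels out each rater's personal bias toward high or low scores."""
    user_avg = sum(ratings[user].values()) / len(ratings[user])
    deviations = [r[movie] - sum(r.values()) / len(r)
                  for u, r in ratings.items() if u != user and movie in r]
    if not deviations:
        return user_avg  # nobody else has rated it; fall back to the user's average
    return user_avg + sum(deviations) / len(deviations)
```

A leave-one-out test works exactly as described above: hide one known rating per user, call `predict` for it, and average the absolute errors.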

Restaurants are slightly less predictable, but still pretty good. Yesterday the restaurant was empty and you went for an early dinner. Today, you might get seated next to a loud retirement party and get a bad waiter. Same food, but your experience would color your subjective evaluation of food quality and come out in your rating.

Predicting who you should date or whether you’d like an autumn vacation in Paris is going to require a much different approach. Predicting that you’d like Paris based on movie tastes is ludicrous. There’s no reason to expect that to work other than Silicon Valley’s exuberant AI hype. That sort of prediction capability is probably within reach. But it will require a combination of smart filtering techniques (imputation-boosting, dimensionality reduction, hybrid clustering), taxonomy-driven computation, and a whole lot more context.

Context? – you ask. How does my GPS position affect my dating preferences? Well, that one should be obvious. On the dating survey, you said you love ballet, but you were in a bowling alley four nights last week. You might want to sign up for the mixed league bowling. But what about dining preferences? To really see where this is going you need to expand your definition of context (I’m guessing Robert Scoble and Shel Israel have such an expanded view of context based on the draft TOC for their upcoming Age of Context).

My expanded view of context for food recommendations would include location and whatever physical sensor info I can get, along with “soft” data like your stated preferences, your dining history and other previous activities, food restrictions, and your interactions with your social network. I might conclude that you like pork ribs, based on the fact that you checked-in 30 times this year at a joint that serves little else. But you never go there for lunch with Bob, who seems to be a vegetarian based on his lunch check-ins. Bob isn’t with you today (based on both of your geo data), you haven’t been to Roy’s Ribs in two weeks, and it’s only a mile away. Further, I see that you’re trying to limit carbohydrates, so I’ll suggest you have the salad instead of fries with those ribs. That is, unless I know what you’ve eaten this week and see that you’re well below your expected carb intake, in which case I might recommend the baked potato since you’re also minding your sodium levels. And tomorrow you might want to try the Hủ Tiếu Mì at the Vietnamese place down the road because people who share your preferences and restrictions tend to like Vietnamese pork stew. Jill’s been there twice lately. She’s single, and in the bowling league, and she rates Blade Runner a 10.


Paul Feyerabend – The Worst Enemy of Science

Moved to Paul Feyerabend, The Worst Enemy of Science

 


The Incommensurable Thomas Kuhn


William Storage           4 Aug 2012
Visiting Scholar, UC Berkeley Center for Science, Technology & Society

Thomas Kuhn’s 1962 book, The Structure of Scientific Revolutions, appears in Wikipedia’s list of the 100 most influential books in history. In Structure, Kuhn introduced the now ubiquitous term and concept of paradigm shift. As Kuhn saw it, the scope of a paradigm was universal. A paradigm is not merely a theory, but the framework and worldview in which a theory dwells. Kuhn explained that “successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science.” His view was that paradigms guide research through periods of “normal science,” during which any experimental results not consistent with the paradigm are deemed erratic and are discarded. This persists until overwhelming evidence against the paradigm results in its collapse, and a paradigm shift occurs.

Kuhn stressed the idea of incommensurability between associated paradigms, meaning that it is impossible to understand the new paradigm from within the conceptual framework of its predecessor. Examples include the Copernican Revolution, plate tectonics, and quantum mechanics.

Countless discussions and critiques of Kuhn and his work have been published. I’ll focus mainly on aspects of his work – and popular conceptions of it – related to its appropriation in technology and business process management; but a bit of background on popular misunderstandings of his work from a philosophy perspective will come in handy later.

Kuhn’s claim of incommensurability led many to conclude that the selection of a governing theory is fundamentally irrational, a product of consensus or politics rather than of objective criteria. This fueled flames already raging in criticism of science in postmodernist, subjectivist, and post-structuralist circles. Kuhn was an overnight sensation, placed on a pedestal by all sorts of relativist, sociological, and arts-and-humanities movements, despite his vigorous rejection of them, their methods, their theories, and their paradigms. Decades later (The Road Since Structure), Kuhn added that, “if it was relativism, it was an interesting sort of relativism that needed to be thought out before the tag was applied.”

Communities outside of hard science – 20th-century social theory in particular – couldn’t get enough of Kuhn and his paradigm shifts. Much of the philosophy of science community scoffed at his book. Within hard science there was considerable debate, particularly from Karl Popper, Stephen Toulmin and Paul Feyerabend. And even in the hard science community, Kuhn found himself in constant defense not against the scientific reading of his model, but against the ideas appropriated by schools of philosophers, cultural theorists, and literary critics calling themselves Kuhnians. Freeman Dyson recalls having confronted Kuhn about these schools of thought:

“A few years ago I happened to meet Kuhn at a scientific meeting and complained to him about the nonsense that had been attached to his name. He reacted angrily. In a voice loud enough to be heard by everyone in the hall, he shouted, ‘One thing you have to understand. I am not a Kuhnian.'” – Freeman Dyson, The Sun, The Genome, and The Internet: Tools of Scientific Revolutions

Postmodern deconstructionists are certainly right about one thing; there are many ways to read Kuhn. Kuhn’s Structure – if interpreted outside the narrow realm in which he intended it to operate – becomes strangely self-referential and self-exemplifying. Different communities consumed it as constrained by their existing paradigms. In The Road Since Structure Kuhn reflected that, regarding Structure‘s uptake, he had disappointments but not regrets. He suggested that if he had it do over, he would have sought to prevent readings such as the view that paradigms are sources of oppression to be destroyed.

Kuhn would have to have been extremely naive to fail to consider the consequences – in the socially precarious 1960s – of describing scientific change in terms of sociological, political, and Gestalt-psychology models in a book having “revolution” in its title. Or perhaps it was a scientist’s humility (he was educated as a physicist) that allowed him to not anticipate that a book on the history of science would ever be read outside the communities of science. Despite the implausibility of such claims – and independent of the accuracy of his position on science – my reading of Kuhn’s interviews and commentary on the impact of Structure leads me to conclude that Kuhn is truly an accidental guru – misread, misunderstood, and misused by adoring postmodernist theorists and business strategists alike. Without Thomas Kuhn, paradigm shift would not rank in CNET’s top 10 dot-com buzzwords, futurist Joel Barker and motivator Stephen Covey would have had very different careers, and postmodern relativists might still be desperately craving some shred of external validation.

——————————

“You talk about misuses of Kuhn’s work. I think it is wildly misused outside of natural sciences. The number of scientific revolutions is extremely small… To find one outside the natural sciences is very hard. There are just not enough interesting and significant ideas around, but it is curious if you read the sociological or linguistic literature, that people are finding revolutions everywhere.” – Noam Chomsky, The Generative Enterprise Revisited

“Let us now turn our attention towards some historical analyses that have apparently provided grist for the mill of contemporary relativism. The most famous of these is undoubtedly Thomas Kuhn’s The Structure of Scientific Revolutions.” – Alan Sokal, Beyond the Hoax

——————————

The above use of a low-resolution image of Thomas Kuhn is contended to be a fair use because it is solely for the educational purpose of illustrating this article and because the value of any existing copyright is not lessened by its use here. The subject is deceased and no free equivalent can therefore be obtained. The image is of greatly lower quality than the original, reducing the risk of damage to the value of the original version.


Dislodged Systems Engineers

When I mostly dislodged myself from aerospace a while back and became mostly embedded in Silicon Valley, I was surprised by the undisciplined use of the term “Systems Engineer.”

To me, Systems Engineering was a fairly concise term for an interdisciplinary approach to designing and constructing successful systems. Systems Engineering – as seen by INCOSE, the International Council on Systems Engineering – involves translating customer needs into requirements, then proceeding with design synthesis. This process integrates many disciplines and specialty groups into a team effort to transform concept into design, production and operation. Systems Engineering accommodates business, technical and regulatory needs and requirements toward the goal of providing a quality product that makes investors, customers, regulators and insurers happy. It’s a methodical, top-down, big-picture approach.

In Silicon Valley, “systems engineering” is usually short for “embedded-systems engineering,” i.e., the engineering of embedded systems. An embedded system is usually a computer system that performs specific control functions, often within a larger system – like those designed by systems engineers as described above. Embedded systems get their name by being completely contained within a physical (hardware) device. Embedded systems typically contain microcontrollers or digital signal processors dedicated to a particular task within the device. A common form of embedded system is the firmware that provides the logic for your smart phone.

There is often overlap. Aircraft, hospitals and irrigation management networks are all proper systems. And they contain many devices with embedded systems. Systems engineers need to have a cursory knowledge of what embedded-systems engineers do, and often detailed knowledge of the requirements for embedded systems. It’s a rare Systems Engineer who also does well at detailed design of embedded systems (Ron Bax at Crane Hydro-Aire, take a bow). And vice versa. Designers of embedded systems usually deal with only a subset of the fundamentals of systems engineering – business problem statement, formulation of alternatives (trade studies), system modeling, integration, prototyping, performance assessment, reevaluation and iteration on these steps.

Because there are a lot more embedded-systems engineers than systems engineers in Silicon Valley, its residents are happy with dropping the “embedded” part, probably not realizing that doing so would make it hard for a systems engineer to find consulting work. Or perhaps “embedded” seems superfluous if you don’t know about the discipline of systems engineering at all. This is a shame, since a lot of firms who make things with embedded systems could use a bit – perhaps quite a bit – of systems engineering perspective.

This is an appeal for more discipline in the semantics of engineering (call me a pedantic windbag – my wife does) and for awareness of the discipline of Systems Engineering. Systems Engineering is a thing and the world could use more of it. Silicon Valley firms would benefit from the methodical, big-picture perspective of Systems Engineering by better transforming concept to design and design to product. Their investors would like it too.

—————————————————-

Tangent:

In my work as a software engineer – not of the embedded sort – I’ve spent some time with various aspects of semantics and linguistics – forensic linguistics being the most fun. “Embedded” in linguistics refers to a phrase contained in a phrase of the same type. This makes for very difficult machine – and often human – parsing. Humans have little trouble with single embedding but struggle with double embedding. Triple embedding, though it appeared in ancient writing, sends modern humans running for the reboot switch. The ancient Romans were far more adept at parsing such sentences than we are today, though their language was more suited to it.

The child the dog bit got rabies shots. The child the dog the man shot bit got rabies shots. The child the dog the man the owner sued shot bit got rabies shots.

My wife is probably right.


Spurious Regression

William Storage           14 Jun 2012
Visiting Scholar, UC Berkeley Center for Science, Technology, Medicine & Society


I’ve been looking into the range of usage of the term “Design Thinking” (see previous post on this subject) on the web, along with its rate of appearance in publications. According to Google, the term first appeared in print in 1973, occurring occasionally until 1988. Over the next five years its usage increased ten-fold before calming down a bit. It peaked again in 2003 and has declined a bit since then.

Rate of appearance of “Design Thinking” in publications
since 1970 (bottom horizontal is zero), per Google.

More interesting than term publication rates was the Google data on search requests. I happened upon a strong correlation between Google searches for “Design Thinking” and both “Bible verse” and “scriptures.” That is, the rate of Google searches for Design Thinking rises and falls in sync with searches for Bible verses.

A scatter plot of search activity for Design Thinking and Bible verse from 2005 to present shows an uncanny correlation:

US web search activity for Design Thinking and Bible verse (r=0.9648). Source: Google Correlate

From this, we might conclude that Design Thinking is a religion or that holism is central to both Christianity and Design Thinking. Or that studying Design Thinking causes interest in scriptures or vice versa. While at least one of these four possibilities is in fact true (Christianity and Design Thinking both rely on holism), we would be very wrong to think the relationship between search behavior on these terms to be causal.

A closer look at the Design Thinking – Bible verse data, this time as a line plot over a few years, is telling. Searches for both terms hit a yearly minimum the last week of December and another local minimum near mid-July. It would seem that time of year has something to do with searching on both terms.

Google Correlate relative rates of searches on Design Thinking
and Bible verse, July 2009–July 2011 (r=0.964)

If two sets of data, A and B, correlate, there are four possibilities to explain the correlation:

1. A causes B
2. B causes A
3. C causes both A and B
4. The correlation is merely coincidental

Item 3, known as the hidden variable or ignoring a common cause, is standard fare for politics and TV news (imagine what Fox News or NPR might do with the Design Thinking – Bible verse correlation). But in statistics, spurious correlations are bad news.

Spurious regression is the term for the scenario above. In this linear regression model, A was regressed on B. But there is some unknown C probably having to do with seasonal interest/disinterest due to time availability or more pressing topics of interest. Searches on Broncos and Tebow, for example, have negative correlations with Design Thinking and Bible verse.
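A quick simulation shows how a hidden common cause C manufactures a strong correlation between two otherwise unrelated series; the seasonal signal and noise levels here are arbitrary illustrative choices:

```python
import math, random

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(42)
weeks = range(156)  # three years of weekly data
# C: a hidden seasonal driver (free time, school calendar, holidays...)
season = [math.sin(2 * math.pi * w / 52) for w in weeks]
# A and B each follow C plus their own independent noise; neither causes the other
a = [s + rng.gauss(0, 0.3) for s in season]
b = [s + rng.gauss(0, 0.3) for s in season]
print(round(pearson_r(a, b), 2))  # strong correlation, zero causation
```

Regress A on B here and you get a fine-looking fit, despite A and B sharing nothing but their calendar.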

Watch for tomorrow’s piece on Politics Thinking and  Journalism Thinking.



Wind Science Fluttering in the Breeze

Three years ago Inc magazine praised a recently funded startup called WindTronics. Their energy claims for their $5500 rooftop wind turbine seemed so absurd that I suspected Inc had botched the technical details. Since then I’ve followed the Michigan firm. Their rooftop wind turbine was awarded “Best of What’s New” by Popular Science magazine last November. It was called “one of the 10 most brilliant products of 2009” by Popular Mechanics. In 2009 they moved their production to Ontario. They recently closed operations in Ontario and moved back to Michigan. Reports say Canadians aren’t happy about the $2.7 million Canada gave the company as an incentive to set up operations there. The Windsor Star reports that WindTronics left without making good on its debts.

There may be two sides to the financial issues; I didn’t dig very deep. The technical claims, however, are another matter. Some basic analysis reveals big problems with the claims.

WindTronics makes a 6-foot-diameter rooftop wind turbine. They claimed the device could supply 18% of an average household’s electricity, based on a 12.8 mph wind speed. You don’t need to know a thing about their technology to debunk this. They also claimed it generates power down to a wind speed of two miles per hour. This is true, but highly deceptive.

The wind in Chicago, the windy city, averages about 10 mph. Kinetic energy is equal to ½ the mass of the moving matter times its velocity squared, and the mass of air passing through a turbine each second is itself proportional to wind speed. So the wind power available for extraction – if you could catch it all – is proportional to the cube of the wind speed. Cut the speed in half and you end up with one eighth of the power, assuming the turbine were equally efficient at both wind speeds – which is impossible. At a two mph wind speed, the maximum theoretical power would be under 1% of the power at 10 mph. And a few more details will show the real figure to be even far less than that.

Large modern wind turbines have an efficiency of about 40%, but they reach this maximum at the specific wind speed for which they were designed. The efficiency is constrained by frictional losses at low speeds and back pressure (the “lift” that makes an aircraft fly) on the blades above the design speed. Above or below the optimum wind speed, efficiency drops off steeply. For example, at twice their design wind speed, the efficiency of commercial wind turbines drops to about 10%.

Betz’ Law, a principle of fluid dynamics, shows that the maximum energy a turbine of any design can extract from moving air is exactly 16/27 (~59%) of the wind’s kinetic energy. The WindTronics machine is six feet in diameter. Assuming its blades go to the very outer diameter of their housing, its swept area is 28 square feet. Using average air pressure, temperature and humidity and a Rayleigh distribution of wind speed, one can then calculate the energy in a 6-foot-diameter tube of air moving at 12.8 miles per hour. 59% of that is the maximum possible energy that the WindTronics machine could produce if it were a perfect machine. That equates to 2000 kWh per year. But that value is for a machine that is frictionless.
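The back-of-the-envelope version of this calculation can be sketched in Python. This simplified sketch assumes a steady wind at the rated 12.8 mph rather than a Rayleigh distribution (Rayleigh averaging raises the figure, since the mean of the cubed speed exceeds the cube of the mean speed), so it lands somewhat below the 2000 kWh figure:

```python
import math

MPH_TO_MS = 0.44704    # miles per hour to meters per second
RHO = 1.225            # air density at sea level, kg/m^3
BETZ = 16 / 27         # Betz limit: best possible fraction of wind energy

diameter_ft = 6.0
radius_m = (diameter_ft / 2) * 0.3048
area_m2 = math.pi * radius_m ** 2          # swept area, ~28 sq ft

v = 12.8 * MPH_TO_MS                       # the claimed rating wind speed
ideal_w = 0.5 * RHO * area_m2 * v ** 3     # power in the airstream, watts
betz_w = BETZ * ideal_w                    # best any turbine could extract
annual_kwh = betz_w * 8760 / 1000          # that wind blowing all year

print(round(area_m2 * 10.7639, 1))         # swept area in square feet
print(round(annual_kwh))                   # kWh/year, perfect frictionless machine
```

Even this theoretical ceiling, before any real-world efficiency losses, is a small fraction of the 11,000 kWh a typical household uses.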

At an optimistic efficiency of 50% and a wind velocity of 6.5 miles per hour, the calculated yearly output of the WindTronics turbine is 404 kWh, which is about 4.0% of the average household’s electrical usage, based on Department of Energy usage numbers.

Also per the DOE, the average cost of residential electricity in the United States was (and still is) 12 cents per kWh when WindTronics released their turbine. The average household uses 11,000 kWh per year and therefore pays about $1300 for all its electricity. If the rooftop turbine supplies 4% of that and costs $5500, you could amortize your purchase in a mere 100 years, assuming your installation costs are zero and the unit lasts a century without maintenance.
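The payback arithmetic, using the DOE numbers above and the 4% supply estimate:

```python
household_kwh_per_year = 11_000     # DOE average household usage
price_per_kwh = 0.12                # DOE average residential rate, $/kWh
turbine_share = 0.04                # turbine supplies ~4% of household usage
turbine_cost = 5_500                # purchase price; installation assumed free

savings_per_year = household_kwh_per_year * turbine_share * price_per_kwh
payback_years = turbine_cost / savings_per_year
print(round(savings_per_year, 2), "dollars/year;", round(payback_years), "years to amortize")
```

Roughly $50 a year in savings, so amortization takes on the order of a century, before counting installation or maintenance.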

Consumer Reports evaluated the turbine in October 2011 and reported an installation cost of about $11,000. They said they got only a fraction of the power WindTronics told them to expect and noted that it would not pay for itself in its expected 20-year life. My quick analysis suggests they put it mildly.

Windtronics explains the magic of their gizmo:

Our wind turbine utilizes a system of magnets and stators surrounding its outer ring, capturing power at the blade tips where speed is greatest, practically eliminating mechanical resistance and drag. Rather than forcing the available wind to turn a generator, the perimeter power system becomes the generator by swiftly passing the blade tip magnets through the copper coil banks mounted onto the enclosed perimeter frame.

While there’s nothing actually false in those words, they seem to aim at baffling more than illuminating. Elegant words whose meaning is lost somewhere in a vast windswept expanse.
