Archive for category Engineering
Physics for Frisco motorheads
Posted by Bill Storage in Engineering on July 28, 2019
San Francisco police are highly tolerant. A few are tolerant in the way giant sloths are tolerant. Most are tolerant because SF ties their hands from all but babysitting the homeless. Excellent at tolerating heroin use on Market Street, they’re also proficient at tolerating vehicular crime, from sailing through red lights (23 fatalities downtown last year) to minor stuff like illegal – oops, undocumented – car mods.
For a progressive burg, SF has a lot of muscle cars. Oddly, many of the car nuts in San Francisco use the term “Frisco,” against local norms.
Back in the ’70s, in my small Ohio town, the losers drove muscle cars to high school. A very few of these cars had amazing acceleration ability. A variant of the ’65 Pontiac Catalina could do zero to 60 in 4 1/2 seconds. A Tesla might leave it in the dust, but that was something back then. While the Catalina’s handling was awful, it could admirably smoke the starting line. Unlike the Catalina, most muscle cars of the ’60s and ’70s – including the curvaceous ’75 Corvette – were total crap, even for accelerating. My witless schoolmates lacked any grasp of the simple physics that could explain how and why their cars were crap. I longed to leave those barbarians and move to someplace civilized. I ended up in San Francisco.
Those Ohio simpletons strutted their beaters’ ability to squeal tires from a dead stop. They did this often, in case any of us might forget just how fast their foot could pound the pedal. Wimpy crates couldn’t burn rubber like that. So their cars must be pretty badass, they thought. Their tires would squeal with the tenderest touch of the pedal. Awesome power, right?
Actually, it meant a badly unbalanced vehicle design combined with a gas-pedal-position vs. fuel-delivery curve yielding a nonlinear relationship between pedal position and throttle plate position. This abomination of engineering attracted 17-year-old bubbas cocksure that hot chicks dig the smell of burning rubber. See figure A.

Fig. A
This hypothetical, badly designed car has a feeble but weighty 100 hp engine and rear-wheel drive. Its rear tires will squeal at the drop of a hat even though the car is gutless. Its center of gravity – the point at which all of its weight can be treated as acting – is too far forward, leaving too little load on the rear wheels.
Friction, which allows you to accelerate, is proportional to the normal force, i.e. the force of the ground pushing up on the tires. That is, the traction capacity of a tire contacting the road is proportional to the weight on the tire. With a better distribution of weight, the torque resulting from the frictional force at the rear wheels would increase the normal force there, resulting in the tendency to do a wheelie. This car will never do a wheelie. It lacks the torque, even if the meathead driving it floors it before dumping the clutch.
Figure A is an exaggeration of what was going on in the heaps driven by my classmates.
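To put rough numbers on figure A, here is a minimal sketch in Python of how a forward center of gravity caps the acceleration the rear tires can transmit, no matter what the engine can do. Every value in it is invented for illustration.

```python
# Traction-limited acceleration of a nose-heavy, rear-wheel-drive car (figure A, roughly).
# All numbers are invented for illustration.
MU = 0.9            # tire-road coefficient of friction
WHEELBASE = 9.0     # ft
CG_TO_FRONT = 4.0   # ft, horizontal distance from the front axle to the center of gravity
CG_HEIGHT = 1.8     # ft

# Static share of the car's weight carried by the rear (driven) axle
rear_fraction = CG_TO_FRONT / WHEELBASE   # ~0.44 -- too little load on the rear wheels

# Under forward acceleration a, load shifts rearward by (CG_HEIGHT / WHEELBASE) * (a / g).
# Setting drive force = MU * (rear axle load) and solving for a/g:
max_accel_g = MU * rear_fraction / (1 - MU * CG_HEIGHT / WHEELBASE)
print(f"Traction-limited acceleration: {max_accel_g:.2f} g")   # ~0.49 g

# Torque beyond that limit just spins (and squeals) the tires. Moving the center of
# gravity rearward raises the limit without adding a single horsepower.
```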
Above, I noted that the traction capacity of a tire contacting the road is proportional to the weight on the tire. The constant of proportionality is called the coefficient of friction. From this we get F = uN, meaning frictional force equals the coefficient of friction (“u”) times the normal force, which is, roughly speaking, the weight pushing on the tire.
The maximum possible coefficient of friction on smooth surfaces is 1.0. That means a car’s maximum possible acceleration would be 1g: 32 feet per second per second. Calculating a 0-60 time based on 1g yields 2.73 seconds. Hot cars can momentarily exceed that acceleration, because tires sink into small depressions in pavement, like a pinion engaging a rack (round gear on a linear gear).
Here’s how Isaac Newton, who was into hot cars, viewed the 0-60-at-1-g problem:
- Acceleration is change in speed over time: a = delta v / t.
- Acceleration due to gravity (a body falling in a vacuum) is 32.2 feet per second per second.
- 5280 feet in a mile. 60 seconds in a minute.
- 60 mph = 5280/60 ft/sec = 88 ft/sec.
- a = delta v / t. Solve for t: t = delta v / a. delta v = 88 ft/sec. a = 32.2 ft/sec/sec. t = delta v / a = 88/32.2 (ft/sec) / (ft/sec squared) = 2.73 sec. Voila – see the quick check below.
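The same arithmetic as a minimal Python sketch, in the same units of feet and seconds:

```python
# Zero-to-60 time assuming a constant 1 g of acceleration
G = 32.2                     # ft/sec^2, acceleration of gravity
V_60_MPH = 60 * 5280 / 3600  # 60 mph = 88 ft/sec
t = V_60_MPH / G             # t = delta v / a
print(f"{V_60_MPH:.0f} ft/sec / {G} ft/sec^2 = {t:.2f} sec")   # 2.73 sec
```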
The early 428 Shelby Mustangs were amazing, even by today’s acceleration standards, though they were likely still awful to steer. In contrast to the noble Shelbys, some late ’60s – early ’70s Mustangs with inline-six 3-liter engines topped out at just over 100 hp. Ford even sold a V8 version of the Mustang with a pitiful 140 hp engine. Shame, Lee Iacocca. It could do zero to 60 in around 13 seconds. Really.
Those cars had terrible handling because their suspensions were lousy and because of subtle aspects of weight distribution (extra credit: see polar moment of inertia).
If you can’t have power, at least have noise. To make your car or bike really loud, do this simple trick. Insert a stack of washers or some nuts between the muffler and exhaust pipe to leave a big gap, thereby effectively disconnecting the muffler. This worked back in 1974 and, despite civic awareness and modern sensitivity to air and noise pollution, it still works great today. For more hearing damage, custom “exhaust” systems, especially for bikes (cops have deep chopper envy and will look the other way when your hog sets off car alarms), can help you exceed 105 dB SPL. Every girl’s eye will be on you, bud. Hubba hubba. See figure B.

Fig. B
I get a bit of nostalgia when I hear those marvels of engineering from the ’60s and ’70s on Market Street nightly, at Fisherman’s Wharf, and even in my neighborhood. Our police can endure that kind of racket because they’re well-paid to tolerate it. Wish I were similarly compensated. I sometimes think of this at 4 am on Sunday morning even if my windows are closed.
I visited the old country, Ohio, last year. There were no squealing tires and few painfully loud motors on the street. Maybe the motorheads evolved. Maybe the cops aren’t paid enough to tolerate them. Ohio was nice to visit, but the deplorable intolerance was stifling.
Love Me I’m an Agile Scrum Master
Posted by Bill Storage in Engineering on July 19, 2016
In the 1966 song, Love Me I’m a Liberal, protest singer Phil Ochs mocked the American left for insincerely pledging support for civil rights and socialist causes. Using the voice of a liberal hypocrite, Ochs sings that he “hope[s] every colored boy becomes a star, but don’t talk about revolution; that’s going a little too far.” The refrain is, “So love me, love me, love me, I’m a liberal.” Putting Ochs in historical context, he hoped to be part of a major revolution and his anarchic expectations were deflated by moderate democrats. In Ochs’ view, limousine liberals and hippies with capitalist leanings were eroding the conceptual purity of the movement he embraced.
If Ochs were alive today, he probably wouldn’t write software; but if he did he’d feel right at home in faux-agile development situations where time-boxing is a euphemism for scheduling, the scrum master is a Project Manager who calls Agile a process, and a goal has been set for increased iteration velocity and higher story points per cycle. Agile can look a lot like the pre-Agile world these days. Scrum in the hands of an Agile imposter who interprets “incremental” to mean “sequential” makes an Agile software project look like a waterfall.
While it’s tempting to blame the abuse and dilution of Agile on half-converts who endorsed it insincerely – like Phil Ochs’ milquetoast liberals – we might also look for cracks in the foundations of Agile and Scrum (Agile is a set of principles, Scrum is a methodology based on them). After all, is it really fair to demand conformity to the rules of a philosophy that embraces adaptiveness? Specifically, I refer to item 4 in the list of values called out in the Agile Manifesto:
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
A better charge against those we think have misapplied Agile might be based on consistency and internal coherence. That is, item 1 logically puts some constraints on item 4. Adapting to a business situation by deciding to value process and tools over individuals can easily be said to violate the spirit of the values. As obvious as that seems, I’ve seen a lot of schedule-driven “Agile teams” bound to rigid, arbitrary coding standards imposed by a siloed QA person, struggling against the current toward a product concept that has never been near a customer. Steve Jobs showed that a successful Product Owner can sometimes insulate himself from real customers; but I doubt that approach is a good bet on average.
It’s probably also fair to call foul on those who “do Agile” without self-organizing teams and without pushing decision-making power down through an organization. Likewise, the manifesto tells us to build projects around highly motivated individuals and give them the environment and trust they need to get the job done. This means we need motivated developers worthy of trust who can actually get the job done, i.e., first-rate developers. Scrum is based on the notion of a highly qualified self-organizing, self-directed development team. But it’s often used by managers as an attempt to employ, organize, coordinate and direct an under-qualified team. Belief that Scrum can manage and make productive a low-skilled team is widespread. This isn’t the fault of Scrum or Agile but just the current marker of the enduring impulse to buy software developers by the pound.
But another side of this issue might yet point to a basic flaw in Agile. Excellent developers are hard to find. And with a team of excellent developers, any other methodology would work as well. Less competent and less experienced workers might find comfort in rules, thereby having little motivation or ability to respond to change (Agile value no. 4).
As a minor issue with Agile/Scrum, some of the terminology is unfortunate. Backlog traditionally has a negative connotation. Starting a project with backlog on day one might demotivate some. Sprint surely sounds a lot like pressure is being applied; no wonder backsliding scrum masters use it to schedule. Is Sprint a euphemism for death-march? And of all the sports imagery available, the rugby scrum seems inconsistent with Scrum methodology and Agile values. Would Scrum Servant change anything?
The idea of using a Scrum burn-down chart to “plan” (euphemism for schedule) might warrant a second look too. Scheduling by extrapolation may remove the stress from the scheduling activity; but it’s still highly inductive and the future rarely resembles the past. The final steps always take the longest; and guessing how much longer than average is called “estimating.” Can we reconcile any of this with Agile’s focus on being value-driven, not plan-driven? Project planning, after all, is one of the erroneous assumptions of software project management that gave rise to Agile.
Finally, I see a disconnect between the method of Scrum and the values of Agile. Scrum creates a perverse incentive for developers to continually define sprints that show smaller and smaller bits of functionality. Then a series of highly successful sprints, each yielding a workable product, only asymptotically approaches the Product Owner’s goal.
Are Agile’s days numbered, or is it a good mare needing a better jockey?
———–
“People who enjoy meetings should not be in charge of anything.” – Thomas Sowell
Marcus Vitruvius’s Science
Posted by Bill Storage in Engineering, Philosophy of Science on June 26, 2015
Science, as an enterprise that acquires knowledge and justified beliefs in the form of testable predictions by systematic iterations of observation and math-based theory, started around the 17th century, somewhere between Copernicus and Newton. That, we learned in school, was the beginning of the scientific revolution. Historians of science tend to regard this great revolution as the one that never happened. That is, as Floris Cohen puts it, the scientific revolution, once an innovative and inspiring concept, has since turned into a straight-jacket. Picking this revolution’s starting point, identifying any cause for it, and deciding what concepts and technological innovations belong to it are problematic.
That said, several writers have made good cases for why the pace of evolution – if not revolution – of modern science accelerated dramatically in Europe only when it did, why it has continuously gained steam rather than petering out, what its primary driving force was, and how it transformed our view of how nature works. Some thought the protestant ethic and capitalism set the stage for science. Others thought science couldn’t emerge until the alliance between Christianity and Aristotelianism was dissolved. Moveable type and mass production of books can certainly claim a role, but were they really a prerequisite? Some think a critical mass of ancient Greek writings had to have been transferred to western Europe by the Muslims. The humanist literary critics who enabled repair and reconstruction of ancient texts mangled in translation from Greek to Syriac to Persian to Latin and botched by illiterate medieval scribes certainly played a part. If this sounds like a stretch, note that those critics seem to mark the first occurrence of a collective effort by a group spread across a large geographic space using shared standards to reach a peer-reviewed consensus – a process sharing much with modern science.
But those reasons given for the scientific revolution all have the feel of post hoc theorizing. Might intellectuals of the day, observing these events, have concluded that a resultant scientific revolution was on the horizon? Francis Bacon comes closest to fitting this bill, but his predictions gave little sense that he was envisioning anything like what really happened.
I’ve wondered why the burst of progress in science – as differentiated from plain know-how, nature-knowledge, art, craft, technique, or engineering knowledge – didn’t happen earlier. Why not just after the period of innovation from about 1100 to 1300 CE in Europe? In this period Jean Buridan invented calculators and almost got the concept of inertia right. Robert Grosseteste hinted at the experiment-theory model of science. Nicole Oresme debunked astrology and gave arguments for a moving earth. But he was the end of this line. After this brief awakening, which also included the invention of banking and the university, progress came to a screeching halt. Some blame the plague, but that can’t be the culprit. Literature of the time barely mentions the plague. Despite the death toll, politics and war went on as usual; but interest in resurrecting ancient Greek knowledge of all sorts tanked.
Why not in the Islamic world in the time of Ali al-Qushji and al-Birjandi? Certainly the mental capacity was there. A layman would have a hard time distinguishing al-Birjandi’s arguments and thought experiments for the earth’s rotation from those of Galileo. But Islamic civilization at the time, for all its scholars, had no institutions for making practical use of such knowledge, and its society would not have tolerated displacement of received wisdom by man-made knowledge.
The most compelling case for civilization having been on the brink of science at an earlier time seems to be late-republic or early imperial Rome. This may seem a stretch, since Rome is much better known for brute force than for finesse, despite its flying buttresses, cranes, fire engines, central heating and indoor plumbing.
Consider the writings of one Vitruvius, likely Marcus Vitruvius Pollio, in the early reign of Augustus. Vitruvius wrote De Architectura, a ten volume guide to Roman engineering knowledge. Architecture, in Latin, translates accurately into what we call engineering. Rediscovered and widely published during the European renaissance as a standard text for engineers, Vitruvius’s work contains text that seems to contradict what we were all taught about the emergence of the – or a – scientific method.
Vitruvius is full of surprises. He acknowledges that he is not a scientist (an anachronistic but fitting term) but a collator of Greek learning from several preceding centuries. He describes vanishing point perspective: “…the method of sketching a front with the sides withdrawing into the background, the lines all meeting in the center of a circle.” (See photo below of a fresco in the Oecus at Villa Poppea, Oplontis showing construction lines for vanishing point perspective.) He covers acoustic considerations for theater design, explains central heating technology, and the Archimedian water screw used to drain mines. He mentions a steam engine, likely that later described by Hero of Alexandria (aeolipile drawing at right), which turns heat into rotational energy. He describes a heliocentric model passed down from ancient Greeks. To be sure, there is also much that Vitruvius gets wrong about physics. But so does Galileo.
Most of De Architectura is not really science; it could more accurately be called know-how, technology, or engineering knowledge. Yet it’s close. Vitruvius explains the difference between mere machines, which let men do work, and engines, which derive from ingenuity and allow storing energy.
What convinces me most that Vitruvius – and he surely could not have been alone – truly had the concept of modern scientific method within his grasp is his understanding that a combination of mathematical proof (“demonstration” in his terms) plus theory, plus hands-on practice are needed for real engineering knowledge. Thus he says that what we call science – theory plus math (demonstration) plus observation (practice) – is essential to good engineering.
The engineer should be equipped with knowledge of many branches of study and varied kinds of learning, for it is by his judgement that all work done by the other arts is put to test. This knowledge is the child of practice and theory. Practice is the continuous and regular exercise of employment where manual work is done with any necessary material according to the design of a drawing. Theory, on the other hand, is the ability to demonstrate and explain the productions of dexterity on the principles of proportion.
It follows, therefore, that engineers who have aimed at acquiring manual skill without scholarship have never been able to reach a position of authority to correspond to their pains, while those who relied only upon theories and scholarship were obviously hunting the shadow, not the substance. But those who have a thorough knowledge of both, like men armed at all points, have the sooner attained their object and carried authority with them.
It appears, then, that one who professes himself an engineer should be well versed in both directions. He ought, therefore, to be both naturally gifted and amenable to instruction. Neither natural ability without instruction nor instruction without natural ability can make the perfect artist. Let him be educated, skillful with the pencil, instructed in geometry, know much history, have followed the philosophers with attention, understand music, have some knowledge of medicine, know the opinions of the jurists, and be acquainted with astronomy and the theory of the heavens. – Vitruvius – De Architectura, Book 1
Historians, please correct me if you know otherwise, but I don’t think there’s anything else remotely like this on record before Isaac Newton – anything in writing that comes this close to an understanding of modern scientific method.
So what went wrong in Rome? Many blame Christianity for the demise of knowledge in Rome, but that is not the case here. We can’t know for sure, but the later failure of science in the Islamic world seems to provide a clue. Society simply wasn’t ready. Vitruvius and his ilk may have been ready for science, but after nearly a century of civil war (starting with the Italian social wars), Augustus, the senate, and likely the plebes, had seen too much social innovation that all went bad. The vision of science, so evident during the European Enlightenment, as the primary driver of social change, may have been apparent to influential Romans as well, at a time when social change had lost its luster. As seen in writings of Cicero and the correspondence between Pliny and Trajan, Rome now regarded social innovation with suspicion if not contempt. Roman society, at least its government and aristocracy, simply couldn’t risk the main byproduct of science – progress.
———————————-
History is not merely what happened: it is what happened in the context of what might have happened. – Hugh Trevor-Roper – Oxford Valedictorian Address, 1998
The affairs of the Empire of letters are in a situation in which they never were and never will be again; we are passing now from an old world into the new world, and we are working seriously on the first foundation of the sciences. – Robert Desgabets, Oeuvres complètes de Malebranche, 1676
Newton interjected historical remarks which were neither accurate nor fair. These historical lapses are a reminder that history requires every bit as much attention to detail as does science – and the history of science perhaps twice as much. – Carl Benjamin Boyer, The Rainbow: From Myth to Mathematics, 1957
Text and photos © 2015 William Storage
Pure Green Sense
Posted by Bill Storage in Engineering, Sustainable Energy on March 6, 2015
With some sadness I recently received a Notice of Assignment for the Benefit of Creditors signaling the demise of PureSense Environmental, Inc. PureSense was real green – not green paint.
It’s ironic that PureSense was so little known. Environmental charlatans and quacks continue to get venture capital and government grants for businesses built around absurd “green” products debunkable by anyone with knowledge of high school physics. PureSense was nothing like that. Their down-to-earth (literally) concept provides real-time irrigation and agricultural field management with inexpensive hardware and sophisticated software. Their matrix of sensors records soil moisture, salinity, soil temperature and climate data from crop fields every 15 minutes. Doing this eliminates guesswork, optimizing use of electricity, water, and pesticides. Avoiding over- and under-watering maximizes crop yield while minimizing use of resources. It’s a win-win.
But innovation and farming are strange bedfellows. Apparently, farmers didn’t all jump at the opportunity. I did some crop disease modelling work for PureSense a few years back. Their employees told me that a common response to showing farmers that their neighbors had substantially increased yield using PureSense was along the lines of, “we’re doing ok with what we’ve got…” Perhaps we shouldn’t be surprised. Not too long ago, farmers who experimented too wildly left no progeny.
The ever fascinating Jethro Tull, inventor of the modern seed drill and many other revolutionary farming gadgets in the early 1700s, was flabbergasted at the reluctance of farmers to adopt his tools and methods. Tull wrote on Soil and Civilization, predicting that future people would have easier lives, since “the Produce of Land Will be Increased, and the Usual Expence Lessened” through a scientific (though that word is an anachronism) approach to agriculture.
The editor of the 2nd edition of his Horse-hoeing Husbandry, Or, An Essay on the Principles of Vegetation and Tillage echoed Tull’s astonishment at farmers’ behavior.
How it has happened that a Method of Culture which proposes such advantages to those who shall duly prosecute it, hath been so long neglected in this Country, may be matter of Surprize to such as are not acquainted with the Characters of the Men on whom the Practice thereof depends; but to those who know them thoroughly it can be done. For it is certain that very few of them can be prevailed on to alter their usual Methods upon any consideration; though they are convinced that their continuing therein disables them from paying their Rents, and maintaining their Families.
And, what is still more to be lamented, these People are so much attached to their old Customs, that they are not only averse to alter them themselves, but are moreover industrious to prevent others from succeeding, who attempt to introduce anything new; and indeed have it too generally in their Power, to defeat any Scheme which is not agreeable to their own Notions; seeing it must be executed by the same sort of Hands.
Tull could have predicted PureSense’s demise. I think its employees could have as well. GlassDoor comments suggested that PureSense needed “a more devoted sales staff.” That is likely an understatement given the market. A more creative sales model might be more on the mark. Knowing that farmers, even while wincing at ever-shrinking margins, will cling to their established methods for better or worse, PureSense should perhaps have gotten closer to the culture of farming.
PureSense’s possible failure to tap into farmers’ psyche aside, America’s vulnerability to futuristic technobabble is no doubt a major funding hurdle. You’d think that USDA REAP loan providers and NRCS Conservation Innovation Grants programs would be lining up at their door. But I suspect crop efficiency pales in wow factor compared to a cylindrical tower of solar cells that somehow magically increases the area of sun-facing photovoltaics (hint: Solyndra’s actual efficiency was about 8.5%, a far cry from their claims that got them half a billion from the Obama administration).
Ozzie Zehner nailed this problem in Green Illusions. In his chapter on the alternative-energy fetish, he discusses energy pornographers, the enviro-techno-enthusiasts who jump to spend billions on dubious green tech that yields less benefit than home insulation and proper tire inflation would. Insulation, light rail, and LED lighting aren’t sexy; biofuels, advanced solar, and stratospheric wind turbines are. Jethro Tull would not have been surprised that modern farmers are as resistant to change as those of 18th century Berkshire. But I think he’d be appalled to learn the extent to which modern tech press, business and government line up for physics-defying snake oil while ignoring something as fundamental as agriculture.
As I finished writing this I learned that Jain Irrigation has just acquired the assets of PureSense and has pledged a long-term commitment to the PureSense platform.
Jethro Tull smiles.
More Philosophy for Engineers
Posted by Bill Storage in Engineering, Philosophy of Science, Probability and Risk on January 9, 2015
In a post on Richard Feynman and philosophy of science, I suggested that engineers would benefit from a class in philosophy of science. A student recently asked if I meant to say that a course in philosophy would make engineers better at engineering – or better philosophers. Better engineers, I said.
Here’s an example from my recent work as an engineer that drives the point home.
I was reviewing an FMEA (Failure Mode Effects Analysis) prepared by a high-priced consultancy and encountered many cases where a critical failure mode had been deemed highly improbable on the basis that the FMEA was for a mature system with no known failures.
How many hours of operation has this system actually seen, I asked. The response indicated about 10 thousand hours total.
I said on that basis we could assume a failure rate of about one per 10,001 hours. The direct cost of the failure was about $1.5 million. Thus the “expected value” (or “mathematical expectation” – the probabilistic cost of the loss) of this failure mode in a 160 hour mission is $24,000 or about $300,000 per year (excluding any secondary effects such as damaged reputation). With that number in mind, I asked the client if they wanted to consider further mitigation by adding monitoring circuitry.
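Here is that arithmetic, sketched in Python. The 2,000-hour annual utilization is my assumption, back-figured from the roughly $300,000-per-year number; the rest comes straight from the figures above.

```python
# Probabilistic cost ("expected value") of the never-yet-observed failure mode
failure_rate = 1 / 10_000   # failures per operating hour, assumed from ~10,000 failure-free hours
loss = 1_500_000            # direct cost of one occurrence, dollars
mission_hours = 160
annual_hours = 2_000        # assumed utilization, implied by the ~$300k/year figure

cost_per_mission = failure_rate * mission_hours * loss   # $24,000
cost_per_year = failure_rate * annual_hours * loss       # $300,000
print(f"${cost_per_mission:,.0f} per mission, ${cost_per_year:,.0f} per year")
```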
I was challenged on the failure rate I used. It was, after all, a mature, ten year old system with no recorded failures of this type.
Here’s where the analytic philosophy course those consultants never took would have been useful.
You simply cannot justify calling a failure mode extremely rare based on evidence that it is at least somewhat rare. All unique events – like the massive rotor failure that took out all three hydraulic systems of a DC-10 in Sioux City – were very rare before they happened.
The authors of the FMEA I was reviewing were using unjustifiable inductive reasoning. Philosopher David Hume debugged this thoroughly in his 1739 A Treatise of Human Nature.
Hume concluded that there simply is no rational or deductive basis for induction, the belief that the future will be like the past.
Hume understood that, despite the lack of justification for induction, betting against the sun rising tomorrow was not a good strategy either. But this is a matter of pragmatism, not of rationality. A bet against the sunrise would mean getting behind counter-induction; and there’s no rational justification for that either.
In the case of the failure mode not yet observed, however, there is ample justification for counter-induction. All mechanical parts and all human operations necessarily have nonzero failure or error rates. In the world of failure modeling, the knowledge that something is “pretty good” does not support the proposition that it is “probably extremely good,” no matter how natural the step between them feels.
Hume’s problem of induction, despite the efforts of Immanuel Kant and the McKinsey consulting firm, has not been solved.
A fabulously entertaining – in my view – expression of the problem of induction was given by philosopher Carl Hempel in 1965.
Hempel observed that we tend to take each new observation of a black crow as incrementally supporting the inductive conclusion that all crows are black. Deductive logic tells us that if a conditional statement is true, its contrapositive is also true, since the statement and its contrapositive are logically equivalent. Thus if all crows are black then all non-black things are non-crow.
It then follows that if each observation of black crows is evidence that all crows are black (compare: each observation of no failure is evidence that no failure will occur), then each observation of a non-black non-crow is also evidence that all crows are black.
Following this line, my red shirt is confirming evidence for the proposition that all crows are black. It’s a hard argument to oppose, but it simply does not “feel” right to most people.
Many try to salvage the situation by suggesting that observing that my shirt is red is in fact evidence that all crows are black, but provides only unimaginably small support to that proposition.
But pushing the thing just a bit further destroys even this attempt at rescuing induction from the clutches of analysis.
If my red shirt gives a tiny bit of evidence that all crows are black, it then also gives equal support to the proposition that all crows are white. After all, my red shirt is a non-white non-crow.
Incommensurability and the Design-Engineering Gap
Posted by Bill Storage in Engineering, Innovation management, Interdisciplinary teams on April 4, 2014
Those who conceptualize products – particularly software – often have the unpleasant task of explaining their conceptual gems to unimaginative, sanctimonious engineers entrenched in the analytic mire of in-the-box thinking. This communication directs the engineers to do some plumbing and flip a few switches that get the concept to its intended audience or market… Or, at least, this is how many engineers think they are viewed by designers.
Truth is, engineers and creative designers really don’t speak the same language. This is more than just a joke. Many posts here involve philosopher of science, Thomas Kuhn. Kuhn’s idea of incommensurability between scientific paradigms also fits the design-engineering gap well. Those who claim the label, designers, believe design to be a highly creative, open-ended process with no right answer. Many engineers, conversely, understand design – at least within their discipline – to mean a systematic selection of components progressively integrated into an overall system, guided by business constraints and the laws of nature and reason. Disagreement on the meaning of design is just the start of the conflict.
Kuhn concluded that the lexicon of a discipline constrains the problem space and conceptual universe of that discipline. I.e., there is no fundamental theory of meaning that applies across paradigms. The meaning of expressions inside a paradigm comply only with the rules of that paradigm. Says Kuhn, “Conceptually, the world is our representation of our niche, the residence of the particular human community with whose members we are currently interacting” (The Road Since Structure, 1993, p. 103). Kuhn was criticized for exaggerating the extent to which a community’s vocabulary and word usage constrains the thoughts they are able to think. Kuhn saw this condition as self-perpetuating, since the discipline’s constrained thoughts then eliminate any need for expansion of its lexicon. Kuhn may have overplayed his hand on incommensurability, but you wouldn’t know it from some software-project kickoff meetings I’ve attended.
This short sketch, The Expert, written and directed by Lauris Beinerts, portrays design-engineering incommensurability from the perspective of the sole engineer in a preliminary design meeting.
See also: Debbie Downer Doesn’t Do Design
Sun Follows the Solar Car
Posted by Bill Storage in Engineering, Sustainable Energy on January 25, 2014
Bill Storage once got an A in high school Physics and suggests no further credentials are needed to evaluate the claims of most eco-fraud.
Once a great debate raged in America over the matter of whether man-made climate change had occurred. Most Americans believed that it had. There were theories, models, government-sponsored studies, and various factions arguing with religious fervor. The time was 1880 and the subject was whether rain followed the plow – whether the westward expansion of American settlers beyond the 100th meridian had caused an increase in rain that would make agricultural life possible in the west. When the relentless droughts of the 1890s offered conflicting evidence, the belief died off, leaving its adherents embarrassed for having taken part in a mass delusion.
We now know the dramatic greening of the west from 1845 to 1880 was due to weather, not climate. It was not brought on by Mormon settlements, vigorous tilling, or the vast amounts of dynamite blown off to raise dust around which clouds could form. There was a shred of scientific basis for the belief; but the scale was way off.
It seems that the shred of science was not really a key component of the widespread belief that rain would follow the plow. More important was human myth-making and the madness of crowds. People got swept up in it. As ancient Jewish and Roman writings show, public optimism and pessimism ebb and flow across decades. People confuse the relationship between man and nature. They either take undue blame or undue credit for processes beyond their influence, or they assign their blunders to implacable cosmic forces. The period of the Western Movement was buoyant, across political views and religions. Some modern writers force-fit the widely held belief about rain following the plow in the 1870s into the doctrine of Manifest Destiny. These embarrassing beliefs were in harmony, but were not tied genetically. In other words, don’t blame the myth that rain followed the plow on the Christian right.
Looking back, one wonders how farmers, investors and politicians, possibly including Abraham Lincoln, could so deeply indulge in belief held on irrational grounds rather than evidence and science. Do modern humans do the same? I’ll vote yes.
Today’s anthropogenic climate theories have a great deal more scientific basis than those of the 1870s. But many of our efforts at climate cure do not. Blame shameless greed for some of the greenwashing; but corporations wouldn’t waste their time if consumers weren’t willing to waste their dollars and hopes.
Take Ford’s solar-powered hybrid car, about which a SmartPlanet writer recently said:
Imagine an electric car that can charge without being plugged into an outlet and without using electricity from dirty energy sources, like coal.
He goes on to report that Ford plans to experiment with such a solar-hybrid concept car having a 620-mile range. I suspect many readers will take that experimentation to mean experimenting in the science sense rather than in the marketability sense. Likewise I’m guessing many readers will allow themselves to believe that such a car might derive a significant part of the energy used in a 620-mile run from solar cells.
We can be 100% sure that Ford is not now experimenting on – nor will ever experiment on – a solar-powered car that will get a significant portion of its energy from solar cells. It’s impossible now, and always will be. No technology breakthrough can alter the laws of nature. Only so much solar energy hits the top of a car. Even if you collected every photon of it, which is again impossible because of other laws of physics, you couldn’t drive a car very far on it.
Most people – I’d guess – learned as much in high school science. Those who didn’t might ask themselves, based on common sense and perhaps seeing the size of solar panels needed to power a telephone in the desert, if a solar car seems reasonable.
The EPA reports that all-electric cars like the Leaf and Tesla S get about 3 miles per kilowatt-hour of energy. The top of a car is about 25 square feet. At noon on June 21st in Phoenix, a hypothetically perfect, spotless car-top solar panel could in theory generate 30 watts per square foot. You could therefore power half of a standard 1500 watt toaster with that car-top solar panel. If you drove your car in the summer desert sun for 6 hours and the noon sun magically followed it into the shade and into your garage – like rain following the plow – you could accumulate 4500 watt-hours (4.5 kilowatt hours) of energy, on which you could drive 13.5 miles, using the EPA’s numbers. But experience shows that 30 watts per square foot is ridiculously optimistic. Germany’s famous solar parks, for example, average less than one watt per square foot; their output is a few percent of my perpetual-noon-Arizona example. Where you live, it probably doesn’t stay noon, and you’re likely somewhat north of Phoenix, where the sun is far closer to the horizon, and it’s not June 21st all year (hint: sine of 35 degrees times x, assuming it’s not dark). Oh, and then there’s clouds. If you live in Bavaria or Cleveland, or if your car roof’s dirty – well, your mileage may vary.
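For convenience, here is the best-case arithmetic from that paragraph gathered into one short sketch, with every assumption deliberately tilted in the solar car’s favor:

```python
# Best case for a car-top solar panel: perfect cells, perpetual Phoenix noon on June 21
ROOF_AREA_FT2 = 25      # square feet on top of a car
NOON_W_PER_FT2 = 30     # theoretical output of a hypothetically perfect panel
SUN_HOURS = 6
MILES_PER_KWH = 3       # EPA figure for all-electric cars like the Leaf and Tesla S

panel_watts = ROOF_AREA_FT2 * NOON_W_PER_FT2    # 750 W -- half a toaster
energy_kwh = panel_watts * SUN_HOURS / 1000     # 4.5 kWh
print(f"Best case: {energy_kwh * MILES_PER_KWH:.1f} miles of driving")   # 13.5 miles
```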
Recall that this rather dim picture cannot be made much brighter by technology. Physical limits restrict the size of the car-top solar panel, nature limits the amount of sun that hits it, and the Shockley–Queisser limit caps the conversion efficiency of solar cells.
Curbing CO2 emissions is not a lost cause. We can apply real engineering to the problem. Putting solar panels on cars isn’t real engineering; it’s pandering to public belief. What would Henry Ford think?
—————————-
Tom Hight is my name, an old bachelor I am,
You’ll find me out West in the country of fame,
You’ll find me out West on an elegant plain,
And starving to death on my government claim.
Hurrah for Greer County!
The land of the free,
The land of the bed-bug,
Grass-hopper and flea;
I’ll sing of its praises
And tell of its fame,
While starving to death
On my government claim.
Opening lyrics to a folk song by Daniel Kelley, late 1800s
Is Fault Tree Analysis Deductive?
Posted by Bill Storage in Engineering, Probability and Risk, Risk Management on December 2, 2013
An odd myth persists in systems engineering and risk analysis circles. Fault tree analysis (FTA), and sometimes fault trees themselves, are said to be deductive. FMEAs are called inductive. How can this be?
By fault trees I mean Boolean logic modeling of unwanted system states by logical decomposition of equipment fault states into combinations of failure states of more basic components. You can read more on fault tree analysis and its deductive nature at Wikipedia. By FMEA (Failure Mode & Effects Analysis) I mean recording all the things that can go wrong with the components of a system. Writers who find fault trees deductive also find FMEAs, their complement, to be inductive. I’ll argue here that building fault trees is not a deductive process, and that there is possible harm in saying so. Secondarily, I’ll offer that while FMEA creation involves inductive reasoning, the point carries little weight, since the rest of engineering is inductive reasoning too.
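For readers who haven’t built one, here is the kind of Boolean bookkeeping a fault tree encodes, as a toy Python sketch. The gate structure – top event = (A AND B) OR C – and the basic-event probabilities are invented purely for illustration; real trees are, of course, far larger.

```python
from itertools import product

# Toy fault tree: the top event occurs if (A AND B) OR C, with independent basic events.
# Both the structure and the probabilities are invented for illustration.
basic_events = {"A": 1e-3, "B": 2e-3, "C": 1e-5}   # per-mission failure probabilities

def top_event(a: bool, b: bool, c: bool) -> bool:
    return (a and b) or c

# Exact top-event probability by enumerating every combination of basic-event states
p_top = 0.0
for states in product((True, False), repeat=len(basic_events)):
    if top_event(*states):
        p = 1.0
        for (name, p_fail), failed in zip(basic_events.items(), states):
            p *= p_fail if failed else 1.0 - p_fail
        p_top += p

# Minimal cut sets are {A, B} and {C}; the single-point cut set {C} dominates here.
print(f"P(top event) = {p_top:.2e}")   # about 1.2e-05
```

Solving the tree – finding cut sets and computing the top-event probability – is the mechanical part. Deciding what gates and basic events belong in the tree in the first place is not, and that distinction is the subject of this post.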
Word meanings can vary with context; but use of the term deductive is consistent across math, science, law, and philosophy. Deduction is the process of drawing a logically certain conclusion about a particular instance from a rule or premise about the general. Assuming all men are mortal, if Socrates is a man, then he is mortal. This is true regardless of the meaning of the word mortal. Its truth is certain, even if Socrates never existed, and even if you take mortal to mean living forever.
Example from a software development website:
FMECA is an inductive analysis of system failure, starting with the presumed failure of a component and analyzing its effect on system stability: “What will happen if valve A sticks open?” In contrast, FTA is a deductive analysis, starting with potential or actual failures and deducing what might have caused them: “What could cause a deadlock in the application?”
The well-intended writer says we deduce the causes of the effects in question. Deduction is not up to that task. When we infer causes from observed effects, we are using induction, not deduction.
How did the odd claims that fault trees and FTAs are deductive arise? It might trace to William Vesely, NASA’s original fault tree proponent. Vesely sometimes used the term deductive in his introductions to fault trees. If he meant that the process of reducing fault trees into cut sets (sets of basic events or initiators) is deductive, he was obviously correct. But calculation isn’t the critical aspect of fault trees; constructing them is where the effort and need for diligence lie. Fault tree software does the math. If Vesely saw the critical process of constructing fault trees and supplying them with numerical data (often arduous, regardless of software) as deductive – which I doubt – he was certainly wrong.
Inductive reasoning, as used in science, logic and philosophy, means inferring general rules or laws from observations of particular instances. The special use of the term math induction actually refers to deduction, as mathematicians are well aware. Math induction is deductive reasoning with a confusing title. Induction in science and engineering stems from our need to predict future events. We form theories about how things will behave in the future based on observations of how similar things behaved in the past. As I discussed regarding Bacon vs. Descartes, science is forced into the realm of induction because deduction never makes contact with the physical world – it lives in the mind.
Inductive reasoning is exactly what goes on when you construct a fault tree. You are making inferences about future conditions based on modeling and historical data – a purely inductive process. The fact that you use math to solve fault trees does not make fault trees any more deductive than the presence of math in lab experiments makes empirical science deductive.
Does this matter?
It’s easy enough to fix this technical point in descriptions of fault tree analysis. We should do so, if merely to avoid confusing students. But more importantly, quantitative risk analysis – including FTA – has its enemies. They range from several top consultancies selling subjective, risk-score matrix methodologies dressed up in fancy clothes (see Tony Cox’s SIRA presentation on this topic) to some of NASA’s top management – those flogged by Richard Feynman in his minority report on the Challenger disaster. The various criticisms of fault tree analysis say it is too analytical and correlates poorly with the real world. Sound familiar? It echoes a feud between the heirs of Bacon (induction) and the heirs of Descartes (deduction). Some of fault trees’ foes find them overly deductive. They then imply that errors found in past quantitative analyses impugn objectivity itself, preferring subjective analyses based on expert opinion. This curious conclusion would not follow, even if fault tree analyses were deductive, which they are not.
——————————————
Science is the belief in the ignorance of experts. – Richard Feynman
Feynman’s Minority Report and Top-Down Design
Posted by Bill Storage in Aerospace, Engineering, Probability and Risk, Systems Engineering on November 11, 2013
On reading my praise of Richard Feynman, a fellow systems engineer and INCOSE member (International Council on Systems Engineering) suggested that I read Feynman’s Minority Report to the Space Shuttle Challenger Enquiry. He said I might not like it. I read it, and I don’t like it, not from the perspective of a systems engineer.
Challenger explosion, Jan. 28, 1986
I should be clear on what I mean by systems engineering. I know of three uses of the term: first, the engineering of embedded systems, i.e., firmware (not relevant here); second, an organizational management approach (relevant, but secondary); third, a discipline aimed at design of assemblies of components to achieve a function that is greater than those of its constituents (bingo). Definitions given by others are useful toward examining Feynman’s minority report on the Challenger.
Simon Ramo, the “R” in TRW and inventor of the ICBM, put it like this: “Systems engineering is a discipline that concentrates on the design and application of the whole (system) as distinct from the parts. It involves looking at a problem in its entirety, taking into account all the facets and all the variables and relating the social to the technical aspect.”
Howard Eisner of GWU says, “Systems engineering is an iterative process of top-down synthesis, development, and operation of a real-world system that satisfies, in a near optimal manner, the full range of requirements for the system.”
INCOSE’s definition is pragmatic (pleasantly, as their guide tends a bit toward strategic-management jargon): “Systems engineering is an interdisciplinary approach and means to enable the realization of successful systems.”
Feynman reaches several sound conclusions about root causes of the flight 51-L Challenger disaster. He observes that NASA’s safety culture had critical flaws and that its management seemed to indulge in fantasy, ignoring the conclusions, advice and warnings of diligent systems and component engineers. He gives specific examples of how NASA management grossly exaggerated the reliability of many systems and components in the shuttle. On this point he concludes, “reality must take precedence over public relations, for nature cannot be fooled.” He describes a belief by management that because an anomaly was without consequence in a previous mission, it is therefore safe. Most importantly, he cites the erroneous use of the concept of factor of safety around the O-ring seals between the two lower segments of the solid rocket motors by NASA management (the Rogers Commission also agrees that failure of these O-rings was the root cause of the disaster). A NASA report on seal erosion in an earlier mission (flight 51-C) had assigned a safety factor of three, based on the seals having eroded only one third of the amount thought to be critical. Feynman replies that the O-rings were not designed to erode, and hence the factor-of-safety concept did not apply. Seal erosion was a failure of the design, catastrophic or not; there was no safety factor at all. “Erosion was a clue that something was wrong; not something from which safety could be inferred.”
But later Feynman incorrectly states that establishing a hypothetical propulsion system failure rate of 1 in 100,000 missions would require an inordinate number of tests to determine with confidence. Here he seems to grasp neither the exponential impact of redundancy on reliability, nor the fact that fault tree analysis could confidently calculate low system failure rates from the historical failure rates of large populations of constituent components, combined with the output of FMEAs (failure mode effects analyses) on those components in the relevant systems. This error does not affect Feynman’s conclusions about the root cause of the Challenger disaster. I mention it here because Feynman might be viewed as an authoritative source on systems engineering, yet here he is doing a poor job of it.
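To make the redundancy point concrete, a minimal sketch with invented numbers, assuming independent, identical channels (common-cause failures ignored):

```python
# Exponential effect of redundancy on the probability of losing a function.
# Invented numbers; channels assumed independent and identical.
p_channel = 1e-3   # probability that one channel fails during a mission

for n_channels in (1, 2, 3, 4):
    p_loss = p_channel ** n_channels   # the function is lost only if every channel fails
    print(f"{n_channels} channel(s): P(loss of function) = {p_loss:.0e}")

# Making a single channel ten times better buys one order of magnitude;
# adding one redundant channel buys three (at p_channel = 1e-3).
```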
Discussing the liquid fuel engines, Feynman then introduces the concept of top-down design, which he criticizes. It isn’t clear exactly what he means by top-down. The most charitable reading would be a critique of NASA top management’s overruling the judgments of engineering management and engineers; but, on closer reading, it’s clear this cannot be his meaning:
The usual way that such engines are designed (for military or civilian aircraft) may be called the component system, or bottom-up design. First it is necessary to thoroughly understand the properties and limitations of the materials to be used (for turbine blades, for example), and tests are begun in experimental rigs to determine those. With this knowledge larger component parts (such as bearings) are designed and tested individually…
The Space Shuttle Main Engine was handled in a different manner, top down, we might say. The engine was designed and put together all at once with relatively little detailed preliminary study of the material and components. Then when troubles are found in the bearings, turbine blades, coolant pipes, etc., it is more expensive and difficult to discover the causes and make changes.
All mechanical-system design is necessarily top-down, in the sense of top-down used by Eisner, above. This use of the term is metaphor for progressive functional decomposition from mission requirements down to component requirements. Engineers cannot, for example, size a shuttle’s fuel pumps based on the functional requirement of having five men and two women orbit the earth to deploy a communications satellite. The fuel pump’s performance requirements ultimately emerge from successive derivations of requirements for subsystem design candidates. This design process is top-down, whether the various layers of subsystem design candidates are themselves newly conceived systems or ones that are already mature products (“off the shelf”). Wikipedia’s article and several software methodology sites incorrectly refer to design using off-the-shelf components as bottom-up – not involving functional decomposition. They err by failing to consider that piecing together existing subsystems toward a grander purpose still first requires functional decomposition of that grander purpose into lower-level requirements that serve as a basis for selecting existing subsystems. Simply put, you’ve got to know what you want a thing to do, even if you build that thing from available parts – software or hardware – in order to select those parts. Using off-the-shelf software subsystems still requires functional decomposition of the desired grander system.
F-117 frontal view
Off-the-shelf is a common strategy in aerospace, primarily for cost and schedule reasons. The Lockheed F-117, despite its unique design, used avionics taken from the C-130 and the F-16, brakes from the F-15, landing gear from the T-38, and other parts from commercial and military aircraft. This was for expediency. For the F-117, these off-the-shelf components still had to go through the necessary requirements validation, functional and stress testing, certification, and approval by all of the “ilities” (reliability, maintainability, supportability, durability, etc) required to justify their use in the vehicle – just as if they were newly designed. Likewise for the Challenger, the choice of new design vs. off-the-shelf should have had no impact on safety or reliability if proper systems engineering occurred. Whether its constituents were new designs or off-the-shelf, the shuttle’s propulsion system is necessarily – and desirably – the result of top-down design. Feynman may simply mean that the design and testing phases were rushed, that omissions were made, and that testing was incomplete. Other evidence suggests this; but these omissions are not a negative consequence of top-down design, which is the only sound process for the design of aircraft and other systems of systems.
It is difficult to imagine any sound basis for Feynman’s use of – and defense of – bottom-up design other than the selection of off-the-shelf components, which, as mentioned above, still entails functional decomposition (top-down design). Other uses of the term appear in discussions of software methodologies. I also found a handful of academic papers that incorrectly – incoherently, in my view – equate top-down with analysis and deduction, and bottom-up with synthesis and induction. The erroneous equation of analysis with deductive reasoning pops up in Design Thinking and social science literature (e.g., at socialresearchmethods.net). It fails to realize that analysis as a means of inferring cause from observed result (i.e., what made this happen?) always entails inductive reasoning. Geometry is deduction; science and engineering are inherently inductive.
The use of bottom-up shows up in software circles in a disparaging sense. It describes a state of system growth that happens with no conscious design beyond that of an original seed. It is non-design, in a sense. Such “organic growth” happens in enterprise software when new features, not envisioned during the original design, are later bolted-on. This can stem from naïve mismanagement by those unaware of the damage done to maintainability and further extensibility of the software system, or through necessity in a merger/acquisition scenario where the system’s owners are aware of the consequences but have no other alternatives. This scenario obviously does not apply to the hardware or software of the Challenger; and if it did, such bottom-up “design” would be a defect of the system, not a virtue.
Hydro-mechanical system components in 737 gear bay
Aerospace has in its legacy an attitude – as opposed to a design method – sometimes called a bottom-up mindset. I’ve encountered this as a form of resistance to methodological system-design-for-safety and the application of redundancy. In my experience it came from expert designers of electro-hydro-mechanical subsystems. A legendary aerospace systems designer once told me with a straight face, “I don’t believe in probability.” You can trace this type of thinking back to the rough and ready pioneers of manned flight. Charles Lindbergh, for example, said something along the lines of, “give me one good engine and one good pilot.” Implicit in this mentality is the notion that safety emerges from component quality rather than from system design. The failure rates of the best aerospace components tend to vary from those of average components by factors of two or ten, whereas redundancy has an exponential effect. Feynman’s criticism of top-down and endorsement of bottom-up – whatever he meant by it – could unfortunately be seen as support for this harmful and oddly persistent notion of bottom-up.
Toward the end of Feynman’s report, he reveals another misunderstanding about design of life-critical systems. In the section on avionics, he faults NASA for using 15-year-old software and hardware designs, concluding that the electronics are obsolete. He claims that modern chip sets are more reliable and of higher quality. This criticism runs contrary to his complaint about top-down design of the main engines, and it misses a key point. The improvements in reliability of newer chips would contribute only negligibly toward improved availability of the quad-redundant system containing them. More importantly, older designs of electronic components are often used in avionics precisely because they are old, mature designs. Accelerated-life testing of electronics is known to be tricky business. We use old-design chips because there is enough historical usage data to determine their failure rates without relying on accelerated-life testing. Long ago at McDonnell Douglas I oversaw use of the Intel 87C196 chip for a system on the C-17 aircraft. The Intel rep told me that this was the first use of the Intel 8086-derivative chip in a military aircraft. We defended its use, over the traditional but less capable Motorola chips, on the basis that the then 10+ year history of 8086’s in similar environments was finally sufficient to establish a statistical failure rate usable in our system availability calculations. Interestingly, at that time NASA had already been using 8086 chips in the shuttle for years.
From a systems engineer’s perspective, Feynman’s minority report on the Challenger contains misunderstandings and technical errors. While these errors may have little impact on his findings, they should be called out because of the influence they may have on future generations of engineers. The tyranny of pedigree, as we saw with Galileo, can extend a wrong idea’s life for generations.
That said, Feynman makes several key points about the psychology of engineering management that deserve much more attention than they get in engineering circles. First among these, in my mind, is the fallacy of inducing safety from near-misses recorded as successes – a habit that produces undue confidence about future missions.
“His legs were weary, but his mind was at ease, free from the presentiment of change. The sense of security more frequently springs from habit than from conviction, and for this reason it often subsists after such a change in the conditions as might have been expected to suggest alarm. The lapse of time during which a given event has not happened is, in the logic of habit, constantly alleged as a reason why the event should never happen, even when the lapse of time is precisely the added condition which makes the event imminent. A man will tell you that he has worked in a mine for forty years unhurt by an accident, as a reason why he should apprehend no danger, though the roof is beginning to sink; and it is often observable that the older a man gets, the more difficult it is to retain a believing conception of his own death.”
– from Silas Marner, by George Eliot (Mary Ann Evans Cross), 1861
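Eliot’s miner had forty uneventful years; the shuttle, at the time of Challenger, had roughly two dozen flights. As a back-of-the-envelope illustration (my numbers, not Feynman’s), even if every one of those flights is counted as an unqualified success, the statistician’s rule of three puts the 95% upper confidence bound on the per-flight failure probability at only about

$$ p \lesssim \frac{3}{n} \approx \frac{3}{24} \approx 0.13, $$

some four orders of magnitude away from the 1-in-100,000 range that Feynman reports management citing.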
—–
Text and aircraft photos copyright 2013 by William Storage. NASA shuttle photos public domain.
Just a Moment, Galileo
Posted by Bill Storage in Engineering, Innovation management on October 29, 2013
Bruce Vojak’s wonderful piece on innovation and the minds of Newton and Goethe got me thinking about another 17th century innovator. Like Newton, Galileo was a superstar in his day – a status he still holds. He was the consummate innovator and iconoclast. I want to take a quick look at two of Galileo’s errors, one technical and one ethical, not to try to knock the great man down a peg, but to see what lessons they can bring to the innovation, engineering and business of this era.
Less well known than his work with telescopes and astronomy was Galileo’s work in mechanics of solids. He seems to have been the first to explicitly identify that the tensile strength of a beam is proportional to its cross-sectional area, but his theory of bending stress was way off the mark. He applied similar logic to cantilever beam loading, getting very incorrect results. Galileo’s bending stress illustration is shown below (you can skip over the physics details, but they’re not all that heavy).
For bending, Galileo concluded that the whole cross section was subjected to tension at the time of failure. He judged that point B in the diagram at right served as a hinge point, and that everything above it along the line A-B was uniformly in horizontal tension. Thus he missed what would be elementary to any mechanical engineering sophomore: with the section entirely in tension and no compression anywhere to react against it, the assumed forces cannot balance – they leave a net pull and, about most points, an unresolved moment (a tendency to twist, in engineer-speak). Since the cantilever is at rest, neither sliding nor spinning, we know that this model of reality cannot be right. In Galileo’s defense, Newton’s 3rd law (equal and opposite reaction) had not yet been formulated; Newton was born a year after Galileo died. Then again, Newton’s law was an assumption rooted in common sense, not derived from testing.
It took more than a hundred years (see Bernoulli and Euler) to finally get the full model of beam bending right. But laboratory testing in Galileo’s day could have shown that his theory of bending stress grossly overpredicts a beam’s strength – unconservative by roughly a factor of three for a rectangular section. And long before Bernoulli and Euler, Edme Mariotte published an article in which he got the bending stress distribution mostly right, identifying that the neutral axis belongs at the mid-height of the cross section, with tension on one side of it and compression on the other. A few decades later Antoine Parent polished up Mariotte’s work, arriving at the modern conception of bending stress.
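The arithmetic behind that factor of three is short (standard textbook results, not worked in the original post). For a rectangular cantilever of width $b$ and depth $h$, failing when the stress reaches the material’s tensile strength $\sigma$, Galileo’s model – the whole section uniformly in tension, pivoting about the bottom edge – gives a failure moment of

$$ M_{\text{Galileo}} = \sigma\,(bh)\cdot\frac{h}{2} = \frac{\sigma b h^2}{2}, $$

while the Euler–Bernoulli theory, with the neutral axis at mid-height, gives

$$ M_{\text{actual}} = \frac{\sigma b h^2}{6}. $$

Galileo’s formula thus promises three times the strength the beam actually has – exactly the kind of error a load test would have exposed.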
But Mariotte and Parent weren’t superstars. Manuals of structural design continued to publish Galileo’s equation, and trusting builders continued to use it. Beams broke and people died. Deference to Galileo’s authority across his whole domain of study not only led to needless deaths, but also to a long and fruitless search for other explanations of reality’s disagreement with theory.
So the problem with Galileo’s error in beam bending was not so much the fact that he made this error, but the fact that for a century it was missed largely for social reasons. The second fault I find with Galileo’s method is intimately tied to his large ego, but that too has a social component. This fault is evident in Galileo’s writing of Dialogue on the Two Chief World Systems, the book that got him condemned for heresy.
Galileo did not invent the sun-centered model of our solar system; Copernicus did. Galileo pointed his telescope to the sky, discovered four moons of Jupiter, and named them after influential members of the Medici family, landing himself a job as the world’s highest-paid scholar. No problem there; we all need to make a living. He then published Dialogue arguing for Copernican heliocentrism against the earth-centered Ptolemaic model favored by the church. That is, Galileo for the first time claimed that Copernicanism was not only an accurate predictive model, but was true. This was a tough pill for 17th-century Italians – not just their clergy – to swallow.
For heliocentrism to be true, the earth’s surface would have to be moving at about 1,000 miles per hour as the planet spins. Galileo had no good answer for why we don’t all fly off into space. He couldn’t explain why birds aren’t shredded by supersonic winds. He was at a loss to provide a rationale for why balls dropped from towers appear to fall vertically rather than at an angle, as would seem natural if the earth were spinning. And finally, if the earth is in a very different place in June than in December, why do the stars remain in the same pattern year-round (why no parallax)? As UC Berkeley philosopher of science Paul Feyerabend so provocatively put it, “The church at the time of Galileo was much more faithful to reason than Galileo himself.”
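For the record, the 1,000-mph figure is just the earth’s circumference divided by the length of a day. At the equator,

$$ v \approx \frac{2\pi R}{T} \approx \frac{2\pi \times 3{,}960\ \text{mi}}{24\ \text{h}} \approx 1{,}040\ \text{mph}, $$

and somewhat less at Italian latitudes, since the circle of latitude shrinks with the cosine of the latitude.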
At that time, Tycho Brahe’s modified geocentric theory of the planetary system – in which the planets circle the sun while the sun and moon circle the earth – may have been a better bet given the evidence. Brahe’s theory was empirically indistinguishable from Copernicus’s with the instruments of the day. Venus goes through phases, like the moon, in Brahe’s model just as it does in Copernicus’s. No experiment or observation of Galileo’s could refute Brahe.
Here’s the rub. Galileo never mentions Brahe’s model once in Dialogue on the Two Chief World Systems. Galileo knew about Brahe. His title, Two Systems, seems simply a polemic device – at best a rhetorical ploy to eliminate his most worthy opponent by sleight of hand. He’d rather fight Ptolemy than Brahe.
Likewise, Galileo ignored Johannes Kepler in Dialogue. Kepler’s Astronomia Nova had been in print for more than two decades by the time Galileo wrote Dialogue. Kepler had correctly identified that the planetary orbits are elliptical, not circular as Galileo believed. Kepler also got the tides essentially right, attributing them to the moon, where Galileo got them wrong, blaming the earth’s motion. Kepler wrote congratulatory letters to Galileo; Galileo’s responses were more reserved.
Galileo was probably a better man (or should have been) than his behavior toward Kepler and Brahe reveals. His fans fed his ego liberally, and he got carried away. Galileo, Brahe, Kepler and everyone else would have been better served by less aggrandizing and more humility. The tech press and the venture capital world that fuel what Vivek Wadhwa calls the myth of the 20-year-old white male genius CEO should take note.