Feynman’s Minority Report and Top-Down Design

On reading my praise of Richard Feynman, a fellow systems engineer and INCOSE (International Council on Systems Engineering) member suggested that I read Feynman’s Minority Report to the Space Shuttle Challenger inquiry. He said I might not like it. I read it, and I don’t like it, at least not from the perspective of a systems engineer.

Challenger explosion, Jan. 28, 1986

I should be clear about what I mean by systems engineering. I know of three uses of the term: first, the engineering of embedded systems, i.e., firmware (not relevant here); second, an organizational management approach (relevant, but secondary); third, a discipline aimed at designing assemblies of components to achieve a function greater than those of its constituents (bingo). Definitions given by others are useful for examining Feynman’s minority report on the Challenger.

Simon Ramo, the “R” in TRW and chief architect of the U.S. ICBM program, put it like this: “Systems engineering is a discipline that concentrates on the design and application of the whole (system) as distinct from the parts. It involves looking at a problem in its entirety, taking into account all the facets and all the variables and relating the social to the technical aspect.”

Howard Eisner of GWU says, “Systems engineering is an iterative process of top-down synthesis, development, and operation of a real-world system that satisfies, in a near optimal manner, the full range of requirements for the system.” 

INCOSE’s definition is pleasantly pragmatic, given that their guide otherwise tends a bit toward strategic-management jargon: “Systems engineering is an interdisciplinary approach and means to enable the realization of successful systems.”

Feynman reaches several sound conclusions about the root causes of the flight 51-L Challenger disaster. He observes that NASA’s safety culture had critical flaws and that its management seemed to indulge in fantasy, ignoring the conclusions, advice and warnings of diligent systems and component engineers. He gives specific examples of how NASA management grossly exaggerated the reliability of many systems and components in the shuttle. On this point he concludes, “reality must take precedence over public relations, for nature cannot be fooled.” He describes a belief by management that because an anomaly had no consequence in a previous mission, it must therefore be safe. Most importantly, he cites NASA management’s erroneous use of the concept of factor of safety around the O-ring seals between the two lower segments of the solid rocket motors (the Rogers Commission also agreed that failure of these O-rings was the root cause of the disaster). A NASA report on seal erosion in an earlier mission (flight 51-C) had assigned a safety factor of three, based on the seals having eroded only one third of the amount thought to be critical. Feynman replies that the O-rings were not designed to erode, and hence the factor-of-safety concept did not apply. Seal erosion was a failure of the design, catastrophic or not; there was no safety factor at all. “Erosion was a clue that something was wrong; not something from which safety could be inferred.”
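A minimal way to put the two lines of reasoning side by side (notation mine; the one-third erosion figure is the only number taken from the report):

```latex
% NASA management's implicit calculation treated observed erosion as a load:
\[
  \mathrm{FS}_{\text{claimed}}
    \;=\; \frac{d_{\text{critical erosion}}}{d_{\text{observed erosion}}}
    \;=\; \frac{1}{1/3} \;=\; 3
\]
% Feynman's objection: a factor of safety is a margin against conditions the design
% is intended to tolerate. The O-rings were designed for zero erosion, so any
% observed erosion means the hardware is operating outside its design basis --
% evidence of an active failure mode, not a margin from which safety can be inferred.
```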

But later Feynman incorrectly states that a hypothetical propulsion-system failure rate of 1 in 100,000 missions could only be established with confidence through an inordinate number of tests. Here he seems to miss two things: the exponential effect of redundancy on reliability, and the fact that fault tree analysis can confidently calculate low system failure rates from the historical failure rates of large populations of constituent components, combined with the output of FMEAs (failure mode effects analyses) on those components in the relevant systems. This error does not affect Feynman’s conclusions about the root cause of the Challenger disaster. I mention it here because Feynman might be viewed as an authoritative source on systems engineering, yet here he is doing a poor job of it.
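A minimal sketch of the kind of calculation Feynman dismisses. The failure rates below are invented placeholders, not shuttle figures, and failures are assumed independent; the point is only that fleet-established component rates plus redundancy yield a calculable system rate far lower than anything one could demonstrate by test flights alone.

```python
# Illustrative fault-tree arithmetic: fleet-derived component failure rates plus
# redundancy support a very low calculated system failure rate without flying
# 100,000 missions. All numbers are invented placeholders, and failures are
# assumed independent -- a simplification a real FMEA/fault tree would refine.

# Hypothetical per-mission failure probabilities from large-fleet history
component_p = {
    "turbopump": 1e-3,
    "valve": 5e-4,
    "controller": 2e-4,
}

def and_gate(p: float, n: int) -> float:
    """All n redundant units must fail for the branch to fail (independence assumed)."""
    return p ** n

def or_gate(branch_probs: list[float]) -> float:
    """The system fails if any branch fails."""
    survive = 1.0
    for p in branch_probs:
        survive *= (1.0 - p)
    return 1.0 - survive

# Dual redundancy on every branch of this toy propulsion subsystem
system_p = or_gate([
    and_gate(component_p["turbopump"], 2),   # ~1e-6
    and_gate(component_p["valve"], 2),       # ~2.5e-7
    and_gate(component_p["controller"], 2),  # ~4e-8
])

print(f"Calculated per-mission failure probability: {system_p:.2e}")
# ~1.3e-06 with these placeholders: orders of magnitude below any single
# component's rate, and established analytically rather than by 10^5 flights.
```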

Discussing the liquid fuel engines, Feynman then introduces the concept of top-down design, which he criticizes. It isn’t clear exactly what he means by top-down. The most charitable reading would be a critique of NASA top management’s overruling the judgments of engineering management and engineers; but, on closer reading, it’s clear this cannot be his meaning:

The usual way that such engines are designed (for military or civilian aircraft) may be called the component system, or bottom-up design. First it is necessary to thoroughly understand the properties and limitations of the materials to be used (for turbine blades, for example), and tests are begun in experimental rigs to determine those. With this knowledge larger component parts (such as bearings) are designed and tested individually…

The Space Shuttle Main Engine was handled in a different manner, top down, we might say. The engine was designed and put together all at once with relatively little detailed preliminary study of the material and components. Then when troubles are found in the bearings, turbine blades, coolant pipes, etc., it is more expensive and difficult to discover the causes and make changes.

All mechanical-system design is necessarily top-down, in the sense of top-down used by Eisner, above. The term is a metaphor for progressive functional decomposition from mission requirements down to component requirements. Engineers cannot, for example, size a shuttle’s fuel pumps based on the functional requirement of having five men and two women orbit the earth to deploy a communications satellite. The fuel pump’s performance requirements ultimately emerge from successive derivations of requirements for subsystem design candidates. This design process is top-down, whether the various layers of subsystem design candidates are themselves newly conceived systems or ones that are already mature products (“off the shelf”). Wikipedia’s article and several software methodology sites incorrectly refer to design using off-the-shelf components as bottom-up – that is, not involving functional decomposition. They err by failing to consider that piecing together existing subsystems toward a grander purpose still requires functional decomposition of that grander purpose into lower-level requirements, which serve as the basis for selecting those existing subsystems. Simply put, you have to know what you want a thing to do, even if you build it from available parts – software or hardware – in order to select those parts. Using off-the-shelf subsystems still requires functional decomposition of the desired grander system, as the toy sketch below illustrates.
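Every requirement, number, and part name in this sketch is invented for illustration; the only point it makes is that candidate parts, whether off-the-shelf or clean-sheet, can only be selected against requirements that were first derived top-down from the mission.

```python
# Toy illustration of top-down functional decomposition: the mission requirement
# is decomposed into derived requirements, and only then are candidate parts
# (new designs or off-the-shelf) evaluated against them. All values are invented.

mission_requirement = "Deliver a 2,300 kg satellite to low Earth orbit with a crew of seven"

# Derived, lower-level requirements produced by successive decomposition
derived_requirements = {
    "propellant_flow_kg_per_s": 450,   # flows down to the fuel pump
    "pump_outlet_pressure_mpa": 30,
    "pump_mass_budget_kg": 350,
}

# Candidate pumps: one off-the-shelf, one clean-sheet design
candidates = [
    {"name": "existing_pump_A", "flow": 480, "pressure": 32, "mass": 340},
    {"name": "new_design_B",    "flow": 460, "pressure": 31, "mass": 300},
]

def meets(req: dict, part: dict) -> bool:
    """A part is selectable only if it satisfies the derived requirements."""
    return (part["flow"] >= req["propellant_flow_kg_per_s"]
            and part["pressure"] >= req["pump_outlet_pressure_mpa"]
            and part["mass"] <= req["pump_mass_budget_kg"])

viable = [p["name"] for p in candidates if meets(derived_requirements, p)]
print(f"Mission: {mission_requirement}")
print(f"Viable pump candidates: {viable}")
# Either candidate can be judged only against requirements that exist because
# the mission was decomposed top-down first.
```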

F-117 stealth fighter, frontal view

Off-the-shelf is a common strategy in aerospace, primarily for cost and schedule reasons. The Lockheed F-117, despite its unique design, used avionics taken from the C-130 and the F-16, brakes from the F-15, landing gear from the T-38, and other parts from commercial and military aircraft. This was done for expediency. For the F-117, these off-the-shelf components still had to go through the necessary requirements validation, functional and stress testing, certification, and approval against all of the “ilities” (reliability, maintainability, supportability, durability, etc.) required to justify their use in the vehicle – just as if they had been newly designed. Likewise for the Challenger, the choice of new design vs. off-the-shelf should have had no impact on safety or reliability had proper systems engineering been practiced. Whether its constituents were new designs or off-the-shelf, the shuttle’s propulsion system is necessarily – and desirably – the result of top-down design. Feynman may simply mean that the design and testing phases were rushed, that omissions were made, and that testing was incomplete. Other evidence suggests this; but such omissions are not an inherent consequence of top-down design, which is the only sound process for designing aircraft and other systems of systems.

It is difficult to imagine any sound basis for Feynman’s use of – and defense of – bottom-up design other than the selection of off-the-shelf components, which, as noted above, still entails functional decomposition (top-down design). Other uses of the term appear in discussions of software methodologies. I also found a handful of academic papers that incorrectly – incoherently, in my view – equate top-down with analysis and deduction, and bottom-up with synthesis and induction. The erroneous equation of analysis with deductive reasoning also pops up in Design Thinking and social-science literature (e.g., at socialresearchmethods.net). It overlooks the fact that analysis as a means of inferring cause from an observed result (i.e., what made this happen?) always entails inductive reasoning. Geometry is deduction; science and engineering are inherently inductive.

Bottom-up also shows up in software circles in a disparaging sense. There it describes a state of system growth that happens with no conscious design beyond that of an original seed; it is non-design, in a sense. Such “organic growth” happens in enterprise software when new features, not envisioned during the original design, are later bolted on. This can stem from naïve mismanagement by those unaware of the damage done to the maintainability and further extensibility of the software system, or from necessity in a merger or acquisition, where the system’s owners are aware of the consequences but have no alternative. This scenario obviously does not apply to the hardware or software of the Challenger; and if it did, such bottom-up “design” would be a defect of the system, not a virtue.

Hydro-mechanical system components in a 737 gear bay

Aerospace has in its legacy an attitude – as opposed to a design method – sometimes called a bottom-up mindset. I have encountered it as a form of resistance to methodical system-level design for safety and to the application of redundancy, usually from expert designers of electro-hydro-mechanical subsystems. A legendary aerospace systems designer once told me with a straight face, “I don’t believe in probability.” You can trace this type of thinking back to the rough-and-ready pioneers of manned flight. Charles Lindbergh, for example, said something along the lines of, “give me one good engine and one good pilot.” Implicit in this mentality is the notion that safety emerges from component quality rather than from system design. But the failure rates of the best aerospace components tend to differ from those of average components by factors of two or ten, whereas redundancy has an exponential effect. Feynman’s criticism of top-down and endorsement of bottom-up – whatever he meant by them – could unfortunately be read as support for this harmful and oddly persistent bottom-up mindset.
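A back-of-the-envelope comparison makes the asymmetry plain (the rates below are invented and failures are assumed independent):

```latex
% Invented per-mission failure probabilities; failures assumed independent.
% Better parts buy a linear factor; redundancy buys an exponent.
\[
  \underbrace{10^{-3}}_{\text{average part}}
  \;\rightarrow\;
  \underbrace{10^{-4}}_{\text{premium part, ten times better}}
  \qquad\text{versus}\qquad
  \underbrace{(10^{-3})^{2} = 10^{-6}}_{\text{two average parts in parallel}}
  \;,\quad
  \underbrace{(10^{-3})^{3} = 10^{-9}}_{\text{three in parallel}}
\]
```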

Toward the end of his report, Feynman reveals another misunderstanding about the design of life-critical systems. In the section on avionics, he faults NASA for using 15-year-old software and hardware designs, concluding that the electronics are obsolete, and claims that modern chip sets are more reliable and of higher quality. This criticism runs contrary to his complaint about top-down design of the main engines, and it misses a key point. The improved reliability of newer chips would contribute only negligibly to the availability of the quad-redundant system containing them. More importantly, older designs of electronic components are often used in avionics precisely because they are old, mature designs. Accelerated-life testing of electronics is notoriously tricky business. We use old-design chips because there is enough historical usage data to determine their failure rates without relying on accelerated-life testing. Long ago at McDonnell Douglas I oversaw the use of the Intel 87C196 chip for a system on the C-17 aircraft. The Intel rep told me it was the first use of an Intel 8086-derivative chip in a military aircraft. We defended its use, over the traditional but less capable Motorola chips, on the basis that the then 10+ year history of 8086s in similar environments was finally sufficient to establish a statistical failure rate usable in our system availability calculations. Interestingly, at that time NASA had already been using 8086 chips in the shuttle for years.
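To put rough numbers on “negligibly” (the channel failure rates here are illustrative, not the shuttle’s, with the simplifying assumptions that channel failures are independent and that function is lost only when all four channels fail):

```latex
% Illustrative channel failure probabilities; failures treated as independent, and
% loss of function simplified to "all four channels fail."
\[
  p = 10^{-4}:\quad p^{4} = 10^{-16}
  \qquad\qquad
  p = 5\times10^{-5}\ \text{(a chip twice as reliable)}:\quad p^{4} \approx 6\times10^{-18}
\]
% Both results are vanishingly small next to other per-mission risks, so the newer
% chip's reliability gain buys essentially nothing at the system level.
```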

Feynman’s minority report on the Challenger contains misunderstandings and technical errors from the perspective of a systems engineer. While these errors may have little impact on his findings, they should be called out because of the influence they may have on future generations of engineers. The tyranny of pedigree, as we saw with Galileo, can extend a wrong idea’s life for generations.

That said, Feynman makes several key points about the psychology of engineering management that deserve much more attention than they get in engineering circles. First among these, in my mind, is the fallacy of induction from near-misses viewed as successes, which produces undue confidence about future missions.
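One way to quantify that fallacy, using a standard statistical bound rather than anything in Feynman’s report: after n missions with no catastrophic failure, a rough 95% upper confidence bound on the per-mission failure probability is only about 3/n.

```latex
% "Rule of three": if no failures occur in n independent trials, then with roughly
% 95% confidence the per-trial failure probability p satisfies p <= 3/n, because
% (1 - 3/n)^n is approximately e^{-3}, or about 0.05.
\[
  (1-p)^{n} \ge 0.05 \;\Longrightarrow\; p \lesssim \frac{3}{n},
  \qquad\text{e.g., two dozen clean flights only bound } p \text{ to about } \tfrac{3}{24} \approx 0.13
\]
% A short record of successes is consistent with odds vastly worse than a
% claimed 1-in-100,000 per-mission failure rate.
```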

 “His legs were weary, but his mind was at ease, free from the presentiment of change. The sense of security more frequently springs from habit than from conviction, and for this reason it often subsists after such a change in the conditions as might have been expected to suggest alarm. The lapse of time during which a given event has not happened is, in the logic of habit, constantly alleged as a reason why the event should never happen, even when the lapse of time is precisely the added condition which makes the event imminent. A man will tell you that he has worked in a mine for forty years unhurt by an accident, as a reason why he should apprehend no danger, though the roof is beginning to sink; and it is often observable that the older a man gets, the more difficult it is to retain a believing conception of his own death.”

 – from Silas Marner, by George Eliot (Mary Ann Evans Cross), 1861

—–

Text and aircraft photos copyright 2013 by William Storage. NASA shuttle photos public domain.
