Category Archimedes

Archimedes

All technologies differ from one another. They are as varied as humanity’s interaction with the physical world. Even people attempting to do the same thing produce multiple technologies. For example, John H. White discovered more than 1000 nineteenth-century patents for locomotive smokestacks.1 Yet all technologies are processes by which humans seek to control their physical environment and bend nature to their purposes. All technologies are alike.

The tension between likeness and difference runs through this collection of papers. All focus on atmospheric flight, a twentieth-century phenomenon. But they approach the topic from different disciplinary perspectives. They ask disparate questions. And they work from distinct agendas. Collectively they help to explain what is different about aviation – how it differs from other technologies and how flight itself has varied from one time and place to another.

The importance of this topic is manifest. Flight is one of the defining technologies of the twentieth century. Jay David Bolter argues in Turing’s Man that certain technologies in certain ages have had the power not only to transform society but also to shape the way in which people understand their relationship with the physical world. “A defining technology,” says Bolter, “resembles a magnifying glass, which collects and focuses seemingly disparate ideas in a culture into one bright, sometimes piercing ray.”2 Flight has done that for the twentieth century.

Though the authors represented in this volume come from very different backgrounds, we share a concern to move beyond a fascination with origins and firsts. Instead, the essays of this book all attend to a technology forever in process, a technology modified all the way through its history: from its shifting relationship with the aerodynamic sciences to the shop-floor culture of bomber production, from the changing functions of patented mechanisms to the standards of pilot training, protocol, and disaster inquiries. Through and through, this is a book about the heterogeneous practices of aviation all the way down the line.

In some ways the technologies of flight seem remarkably stable: in looking at the earliest airplanes, the wings, ailerons, rudder and elevators seem remarkably congruent with the same features on a 747. Yet, over the course of the twentieth century, the technologies of flight have radically altered. Consider the perspective of Hugh Dryden, the former Director of the National Advisory Committee for Aeronautics and Deputy Administrator of the National Aeronautics and Space Administration. He used to say that he grew up with the airplane. He wrote his first paper on flight in 1910, when he was 12 and the airplane was 7. In it he argued for “The Advantages of an Airship over an Airplane,” earning an F from a prescient if harsh teacher. At the time of his death in 1965, Dryden was helping to orchestrate humankind’s first journey to an extraterrestrial body. Born before the airplane, Dryden lived to see humans fly into space.

Now, at the end of the twentieth century, humans are about to occupy an international space station. Its supporters believe that it will begin a permanent human presence in space. The Wright brothers could hardly have imagined that their primitive attempt to fly would lead within a century to a permanent presence off the Earth. The technology that they inaugurated transformed humans from gravity-bound creatures scurrying about the face of the Earth to spacefaring explorers looking back at the home planet as if it were an artifact of history. At the time of Apollo, historian Arthur Schlesinger, Jr., hazarded the guess that when future generations reflected on the twentieth century they would remember it most for the first moon landing.

Flight has defined the 20th century symbolically, spiritually, and spatially. Individual airplanes such as the workhorse DC-3, the democratic Piper Cub, the dreadful B-29, the rocket-like X-15, and the angular Stealth fighter have imprinted their shapes and their personalities on modern life. They represent our contemporary ability to deliver people, bombs, or disaster relief anywhere in the world in a matter of hours. Popular imagination has rendered the Wright brothers, Charles Lindbergh, and Chuck Yeager as quintessentially heroic individuals, icons of the human yearning to subdue nature, achieve freedom of movement, and conquer time and space. To the extent that the world has become Marshall McLuhan’s global village, flight has made it so. Communications put us in touch with each other, but airplanes put us in place.

Is a defining technology like other technologies? Or is it different? Does it obey the same rules, evince the same patterns, produce the same results? Or do defining technologies, by virtue of their powerful interaction with society, operate differently? The essays collected in this volume shed considerable light on these questions.

This volume – and the conference that launched it – began with a series of discussions between Alex Roland and Dibner Institute co-directors Jed Buchwald and Evelyn Simha. Would it not be original and fruitful, they wondered, to bring together historians of flight with a wider group of scholars and engineers from related fields – people who had not necessarily written on the history of flight? Peter Galison was recruited as a historian of science and private pilot – and together Roland and Galison began assembling the mix of historians of the technology of flight and the engineers, philosophers, sociologists, and historians who are represented here. Our great debt is to the Dibner Institute for their support of our conference from 3-5 April 1997, and the continuing interest they have had in seeing this volume come to fruition.

In addition to the individual merits of the papers gathered in this volume, we believe that collectively they shed light on the question of whether or not flight functions like other technologies. The simple answer is yes and no. The complete answer is more interesting and more provocative. Readers may find their own version of that answer in the papers that follow. Here we will attempt only to point out some of the ways in which the answer might be construed from these contributions.

First, flight may be seen as similar to other technologies. Patents represent one area in which this is true. These government charters to promote and reward innovation are often depicted as measures of inventive activity and stimulants to technological change. They might be expected to have played a significant role in the development of flight. Thomas Crouch and Alex Roland confirm this expectation, but find that it operated in unexpected ways. Crouch debunks the myth that the Wright patent choked U. S. aviation development in the period leading up to World War I. Though the Wright patent was surely unusual in its scope and impact, it did not retard development, as its opponents claimed, and it was not unique. Roland takes up the same issue where Crouch leaves it. Studying the impact of patents on airframe manufacture in the period from World War I to the present, he finds that patents were important at the outset, less so over time. This pattern is familiar in cumulative industries where foundational patents launch a new technological trajectory but then decline in relative importance.

National subsidy also shaped aviation in the same way that it has shaped other technologies, such as shipbuilding, armaments, microelectronics, and computers. Increasingly in the modern world, industrialized states have intervened in technological arenas deemed important to national security or prosperity. Aviation is no exception. Takehiko Hashimoto’s paper demonstrates the strong role of government policy in the development of British aviation, a pattern repeated in other developed European nations. Walter Vincenti describes a research project within the National Advisory Committee for Aeronautics (NACA), one of the institutional mechanisms by which the United States government subsidized aviation development. The cross-licensing agreement at the heart of Roland’s paper came into being at government behest and with the benefit of a $2 million government buy-out.

Dual-use is another characteristic that likens aviation to other technologies. It means that the technology has both military and civilian applications. From the very first, aviation has been dual-use; the Wright brothers built their plane as an end in itself, but first sought to sell it to the U. S. Army. George Lewis, the Director of Research of the National Advisory Committee for Aeronautics in the 1930s and 1940s, confessed that he could not think of any improvement in aviation that would not benefit military and civilian aviation alike. Thus the research conducted at the Army’s McCook Field in the teens and twenties, examined here in Peter Jakab’s paper, turned out to have important civilian applications. Likewise, the research that Walter Vincenti and his colleagues conducted at the NACA in the 1940s was equally applicable to the wings of military and commercial aircraft. And the production methods worked out in the mass assembly of B-17’s and B-29’s described by Robert Ferguson utterly transformed the practice of airplane assembly after 1945.

Ironically, dual-use has become less pervasive in modern aviation at just the time when the military services have focused more attention upon it. The reason is the increasingly specialized nature of modern, high-performance military aircraft. Many of them, for example, now feature skins that resist heating at high speeds, a characteristic unnecessary on commercial aircraft. Stealth technology, one of the marvels of recent research and development, has no utility for civilian aircraft. The special stability characteristics of fighter aircraft are unique. The avionics of high-speed, low-level flight have few applications on the commercial side, nor do electronic countermeasures, ejection seats, and the ultra-high flying technologies reserved almost exclusively to reconnaissance aircraft. Of course the period before World War II had its share of military technologies with no civilian analog, such as bombsights, armaments, and carrier-landing capability. But the irony remains that civilian applications of military aeronautical technology have become more elusive just when the military and the aerospace industry have taken the greatest interest in them.

The other side of dual-use, of course, is that research and development aimed at military products often differs from that supported by the commercial market. The military usually requires higher standards of performance and reliability. Perfecting such technology may require more research and development than market forces could support. But once the technology has been perfected, it may be transferred to the commercial marketplace fairly cheaply; the overhead has already been absorbed. The classic example of this is U. S. computer development during the Cold War,3 but aviation provides a similar instance. The instrumentation developed by Frederick Suppe and his colleagues to test the performance of military aircraft could later be installed on commercial planes for a fraction of the cost. The turbofan engine development described by George Smith and David Mindell came free to the commercial manufacturers, fully paid for by the military. This phenomenon, a commonplace of United States development during the Cold War,3 seeps into the issue of national subsidy. Roland’s paper concludes that one reason for the success of commercial airframe manufacture in this country was the indirect subsidy of government research. Much of that subsidy took the form of military R&D.

Also, in its relation to science, aeronautics resembles other science-based technologies. John Anderson traces paths by which scientific knowledge has entered the realm of aeronautical engineering. Similar paths have marked the intercourse between thermodynamics and engine design, between solid-state physics and microelectronics, and between microbiology and genetic engineering. In aeronautics, as in all of these other realms, traffic moves along these paths in both directions. Just as science often provides theoretical models for better technology, so too does technological development often provide challenges to theory and new tools for scientific investigation. Walter Vincenti’s paper offers a stunning example of the way in which cross fertilization of an experimental technique from one investigation allowed a breakthrough to conceptual understanding of physical phenomena in another. George Smith and David Mindell explain how advances in metallurgy yielded titanium fan blades for more efficient engines. By exploring the texture of shop-floor life in World War II aircraft production, Robert Ferguson shows how the design process was never restricted to the “top” of the assembly process – innovation, modification, re-design occurred all the way down from initial sketches to the final stages of production. These papers thoroughly rebut the naive picture in which design and knowledge enter only at the start of a massive project.

Equally revealing are the ways in which flight is different from other technologies. First, it is more dangerous than most. Peter Galison’s paper wrests technological insight from two gripping commercial airline accidents; the imperative to identify the cause of an accident drives investigators toward a definition of agency that challenges our very understanding of technological systems and the ways in which they fail. Whether it is the test pilots in Frederick Suppe’s account of flight instrumentation or the fatal crash of Otto Lilienthal, whom Roland represents as the inspiration for the Wright brothers, disaster accompanies the failure of this technology more swiftly and surely than almost any other.

Cost also separates flight from most technologies. Deborah Douglas explores the price of passenger accommodation in the early years of airport design and construction. If customers were going to pay for air transportation, they had to pass through a site that connected everyday life in two dimensions with a technology of three dimensions. The problems were enormous and costly. Frederick Suppe opens up the world of flight instrumentation, one of the auxiliary technologies without which flying would be riskier and less understood. Walter Vincenti reveals the painstaking detail required to understand – or begin to understand – the character of supersonic flow over an airfoil. George Smith and David Mindell track the evolving relationship among compressor, turbine, and airflow that characterized the incremental development of high-bypass jet engines. The wind tunnel in which these ideas were tested cost more to design, build, operate, and staff than did the complete research and development programs in many other technologies.

The romance of flight permeates all these papers, and sets this technology apart from most others. Frederick Suppe captures it in his account of daring test flights in the desert. Even Deborah Douglas’ account of early airport design resonates with the adventure and excitement that airport designers were trying to exploit. The heroic airmanship of pilot Al Haynes and his crew in nursing United Airlines Flight 232 to a controlled crash ennobles an otherwise tragic technological failure. A technology that allows humans to “slip the surly bonds of earth” cannot help but appear romantic in comparison to the mundane tasks to which most technology is committed. Indeed, in recent years scholars have begun to historicize the romance of aviation, using the airplane as a means of exploring larger issues of twentieth century cultural history.4

Few technologies generate the infrastructure that has grown up around atmospheric flight. The Wrights achieved flight with the materials that they could haul by rail and boat from Ohio to North Carolina, supplemented by food and shelter purchased locally. Today aviation needs research and development of the kind described by Smith and Mindell, Hashimoto, Eric Schatzberg, and Vincenti; testing and instrumentation like that explored in Suppe’s paper; institutional guarantees of rights to innovation as laid out by Crouch, Roland, and Roger Bilstein; operating infrastructure such as airports (Douglas) and accident investigation (Galison); and much more. Some of the infrastructure is private, some public; most of it now has to be coordinated internationally, so that flight can cross national borders without loss of system integrity.

Finally, atmospheric flight requires higher standards than most other technologies, in part because of the danger involved, in part because of the cost. When a single airliner can cost more than $100 million and an airport costs billions, the incentive to ensure their faultless operation is high. When a single accident can kill hundreds of people, the incentive is incalculable. Hashimoto, Schatzberg and Ferguson show the ways in which standardization entered aircraft design and testing early in the century. Suppe’s paper demonstrates how the price of standardization has risen, with ever more expensive and accurate instrumentation and ever more data points required to get the information necessary for confident operations. Galison’s inquiry into accident investigations reveals the lengths to which the government will go to root out system weakness and replace it with the standardized practice linked to higher levels of safety.

These papers also demonstrate the ways in which flight has varied from time to time and place to place. The historical literature of flight is notoriously parochial and nationalistic; indeed, it proved difficult to break that pattern in assembling the participants in this volume. But even when flight is studied comparatively and from various perspectives, it is difficult to discern international patterns over time. Rather the principal artifacts of this technology, the airplanes themselves, along with their support equipment and infrastructure, reflect the national styles and the periods in which they were generated. The reasons for this are not hard to find.

The main reason is institutional. In consonance with recent scholarship on the shaping influence of the research environment, several papers explore the ways in which differing research styles produced differing artifacts. Hashimoto takes sociologist of science Bruno Latour’s notion of inside/outside research behavior as his explicit model for understanding the impact of Leonard Bairstow on the development of British aviation in the years between the world wars. The concentration of the British on stability research and wind-tunnel modeling, and their tardiness in adopting boundary layer theory and corrections for wall interference effects, were a direct result of Bairstow’s determination to defend his research base in empirical, wind-tunnel studies. It took an international, comparative research project in the 1930s to reveal the extent to which this concentration had retarded British development. Robert Ferguson explores the specificity of engineering cultures even when the companies are producing the identical airplane. Or perhaps, as Ferguson shows, “identical” needs to be put between quotation marks: it seems that no amount of drawing, personnel exchange, or even exchange of airplanes could surmount the myriad of details that separated production at Boeing from that at Vega or Douglas.

Like Hashimoto, our commentator, David Bloor, has stressed the potential fruitfulness of a sociological reading of institutional identity, though Bloor invokes the sociologist Mary Douglas. Douglas’s idea is that rather than dichotomizing institutions, we might invoke a two-by-two matrix, so to speak: institutions are either egalitarian or hierarchical, and they are either boundary-policing or boundary-permeable. Using this four-way typology, Bloor queries our various authors as to where on such a chart they might find their “engineering cultures,” e. g., General Electric versus Pratt and Whitney or Goettingen versus Cambridge. That is, Bloor wants to know whether the various ways that engineers treat objects reflect basic sociological features of the way they treat the people with whom they work.

Also attentive to engineering culture is Walter Vincenti, who attributes the success of his research team to the creative, unstructured, eclectic laboratory environment they enjoyed at the Ames Aeronautical Laboratory of the U. S. National Advisory Committee for Aeronautics. He pictures a free association between theory and empiricism, in which individual researchers were able to bring new ideas and proposals from any source. They were measured by their efficacy in solving the problems at hand, as opposed to the doctrinaire constraints imposed by Bairstow in the British environment.

Suppe presents an entirely different research environment, instrumentation of flight testing. The principal dynamic at work here is the relationship between improving instrument technology and the ever increasing demand for more data. Instrumentation offers more and better data, but it can hardly keep pace with the demands for more data points and more precision as aircraft speed and performance improve. The research imperative is therefore not to figure out the design of better aircraft but to develop and field equipment that will keep up.

Research and development in aerodynamics took a fascinating, counterintuitive turn in the account Smith and Mindell provide of the high-bypass jet engine. While one might expect that a radical new design by one company would stimulate equally radical changes in the competition, these authors show quite the reverse took place. In a highly secretive program, General Electric stunned the aviation world with their novel 1957 single-stage, aft-mounted fan. Pratt and Whitney responded, but their counter was a similarly efficient but completely incremental two-stage fan with vastly simpler aerodynamics. In part because of novelty-related start-up problems for GE, P&W triumphed in the marketplace.

Peter Galison describes an entirely different institutional imperative. The National Transportation Safety Board (NTSB) is required by law to investigate technological failure in a particular way. Torn between seeking to understand an accident in all its complexity of contributing causes and the institutional demand to locate a more localized “probable cause,” accident investigation is a vexed enterprise. Under these constraints, the investigating team is often driven to identify point failures, especially point failures that are subject to remedy. Thus an institution that is poised at the very nexus of technological understanding, i. e., at the point where technology fails, is bound by law to view that failure narrowly and instrumentally. This can lead to great technical virtuosity and poor contextual understanding. And so, while trying to preserve a “condensed” notion of causality, the investigators time and again sought to embed the causal account in the wider spheres aimed at by psychological, organizational, and sociological approaches.

Peter Jakab captures the excitement of McCook Field in its early years. Before the U. S. Army knew what it was going to do with aviation, and before its institutional research arrangements settled into routinized patterns, McCook Field was a hothouse of innovative ideas and experiments. Distinguished researchers accepted appointment there and brought their creative energies to a field rich with promise and interest. If anything, there was too much innovation and experimentation at McCook Field in these years, with the research program seemingly running off in many different directions at once. The result was that McCook Field did not itself come to be credited with any great technological breakthroughs, but the people who worked there honed their research skills and gained invaluable experience. As an institution, it turned out to be a better training ground than a proving ground.

Even Deborah Douglas’s account of airport development in the United States suggests the powerful ways in which institutions shape technological development. Commercial passenger travel achieved market viability in the United States in the 1930s. The so-called “airframe revolution” that produced the DC-3 is most often credited. But Douglas reveals that airport design also played a role. Only when the airport came to be envisioned as a user-friendly, comfortable, safe, and aesthetically pleasing nexus between air and land travel could airlines hope to attract the passengers who would make their enterprise profitable. The American decision to make airports local institutions prodded the market toward competitive design and production that pitted cities against one another in their claims to be most progressive and advanced. The results were airports like LaGuardia in New York, which lent a unique stamp to American aviation and helped to foster development of the entire commercial enterprise.

John Anderson attests to the importance of institutions in transferring knowledge and understanding back and forth between scientists and technologists of flight. Nikolay Joukowski, the head of the Department of Mechanics at Moscow University, was the first scientist to take Otto Lilienthal’s work with gliders as a fit subject for scientific investigation. The resulting Kutta-Joukowski theorem, which revolutionized theoretical aerodynamics, gained purchase in part because of the weight of Joukowski’s reputation and his institutional setting. So too did Ludwig Prandtl’s position at Goettingen University lend credibility to his research on the boundary layer. He took up a practical problem, theorized it in a revolutionary scientific concept that transformed modern fluid dynamics, and then gave it back to practical application in his own work and that of his students on the flow of air over wings and fuselage.

The cases of Joukowski and Prandtl serve not only to illustrate the ways in which institutions have shaped the development of flight technology but also to introduce a final way in which these papers address differences in flight. University research in Russia and Germany influenced aeronautical development long before American and British universities achieved such an impact. In fact the German style of university-based, theoretical research in aerodynamics was spread to the United States by two of Prandtl’s students. As Roger Bilstein makes clear, Max Munk went to the National Advisory Committee for Aeronautics in 1929 and developed there the innovative variable density wind tunnel for which the NACA won its first Collier Trophy. Even more significantly, Theodore von Karman accepted the invitation of Nobel laureate Robert Millikan to join the faculty at the California Institute of Technology and direct its Guggenheim Aeronautical Laboratory. From that institutional base von Karman went on to exert a formative influence on aeronautical research and development and on the policies of the United States Air Force. American aeronautical development took a more theoretical turn because of the immigration of this European, especially German, style of research.

National variations in research styles are evident in other papers as well. Eric Schatzberg reveals the impact of national tastes for materials in his discussion of the wooden airplane in the 1930s and 1940s. The United States’ preference for metal as an aircraft building material flowed from preconceptions about the modernity of aluminum, not from a judicious evaluation of the merits of wood. For equally nationalistic reasons, Canadians preferred wooden aircraft and developed them with great success during World War II. And the Americans, under the pressure of World War II, developed modes of exchange between competing airframe manufacturers that fundamentally altered the character of the industry.

Hashimoto uses the International Trials of the early 1920s to demonstrate the differences in national research styles and practices and the difficulties involved in standardization. The Trials also revealed the parochialism of the British and contributed to their movement toward continental practice. Roland demonstrates the ways in which specific national experience in the United States differentiated the impact of patent practice from that in other countries. The introduction of a patent pool in 1917 was driven by the legal logjam surrounding the Wright patent. The government intervened to buy out the Wright interests and the interests of their leading competitor, Glenn Curtiss. The resulting patent pool lasted for 58 years and distinguished United States patent experience from that of any other nation. This history cries out for a comparative study of aircraft patenting experience in other nations to see what conclusions might be drawn about the impact of patents in general and the comparative efficacy of the American model.

Bilstein’s paper is the most self-consciously international and comparative. It both reinforces and challenges the general perception of aviation as a parochial and nationalistic technology. Bilstein notes, for example, that American aeronautical development really was different from that in other countries, a fact that no doubt helps to account for America’s remarkable domination of this industry for so many years. But Bilstein also notes that ideas and innovations from other countries were constantly finding their way to America, undermining the stereotypes of native American genius that have plagued the field since the remarkable achievement of the Wright brothers.

But Bilstein’s paper also helps to point up one of the great generalizations that may be applied to this quintessential twentieth-century technology. As the century has proceeded, the technology has become more universal and homogeneous, less parochial and nationalistic. Japan is licensed to produce a version of the American F-16. American airlines fly European-manufactured Airbuses. American aircraft manufacturers mount Rolls Royce engines on their planes. Virtually every large airplane in the world uses fundamentally the same landing gear. Airports, navigation, and ground support equipment the world over are becoming increasingly standardized. The differences between aircraft remain stark and obvious, and the variations from country to country continue to reflect idiosyncrasies of national style and infrastructure. But all this diversity persists in the midst of a general trend toward uniform and standardized technology. This, too, is a mark of the twentieth century.

Alex Roland
Peter Galison

NOTES

1 John H. White, American Locomotives (Baltimore: Johns Hopkins University Press, 1968), p. 115.

2 J. David Bolter, Turing’s Man: Western Culture in the Computer Age (Chapel Hill: University of North Carolina Press, 1984), p. 11.

3 Paul Edwards, The Closed World: Computers and the Politics of Discourse in Cold War America (Cambridge, MA: MIT Press, 1996).

4 Peter Fritzsche, A Nation of Fliers: German Aviation and the Popular Imagination (Cambridge, MA: Harvard University Press, 1992); Joseph J. Corn, The Winged Gospel: America’s Romance with Aviation, 1900-1950 (New York: Oxford University Press, 1983).

THE EARLY HISTORY OF TURBOFANS

Early Bypass Engines

The propulsion-efficiency advantage of turbofans had been recognized for nearly as long as turbojets themselves. In 1936, before he actually built a working turbojet, Frank Whittle patented a scheme to compress more air than was necessary for the turbine and to force it rearwards as a cold jet. Whittle wished to “gear down the jet” to make it more efficient, maintaining the overall mass-flow while reducing exhaust velocity.10 During the next decade, as Whittle developed the first successful turbojets, he patented several other configurations of bypass engines, including ones with a fan both fore and aft of the rest of the engine (he did not use the term ‘fan’ or ‘bypass’ for any of these designs).11
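
Whittle’s intuition about “gearing down the jet” maps onto the standard Froude propulsion-efficiency relation. The formula and symbols below are a textbook sketch, not taken from Whittle’s patent or from this chapter: $\dot m$ is the air mass flow, $V_0$ the flight speed, and $V_e$ the mean jet velocity.

\[
F = \dot m\,(V_e - V_0), \qquad
\eta_p = \frac{F\,V_0}{\tfrac{1}{2}\,\dot m\,(V_e^{2} - V_0^{2})} = \frac{2}{1 + V_e/V_0}.
\]

For a given thrust, moving more air more slowly pushes $V_e$ toward $V_0$ and $\eta_p$ toward one, which is the whole case for the bypass engine.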

Whittle was not alone among the British in putting preliminary designs of bypass engines on paper. A. A. Griffith of Rolls-Royce devised a multistage axial fan in 1941.12 Figure 4 shows a cutaway of a Metropolitan-Vickers turbofan engine from just after World War II in which the fan, consisting of two counter-rotating stages, is located downstream of the core engine, using its exhaust to drive turbine stages to which the fan blades are connected. Figure 5 shows a drawing of a De Havilland bypass engine in which the flow leaving the last stage of the axial compressor is split, with the outer portion bypassing the rest of the core engine and the inner portion proceeding on to a centrifugal compressor, then a combustor, and finally a pair of turbines. We have been unable to determine whether either of these engines was ever built and tested and, if they were, why they died in their infancy.13 Counter-rotating stages are notoriously difficult to make work in anything but the precise conditions for which they were designed – a significant drawback for an operational engine. The De Havilland engine, however, did not reach so far beyond the state of the art. From the perspective of hindsight, the main question about it is whether its compressors, combustor, and turbines performed well enough to provide the energy demanded by the bypass flow in its axial compressor.

Figure 4. Cutaway view of the Metropolitan-Vickers F-3 turbofan engine, late 1940s. The fan at the rear consists of two counter-rotating stages. [G. Geoffrey Smith, Gas Turbines and Jet Propulsion (London: Iliffe & Sons Ltd., 1955), p. 66.]

These two British engines call attention to one of the two fundamental problems in designing practical bypass engines capable of realizing their theoretical promise. For high-subsonic flight the bypass flow needs to be pressurized to a level around 1.5 times the inlet pressure. The De Havilland design employed 6 axial compressor stages to achieve the requisite pressure in the bypass stream. The Metro-Vick design used counter-rotating stages to try to achieve the requisite pressure in merely two stages, saving weight, but with the risk of being unable to coordinate the flow in the two stages. Thus, one fundamental problem in designing a bypass engine for high-subsonic flight was to achieve the needed pressure rise in the bypass stream without incurring an excessive weight penalty. The later successful turbofan engines shown in Figure 2 used an aerodynamic design technology that did not exist in the late 1940s.
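
A bit of arithmetic (ours, not the authors’) shows what was at stake in the stage count: with $n$ identical stages in series, each stage must supply the $n$-th root of the overall bypass-stream pressure ratio of roughly 1.5.

\[
\pi_{\text{stage}} = 1.5^{1/n}: \qquad n = 6 \;\Rightarrow\; \pi_{\text{stage}} \approx 1.07, \qquad n = 2 \;\Rightarrow\; \pi_{\text{stage}} \approx 1.22.
\]

The De Havilland arrangement thus asked very little of each of its six stages, at a cost in weight, while the Metro-Vick counter-rotating pair demanded a per-stage rise that was ambitious by late-1940s standards.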

The other fundamental problem in designing a bypass engine was the need for more powerful core engines. The greater the bypass airflow, the more energy that is needed to pressurize it. The core engine must generate this energy using only the air passing through it. Whether the overall engine is a turboprop, turbofan, or turbojet, its core engine consists of a gas generator that converts chemical into mechanical energy. One of the basic performance parameters of gas generators is called specific-power – the power produced per unit of airflow. The specific-power of the aircraft gas turbines of the late 1940s was low, limiting the amount of bypass airflow. As a consequence the most anyone could even hope to achieve in a bypass engine at the time was a small incremental gain over turboprops or turbojets.

Realizing the promise of bypass engines required core engines with significantly higher specific-power. Higher specific-power calls for higher overall engine compression ratios.14 In other words, to make a working bypass engine, the core engine compressor had to achieve markedly higher pressures than the engines of the late 1940s were able to do.
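
The link between specific-power and compression ratio can be sketched with the ideal Brayton-cycle relation; the idealization and the notation are ours, not the chapter’s. Writing $r$ for the overall pressure ratio, $T_1$ for the compressor-inlet temperature, and $T_3$ for the turbine-inlet temperature,

\[
w_s = \frac{P}{\dot m} = c_p T_3\!\left(1 - r^{-(\gamma-1)/\gamma}\right) - c_p T_1\!\left(r^{(\gamma-1)/\gamma} - 1\right).
\]

For a fixed turbine-inlet temperature, $w_s$ rises with $r$ until $r^{(\gamma-1)/\gamma} = \sqrt{T_3/T_1}$; the low compression ratios of late-1940s engines sat well below that optimum, which is the sense in which higher specific-power called for higher overall compression ratios.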

The State of Axial Fan and Compressor Technology

These two problems share the common demand of achieving a pressure-ratio, in the one case across a fan and in the other across a compressor. An axial fan stage, however, amounts to nothing but an axial compressor stage. An axial compressor stage consists of a row (or cascade) of rotating blades followed by a row of stationary blades, as shown schematically in Figure 6. Energy is added to the flow in the rotating blade row, while the stationary blade row redirects the flow and recovers the kinetic energy imparted by the rotor, in the process converting the velocity head into pressure. In contrast to a turbine stage, a compressor stage tries to make air do something that it does not want to do, namely flow against an opposing or adverse pressure gradient. The effects of the adverse pressure gradient ultimately limit the pressure-ratio that can be achieved in a single stage; above this limiting point, which varies from one airfoil type to another, irreversible thermodynamic losses become excessive. This is why the axial compressors in the engines shown in the earlier figures all had several consecutive stages. It is also why more than one stage was used to pressurize the bypass flow in both the De Havilland and the Metro-Vick engines.

Figure 6. Schematic of an axial compressor stage consisting of a row of rotor blades followed by a row of stators. Air flows from left to right. In velocity triangles, w designates air velocity relative to the rotor, c designates absolute air velocity, and U is the velocity of the rotor. [P. Hill and C. Peterson, Mechanics and Thermodynamics of Propulsion (Reading, Mass.: Addison-Wesley, 1965), p. 245.]
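
The stage behavior just described, and sketched in Figure 6, is conventionally summarized by the Euler turbomachinery relation; the formulas and symbols below are textbook conventions rather than anything drawn from the chapter. With $U$ the blade speed, $c_\theta$ the tangential component of the absolute velocity, and $\eta_{st}$ the stage efficiency,

\[
\Delta h_0 = U\,(c_{\theta 2} - c_{\theta 1}), \qquad
\frac{p_{02}}{p_{01}} = \left(1 + \eta_{st}\,\frac{\Delta h_0}{c_p T_{01}}\right)^{\gamma/(\gamma-1)}.
\]

The stagnation-enthalpy rise, and hence the stage pressure-ratio, is set by how much the rotor can turn the flow, and it is precisely the adverse pressure gradient on the blade surfaces that limits that turning in a single stage.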

Despite its critical role, axial compressor technology was in its infancy in the 1940s. Not only was the pressure-ratio that could be achieved in any one stage quite low, but also “many early axial compressors worked more as stirring devices”15 instead of achieving compression. Although a base point in axial compressor design technology had emerged in 1945, lending some rationality to the design process, designers remained restricted in the performance demands they could place on the compressor.16 These restrictions in turn limited the performance that one might hope to achieve in a bypass engine by limiting both the pressure-ratio per stage achievable in a fan and the specific-power achievable in the core engine. These same restrictions were limiting the performance of turbojet engines as well. The military was rapidly converting to turbojet-powered aircraft in the late 1940s, with increasing emphasis on supersonic flight. Turbojets for supersonic flight required higher specific-power than the turbojets already flying were achieving.

Because of its role in dictating performance limitations, no component received more research and development effort between 1945 and 1955 than the axial compressor. This effort had three goals: (1) to achieve considerably higher overall compressor pressure-ratios at high thermodynamic efficiency; (2) to increase the predictability of axial compressors, especially at off-design operating conditions, so that fewer compressor designs would turn out to be unacceptable on test; and (3) to increase the pressure-ratio achievable in a single stage so that higher overall compression ratios could be achieved without exacting a penalty in the thrust-to-weight ratio of the engine.17 Although most of this research and development effort was applicable to fans as much as to compressors, the turbofan engine largely disappeared from view during these years. R&D funds went into developing better turbojets, not into transforming the promise of the bypass concept into successful engines.

From the perspective of hindsight, however, this was appropriate even from the point of view of the bypass engine, for the gains that were achieved in gas generator performance in the late 1940s and early 1950s ended up contributing crucially to the first turbofan engines to enter flight service. Moreover, as we shall see below, the advances that were made in compressor aerodynamic design technology during these same years contributed no less crucially to the aerodynamic design of the fans of these engines.

A New Conception of Progress

With these questions in mind, we ask, why did the turbofan engine, once it emerged, so totally dominate commercial aviation? P&W’s JT8D low-bypass turbofan engine, which went into service in 1964, is still powering Douglas’s DC-9 and Boeing’s 727 and 737. High-bypass turbofans, like P&W’s JT9D, GE’s CF6, and Rolls-Royce’s RB.211, have powered virtually all wide-body aircraft since the late 1960s. (The high-bypass turbofans required once more the same sort of steps in core-engine specific-power and fan tip Mach number as the initial low-bypass engines had required, and hence they need a separate analysis.85) The economics of the turbofan engine helped shape commercial jet aviation and stabilize it technologically and economically, putting air travel within the reach of a much larger segment of the public than it would otherwise have been. In other words, the turbofan engine has dominated high-subsonic flight because these two were mutually constitutive and emerged in parallel. Until the latter became important, the former did not make sense, technically or economically.

The turbofan responded to the decline of the notion that commercial jet flight would continually progress along the axis of speed. Much of the technology underlying turbofans had developed for entirely different purposes. Compressors received a great deal of attention in both industry and government, but none of that effort specifically sought a turbofan; it focused on turbojets. Immediately after World War II it seemed obvious that the continued progress of commercial flight would move, like military flight, toward higher and higher speeds. The only real customer for aircraft gas turbine engines before the mid-1950s, especially in the U. S., was the military, and they rightly pursued speed, and hence supersonic flight, above all else. More than a decade of supersonic flight and jet engines were required before it became clear that commercial air travel would follow many pathways, but increasing speed would not be one of them. Until the late 1950s, engineers simply did not see high-subsonic flight as a technical, or commercial, frontier. (Military flight leveled in speed as well: the aircraft that holds the world speed record, even today, was developed in the years just before and after 1960.) High-subsonic jet flight emerged as a dominant category, and ever increasing speed declined in importance as a category of problems, simultaneously with GE’s and P&W’s efforts to develop turbofan engines. Progress scarcely came to an end at this point, however.

The turbofan episode illustrates a dramatic, yet subtle shifting, we might even say a turning, in the parameters of progress in the narrative of aviation. The ever-increasing advance of the raw, physical parameter of speed ended in the 1950s, as commercial aviation settled into the high-subsonic regime. As an indicator of this shift, consider the proliferation of performance parameters in this story: stage pressure-ratio, thrust-to-weight ratio, propulsion efficiency, specific fuel consumption, cost per passenger mile. Significant progress was made in each of these measures with the emergence of the turbofan and in the years since, but they are less visible to the naked eye, less viscerally physical than speed. (An engineer, though, might argue that thrust-to-weight ratio is as “natural” a physical parameter as Newtonian mass and velocity.) Today’s airliners, to the untrained eye, look much like the 707 of four decades ago; for comparison, consider that forty years before the 707 were the biplanes of World War I. Of course, appearance is misleading. Stability in configuration masks substantial changes in engines (as we have shown), as well as in wing design, materials, control systems, and numerous other systems. Hence, the progress narrative in commercial aviation remains, but embedded in newer, seemingly more artificial measures that define “success” for advanced technologies, measures that embody social assumptions in machinery. The significance of the turbofan engine, and its intricate history, derives from this turning: from outward parameters of physics to internal parameters of systems.

This turning is evident not only in the broad parameters which evaluate aircraft performance, but also in the fine-grained texture of engineering practice. Engineers, in the story we have told, relied heavily on non-dimensional parameters of performance. Vincenti has characterized such “dimensionless groups” as useful in relating the performance of models to the performance of working prototypes.86 We here identify two additional categories of such parameters. One, typified by the diffusion factor, provided independent variables for empirical correlations. Such parameters enable a great deal of complexity to be digested into a form that allows designers to interpolate and extrapolate reliably from past experience. Another category consisted of performance parameters like the pressure-ratio and efficiency of compressor and fan stages and the thrust-to-weight ratio and specific fuel consumption of engines. These parameters provide a generic way of characterizing the state of the art and advances in it; by decoupling issues of performance from issues of implementation, they allow such thoroughly different approaches to turbofan design as GE’s and P&W’s to be meaningfully compared. One way in which engineering research has contributed to the turbofan has been through identifying and honing parameters that enable past successes to be repeated and that open the way to processes of continuous improvement.
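
For concreteness, the diffusion factor mentioned above is commonly written in the compressor literature (our rendering, with standard symbols) as

\[
D = 1 - \frac{V_2}{V_1} + \frac{\lvert \Delta v_\theta \rvert}{2\,\sigma\,V_1},
\]

where $V_1$ and $V_2$ are the relative velocities into and out of the blade row, $\Delta v_\theta$ the change in tangential velocity, and $\sigma$ the solidity (blade chord divided by spacing). Its usefulness is exactly what the passage describes: a single dimensionless number against which loss and stall data from many different blade rows can be correlated.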

CONTROVERSY OVER SCALE EFFECT

During World War I, many scientists, including science students, were mobilized for weapons development. While Ernest Rutherford and other physicists were engaged in devising a submarine detection system, many Cambridge scholars gathered at the Royal Aircraft Factory to assist in the development of the airplane. George P. Thomson, the son of J. J. Thomson; Francis Aston, the inventor of the mass spectrograph; Geoffrey I. Taylor, a specialist in fluid mechanics; and other excellent students or fresh graduates, including Hermann Glauert and William S. Farren, participated in the war work. The Factory in Farnborough thus became another prominent center of aeronautical research in Britain.

Farnborough approached aeronautical problems differently from the NPL. Whereas the NPL relied on wind tunnel experiments using small-scale airplane models, the Royal Aircraft Factory performed test flights of full-scale aircraft. For example, Oxford physicist Frederick Lindemann performed dangerous spinning flights and his data were analyzed by G. P. Thomson. The Factory’s primary function was to construct full-scale airplanes and conduct flight tests on them. Cambridge scientists collaborated closely with pilots and aircraft designers in their aeronautical investigations.

Through a number of full-scale flight tests, Factory investigators became aware of discrepancies between model tests and corresponding full-scale tests. They prepared a preliminary report noting the differences in terms of values of drag and lift of the airplane.9 To discuss the problem, a subcommittee was formed in 1917 including among its members representatives from the NPL and the Factory.10 Its official name was the “Scale Effect” subcommittee. The term “scale effect” was enclosed in quotation marks, suggesting that its significance was a matter in question.

A vehement debate arose at the first meeting of the subcommittee. Bairstow, the advocate of model experiments, argued against the Factory conclusion that the discrepancies between the measurements achieved by the two methods were attributable to scale effect. In his report, he referred to various causes of error other than scale effect, including errors in full-scale tests themselves. He even pointed out that a previous Factory report was “illogical” because it neglected the effect of interference on airplane drag. He also mentioned French aeronautical research in which model tests at Eiffel’s laboratory and full-scale tests at St. Cyr showed fairly good correspondence.11

The subcommittee considered a variety of causes for the discrepancies, examining each cause extensively. For example, the full-scale measurement of the drag of an airplane depended on the value of the power of its engine and the efficiency of its propeller. The suggestion was raised at one meeting that the power of the engine measured during flight would be different from that measured on the ground.12 In this case, it appeared that the pressure distribution should be measured by both full-scale and model methods.
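
The dependence the subcommittee worried about can be made explicit with the standard level-flight balance; the formula is ours, not the subcommittee’s. In steady, unaccelerated flight thrust equals drag, so with engine shaft power $P$, propeller efficiency $\eta_{p}$, and flight speed $V$,

\[
D = T = \frac{\eta_{p}\,P}{V}.
\]

Any error in the power actually delivered in flight, or in the assumed propeller efficiency, therefore feeds directly into the full-scale drag inferred from the test.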

Among the causes of errors investigated by the subcommittee, the most notable was the effect of the propeller on full-scale data. To investigate this effect, it was suggested in 1917 that full-scale tests should be made while the airplane was gliding with its engine stopped. Farren at Farnborough objected that airplanes suitable for such gliding tests were no longer available there, having all been sent to the front. Bairstow observed that the Factory should always be able to secure airplanes for experiments.13

Through the discussions and investigations, the subcommittee reduced the original differences between the two sets of test results. Yet, subcommittee members remained divided in their conclusions regarding scale effect. In preparing the subcommittee’s final report, Bairstow insisted that scale effect was not a significant factor. When a Factory report on the collection of full-scale data was circulated among subcommittee members, Bairstow severely criticized the report. Although it did not explicitly refer to the unreliability of model tests, it listed full-scale data as necessary and sufficient for the calculation of drag. This form of presentation, Bairstow contended, would leave the impression that model test results could not be readily applied to full-scale planes. He suggested that the subcommittee should take some steps to correct the “wrong” impression created by the Factory report.14 The subcommittee’s final report therefore carried a statement on the usefulness of model tests, virtually neglecting the significance of the scale effect. Bairstow was willing to support the publication of the complete data only if the final report explicitly stated that observed differences had not been found to be due to scale effect.15

The different positions on the scale effect taken by the Factory and the NPL investigators reflected the different research strategies pursued at the two research facilities. The NPL concentrated on model testing in wind tunnels only, whereas the Factory’s main focus was in full-scale testing using its own planes. Bairstow was apparently afraid that invalidation of the model test results would seriously undermine the significance of aerodynamic investigations in which he had been engaged while at the NPL.

AN ACCIDENT OF HISTORY

We regularly ask after the limits of historical inquiry; we agonize over the right combination of psychological, sociological, and technical explanations. We struggle over how to combine the behavior of machines and practices of their users. Imagine, for a moment, that there was a nearly punctiform scientific-technological event that took place in the very recent past for which an historical understanding was so important that the full resources of the American government bore down upon it. Picture further that every private and public word spoken by the principal actors had been recorded, and that their every significant physical movement had been inscribed on tape. Count on the fact that lives were lost or jeopardized in the hundreds, and that thousands of others might be in the not so distant future. Expect that the solvency of some of the largest industries in the United States was on the line through a billion dollars in liability coverage that would ride, to no small extent, on the causal account given in that history. What form, we can ask, would this high-stakes history take? And what might an inquiry into such histories tell us about the project of – and limits to – historical inquiry more generally, as it is directed to the sphere of science and technology?

There are such events and such histories – the unimaginably violent, destructive, and costly crash of a major passenger-carrying airplane. We can ask: What is the concept of history embedded in the accident investigation that begins while crushed aluminum is still smoldering? Beginning with the Civil Aeronautics Act of 1938, the Civil Aeronautics Authority (a portion of which became today’s National Transportation Safety Board) and its successors have been assigned the task of reporting on each accident, determining what happened, producing a “probable cause” and arriving at recommendations to what is now the Federal Aviation Administration (and through them to industry and government) that would avoid repetition. Quite deliberately, the NTSB report conclusions were disqualified from being used in court: the investigative process was designed to have some freedom both from the FAA and from the courts. Since its establishment, the system of inquiry has evolved in ways I will discuss, but over the last half century there are certain elements that remain basically constant. From these consistencies, and from the training program and manuals of investigation, I believe we can understand the guiding historiographical principles that underlie these extraordinary inquiries. What they say – and do not say – can tell us about the broad system of aviation, its interconnectedness and vulnerabilities, but also, perhaps, something larger about the reconstruction of the intertwined human and machinic world as it slips into the past.


There is a wide literature that aims to re-explain aviation accidents. Such efforts are not my interest here. Instead, I want to explore the form of historical explanation realized in the accident reports. In particular, I will focus on a cluster of closely related instabilities, by which I mean unresolvable tensions between competing norms of explanation. Above all, the reports are pulled at one and the same time towards localizing accounts (causal chains that end at particular sites with a critical action) and towards diffusing accounts (causal chains that spread out to human interactions and organizational cultures). Along the way, two other instabilities will emerge: first, a sharp tension between an insistence on the necessity of following protocol and a simultaneous commitment to the necessary exercise of protocol-defying judgment. Second, there is a recurrent strain between a drive to ascribe final causation to human factors and an equally powerful, countervailing drive to assign agency to technological factors. To approach these and related questions, one needs sources beyond the reports alone. And here an old legislative stricture proves of enormous importance: for each case the NTSB investigates, it is possible to see the background documentation, sometimes amounting to many thousands of pages. From this “docket” emerge transcripts of the background material used to assemble the reports themselves: recordings and data from the flight, metallurgical studies, interviews, psychological analyses. But enough preliminaries. Our first narrative begins in Washington, DC, on a cold Wednesday afternoon, January 13, 1982.

The accident report opened its account at Washington National Airport. Snow was falling so hard that, by 1338, the airport had to shut down for 15 minutes of clearing. At 1359, Air Florida Flight 90, a Boeing 737-222 carrying 5 crewmembers and 74 passengers, requested and received its Instrument Flight Rules clearance. Twenty minutes later, a tug began de-icing the left side of the plane, then halted because of further departure delays. With the left side of the aircraft cleared, a relief operator replaced the initial one and resumed the spraying of heated glycol-water mixture on the right side. By 1510, the relief operator finished with a final coat of glycol, inspected the plane’s engine intakes and landing gear, and found all surfaces clear of snow and ice. With the plane stuck in the snow, the Captain blasted the engines in reverse for about a minute in a vain effort to free it from its deepening prison of water, glycol, ice, and snow. With a new tug in place, the ground crew successfully pulled flight 90 out of the gate at 1535. Planes were backed up in holding patterns up and down the East Coast as they waited for landing clearance. Taxiways jammed: flight 90 was seventeenth in line for takeoff.

When accident investigators dissected the water-soaked, fuel-encrusted cockpit voice recorder (CVR), here is what they transcribed from time code 1538:06 forward. We are in the midst of the crew's "after start" checklist. Captain Larry Michael Wheaton, a 34-year-old captain for Air Florida, speaks first on CAM-1. The first officer is Roger Alan Pettit, a 31-year-old former Air Force fighter pilot; he is on CAM-2.

1538:06 Wheaton/CAM-1 {my insertions in curly brackets} After start

Pettit/CAM-2 Electrical

Wheaton/CAM-1 Generators

Pettit/CAM-2 Pitot heat {heater for the ram air intake that measures airspeed}

Wheaton/CAM-1 On

Pettit/CAM-2 Anti-ice

Wheaton/CAM-1 {here, because some of the listeners heard "on" and the majority "off", the tape was sent to FBI Technical Services Division where the word was judged to be "off".} Off.

Pettit/CAM-2 Air conditioning pressurization

Wheaton/CAM-1 Packs on flight

Pettit/CAM-2 APU {Auxiliary Power Unit}

Wheaton/CAM-1 Running

Pettit/CAM-2 Start levers

Wheaton/CAM-1 Idle [ … ]

Preparation for flight includes these and many other checklist items, each conducted in a format in which the first officer Pettit “challenges” captain Wheaton, who then responds. Throughout this routine, however, the severe weather commanded the flightcrew’s attention more than once as they sat on the taxiway. In the reportorial language of the investigators’ descriptive sections, the following excerpt illustrates the flight crew’s continuing concern about the accumulating ice, snow and slush, as they followed close behind another jet:

At 1540:42, the first officer continued to say, “It’s been a while since we’ve been deiced.” At 1546:21, the captain said, “Tell you what, my windshield will be deiced, don’t know about my wings.” The first officer then commented, “well – all we need is the inside of the wings anyway, the wingtips are gonna speed up on eighty anyway, they’ll shuck all that other stuff.” At 1547:32, the captain commented, “(Gonna) get your wing now.” Five seconds later, the first officer asked, “D’they get yours? Did they get your wingtip over ’er’?” The captain replied, “I got a little on mine.” The first officer then said, “A little, this one’s got about a quarter to half an inch on it all the way.”1

Then, just a little later, the report on voice recordings indicates:

At 1548:59, the first officer asked, “See this difference in that left engine and right one?” The captain replied, “Yeah.” The first officer then commented, “I don’t know why that’s different – less it’s hot air going into that right one, that must be it – from his exhaust – it was doing that at the chocks awhile ago but, ah.”

Which instrument exactly the first officer had in mind is not clear; the NTSB (for reasons that will become apparent shortly) later argued that he was attentive to the fact that, despite similar Engine Pressure Ratios (the ratio of the pressure at the exhaust to that at the intake of the jet and therefore a primary measure of thrust), there was a difference in the readout of the other engine instruments. These others are the N1 and N2 gauges – displaying the percent of maximum rpm of the low- and high-pressure compressors respectively – the Exhaust Gas Temperature gauge (EGT), and the fuel flow gauge, which reads in pounds per minute. Apparently satisfied with the first officer's explanation that there was hot air entering the right engine from the preceding plane, and that somehow this was responsible for the left-right discrepancy, the captain and first officer dropped the topic. But ice and snow continued to accumulate on the wings, as was evident from the cockpit voice recorder tape recorded four minutes later. To understand the first officer's intervention at 1558:12, you need to know that the "bugs" are hand-set indicators on the airspeed gauge; the first corresponds to V1, the "decision speed" above which the plane has enough speed to accelerate safely to flight on one engine and below which it can (theoretically) be stopped on the runway. The second speed is VR, the rotation speed at which the nosewheel is pulled off the ground, and the third, V2, is the optimal climbout speed during the initial ascent, a speed set by pitching the plane to a pre-set angle (here 18°).

1553:21 Pettit/CAM-2 Boy, this is a losing battle here on trying to deice those things, it (gives) you a false sense of security that’s all that does

Wheaton/CAM-1 That, ah, satisfied the Feds

Pettit/CAM-2 Yeah

1558:10 Pettit/CAM-2 EPR all the way two oh four {Engine Pressure Ratio, explained below}

1558:12 Pettit/CAM-2 Indicated airspeed bugs are a thirty-eight, forty, forty four

Wheaton/CAM-1 Set

1558:21 Pettit/CAM-2 Cockpit door

1558:22 Wheaton/CAM-1 Locked

1558:23 Pettit/CAM-2 Takeoff briefing

1558:25 Wheaton/CAM-1 Air Florida standard

1558:26 Pettit/CAM-2 Slushy runway, do you want me to do anything special for this or just go for it?

1558:31 Wheaton/CAM-1 Unless you got anything special you'd like to do

1558:33 Pettit/CAM-2 Unless just takeoff the nose wheel early like a soft field takeoff or something

1558:37 Pettit/CAM-2 I'll take the nose wheel off and then we'll let it fly off

1558:39 Pettit/CAM-2 Be out of three two six, climbing to five, I’ll pull it back to about one point five five supposed to be about one six depending on how scared we are.

1558:45 (Laughter)

As on most flights, the captain and first officer were alternating as "pilot flying"; on this leg the first officer had the airplane. For most purposes, and there are significant exceptions, the two essentially switch roles when the captain is the pilot not flying. In the above remarks, the first officer was verifying that he would treat the slushy runway as one typically does any "soft field" – the control wheel is pulled back to keep weight off the front wheel, and as soon as the plane produces enough lift to keep the nosewheel off the runway, it is allowed to do so. His next remark restated that the departure plan called for a heading of 326 degrees magnetic, that their first altitude assignment was 5,000 feet, and that he expected to throttle back from the takeoff thrust (EPR) setting of 2.04 to a climb setting of between 1.55 and 1.6. Takeoff clearance came forty seconds later, with the urgent injunction "no delay." There was another incoming jet two and a half miles out heading for the same runway. Flight 90's engines spooled up, and the 737 began its ground roll down runway 36. As before, the curly brackets indicate text I have added to the transcript.

1559:54 {Voice identification unclear} CAM-? Real cold here

1559:55 Pettit/CAM-2 Got 'em?

1559:56 Wheaton/CAM-1 Real cold

1559:57 Wheaton/CAM-1 Real cold

1559:58 Pettit/CAM-2 God, look at that thing

1600:02 Pettit/CAM-2 That doesn't seem right does it?

1600:05 Pettit/CAM-2 Ah, that's not right

1600:07 Pettit/CAM-2 (Well) –

1600:09 Wheaton/CAM-1 Yes it is, there’s eighty {knots indicated airspeed}

1600:10 Pettit/CAM-2 Naw, I don’t think that’s right

1600:19 Pettit/CAM-2 Ah, maybe it is

1600:21 Wheaton/CAM-1 Hundred and twenty

1600:23 Pettit/CAM-2 I don’t know

1600:31 Wheaton/CAM-1 Vee one

1600:33 Wheaton/CAM-1 Easy

1600:37 Wheaton/CAM-1 Vee two

1600:39 CAM (Sound of stickshaker starts and continues to impact)

1600:45 Wheaton/CAM-1 Forward, forward {presumably the plane is over-rotating to too high a pitch attitude}

1600:47 CAM-? Easy

1600:48 Wheaton/CAM-1 We only want five hundred {feet per minute climb}

1600:50 Wheaton/CAM-1 Come on, forward

1600:53 Wheaton/CAM-1 Forward

1600:55 Wheaton/CAM-1 Just barely climb

1600:59 Pettit/CAM-2 (Stalling) we're (falling)

1601:00 Pettit/CAM-2 Larry we're going down, Larry

1601:01 Wheaton/CAM-1 I know it

1601:01 ((Sound of impact))

The aircraft struck rush-hour traffic on the Fourteenth Street Bridge, hitting six occupied automobiles and a boom truck, ripping away a 41-foot section of the bridge wall along with 97 feet of railings. The tail section pitched up, throwing the cockpit down towards the river. Torn to pieces by the impact, the airplane ripped and buckled, sending seats into each other amidst the collapsing structure. According to pathologists cited in the NTSB report, seventy passengers, among them three infants, and four crewmembers were fatally injured; seventeen passengers were incapacitated by the crash and could not escape.2 Four people in vehicles died immediately of impact-induced injuries as cars were spun across the bridge. Only the tail section of the plane remained relatively intact, and from it six people were plunged into the 34-degree, ice-covered Potomac. The one surviving flight attendant, her hands immobilized by the cold, managed to chew open a plastic bag containing a flotation device and give it to the most seriously injured passenger. Twenty minutes later, a Parks Department helicopter arrived at the scene and rescued four of the five survivors; a bystander swam out to rescue the fifth.3


Figure 1. Flightpath. Source: National Transportation Safety Board, Aircraft Accident Report, Air Florida, Inc., Boeing 737-222, N62AF, Collision with 14th Street Bridge, near Washington National Airport, Washington, D.C., January 13, 1982, p. 7, figure 1. Hereafter, NTSB-90.

THE EVOLUTION OF THE TURBOJET ENGINE: 1945-1956

The first turbojet engines to achieve truly high performance (even by today's standards) emerged at the end of the 1940s and in the early 1950s. These engines required several advances in technology, including better alloys, especially for blading, and higher turbine inlet temperatures. The most important advance, however, was to raise the overall compressor pressure-ratio from around 5 to 1 – as in General Electric's J-47, the engine with by far the most flight hours as of 1952 – to more than 10 to 1. Because the average pressure-ratio per stage in axial compressors was then limited to around 1.15, this meant many more stages. It also meant a much smaller annulus area for the flow in the rear stages than in the forward stages. This reduction in annulus area posed a new, fundamental problem in compressor design. When the rotational speed of the compressor was low, the front stages would not compress the flow enough to pass through the smaller annuli of the rear stages, causing these stages to stall and the compressor to go into a violent instability called surge. Consequently, some special provision was needed to enable the engine even to sustain operation at off-design conditions. A second factor exacerbated this problem. As the flow acquires a tangential component of velocity in a stage, a centrifugal force arises in it. A radial pressure gradient balances this force, resulting in radial equilibrium. Unless this pressure gradient is accounted for carefully in design, the flow can become so radially "distorted" by the time it reaches the rear stages that they are forced to operate far off the incidence angles for which they were intended and hence with high thermodynamic losses.
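To make the stage-count arithmetic concrete, here is a minimal sketch in Python of the compounding involved. The per-stage ratio of 1.15 and the overall ratios are the ones quoted in this section; the engine attributions in the comments are back-of-the-envelope illustrations, not design data.

import math

def stages_needed(overall_ratio, per_stage_ratio=1.15):
    """Smallest whole number of stages whose compounded ratio meets the target."""
    return math.ceil(math.log(overall_ratio) / math.log(per_stage_ratio))

def average_stage_ratio(overall_ratio, n_stages):
    """Geometric-mean pressure-ratio per stage for a given overall ratio."""
    return overall_ratio ** (1.0 / n_stages)

print(stages_needed(5))               # about 12 stages for a 5:1 compressor
print(stages_needed(10))              # about 17 stages for a 10:1 compressor
print(average_stage_ratio(12.5, 16))  # about 1.17, the J-57 figure cited below

Because the ratios compound geometrically, the jump from 5:1 to better than 10:1 roughly doubles the stage count, which is why the annulus-area and stage-matching problems described above became unavoidable.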

Accordingly, in order to design high pressure-ratio, multistage compressors, the engine companies had to find a solution to the problem of matching the rear stages with the front stages at off-design as well as at design operating conditions. The three engine companies that emerged as dominant in the U.S. and Britain by the early 1960s – Pratt & Whitney Aircraft, General Electric, and Rolls-Royce – solved this problem in three different ways.18

Pratt & Whitney – The Two-Spool Engine

Pratt & Whitney, in spite of decades of experience with reciprocating aircraft engines, entered the turbojet business well behind GE and Rolls-Royce. During the late 1940s P&W invested heavily in jet engine technology, including extensive in-house tests of the performance of compressor airfoil profiles in cascade at off-design incidence angles. P&W received a study contract to design a high-thrust engine for a strategic bomber in 1947.19 They decided that the best way to achieve the requisite compressor pressure-ratio was, in effect, to divide the compressor into two separate compressors, powered by two separate turbines, turning at different speeds. This arrangement (displayed in Figure 1) is called a two-spool engine, with the front compressor serving as a low-pressure compressor and the rear one as a high-pressure compressor. At off-design conditions the low-pressure spool rotates at much lower speed than the high-pressure spool; as a consequence the low-pressure compressor passes less flow into the high-pressure compressor at these conditions. Individually, each of the two compressors requires a comparatively modest number of stages, so that the cumulative effects of radial equilibrium on the back stages of each spool are not that severe.

The two-spool engine P&W designed under its 1947 study contract became the J-57, powering the B-52 bomber, among other aircraft. It was a remarkable engine by any standards, all the more so considering that it was designed between 1947 and 1949, essentially using slide rule methods. The initial version was a 10,000-pound-thrust engine for subsonic flight; with afterburner added, it produced 15,000 pounds of thrust for low supersonic flight. It had an overall compressor pressure-ratio of 12.5 to 1, achieved in a 9-stage low-pressure compressor and a 7-stage high-pressure compressor (for an average pressure-ratio of 1.17 per stage). The J-57 went into service in 1953. Using basically the same design approach, P&W designed a somewhat larger two-spool engine in the early 1950s, the J-75, for Mach 2 flight.20

Working Around Ignorance

The presence of such parameters points to another layer of engineering knowledge, or lack thereof. A striking feature of this episode is the extent to which the design process revolved around ignorance – more precisely, the recognition of ignorance and ways of compensating for and safeguarding against it. No one working on axial compressors and fans in that era knew what the flow inside a blade row was at any level of detail. It was not just that they could not calculate the detailed flow; they could not even measure it inside the rotating blade rows – only at their inlet and outlet. Rolls-Royce's way of dealing with this in the case of the bypass flow in the Conway was to use several stages with standard subsonic, low pressure-ratio airfoils whose "black-box" performance had been established empirically. Even though Pratt & Whitney knew that General Electric had achieved a 1.6 pressure-ratio in a single stage, they recognized that they did not know how to do this and opted for two stages. They too used pre-defined, pre-tested airfoils – in their case double-circular-arc airfoils that could be pushed to inlet Mach numbers of 1.15 and a little above. The boundaries of ignorance within P&W had been pushed back somewhat by the mid-1950s compared with those of Rolls-Royce two or three years earlier, but these boundaries still dictated the design.

The boundaries of GE's ignorance had been pushed even further back, yet most of their design effort was still aimed primarily at compensating for what they did not know. They did not know how to control the effects of shocks, but they recognized that they could get away with not knowing this if they limited the tip Mach number to 1.25, safely below the 1.35 level where the losses had jumped in Klapproth's NACA rotor. GE had no way of knowing the complicated three-dimensional flow inside their rotor blade row, but they knew they could get away with this so long as their calculated radial and axial velocity distributions were sufficiently similar in key respects to those of conventional airfoils and their diffusion factors remained below the established limiting values. The novel computer program they devised, besides giving them information about the velocity distributions, allowed them to work backwards from these distributions to plausible blade contours. Even so, as their tests showed, the actual flow departed non-trivially from their calculation. Yet they came sufficiently close to the actual flow in crucial respects, most notably the diffusion factor, to achieve a breakthrough in stage performance.

A related point about dealing with ignorance holds for the NACA compressor research program. Its aim was not one of obtaining detailed knowledge of the three-dimensional flow inside a blade row and how to control it. Rather, the aim was to find ways of achieving both consistent and superior designs without having to know the detailed flow. The cascade wind-tunnel tests gave black-box performance of two-dimensional airfoils, and the NACA design method provided ways of compensating for radial effects in using this two-dimensional performance. The transonic research program searched for ways of pushing the boundaries of ignorance back a little, and the supersonic program explored the possibility of pushing them back dramatically. The most striking example of compensating for ignorance, however, is the diffusion factor. The whole idea behind it was to employ quantities that could be measured, at the inlet and outlet of blade rows, to provide an approximation to a feature of the flow inside the blade row that generally could not be measured or calculated with confidence. The diffusion factor enabled higher pressure-ratio stages to be pursued without having to know more about the flow inside the blade row. The rule of thumb it gave for limiting blade loading defined a boundary of ignorance. Reasonable stages could be designed without mastery of the detailed flow inside the blade rows so long as the diffusion factor remained below its empirically determined critical value and velocity distributions did not depart radically from those of the past. The correlation of airfoil profile losses with the diffusion factor, together with a subsequently developed NACA model for calculating shock losses at higher Mach numbers87, allowed designers in the engine companies to live with their ignorance. Engineers do not need to know why something works so long as they know how to stay safely within the bounds of their ignorance and still produce competitive designs.
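The diffusion factor itself, in the standard NACA (Lieblein) form, is built only from quantities measurable at the inlet and outlet of a blade row, which is exactly the compensating move described above. The sketch below states that form; the numerical values are invented for illustration and are not drawn from any of the designs discussed here.

# Lieblein's diffusion factor: an inlet/outlet surrogate for the unmeasurable
# flow inside the blade row. v1, v2 are inlet and outlet relative velocities,
# dv_theta the change in tangential velocity, and solidity the chord-to-spacing
# ratio of the row. All numbers below are invented for illustration.

def diffusion_factor(v1, v2, dv_theta, solidity):
    """D = 1 - V2/V1 + |delta V_theta| / (2 * solidity * V1)"""
    return 1.0 - v2 / v1 + abs(dv_theta) / (2.0 * solidity * v1)

# A moderately loaded row: 20 percent velocity diffusion, modest turning.
print(diffusion_factor(v1=250.0, v2=200.0, dv_theta=90.0, solidity=1.0))  # about 0.38

A designer could keep such a number below the empirically established limiting value without ever resolving the flow between the blades, which is precisely the sense in which the diffusion factor marked, rather than removed, a boundary of ignorance.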

These practices differ markedly from Vincenti's characterization of uncertainty in engineering, where engineers usually "did not know as much as they thought they did," and sometimes "didn't even know what they didn't know."88 In our case, engineers knew rather accurately what they did not know and hence endeavored specifically to work around the boundaries of their ignorance. Nonetheless, their work is well described by Vincenti's observation that such work often serves "to free engineering from the limitations of science."89 Although physicists had established the equations of motion for fluid flow more than a century earlier, these equations remained intractable even for flows enormously simpler than those in compressor and fan stages. Engineers could turn to physics for simplified, approximate reformulations of these equations, but engineering judgment then became crucial in deciding which features of the flow could be ignored or represented grossly by an empirically based model.90 Experiments could be carried out in wind tunnels and measurements could be made on full stages, but again judgment and ingenuity were indispensable in drawing conclusions from data that designers could use. The shortage of science fostered an engineering practice epitomized by the following recommendation, made not in the early 1950s, but in 1978: "No compressor designer should overlook the possibility or underestimate the advantages of scaling an existing compressor geometry of known performance to meet his current design goals."91 Even when existing designs could not be so used, they served as starting points for incremental advances. The continuous improvement achieved in axial fan and compressor design in the period covered in this paper, and subsequently, has not come from being better able to exploit scientific knowledge of fluid flow, but rather from sophisticated aspects of engineering practice aimed at defining, surmounting, and hence shifting, boundaries of ignorance.

We have shown how the development of turbofan engines, a technology with significant technical preconditions and precedents, emerged out of a disparate but rich set of experiments and designs, working with knowledge of fluid flow very close to its boundaries of uncertainty. How well do the historical phenomena in this analysis apply to engineering epistemology in general? The question can be reformulated: is the role of uncertainty in engineering design exaggerated when one examines cutting-edge aerodynamics, where the physics of turbulence, that paradigm of poorly understood phenomena, so dominates? Isn't design in other contexts a more "certain" endeavor? Anecdotal evidence suggests otherwise. A prominent computer scientist and algorithm designer, when recently asked this question, responded emphatically in the negative. Any number of parameters in computer systems, from network behavior to algorithmic complexity, display similar phenomena. We understand, after all, how any individual Newtonian air particle behaves, just as we understand individual transistors. Their sum totals, however, exhibit behaviors currently beyond "the limitations of science." It is at this boundary, we argue, literally at the border of complexity, that engineering begins.

THE AEROPLANE OF 1930

Bairstow's coercive behavior in the subcommittee meetings reappeared after the war. A special meeting was held in early 1921 to formulate a postwar research program under the new Aeronautical Research Committee (ARC), the successor of the Advisory Committee for Aeronautics.16 Called "The Discussion of the Aeroplane of 1930," this unique event aimed at identifying the most important fields of investigation for designing future airplanes. While various conflicting issues emerged, the discussion chiefly focused on establishing the priority of two different research programs: one concentrating on the production of more stable and controllable airplanes and the other directed towards designing faster planes by reducing head resistance. The two programs were advocated by Bairstow and B. Melvill Jones respectively, Bairstow holding the new chair in aeronautics at the Imperial College of Science and Technology and Jones the corresponding chair at Cambridge University.

The meeting of 1921 arose from an idea of Henry R. M. Brooke-Popham, Director of Research of the Air Ministry, who asked Henry Tizard to explore “the most important lines of research which might be expected to lead up to the 1930 aircraft.”17 While preparing his own article on the topic, Tizard asked for opinions on this question from leading aeronautical engineers in Britain. Through the Secretary of the Aeronautical Research Committee, a letter was sent to these engineers in November of 1920, requesting comments and suggestions on Tizard’s article.

About ten aeronautical engineers responded, from the military, industry, and academia. Their answers conformed roughly to the format of Tizard’s questions. The report returned by Jones contained specific research proposals and a methodological discussion on the nature of technological investigations. Jones distinguished between long-term and short-term investigations. In his view the most promising field for long-term investigation was the airplane body, especially the aerodynamic interference between the propeller and the body.18

After Jones and the three other Committee members who had submitted preliminary papers had spoken at the 1921 meeting, Bairstow offered a long criticism of Jones’s proposals. The sharp conflict between Bairstow and Jones became the central issue of the Discussion of the Aeroplane of 1930. Bairstow’s position was clear: continue research into aeronautical control and stability. In arguing for this policy, he stressed three points: the main cause of airplane accidents, the high cost of insurance premiums for commercial aviation, and the need for night flying capability.19 A recent report of the Accidents subcommittee had stated that in order to decrease accidents, the investigation of lateral control and stability was of pressing importance. The report also concluded that “the knowledge of longitudinal motion is in a far more satisfactory condition than knowledge of lateral motion.” On this point, however, Bairstow drew attention to a recent accident of the Tarrant Tabor, the giant experimental airplane which had lost its longitudinal balance while taking off, causing the death of the two crewmen on board.20 Though the real causes of the accident were still unclear, Bairstow emphasized that more investigations on models were needed to secure longitudinal balance in large, manually-controlled aeroplanes.

Tizard attempted to find a compromise between Bairstow’s insistence on stability and control and Jones’s refusal to fragment the study of aerodynamic forces on the airframe. Might not some research be terminated to free resources for a new project? Specifically, he questioned the urgency of an investigation into the stability and control of a twin-engine airplane when one of its engines suddenly stopped. This problem was so complicated, he commented, that by the time it was finally solved, the current type of twin-engined airplanes might be completely outdated. Funds for this research might be better invested in Jones’s plan. But Tizard could not convince the Committee. Instead, Jones was nominated to be chairman of the Stability and Control subcommittee, and was obliged to continue his study on the control of airplanes flying at low speeds. Bairstow’s power prevailed. Researches on stability and control continued to dominate for most of the next decade.

Bairstow’s power in this instance can be compared to that of Pasteur, as described in Bruno Latour’s Pasteurization of France. In Latour’s view, Pasteur parlayed his research achievements within the laboratory into power in the outside world.21 In Bairstow’s attempt to combine the inside and the outside, his manner appears coercive. In the controversy over scale effect, for example, his contention was too one-sided. In the discussion of the Aeroplane of 1930, his statement defended his own vested interest rather than reflecting on the best research program for the next decade.

When the controversy over scale effect was finally settled, Bairstow's argument turned out to be wrong. The complete controversy is cogently summarized by Joseph L. Nayler, longtime Secretary of the Aeronautical Research Committee, in his obituary account of Bairstow:

There was a great controversy in the early days between Bairstow and research staff at the Royal Aircraft Factory that boiled up in the 'Scale Effect' subcommittee which gave rise to the Aerodynamics subcommittee. Bairstow maintained that full-scale was inaccurate and model work was dead accurate. This position did not alter much until an 'international' aerofoil was sent to laboratories abroad by Richard Southwell for the A.R.C., and a variety of results obtained. That led to the investigation of turbulence in wind tunnels. In another respect Bairstow was at fault. He disagreed about corrections for wind tunnel walls brought forward by Glauert, who had studied Prandtl, and the Aerodynamics Committee actually voted against their inclusion under Bairstow's influence; but the position changed so rapidly that in a couple of years or so the swing was all the other way.22

The following section traces the story in more detail. It begins with another connection between the inside and the outside of the laboratory.

THE PHYSICS OF FAILURE

Why did flight 90 crash? At a technical level (and, as we will see, the technical is never purely technical) the NTSB concluded that the answer was twofold: not enough thrust and contaminated wings. Easily said, less easily demonstrated. The crash team mounted three basic arguments. First, from the cockpit voice recorder, investigators could extract and frequency-analyze the background noise, noise that was demonstrably dominated by the rotation of the low-pressure compressor. This frequency, which corresponds to the number of blades passing per second (BPF), is closely related to the instrument panel gauge N1 (percentage of maximum rpm for the low-pressure compressor) by the following formula:

BPF (blades per second) = (rotations per minute (rpm) x number of blades)/60

or

Percent max rpm (N1) = (60 x BPF x 100)/(maximum rpm x number of blades)

Applying this formula to the recorded background noise, investigators found that until 1600:55 – about six seconds before the crash – N1 remained between 80 and 84 percent of maximum. Normal N1 at standard takeoff thrust was about 90 percent. It appeared that only in these last seconds was the power pushed all the way up. So why was N1 so low, so discordant with the relatively high EPR setting of 2.04? After all, we heard a moment ago on the CVR that the engines had been set at 2.04, maximum takeoff thrust. How could this be? The report then takes us back to the gauges.
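As a minimal sketch of the conversion just described (the blade count and maximum rpm below are placeholders of my own choosing, not the JT8D's actual figures):

# Convert a blade-passing frequency picked off the CVR noise spectrum into
# the N1 gauge reading (percent of maximum low-pressure-compressor rpm).
# The blade count and redline rpm here are hypothetical placeholders.

def n1_percent(bpf_hz, n_blades, max_rpm):
    rpm = bpf_hz * 60.0 / n_blades   # invert BPF = rpm x blades / 60
    return 100.0 * rpm / max_rpm

print(n1_percent(bpf_hz=4600.0, n_blades=40, max_rpm=8600.0))  # about 80 percent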

The primary instrument for takeoff thrust was the Engine Pressure Ratio gauge, the EPR. In the 737 this gauge read off an electronically divided signal in which the pressure at the engine exhaust probe, Pt7, was divided by the pressure at the engine inlet nose probe, Pt2. Normally the Pt2 probe was deiced by anti-ice bleed air from the engine's eighth-stage compressor. If, however, ice were allowed to form in and block the Pt2 probe, the EPR gauge would become completely unreliable. For with Pt2 frozen, pressure measurement took place at the vent (see figure 2) – and the pressure at that vent was significantly lower than the true ram pressure at the engine inlet, making

apparent EPR = Pt7/Pt2(vent) > real EPR = Pt7/Pt2.

Since takeoff procedure was governed by throttling up to a fixed EPR reading of 2.04, a falsely high reading of the EPR meant that the “real” EPR could have been much less, and that meant less engine power.
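A small numerical sketch may make the inequality concrete. The pressures below are invented round numbers, chosen only so that the indicated value lands near 2.04 while the real ratio lands near 1.70, the figures the Boeing ground test described next produced; they are not measurements.

# Illustrative only: how a blocked Pt2 probe inflates the indicated EPR.
pt2_true = 14.7      # actual inlet pressure an unblocked probe would sense
pt2_sensed = 12.2    # lower pressure seen at the vent with the probe iced over
pt7 = 24.9           # exhaust pressure produced by the crew's throttle setting

real_epr = pt7 / pt2_true         # about 1.70: what the engines actually delivered
indicated_epr = pt7 / pt2_sensed  # about 2.04: what the cockpit gauge displayed
print(round(real_epr, 2), round(indicated_epr, 2))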

To test the hypothesis of a frozen low pressure probe, the Boeing Company engineers took a similarly configured 737-200 aircraft with JT8D engines resembling those on the accident flight, and blocked with tape the Pt2 probe on the number one engine (simulating the probe being frozen shut). They left the number two engine probe unblocked (normal). The testers then set the Engine Pressure Ratio


Figure 2. Pt2 and Pt7. Source: NTSB-90, p. 25, figure 5.

indicator for both engines at takeoff power (2.04), and observed the resulting readings on the other instruments for both “frozen” and “normal” cases. This experiment made it clear that the EPR reading for the blocked engine was deceptive – as soon as the tape was removed from Pt2, the EPR revealed not the 2.04 to which it had been set, but a mere 1.70. Strikingly, all the other number one engine gauges


Figure 3. Instruments for Normal/Blocked Pt2. Source: NTSB-90, p. 26, figure 6.

– N1, N2, EGT, and Fuel Flow – remained at the level expected for an EPR of 1.70. One thing was now clear: instead of two engines operating at an EPR of 2.04 or 14,500 lbs of thrust each, flight 90 had taken off, hobbled into a stall, and begun falling towards the 14th Street Bridge with two engines delivering an EPR of 1.70, a mere 10,750 lbs of thrust apiece. At that power, the plane was only marginally able
to climb under perfect conditions. And with wings covered with ice and snow, flight 90 was not, on January 13, flying under otherwise perfect conditions.

Finally, in Boeing’s Flight Simulator Center in Renton, Washington, staff unfolded a third stage of inquiry into the power problem. With some custom programming the computer center designed visuals to reproduce the runway at Washington National Airport, the 14th Street Bridge and the railroad bridge. Pilots flying the simulator under “normal” (no-ice configuration) concurred that the simulation resembled the 737s they flew. With normalcy defined by this consensus, the simulator was then set to replicate the 737-200 with wing surface contamination – specifically the coefficient of lift was degraded and that of drag augmented. Now using the results of the engine test and noise spectrum analysis, engineers set the EPR at 1.70 instead of the usual takeoff value of 2.04. While alone the low power was not “fatal” and alone the altered lift and drag were not catastrophic, together the two delivered five flights that did reproduce the flight profile, timing and position of impact of the ill-starred flight 90. Under these flight conditions the last possible time in which recovery appeared possible by application of full power (full EPR = 2.23) was about 15 seconds after takeoff. Beyond that point, no addition of power rescued the plane.4
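A rough way to see why the degraded lift coefficient mattered so much is that stall speed scales inversely with the square root of the maximum lift coefficient, so even a modest contamination penalty raises the speed at which the wing stops flying. The sketch below is not the simulator's model; the weight, wing area, and lift coefficients are invented placeholders used only to show the shape of the effect.

import math

# Illustrative only: how a loss of maximum lift coefficient raises stall speed.
# V_stall = sqrt(2W / (rho * S * CLmax)); all values below are placeholders.
def stall_speed(weight_n, wing_area_m2, cl_max, rho=1.225):
    return math.sqrt(2.0 * weight_n / (rho * wing_area_m2 * cl_max))

clean = stall_speed(weight_n=460000.0, wing_area_m2=91.0, cl_max=2.2)
iced = stall_speed(weight_n=460000.0, wing_area_m2=91.0, cl_max=1.8)  # assumed penalty
print(f"clean wing: {clean:.0f} m/s, contaminated wing: {iced:.0f} m/s")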

Up to now the story is as logically straightforward as it is humanly tragic: wing contamination and low thrust resulting from a power setting fixed on the basis of a frozen, malfunctioning gauge drove the 737 into a low-altitude stall. But from this point on in the story that limpid quality clouds. Causal lines radiated every which way like the wires of an old, discarded computer – some terminated, some crossed, some led to regulations, others to hardware; some to training, and others to individual or group psychology. At the same time, this report, like others, began to focus the causal inquiry upon an individual element, or even on an individual person. This dilemma between causal diffusion and causal localization lay at the heart of this and other inquiries. But let us return to the specifics.

The NTSB followed, inter alia, the deicing trucks. Why, the NTSB asked, was the left side of the plane treated without a final overspray of glycol while the right side received it? Why was the glycol mixture wrongly reckoned for the temperature? Why were the engine inlets not properly covered during the spraying? Typical of the ramified causal paths was the one that led to a non-regulation nozzle used by one of the trucks, such that its miscalibration left less glycol in the mixture (18%) than there should have been (30%).5 What does one conclude? That the replacement nozzle killed these men, women and children? That the purchase order clerk who bought it was responsible? That the absence of a "mix monitor" directly registering the glycol-to-water ratio was the seed of destruction?6 And the list of circumstances without which the accident would not have occurred goes on – including the possibility that wing de-icing could have been used on the ground, that better gate holding procedures would have kept flight 90 from waiting so long between de-icing and takeoff, to name but two others.7

There is within the accident report’s expanding net of counterfactual conditionals a fundamental instability that, I believe, registers in the very conception of these accident investigations. For these reports in general – and this one in particular – systematically turn in two conflicting directions. On one side the reports identify a wide net of necessary causes of the crash, and there are arbitrarily many of these – after all the number of ways in which the accident might not have happened is legion. Human responsibility in such an account disperses over many individuals. On the other side, the reports zero in on sufficient, localizable causes, often the actions of one or two people, a bad part or faulty procedure. Out of the complex net of interactions considered in this particular accident, the condensation was dramatic: the report lodged immediate, local responsibility squarely with the captain.

Fundamentally, there were two charges: that the captain did not reject the takeoff when the first officer pointed out the instrument anomalies, and that, once in the air, the captain did not demand a full-throttle response to the impending stall. Consider the “rejection” issue first. Here it is worth distinguishing between dispersed and individuated causal agency (causal instability), and individual and multiple responsibility (agency instability). There is also a third instability that enters, this one rooted between the view that flight competence stems from craft knowledge and the view that it comes from procedural knowledge (protocol instability).

The NTSB began its discussion of the captain’s decision not to reject by citing the Air Florida Training and Operations Manual:

Under adverse conditions on takeoff, recognition of an engine failure may be difficult. Therefore, close reliable crew coordination is necessary for early recognition.

The captain ALONE makes the decision to “REJECT.”

On the B-737, the engine instruments must be closely monitored by the pilot not flying. The pilot flying should also monitor the engine instruments within his capabilities. Any crewmember will call out any indication of engine problems affecting flight safety. The callout will be the malfunction, e. g., “ENGINE FAILURE,” “ENGINE FIRE,” and appropriate engine number.

The decision is still the captain’s, but he must rely heavily on the first officer.

The initial portion of each takeoff should be performed as if an engine failure were to occur.8

The NTSB report used this training manual excerpt to show that despite the fact that the co-pilot was the "pilot flying," responsibility for rejection lay squarely and unambiguously with the captain. But intriguingly, this document also pointed in a different direction: that rejection was discussed in the training procedure uniquely in terms of the failure of a single engine. Since engine failure typically made itself known through differences between the two engines' performance instruments, protocol directed the pilot's attention to a comparison (cross-check) between the number one and number two engines, and here the two were reading exactly the same way. Now it is true that the NTSB investigators later noted that the reliance on differences could have been part of the problem.9 In the context of training procedures that stressed the cross-check, the absence of a difference between the left and right engines strikes me not as incidental, but rather as central. In particular it may help explain why the first officer saw something as wrong – but not something that fell into the class of expectations. He did not see a set of instruments that protocol suggested would reflect the alternatives "ENGINE FAILURE" or "ENGINE FIRE."

But even if the first officer or captain unambiguously knew that, say, N1 was low for a thrust setting of the EPR readout of 2.04, the rejection process itself was riddled with problems. Principally, it makes no sense. The airspeed V1 functioned as the speed below which it was supposed to be safe to decelerate to a stop and above which it was safe to proceed to takeoff even with an engine failure. But this speed was so racked with confusion that it is worth discussing. Neil Van Sickle gives a typical definition of V1 in his Modern Airmanship, where he writes that V1 is "The speed at which… should one engine fail, the distance required to complete the takeoff exactly equals the distance required to stop."10 So before V1, if the engine failed, you could stop in less distance than it would take to get off the ground. Other sources defined V1 as the speed at which air would pass the rudder rapidly enough for rudder authority to keep a plane with a dead engine from spinning. Whatever its basis, as the Air Florida Flight Operations Manual for the Boeing 737 made clear, pilots were to reject a takeoff if the engine failed before V1; afterwards, presumably, the takeoff ought to be continued. The problem is that, by its use, the speed V1 had come to serve as a marker for the crucial spatial point where the speed of the plane and the distance to go made it possible to stop (barely) before overrunning the runway. In the supporting documents of the NTSB report (called the Docket) one finds in the Operations Group "factual report" the following hybrid definition of V1:

[V1 is] the speed at which, if an engine failure occurs, the distance to continue the takeoff to a height of 35 feet will not exceed the usable takeoff distance; or the distance to bring the airplane to a full stop will not exceed the acceleration-stop distance available. V1 must not be greater than the rotation speed, Vr [rejecting after rotation would be enormously dangerous], or less than the ground minimum control speed Vmcg [rejecting before the plane achieves sufficient rudder authority to become controllable would be suicidal].11

Obviously, V1 cannot possibly do the work demanded of it: it is the wrong parameter to be measuring. Suppose the plane accelerated at a slow, constant rate from the threshold to the overrun area, achieving V1 just as it began to cross the far end of the runway. That would, by the book, mean it could safely take off, when in reality it would be within a couple of seconds of collapsing into a fuel-soaked fire. The question should be whether V1 has been reached by a certain point on the runway from which a maximum stop effort will halt the plane before it runs out of space (a point known elsewhere in the lore as the acceleration-stop distance). If one is going to combine the acceleration-stop distance with the demand that the plane have rudder authority and that it be possible to continue in the space left to an engine-out takeoff, then one way or another the speed V1 must be achieved at or before a fixed point on the runway. No such procedure existed.
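The point can be put in elementary kinematic terms: under constant acceleration the distance consumed in reaching V1 is V1 squared over twice the acceleration, so a sluggish roll pushes that point far down the runway even though the airspeed indicator eventually shows the "correct" V1. The sketch below uses invented round numbers for speed, acceleration, and distances; none of them are flight 90's actual figures.

# Invented round numbers (not flight 90's data) showing why a speed criterion
# alone cannot stand in for a position criterion on the runway.
v1 = 70.0              # decision speed, m/s (roughly 136 knots)
runway = 2100.0        # usable runway length, m
stop_margin = 900.0    # assumed distance needed to stop from V1, m
go_no_go_point = runway - stop_margin   # latest point at which V1 may be reached

for accel in (2.5, 1.5):                      # normal vs. degraded acceleration, m/s^2
    distance_to_v1 = v1 ** 2 / (2.0 * accel)  # s = v^2 / (2a) for constant acceleration
    verdict = "inside" if distance_to_v1 <= go_no_go_point else "beyond"
    print(f"a = {accel} m/s^2: V1 reached after {distance_to_v1:.0f} m, "
          f"{verdict} the {go_no_go_point:.0f} m go/no-go point")

On these invented numbers, halving the acceleration moves the V1 point from well inside to well beyond the go/no-go position, which is exactly the low-acceleration trap at issue here.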

Sadly, as the NTSB admitted, it was technically unfeasible to marry the very precise inertial navigation system (fixing distance) to a simple measurement of time elapsed since the start of acceleration. And planting distance-to-go markers on the runway was dismissed because of the "fear of increasing exposure to unnecessary high-speed aborts and subsequent overruns… [that might cause] more accidents than they might prevent."12 With such signs the rolling protocol would presumably demand that the pilots reject any takeoff where V1 was reached after a certain point on the runway. But given the combination of technical limitations and cost-benefit decisions about markers, it was, in fact, impossible to know in a protocol-following way whether V1 had been achieved in time for a safe rejection. This meant that the procedure of rejection by V1 turned out to be completely unreliable in just that case where the airplane was accelerating at a less than normal rate. And it is exactly such a low-acceleration case that we are considering in flight 90. What is demanded of a pilot – a pilot on any flight using V1 as a go-no-go speed – is a judgment, a protocol-defying judgment, that V1 has been reached "early enough" (determined without an instrument or exterior marking) in the takeoff roll and without a significant anomaly. (Given the manifest and recognized dangers of aborting a high-speed roll, "significant" here obviously carries much weight; Air Florida, for example, forbade its pilots from rejecting a takeoff solely on the basis of the illumination of the Master Caution light.)13

The NTSB report “knows” that there is a problem with the V1 rejection criterion, though it knows it in an unstable way:

It is not necessary that a crew completely analyze a problem before rejecting a takeoff on the takeoff roll. An observation that something is not right is sufficient reason to reject a takeoff without further analysis… The Safety Board concludes that there was sufficient doubt about instrument readings early in the takeoff roll to cause the captain to reject the takeoff while the aircraft was still at relatively low speeds; that the doubt was clearly expressed by the first officer; and that the failure of the captain to respond and reject the takeoff was a direct cause of the accident.14

Indeed, after a careful engineering analysis involving speed, reverse thrust, the runway surface, and braking power, the NTSB determined the pilot could have aborted even with a frictional coefficient of 0.1 (sheet ice) – the flight 90 crew should not have had trouble braking to a stop from a speed of 120 knots on the takeoff roll. “Therefore, the Safety Board believes that the runway condition should not have been a factor in any decision to reject the takeoff when the instrument anomaly was noted.”15

What does this mean? What is this concept of agency that takes the theoretical engineering result computed months later and uses it to say "therefore… should not have been a factor"? Is it that the decision that runway condition "should not have been a factor" would have been apparent to a Laplacian computer, an ideal pilot able to compute friction coefficients by sight and, from them, deceleration distance using weight, wind, braking power, and available reverse thrust? Robert Buck, a highly experienced pilot – a 747 captain who was given the Air Medal by President Truman – wrote about the NTSB report on flight 90: "How was a pilot to know that [he could have stopped]? No way from training, no way was there any runway coefficient information given the pilot; a typical NTSB after-the-fact, pedantic, unrealistic piece of laboratory-developed information."16 Once the flight was airborne with the stickshaker vibrating and the stall warning alarm blaring, the NTSB had a different criticism: the pilot did not ram the throttles into the fully open position. Here the report has an interesting comment. "The Board believes that the flightcrew hesitated in adding thrust because of the concern about exceeding normal engine limitations which is ingrained through flightcrew training programs." If power is raised so that the exhaust temperature rises even momentarily above a certain level, then, at a bare minimum, the engine has to be completely disassembled and parts replaced. Damage can easily cost hundreds of thousands of dollars, and it is no surprise that firewalling a throttle is an action no trained pilot executes easily. But this line of reasoning can be combined with arguments elsewhere in the report. If the captain believed (as the NTSB argues) that the power delivered was normal takeoff thrust, he might well have seen the stall warning as the result of an over-rotation curable by no more than some forward pressure on the yoke. By the time it became clear that the fast rate of pitch and high angle of attack were not easily controllable (737s notoriously pitch up with contaminated wings), he did apply full power – but given the delay in jet engines between power command and delivery, it was too late. The NTSB recommended changes in "indoctrination" to allow for such modification when loss of the aircraft is the alternative.17

In the end, the NTSB concluded their analysis with the following statement of probable cause, the bottom line:

The National Transportation Safety Board determines that the probable cause of this accident was the flightcrew’s failure to use engine anti-ice during ground operation and takeoff, their decision to take off with snow/ice on the airfoil surfaces of the aircraft, and the captain’s failure to reject the takeoff during the early stage when his attention was called to anomalous engine instrument readings.18

But there was one more implied step to the account. From an erroneous gauge reading and icy wing surfaces, the Board had driven their “probable cause” back to a localizable faulty human decision. Now they began, tentatively, to put that human decision itself under the microscope. Causal diffusion shifted to causal localization.

General Electric – Variable Geometry

General Electric's approach to solving the high pressure-ratio compressor problem, by contrast, was to stay with the single-spool design they had employed on their highly successful earlier engines, and to adopt "variable geometry" in the forward stages of the compressor in order to modulate the flow at off-design operation. Specifically, the stationary blades, or "stator vanes," in the forward stages were rotated to different stagger angles, depending on the operating point, thereby altering the flow area in these stages in order to maintain favorable incidence angles on the blades at different conditions.21 The first flight-qualified engine GE designed with variable stator vanes was the J-79, which powered the Mach 2.2 B-58 bomber and several Mach 2.2 fighters, including the F-104 and the F-4H.22 The design that evolved into the J-79 was begun in 1951, with the first flight test of the engine in 1955. The J-79 produced 12,000 pounds of thrust without afterburner and 17,000 pounds of thrust with afterburner. Its 17-stage compressor had variable stator vanes in the first 6 stages, as well as variable inlet guide vanes; its overall compressor pressure-ratio was 12 to 1 (for an average pressure-ratio just below 1.16 per stage).23