
SUMMARY AND CONCLUSION

Without diminishing the original contribution of many figures who were born and trained in America, the influence of international factors on the evolution of American aviation has been pervasive. Prior to World War I, European experience often provided the starting points for successful aeronautical investigations and served as the model for research institutions like the National Advisory Committee for Aeronautics. During and after the war, a considerable number of European emigres brought knowledge and entrepreneurial skills, providing a distinct legacy in both theoretical and applied aeronautics. There were degree programs at a handful of universities, but hardly a nucleus large enough to train the hundreds of aero engineers needed to sustain a major aviation industry. Despite production of the DH-4 and biplane trainers during the war, there was still no comprehensive infrastructure to serve the requirements of aeronautics. During the 1920s and 1930s the Europeans helped fill these gaps. They were the theoreticians for the NACA; educators in universities; organizers of professional societies; leaders in industry.

During the decades between World War I and World War II, it might have been possible for Americans themselves to fill in the gaps in the aeronautical infrastructure. But it would have required many additional years, and America may not have been prepared for World War II. America’s postwar success in jet engines and high-speed flight technology likewise received invaluable momentum from foreign legacies. It might have been possible for the United States to develop large rockets for space exploration without the contributions of the von Braun team, but the lunar landing would probably have occurred in the 1970s, not the 1960s. Through professional literature, individuals, and hardware, the European influence on American aviation and aerospace history has been profound. Minus that influence, the record of American achievements in flight would have been dramatically diminished.

THE EMERGENCE OF THE TURBOFAN ENGINE

If you have looked out the window of an airplane lately, you may have noticed that jet engines are gradually getting shorter and fatter. You will see 737s, the most common airliner in service, with two types of engines of distinctly different shapes. The older models have long, stovepipe-shaped engines under the wings, while the newer ones (or older ones which have been retrofitted with new engines) have rounder, shorter powerplants, with a large shell or nacelle around the outside and a smaller cylinder protruding from the rear. Boeing’s latest, the 777, has relatively short but immense engines – each with a diameter equivalent to the fuselage of a 737. This change represents the maturing of the turbofan engine, which in the early 1960s superseded the older turbojet engine. Strictly speaking, for the past thirty-five years we have been living in the fan age more than the jet age.

Turbofans have a number of advantages over turbojets, particularly lower noise and higher efficiency – both key factors in making commercial jet air travel socially acceptable and economically feasible. Yet they appeared relatively late: no aircraft was powered by a turbofan engine until after 1960. Flight Magazine, in its 1957 prediction of aero engines ten years in the future, did not even mention the fan engine.1 As late as 1959, after airlines had begun to contract for turbofan engines, at least one expert was still expressing skepticism about their practicality.2 Once they appeared, however, turbofans almost immediately became the dominant engine for high-subsonic flight – the regime in which commercial airliners fly. In 1960, Flight Magazine declared that engineers had agreed that all high-subsonic engines would be fans.3

Today, turbofans power virtually all large commercial transports, as well as most large military transports and many business jets, and afterburning turbofans power most military supersonic aircraft. While the technology has certainly evolved in the last thirty-five years, the original turbofan configuration nevertheless stabilized quite quickly. Pratt and Whitney introduced the JT8D in 1963, and it remains the single most common jet engine in commercial service – with more than 13,000 sold.

The rapidity, scope, and permanence of the turbofan’s proliferation suggest a new technology with such obvious advantages that it met no resistance and spread rapidly – a veritable “turbofan revolution,” to modify Edward Constant’s phrase.4 But the obviousness argument, that hallmark of corporate histories and trope of technological progress, breaks down upon closer analysis.

For the advantages of the turbofan engine, or more generically of the bypass engine, were recognized almost as early as those of the turbojet itself – Frank Whittle patented the idea in 1936, and a number of bypass engines were designed in the mid-1940s. Thus, nearly a quarter of a century elapsed between the time fan engines were considered a good idea and the time they actually became good enough to put into service on airplanes. This paper explores this odd historical trajectory by asking two questions. First, if they took over so quickly, why did it take so long for turbofan engines to enter flight? And second, why did turbofan engines emerge when they did?

The answers to these questions include new engineering techniques, government-funded research, military requirements, and corporate competition. The story has a broad historical significance because the turbofan depended on and contributed to a stable configuration for commercial jet air travel at high-subsonic speeds5 – a major feature of today’s technological life. We are also interested, however, in questions of engineering epistemology – i.e., what knowledge do engineers use in design? How is this knowledge developed? And how precisely is it utilized? Walter Vincenti began to address these questions with his series of case studies in aeronautics, and we build on his work, especially regarding the role of uncertainty in design.6

Examining epistemological issues in the design of turbofans sheds light on other questions as well. For example, why, in 1997, might you be likely to fly on an aircraft with engines designed more than thirty years ago? Why do technologies experience periods of rapid change, followed by long periods of stability and incremental change? What follows, we argue, is fundamentally a story of radical and incremental change, but one that ends in a counterintuitive way. Rather than a radical innovation winning out over incremental improvements, we find a radical design that spurred incremental innovation in a competitor. The latter succeeded commercially and established the turbofan as an accepted technology.

THE SUBSEQUENT HISTORY OF THE CJ805-23 AND THE JT3D

Although GE’s CJ805-23 was the first flight-qualified turbofan engine, it was not the first to enter commercial service. Because it weighed 1000 pounds more than the CJ805 turbojet, it could not be installed on the Convair 880. It did fit both the 707 and the DC-8, but P&W’s rapid response pre-empted any chance of its replacing the JT3C on either of these aircraft. The CJ805-23 thus had to await the development of a new aircraft, the Convair 990, to enter service. First flight was scheduled for the fall of 1960, with production deliveries to follow in March 1961. Aerodynamic performance problems with the aircraft ended up pushing the latter date back to September 1962. Ultimately only 37 Convair 990s were sold. GE attempted to have the CJ805-23 introduced on the Caravelle, replacing the Rolls-Royce Avon, but this too fell through. The breakthrough turbofan engine ended up without an aircraft to fly on.82

The CJ805-23 had some problems in the field. Leakage from the hot turbine stream to the cold fan stream proved more of a problem on production engines than it had on the prototype, necessitating some minor redesign. More seriously, the turbofan bluckets began suffering thermal fatigue cracks, owing to the combination of transient thermal stresses (during start-up and shutdown) and the opposite camber of the fan and turbine blading. For a while the blucket thermal fatigue problem looked like it might be a fundamental fact of bluckets and hence not solvable at all, threatening to create a small financial disaster for GE.83 The problem was solved, but it surely did not help GE convince anyone to consider the engine on other aircraft. The last CJ805-23 was shipped in 1962. Its great engineering achievement notwithstanding, it was by all standards a commercial failure. The contrast between this outcome and the commercial success of P&W’s JT3D led Jack Parker, the head of GE Aerospace and Defense, to remark, “We converted the heathen but the competitor sold the bibles.”84 The fan design of the CJ805-23, however, had a more illustrious history. A scaled-down version of it was installed behind GE’s small J-85 engine to form the CF-700, a 4000 pound thrust engine. This engine flew on business jets into the 1990s, most notably the Falcon 20F and the Sabre 75A. The commercial failure of the CJ805-23 was not the fault of the fan design.

P&W’s JT3D entered service on the Boeing 707 in July, 1960, more than two years before the CJ805-23. Shortly thereafter it began powering Boeing 720B’s and DC-8’s, and the TF-33 entered service on the KC-135 and the eight-engine B-52H bomber, of which the military had ordered 102 in September 1959, and a few years later on the Lockheed C-141. JT3D-powered 707s were still in service into the 1990s, and the TF-33-powered B-52 served in the Persian Gulf War. P&W had delivered 8550 JT3D’s, including JT3C conversions, by 1983. Its success was outdone only by P&W’s JT8D, designed in 1959 on largely the same basis as the JT3D, with more than 13,000 delivered.

THE WIND TUNNEL AND THE EMERGENCE OF AERONAUTICAL RESEARCH IN BRITAIN

INTRODUCTION

The wind tunnel has been an essential instrument for the development of the airplane. From the time of the Wright brothers to the present, it has served aeronautical investigators as an indispensable tool for the improvement of aerodynamic performance. With the emergence of practical aviation on the eve of World War I, European and American countries set up their research programs and built laboratories with wind tunnels to conduct their investigations.

The wind tunnel is a relatively simple instrument: it makes air flow through a tunnel and measures the force or moment exerted by the wind on a body placed in it. Because the theoretical treatment of aerodynamic flow is so difficult and complex, the wind tunnel serves as a useful device for gathering empirical data in realms not predicted by theory. And yet the measured data do not necessarily represent the aerodynamic performance of a real airplane in the sky. The theory of fluid dynamics tells us that the difference between the dimensions of a model and those of a full-scale aircraft causes scale effect, a phenomenon governed by the dimensionless Reynolds number. Besides scale effect, wind tunnel data could be compromised by errors inherent in experimental procedures and in wind tunnel structures, such as the aerodynamic interference from the walls of a closed tunnel.
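To make the scale effect concrete, the following short sketch (not from the original text; the numerical values are illustrative assumptions) computes the Reynolds number, Re = density × velocity × length / viscosity, for a small tunnel model and for a full-scale wing, showing how far apart the two can lie.

```python
# Illustrative sketch of the "scale effect" discussed above.
# Values are assumptions for illustration, not data from the chapter.

RHO_AIR = 1.225    # sea-level air density, kg/m^3
MU_AIR = 1.81e-5   # dynamic viscosity of air, kg/(m*s)

def reynolds_number(speed_m_s, length_m, rho=RHO_AIR, mu=MU_AIR):
    """Reynolds number Re = rho * V * L / mu for a body of characteristic length L."""
    return rho * speed_m_s * length_m / mu

# A small model at an early tunnel's wind speed...
re_model = reynolds_number(speed_m_s=15.0, length_m=0.15)
# ...versus a full-scale wing chord at flight speed.
re_full = reynolds_number(speed_m_s=45.0, length_m=2.0)

print(f"model Re ~ {re_model:.2e}, full-scale Re ~ {re_full:.2e}, "
      f"ratio ~ {re_full / re_model:.0f}x")
```

The large gap between the model and full-scale values, on any such assumed numbers, is why tunnel measurements could not simply be read off as the performance of a real airplane.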

This chapter explores the early use of the wind tunnel by British aeronautical researchers and the controversy over the validity of its use. The main character is Leonard Bairstow, an aerodynamic experimenter who worked on the stability of the airplane through wind tunnel experiments, and who argued for the usefulness of such model experiments. Bairstow and his colleagues at the National Physical Laboratory (NPL) conducted aerodynamic experiments beginning in 1904. While their research produced useful data for airplane designers, investigators became increasingly aware of the discrepancies between the data from model experiments and those from full-scale experiments, as well as discrepancies between the data from different wind tunnels. Those discrepancies form one major thread in this story.

This paper also compares activities inside and outside the laboratory setting and examines the interrelations between these two realms. In his Science in Action, Bruno Latour presented a model to explain the process by which research results are generated inside a laboratory and applied to the outside world, making the laboratory in the end an Archimedean point from which to move the world.1


The Aeronautics Division of the NPL can be considered such a laboratory. Its history reveals it to be a typical case of Latour’s laboratory, though its story differed from that of the ideal laboratory recounted in Science in Action.

In what follows, I will first briefly explain Bairstow’s stability research at the NPL, and the worldwide appreciation of its aeronautical significance. I will then present two episodes in which Bairstow rather coercively argued for the validity of model experiments and the postwar continuation of stability research. After describing how Bairstow became an influential leader of the British aeronautical community, I will explain how he came to be criticized for his insistent stand. The controversy illuminates not only the strengths and limitations of wind tunnel research but also differing perceptions of research inside and outside the laboratory.

WHAT IS A TURBOFAN ENGINE?

An aircraft gas turbine engine takes in air through an inlet, increases its pressure in a compressor, adds fuel to the high-pressure air and burns the mixture in a combustion chamber, and then exhausts the heated air and combustion products, expanding first through a turbine, where energy is extracted from it to drive the compressor, and finally through a nozzle. Schematics of the three principal types of aircraft gas turbines are shown in Figure 1. In a turboprop engine, the turbine also drives a propeller, connected to the rotor shaft through gears, and it supplies the thrust required by the aircraft; the gas turbine in this case is just an alternative to a piston engine, converting chemical energy into mechanical energy. In a turbojet, by contrast, the thrust comes from the energized flow exiting the nozzle, literally the “jet” of exhaust. In bypass engines a significant portion of the thrust comes from exhaust air that bypasses the combustor and turbine. The bypass air must receive energy from one source or another in order to supply thrust. In the case of a turbofan engine the bypass air is pressurized by a fan. The ratio of the bypass air to the air that passes through the combustor and turbine is called the bypass ratio. Bypass engines are typically classified as low or high bypass in accord with this ratio.

Figure 1. Schematic principle of operation of turbojet, turboprop, and turbofan engines. Each engine has a compressor, combustion chambers, and a turbine, forming the “gas generator”. Note the cool air from the compressor, or “bypass flow,” in the turbofan. [The Aircraft Gas Turbine Engine and its Operation (United Technologies Corporation, 1974), p. 44.]

 

Figure 2. Cutaway drawings of the Pratt & Whitney JT8D, a low-bypass turbofan, and the General Electric CF6, a high-bypass turbofan. These engines power numerous modern commercial airliners, including the Boeing 727 and 737 and the Douglas DC-9. [Jack L. Kerrebrock, Aircraft Engines and Gas Turbines (Cambridge: MIT Press, 1981), p. 19A; Flight Magazine, February 29, 1961.]

One of the two engines on the 737 mentioned above, for example, is an older low-bypass engine, Pratt & Whitney’s JT8D, with a bypass ratio of 1.1 to 1; the other is a more recent high-bypass engine, General Electric’s CF6, with a bypass ratio of 5 to 1 – i.e., less than 17 percent of the total airflow goes through the combustor and turbine. (Hence the shorter, fatter appearance of the larger fan.) Figure 2 displays cutaways of these two engines.
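The 17 percent figure quoted above follows directly from the definition of the bypass ratio. A minimal sketch of the arithmetic, using the engine values as given in the text:

```python
# Core (combustor + turbine) share of total airflow implied by the bypass ratio:
# bypass ratio = bypass flow / core flow, so core fraction = 1 / (1 + bypass ratio).

def core_fraction(bypass_ratio):
    return 1.0 / (1.0 + bypass_ratio)

print(f"JT8D, bypass ratio 1.1:1 -> {core_fraction(1.1):.1%} of airflow through the core")
print(f"CF6,  bypass ratio 5:1   -> {core_fraction(5.0):.1%} of airflow through the core")
```

For the CF6 this gives about 16.7 percent, matching the “less than 17 percent” cited above.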

In both turbojet and turbofan or bypass engines, indeed in aircraft engines generally, the magnitude of the thrust is the product of the exhaust mass-flow rate and the difference between the exhaust velocity and the flight speed. Turbojets typically achieve their thrust from a comparatively small mass-flow exiting at a comparatively high velocity. Bypass engines can achieve the same thrust from more mass-flow exiting at a lower velocity.

Figure 3. Typical propulsion efficiency ranges for turboprop, turbofan, and turbojet engines, plotted against flight Mach number. [L. C. Wright and R. A. Novak, “Aerodynamic Design and Development of the General Electric CJ805-23 Aft Fan Component,” ASME Paper 60-WA-270, 1960.]

One advantage this gives them is lower exhaust noise, for exhaust noise is a function of exhaust velocity to the 7th power. Their more important advantage, however, is that they offer higher propulsion efficiency in the range from around 450 to 750 miles per hour. By definition, propulsion efficiency is the fraction of the mechanical energy of the exhaust flow that is realized in propulsion of the vehicle. After a little algebraic manipulation, it turns out that:

propulsion efficiency = 2 V_flight / (V_exhaust + V_flight)

Thus, the nearer the exhaust velocity is to the flight velocity, the higher the propulsion efficiency. So long as the components of the engine themselves perform at high thermodynamic efficiency, high propulsion efficiency can be turned into fuel savings.
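The following sketch works through the two relations just described – thrust as the product of exhaust mass flow and the velocity difference, and propulsion efficiency as 2 V_flight / (V_exhaust + V_flight). The mass flows and velocities are illustrative assumptions only, chosen to show how a bypass engine can match a turbojet’s thrust at a much higher propulsion efficiency.

```python
# Thrust and propulsion efficiency per the relations given in the text.
# The mass flows and velocities below are illustrative assumptions, not engine data.

def thrust_newtons(mass_flow_kg_s, v_exhaust_m_s, v_flight_m_s):
    """Thrust = exhaust mass-flow rate * (exhaust velocity - flight speed)."""
    return mass_flow_kg_s * (v_exhaust_m_s - v_flight_m_s)

def propulsion_efficiency(v_exhaust_m_s, v_flight_m_s):
    """Fraction of the exhaust flow's mechanical energy realized as propulsive work."""
    return 2.0 * v_flight_m_s / (v_exhaust_m_s + v_flight_m_s)

V_FLIGHT = 250.0  # m/s, roughly 560 mph -- high-subsonic cruise

# Turbojet: small mass flow at high exhaust velocity.
jet = thrust_newtons(70.0, 600.0, V_FLIGHT)
# Bypass engine: larger mass flow at lower exhaust velocity.
fan = thrust_newtons(175.0, 390.0, V_FLIGHT)

print(f"turbojet: thrust ~ {jet / 1000:.1f} kN, "
      f"propulsion efficiency ~ {propulsion_efficiency(600.0, V_FLIGHT):.2f}")
print(f"turbofan: thrust ~ {fan / 1000:.1f} kN, "
      f"propulsion efficiency ~ {propulsion_efficiency(390.0, V_FLIGHT):.2f}")
```

On these assumed numbers both engines deliver the same thrust, but the bypass engine does so at a propulsion efficiency of roughly 0.78 against the turbojet’s 0.59 – the fuel-savings argument made above.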

Figure 3, taken from the technical paper describing the design of General Electric’s first successful turbofan engine7, indicates how propulsion efficiency varies with flight speed for turboprops, turbofans, and turbojets. The propulsion efficiency of turboprops drops rapidly above Mach 0.5, roughly 300 miles per hour, because of increasingly severe aerodynamic losses at the tips of propellers of that era.8 Because of their high exhaust velocities, turbojets do not match the maximum propulsion efficiency of turboprops until they reach flight speeds above Mach 1.

Turbofans are, in effect, hybrids, filling the propulsion efficiency gap between turboprops and turbojets. Just as in a turboprop, the flow leaving the combustor is used in part to drive a fan that supplies thrust from a high mass-flow air stream; the ducting leading into the fan controls the air flow entering it, enabling much higher speeds without the tip losses experienced in propellers. Just as in a turbojet, the thrust is coming from ducted flow exiting a nozzle; but the exhaust velocities are comparatively low in the colder fan stream, resulting in higher propulsion efficiency and hence more thrust for the same fuel consumption.

The emergence of the turbofan engine involved four partly overlapping steps: (1) advances in the turbojet by the engine companies in the late 1940s and early 1950s, leading to a new generation of military jet engines with increased power that later provided core gas turbines for bypass engines (including Rolls-Royce’s Conway, the earliest bypass engine to enter flight service); (2) major breakthroughs in axial compressor aerodynamic design by the NACA in the early 1950s which, though intended primarily for supersonic flight, ended up providing the separate technological bases for the contrasting fan designs of General Electric’s and Pratt & Whitney’s first turbofan engines; (3) GE covertly developing a turbofan engine in 1957 that achieved a quantum jump in flight performance by employing an aerodynamically very advanced single-stage fan, located aft of the core engine; and (4) P&W, in response to GE, rapidly developing in 1958 what proved to be the commercially more successful turbofan engine, with performance comparable to GE’s even though it employed less advanced aerodynamics in a two-stage fan at the front of the core engine.

The fact that these four steps do not form a single, simple evolutionary pathway demonstrates, among other things, the futility of attempting to tell the story of the “first” turbofan. Debates over firsts usually degenerate into questions of definition, and here such an approach would miss much of what is instructive in the episode. The critical historical and epistemological points surface from a number of separate threads of development (several of them not specifically aimed at turbofans), as well as from the interaction of several development projects, particularly those at GE and P&W. In place of the notion of “first,” we deploy and expand ideas of “normal” and “radical” design, which Vincenti proposes based on a schema set forth by Edward Constant.9 Vincenti closely examines normal design, where engineers work to improve performance of technologies whose fundamental layout and principles are established. He has little to say, however, about radical design, in which a new technology’s basic arrangement and function are yet to be determined (he believes, perhaps correctly, radical design to have received undue attention from historians). In the following history of the turbofan, however, we show that normal and radical design can interact, even when producing a final result that is in many respects incremental. The normal versus radical distinction remains clear, but less clear is whether the two must, or can, exist as separate trajectories. To trace their interactions, we shall describe the four steps listed above after briefly reviewing early efforts on turbofan engines and the reasons they did not displace the then existing turbojets.

WHY THE TURBOFAN EMERGED WHEN IT DID

Let us return to our initial questions. First, given that the turbofan engine was long recognized as promising better propulsion efficiency in high-subsonic flight, and given that the original patent was in 1936, why did turbofans enter flight service only in the early 1960s? A simple technical answer, recognized to at least some extent from the 1940s on, is that no turbofan was going to offer markedly superior performance until (1) gas generators – i.e., turbojets – had reached a reasonably high level of performance, especially in specific-power; and (2) compressor and fan aerodynamic design had reached a point where a sufficient pressure-ratio could be achieved in the bypass stream for efficient high-subsonic flight without excessive weight. Until these advances in technology had been achieved, turboprop aircraft like the Lockheed Electra, with flight speeds around 400 miles per hour, made a great deal more economic sense for most commercial flight. This simple technical explanation, however, masks an underlying complexity. For the two requisite advances in jet engine technology would not have been sufficient for the turbofan to have emerged until the problem to which it was an answer had been identified as important.

As a first step toward unraveling this complexity, we can identify the several local factors that lay behind General Electric’s developing their first turbofan, the CJ805-23, when they did: (1) persistent advocates of fan engines within GE, especially Peter Kappus; (2) an established gas generator with sufficient specific-power to drive the fan; (3) the aft fan concept, which allowed the turbofan engine to be developed at remarkably little cost; (4) the realization, which emerged in the last years of the NACA supersonic compressor research program, that comparatively high Mach number transonic stages could be designed without first having to learn how to control shocks; (5) the shift of key figures in this research program from NACA to GE, especially Lin Wright; (6) the advent of the computer, allowing the introduction of streamline-curvature methods for analyzing radial equilibrium effects in compressors; (7) the idea of adapting streamline-curvature methods to provide a through-blade analysis that could define a blade contour precisely tailored for the significant radial redistribution of the flow that occurs within a high pressure-ratio transonic blade row.

Three other factors may have been important in GE’s decision to commit money to developing the CJ805-23: (1) Rolls-Royce’s Conway engine, perceived perhaps by some as heralding the advent of bypass engines; (2) Pratt & Whitney’s overwhelmingly dominant position in high-subsonic flight, achieved initially through their J-57 on the B-52 and then in the process of being repeated by the commercial version of the J-57, the JT3C, on the Boeing 707 and Douglas DC-8; and (3) Wislicenus’s talk at the SAE Golden Anniversary Aeronautical Meeting, promoting the concept of an aft fan engine.

In contrast to the dismissive stance they had adopted in response to the Conway, Pratt & Whitney responded to GE’s engine by designing a competing fan engine of their own. While GE turned to the radical design of a high-Mach-number single-stage aft fan to achieve the requisite pressure-ratio in the bypass stream, P&W relied on a more incremental design, a two-stage forward fan, to achieve this pressure-ratio, compensating for the added weight by employing titanium. In effect, the competitive pressure of GE’s engine forced P&W to leapfrog over the Conway. Although the tip Mach number of P&W’s fan was significantly lower than GE’s, it was still far enough above Mach 1.0 to preclude the use of standard blading of the general type Rolls-Royce had used in the six bypass stages of the Conway. P&W instead had to use transonic blading of the type NACA-Lewis had proven shortly before on their 5-stage and 8-stage demonstrator compressors.

Pratt & Whitney’s leapfrogging over the Conway exemplifies the phenomenon, often noted but rarely analyzed, that just knowing something has been done makes it much easier to match. Other examples abound in twentieth-century history, but no one has yet collected them and systematically studied the phenomenon. Such a study would likely examine the role of uncertainty in technical developments. Once a certainty of outcome is assured – in this case, that a fan engine can supply a quantum jump in performance – engineering fits itself into the space between the boundaries of possibility.

The GE and P&W turbofan engines, taken together with the Conway, raise a historical issue that is perhaps of less interest to historians of technology than to the audiences they write for. Within the field, the question of “firsts” does not frequently arise as a historiographic problem – most historians agree it is not the most productive focus of inquiry. Broader audiences, however, particularly engineers, often assume that the business of historians does involve establishing priority and allocating credit. Narratives which illustrate that technical firsts are not the keys to understanding a complex history can therefore clarify the work of historians of technology for technical audiences. The turbofan case serves this purpose well, because the radical GE design, which was arguably more notable from a technical point of view, did not end up as the commercially successful innovation. Rather, the more incremental design of P&W, spurred by GE’s advances, established the still prevailing configuration for low-bypass engines. Here, as everywhere, the question of firsts becomes a problem of definition: Was the CJ805-23 the first turbofan? What about the Whittle proposals? Or the Metro-Vick engine of the 1940s? Or the Rolls-Royce Conway? Answering these questions requires examining the ontologies embedded in the notions of turbofan and bypass engine – topics, we contend, more worthy of historical attention than questions about firsts. The question of firsts then becomes: how did a particular machine, or individual, or group, stabilize a dynamic category, such as bypass engine, or airplane, or commercial jet air travel? Or destabilize existing categories?

STABILITY RESEARCH AT THE NATIONAL PHYSICAL LABORATORY

“No Longer an Island” was the phrase that characterized the attitude of British citizens after the Wrights’ European demonstration in 1908 and Louis Bleriot’s successful flight across the English Channel in the following year.2 Immediately responding to this rapid technological development, the British government set up an Advisory Committee for Aeronautics (ACA) consisting of representatives from universities, industry, and the military. The committee’s function was defined as “the superintendence of the investigations at the National Physical Laboratory and… general advice on the scientific problems arising in connection with the work of the Admiralty and the War Office in aerial construction and navigation.”3 Accordingly, an Aeronautics Division was founded in the Engineering Department of the NPL in Teddington, and Department Superintendent Thomas Stanton and his assistant Leonard Bairstow started to measure aerodynamic forces in a new wind tunnel.4

In formulating their research program from autumn of 1911, Stanton and Bairstow were influenced by George Bryan’s new book, Stability in Aviation. Bryan, an applied mathematician, proposed a general theory of stability, suggesting it as a basis for NPL wind tunnel experiments.5 To utilize Bryan’s theory, Stanton and Bairstow had to measure not only lift and resistance but also rotative moments of an airplane model caused by wind from all directions. Bairstow devised original instruments different from those suggested by Bryan and had them constructed by instrument makers at the NPL and the Cambridge Scientific Instrument Company.

Experimental data were produced by the spring of 1913. When the data were plugged into Bryan’s theoretical equation, they produced a measure of the model’s stability. From these calculations, Bairstow offered practical suggestions to airplane designers on the position and size of tail planes to maintain stable flight. Bairstow’s experimental results were summarized in several technical reports and used by Edward Busk, an aircraft designer at the Royal Aircraft Factory in Farnborough. Based on the data and the suggestions from the NPL, Busk succeeded in designing a very stable biplane, the B.E.2c, which was mass-produced during World War I.

In this intermediary role between Bryan the theoretician and Busk the practitioner, Bairstow served as a “translator” between scientists and practical engineers, a role described by historian Hugh Aitken and others.6 Bairstow was aptly called “an aeronautical form of the ‘scientific middleman.’”7

Bairstow’s stability research was taken very seriously by aeronautical engineers in Britain and abroad. On the eruption of the First World War in 1914, the British Advisory Committee for Aeronautics decided to classify all technical reports. Neutral Americans lost access to on-going aeronautical research in England. Edwin Wilson at the Massachusetts Institute of Technology became reluctant to continue his stability research for fear of duplication. When the United States entered the war in 1917, the National Advisory Committee for Aeronautics (NACA) in the United States officially requested the ACA to permit access to technical reports. The ACA discussed the matter at its main meeting and decided to open its technical results except for one subject – stability. The stability research of Bairstow and other workers was regarded as too important to share even with the Americans.

Bairstow had thus achieved a remarkable prominence for a young man. Born in 1880, he had studied at the Royal College of Science and entered the Engineering Department of the NPL in 1904. In 1917, he was elected a Fellow of the Royal Society. In the same year, Richard Glazebrook, NPL Director and ACA Chairman, asked him to assume the new post of Superintendent of the NPL’s Aerodynamics Department, which had evolved from the former Aeronautics Division. Despite this favorable offer, however, Bairstow decided to work instead for the Air Board as a scientific researcher and consultant.8 It was in this role that he would become a controversial advocate for a certain kind of aeronautical research.

Archimedes

All technologies differ from one another. They are as varied as humanity’s interaction with the physical world. Even people attempting to do the same thing produce multiple technologies. For example, John H. White discovered more than 1000 patents in the 19th century for locomotive smokestacks.1 Yet all technologies are processes by which humans seek to control their physical environment and bend nature to their purposes. All technologies are alike.

The tension between likeness and difference runs through this collection of papers. All focus on atmospheric flight, a twentieth-century phenomenon. But they approach the topic from different disciplinary perspectives. They ask disparate questions. And they work from distinct agendas. Collectively they help to explain what is different about aviation – how it differs from other technologies and how flight itself has varied from one time and place to another.

The importance of this topic is manifest. Flight is one of the defining technologies of the twentieth century. Jay David Bolter argues in Turing’s Man that certain technologies in certain ages have had the power not only to transform society but also to shape the way in which people understand their relationship with the physical world. “A defining technology,” says Bolter, “resembles a magnifying glass, which collects and focuses seemingly disparate ideas in a culture into one bright, sometimes piercing ray.” Flight has done that for the twentieth century.

Though the authors represented in this volume come from very different backgrounds, we share a concern to move beyond a fascination with origins and firsts. Instead, the essays of this book all attend to a technology forever in process, a technology modified all the way through its history: from its shifting relationship with the aerodynamic sciences to the shop-floor culture of bomber production; from the changing functions of patented mechanisms to the standards of pilot training, protocol, and disaster inquiries. Through and through, this is a book about the heterogeneous practices of aviation all the way down the line.

In some ways the technologies of flight seem remarkably stable: in looking at the earliest airplanes, the wings, ailerons, rudder, and elevators seem remarkably congruent with the same features on a 747. Yet, over the course of the twentieth century, the technologies of flight have radically altered. Consider the perspective of Hugh Dryden, the former Director of the National Advisory Committee for Aeronautics and Associate Director of the National Aeronautics and Space Administration. He used to say that he grew up with the airplane. He wrote his first paper on flight in 1910, when he was 12 and the airplane was 7. In it he argued for “The Advantages of an Airship over an Airplane,” earning an F from a prescient if harsh teacher. At the time of his death in 1965, Dryden was helping to orchestrate humankind’s first journey to an extraterrestrial body. Born before the airplane, Dryden lived to see humans fly into space.

Now, at the end of the twentieth century, humans are about to occupy an international space station. Its supporters believe that it will begin a permanent human presence in space. The Wright brothers could hardly have imagined that their primitive attempt to fly would lead within a century to a permanent presence off the Earth. The technology that they inaugurated transformed humans from gravity-bound creatures scurrying about the face of the Earth to spacefaring explorers looking back at the home planet as if it were an artifact of history. At the time of Apollo, historian Arthur Schlesinger, Jr., hazarded the guess that when future generations reflected on the twentieth century they would remember it most for the first moon landing.

Flight has defined the 20th century symbolically, spiritually, and spatially. Individual airplanes such as the workhorse DC-3, the democratic Piper Cub, the dreadful B-29, the rocket-like X-15, and the angular Stealth fighter have imprinted their shapes and their personalities on modern life. They represent our contemporary ability to deliver people, bombs, or disaster relief anywhere in the world in a matter of hours. Popular imagination has rendered the Wright brothers, Charles Lindbergh, and Chuck Yeager as quintessentially heroic individuals, icons of the human yearning to subdue nature, achieve freedom of movement, and conquer time and space. To the extent that the world has become Marshall McLuhan’s global village, flight has made it so. Communications put us in touch with each other, but airplanes put us in place.

Is a defining technology like other technologies? Or is it different? Does it obey the same rules, evince the same patterns, produce the same results? Or do defining technologies, by virtue of their powerful interaction with society, operate differently? The essays collected in this volume shed considerable light on these questions.

This volume – and the conference that launched it – began with a series of discussions between Alex Roland and Dibner Institute co-directors Jed Buchwald and Evelyn Simha. Would it not be original and fruitful, they wondered, to bring together historians of flight with a wider group of scholars and engineers from related fields – people who had not necessarily written on the history of flight? Peter Galison was recruited as a historian of science and private pilot – and together Roland and Galison began assembling the mix of historians of the technology of flight and the engineers, philosophers, sociologists, and historians who are represented here. Our great debt is to the Dibner Institute for their support of our conference from 3-5 April 1997, and the continuing interest they have had in seeing this volume come to fruition.

In addition to the individual merits of the papers gathered in this volume, we believe that collectively they shed light on the question of whether or not flight functions like other technologies. The simple answer is yes and no. The complete answer is more interesting and more provocative. Readers may find their own version of that answer in the papers that follow. Here we will attempt only to point out some of the ways in which the answer might be construed from these contributions.

First, flight may be seen as similar to other technologies. Patents represent one area in which this is true. These government charters to promote and reward innovation are often depicted as measures of inventive activity and stimulants to technological change. They might be expected to have played a significant role in the development of flight. Thomas Crouch and Alex Roland confirm this expectation, but find that it operated in unexpected ways. Crouch debunks the myth that the Wright patent choked U. S. aviation development in the period leading up to World War I. Though the Wright patent was surely unusual in its scope and impact, it did not retard development, as its opponents claimed, and it was not unique. Roland takes up the same issue where Crouch leaves it. Studying the impact of patents on airframe manufacture in the period from World War I to the present, he finds that patents were important at the outset, less so over time. This pattern is familiar in cumulative industries where foundational patents launch a new technological trajectory but then decline in relative importance.

National subsidy also shaped aviation in the same way that it has shaped other technologies, such as shipbuilding, armaments, microelectronics, and computers. Increasingly in the modern world, industrialized states have intervened in technological arenas deemed important to national security or prosperity. Aviation is no exception. Takehiko Hashimoto’s paper demonstrates the strong role of government policy in the development of British aviation, a pattern repeated in other developed European nations. Walter Vincenti describes a research project within the National Advisory Committee for Aeronautics (NACA), one of the institutional mechanisms by which the United States government subsidized aviation development. The cross-licensing agreement at the heart of Roland’s paper came into being at government behest and with the benefit of a $2 million government buy-out.

Dual-use is another characteristic that likens aviation to other technologies. It means that the technology has both military and civilian applications. From the very first, aviation has been dual-use; the Wright brothers built their plane as an end in itself, but first sought to sell it to the U. S. Army. George Lewis, the Director of Research of the National Advisory Committee for Aeronautics in the 1930s and 1940s, confessed that he could not think of any improvement in aviation that would not benefit military and civilian aviation alike. Thus the research conducted at the Army’s McCook Field in the teens and twenties, examined here in Peter Jakab’s paper, turned out to have important civilian applications. Likewise, the research that Walter Vincenti and his colleagues conducted at the NACA in the 1940s was equally applicable to the wings of military and commercial aircraft. And the production methods worked out in the mass assembly of B-17’s and B-29’s described by Robert Ferguson utterly transformed the practice of airplane assembly after 1945.

Ironically, dual-use has become less pervasive in modern aviation at just the time when the military services have focused more attention upon it.

The reason is the increasingly specialized nature of modern, high-performance military aircraft. Many of them, for example, now feature skins that resist heating at high speeds, a characteristic unnecessary on commercial aircraft. Stealth technology, one of the marvels of recent research and development, has no utility for civilian aircraft. The special stability characteristics of fighter aircraft are unique. The avionics of high-speed, low-level flight have few applications on the commercial side, nor do electronic countermeasures, ejection seats, and the ultra-high flying technologies reserved almost exclusively to reconnaissance aircraft. Of course the period before World War II had its share of military technologies with no civilian analog, such as bombsights, armaments, and carrier-landing capability. But the irony remains that civilian applications of military aeronautical technology have become more elusive just when the military and the aerospace industry have taken the greatest interest in them.

The other side of dual-use, of course, is that research and development aimed at military products often differs from that supported by the commercial market. The military usually requires higher standards of performance and reliability. Perfecting such technology may require more research and development than market forces could support. But once the technology has been perfected, it may be transferred to the commercial marketplace fairly cheaply; the overhead has already been absorbed. The classic example of this is U. S. computer development during the Cold War,3 but aviation provides a similar instance. The instrumentation developed by Frederick Suppe and his colleagues to test the performance of military aircraft could later be installed on commercial planes for a fraction of the cost. The turbofan engine development described by George Smith and David Mindell came free to the commercial manufacturers, fully paid for by the military. This phenomenon, a commonplace of United States development during the Cold War,3 seeps into the issue of national subsidy. Roland’s paper concludes that one reason for the success of commercial airframe manufacture in this country was the indirect subsidy of government research. Much of that subsidy took the form of military R&D.

Also, in its relation to science, aeronautics resembles other science-based technologies. John Anderson traces paths by which scientific knowledge has entered the realm of aeronautical engineering. Similar paths have marked the intercourse between thermodynamics and engine design, between solid-state physics and microelectronics, and between microbiology and genetic engineering. In aeronautics, as in all of these other realms, traffic moves along these paths in both directions. Just as science often provides theoretical models for better technology, so too does technological development often provide challenges to theory and new tools for scientific investigation. Walter Vincenti’s paper offers a stunning example of the way in which cross fertilization of an experimental technique from one investigation allowed a breakthrough to conceptual understanding of physical phenomena in another. George Smith and David Mindell explain how advances in metallurgy yielded titanium fan blades for more efficient engines. By exploring the texture of shop-floor life in World War II aircraft production, Robert Ferguson shows how the design process was never restricted to the “top” of the assembly process – innovation, modification, re-design occurred all the way down from initial sketches to the final stages of production. These papers thoroughly rebut the naive picture in which design and knowledge enter only at the start of a massive project.

Equally revealing are the ways in which flight is different from other technologies. First, it is more dangerous than most. Peter Galison’s paper wrests technological insight from two gripping commercial airline accidents; the imperative to identify the cause of an accident drives investigators toward a definition of agency that challenges our very understanding of technological systems and the ways in which they fail. Whether it is the test pilots in Frederick Suppe’s account of flight instrumentation or the fatal crash of Otto Lilienthal, whom Roland represents as the inspiration for the Wright brothers, disaster accompanies the failure of this technology more swiftly and surely than almost any other.

Cost also separates flight from most technologies. Deborah Douglas explores the price of passenger accommodation in the early years of airport design and construction. If customers were going to pay for air transportation, they had to pass through a site that connected everyday life in two dimensions with a technology of three dimensions. The problems were enormous and costly. Frederick Suppe opens up the world of flight instrumentation, one of the auxiliary technologies without which flying would be riskier and less understood. Walter Vincenti reveals the painstaking detail required to understand – or begin to understand – the character of supersonic flow over an airfoil. George Smith and David Mindell track the evolving relationship among compressor, turbine, and airflow that characterized the incremental development of high-bypass jet engines. The wind tunnel in which these ideas were tested cost more to design, build, operate, and staff than did the complete research and development programs in many other technologies.

The romance of flight permeates all these papers, and sets this technology apart from most others. Frederick Suppe captures it in his account of daring test flights in the desert. Even Deborah Douglas’s account of early airport design resonates with the adventure and excitement that airport designers were trying to exploit. The heroic airmanship of pilot Al Haynes and his crew in nursing United Airlines Flight 232 to a controlled crash ennobles an otherwise tragic technological failure. A technology that allows humans to “slip the surly bonds of earth” cannot help but appear romantic in comparison to the mundane tasks to which most technology is committed. Indeed, in recent years scholars have begun to historicize the romance of aviation, using the airplane as a means of exploring larger issues of twentieth century cultural history.4

Few technologies generate the infrastructure that has grown up around atmospheric flight. The Wrights achieved flight with the materials that they could haul by rail and boat from Ohio to North Carolina, supplemented by food and shelter purchased locally. Today aviation needs research and development of the kind described by Smith and Mindell, Hashimoto, Eric Schatzberg, and Vincenti; testing and instrumentation like that explored in Suppe’s paper; institutional guarantees of rights to innovation as laid out by Crouch, Roland, and Roger Bilstein;

operating infrastructure such as airports (Douglas) and accident investigation (Galison); and much more. Some of the infrastructure is private, some public; most of it now has to be coordinated internationally, so that flight can cross national borders without loss of system integrity.

Finally, atmospheric flight requires higher standards than most other technologies, in part because of the danger involved, in part because of the cost. When a single airliner can cost more than $100 million and an airport costs billions, the incentive to ensure their faultless operation is high. When a single accident can kill hundreds of people, the incentive is incalculable. Hashimoto, Schatzberg and Ferguson show the ways in which standardization entered aircraft design and testing early in the century. Suppe’s paper demonstrates how the price of standardization has risen, with ever more expensive and accurate instrumentation and ever more data points required to get the information necessary for confident operations. Galison’s inquiry into accident investigations reveals the lengths to which the government will go to root out system weakness and replace it with the standardized practice linked to higher levels of safety.

These papers also demonstrate the ways in which flight has varied from time to time and place to place. The historical literature of flight is notoriously parochial and nationalistic; indeed, it proved difficult to break that pattern in assembling the participants in this volume. But even when flight is studied comparatively and from various perspectives, it is difficult to discern international patterns over time. Rather the principal artifacts of this technology, the airplanes themselves, along with their support equipment and infrastructure, reflect the national styles and the periods in which they were generated. The reasons for this are not hard to find.

The main reason is institutional. In consonance with recent scholarship on the shaping influence of the research environment, several papers explore the ways in which differing research styles produced differing artifacts. Hashimoto takes sociologist of science Bruno Latour’s notion of inside/outside research behavior as his explicit model for understanding the impact of Leonard Bairstow on the development of British aviation in the years between the world wars. The concentration of the British on stability research and wind-tunnel modeling, and their tardiness in adopting boundary layer theory and corrections for wall interference effects, were a direct result of Bairstow’s determination to defend his research base in empirical, wind-tunnel studies. It took an international, comparative research project in the 1930s to reveal the extent to which this concentration had retarded British development. Robert Ferguson explores the specificity of engineering cultures even when the companies are producing the identical airplane. Or perhaps, as Ferguson shows, “identical” needs to be put between quotation marks: it seems that no amount of drawing, personnel exchange, or even exchange of airplanes could surmount the myriad of details that separated production at Boeing from that at Vega or Douglas.

Like Hashimoto, our commentator, David Bloor, has stressed the potential fruitfulness of a sociological reading of institutional identity, though Bloor invokes the sociologist Mary Douglas. Douglas’s idea is that rather than dichotomizing institutions, we might invoke a two-by-two matrix, so to speak: institutions are either egalitarian or hierarchical, and they are either boundary-policing or boundary-permeable. Using this four-way typology, Bloor queries our various authors as to where on such a chart they might find their “engineering cultures,” e.g., General Electric versus Pratt and Whitney or Goettingen versus Cambridge. That is, Bloor wants to know whether the various ways that engineers treat objects reflect basic sociological features of the way they treat the people with whom they work.

Also attentive to engineering culture is Walter Vincenti, who attributes the success of his research team to the creative, unstructured, eclectic laboratory environment they enjoyed at the Ames Aeronautical Laboratory of the U. S. National Advisory Committee for Aeronautics. He pictures a free association between theory and empiricism, in which individual researchers were able to bring new ideas and proposals from any source. They were measured by their efficacy in solving the problems at hand, as opposed to the doctrinaire constraints imposed by Bairstow in the British environment.

Suppe presents an entirely different research environment, instrumentation of flight testing. The principal dynamic at work here is the relationship between improving instrument technology and the ever increasing demand for more data. Instrumentation offers more and better data, but it can hardly keep pace with the demands for more data points and more precision as aircraft speed and performance improve. The research imperative is therefore not to figure out the design of better aircraft but to develop and field equipment that will keep up.

Research and development in aerodynamics took a fascinating, counterintuitive turn in the account Smith and Mindell provide of the high-bypass jet engine. While one might expect that a radical new design by one company would stimulate equally radical changes in the competition, these authors show quite the reverse took place. In a highly secretive program, General Electric stunned the aviation world with their novel 1957 single-stage, aft-mounted fan. Pratt and Whitney responded, but their counter was a similarly efficient but completely incremental two-stage fan with vastly simpler aerodynamics. In part because of novelty-related start-up problems for GE, P&W triumphed in the marketplace.

Peter Galison describes an entirely different institutional imperative. The National Transportation Safety Board (NTSB) is required by law to investigate technological failure in a particular way. Torn between seeking to understand an accident in all its complexity of contributing causes and the institutional demand to locate a more localized “probable cause,” accident investigation is a vexed enterprise. Under these constraints, the investigating team is often driven to identify point failures, especially point failures that are subject to remedy. Thus an institution that is poised at the very nexus of technological understanding, i.e., at the point where technology fails, is bound by law to view that failure narrowly and instrumentally. This can lead to great technical virtuosity and poor contextual understanding. And so, while trying to preserve a “condensed” notion of causality, the investigators time and again sought to embed the causal account in the wider spheres aimed at by psychological, organizational, and sociological approaches.

Peter Jakab captures the excitement of McCook Field in its early years. Before the U. S. Army knew what it was going to do with aviation, and before its institutional research arrangements settled into routinized patterns, McCook Field was a hothouse of innovative ideas and experiments. Distinguished researchers accepted appointment there and brought their creative energies to a field rich with promise and interest. If anything, there was too much innovation and experimentation at McCook Field in these years, with the research program seemingly running off in many different directions at once. The result was that McCook Field did not itself come to be credited with any great technological breakthroughs, but the people who worked there honed their research skills and gained invaluable experience. As an institution, it turned out to be a better training ground than a proving ground.

Even Deborah Douglas’s account of airport development in the United States suggests the powerful ways in which institutions shape technological development. Commercial passenger travel achieved market viability in the United States in the 1930s. The so-called “airframe revolution” that produced the DC-3 is most often credited. But Douglas reveals that airport design also played a role. Only when the airport came to be envisioned as a user-friendly, comfortable, safe, and aesthetically pleasing nexus between air and land travel could airlines hope to attract the passengers who would make their enterprise profitable. The American decision to make airports local institutions prodded the market toward competitive design and production that pitted cities against one another in their claims to be most progressive and advanced. The results were airports like LaGuardia in New York, which lent a unique stamp to American aviation and helped to foster development of the entire commercial enterprise.

John Anderson attests to the importance of institutions in transferring knowledge and understanding back and forth between scientists and technologists of flight. Nikolay Joukowski, the head of the Department of Mechanics at Moscow University, was the first scientist to take Otto Lilienthal’s work with gliders as a fit subject for scientific investigation. The resulting Kutta-Joukowski theorem, which revolutionized theoretical aerodynamics, gained purchase in part because of the weight of Joukowski’s reputation and his institutional setting. So too did Ludwig Prandtl’s position at Goettingen University lend credibility to his research on the boundary layer. He took up a practical problem, theorized it in a revolutionary scientific concept that transformed modern fluid dynamics, and then gave it back to practical application in his own work and that of his students on the flow of air over wings and fuselages.

The cases of Joukowski and Prandtl serve not only to illustrate the ways in which institutions have shaped the development of flight technology but also to introduce a final theme of these papers: national differences in styles of flight research. University research in Russia and Germany influenced aeronautical development long before American and British universities achieved such an impact. In fact the German style of university-based, theoretical research in aerodynamics was spread to the United States by two of Prandtl’s students. As Roger Bilstein makes clear, Max Munk went to the National Advisory Committee for Aeronautics in 1920 and developed there the innovative variable density wind tunnel for which the NACA won its first Collier Trophy. Even more significantly, Theodore von Karman accepted the invitation of Nobel laureate Robert Millikan to join the faculty at the California Institute of Technology and direct its Guggenheim Aeronautical Laboratory. From that institutional base von Karman went on to exert a formative influence on aeronautical research and development and on the policies of the United States Air Force. American aeronautical development took a more theoretical turn because of the immigration of this European, especially German, style of research.

National variations in research styles are evident in other papers as well. Eric Schatzberg reveals the impact of national tastes for materials in his discussion of the wooden airplane in the 1930s and 1940s. The United States’ preference for metal as an aircraft building material flowed from preconceptions about the modernity of aluminum, not from a judicious evaluation of the merits of wood. For equally nationalistic reasons, Canadians preferred wooden aircraft and developed them with great success during World War II. And the Americans, under the pressure of World War II, developed modes of exchange between competing airframe manufacturers that fundamentally altered the character of the industry.

Hashimoto uses the International Trials of the early 1920s to demonstrate the differences in national research styles and practices and the difficulties involved in standardization. The Trials also revealed the parochialism of the British and contributed to their movement toward continental practice. Roland demonstrates the ways in which specific national experience in the United States differentiated the impact of patent practice from that in other countries. The introduction of a patent pool in 1917 was driven by the legal logjam surrounding the Wright patent. The government intervened to buy out the Wright interests and the interests of their leading competitor, Glenn Curtiss. The resulting patent pool lasted for 58 years and distinguished United States patent experience from that of any other nation. This history cries out for a comparative study of aircraft patenting experience in other nations to see what conclusions might be drawn about the impact of patents in general and the comparative efficacy of the American model.

Bilstein’s paper is the most self-consciously international and comparative. It both reinforces and challenges the general perception of aviation as a parochial and nationalistic technology. Bilstein notes, for example, that American aeronautical development really was different from that in other countries, a fact that no doubt helps to account for America’s remarkable domination of this industry for so many years. But Bilstein also notes that ideas and innovations from other countries were constantly finding their way to America, undermining the stereotypes of native American genius that have plagued the field since the remarkable achievement of the Wright brothers.

But Bilstein’s paper also helps to point up one of the great generalizations that may be applied to this quintessential twentieth-century technology. As the century has proceeded, the technology has become more universal and homogeneous, less parochial and nationalistic. Japan is licensed to produce a version of the American F-16. American airlines fly European-manufactured Airbuses. American aircraft manufacturers mount Rolls-Royce engines on their planes. Virtually every large airplane in the world uses fundamentally the same landing gear. Airports, navigation, and ground support equipment the world over are becoming increasingly standardized. The differences between aircraft remain stark and obvious, and the variations from country to country continue to reflect idiosyncrasies of national style and infrastructure. But all this diversity persists in the midst of a general trend toward uniform and standardized technology. This, too, is a mark of the twentieth century.

Alex Roland
Peter Galison


THE EARLY HISTORY OF TURBOFANS

Early Bypass Engines

The propulsion efficiency advantage of turbofans has been known for nearly as long as turbojets themselves. In 1936, before he actually built a working turbojet, Frank Whittle patented a scheme to compress more air than was necessary for the turbine and to force it rearwards as a cold jet. Whittle wished to “gear down the jet” to make it more efficient, maintaining the overall mass-flow while reducing exhaust velocity.10 During the next decade, as Whittle developed the first successful turbojets, he patented several other configurations of bypass engines, including ones with a fan both fore and aft of the rest of the engine (he did not use the term ‘fan’ or ‘bypass’ for any of these designs).11
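Whittle’s notion of “gearing down the jet” can be made concrete with the standard Froude relation for propulsive efficiency. The short Python sketch below is an editorial illustration only; the flight speed, thrust, and mass flows are assumed round numbers, not figures from Whittle’s patents.

# Illustrative sketch (assumed numbers): for a fixed thrust, handling more air
# lets the exhaust velocity approach the flight speed, raising the Froude
# propulsive efficiency eta_p = 2 / (1 + v_jet / v_flight).

def propulsive_efficiency(v_flight, v_jet):
    """Froude efficiency of a single-stream jet."""
    return 2.0 / (1.0 + v_jet / v_flight)

def jet_velocity_for_thrust(thrust, mass_flow, v_flight):
    """Exhaust velocity needed to deliver the thrust with a given mass flow."""
    return v_flight + thrust / mass_flow

v0 = 250.0          # flight speed, m/s (assumed, high subsonic)
thrust = 50_000.0   # required thrust, N (assumed)

for mdot in (100.0, 200.0, 400.0):   # air handled, kg/s
    vj = jet_velocity_for_thrust(thrust, mdot, v0)
    print(f"mass flow {mdot:5.0f} kg/s -> v_jet {vj:5.0f} m/s, "
          f"eta_p {propulsive_efficiency(v0, vj):.2f}")

In this toy case, doubling and then quadrupling the airflow raises the propulsive efficiency from 0.50 to 0.67 and 0.80, which is precisely the gain Whittle sought in maintaining mass-flow while reducing exhaust velocity.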

Whittle was not alone among the British in putting preliminary designs of bypass engines on paper. A. A. Griffith of Rolls-Royce devised a multistage axial fan in 1941.12 Figure 4 shows a cutaway of a Metropolitan-Vickers turbofan engine from just after World War II in which the fan, consisting of two counter-rotating stages, is located downstream of the core engine, using its exhaust to drive turbine stages to which the fan blades are connected. Figure 5 shows a drawing of a De Havilland bypass engine in which the flow leaving the last stage of the axial compressor is split, with the outer portion bypassing the rest of the core engine and the inner portion proceeding on to a centrifugal compressor, then a combustor, and finally a pair of turbines. We have been unable to determine whether either of these engines was ever built and tested and, if they were, why they died in their infancy.13 Counter-rotating stages are notoriously difficult to make work in anything but the precise conditions for which they were designed – a significant drawback for an operational engine. The De Havilland engine, however, did not reach so far beyond the state of the art. From the perspective of hindsight, the main question about it is whether its compressors, combustor, and turbines performed well enough to provide the energy demanded by the bypass flow in its axial compressor.

Figure 4. Cutaway view of the Metropolitan-Vickers F-3 turbofan engine, late 1940s. The fan at the rear consists of two counter-rotating stages. [G. Geoffrey Smith, Gas Turbines and Jet Propulsion (London: Iliffe & Sons Ltd., 1955), p. 66.]

These two British engines call attention to one of the two fundamental problems in designing practical bypass engines capable of realizing their theoretical promise. For high-subsonic flight the bypass flow needs to be pressurized to a level around 1.5 times the inlet pressure. The De Havilland design employed six axial compressor stages to achieve the requisite pressure in the bypass stream. The Metro-Vick design used counter-rotation in an attempt to achieve the requisite pressure in merely two stages, saving weight, but at the risk of being unable to coordinate the flow between the two stages. Thus, one fundamental problem in designing a bypass engine for high-subsonic flight was to achieve the needed pressure rise in the bypass stream without incurring an excessive weight penalty. The later successful turbofan engines shown in Figure 2 used an aerodynamic design technology that did not exist in the late 1940s.
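The weight trade can be seen in rough arithmetic of our own (not the authors’): if each of n similar stages contributes the same pressure ratio, the per-stage ratio required for an overall ratio of about 1.5 is simply 1.5 raised to the power 1/n.

# Rough arithmetic with assumed, idealized equal stages: per-stage pressure
# ratio needed to reach an overall fan pressure ratio of about 1.5.
overall_ratio = 1.5
for n_stages in (6, 2, 1):   # De Havilland-like, Metro-Vick-like, single-stage fan
    per_stage = overall_ratio ** (1.0 / n_stages)
    print(f"{n_stages} stage(s): per-stage ratio ~ {per_stage:.2f}")
# -> about 1.07, 1.22, and 1.50 respectively: six stages are undemanding but
#    heavy, while a single stage must do what late-1940s practice could not.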


The other fundamental problem in designing a bypass engine was the need for more powerful core engines. The greater the bypass airflow, the more energy is needed to pressurize it. The core engine must generate this energy using only the air passing through it. Whether the overall engine is a turboprop, turbofan, or turbojet, its core engine consists of a gas generator that converts chemical into mechanical energy. One of the basic performance parameters of gas generators is called specific-power – the power produced per unit of airflow. The specific-power of the aircraft gas turbines of the late 1940s was low, limiting the amount of bypass airflow. As a consequence, the most anyone could hope to achieve in a bypass engine at the time was a small incremental gain over turboprops or turbojets.

Realizing the promise of bypass engines required core engines with significantly higher specific-power. Higher specific-power calls for higher overall engine compression ratios.14 In other words, to make a working bypass engine, the core engine compressor had to achieve markedly higher pressures than the engines of the late 1940s were able to do.
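The connection between specific-power and compression ratio follows from the ideal gas-generator cycle. The following sketch is a simplified, lossless illustration with temperatures we have assumed, not a reconstruction of any engine discussed here.

# Ideal-cycle sketch (assumptions ours): net specific work of a gas generator,
# i.e. power per unit of airflow, as a function of overall pressure ratio.
# Perfect gas, no component losses, fixed turbine inlet temperature.

CP, GAMMA = 1005.0, 1.4      # J/(kg K); ratio of specific heats for air
T1, T3 = 288.0, 1100.0       # compressor inlet and turbine inlet temperatures, K (assumed)

def specific_work_kj_per_kg(pressure_ratio):
    tau = pressure_ratio ** ((GAMMA - 1.0) / GAMMA)   # temperature ratio of compression
    turbine_work = CP * T3 * (1.0 - 1.0 / tau)        # per kg of air
    compressor_work = CP * T1 * (tau - 1.0)           # per kg of air
    return (turbine_work - compressor_work) / 1000.0

for pr in (4, 6, 8, 10):
    print(f"overall pressure ratio {pr:2d}: ~{specific_work_kj_per_kg(pr):4.0f} kJ per kg of air")

In this idealized case the specific work rises from roughly 220 to about 260 kJ per kilogram of airflow as the pressure ratio climbs from 4 to 10 (the optimum also depends on turbine temperature), which is the sense in which higher specific-power calls for higher compression ratios.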

The State of Axial Fan and Compressor Technology

These two problems share the common demand of achieving a pressure-ratio, in the one case across a fan and in the other across a compressor. An axial fan stage, after all, amounts to nothing but an axial compressor stage. An axial compressor stage consists of a row (or cascade) of rotating blades followed by a row of stationary blades, as shown schematically in Figure 6. Energy is added to the flow in the rotating blade row, while the stationary blade row redirects the flow and recovers the kinetic energy imparted by the rotor, in the process converting the velocity head into pressure. In contrast to a turbine stage, a compressor stage tries to make air do something that it does not want to do, namely flow against an opposing or adverse pressure gradient. The effects of the adverse pressure gradient ultimately limit the pressure-ratio that can be achieved in a single stage; above this limiting point, which varies from one airfoil type to another, irreversible thermodynamic losses become excessive. This is why the axial compressors in the engines shown in the earlier figures all had several consecutive stages. It is also why more than one stage was used to pressurize the bypass flow in both the De Havilland and the Metro-Vick engines.

Figure 6. Schematic of an axial compressor stage consisting of a row of rotor blades followed by a row of stators. Air flows from left to right. In the velocity triangles, w designates air velocity relative to the rotor, c designates absolute air velocity, and U is the velocity of the rotor. [P. Hill and C. Peterson, Mechanics and Thermodynamics of Propulsion (Reading, Mass.: Addison-Wesley, 1965), p. 245.]
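For readers who want the stage relations behind Figure 6 in compact form, the following is a hedged sketch using the Euler turbomachinery equation; the blade speed, swirl change, and stage efficiency are assumed values typical of subsonic stages of the period, not data from the engines above.

# Sketch (assumed numbers): the rotor raises stagnation enthalpy by the Euler
# work dh0 = U * (c_theta2 - c_theta1); the stage pressure ratio follows from
# the stagnation temperature rise, discounted by a stage efficiency.

CP, GAMMA = 1005.0, 1.4
T01 = 288.0                     # stage inlet stagnation temperature, K (assumed)

def stage_pressure_ratio(blade_speed, delta_c_theta, eta_stage=0.88):
    euler_work = blade_speed * delta_c_theta        # J per kg of air
    dT0 = euler_work / CP                           # stagnation temperature rise, K
    return (1.0 + eta_stage * dT0 / T01) ** (GAMMA / (GAMMA - 1.0))

# A blade speed of 250 m/s and a 60 m/s increase in swirl give a ratio in the
# modest range the text describes for single subsonic stages:
print(f"stage pressure ratio ~ {stage_pressure_ratio(250.0, 60.0):.2f}")   # about 1.17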

Despite its critical role, axial compressor technology was in its infancy in the 1940s. Not only was the pressure-ratio that could be achieved in any one stage quite low, but also “many early axial compressors worked more as stirring devices”15 rather than achieving compression. Although a base point in axial compressor design technology had emerged in 1945, lending some rationality to the design process, designers remained restricted in the performance demands they could place on the compressor.16 These restrictions in turn limited the performance that one might hope to achieve in a bypass engine by limiting both the pressure-ratio per stage achievable in a fan and the specific-power achievable in the core engine. These same restrictions were limiting the performance of turbojet engines as well. The military was rapidly converting to turbojet-powered aircraft in the late 1940s, with increasing emphasis on supersonic flight. Turbojets for supersonic flight required higher specific-power than the turbojets already flying could provide.

Because of its role in dictating performance limitations, no component received more research and development effort between 1945 and 1955 than the axial compressor. This effort had three goals: (1) to achieve considerably higher overall compressor pressure-ratios at high thermodynamic efficiency; (2) to increase the predictability of axial compressors, especially at off-design operating conditions, so that fewer compressor designs would turn out to be unacceptable on test; and (3) to increase the pressure-ratio achievable in a single stage so that higher overall compression ratios could be achieved without exacting a penalty in the thrust-to-weight ratio of the engine.17 Although most of this research and development effort was applicable to fans as much as to compressors, the turbofan engine largely disappeared from view during these years. R&D funds went into developing better turbojets, not into transforming the promise of the bypass concept into successful engines.

From the perspective of hindsight, however, this was appropriate even from the point of view of the bypass engine, for the gains that were achieved in gas generator performance in the late 1940s and early 1950s ended up contributing crucially to the first turbofan engines to enter flight service. Moreover, as we shall see below, the advances that were made in compressor aerodynamic design technology during these same years contributed no less crucially to the aerodynamic design of the fans of these engines.

A New Conception of Progress

With these questions in mind, we ask: why did the turbofan engine, once it emerged, so totally dominate commercial aviation? P&W’s JT8D low-bypass turbofan engine, which went into service in 1964, is still powering Douglas’s DC-9 and Boeing’s 727 and 737. High-bypass turbofans, like P&W’s JT9D, GE’s CF6, and Rolls-Royce’s RB.211, have powered virtually all wide-body aircraft since the late 1960s. (The high-bypass turbofans required once more the same sort of steps in core-engine specific-power and fan tip Mach number as the initial low-bypass engines had, and hence they need a separate analysis.85) The economics of the turbofan engine helped shape commercial jet aviation and stabilize it technologically and economically, putting air travel within the reach of a much larger segment of the public than would otherwise have been possible. In other words, the turbofan engine has dominated high-subsonic flight because these two were mutually constitutive and emerged in parallel. Until the latter became important, the former did not make sense, technically or economically.

The turbofan responded to the decline of the notion that commercial jet flight would continually progress along the axis of speed. Much of the technology underlying turbofans had been developed for entirely different purposes. Compressors received a great deal of attention in both industry and government, but none of that effort specifically sought a turbofan; it focused on turbojets. Immediately after World War II it seemed obvious that the continued progress of commercial flight would move, like military flight, toward higher and higher speeds. The only real customer for aircraft gas turbine engines before the mid-1950s, especially in the U.S., was the military, and it rightly pursued speed, and hence supersonic flight, above all else. More than a decade of supersonic flight and jet engines was required before it became clear that commercial air travel would follow many pathways, but increasing speed would not be one of them. Until the late 1950s, engineers simply did not see high-subsonic flight as a technical, or commercial, frontier. (Military flight leveled off in speed as well: the aircraft that holds the world speed record, even today, was developed in the years just before and after 1960.) High-subsonic jet flight emerged as a dominant category, and ever-increasing speed declined in importance as a category of problems, simultaneously with GE’s and P&W’s efforts to develop turbofan engines. Progress scarcely came to an end at this point, however.

The turbofan episode illustrates a dramatic yet subtle shift – we might even say a turning – in the parameters of progress in the narrative of aviation. The ever-increasing advance of the raw, physical parameter of speed ended in the 1950s, as commercial aviation settled into the high-subsonic regime. As an indicator of this shift, consider the proliferation of performance parameters in this story: stage pressure-ratio, thrust-to-weight ratio, propulsion efficiency, specific fuel consumption, cost per passenger mile. Significant progress was made in each of these measures with the emergence of the turbofan and in the years since, but they are less visible to the naked eye, less viscerally physical than speed. (An engineer, though, might argue that thrust-to-weight ratio is as “natural” a physical parameter as Newtonian mass and velocity.) Today’s airliners, to the untrained eye, look much like the 707 of four decades ago; for comparison, consider that forty years before the 707 came the biplanes of World War I. Of course, appearance is misleading. Stability in configuration masks substantial changes in engines (as we have shown), as well as in wing design, materials, control systems, and numerous other systems. Hence, the progress narrative in commercial aviation remains, but embedded in newer, seemingly more artificial measures that define “success” for advanced technologies, measures that embody social assumptions in machinery. The significance of the turbofan engine, and its intricate history, derives from this turning: from outward parameters of physics to internal parameters of systems.

This turning is evident not only in the broad parameters which evaluate aircraft performance, but also in the fine-grained texture of engineering practice. Engineers, in the story we have told, relied heavily on non-dimensional parameters of performance. Vincenti has characterized such “dimensionless groups” as useful in relating the performance of models to the performance of working prototypes.86 We here identify two additional categories of such parameters. One, typified by the diffusion factor, provided independent variables for empirical correlations. Such parameters enable a great deal of complexity to be digested into a form that allows designers to interpolate and extrapolate reliably from past experience. Another category consisted of performance parameters like the pressure-ratio and efficiency of compressor and fan stages and the thrust-to-weight ratio and specific fuel consumption of engines. These parameters provide a generic way of characterizing the state of the art and advances in it; by decoupling issues of performance from issues of implementation, they allow such thoroughly different approaches to turbofan design as GE’s and P&W’s to be meaningfully compared. One way in which engineering research has contributed to the turbofan has been through identifying and honing parameters that enable past successes to be repeated and that open the way to processes of continuous improvement.
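Lieblein’s diffusion factor, mentioned above as a correlating parameter, can serve as a concrete illustration of the first category. The formula is the standard one from compressor cascade practice; the numerical values below are assumptions of ours, chosen only to show how a single dimensionless number summarizes blade-row loading.

# Diffusion factor D = 1 - V2/V1 + dV_theta / (2 * sigma * V1), where V1 and V2
# are velocities relative to the blade row at inlet and exit, dV_theta the change
# in their tangential components, and sigma the solidity (blade chord / spacing).
# Empirical correlations of cascade data show losses rising sharply for D above
# roughly 0.6, so this one number flags an overloaded blade row.

def diffusion_factor(v1, v2, dv_theta, solidity):
    return 1.0 - v2 / v1 + dv_theta / (2.0 * solidity * v1)

# A moderately loaded blade row (assumed figures):
print(f"D = {diffusion_factor(v1=260.0, v2=190.0, dv_theta=90.0, solidity=1.0):.2f}")
# -> about 0.44, comfortably inside the empirical limit.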