
THE UNSTABLE SEED OF DESTRUCTION

We now come to a point where we can begin to answer the question addressed at the outset. A history of a nearly punctiform event, conducted with essentially unlimited resources, yields a remarkable document. Freed by wealth to explore at will, the NTSB could mock up aircraft or recreate accidents with sophisticated simulators. Forensic inquiries into metallurgy, fractography, and chemical analysis have allowed extraordinary precision. Investigators have tracked documents and parts back two decades, interviewed hundreds of witnesses, and in some cases ferreted out real-time photographs of the accident in progress. But even when the evidence is in, the trouble only just begins. For deep in the ambition of these investigations lie contradictory aims: inquiries into the myriad of necessary causes evaporate any single cause or single cluster of causes that might fully explain the event. At the same time, the drive to regain control over the situation, to present recommendations for the future, to lodge moral and legal responsibility, all urge the narrative towards a condensed causal account. Agency is both evaporated and condensed in the investigative process. Within this instability of scale the conflict between undefinable skill and fixed procedure is played out time and again. On the flightdeck and in the maintenance hangars, pilots and technicians are asked at one and the same time to use an expansive, protocol-defying judgment and to follow restricted set procedures. Both impulses – towards diffused and localized accounts – are crucial. We find in systemic or network analysis an understanding of the connected nature of institutions, people, philosophies, professional cultures, and objects. We find in localization the prospect of immediate and consequential remediation: problems can be posed and answered by pragmatic engineering. To be clear: I do not have the slightest doubt that procedural changes based on accident reports have saved lives.
At the same time, it is essential to recognize in such inquiries and in technological-scientific history more generally, the inherent strains between these conflicting explanatory impulses.

In part, the impulse towards condensation of cause, agency, and protocol in the final “probable cause” section of the accident report emerges from an odd alliance among the sometimes competing groups that contribute to the report. The airplane industry itself has no desire to see large segments of the system implicated, and pushes for localization both to solve problems and to contain litigation. Following United’s 232 crash, General Electric (for example) laid the blame on United’s fluorescent penetrant inspection and ALCOA’s flawed titanium.57 Pilots have a stake in maintaining the status of the captain as fully in control of the flight: their principal protest in the 232 investigation was that the FAA’s doctrine of “extremely improbable” design philosophy was untenable. In particular, the pilots lobbied for a control system for wide body planes that would function even if all hydraulic fluid escaped.58 But just in the measure that the pilots remain authors of the successful mission, they also have their signatures on the accident, and their recommendation was aimed at ensuring that a local fix be secured that would keep their workplace control uncompromised. Government regulators, too, have an investment in a regulatory structure aimed at local causes admitting local solutions. Insofar as regulations protect safety, the violation of regulations enters as a potential causal element in the explanation of disaster. Powerful as this confluence of stakeholders can be in focusing causality to a point, it is not the whole of the story.

Let us push further. In the 1938 Civil Aviation Act that enjoined the Civil Aeronautics Authority to create accident reports, it is specified that the investigation should culminate in the ascription of a “probable cause” of the accident.59 Here “probable cause” is a legal concept, not a probabilistic one. Indeed, while probability plays a vital role in certain sectors of legal reasoning, “probable cause” is not one of them. Instead, “probable cause” issues directly from the Fourth Amendment of the U. S. Constitution, prohibiting unreasonable searches and seizures, probable cause being needed for the issuance of a warrant. According to Fourth Amendment scholar Wayne R. LaFave, the notion of probable cause is never defined explicitly either in the Amendment itself or in any of the federal statutory provisions; it is a “juridical construct.” In one case of 1925, the court ruled that if a “reasonably discreet and prudent man would be led to believe that there was a commission of the offense charged,” then, indeed, there was “probable cause justifying the issuance of a warrant.”60 Put bluntly in an even older (1813) ruling,

probable cause was not “proof” in any legally binding sense; required were only reasonable grounds for belief. “[T]he term ‘probable cause’ … means less than evidence which would justify condemnation.”61

Epistemically and morally, probable cause inculpates but does not convict. It points a finger and demands explanation of the evidence. Within the framework of accidents, however, in only the rarest of cases does malicious intent figure in the explanation, and this very circumstance brings forward the elusive notion of “human error.” Now while the notion of probable cause had its origins in American search and seizure law, international agreements rapidly expanded its scope. Delegates from many countries assembled in Chicago at the height of World War II to create the Convention on International Civil Aviation. Within that legal framework, in 1951 the Council of the International Civil Aviation Organization (ICAO) adopted Annex 13 to the Convention, an agreement specifying standards and practices for aircraft accident inquiries. These were not binding, and considerable variation existed among participating countries.

Significantly, though ICAO documents sometimes referred to “probable cause” and at other times to “cause,” their meanings were very similar – not surprising since the ICAO reports were so directly modeled on the American standards. ICAO defined “cause,” for example, in 1988 as “action(s), omission(s), event(s), condition(s), or a combination thereof, which led to the accident or incident.”62 Indeed, ICAO moved freely in its documents between “cause” and “probable cause,” and for many years ICAO discussion of cause stood extremely close to (no doubt modeled on) the American model.63 But to understand fully the relation between NTSB and ICAO inquiries, it would be ideal to have a case where both investigations inquired into a single crash.

Remarkably, there is such an event, precipitated by the crash of a Simmons Airlines/American Eagle Avions de Transport Regional-72 (ATR-72) on 31 October 1994 in Roselawn, Indiana. On one side, the American NTSB concluded that the probable cause of the accident was a sudden and unexpected aileron hinge reversal, precipitated by a ridge of ice that accumulated beyond the de-ice boots. This, the NTSB investigators argued, took place 1) because ATR failed to notify operators how freezing precipitation could alter stability and control characteristics and associated behaviors of the autopilot; 2) because the French Direction Générale de l’Aviation Civile failed to exert adequate oversight over the ATR-72; and 3) because the French Direction Générale de l’Aviation Civile failed to provide the Federal Aviation Administration with adequate information on previous incidents and accidents with the ATR in icing conditions.64 Immediately the French struck back: it was not the French plane, they argued, it was the American crew. In a separate volume, the Bureau Enquêtes-Accidents submitted, under the provisions of ICAO Annex 13, a determination of probable cause that, in its content, stood in absolute opposition to the probable cause adduced by the National Transportation Safety Board. As far as the French were concerned, the deadly ridge of ice was due to the crew’s prolonged operation of their flight in a freezing drizzle beyond the aircraft’s certification envelope – with an airspeed and flap configuration altogether incompatible with the Aircraft Operating Manual.65

In both American and French reports we find the same instability of scale that we have already encountered in Air Florida 90 and United 232. On one hand, both Roselawn reports zeroed in on localized causes (though the Americans fastened on a badly designed de-icing system and the French on pilot error), and both reports pulled back out to a wider scale as they each pointed a finger at inadequate oversight and research (though the Americans fastened on the French Direction Générale and the French on the American Federal Aviation Administration). For our purposes, adjudicating between the two versions of the past is irrelevant. Rather I want to emphasize that the tension between localized and diffused causation remains a feature of all these accounts, even though some countries conduct their inquiries through judicial rather than civil authority (and some, such as India, do both). Strikingly, many countries, including the United States, have become increasingly sensitive to the problematic tension between condensed and diffused causation – contrast, for example, the May 1988 and July 1994 versions of Annex 13:

May 1988: “State findings and cause(s) established in the investigation.”

July 1994: “List the findings and causes established in the investigation. The list of causes should include both the immediate and the deeper systemic causes.”66

Australia simply omits a “cause” or “probable cause” section. And in many recent French reports – such as the one analyzing the January 1992 Airbus 320 crash near Strasbourg – causality as such has disappeared. Does this mean that the problem of causal instability has vanished? Not at all. In the French case, the causal conclusion is replaced by two successive sections: one, “Mechanisms of the Accident,” aimed specifically at local conditions, and the second, “Context of Use” (“Contexte de l’exploitation”), directed the reader to the wide circle of background conditions.67 The drive outwards and inwards now stood, explicitly, back to back. Scale and agency instability lie deep in the problematic of historical explanation, and they survive even the displacement of the specific term “cause.”

There is enormous legal, economic, and moral pressure to pinpoint cause in a confined spacetime volume (an action, a metal defect, a faulty instrument). A frozen pitot tube, a hard alpha inclusion, an ice-roughened wing, a failure to throttle up, an overextended flap – such confined phenomena bring closure to catastrophe, restrict liability and lead to clear recommendations for the future. Steven Cushing has written effectively, in his Fatal Words, of phrases, even individual words, that have led to catastrophic misunderstandings.68 “At takeoff,” with its ambiguous reference to a place on the runway and to an action in process, lay behind one of the greatest aircraft calamities when two jumbo jets collided in the Canary Islands. Effectively if not logically, we want the causal chain to end. Causal condensation promises to close the story. As the French Airbus report suggests, over the last twenty-five years the accident reports have reflected a growing interest in moving beyond the individual action, establishing a mesoscopic world in which patterns of behavior and small-group sociology could play a role. In part, this expansion of scope aimed to relieve the tension between diagnoses of error and culpability. To address the dynamics of the small “cockpit culture,” the Safety Board, the FAA, the pilots, and the airlines brought in sociologists and social psychologists. In the Millsian world of CRM that they collectively conjured, the demon of unpredictable action in haste, fear or boredom is reduced to a problem of information transfer. Inquire when you don’t know, advocate when you do, resolve differences, allocate resources – the psychologists urged a new set of attitudinal corrections that would soften the macho pilot, harden the passive one and create coordinated systems. 
Information, once blocked by poisonous bad attitudes, would be freed, and the cockpit society, with its benevolent ruling captain, assertive, clear-thinking officers, and alert radio-present controllers, would outwit disaster. As we saw, under the more sociological form of CRM, it has been possible, even canonical, to re-narrate crashes like Air Florida 90 and United 232 in terms of small-group dynamics. But beyond the cockpit scale of CRM, sociologists have begun to look at larger “organizational cultures.” Diane Vaughan, for example, analyzed the Challenger launch decision not in terms of cold O-rings or even in the language of managerial group dynamics, but rather through organizational structures: faulty competitive, organizational, and regulative norms.69 And James Reason, in his Human Error, invoked a medical model in which ever-present background conditions located in organizations are like pathogens borne by an individual: under certain conditions disease strikes. Reason’s work, according to Barry Strauch, Chief of the Human Performance Division at the NTSB, had a significant effect in bolstering attention to systemic, organizational dynamics as part of the etiology of accidents.70

Just as lines of causation radiate outwards from individual actions through individuals to small collectives, so too is it possible to pull the camera all the way back to a macroanalysis that puts in narrative view the whole of the technological infrastructure. Roughly speaking, this was Charles Perrow’s stance in his Normal Accidents.71 For Perrow, given human limitations, it was simply inevitable that tightly coupled, complex, dangerous technologies have component parts that interact in unforeseen and threatening ways.

Our narration of accidents slips between these various scales, but the instability goes deeper in two distinct ways. First, it is not simply that the various scales can be studied separately and then added up. Focusing on the cubic millimeter of hard alpha inclusion forces us back to the conditions of its presence, and so to ALCOA, Titanium Metals Inc., General Electric, or United Airlines. The alpha inclusion takes us to government standards for aircraft materials, and eventually to the whole of the economic-regulative environment. This scale-shifting undermines any attempt to fix a single scale as the single “right” position from which to understand the history of these occurrences. It even brings into question whether there is any single metric by which one can divide the “small” from the “large” in historical narration.

Second, throughout these accident reports (and I suspect more generally in historical writing), there is an instability between accounts terminating in persons and those ending with things. At one level, the report of United 232 comes to rest in the hard alpha inclusion buried deep in the titanium. At another level, it fingers the maintenance technician who did not see fluorescent penetrant dye glowing from a crack. Read one way, the report on Air Florida flight 90 could be interpreted as spotlighting the frozen pitot tube that provided a false thrust indication; read another way, the 737’s collision impact into the Fourteenth Street Bridge was due to the pilot’s failure to de-ice adequately, to abort the takeoff, or to firewall the throttle at the first sign of stall. Protocol and judgment stood in a precarious and unstable equilibrium. What to the American investigators of the Roselawn ATR-72 crash looked like a technological failure appeared to the French team as a human failing.

Such a duality between the human and the technological is general. It is always possible to trade a human action for a technological one: failure to notice can be swapped against a system failure to make noticeable. Conversely, every technological failure can be tracked back to the actions of those who designed, built, or used that piece of the material world. In a rather different context, Bruno Latour and Michel Callon have suggested that the non-human be accorded equal agency with the human.72 I would rather bracket any fixed division between human and technological in our accounts and put it this way: it is an unavoidable feature of our narratives about human-technological systems that we are always faced with a contested ambiguity between human and material causation.

Though airplane crashes are far from the world of the historian of science and technology or that of the general historian interested in technology, the problems that engaged the attention of the NTSB investigators are familiar ones. We historians also want to avoid ascribing inarticulate confusion to the historical actors about whom we write – we seek a mode of reasoning in terms that make sense of the actors’ understanding. We try to reconstruct the steps of a derivation of a theorem or the construction of an object just as NTSB investigators struggle to recreate the Air Florida 90’s path to the Fourteenth Street Bridge. We interpret the often castaway, fragmentary evidence of an incomplete notebook page or overwritten equation; they argue over the correct interpretation of “really cold” or “that’s not right.”

But the heart of the similarity lies elsewhere, not just in the hermeneutics of interpretation but in the tension between the condensation and diffusion of historical explanation. The NTSB investigators, like historians, face a world that often doesn’t make sense; and our writings seek to find in it a rational kernel of controllability. We know full well how interrelated, how deeply embedded in a broader culture scientific developments are. At the same time we search desperately to find a narrative that at one moment tracks big events back to small ones, that hunts a Copernican revolution into the lair of Copernicus’s technical objections to the impure equant. And at another moment the scale shifts to Copernicus’s neo-Platonism or his clerical humanism.73 At the micro-scale, we want to find the real source, the tiny anomaly, asymmetry, or industrial demand that eats at the scientific community until it breaks open into a world-changing discovery. Value inverted, from the epoch-defining scientific revolution to the desperate disaster, catastrophe too has its roots in the molecular: in a badly chosen word spoken to the ATC controller, in a too sharp application of force to the yoke, in a tiny, deadly alpha inclusion that spread its flaw for fifteen thousand cycles until it tore a jumbo jet to pieces.

At the end of the day, these remarkable accident reports time and time again produce a double picture printed once with the image of a whole ecological world of causation in which airplanes, crews, government, and physics connect to one another, and printed again, in overstrike, with an image tied to a seed of destruction, what the chief investigator of flight 800 called the “eureka part.” In that seed almost everyone can find satisfaction. All at once it promises that guilty people and failed instruments will be localized, identified, confined, and that those who died will be immortalized through a collective immunization against repetition through regulation, training, simulation. But if there is no seed, if the bramble of cause, agency, and procedure does not issue from a fault nucleus, but is rather unstably perched between scales, between human and non-human, and between protocol and judgment, then the world is a more disordered and dangerous place. These reports, and much of the history we write, struggle, incompletely and unstably, to hold that nightmare at bay.

NACA TRANSONIC AND SUPERSONIC COMPRESSOR RESEARCH: 1945-1955

The need to use axial, instead of centrifugal, compressors in order to attain high levels of thrust in aircraft gas turbine engines had become increasingly clear by the end of World War II.29 Unlike centrifugal compressors, however, axial compressors were proving to be difficult to design with consistency. The base point in aerodynamic design technology that had emerged by 1945 allowed efficient axial compressor stages to be designed30, but only under the restriction that the aerodynamic demands made on the compressor remained modest. The design method in question was based to a considerable extent on empirical data from tests of some airfoil profiles in cascade31 over a limited aerodynamic range. Specifically, the pressure-rise, turning, and thermodynamic losses had been determined for these airfoils in cascade as functions of incidence conditions in two-dimensional wind-tunnel tests. Compressor blades were then formed by selecting and stacking a sequence of these airfoil profiles radially on top of one another, as if the air flows through the blade row in a modular series of radially stacked two-dimensional blade passages. Achieving more ambitious levels of compressor performance was going to require this method to be extended, if not modified, and this in turn was going to require a substantial research effort, including extensive wind-tunnel tests of a wider range of airfoils in cascade. The engine companies conducted some research to this end – e.g., P&W carried out their own wind-tunnel airfoil cascade tests. Nevertheless, the main body of research fell to government laboratories like the National Gas Turbine Establishment in England and the National Advisory Committee for Aeronautics in the U. S.
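The stacking method described above can be caricatured in a few lines of code. This is purely an illustrative sketch, not a historical artifact: the profile name, the cascade numbers, and the radial stations are all invented for the example, and real cascade correlations involved far more variables (solidity, stagger, Mach number) than incidence alone.

```python
# Sketch of the 1945-era design method: treat an axial-compressor blade as a
# radial stack of independent two-dimensional airfoil sections, each
# characterized by empirical cascade data. All values here are hypothetical.

# Hypothetical cascade data: for a given airfoil profile, wind-tunnel tests
# tabulated flow turning (degrees) and a loss coefficient versus incidence.
CASCADE_DATA = {
    "profile-A": {
        # incidence_deg: (turning_deg, loss_coefficient)
        -4.0: (18.0, 0.035),
         0.0: (22.0, 0.025),
         4.0: (25.0, 0.040),
    },
}

def section_performance(profile, incidence_deg):
    """Look up turning and loss for one 2-D section at a given incidence,
    interpolating linearly between tabulated test points."""
    table = sorted(CASCADE_DATA[profile].items())
    for (i0, p0), (i1, p1) in zip(table, table[1:]):
        if i0 <= incidence_deg <= i1:
            f = (incidence_deg - i0) / (i1 - i0)
            turning = p0[0] + f * (p1[0] - p0[0])
            loss = p0[1] + f * (p1[1] - p0[1])
            return turning, loss
    raise ValueError("incidence outside tested range")

def stack_blade(profile, radial_stations):
    """Form a blade by stacking 2-D sections: evaluate each radial station
    independently, as if the air stayed in its own 2-D blade passage."""
    return [section_performance(profile, inc) for _, inc in radial_stations]

# Hypothetical blade: (radius_m, local incidence in degrees) at three stations.
blade = stack_blade("profile-A", [(0.30, -2.0), (0.35, 0.0), (0.40, 2.0)])
```

The key simplification, as the text notes, is that each radial station is evaluated in isolation; it was precisely this modular 2-D picture that more ambitious compressor performance would strain.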

The applied research program on axial compressors carried out by the NACA in the decade following World War II was especially important in advancing the state of the art. This program involved a number of diverse efforts, most of them located at the Lewis Flight Propulsion Laboratory, in Cleveland, though a few at the Langley Aeronautical Laboratory, in Virginia, as well. While this research program deserves a historical study unto itself, we will confine ourselves here primarily to results that ended up contributing crucially to the design of the first successful turbofan engines. We say “ended up contributing” because none of this work appears at the time to have been aimed at the design of turbofan engines. The goal throughout was to advance the performance of axial compressors in what were then standard aircraft gas turbines.

CONCLUDING REMARKS

I have attempted, using a case study of wings at supersonic speed, to show how research engineers built their knowledge of aerodynamics in the days before electronic computers. As in other advanced fields of engineering, the process involved the comparative, mutually illuminating use of experiment and theory, neither of which could reproduce exactly the actual problem. In theoretical work, limited ability at direct numerical computation required physical approximations and assumptions – in the present case, the customary inviscid gas, plus thin wings at small angles of attack for three-dimensional problems – to bring the calculations within the scope of analytical techniques then available. For the two-dimensional problem of airfoils, the linear, thin-wing approximation could be improved upon; even then, however, the inviscid assumption could be circumvented only by means of qualitative concepts and quantitative estimates from the independent boundary-layer theory. On the experimental side, the effect of the inevitable support-body interference could be estimated in some aspects, but then only roughly. The inaccuracies accompanying experimental measurement, which I have not gone into, also had to be considered. As in much engineering research, experimental ingenuity, theoretical capability, and analytical insight formed essential parts of the total process.

The material here illustrates clearly two parts of the threefold makeup of modern engineering research (and much design and development) pointed out in the introduction. All three parts – theory, experiment, and use – appeared in varying degree in my recent paper on the early development of transonic airfoil theory, research in which I participated later at Ames.17 To quote Constant again in regard to still another example from aeronautics, “the approaches were synergistic: discovery or design progressed faster when the three modes interacted.”18 Use in flight could not be involved in the wing research here, since supersonic aircraft were not yet available; the principle still appeared in the motivation for the study, however. It is this kind of synergism, as much as anything else, I believe, that provides the power of modern engineering generally.

My transonic paper contains a further concept pertinent here: the view of an engineering theory as an artifact – more precisely, a tool – to use in testing the performance of another artifact.19 Here the linear theory was used to test wings on paper in much the same way as the wind tunnel served to test them in a physical environment. In the present application, both tools were put to use in research for knowledge that might someday be employed in design of aircraft. They are also employed regularly side by side in the typical design process. This view – of theory and experiment as analogous artifacts for both research and design – I find useful in thinking analytically about modern engineering. It helps me to focus on theory and experiment in a parallel way in sorting out the synergistic interaction pointed to above. (Use might be looked at similarly, though I have not thought the matter through.)

As pointed out earlier, the wing research did not achieve anything broad and reliable enough to be included under the “theoretical design methods” mentioned in the introduction. Whether and to what extent the research contributed to the accompanying “general understanding” and “ways of thinking” is difficult to know; this would depend on what the audience took away from my New York talk and on how widely and thoroughly our reports were read and thought about. The story does exemplify, however, the kinds of things that make up those categories.20

“Ways of thinking” in my view comprises more or less structured procedures, short of complete calculative methods, for thinking about and analyzing engineering problems. As appears here, aeronautical engineers had long found it useful to regard the aerodynamic force on a wing in terms of lift, center of lift, and drag. This division has the virtue, among other things, of relegating the influence of viscosity, and hence the need to take it into account as a major factor, primarily to drag. Such division has been the practice for so long that engineers dealing with wings take it as almost natural. Designers of axial turbines and compressors, however, because the airfoil-like blades of their machines operate in close proximity to one another, think of the forces on them rather differently. A second example in the present work is the manner of accounting for the various performance characteristics of a wing in terms of the interplay of the inviscid pressure distribution and the viscous boundary layer, both of which can be analyzed, to a first approximation, independently. This procedure too had been around for some time, but the present example, by being fairly clear, may have added something. I have found it useful, at least, in teaching.

“General understanding” consists of the shared, less structured understandings and notions – the basic mental equipment – that engineers carry around to deal with their design and research problems. This and ways of thinking are perhaps best seen as separated, indistinctly bounded portions of a continuum rather than discrete categories. At the time of the present work, the difference in propagation of pressure signals between subsonic and supersonic flow had been understood for many years, and the concept of the Mach cone was becoming well known. Its consequences for the flow over wings were being explored, and the benefits and problems of sweepback were topics of widespread research and discussion. A feeling was developing in at least the research portion of the aerodynamic community that in some semiconscious way we “understood” something of the realities of supersonic flow. In our work at Ames, we contributed to this understanding in a small degree and advanced the knowledge of the powers and limitations of linear theory. After three years of living with supersonic wing problems, our group had acquired some of the mental equipment needed to understand and deal with such problems; the necessary ideas had become incorporated into our technical intuition. Indications later materialized that some of this was picked up from our reports by the aerodynamic community, but how much is anyone’s guess. It is these kinds of knowledge that I see under the rubric “general understanding.”

Other concerns appear here that are treated at some length in my transonic paper.21 I mention them briefly for the reader who may wish to look into them further:

(1) Our story provides examples of both the experimental and theoretical aspects of what scientist-cum-philosopher Michael Polanyi called “systematic technology,” which I take to be the same as what scholars and engineers currently speak of as engineering science. This, in Polanyi’s words, “lies between science and technology” and “can be cultivated in the same way as pure science” (Polanyi’s emphasis) but “would lose all interest and fall into oblivion” if the device or process to which it applies should for some reason cease to be found useful by society.22

(2) Our research at Ames, by requiring as many people as it did, illustrates how engineering advance is characteristically a community activity. I subscribe wholeheartedly to Edward Constant’s contention that communities committed to a given practical problem or problem area form “the central locus of technological cognition” and hence a community of practitioners provides “a primary unit of historical analysis.”23 Here the community we have examined existed entirely at Ames, where our wind-tunnel group depended critically also on the laboratory’s machine-shop, instrumentation, and electrical-maintenance sections. Externally, however, through our reports, my New York talk, and visitors who came to consult us, we were at the same time becoming increasingly a part of the international supersonic research-and-design community that was then forming.

(3) The personal motivation for some in our group came from the fact that the work was part of the job from which they earned a living. For people with greater responsibility, the work offered intellectual and experimental challenge and excitement, heightened by the potential utility of the results – typical research-engineering incentives. (The necessary administrative motive had come from discussions about our section’s overall program with our research superior, the chief of the laboratory’s High-Speed Research Division; the choice was fairly obvious, however, given our new wind tunnel and the existing state of knowledge.) Motivation of the NACA as a governmental institution flowed presumably from its desire to maintain its competitive position vis-a-vis other countries in supplying knowledge to the aircraft industry for design of supersonic airplanes, should they prove practical. Motivation overall was thus a complex mix.

(4) The laboratory’s institutional context for research could scarcely have been improved. Supervision, which was by engineers who had done (or, in the case of our section’s division chief, was still doing) research, was informed, supportive, and free of pressure; interaction with other research sections of the laboratory was encouraged. Skilled service groups provided support when called upon. My fellow research engineers and I didn’t realize how fortunate we were.

In closing, I would make one more point. The process we have seen is now in one respect a thing of the past. Thanks to large-scale digital computers, designers and research engineers today can calculate the flow over complicated shapes in detail without the inviscid assumption, mathematical linearization, or any other approximation. To the categories of physical experiment, analytical theory, and actual use, we can thus add a kind of direct “numerical experiment” as a fourth instrument both in our search for aerodynamic design knowledge and in design itself.24 The resort to mathematical analysis in the way seen here is thus no longer essential. Our ability to incorporate turbulence and turbulent boundary layers into direct calculations, however, still leaves something to be desired. Where such phenomena are important, which includes most practical problems in aerodynamics, comparison between numerical and physical experiment still plays a role. Such comparison can also be important in instances of great geometrical complexity, which computers encourage aerodynamic designers to attempt.

The foregoing statements, I must emphasize, have been entirely about aerodynamics; it should not be assumed that they apply to engineering generally. In fields where analytical and numerical methods are not so advanced, experiment and use may still predominate. Overall, the situation is still very mixed. Other fields and details of the present case aside, however, the point I would emphasize for the topic of the workshop is this: To understand the evolution of flight in the twentieth century, tracing the nature and evolution of research and knowledge may be as necessary as is the study of aircraft and the people and circumstances behind them.

EPILOGUE

The work we have followed found echo years later at the renowned Lockheed Skunk Works in southern California. In 1975, Richard Scherrer, one of the test engineers in the Ames research, headed a Skunk Works group engaged in preliminary design that would lead to the F-117A “Stealth Fighter.”25 Mathematical studies by one of Scherrer’s group had suggested that the military goal of negligible radar reflection might best be attained by a shape made up of a small number of suitably oriented flat panels. Scherrer’s memory of the Ames tests encouraged him to believe that such a startlingly unorthodox shape might in fact have acceptable aerodynamic performance. His faceted flying wing, laid out along the lines of the double-wedge triangular wings of figure 10, became known to his skeptical Skunk Works colleagues as the “Hopeless Diamond.” The idea, however, proved sound. The largely forgotten research from Ames thus contributed 30 years later to cutting-edge technology that could not have been imagined when the research was done. As in human affairs generally, serendipity plays a role in engineering.

McCOOK FIELD AND THE BEGINNINGS OF MODERN FLIGHT RESEARCH

In March of 1927, in B. Franklin Mahoney’s small San Diego manufacturing plant, the construction of Charles Lindbergh’s Spirit of St. Louis began. Less than three months later, this modest little monoplane touched off a burst of aeronautical enthusiasm that would serve as a catalyst for the nascent American aircraft industry. Just when the first bits of wood and metal that would become the Spirit of St. Louis were being fashioned into shape, another project of significance to the history of American aeronautics commenced. This was the dismantling of the experiment station of the U. S. Air Service’s Engineering Division at McCook Field, Dayton, Ohio.


Figure 1. Aerial view of the Engineering Division’s installation at McCook Field, Dayton, Ohio.


P. Galison and A. Roland (eds.), Atmospheric Flight in the Twentieth Century, 45-66. © 2000 Kluwer Academic Publishers.

For ten years, this bustling 254-acre installation was the site of an incredible breadth of aeronautical research and development activity. By the mid-1920s, however, the Engineering Division, nestled within the confines of the Great Miami River and the city of Dayton, had literally outgrown its home, McCook Field. In the spring of 1927, the 69 haphazardly constructed wooden buildings that housed the installation were torn down, and the tons of test rigs, machinery, and personal equipment were moved to Wright Field, the Engineering Division’s new, much larger site several miles down the road.1 The move to Wright Field would be followed by further expansion in the 1930s with the addition of Patterson Field. In 1948, these two main sites were formally combined to create the present Wright-Patterson Air Force Base, one of the world’s premier aerospace R&D centers.

Although hardly an event equal to Lindbergh’s epic transatlantic flight, the shutdown of McCook Field offers a useful historical vantage point from which to reflect upon the beginnings of American aerospace research and development. In the 1920s, before American aeronautical R&D matured in the form of places such as Wright-Patterson AFB, basic research philosophies and the roles of the government, the military, and private industry in the development of the new technology of flight were being formulated and fleshed out. Just how research and manufacture of military aeronautical technology would be organized, how aviation was to become a part of overall national defense, and how R&D conducted for the military would influence and be incorporated into civil aviation were still wide open questions. The resolution of these issues, along with the passage of several key pieces of regulatory legislation,2 was the foundation of the dramatic expansion of American aviation after 1930. Lindbergh’s flight was a catalyst for this development, a spark of enthusiasm. But the organization of manufacture and the refinement of engineering knowledge and techniques in this period were the substantive underpinnings of future U. S. leadership in aerospace.

The ten-year history of McCook Field is a rich vehicle for studying these origins of aerospace research and manufacture in the United States. The facility was central to the emergence of a body of aeronautical engineering practices that brought aircraft design out of dimly lit hangars and into the drafting rooms of budding aircraft manufacturers. Further, McCook served as a crossroads for three of the primary players in the creation of a thriving American aircraft industry – the government, the military, and private aircraft firms.

A useful way to characterize this period is as the “adolescence” of American aerospace development. The decade after the Wrights’ invention of the basic technology in 1903 was dominated by bringing aircraft performance and reliability to a reasonable level of practicality. One might think of this era as the “gestation,” or “birth,” of aeronautics. To continue the metaphor, it can be argued that by the 1930s aviation and aeronautical research and development had reached early “maturity.” The extensive and pervasive aerospace research establishment of the later twentieth century, with its interconnections to industry and government, was in place in recognizable form by this time. It was in the years separating these two stages of development, the late teens and 1920s, that the transition took place from rudimentary flight technology supported by minimal resources to sophisticated R&D carried out by professional engineers and technicians in well-organized institutional settings. In this period of “adolescence,” aeronautical research found its organizational structure and direction, aeronautical engineering practices and knowledge grew and became more formalized, and the relationship between this emerging research enterprise and manufacturing was established. McCook Field was a nexus of this process. In the modest hangars and shops of the Engineering Division, not only were the core problems of aircraft design and performance pursued, but research was also energetically engaged on the wide range of related equipment and technologies that today are intimately associated with the field of aeronautics. The catch-all connotation of “aerospace technology” that undergirds our modern use of the term took shape in the 1920s at facilities such as McCook. Moreover, the administrators and engineers at McCook were at the center of the debate over how the fruits of this research should be incorporated into the burgeoning American aircraft industry and into national defense policy.
In large measure, the structure of the United States’ aerospace establishment that matured after World War II came of age in this period, when aerospace was in adolescence.

There were of course several other key centers of early aeronautical R&D beyond McCook Field, most notably the National Advisory Committee for Aeronautics and the Naval Aircraft Factory. Both of these government agencies had significant resources at their command and made important contributions to aeronautics. My focus on McCook is not to suggest that these other organizations were peripheral to the broader theme of the origins of modern flight research. They were not. McCook does, however, as a case study, present a somewhat more illuminating picture than the other facilities because of the broader range of activities conducted there. Moreover, NACA and the Naval Aircraft Factory are the subjects of several scholarly and popular books. The story of McCook Field remains largely untreated by professional historians. If nothing else, this presentation should demonstrate the need for additional study of this important installation.3

As is often the case, a temporary measure taken in time of emergency ends up serving a permanent function after the crisis has subsided. This was true of the Engineering Division at McCook Field. Established as a stopgap facility to meet some very specific needs when the United States entered World War I, McCook remained in existence after the war and developed into an important research center for the still young technology of flight. (“McCook Field” quickly became the unofficial shorthand reference for the facility and was used interchangeably with “Engineering Division.”)

Heavier-than-air aviation formally entered the American military in 1907 with the creation of an aeronautical division within the U. S. Army Signal Corps.4 In 1909, the Army purchased its first airplane from Wilbur and Orville Wright for $30,000.5 With the acquisition of several others, the Signal Corps began training pilots and exploring the military potential of aircraft in the early teens. Even with these initial steps, however, there was little significant American military aeronautical activity before World War I.

A seemingly ubiquitous feature of human conflict throughout history is the entrepreneur who, when others are weighing the geopolitical and military factors of an impending war, sees a golden opportunity for financial gain. The First World War is a most conspicuous example. In that war, there is likely no better case of extreme private profit at the expense of the government war effort than the activities of the Aircraft Production Board. In the midst of this financial legerdemain, McCook Field was born.

After the United States declared its involvement in the war and the Aircraft Production Board was set up, control of Army aviation quickly settled in Dayton, Ohio. Howard E. Coffin, a leading industrialist in the automobile engineering field, was put in charge of the APB. Coffin appointed to the board another powerful leader of the Dayton-Detroit industrial circle, Edward A. Deeds, general manager of the National Cash Register Company.6 Deeds was given an officer’s commission and headed up the Equipment Division of the aviation section of the Signal Corps. This gave him near complete control over aircraft production.

Earlier, in 1911, Deeds had begun to organize his industrial interests with the formation of the Dayton Engineering Laboratories Company (DELCO). His partners included Charles Kettering and H. E. Talbott. In 1916, when European war clouds were drifting toward the United States, Deeds and his DELCO partners, along with Orville Wright, formed the Dayton-Wright Airplane Company in anticipation of large wartime contracts.7

By the eve of the American declaration of war, Coffin and Deeds had the framework for a government supported aircraft industry in place, organized around their own automotive, engineering, and financial interests and connections. Carefully arranged holding companies obfuscated any obvious conflict of interest, while Coffin and Deeds administered government aircraft contracts with one hand and received the profits from them with the other.8 Having orchestrated this grand profit-making venture in the name of making the world safe for democracy, Coffin crowned the achievement with a rather pretentious comment in June of 1917:

We should not hesitate to sacrifice any number of millions for the sake of the more precious lives which the expenditures of this money will save.9

An easy statement of conviction to make, coming from someone who stood to reap a significant portion of those “any number of millions.”

Ambitious military plans for thousands of U. S.-built aircraft10 quickly pointed to the need for a centralized facility to carry out the design and testing of new aircraft, the reconfiguration of European airframes to accept American powerplants, and to perform the developmental work on the much lauded Liberty engine project. The Aircraft Production Board was concerned that a “lack of central engineering facilities” was delaying production and requested that “immediate steps be taken to provide proper facilities.”11 Here again, Edward Deeds was at the center of things, succeeding at maneuvering government money into his own pocket.

The engineers of the Equipment Division suggested locating a temporary experiment and design station at South Field, just outside Dayton. This field, not so coincidentally, was owned by Deeds and used by the Dayton-Wright Airplane Company. Charles Kettering and H. E. Talbott, Deeds’ partners, objected to the idea, arguing that they needed South Field for their own experimental work for the government contracts already awarded to Dayton-Wright. Kettering and Talbott suggested a nearby alternative, North Field.12

Found acceptable by the Army, this site was also owned by Deeds, along with Kettering. Deeds conveyed his personal interest in the property to Kettering, who in turn signed the field over to the Dayton Metal Products Company, a firm founded by Deeds, Kettering, and Talbott in 1915. In terms arranged by Deeds, Dayton Metal Products leased North Field to the Army beginning on October 4, 1917, at an initial rate of $12,000 per year.13

As the lease was being negotiated, the Aircraft Production Board adopted a resolution renaming the site McCook Field in honor of the “Fighting McCooks,” a family that had distinguished itself during the Civil War and had owned the land for a long period prior to its acquisition by Deeds.14

Thus, the creation of McCook Field took place amidst a series of complex financial and bureaucratic dealings against a backdrop of world war. The basic result was the centralization of American aeronautical research and production, both financially and physically, in the hands of this tightly integrated, Dayton-based industrial group. During the war, the Aircraft Production Board and the people who controlled it would direct American aeronautical research and production. The issue of the individual roles of government and private industry in aviation, however, would re-emerge and continue to be addressed in the postwar decade. The engineering station at McCook Field would be a principal arena for this process.

The experimental facility at McCook was almost as well known for its numerous reorganizations as it was for the research it conducted. Shortly after the American declaration of war, the meager airplane and engine design sections that comprised the engineering department of the Signal Corps’ aviation section were consolidated and expanded into the Airplane Engineering Department. Headed by Captain Virginius E. Clark, this department was under the Signal Corps’ Equipment Division that Edward Deeds administered.15 The aviation experiment station at McCook would be continually restructured and compartmentalized throughout the war. It officially became known as the Engineering Division in March 1919 when the entire Air Service was totally reorganized.16

The Army’s aeronautical engineering activity in Dayton began even before the facilities at McCook were ready. With wartime emergency at hand, Clark and his people started work in temporary quarters set up in Dayton office buildings. By December 4, 1917, construction at McCook had progressed to the point where Clark and his team could take up residency. Always intended to be a temporary facility, the buildings were simple wooden structures with a minimum of conveniences. They were cold and drafty in the winter and hot and vermin-infested in the summer. A variety of flies, insects, and rodents were constant research companions.17 Upkeep and heating were terribly expensive and the slapdash wooden construction was an ever-present fire hazard.18

In spite of these less than ideal working conditions, the station immersed itself in a massive wartime aeronautical development program. It was quickly realized that if the United States’ aviation effort was to have any impact in Europe at all, it would have to limit attempts at original designs and concentrate on re-working existing European aircraft to suit American engines and production techniques. This scheme, however, proved to be nearly as involved as starting from scratch because of the differences between American mass-production methods and European practice.

During World War I, European manufacturing techniques still involved a good deal of hand crafting. Engine cylinders, for example, were largely hand-fitted, a handicap that became very evident when the need to replace individual cylinders arose at the battle front. Although the production of European airframes was becoming increasingly standardized, each airplane was still built by a single team from start to finish.

American mass production, by contrast, had by this time largely moved away from such hand crafting in many industries. During the nineteenth century, mass production of articles with interchangeable parts became increasingly common in American manufacture. Evolving within industries such as firearms, sewing machines, and bicycles, production with truly interchangeable parts came to fruition with Henry Ford’s automobile assembly line early in the twentieth century.19

By 1917, major American automobile manufacturers were characterized by efficient, genuine mass production. When the U. S. entered World War I, it was hoped that a vast air fleet could be produced in short order by adapting American production techniques and facilities already in place for automobiles to aircraft. The most notable example of this auto-aero production crosslink was the highly touted Liberty engine project.20

Figure 2. The main design and drafting room at McCook.

Figure 3. A biplane being load tested in the Static Testing Laboratory at McCook.

If U. S. assembly line techniques were to be effectively employed, however, accurate, detailed drawings of every component of a particular airplane or engine were required. Consequently, when the engineers at McCook began re-working European designs, huge numbers of production drawings had to be prepared. To produce the American version of the British De Havilland DH-9, for instance, approximately 3000 separate drawings were made. This was exclusive of the engine, machine guns, instruments, and other equipment apart from the airframe. Another principal re-design project, the British Bristol Fighter F-2B, yielded 2500 production drawings for all the parts and assemblies.21 As a result, the time saved re-working European aircraft to take advantage of American assembly line techniques, rather than creating original designs, was minimal.

In addition to adopting assembly line type production, the McCook engineers developed a number of other aids that helped transcend cut-and-try style manufacture. For example, a systematic method of stress analysis using sand bags to determine where and how structures should be reinforced was devised. Also, a fairly sophisticated wind tunnel was constructed enabling the use of models to determine appropriate wing and tail configurations before building the full-size aircraft. (This was the first of two tunnels. The more famous “Five-Foot Wind Tunnel” would be built in 1922.) These and other design tools began to transform the staff at McCook from mere airplane builders into aeronautical engineers.

In the end, even with all the effort to gear up for mass production, American industry produced comparatively few aircraft,22 and did so at a very high cost to the government. But this was due more to corruption in the administration of aircraft production than to the techniques employed.23 Still, the efforts of the engineers at McCook Field were not fruitless. They contributed to bringing aviation into the professional discipline of engineering that had been developing in other fields since the late nineteenth century. Although the American aeronautical effort had little impact in Europe, the approach adopted at McCook was an important long term contribution to the field of aeronautical engineering and aircraft production. It was, in the United States at least, the bridge over which homespun flying machines stepped into the realm of truly engineered aircraft.

Even though it was only intended to serve as a temporary clearinghouse for the wartime aeronautical build up, McCook Field did not close down after hostilities ended. In fact, it was in the postwar phase of its existence that the station made its most notable contributions. Colonel Thurman Bane took over command from Virginius Clark in January 1919, and under his leadership McCook expanded into an extremely wide-ranging research and development center. During the war, the facility was primarily involved with aircraft design and production problems. After, the Engineering Division continued to design aircraft and engines, but its most significant achievements were in the development of related equipment, materials, testing rigs, and production techniques that enhanced the performance and versatility of aircraft and aided in their manufacture. Virtually none of the thirty-odd airplanes designed by McCook engineers during the 1920s were particularly remarkable machines. (Except, perhaps, for their nearly uniform ugliness.) But in terms of related equipment, materials, and refinement of aeronautical engineering knowledge, the R&D at McCook was cutting edge. The list of McCook firsts is lengthy. The depth and variety of projects tackled by the Engineering Division made it one of the richest sources of engineering research in its day.

Among the most significant contributions made by the Engineering Division were those in the field of aero propulsion. The Liberty engine was a principal project during the war and after. Although fraught with problems early in its development, in its final form the Liberty was one of the best powerplants of the period. It was clearly the single most important technological contribution of the United States’ aeronautical effort during World War I. In addition, it powered the Army’s four Douglas World Cruisers that made the first successful around-the-world flight in 1924.

The Liberty engine was only part of the story. As early as 1921, the Engineering Division had built a very successful 700 hp engine known as the Model W, and was at work on a 1000 hp version.24 These and other engines were developed in what was recognized as the finest propulsion testing laboratory in the country. It featured several very large and sophisticated engine dynamometers. The McCook engineers also built an impressive portable dynamometer mounted on a truck bed. Engine and test bed were driven up mountainsides to simulate high altitude running conditions.25

The Engineering Division had a particularly strong reputation for its propeller research. Some of the most impressive test rigs anywhere operated at McCook. In fact, one of the earliest, first set up in 1918, is still in use at Wright-Patterson AFB. High speed whirling tests were done to determine maximum safe rotation speeds, and water spray tests were conducted to investigate the effects of flying in rain storms. Extensive experimentation with all sorts of woods, adhesives, and construction techniques was also performed. In addition, some of the earliest work with metal and variable pitch propellers was carried out at McCook. Propulsion research also included work on superchargers, fuel systems, carburetors, ignition systems, and cooling systems. Experimental work with ethylene-glycol as a high temperature coolant that allowed for the reduction in size of bulky radiators was another significant McCook contribution in this field.26

Aerodynamic and structural testing were other key aspects of the Engineering Division’s research program. Alexander Klemin headed what was called the Aeronautical Research Department. Klemin had been the first student in Jerome Hunsaker’s newly established aeronautical engineering course at MIT. So successful had Klemin been that he succeeded Hunsaker as head of the aeronautics program at MIT. When the United States entered the war, he joined the Army and went to McCook.27


Figure 4. The propulsion research at McCook was particularly strong. One of these early propeller test rigs is still in use today at Wright-Patterson Air Force Base.


Figure 5. The propeller shop hand-crafted propellers of all varieties for research and flight test purposes.

Klemin’s work during and after the war centered around bringing theory and practice together in the McCook hangars. The Engineering Division’s wind tunnel work was a prime example. The tunnel built during World War I was superseded by a much larger tunnel built in 1922. Known as the “five foot tunnel,” it was a beautiful creation built up of lathe-turned cedar rings. The McCook tunnel was 96 feet in length and had a maximum smooth airflow diameter of five feet, hence the name.28 Although the National Advisory Committee for Aeronautics’ variable density tunnel completed the following year was the real breakthrough instrument in the field,29 the McCook tunnel provided important data and helped standardize the use of such devices for design purposes.

Among the activities of the Aeronautical Research Department were the famous sand loading tests. Under Klemin’s direction this method of structural analysis was refined to a high degree. Although the NACA became the American leader in aerodynamic testing with its variable density tunnel, McCook led the way in structural analysis.30

Materials research was another area in which the Engineering Division was heavily involved. Great strides were made in their work with aluminum and magnesium alloys. These products found important applications in engines, airframes, propellers, airship structures, and armament. In 1921, the Division was at work on this country’s first all-duraluminum airplane.31 Materials research also included developmental work on adhesives and paints, fuels and lubricants, and fabrics, tested for strength and durability for applications in both aircraft coverings and parachutes.32

One of the most often-cited achievements at McCook was the perfecting of the free-fall parachute by Major Edward L. Hoffman. First used at the inception of human flight by late-eighteenth century balloonists, the parachute remained a somewhat dormant technology until after World War I. Prior to Hoffman’s work, bulk and weight concerns overrode the obvious life-saving potential of the device. Hoffman experimented with materials, various shapes and sizes for the canopy, the length of the shroud lines, the harness, vents for controlling the descent, all with an eye toward increased efficiency and reliability. His systematic approach was characteristic of the emerging McCook pattern.


Figure 6. The Flight Test hangar at McCook, showing the range of aircraft types being evaluated by the Engineering Division.


Figure 7. The Five-Foot Wind Tunnel, built in 1922, had a maximum airflow speed of 270 mph.

 

Figure 8. The “pack-on-the-aviator” parachute design that was perfected at McCook.

After numerous tests with dummies, Leslie Irvin made the first human test of Hoffman’s perfected chute on April 28, 1919. Designated as the Type A, this was a modern-style “pack-on-the-aviator” design with a ripcord that could be manually activated during free fall. Though the test was completely successful, parachutes did not become mandatory equipment for U. S. Army airmen until 1923, a few months after Lt. Harold Harris was saved by one when his Loening monoplane broke apart in the air on October 20, 1922. Harris’ exploit was the first instance of an emergency use of a free-fall parachute by a U. S. Army pilot.33

Aerial photography was another of the related fields that was significantly advanced during the McCook years. The Air Service had initiated a sizeable photo reconnaissance program during the war. Work in this field continued during the 1920s, and it became one of the most noted contributions of the Engineering Division. Albert Stevens and George Goddard were the central figures of aerial photography and mapping at McCook. Goddard made the first night aerial photographs and developed techniques for processing film on board the aircraft. In 1923, Stevens, with pilot Lt. John Macready, made the first large-scale photographic survey of the United States from the air. Stevens had particular success with his work in high altitude photography. By 1924, Air Service photographers were producing extremely detailed, undistorted images from altitudes above 30,000 feet, covering 20 square miles of territory.34

In addition to the obvious military value of aerial photography, this capability was also being employed in fields such as soil erosion control, tax assessment, contour mapping, forest conservation, and harbor improvements. The fruits of the research at McCook often extended beyond purely aeronautical applications.

The demands of the aerial photography work were also an impetus to other areas of aeronautical research. The need to carry cameras higher and higher stimulated propulsion technology, particularly superchargers. Flight clothing and breathing devices were similarly influenced. Extreme cold and thin air at high altitudes resulted in the development of electrically heated flight suits, non-frosting goggles, and oxygen equipment.35

Several important contributions in the fields of navigation and radio communication that would help spur civil air transport were developed at McCook. The first night airways system in the United States was established between Dayton and Columbus, Ohio. This route was used to develop navigation and landing lights, boundary and obstacle lights, and airport illumination systems. Experimentation with radio beacons and improved wireless telephony was also part of the program. These innovations proved especially valuable when the Department of Commerce inaugurated night airmail service. Advances in the field of aircraft instrumentation included improvements in altimeters, airspeed indicators, venturi tubes, engine tachometers, inclinometers, turn-and-bank indicators, and the earth induction compass, to name just a few. Refinement of meteorological data collection also made great strides at McCook. The development of such equipment was essential for the creation of a safe, reliable, efficient, and profitable commercial air transport industry.36

McCOOK FIELD AND THE BEGINNINGS OF MODERN FLIGHT RESEARCH

Figure 9. An example of the mapping produced by the aerial mapping photography program conducted by the Engineering Division at McCook Field.

Another significant economic application of aeronautics that saw development at McCook was crop dusting. The advantages of releasing insecticide over wide areas by air compared to hand spraying on the ground were obvious. In the summer of 1921, when a serious outbreak of catalpa sphinx caterpillars occurred in a valuable catalpa grove near Troy, Ohio, the opportunity to demonstrate the effectiveness of aerial spraying presented itself. A dusting hopper designed by E. Dormoy was fitted to a Curtiss JN-6. Lt. Macready flew the airplane over the affected area at an altitude of about 30 feet as he released the insecticide. He accomplished in a few minutes what normally would have taken days.37

Of course, McCook Field was a military installation, and a good deal of their research focused on improving and expanding the uses of aircraft for war. Perhaps the most significant long term contribution in this area made by the Engineering Division was their work with the heavy bomber. In the early twenties, General William “Billy” Mitchell, assistant chief of the Air Service, began to vociferously promote aerial bombardment as a pivotal instrument of war. The Martin Bomber was the Army’s standard bombing aircraft at the time. The Engineering Division worked with the Glenn L. Martin Company to re-design the aircraft, but were unable to meet General Mitchell’s requirements for a long range, heavily loaded bomber.

In 1923, the Air Service bought a bomber designed by an English engineer named Walter Barling. Spanning 120 feet and powered by six Liberty engines, the Barling Bomber was the largest airplane yet built in America. So big and heavy was the craft that it could not operate from the confined McCook airfield. Consequently, the Engineering Division had it transported by rail to the nearby Fairfield Air Depot to conduct flying tests. First flown by Lt. Harold Harris in August of 1923, the Barling Bomber proved largely unsuccessful. It was a heavy, ungainly craft that never lived up to expectations. Nevertheless, it in part influenced the Air Service, in terms of both technology and doctrine, toward strategic bombing as a central element of the application of air power.38

Complementary to the development of military aircraft was, of course, armament. McCook engineers turned out a continuous stream of new types of gun mounts, bomb racks, aerial torpedoes, machine gun synchronization devices, bomb sights, and armament handling equipment. Even experiments with bullet proof glass were conducted. The advances in metallurgy that were revolutionizing airframe and engine construction were also being employed in the development of lightweight aircraft weaponry.39

Another distinct avenue of aeronautical research that saw at least limited development at McCook was vertical flight. George de Bothezat, a Russian emigre who worked on the famous World War I Ilya Muromets bomber, designed a workable helicopter for the U. S. Army in the early 1920s. Built in total secrecy, the complex maze of steel tubing and rotor blades was ready for testing on December 18, 1922. In its first public demonstration the craft stayed aloft for one minute and 42 seconds and reached a maximum altitude of eight feet. Flight testing continued during 1923. On one occasion it carried four people into the air. Although it met with some success, de Bothezat’s helicopter did not live up to its initial expectations and the project was eventually abandoned.40 Still, the vertical flight research, like the heavy bomber, demonstrates McCook’s pioneering role in numerous areas of long range importance.

Equally important as conducting research is, of course, dissemination of the results. Here again the Engineering Division’s efforts are noteworthy. During the war, the McCook Field Aeronautical Reference Library was created to serve as a clearinghouse for all pertinent aeronautical engineering literature and a repository for original research conducted at the station. By war’s end, the library contained approximately 5000 domestic and foreign technical reports, over 900 reference works, and had subscriptions to 42 aeronautical and technical periodicals. All of the material was cataloged, cross-indexed, and made available to any organization involved in aeronautical engineering. During the war, an in-house periodical called the Bulletin of the Experimental Department, Airplane Engineering Division was published. After 1918, at the urging of the National Advisory Committee for Aeronautics, the Division increased distribution of the journal to over 3000 engineering societies, libraries, schools, and technical institutes. Through these instruments, the research of the Engineering Division was documented and disseminated. McCook proved to be an invaluable information resource to both the military and private manufacturing firms throughout the period.41

In addition, in 1919, the Air Service set up an engineering school at McCook. Carefully selected officers were trained in the rudiments of aircraft design, propulsion theory, and other related technical areas. This school still operates today as the Air Force Institute of Technology.42

The Engineering Division’s role as a technical, professional information resource was complemented by its efforts to keep aviation in the public eye. During the 1920s, Dayton became almost as famous for the aerial exploits of the McCook flying officers as it was for being the home of Wilbur and Orville Wright. Speed and altitude records were being set on a regular basis. These flights were in part integral to the research, but they had a public relations component as well. With the postwar wind down of government contracts, private investment had to be cultivated. The Engineering Division saw a thriving private aircraft industry that could be tapped in time of war as essential to national security. The publicity garnered from record-setting flights was in part intended to draw support for a domestic industry.

There were hundreds of celebrated flying achievements that originated with the Engineering Division, but two events in particular brought significant notoriety to McCook Field and aviation. In 1923, McCook test pilots Lt. Oakley G. Kelly and Lt. John A. Macready made the first non-stop coast-to-coast flight across the United States. Their Fokker T-2 aircraft was specially prepared for the flight by the Engineering Division. Kelly and Macready departed from Roosevelt Field, Long Island, on May 2, and completed a successful flight with a landing in San Diego, California, in just under 27 hours.43

The following year, the Air Service decided to attempt an around-the-world flight. Again, preparations and prototype testing were done at McCook. Four Douglas-built aircraft were readied and on April 6, 1924, the group took off from Seattle, Washington. Only two of the airplanes completed the entire trip, but it was a technological and logistical triumph nonetheless. The achievement received international acclaim and was one of the most notable flights of the decade.44

This cursory discussion of McCook Field research and development from propulsion to public relations is intended to be merely suggestive of the rich and diversified program administered by the Engineering Division of the U. S. Air Service. McCook is something of an historical Pandora’s box. Once looked into, the list of technological project areas is almost limitless. One program dovetails into the next, and all were carried out with thoroughness and sophistication.

One obvious conclusion that can be drawn from this brief overview is the powerful place McCook Field holds in the maturation of professional, high-level


Figure 10. Among the more famous aircraft prepared at McCook Field were the Fokker T-2 (center), which made the first non-stop U. S. transcontinental flight in 1923, and the Verville – Sperry Racer (right), which featured retracting landing gear.

aeronautical engineering in the United States, and its influence on the embryonic American aircraft industry. Beginning with the World War I experience, aircraft were now studied and developed in terms of their various components. Systematic testing and design had replaced cut-and-try. The organized approach to problems that characterized the Engineering Division’s research program became a model for similar facilities. Many who would later become influential figures in the American aircraft industry were “graduates” of McCook. They took with them the experience and techniques learned at the small Dayton experiment station and helped create an industry that dominated World War II and became essential thereafter. While the Engineering Division was by no means the singular source of aeronautical information and skill in this period, a review of their research activity and style clearly illustrates their extensive contributions to aeronautical engineering knowledge, as well as the formation of the professional discipline. In these ways aeronautics was transformed from simply a new technology into a new field, a new arena of professional, economic, and political significance.

The crosslink between McCook and private industry involved more than the transfer of technical data and experienced personnel. There was also a philosophical component at work of great importance with respect to how future government sponsored research would be conducted. Military engineers and private aircraft manufacturers agreed that a well developed domestic industry was in the best interest of all concerned. Yet, each had very different ideas regarding how it should be organized and what would be their individual roles.

McCook Field had, of course, been intimately tied to private industry since its creation. Its initial purpose was to serve as a clearinghouse for America’s hastily gathered aeronautical production resources upon the United States’ entry into World War I. Although the installation had a military commander, it was under the administration of industrial leader Edward Deeds.

During the war, when contracts were sizeable and forthcoming, budding aircraft manufacturers had few problems with the Army’s involvement in design and production. By 1919, however, when heavy government subsidy dried up and contracts were canceled, the interests of the Engineering Division and private manufacturers began to diverge. Throughout the twenties, civilian industry leaders and the military engineers at McCook exchanged accusations concerning responsibilities and prerogatives.

Even though government contracts were severely curtailed after the war, the military was still the primary customer for most private manufacturers. Keenly aware of this, the Army attempted to follow a course that would aid these still relatively small, hard pressed private firms, as well as facilitate their needs for aircraft and equipment. They continually reaffirmed their position that a thriving private industry that could be quickly enlisted in time of national emergency was an essential component of national defense. In a 1924 message to Congress, President Coolidge commented that “the airplane industry in this country at the present time is dependent almost entirely upon Government business. To strengthen this industry is to strengthen our National Defense.”45 Such statements reflected the “pump-priming” attitude toward the aircraft industry that was typical throughout the government, not only among the military. By providing the necessary funds to get private manufacturers on sound footing, government officials felt they were at once bolstering the economy as well as meeting their mandate of providing national security.46

These sentiments were backed up with action. For example, in 1919, Colonel Bane, head of the Engineering Division, recommended an order be placed with the Glenn L. Martin Company for fifty bombers. The Army needed the airplanes and such an order would at least cover the costs of tooling up and expanding the Martin factory. In addition to supplying aircraft, it was believed that this type of patronage would help create a “satisfactory nucleus,…, capable of rapid expansion to meet the Government’s needs in an emergency.”47 On the surface, it seemed like a beneficial approach all the way around.

This philosophy, however, met with resistance from the civilian industry. They liked the idea of government contracts, but they felt the Army was playing too large a role in matters of design and the direction the technology should go. They were concerned private manufacturers would become slaves to restrictive military design concepts as a result of their financial dependency on government contracts. Centralizing the design function of aircraft production within the military, they argued, would stifle originality and leave many talented designers idle.48 Moreover, they believed that in a system where private firms merely built aircraft to predetermined Army specifications, they would be in a vulnerable position. They feared the Army would take the credit for successful designs and that they would be blamed for the failures.49 The civilian industry hoped to gain government subsidy, but wanted to do their own developmental work and then provide the Army with what they believed would best serve the nation’s military needs.

The Engineering Division’s response to this philosophical divergence was twofold. First, they asserted that Army engineers were in the best position to assess the Air Service’s needs and having them do the design work was the most efficient way to build up American air defenses. They claimed civilian designers sacrificed ease of production and maintenance for superior flight performance. Key to a military aircraft construction program, it was argued, were designs that were simple enough to mass produce and then maintain in the field by minimally-trained mechanics. When other performance parameters are the primary goal, complexity and expense often creep into the final product. Although performance factors such as speed and maneuverability were certainly important to the Army, utility and practicality remained higher priorities. This difference in outlook was among the principal reasons why the Engineering Division did not want to give up their design and development prerogatives.50

The other divisive issue was the conduct of basic research. The Engineering Division stressed the crucial nature of this type of work with a new technology such as aeronautics. They were concerned that private industry, particularly in light of its troubled financial situation, would be reluctant to undertake fundamental research due to its frequently indefinite results and prohibitive costs. They would, understandably, focus on projects that promised fairly immediate financial return. Leon Cammem, a prominent New York engineer, skillfully summarized the Army’s position in an article that appeared in The Journal of the American Society of Mechanical Engineers. He concluded that “it is obvious that if aeronautics is to be developed in this country there must be some place where investigations into matters pertaining to this new art can be carried on without any regard to immediate commercial returns.” He suggested that place should be McCook Field.51

Throughout the 1920s, the civilian industry assailed the government, and the Engineering Division in particular, for attempting to undercut what they saw as their role in the development of this new field of technological endeavor. Although the military always had the upper hand in the McCook era, industry leaders managed to keep the issues on the table. Pressures on the industry eased somewhat in the 1930s because a sizeable commercial aviation market was emerging and gave private manufacturers a greater degree of financial autonomy. Yet, battles over research and decision making prerogatives continued to arise whenever government contracts were involved. Although the dollar amounts are higher and the technological and ethical questions more complex, many of the organizational issues of modern, multi-billion-dollar aerospace R&D are not new. The historical point of significance is that it was in the 1920s that such organizational issues were first raised and began to be sorted out. Again, the notion of a field in adolescence, finding its way, establishing its structural patterns for the long term clearly presents itself. A look at the formative years of the American aircraft industry and government-sponsored aeronautical research shows that these organizational debates were an early feature of aviation in the United States, and that the Engineering Division at McCook Field was an intimate player in this history. Given this, and McCook’s countless contributions to aeronautical engineering, it is perhaps only a slight overstatement to suggest that the beginnings of our current-day aerospace research establishment lie in a small piece of Ohio acreage just west of Interstate 75 where today, among other things, the McCook Bowling Alley now resides.

Diffusing Knowledge – The Compressor “Bible”

One product of the NACA research program was a three-volume Confidential Research Memorandum, issued in 1956, often referred to as the “Compressor Bible” in the industry.32 These volumes presented a complete semi-empirical method for designing axial compressors achieving levels of performance far beyond the standard of the mid-1940s. Subsequent advances notwithstanding, including the advent of computer-based analytical techniques in the mid-1950s, this design method remained in use for at least the next quarter century, if not still today. Strikingly, however, while the “bible” often mentions turboprops and turbojets, and it expressly lays out compressor design requirements for both of them, it makes no mention of turbofans.33

The empirical component of the NACA design method was based primarily on a huge number of cascade performance tests of NACA 65-Series airfoils carried out at Langley. Airfoils in cascade perform somewhat differently from isolated airfoils. The two-dimensional wind-tunnel tests determined air deflections, irreversible pressure losses, and airfoil surface pressures as functions of incidence conditions across the family of NACA 65-Series airfoils for a range of cascade stagger angles and solidities (i. e. chord-to-space ratios). These data allowed designers first to select preferred airfoil shapes along a blade to achieve a given design performance, including thermodynamic loss requirements, and then to predict the performance of the airfoils at specified off-design operating conditions.34 In large part because of the availability of this data-base, NACA 65-Series airfoils became the most widely used airfoils in axial compressors.

Constructing a Parameter for Blade Loading – The Diffusion Factor

A critical element in the NACA design method was a new parameter, devised by Seymour Lieblein and others in 1953, called the “diffusion factor.” Losses result from many effects, but most important, in the absence of shocks, are viscous losses related to diffusion – i. e., deceleration – acting on boundary layers. As the loading on a given airfoil increases, a point is reached where the losses abruptly increase. Designers needed a non-dimensional parameter that could serve as a measure of the loading, allowing them to anticipate, in the form of a critical value, where the losses abruptly increase. Axial compressor blading had originally been conceptualized on the basis of isolated airfoil theory, using the lift coefficient as a non-dimensionalized measure of loading, but the losses in cascades did not correlate well with it. As a consequence designers did not have an adequate way of anticipating loading limits. Other parameters were tried before the diffusion factor, but with limited success.35

The diffusion factor was derived from a series of simplifying assumptions from boundary layer theory, applied to the suction surface. The basic idea was that the ultimately dominating losses came from turbulence developing in the boundary layer along the rear half of the airfoil suction surface, where the velocity drops from its peak to its discharge value. The problem was that any correlating parameter had to be defined in terms of quantities that could be determined with confidence; this did not include the peak velocity along the suction surface in rotating blade rows. The simplifying assumptions allowed this peak velocity to be replaced by quantities that could be measured upstream and downstream of blade rows:

\[ D = 1 - \frac{W_2}{W_1} + \frac{r_2 C_{\theta 2} - r_1 C_{\theta 1}}{2 r_m \sigma W_1} \]

where W is the relative velocity, Cθ is the absolute tangential velocity, σ is the cascade solidity, the subscripts 1 and 2 designate upstream and downstream of the blade row, and rm is the average of the radii r1 and r2. The multi-term structure of this formula should make clear that Lieblein’s diffusion factor was not an entirely obvious, intuitive parameter. Yet, when assessed against the NACA 65-Series cascade data, it turned out to indicate a clear loading limit criterion.36 This criterion was equally successful when tried with cascade data from other airfoils.37 It has subsequently proved to be applicable to compressor blading quite generally, lending an element of rationality to compressor design much as the lift coefficient did to wing design.38
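As a numeric illustration, the sketch below evaluates the diffusion factor for one hypothetical set of blade-row conditions. The input velocities, radii, and solidity are invented round numbers, and the critical value of roughly 0.6 is the figure commonly cited in the compressor literature, not a number taken from this text.

```python
# Illustrative evaluation of Lieblein's diffusion factor D for a blade row.
# All numerical inputs are hypothetical sample values.

def diffusion_factor(w1, w2, c_theta1, c_theta2, r1, r2, solidity):
    """D = 1 - W2/W1 + (r2*Ctheta2 - r1*Ctheta1) / (2 * rm * sigma * W1)."""
    rm = 0.5 * (r1 + r2)  # mean radius of the blade element
    return 1.0 - w2 / w1 + (r2 * c_theta2 - r1 * c_theta1) / (2.0 * rm * solidity * w1)

# Hypothetical rotor element: inflow with no swirl, moderate deceleration.
d = diffusion_factor(w1=900.0, w2=600.0, c_theta1=0.0, c_theta2=350.0,
                     r1=1.0, r2=1.0, solidity=1.0)
print(round(d, 3))  # -> 0.528, below the commonly cited ~0.6 loading limit
```

A designer using the NACA method would compare such a value against the empirically determined critical loading, rejecting velocity triangles that push D past it.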

The importance of having a clear loading limit criterion is best seen by considering the ramifications of not having one. The obvious way to pursue improvements in performance was by trying to develop new airfoils; and the natural way of trying this was to test airfoils in cascade and then make incremental modifications in shape that promised incremental gains in performance. The problem with this approach in the absence of a clear loading limit criterion was that any incremental modification in shape might well cross some unrecognized barrier, resulting not in an incremental gain, but in a prohibitively large fall-off in performance. The diffusion factor and the empirically determined loading limit expressed in terms of it defined the barrier that the exploration of new airfoil designs needed to remain within.

The diffusion factor did indeed play a key role in the pursuit of higher stage pressure-ratios. The overall pressure-ratio of a compressor amounts to a product of the individual stage pressure-ratios. The pressure-ratio per stage tends to increase as the velocity of the flow relative to the rotating blades increases. As the so-called velocity triangles shown in Figure 6 indicate, if the flow approaching a rotor blade is axial in the absolute frame of reference, then the velocity relative to the rotor blade increases as the blade tip-speed increases. Ultimately, stress considerations limit tip-speed. Aerodynamic considerations, however, were imposing limits on tip-speed far below those imposed by stresses. As the relative incident Mach number at the tip increases, shocks begin to form in the outer portion of the airfoil passages, resulting in a sharp increase in losses. In the case of NACA 65-Series airfoils, the losses rise sharply for incident Mach numbers above 0.8. This limited the pressure-ratio in stages using these airfoils to around 1.15, as we saw earlier.
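The velocity-triangle argument can be sketched numerically: with purely axial absolute inflow, the velocity relative to the rotor is the vector sum of the axial velocity and the blade speed, so the relative tip Mach number climbs with tip-speed. The axial velocity, tip speeds, and speed of sound below are hypothetical round numbers chosen only to show the trend.

```python
# Sketch of the velocity-triangle relation: relative tip Mach number
# versus blade tip-speed, for axial absolute inflow. Inputs are
# illustrative, not data from the text.
import math

def relative_tip_mach(axial_velocity, tip_speed, speed_of_sound):
    w = math.hypot(axial_velocity, tip_speed)  # magnitude of relative velocity
    return w / speed_of_sound

a = 1100.0  # assumed speed of sound, ft/sec
for u_tip in (600.0, 800.0, 1000.0):
    m_rel = relative_tip_mach(axial_velocity=500.0, tip_speed=u_tip, speed_of_sound=a)
    print(u_tip, round(m_rel, 2))  # prints 0.71, 0.86, 1.02 respectively
```

Even these rough numbers show why tip-speed was the pressure point: well before structural limits, the relative Mach number crosses the 0.8 threshold where 65-Series losses rise sharply, and then Mach 1 itself.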

Pushing Blade Loading – Transonic Stages

This restriction, coupled with the goals of achieving higher pressure-ratios per stage in order to use fewer stages, hence saving weight, and higher airflow per unit frontal-area, hence limiting engine drag, led to the research problem of finding airfoil shapes that would allow the incident tip Mach number to rise above 1. That is, the goal was to find airfoil shapes that would permit efficient transonic stages – stages in which the inlet relative velocity is supersonic in the outer portion of the blade and subsonic in the inner portion (which, at the same RPM, is moving at a lower velocity). NACA researchers at Langley and Lewis had been working on the problem of transonic airfoil and stage design from 1947 on as another part of their axial compressor research program. They had achieved some successes before the diffusion factor was identified – e. g. a stage with a 1.1 tip Mach number without excessive losses39 – but not consistently. They began having more success with the diffusion factor in hand by limiting attention to velocity triangles that met the loading limit criterion for this parameter. In particular, they designed an experimental 5-stage transonic compressor with a tip-speed of 1100 ft/sec in which the tip Mach numbers were as high as 1.18. Although the efficiency fell off at 100 percent speed, this compressor did achieve an overall pressure-ratio of 4.6 at an adiabatic efficiency of 85 percent, or, in other words, an average stage pressure-ratio of 1.35.40 Furthermore, the measured performance of the double-circular-arc airfoils used in these and other NACA test stages, along with wind-tunnel testing of double-circular-arc cascades, began to yield a data-base for transonic airfoils, supplementing the NACA-65 Series data-base.
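Since the overall pressure-ratio is the product of the stage ratios, the "average" stage ratio quoted for the 5-stage transonic compressor is a geometric mean. A quick arithmetic check, using only figures given in the text (4.6 overall across five stages, and the 1.15 subsonic stage limit discussed earlier):

```python
# Overall compressor pressure-ratio = product of stage pressure-ratios,
# so the average stage ratio is the geometric mean of the overall ratio.
overall = 4.6
stages = 5
avg_stage_ratio = overall ** (1.0 / stages)
print(round(avg_stage_ratio, 2))  # -> 1.36, consistent with the quoted ~1.35

# For comparison, five subsonic stages at the 1.15 limit would yield only:
print(round(1.15 ** stages, 2))   # -> 2.01 overall
```

The comparison makes the stakes concrete: transonic blading more than doubled the overall pressure-ratio obtainable from the same number of stages.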

Save perhaps for the early efforts in the mid-1940s, the NACA work on transonic stages was focused on improving axial compressors, not on fans that could be used in bypass engines. The primary application of the NACA transonic stage research was in the early stages of axial compressors, yielding both higher pressure-ratio per stage and higher airflow per unit frontal-area.41 Nevertheless, as we shall see, NACA’s success in pushing tip Mach numbers well above 1.0 was an important step in the emergence of the turbofan engine. Turbofans with tip Mach numbers below 1 would have offered at most only small gains in performance over turbojets. Once it became clear that the tip Mach number can exceed 1.0 without a large drop-off in performance, the question became, how far above Mach 1 can the tip Mach number go?

Pursuing a Quantum Jump – Supersonic Stages

Another, more radical part of the NACA compressor program proved in hindsight to be even more important to the emergence of the turbofan engine. It explored a somewhat revolutionary way of trying to achieve higher pressure-ratio per stage: actually using the sharp pressure increase across a normal shock to greatly increase the pressure rise in a stage. The idea of a supersonic compressor stage – one in which the incident relative velocity is supersonic along the entire span of the blade – was first proposed in 1937. Arthur Kantrowitz initiated research on supersonic stages at NACA Langley in 1942. Shortly after the War several young engineers joined him, forming a research group at Langley and then at Lewis as well. Their fundamental problem was to control and limit the thermodynamic losses in a supersonic stage. The abrupt pressure-rise across the shock acts as an adverse pressure-gradient at the point where it meets the boundary layer, threatening to cause the boundary layer to separate, resulting in large losses. An example of such shock-induced boundary layer separation is shown in Figure 8, for a Mach number of 1.5. The problem was to find


Figure 8. Shock waves and boundary layer separation in a Mach 1.5 cascade. Note shock waves at blade tips (left). Boundary layer separates on suction (i. e. convex) surface, where the shock intersects it (dark region above each airfoil), with attendant thermodynamic losses. [F.A.E. Breugelmans, “High Speed Cascade Testing and its Application to Axial Flow Supersonic Compressors,” ASME paper 68-GT-10, 1968, p. 6.]

airfoil shapes for which the attendant losses would be greatly outweighed by performance gains. Since analytical methods at the time were worthless for attacking this problem, the only approach was to learn through testing experimental designs.

NACA engineers designed and tested an impressively large number of experimental supersonic stages between 1946 and 1956.42 Virtually all of these research compressors performed poorly when judged by the standards that would have to be met for flight. In the last years of the effort, however, some designs began showing promise. Of particular note was a 1400 ft/sec tip-speed compressor rotor designed by John Klapproth and others, which took into account Lieblein’s diffusion factor. It achieved a pressure-ratio of 2.17 at an adiabatic efficiency (for the rotor alone) of 89 percent, with a tip Mach number of 1.35. As the report describing these results notes, however, its greatest significance lay elsewhere:

Inlet relative Mach numbers were supersonic across the entire blade span for speeds of 90 percent design and above. There were no appreciable effects of Mach number on blade-element losses below 90 percent of design speed. At 90 percent design speed and above, there was an increase in the relative total-pressure losses at the tip. However, based on rotor diffusion factor, these losses for Mach numbers up to 1.35 are comparable with the losses in subsonic and transonic compressors at equivalent values of blade loading.43

This was the first clear evidence that losses continue to correlate with the diffusion factor to much higher Mach numbers than in the tests which had provided the basis for this parameter – a result that was by no means assured a priori.

…. While the derivation of the diffusion factor D was based on incompressible flow, the primary factors influencing performance, that is, over-all diffusion and blade circulation, would not be expected to change for high Mach number applications…. The applicability of the correlation of D should be expected only in cases having similar velocity profiles on the blade suction surface. This similarity existed for the theoretical velocity profiles for this rotor, although the actual distribution was probably altered somewhat by differences between the assumed and real flow. On the basis of [our results], the diffusion factor appears to be a satisfactory loading criterion even for very high Mach number blading when the velocity distribution approximates that of conventional airfoils in cascade.44

In other words, for the first time, the performance in a supersonic blade row correlated continuously – i. e. seamlessly – with the performance achieved in subsonic and transonic compressor stages, up to as high as Mach 1.35. An approach to designing much higher Mach number stages was beginning to emerge.45

WOODEN AIRPLANES IN WORLD WAR II: NATIONAL COMPARISONS AND SYMBOLIC CULTURE

INTRODUCTION

Most histories of technology focus on single national contexts, and for good reason. The contextualist history of technology requires an intimate knowledge not only of technical history, but also of the institutional, political, and cultural context in which specific technologies are created and used. Such expertise is difficult enough to maintain for a single nation. Yet national specialization has real costs for the history of technology. While historians may respect national borders, technologies do not. Since the Industrial Revolution, technologists have self-consciously worked within an international context, ensuring that no major technology has remained confined to a single national context.1

Airplane technology has always been strongly transnational, despite its dependence on government-funded aeronautical establishments. In fact, the military significance of the airplane helps explain its transnational character, as every major power kept close watch on aeronautical developments abroad. In consequence, the similarities among nations have been more striking than their differences.2 But there have always been real differences too, differences that cannot be explained by variations in technological knowledge.

Such differences appear clearly in the use of wood as an alternative aircraft material during World War II. Britain, Canada and the United States all launched major programs in wooden aircraft construction early in the war. Despite close technical cooperation between these allies, the success of their national programs varied remarkably. Britain and Canada proved much more successful than the United States in designing, producing and using wooden aircraft. To explain national differences in the use of materials, historians of technology typically invoke variations in resource endowments, design traditions, or available skills. Yet such variations do not account for the American failure and the British and Canadian successes. These divergent outcomes resulted, rather, from differences in the symbolic meanings of airplane materials, meanings drawn from the culture of each nation.

The wooden airplanes of World War II are part of the lost history of failed technologies. The modernist ideology of technology looks resolutely forward, embracing innovation and novelty while disparaging unsuccessful alternatives supposedly mired in tradition. Historians of technology have for some time rejected this vision of technology’s history, but the work of reconstruction has only just begun.3 Like most failed technologies, the history of the wooden airplane remains largely buried. This paper resurrects one chapter in its history.

P. Galison and A. Roland (eds.), Atmospheric Flight in the Twentieth Century, 183-205. © 2000 Kluwer Academic Publishers.

THE CHANGING NATURE OF FLIGHT AND GROUND TEST INSTRUMENTATION AND DATA: 1940-1969

Before a new engine or airframe achieves its first flight much prior ground testing has been done in wind tunnels or engine test cells. Ground and flight tests are run to establish performance characteristics and to aid in design development and refinement. This requires collection and analysis of relevant test data from test runs in specially instrumented engines, scale models, and aircraft.

The fundamental task of such tests is collecting performance and reference data. What data are collected depends upon the purpose of the tests:

Development testing to refine the final production design;

Type or endurance testing as precursor to military or civilian acceptance of the basic design;

Flight tests to demonstrate aircraft or engine ability to operate under realistic circumstances, uncover design difficulties, and establish maintenance schedules for production aircraft or engines;

Acceptance tests to show that individual production engines meet minimum contractual performance characteristics.[2]

Some development, acceptance, and engine endurance testing can be done in wind tunnel and engine test stand ground facilities; the others invariably are airborne.

Instrumentation, which is the source of data from tests, tends to be most extensive in development testing and flight test – which are my focus. Airborne tests technically are the most demanding. For data to be useful, they must be recorded and processed into interpretable forms.

Instrumentation, recording, and processing of aircraft data have evolved substantially since the late 1800s. These developments do not sort themselves into neat periodizations, but can be construed as three contrasting testing styles, overlapping for as much as 40 years, but each dominating different periods.

In the first style the primary airborne instrument is the test pilot’s subjective judgment, augmented by notes on a knee pad and whatever readings of basic flying instruments could be jotted down. This style is important from the beginning of flight until about 1945, though a remnant today is the test pilot controlling what parts of the test flight are recorded at what data density.


P. Galison and A. Roland (eds.), Atmospheric Flight in the Twentieth Century, 67-105. © 2000 Kluwer Academic Publishers.

The second style emphasizes enhanced instrumentation recorded by something or someone other than the test pilot, where the recorded data, not the pilot’s reactions, are the primary data. Instrumentation can include an observer taking manual readings, gun cameras recording duplicate instrument panels, recording barographs, photopanels, transducer-fed oscillographs, and telemetering to ground stations. Another defining characteristic is that data recording does not allow direct computerized analysis of the data. This style begins in the 1920s and dominates in the 1950s and 1960s.

The third style emphasizes very extensive automated instrumentation using transducers and probes, automatic pre-processing of data and digital data recording for computer analysis. This first comes in with the XB-70 and dominates high-end flight test subsequently – though style two continues in lower-end testing today where oscillographs continue to be used.

My story concerns the evolution of instrumentation and data from style one to style three. On the eve of World War II, pilot reports, limited recording of data, and hand-analysis of recorded flight test data were typical. During the next three decades, automated data collection and digital computer reduction and analysis of data became the norm,2 with as many as 1200 channels of data being recorded and analyzed. The transformation was essentially complete with the instrumentation and data handling systems of the XB-70. I will discuss the main transformations and changes in flight-test and ground-test instrumentation, data reduction, and analysis during that pivotal thirty-year period. Consideration will be given to wind tunnel testing, engine test-cell investigations, and flight-testing of both engines and airframes. I focus on turbojet-powered aircraft.
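The core step in the digital data reduction just described is converting raw recorded transducer counts into engineering units, channel by channel, against a calibration table. A minimal sketch of that idea follows; the channel names and calibration constants are invented for illustration and do not describe any particular historical system:

```python
# Hypothetical per-channel linear calibrations: (slope, offset),
# i.e. engineering_value = slope * raw_count + offset.
CALIBRATIONS = {
    "egt_degC": (0.25, -50.0),   # exhaust gas temperature
    "n1_rpm":   (4.0,    0.0),   # spool speed
    "fuel_pph": (1.5,  100.0),   # fuel flow
}

def reduce_record(raw_counts):
    """Convert one time-slice of raw counts to engineering units."""
    return {ch: slope * raw_counts[ch] + offset
            for ch, (slope, offset) in CALIBRATIONS.items()}

sample = {"egt_degC": 2000, "n1_rpm": 1800, "fuel_pph": 900}
reduced = reduce_record(sample)
```

With hundreds of channels sampled many times per second, it was exactly this kind of mechanical, repetitive arithmetic, once done by hand from oscillograph traces, that digital computers took over in the third testing style.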

I also make some systematic philosophical remarks on data and modeling and offer concluding observations.


Figure 1. General Electric modified F-102 used for flight test of the J-85; circa 1960. [Suppe collection.]

far enough to ensure reliable performance, the now proven new engine could be put into the unproven airframe.6

Before an engine ever is taken aloft, it undergoes a great deal of testing on the ground in test cells. Similarly, before a new airframe is taken aloft, it has undergone extensive aerodynamic testing in wind tunnels. A general rule of thumb is to use flight test primarily for what cannot be studied in ground testing facilities.

NACA Ends Compressor Research

The NACA research on transonic and supersonic compressors remained classified until the late 1950s. (Even the design “bible,” which focused on more conventional stages, was classified until 1958.) Consequently, the results of the research were not generally disseminated to those outside the United States, and even in this country they were not readily accessible. Moreover, unlike the “bible,” the reports themselves were aimed more toward providing a record of what had been done than toward instructing those outside NACA how to exploit the results. Even today, when read from the perspective of our far greater knowledge of transonic and supersonic stages, the reports are not always easy to assess. A large fraction of the knowledge that the NACA had gained on high Mach number stages remained in the heads of the engineers who had conducted the research.

This knowledge diffused out of the NACA through more than publications, however. Many engineers who had worked on high-Mach-number stages throughout the decade left NACA in 1955 and 1956. The Committee curtailed compressor research when Lewis, believing no fundamental problems remained in air-breathing engines, turned its attention to nuclear and rocket propulsion.46 Langley’s Jack Erwin and Lewis’s John Klapproth, Karl Kovach, and Lin Wright moved to General Electric. Kovach and Wright joined the company’s axial compressor aerodynamic design group, headed by Richard Novak, where they shifted their primary focus from research on airfoil shapes and parameters to design.

In some respects this timing was opportune. NACA research had produced the compressor design bible and had achieved sufficient success with transonic stages to turn the future over to the engine companies. The decade of research on supersonic compressors, the promising results in the last years notwithstanding, had yet to yield flight-worthy designs, making it hard to argue for continued funding. General Electric proved the beneficiary of the NACA’s change in focus, for GE offered the NACA engineers the chance to apply their experience with advanced, experimental designs to real engines. The knowledge Kovach and Wright brought from the government research establishment into the industry immediately began having an impact on the advanced designs GE was then developing. Wright’s knowledge, in particular, proved crucial to GE’s development of a radically advanced fan that formed the basis of their first flight-worthy turbofan engine, to which we now turn.47

FROM WOOD TO METAL: THE EARLY HISTORY

Wood was the dominant structural material for airplanes from the pre-history of flight until the early 1930s. By the late 1930s, however, wood was rapidly disappearing, especially in the structures of high-performance military aircraft and multi-motored passenger airplanes. Metal succeeded as a result of intense efforts to develop all-metal airplanes, efforts that began in Germany during World War I and quickly spread to Britain, France and the United States after the Armistice.4

In both Europe and the United States, national aeronautical communities maintained a powerful commitment to developing metal airplanes between the world wars. As I have argued elsewhere, this commitment cannot be explained by the technical advantages of metal. The technical choice between wood and metal remained indeterminate between the world wars; wood had advantages in some circumstances, metal in others. Claims for metal’s superiority in fire safety, weight, cost, and durability all proved equivocal throughout the 1920s.5

Despite the questionable advantages of metal in the 1920s, national governments and private firms concentrated their research and development programs on improving metal airplanes, while shortchanging research and development on wood structures. This bias was especially strong in the United States, where the Army Air Service began shifting research funds from wood to metal as early as 1920. Nevertheless, successful metal aircraft proved quite difficult to design, and the U. S. Army remained heavily dependent on wooden-winged aircraft until the mid-1930s. After about 1933, however, new all-metal stressed-skin structures proved competitive with wood, especially in larger airplanes. Even with the substantially increased production costs required by the new all-metal stressed-skin structures, wood quickly disappeared from most high-performance airplanes in both the United States and Europe.6

One cannot, however, invoke metal’s eventual success to explain why this path was chosen in the first place. Metal’s success resulted from years of intensive development before the predicted advantages of metal became manifest. Proponents of metal advanced no clear-cut technical arguments to justify continued support for metal in the 1920s, when experience with metal failed to corroborate claims for its superiority to wood.7 In the United States, at least, the embrace of metal was driven not so much by technical criteria as by the symbolic meanings of airplane materials.

Metal’s supporters openly articulated these symbolic meanings in the 1920s. They insisted that the shift from wood to metal was an inevitable aspect of technical progress, arguing that the airplane would recapitulate the triumph of metal in prior wood-using technologies, such as ships, railroad cars, and bridges.

Advocates of metal drew upon pre-existing cultural meanings to link metal with progress, modernity and science, while associating wood with backwardness, tradition and craft methods. These symbolic associations gained their evocative power from the ideology of technological progress, a set of beliefs deeply embedded within the aviation community. By linking metal with progress, advocates of metal were able to construct a narrative of technological change that predicted the inevitable replacement of wood by metal in airplane structures. This narrative provided more than rhetoric; it also inhibited expressions of support for wood while ensuring that metal received a disproportionate share of funds for research and development.8