
KNOWLEDGE CIRCA 1945

Theory – To describe the shape of a wing, engineers distinguish between planform (outline of the wing viewed from above) and airfoil (shape of a fore-and-aft section). Though taking account of planform at supersonic speeds was just beginning in the mid-1940s, methods for calculating two-dimensional (i.e., planar) supersonic flow over airfoils seen as sections of a constant-chord wing of infinite span had been available for some time. The physical and mathematical principles went back to the nineteenth century, and the groundwork for aerodynamic applications had been set down in papers by Ludwig Prandtl, Theodore von Karman, and Adolf Busemann at the landmark Volta conference on “High Speed in Aviation” at Rome in 1935.3 Even at that early stage, results for the supersonic flow around a sharp-nosed airfoil could be obtained with a degree of rigor unusual for the nonlinear equations of gas dynamics.


The method has its basis in the special properties of supersonic flow. In such flows generally, a pressure signal moves past a point at the speed of sound relative to the local flow at that point. As a consequence, and in contrast to the situation in subsonic flow, a signal cannot propagate upstream, and the flow at a point on an airfoil surface cannot be affected by the shape of the airfoil aft of that point. Flow along the surface can therefore be calculated stepwise from the leading to the trailing edge, taking into consideration only the flow ahead of the point in question. With only very weak approximation, the method reduces in practice to sequential application of known nonlinear relationships for two flow situations (fig. 1): (a) discontinuous compression through a shock wave, used to find conditions at the point immediately behind the sharp concave turn at the leading edge, and (b) continuous expansion through a distributed fan-like field, to calculate flow properties along the convex surface of the airfoil. The latter relationship exists by


Figure 1. Supersonic flow over biconvex airfoil.

virtue of the simplicity of planar flow. (The Mach lines in the expansion fan of figure 1 show the limited, rearward-growing region of influence of representative points on the airfoil’s surface.) For more rapid calculation, this nonlinear “shock-expansion” method can be approximated by a linear (or first-order) theory initiated by Jacob Ackeret in 1925,4 or by a more accurate second-order theory put forward by Busemann in his paper at the Volta conference. By 1946, the various methods had been used to calculate the performance of a variety of airfoils.
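For readers who want numbers, the two approximate theories just mentioned reduce to compact formulas. The sketch below is my illustration, not part of the original text; the formulas are the standard Ackeret (linear) and Busemann (second-order) results from gas dynamics texts, evaluated for a flat surface inclined 10 degrees to the stream at M0 = 2.13, the conditions of Ferri's test discussed below.

```python
import math

GAMMA = 1.4  # ratio of specific heats for air

def cp_ackeret(theta, m0):
    """Linear (Ackeret) theory: Cp = 2*theta / sqrt(M0^2 - 1).
    theta in radians, positive where the surface turns the flow
    into itself (compression)."""
    return 2.0 * theta / math.sqrt(m0**2 - 1.0)

def cp_busemann(theta, m0, g=GAMMA):
    """Second-order (Busemann) theory: Cp = C1*theta + C2*theta^2."""
    b2 = m0**2 - 1.0
    c1 = 2.0 / math.sqrt(b2)
    c2 = ((g + 1.0) * m0**4 - 4.0 * b2) / (2.0 * b2**2)
    return c1 * theta + c2 * theta**2

if __name__ == "__main__":
    m0, alpha = 2.13, math.radians(10.0)
    # Flat-plate approximation: lower surface compresses (+alpha),
    # upper surface expands (-alpha).
    for name, cp in [("linear", cp_ackeret), ("second-order", cp_busemann)]:
        print(f"{name:>12}: lower Cp = {cp(alpha, m0):+.4f}, "
              f"upper Cp = {cp(-alpha, m0):+.4f}")
```

The second-order terms raise the pressure on both surfaces relative to linear theory, a difference visible in comparisons like Ferri's figure 2.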

The foregoing theories all depend on the assumption of a fictitious inviscid gas, that is, a gas lacking the viscosity present in real gases. They thus omit viscous forces and deal only with pressure forces. Such theories had long been supplemented at subsonic speeds by Prandtl’s boundary-layer theory of 1904, which deals specifically with the frictionally retarded viscous layer that forms close to a surface in a real gas.5 On this basis, a large body of subsonic experience had been accumulated with both quantitative calculation and qualitative thinking regarding viscous effects. At supersonic speeds, little such experience was available, though indications existed that the presence of shock waves might lead to new kinds of viscous phenomena.

Experiment – Supersonic wind tunnels large enough for experiments in flight aerodynamics came into being in the 1930s. In a paper at the Volta conference, Jacob Ackeret described at length an impressive tunnel recently completed at the Federal Technical University in Zurich, and, in comments following the talk, Mario Gaspari of the University of Rome added details of a near copy then under construction at Guidonia, a short distance from Rome.6 (The most advanced tunnels, however, came into operation in 1939 at the German army’s laboratories at Peenemünde following small-scale development at the Technical University of Aachen. These tunnels were used in the design of ballistic missiles. Their existence did not become known in the United States until the end of World War II.)7 It was in the Guidonia tunnel that Antonio Ferri conducted in the late 1930s the first extensive experiments on airfoils at supersonic speeds. These tests, made on a constant-section model spanning the rectangular test section of the tunnel and thus simulating infinite span, supplied a wealth of pressure-distribution and other data on an assortment of airfoil shapes. Except for a few overall-force tests in the late 1920s and during World War II in small tunnels at the National Physical Laboratory in England,8 Ferri’s results provided the only experimental assessment of airfoil theory available at the time of the wing studies to be discussed here.

Comparison – Ferri’s findings, which were to prove useful for our Ames work, can be characterized by a figure reproduced from the latter work (fig. 2). This shows the theoretical and experimental distribution of pressure along the surface of an uncambered 10-percent-thick biconvex airfoil at a free-stream Mach number M0 of 2.13 and an angle of attack α of 10 degrees. (The Mach number M at a point in a flow is the ratio of the speed of flow to the speed of sound, both at that point.) The vertical scale in figure 2 is a dimensionless measure of the difference between the surface pressure p and the free-stream pressure p0. As is customary for airfoil work, negative values are plotted upward, positive downward, so that the area



Fig. 2. Pressure distribution on surface of biconvex airfoil. (This and subsequent figures from Vincenti, “Comparison between Theory and Experiment for Wings at Supersonic Speeds,” Report 1033 [Washington, D.C.: NACA, 1951].)

between the upper- and lower-surface values can be seen as a close measure of the overall lift. The experimental points and the shock-expansion curve are taken from Ferri’s report; linear theory was added as part of the research to be described here.
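The text leaves the dimensionless measure unnamed, but it is presumably the familiar pressure coefficient, normalized by the free-stream dynamic pressure (my notation, for orientation only):

```latex
C_p = \frac{p - p_0}{q_0}, \qquad
q_0 = \tfrac{1}{2}\rho_0 V_0^2 = \tfrac{1}{2}\gamma\, p_0 M_0^2 .
```

With this convention, suction (negative C_p) on the upper surface plots upward, and the area between the upper- and lower-surface curves approximates the lift coefficient.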

As can be seen, Ferri’s measurements for the biconvex profile showed near agreement with the shock-expansion theory over most of the airfoil, a typical finding. The higher-than-theoretical pressures over the rear 40 percent of the upper surface he attributed to interaction between the viscous boundary layer and the shock wave at the airfoil’s trailing edge. The retarded air in the boundary layer apparently lacked the kinetic energy necessary to negotiate the pressure rise through the shock wave. As revealed by optical studies, the resulting readjustment of the flow found the boundary layer separating from the surface ahead of the trailing edge, with a shock wave forming a short distance above the surface at the location of the separation and a more or less constant surface pressure from there to the trailing edge; a second shock wave formed outside the separated region at about the latter location. Because of the unpredicted high pressures on the upper surface in the separated region, measured overall lift on the airfoil was less than calculated from the theory. Ferri’s results thus brought to light one of the new viscous phenomena characteristic of supersonic flow. They also illustrate how theory and experiment are frequently used together to understand phenomena that are (a) ruled out of the theory by the assumptions that make it mathematically feasible but (b) would be difficult to comprehend without the theory for comparison. The relevance of the linear theory, which Ferri did not concern himself with, will become apparent later.

THE PRANDTL CORRECTION

Ludwig Prandtl’s laboratory at Gottingen did not participate in this large “international” project. Nevertheless, Prandtl’s theory of interference effects played a crucial role. The British learned about it from the report of the French tests for the International Trials.38 Compared with British results, the French results gave lower values for lift coefficients but close values for drag coefficients and the center of pressure. More importantly, the report revealed that the French used very different testing procedures from the British, particularly in their employment of the Prandtl correction for the aerodynamic interference due to wind tunnel walls.

The Prandtl correction derived from Prandtl’s concept of the trailing vortex.39 Prandtl’s aerodynamic theory posited that the lift of the airplane was due to the circulation of the air flow around its wings. This airflow produced trailing vortices, which stretched out behind the tips of the wings. These vortices, in turn, produced “induced drag,” which retarded the movement of the airfoil through the air stream. Inside a closed space like a wind tunnel, these trailing vortices were more deformed and more condensed than in the open air because of the existence of the tunnel wall. Prandtl’s theory could derive the effect of their deformation and quantitatively determine the difference in induced drag in full-scale and small-scale testing. All these theoretical discussions were then being introduced to the British aeronautical community through Glauert’s technical reports.40
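The correction itself is compact. Here is a minimal sketch, assuming the standard closed-throat form that Prandtl's trailing-vortex analysis yields; the notation, the boundary-correction factor, and the sample numbers are mine for illustration, not drawn from the ARC or French reports.

```python
import math

def prandtl_wall_correction(cl, cd, alpha_deg, S, C, delta=0.125):
    """Convert closed-tunnel measurements to free-air equivalents.

    cl, cd     -- measured lift and drag coefficients
    alpha_deg  -- measured angle of attack, degrees
    S          -- model wing area; C -- tunnel cross-sectional area
    delta      -- boundary-correction factor (~0.125 for a closed
                  circular test section)
    """
    ratio = delta * S / C
    alpha_free = alpha_deg + math.degrees(ratio * cl)  # upwash deficit at walls
    cd_free = cd + ratio * cl**2                       # induced-drag correction
    return alpha_free, cd_free

# Example: a model with S/C = 0.05 at CL = 0.8 gains ~0.3 deg and ~0.004 in CD.
print(prandtl_wall_correction(cl=0.8, cd=0.045, alpha_deg=6.0, S=0.05, C=1.0))
```

The point of contention between the laboratories was thus not an elaborate computation but whether this small, theory-derived increment should be applied to raw data at all.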

In their report, the French gave results without this theoretical correction, since they had been requested to do so for the purpose of comparison with the results of other laboratories. But the French representatives emphatically recommended use of the Prandtl correction, stating that its application to raw data was a normal procedure for all their tests, especially for those made for aircraft constructors.41 In its conclusion, this interim report gave the results to which the Prandtl correction had been applied. The correction did not necessarily give uniformly better agreement between the French and the British results, nor between the two French laboratories overall, but it did bring the two French results on lift coefficient into agreement.

The French investigators’ confident reliance on the Prandtl correction surprised the British. In March 1923, after this report was prepared, ARC Secretary Nayler and another British official, R. J. Goodman Crouch, were sent to the French laboratories at St. Cyr and Auteuil. Both of them discussed with French representatives the Prandtl correction as well as French methods of testing in general. The representatives of both laboratories told Nayler that they considered the Prandtl correction very accurate and hoped to see the comparison between the British and the French testing results after the Prandtl correction was applied.42 Another French engineer told Crouch that the French had compared the results at the Gottingen laboratory with their own by constructing and using models of the size given in German reports, and that the results of this comparison conformed closely.

With all this information, Crouch suggested in his report to the Aerodynamics subcommittee that these test results should be reported to the Main Committee so that they could again discuss the possibility of the Gottingen laboratory participating in the International Trials.43 The report encouraged the investigators at Farnborough to recommend strongly the adoption of Prandtl’s theory. In discussions at meetings of the Design Panel, the Aerodynamics subcommittee, and the Main Committee, Wood and Glauert argued for the use of the Prandtl correction and of his theory in general. However, they were still unable to persuade their British colleagues.

OUT OF CONTROL

United Airlines flight 232 was 37,000 feet above Iowa traveling at 270 knots on 19 July 1989, when, according to the NTSB report, the flightcrew heard an explosion and felt the plane vibrate and shudder. From instruments, the crew of the DC-10-10 carrying 285 passengers could see that the number 2 (tail-mounted) engine was no longer delivering power (see figure 5). The captain, Al Haynes, ordered the engine shutdown checklist, and first officer Bill Records reported that the airplane’s


Figure 5. DC-10 Engine Arrangement. Source: National Transportation Safety Board, Aircraft Accident Report, United Airlines Flight 232, McDonnell Douglas DC-10-10, Sioux Gateway Airport, Sioux City, Iowa, July 19, 1989, p. 2, figure 1. Hereafter, NTSB-232.

normal hydraulic systems gauges had just gone to zero. Worse, he notified the captain that the airplane was no longer controllable as it slid into a descending right turn. Even massive yoke movements were futile as the plane reached 38 degrees of right roll. It was about to flip on its back. Pulling power completely off the number 1 engine, Haynes jammed the number three throttle (right wing engine) to the firewall, and the plane began to level off. “I have been asked,” Haynes later wrote, “how we thought to do that; I do not have the foggiest idea.”28 No simulation training, no manual, and no airline publication had ever contemplated a triple hydraulic failure;29 understanding how it could have happened became the centerpiece of an extraordinarily detailed investigation, one that, like the inquiry into the crash of Air Florida 90, surfaced the irresolvable tension between a search for a localized, procedural error and fault lines embedded in a wide array of industries, design philosophies, and regulations.

At 15:20, the DC-10 crew radioed Minneapolis Air Route Traffic Control Center declaring an emergency and requesting vectors to the nearest airport.30 Flying in a first class passenger seat was Dennis Fitch, a training check airman on the DC-10, who identified himself to a flight attendant, and volunteered to help in the cockpit. At 15:29 Fitch joined the team, where Haynes simply told him: “We don’t have any controls.” Haynes then sent Fitch back into the cabin to see what external damage, if any, he could see through the windows. Meanwhile, second officer Dudley Dvorak was trying over the radio to get San Francisco United Airlines Maintenance to help, but without much success: “He’s not telling me anything.” Haynes answered, “We’re not gonna make the runway fellas.” What Fitch had to say on his return was also not good: “Your inboard ailerons are sticking up,” presumably held up by aerodynamic forces alone, and the spoilers were down and locked. With flight attendants securing the cabin at 1532:02, the captain said, “They better hurry we’re gonna have to ditch.” Under the captain’s instruction, Fitch began manipulating the throttles to steer the airplane and keep it upright.31

Now it was time to experiment. Asking Fitch to maintain a 10-15° turn, the crew began to calculate speeds for a no-flap, no-slat landing. But the flight engineer’s response – 200 knots for clean maneuvering speed – was a parameter, not a procedure. Their DC-10-10 had departed from its very status as an airplane. It was an object lacking even ailerons, the fundamental flight controls that were, in the eyes of many historians of flight, Orville and Wilbur Wright’s single most important innovation. And that wasn’t all: flight 232 had no slats, no flaps, no elevators, no brakes. Haynes was now in command of an odd, unproven hybrid, half airplane and half lunar lander, controlling motion through differential thrust. Among other difficulties, the airplane was oscillating longitudinally with a period of 40-60 seconds. In normal flight the plane will follow such long-period swings, accelerating on the downswing, picking up speed and lift, then rising with slowing airspeed. But in normal flight, these variations in pitch (phugoids) naturally damp out around the equilibrium position defined by the elevator trim. Here, however, the thrust of the number one and three engines, which were below the center of gravity, had no compensating force above the center of gravity (since the tail-mounted number two engine was now dead and gone). These phugoids could only be damped by a difficult and counter-intuitive out-of-phase application of power on the


Figure 6. Ground Track of Flight 232. Source: NTSB-232, p. 4, figure 2.

downswing and, even more distressingly, throttling down on the slowing part of the cycle.32 At the same time, the throttles had become the only means of controlling airspeed, vertical speed, and direction: the flight wandered over several hundred miles as the crew began to sort out how they would attempt a landing (see figure 6).
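The period the crew was fighting is roughly what classical flight mechanics predicts. A back-of-envelope sketch (my illustration, using the classical Lanchester approximation, not anything in NTSB-232):

```python
import math

G = 9.81            # m/s^2
KT_TO_MS = 0.5144   # knots -> m/s

def phugoid_period_s(speed_kt):
    """Lanchester approximation: T = pi * sqrt(2) * V / g,
    a function of true airspeed alone."""
    v = speed_kt * KT_TO_MS
    return math.pi * math.sqrt(2.0) * v / G

for kt in (180, 220, 270):
    print(f"{kt} kt -> ~{phugoid_period_s(kt):.0f} s phugoid period")
# -> roughly 42 s, 51 s, and 63 s, bracketing the 40-60 second
#    oscillation the crew had to damp by hand with the throttles.
```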

To a flight attendant, Haynes explained that he expected to make a forced landing, allowed that he was not at all sure of the outcome, and that he expected serious difficulty in evacuating the airplane. His instructions were brief: on his words, “brace, brace, brace,” passengers and attendants should ready themselves for impact. At 15:51 Air Traffic Controller Kevin Bauchman radioed flight 232 requesting a wide turn to the left to enter onto the final approach for runway 31 – and to keep the quasi-controllable 370,000 pound plane clear of Sioux City itself. However difficult control was, Haynes concurred: “Whatever you do, keep us away from the city.” Then, at 15:53 the crew told the passengers they had about four minutes before the landing. By 15:58 it became clear their original plan to land on the 9,000 foot runway 31 would not happen, though they could make the closed runway 22. Scurrying to redeploy the emergency equipment that was lined up on 22 – directly in the landing path of the quickly approaching jet – Air Traffic Control began to order their last scramble, as tower controller Bauchman told them: “That’ll work sir, we’re gettin’ the equipment off the runway, they’ll line up for that one.” Runway 22 was only 6,600 feet long, but terminated in a field. It was the only runway they would have a chance to make and there would only be one chance. At

Figure 7. Fan Rotor Assembly (stage 1 fan rotor disk). Source: NTSB-232, p. 9, figure 5.

1559:44 the ground proximity warning came on… then Haynes called for the throttles to be closed, to which check airman Fitch responded “nah I can’t pull ‘em off or we’ll lose it that’s what’s turnin’ ya.” Four seconds later, the first officer began calling out “left Al [Haynes],” “left throttle,” “left,” “left,” “left.” As they plunged towards the runway, the right wing dipped and the nose dropped. Impact was at 1600:16 as the plane’s right wing tip, then the right main landing gear, slammed into the concrete. Cartwheeling and igniting, the main body of the fuselage lodged in a corn field to the west of runway 17/35, and began to burn. The crew compartment and forward side of the fuselage settled east of runway 17/35. Within a few seconds, some passengers were walking, dazed and hurt, down runway 17; others gathered themselves up in the midst of seven-foot corn stalks, disoriented and lost. A powerful fire began to burn along the exterior of the fuselage fragment, and emergency personnel launched an all-out barrage of foam on the center section as surviving passengers emerged. One passenger went back into the burning wreckage to pull out a crying infant. As for the crew, for over thirty-five minutes they lay wedged in a waist-high crumpled remnant of the cockpit – rescue crews who saw the airplane fragment assumed anyone inside was dead. When he regained consciousness, Fitch was saying something was crushing his chest; dirt was in the fragmented cockpit. Second officer Dvorak found some loose insulation which he waved out a hole in the aluminum to attract attention. Finally pried loose, the four injured crewmembers (Haynes, Records, Dvorak, and Fitch) were brought by emergency personnel to the local hospital.33 Despite the loss of over a hundred lives, it was, in the view of many pilots, the single most impressive piece of airmanship ever recorded. Without any functional control surface, the crew saved 185 of the 296 people aboard flight 232.


Figure 8. Planform Elevator Hydraulics. Source: NTSB-232, p. 34, figure 14.

From the start, the search for probable cause centered on the number 2 (tail-mounted) engine. Not only had the crew witnessed the destruction wrought at the tail end of the plane, but Sioux City residents had photographed the damaged plane as it neared the airport; the missing conical section of the tail was immortalized in photographs. And the stage 1 fan (see figure 7), conspicuously missing from the number 2 engine after the crash, was almost immediately a prime suspect. It became, in its own right, an object of localized, historical inquiry.

From records, the NTSB determined that this particular item was brought into the General Electric Aircraft Engines facility between 3 September and 11 December 1971. Once General Electric had mounted the titanium fan disk in an engine, they shipped it to the Douglas Aircraft Company on 22 January 1972 where it began life on a new DC-10-10. For seventeen years, the stage 1 fan worked flawlessly, passing six fluorescent penetrant inspections, clocking 41,009 engine-on hours and surviving 15,503 cycles (a cycle is a takeoff and landing).34 But the fan did fail on the afternoon of 19 July 1989, and the results were catastrophic. When the tail engine tore itself apart, one hydraulic system was lost. With tell-tale traces of titanium, shrapnel-like fan blades left their distinctive marks on the empennage (see figure 8). Worst of all, the flying titanium severed the two remaining hydraulic lines.

With this damage, what seemed an impossible circumstance had come to pass: in a flash, all three hydraulic systems were gone. This occurred despite the fact that each of the three independent systems was powered by its own engine. Moreover, each system had a primary and backup pump, and the whole system was further backstopped by an air-driven pump run by the slipstream. Designers even physically isolated the hydraulic lines one from the other.35 And again, as in the Air Florida 90 accident, the investigators wanted to push back and localize the causal structure. In Flight 90, the NTSB passed from the determination that there was low thrust to why there was low thrust to why the captain had failed to command more thrust. Now they wanted to pass from the fact that the stage 1 fan disk had disintegrated to why it had blown apart, and eventually to how the faulty fan disk could have been in the plane that day.

Three months after the accident, in October of 1989, a farmer found two pieces of the stage 1 fan disk in his corn fields outside Alta, Iowa. Investigators judged from the picture reproduced here that about one third of the disk had separated, with one fracture line extending radially and the other along a more circumferential path. (See figure 9.)

Upon analysis, the near-radial fracture appeared to originate in a pre-existing fatigue region in the disk bore. Probing deeper, fractographic, metallographic and chemical analysis showed that this pre-existing fault could be tracked back to a metal “error” that showed itself in a tiny cavity only 0.055 inches in axial length and 0.015 inches in radial depth: about the size of a slightly deformed period at the end of this typed sentence. Titanium alloys have two crystalline structures, alpha and beta, with a transformation temperature above which the alpha transforms into beta. By adding impurities or alloying elements, the allotropic temperature could be lowered to the point where the beta phase would be present even at room temperature. One such alloy, Ti-6Al-4V, was known to be hard, very strong, and was expected to maintain its strength up to 600 degrees Fahrenheit. Within normal Ti-6Al-4V titanium, the two microscopic crystal structures should be present in about equal quantities. But inside the tiny cavity buried in the fan disk lay traces of a “hard alpha inclusion” – titanium with a flaw, a small volume of pure alpha-type crystal structure with an elevated hardness due to the presence of (contaminating) nitrogen.36

Putting the myriad other necessary causes of the accident aside, the gaze of the NTSB investigators focused on the failed titanium, and even more closely on the tiny cavity with its traces of an alpha inclusion. What caused the alpha inclusion? There were, according to the investigation, three main steps in the production of titanium-alloy fan disks. First, foundry workers melted the various materials together in a “heat” or heats, after which they poured the mix into a titanium alloy ingot. Second, the manufacturer stretched and reduced the ingot into “billets” that cutters could slice into smaller pieces (“blanks”). Finally, in the third and last stage of titanium production, machinists worked the blank into the appropriate geometrical shapes – the blanks could later be machined into final form.

Hard alpha inclusions were just one of the problems that titanium producers and consumers had known about for years (there were also high-density inclusions, and


Figure 9. Stage 1 Fan Disk (Reconstruction). Source: UAL 232 Docket, figure 1.10.2

the segregation of the alloy into flecks). To minimize the hard alpha inclusions, manufacturers had established various protective measures. They could melt the alloy components at higher heats, they could maintain the melt for a longer time, or they could conduct successive melting operations. But none of these methods offered (so to speak) an iron-clad guarantee that they would be able to weed out the impurities introduced by inadequately cleaned cuttings, or sloppy welding residues. Nor could the multiple heats absolutely remove contamination from leakage into the furnace or even items dropped into the molten metal. Still, in 1970-71, General Electric was sufficiently worried about the disintegration of rotating engine parts that they ratcheted up the quality control on titanium fan rotor disks – after January 1972, the company demanded that only triple-vacuum-melted forgings be used. The last batch of alloy melted under the old, less stringent (double-melt) regime was Titanium Metals Corporation heat K8283 of February 23, 1971. Out of this heat, ALCOA drew the metal that eventually landed in the stage 1 fan rotor disk for flight 232.37

Chairman James Kolstad’s NTSB investigative team followed the metal, finding that the 7,000 pound ingot K8283 was shipped to Ohio for forging into billets of 16-inch diameter; then to ALCOA in Cleveland, Ohio, for cutting into 700 pound blanks; the blanks then passed to General Electric for manufacture. These 16-inch billets were tested with an ultrasonic probe. At General Electric, samples from the billet were tested numerous ways and for different qualities – tensile strength, microstructure, alpha phase content and amount of hydrogen. And, after being cut into its rectilinear machine-forged shape, the disk-to-be again passed an ultrasonic inquisition, this time by the more sensitive means of immersing the part in liquid. The ultrasonic test probed the rectilinear form’s interior for cracks or cavities, and it was supplemented by a chemical etching that aimed to reveal surface anomalies.38 Everything checked out, and the fan was then machined and shot peened (that is, hammered smooth with a stream of metal shot) into its final form. On completion, the now-finished fan disk passed a fluorescent penetrant examination – also designed to display surface cracking.39 It was somewhere at this stage – under the stresses of final machining and shot peening – that the investigators concluded cracking began around the hard alpha inclusion. But since no ultrasonic tests were conducted on the interior of the fan disk after the mechanical stresses of final machining, the tiny cavity remained undetected.40

The fan’s trials were not over, however, as the operator – United Airlines – would, from then on out, be required to monitor the fan for surface cracking. Protocol demanded that every time maintenance workers disassembled part of the fan, they were to remove the disk, hang it on a steel cable, paint it with fluorescent penetrant, and inspect it with a 125-amp ultraviolet lamp. Six times over the disk’s lifetime, United Airlines personnel did the fluorescence check, and each time the fan passed. Indeed, by looking at the accident stage-1 fan parts, the Safety Board found that there were approximately the same number of major striations in the material pointing to the cavity as the plane had had cycles (15,503). This led them to conclude that the fatigue crack had begun to grow more or less at the very beginning of the engine’s life. Then (so the fractographic argument went) with each takeoff and landing the crack began to grow, slowly, inexorably,


Figure 10. Cavity and Fatigue Crack Area. Source: NTSB-232, p. 46, figure 19B.

out from the 1/100-inch cavity surrounding the alpha inclusion, over the next 18 years. (See figure 10.)
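The striation argument invites simple arithmetic. A rough averaging sketch (the endpoints are the report's figures; the averaging assumption is mine, not the NTSB's fractographic method):

```python
# One striation per flight cycle implies a mean growth rate from the
# ~0.01-inch cavity to the final crack length over the disk's life.
cavity_in = 0.01          # initial flaw scale near the alpha inclusion, inches
final_crack_in = 0.5      # approximate surface crack length at failure, inches
cycles = 15_503           # takeoff-landing cycles over 17+ years of service

avg_growth = (final_crack_in - cavity_in) / cycles
print(f"average growth ~{avg_growth * 1e6:.0f} microinches per cycle")
# -> ~32 microinches/cycle. Real fatigue growth accelerates with crack
#    length, so early growth was slower and late growth faster than this.
```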

By the final flight of 232 on 19 July 1989, both General Electric and the Safety Board believed the crack at the surface of the bore was almost half an inch long.41 This finding exonerated the titanium producers, since interior faults, especially one with no actual cavity, were much harder to find. It almost exonerated General Electric because their ultrasonic test would not have registered such a filled interior cavity with no cracks, and their etching test was performed before the fan had been machined to its final shape. By contrast, the NTSB laid the blame squarely on the United Airlines San Francisco maintenance team. In particular, the report aimed its cross hairs on the inspector who last had the fan on the wire in February 1988 for the Fluorescent Penetrant Inspection. At that time, 760 cycles before the fan disk disintegrated, the Safety Board judged that the surface crack would have grown to almost half an inch. They asked: why didn’t the inspector see the crack glowing under the illumination of the ultraviolet lamp?42 The drive to localization had reached its target. We see in our mind’s eye an inculpatory snapshot: the suspended disk, the inspector turning away, the half-inch glowing crack unobserved.

United Airlines’ engineers argued that stresses induced by rotation could have closed the crack, or perhaps the shot peening process had hammered it shut,

preventing the fluorescent dye from entering.43 The NTSB was not impressed by that defense, and insisted that the fluorescent test was valid. After all, chemical analysis had shown penetrant dye inside the half-inch crack found in the recovered fan disk, which meant it had penetrated the crack. So again: why didn’t the inspector see it? The NTSB mused: the bore area rarely produces cracks, so perhaps the inspector failed to look intently where he did not expect to find anything. Or perhaps the crack was obscured by powder used in the testing process. Or perhaps the inspector had neglected to rotate the disk far enough around the cable to coat and inspect all its parts. Once again, a technological failure became a “human factor” at the root of an accident, and the “performance of the inspector” became the central issue. True, the Safety Board allowed that the UA maintenance program was otherwise “comprehensive” and “based on industry standards.” But non-destructive inspection experts had little supervision and not much redundancy. The CRM equivalent conclusion was that “a second pair of eyes” was needed (to ensure advocacy and inquiry). For just this reason the NTSB had come down hard on human factors in the inspection program that had failed to find the flaws leading to the Aloha Airlines accident in April 1988.44 Here then was the NTSB-certified source of flight 232’s demise: a tiny misfiring in the microstructure of a titanium ingot, a violated inspection procedure, a humanly-erring inspector. And, once again, the NTSB produced a single cause, a single agent, a violated protocol in a fatal moment.45

But everywhere the report’s trajectory towards local causation clashes with its equally powerful draw towards the many branches of necessary causation; in a sense, the report unstably disassembled its own conclusion. There were safety valves that could have been installed to prevent the total loss of hydraulic liquid, screens that would have slowed its leakage. Engineers could have designed hydraulic lines that would have set the tubes further from one another, or devised better shielding to minimize the damage from “liberated” rotating parts. There were other ways to have produced the titanium – as, for example, the triple-vacuum heating (designed to melt away hard alpha defects) that went into effect mere weeks after the fateful heat number 8283. Would flight 232 have proceeded uneventfully if the triple-vacuum heating had been implemented just one batch earlier? There are other diagnostic tests that could have been applied, including the very same immersion ultrasound that GEAE used – but applied to the final machined part. After all, the NTSB report itself noted that other companies were using final-shape macroetching in 1971, and the NTSB also contended that a final-shape macroetching would have caught the problem.46 Any list of necessary causes – and one could continue to list them ad libitum – ramified in all directions, and with this dispersion came an ever-widening net of agency. For example, in a section labeled “Philosophy of Engine/Airframe Design,” the NTSB registered that in retrospect design and certification procedures should have “better protected the critical hydraulic systems” from flying debris. Such a judgment immediately dispersed both agency and causality onto the entire airframe, engine, and regulatory apparatus that created the control mechanism for the airplane.47

At an even broader level of criticism, the Air Line Pilots Association criticized the very basis of the “extremely improbable design philosophy” of the FAA. This “philosophy” was laid out in the FAA’s Advisory Circular 25.1309-1A of 21 June 1988, and displayed graphically in its “Probability versus Consequence” graph (figure 11) for aircraft system design.48 Not surprisingly, the FAA figured that catastrophic failures ought to be “extremely improbable” (by which they meant less likely than one in a billion) while nuisances and abnormal procedures could be “probable” (1 in a hundred thousand). Recognizing that component failure rates were not easy to render numerically precise, the FAA explained that this was why they had drawn a wide line on figure 11, and why they added “the expression ‘on the order of’ when describing quantitative assessments.”49 A triple hydraulic failure was supposed to lie squarely in the one in a billion range – essentially so unlikely that nothing in further design, protection, or flight training would be needed to counter it. The pilots union disagreed. For the pilots, the FAA was missing the boat when it argued that the assessment of failure should be “so straightforward and readily obvious that… any knowledgeable, experienced person would unequivocally conclude that the failure mode simply would not occur, unless it is associated with a wholly unrelated failure condition that would itself be catastrophic.” For as they pointed out, a crash like that of 232 was precisely a catastrophic failure in one place (the engine) causing one in another (the flight control system). So while the hydraulic system might well be straightforwardly and obviously proof against independent failure, a piece of flying titanium could knock it out even if all three levels of pumps were churning away successfully. Such externally induced failures of the hydraulic system had, they pointed out, already occurred in a DC-10 (Air Florida), a 747 (Japan Air Lines) and an L-1011 (Eastern). “One in a billion” failures might be so in a make-believe world where hydraulic systems flew by themselves. But they don’t. Specifically, the pilots wanted a control system that was completely independent of the hydraulics. More generally, the pilots questioned the procedure of risk assessment. Hydraulic systems do not fly alone, and because they don’t, any account of causality and agency must move away from the local and into the vastly more complex world of systems interacting with systems.50 The NTSB report – or more precisely one impulse of the NTSB report – concurred: “The Safety Board believes that the engine manufacturer should provide accurate data for future designs that would allow for a total safety assessment of the airplane as a whole.”51 But a countervailing impulse pressed agency and cause into the particular and localized.
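For concreteness, the probability bands the text paraphrases can be written as a classification rule. This is a sketch of the AC 25.1309-1A categories as described here, nothing more; the thresholds are per-flight-hour orders of magnitude, and the circular itself warns that they are not sharp lines (hence the wide line in figure 11):

```python
def classify_failure_condition(prob_per_flight_hour: float) -> str:
    """Map a failure probability to the advisory circular's coarse bands,
    as paraphrased in the text."""
    if prob_per_flight_hour <= 1e-9:
        return "extremely improbable"   # where catastrophic effects must sit
    if prob_per_flight_hour <= 1e-5:
        return "improbable"
    return "probable"                   # nuisances, abnormal procedures

print(classify_failure_condition(1e-10))  # extremely improbable
print(classify_failure_condition(1e-7))   # improbable
```

The pilots' objection was not to the arithmetic but to its scope: an engine burst that takes the hydraulics with it makes the independent-failure probabilities beside the point.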

When I say that instability lay within the NTSB report it is all this, and more. For contained in the conclusions to the investigation of United 232 was a dissenting opinion by Jim Burnett, one of the lead investigators. Unlike the majority, Burnett saw General Electric, Douglas Aircraft and the Federal Aviation Administration as equally responsible.

I think that the event which resulted in this accident was foreseeable, even though remote, and that neither Douglas nor the FAA was entitled to dismiss a possible rotor failure as remote when reasonable and feasible steps could have been taken to “minimize” damage in the event of engine rotor failure. That additional steps could have been taken is evidenced by the corrections readily made, even as retrofits, subsequent to the occurrence of the “remote” event.52


Figure 11. Probability Versus Consequence. Source: UAL 232 Docket, U.S. Department of Transportation, Federal Aviation Administration, “System Design and Analysis,” 6/21/88, AC No. 25.1309-1A, fiche 7, p. 7.

Like a magnetic force from a needle’s point, the historical narrative finds itself drawn to condense cause into a tiny space-time volume. But the narrative is constantly broken, undermined, derailed by causal arrows pointing elsewhere, more globally towards aircraft design, the effects of systems on systems, towards risk-assessment philosophy in the FAA itself. In this case that objection is not implicit but explicit, and it is drawn and printed in the conclusion of the report itself.

Along these same lines, I would like, finally, to return to the issue of pilot skill and CRM that we examined in the aftermath of Air Florida 90. Here, as I already indicated, the consensus of the community was that Haynes, Fitch, Dvorak, and Records did an extraordinary job in bringing the crippled DC-10 down to the threshold of Sioux City’s runway 22. But it is worth considering how the NTSB made the determination that they were not, in fact, contributors to the final crash landing of Flight 232. After the accident, simulators were set up to mimic a total, triple hydraulic failure of all control surfaces of the DC-10. Production test pilots were brought in, as were line DC-10 pilots; the results were that flying a machine in
that state was simply impossible; the skills required to manipulate power on the engines in such a way as to control simultaneously the phugoid oscillations, airspeed, pitch, descent rate, direction, and roll were quite simply “not trainable.” While individual features could be learned, “landing at a predetermined point and airspeed on a runway was a highly random event”53 and the NTSB concluded that “training… would not help the crew in successfully handling this problem. Therefore, the Safety Board concluded that the damaged airplane, although flyable, could not have been successfully landed on a runway with the loss of all hydraulic flight controls.” “[U]nder the circumstances,” the Safety Board concluded, “the UA flightcrew performance was highly commendable, and greatly exceeded reasonable expectations.”54 Haynes himself gave great credit to his CRM training, saying it was “the best preparation we had.”55

While no one doubted that flight 232 was an extraordinary piece of flying, not everyone concurred that CRM ought to take the credit. Buck, ever dissenting from the CRM catechism, wrote that he would wager, whatever Haynes’s view subsequently was, that Haynes had the experience to handle the emergency of 232 with or without the aid of earthbound psychologists.56 But beyond the particular validity of cockpit resource management, the reasoning behind the NTSB’s satisfaction with the flightcrew is worth reviewing. For again, the Safety Board used post hoc simulations to evaluate performance. In the case of Air Florida Flight 90, the conclusion was that the captain could have aborted the takeoff safely, and so he was condemned for not aborting; because the simulator pilots could fly out of the stall by powering up quickly, the captain was damned for not having done so. In the case of flight 232, because the simulator-flying pilots were not able to land safely consistently, the crew was lauded. Historical re-enactments were used differently, but in both cases functioned to confirm the localization of cause and agency.

Rolls-Royce – Compressor Bleed

Rolls-Royce initially adopted still a third approach to solving the high pressure-ratio compressor problem. After experimenting with variable stator vanes, they elected to employ only variable inlet guide vanes, bleeding off flow from the middle stages of the compressor during off-design operation in order to limit the flow entering the rear stages. The principal engine produced with this approach, the Avon, went through several versions. The 16-stage compressor in one version produced an overall pressure-ratio of 8.5 to 1 (for an average pressure-ratio of 1.14 per stage), while a later version produced a pressure-ratio of 10 to 1 (for an average of 1.15 per stage).26 Commercial versions of the Avon powered the ill-fated Comet and the highly successful Caravelle.
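The quoted per-stage figures follow directly from the overall pressure ratios; a quick check of the arithmetic (my sketch, not from the original text):

```python
# The average pressure ratio per stage is the n-th root of the
# overall compressor pressure ratio.
for name, overall, stages in [("Avon (early)", 8.5, 16),
                              ("Avon (later)", 10.0, 16)]:
    print(f"{name}: {overall ** (1.0 / stages):.2f} average pressure ratio per stage")
# -> 1.14 and 1.15, matching the figures in the text.
```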

In the early 1950s Rolls-Royce developed a new, larger engine, the Conway, that solved the compressor problem in a new way. The Conway, shown in Figure 7, was a two-spool engine, with 6 stages in its low-pressure compressor and 9 stages in its high-pressure compressor. Its overall pressure-ratio was 12 to 1 (for an average pressure-ratio of 1.18 per stage). Like the Avon, the Conway had flow bled off from the middle of the compressor in order not to overload the rear stages. In the Conway, however, the flow bled off from the tip of the low-pressure compressor became bypass flow, adding to the thrust of the engine, in essentially the same manner as in the De Havilland engine from the 1940s discussed earlier. The Conway thereby became the first bypass engine to enter flight service, operating at a bypass ratio of 0.6 – i.e., three-eighths of the total flow bypassed the gas generator. The bypass flow accomplished three things: (1) it provided cooling of the gas generator casing; (2) its lower exhaust velocity reduced exhaust noise, which was becoming an increasing concern in commercial aviation; and (3) it


Figure 7. Rolls Royce Conway bypass engine RC03, early 1950s. First bypass engine to enter service. Note bypass of cool, compressed air around remainder of gas generator. [Wilde, cited in text.]

improved overall propulsion efficiency, gaining more thrust per unit fuel. Rolls tended to emphasize the first two of these in their efforts to sell the Conway, for the bypass ratio was too small to produce a dramatic improvement in propulsion efficiency. Nevertheless, the improvement was there. Scaled-up versions of the Conway, producing more than 17,000 pounds of thrust, powered some of the advanced 707s27 and DC-8s, as well as the Vickers VC-10.
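The Conway's own figures check out the same way, including the conversion from bypass ratio to bypass fraction (my arithmetic; "bypass ratio" is taken in the usual sense of bypass mass flow divided by gas-generator mass flow):

```python
# 6 low-pressure + 9 high-pressure stages, overall pressure ratio 12:1.
conway_per_stage = 12.0 ** (1.0 / (6 + 9))   # -> ~1.18 per stage
bpr = 0.6
bypass_fraction = bpr / (1.0 + bpr)          # -> 0.375, i.e., three-eighths
print(f"Conway average stage pressure ratio: {conway_per_stage:.2f}")
print(f"Fraction of total flow bypassing the gas generator: {bypass_fraction:.3f}")
```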

Pointing to the arbitrariness of restricting the designation “fan” to no more than 2 or 3 stages, Rolls-Royce has long argued that the Conway has claim to being at least the immediate progenitor of the turbofan engines that entered service in the early 1960s, if not the first turbofan.28 This underscores the futility of worrying about firsts here. The more important question is how the Conway fit into the evolutionary development of the turbofan. Over time, a sequence of incremental advances to the Conway in “normal design” might well have reduced the number of low-compressor stages pressurizing the bypass flow and increased the bypass ratio, resulting in an engine little different from P&W’s first turbofans. This “gradualist” evolution, however, is a history that might have been, not what happened. The turbofan engines that entered service in the early 1960s and established the turbofan’s dominant place in aviation did not evolve from the Conway, but instead emerged along a very different sort of pathway.

GENERATING NEW KNOWLEDGE, 1946-1948

With the end of the war, research on supersonic aerodynamics, both theoretical and experimental, began in earnest. Despite the daunting uncertainties of flight through the speed of sound – “breaking the sound barrier” in the popular terminology – the newly available jet and rocket engines made supersonic flight at least imaginable. A feeling prevailed among research managers and research workers alike that the time had come for serious study of supersonic problems. My own work at Ames in the period discussed here was in the new 1- by 3-ft Supersonic Wind Tunnel Section, where I engaged primarily in wind-tunnel experiments and in comparison between experiment and theory. At the same time, a great deal of theoretical work was going on in other parts of the laboratory. I shall discuss our group’s activities under the same headings as before.

Theory – To design supersonic aircraft, airfoil theory, however accurate, would hardly be sufficient; actual airplanes have finite-span wings. Because of the three-dimensional complexity of such problems, little could be hoped for here beyond a linear theory, which in effect assumes small disturbances from the free stream and hence thin wings at small angles of attack. Fortunately, physical concepts and mathematical tools for such linear approximation had long been established from study of acoustic phenomena and the associated wave equation. With potential utility as motivation, a vast three-dimensional extension of Ackeret’s two-dimensional linear theory of 1925 appeared in the last half of the 1940’s. In this rapid growth, duplication was inevitable within and between aeronautically advanced countries. Here I deal only with work having direct bearing on our study at Ames.

Initial influence on our thinking came from the findings of Robert Jones of the NACA and Allen Puckett of the California Institute of Technology. Jones, working at the NACA’s Langley Aeronautical Laboratory in Virginia and using a linear approach of his own devising, was first in the United States to conceive (in early 1945) of the beneficial effects of wing sweepback at high speeds. He continued to elaborate his exciting and original ideas at Langley and after moving to Ames in August 1946. Puckett, working at about the same time, used a method employed on bodies of revolution in the early 1930’s by Theodore von Karman and his Caltech student Norton Moore. At Karman’s suggestion, Puckett extended this method to the zero-lift drag of triangular wings, with special attention to the influence of the sweepback angle of the leading edge and the chordwise location of maximum thickness. His results attracted considerable notice when presented at an aeronautical meeting in New York in January of 1946. Developments such as these could not help but catch the attention of the Ames Theoretical Aerodynamics Section under Max Heaslet; and he, Harvard Lomax, and their coworkers were soon adding to the flood of linear theory. A body of potentially useful theory was thus appearing just as experimental work was beginning in earnest.9

Qualitative concepts from the linear theory are important for our later comparisons. Figure 3 concerns the behavior of three flat lifting surfaces of representative planform (such surfaces being sufficient for our general points). Instead of propagating upstream and throughout the field as in subsonic flow, the pressure signal from a disturbance in a supersonic flow is confined, in the linear approximation, to the interior of a “Mach cone” – a circular cone with axis aligned with the free stream and apex angle a decreasing function of the free-stream Mach number M0. In the figure, the trace of significant Mach cones in the plane of the lifting surfaces at a fixed M0 is shown by the dashed lines. We see that the effect of the tips on the straight wing A is confined to small triangular regions beginning at the leading edge. The remaining, dotted region of the wing is, so to speak, unaware


Fig. 3. Flat lifting surfaces in supersonic flow according to linear theory.

of the presence of the tips, and has the constant fore-and-aft lift distribution characteristic of two-dimensional flow (compare, for example, the uniform vertical distance between the linear-theory curves for the upper and lower surfaces in figure 2). On the moderately swept wing B in the figure, the additional effect of the wing root is, like that of the tips, confined to a finite region aft of the leading edge. It turns out that the flow in the dotted region is again effectively two-dimensional and the lift distribution correspondingly constant. The highly swept wing C has its leading edge entirely within the region of influence of the wing root, and no regions of two-dimensional flow exist. Interestingly, the lift distribution here turns out to be similar in its general features to that given by linear theory in two-dimensional subsonic flow – infinite at the leading edge and decreasing to zero at the trailing edge. Though the linear behavior of figure 3 is approximate, there was reason to believe that the nonlinear inviscid situation would be at least qualitatively similar.
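The geometry behind these three cases is a one-line computation. A sketch of my own formulation (not from the original text): disturbances are confined to Mach cones of half-angle mu = asin(1/M0), so a leading edge swept behind the Mach line from the apex (sweep greater than 90 - mu degrees) lies inside the root's region of influence, the "subsonic leading edge" case of wing C.

```python
import math

def mach_angle_deg(m0: float) -> float:
    """Half-angle of the Mach cone, mu = asin(1/M0), in degrees."""
    return math.degrees(math.asin(1.0 / m0))

def leading_edge_type(sweep_deg: float, m0: float) -> str:
    """A leading edge swept behind the Mach line behaves 'subsonically'."""
    return "subsonic" if sweep_deg > 90.0 - mach_angle_deg(m0) else "supersonic"

m0 = 1.53  # the fixed test Mach number of the Ames tunnel, used below
print(f"Mach angle at M0 = {m0}: {mach_angle_deg(m0):.1f} deg")
for sweep in (0.0, 30.0, 60.0):
    print(f"sweep {sweep:>4.0f} deg -> {leading_edge_type(sweep, m0)} leading edge")
# At M0 = 1.53 the Mach line sits ~49 deg back; sweeps beyond that
# put the leading edge inside the root's region of influence.
```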

Experiment – The mid-1940s saw construction of numerous supersonic wind tunnels in the United States and other countries. The considerable and demanding complexity of supersonic as compared with subsonic tunnels can be found described elsewhere.10 The Ames supersonic tunnel, construction of which began in 1944, was a closed-return, variable-pressure facility powered by centrifugal compressors totaling 10,000 horsepower. These characteristics and its 1-foot-wide by 3-foot-high test section made it the NACA’s, and one of the country’s, first supersonic tunnels of adequate size and versatility for comprehensive aerodynamic testing. Design of the tunnel drew on findings from smaller experimental tunnels at Caltech and the NACA’s Langley Laboratory, small-scale tests of our own, and the little we knew of the tunnels at Zurich and Guidonia. (Our knowledge of these was not as great as it could have been, thanks to the limited attention given in the United States to the proceedings from the Volta conference. Existence of the more advanced tunnels at Peenemünde was still unknown.) I participated in design of the tunnel and was assigned supervisory responsibility for it and the activities of the 1- by 3-ft Wind Tunnel Section when operation began in late 1945.11 The group, typical of a wind-tunnel staff at Ames, numbered around 35 people, of which 20 or so were research engineers.

Just as theoretical work requires mathematical techniques, experiment requires instrumentation. To measure forces on a model in the new tunnel, our group developed a new support and balance system that simplified such arrangements. This system supported a model from the rear on a slender rod (a “sting”) attached to a long, slender, fore-and-aft beam. The beam in turn was supported inside a housing that shielded it from the airstream and that could be adjusted angularly by an electric drive to change the model’s angle of attack. Motion of the beam in relation to the housing was constrained by small, stiff cantilever springs equipped with electric-resistance strain gages. These tiny gages, which had only recently been invented for structural testing, were made of a back-and-forth winding of fine wire cemented to the springs; they measured the deflections of the springs and hence the forces on them by measuring the change in electrical resistance of the wire as it was stretched by the deflection. The forces on the springs could then be used to calculate the forces on the model. It was the strain gages, in fact, that made a compact system interior to the tunnel feasible. As often happens, advance in one area of technology – structural testing – thus made possible advance in a very different one – supersonic experiment.
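The measurement chain just described reduces to a simple conversion. A conceptual sketch (all numbers hypothetical, not the Ames calibration values): the bonded wire gage converts spring strain to a resistance change through its gage factor, and a calibration constant maps strain back to the load on the balance spring.

```python
def force_from_resistance(delta_r_ohm, r_ohm=120.0, gage_factor=2.0,
                          newtons_per_unit_strain=5.0e5):
    """delta_R/R = gage_factor * strain for a bonded wire gage
    (gage factor ~2 is typical of fine-wire gages); the spring's
    calibration then converts strain to force."""
    strain = (delta_r_ohm / r_ohm) / gage_factor
    return strain * newtons_per_unit_strain

# A 0.024-ohm change on a 120-ohm gage -> strain of 1e-4 -> 50 N on the spring.
print(force_from_resistance(0.024))
```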

The wing tests in 1946-47 constituted the third and most extensive experiments thus far in the new tunnel. The move to test wings in relation to theory required approval, though hardly direction, from Ames management; with the body of theory then appearing, it was clearly the thing to do. In planning the tests, my prime concern was to explore a wide range of planforms while keeping the number of tests and accompanying theoretical calculations within doable bounds. In the end, I settled on 19 wings varying systematically in sweepback angle, taper ratio (ratio of tip chord to root chord), and aspect ratio (ratio of span to average chord). The airfoil section for most of the models was an isosceles triangle with a height of five percent of the base (the airfoil chord). The sharp leading edge, a marked departure from the blunt edge employed for subsonic flight, was known to have advantages at supersonic speeds for planforms of moderate leading-edge sweep. The isosceles-triangle section was chosen primarily to facilitate construction, the flat bottom making for easy mounting for machining. As it turned out, the cambered section brought to light some interesting, if secondary, results that would not have been encountered with an aerodynamically simpler uncambered section. At the time of the tests, the planned adjustable wind-tunnel nozzle needed to vary M0 at supersonic speeds had not been finished, and all measurements were made in a fixed nozzle at M0 = 1.53. The free-stream Mach number for the tests was thus not a variable.

A reader of my book What Engineers Know… will recognize the scheme of testing just described as an example of the method of parameter variation, which I examined in connection with the Durand-Lesley propeller studies at Stanford University. This method can be defined in general as “the procedure of repeatedly determining the performance of some material, process, or device while systematically varying the parameters that define the object of interest or its condition of operation.”12 In the Durand-Lesley work, the variable parameters were five quantities that defined the complex shape of the propeller blades, plus two quantities – the speed of the airstream and the speed of rotation of the propeller – that defined the condition of operation. In the present tests, the geometrical parameters were the three planform quantities mentioned above (supplemented by a few individual planforms and airfoil sections); the single operational parameter was the wing’s angle of attack relative to the airstream. Engineers employ such experimental parameter variation widely to supply design data in situations where theory is unavailable, unreliable, or, for one reason or another, impractical to apply. It is also employed extensively, as here, in engineering research. The method has been used so much and for so long that it has become second nature to engineers. It had been constantly before me in my student days at Stanford in the collection of Durand-Lesley propellers mounted on the wall of the wind-tunnel laboratory; at the NACA it was embedded in the culture. I and my colleagues would never have thought of it at that time as a formal method nor felt the need to give it a name.
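The method lends itself to a schematic rendering. The grid below is illustrative only; the actual study settled on 19 wings chosen for systematic coverage, not a full factorial sweep:

```python
from itertools import product

# Geometrical parameters defining the object of interest (values hypothetical).
sweep_deg = [0, 30, 45, 60]       # sweepback angle
taper = [0.0, 0.5, 1.0]           # tip chord / root chord
aspect_ratio = [2, 4]             # span / average chord

# Single operational parameter defining the condition of operation.
alpha_deg = range(-2, 11, 2)      # angle of attack

test_matrix = [
    {"sweep": s, "taper": t, "AR": ar, "alpha": a}
    for s, t, ar in product(sweep_deg, taper, aspect_ratio)
    for a in alpha_deg
]
print(len(test_matrix), "runs")   # 24 planforms x 7 angles = 168 runs
```

The point of the method is visible in the structure: performance is determined repeatedly while the defining parameters are varied one grid at a time, so that trends with sweep, taper, and aspect ratio can be read off directly.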

As usual in NACA wind-tunnel studies, a team of research engineers – in this case about ten – carried out the work. The team included test engineers running the experiments on a two-shift basis, a specialist responsible for the functioning of the new, still troublesome balance system, and an engineer who monitored the day-to-day reduction of the data to standard form. The numerical calculations for the last operation kept four or five young women busy operating electrically driven Friden mechanical calculators, the usual practice before the advent of electronic computers. Two additional engineers did the then difficult calculations of wing characteristics according to linear theory and analyzed the results in comparison with the experiments. The members of the team occupied the same or adjacent offices and exchanged experience and suggestions as part of their daily interaction. My own task, besides planning the research, was to oversee the operation, participate closely in the analysis, and handle much of the final reporting. Robert Jones, though not assigned formally to the 1-by-3 Section after his move from Langley, occupied an office across from mine, and we talked regularly. Since few people had training in supersonic aerodynamics at that time, work such as ours tended to be a young person’s game; at 29, I was the oldest in the Section and one of two with a graduate degree – a two-year Engineer’s degree for me and a one-year Master’s degree for the other. We learned as we went.

Though the planning had been exciting, running the tests and reducing the data were characteristically tedious. To carry out the tests, an experienced engineer operated the wind tunnel and other equipment, while a junior engineer recorded the meter readings from the strain gages. Though sitting side by side, they communicated by microphones and headsets because of the roar of the wind-tunnel compressors. The models could be seen through circular access windows in the sides of the tunnel’s test section, as was found useful for the boundary-layer observations to be described later. Two mechanical technicians prepared models for subsequent tests and took care of the troubleshooting and repair needed in those early years of the equipment’s operation. Reduction of the data by the young women required long hours of repeated calculations to fill the many columns of numbers leading to the standard forms (see below). Their supervising engineer, besides helping organize their effort, plotted the results in a uniform layout, sometimes detecting discrepancies that called for recalculation or retesting. A shared sense of purpose and the fact that there was no other way – plus a good deal of humor and give-and-take – made the tediousness of all this tolerable.

Intellectual excitement reappeared in the theoretical calculations and in the analysis of the results. The theoretical computations called for considerable mathematical skill and ingenuity in an area that was only then developing. The engineers doing the task kept in close touch with the people in the Theoretical Aerodynamics Section who were contributing to that development. As our work progressed, they made ongoing comparison, where possible, between the emerging theoretical findings and our accumulating test results. For my part, I looked in on the wind-tunnel testing when I could, making occasional suggestions. I also struggled to keep abreast of the theoretical work, especially the resulting comparisons, and once or twice a week I reviewed the accumulating data plots, looking especially for questionable results that might call for retesting. The entire activity was less rigidly organized than this account may make it sound, with much improvisation and a great deal of back-and-forth suggestion. To keep my review of the plots from being interrupted by other duties, I regularly took refuge in an unused upstairs room, leaving instructions with my secretary that I was not to be bothered by phone calls or otherwise. When the laboratory’s director telephoned one day, she refused to put him through; his sharp reaction caused me to add an exception to my instructions.

All work in our tunnel, including the wing study, came up for discussion in Friday-afternoon meetings among myself, the two engineers doing the theory, and the project engineers of two or three concurrent studies. These meetings led to vigorous and contentious, though friendly, debate. Although we did not think of it that way at the time, we were learning and educating each other in the complexities of supersonic aerodynamics, a field in which few people could claim broad knowledge.

I do not suggest that we were alone in the learning process. Experimental and theoretical study of supersonic flow grew rapidly at various laboratories in the period in question. In the year before the present work, researchers at the Langley Laboratory ran “preliminary” tests in their 9-by-9-inch experimental tunnel of eight triangular planforms of varying apex angle plus six sweptback wings; they compared the lift of the triangular wings with a limiting-case linear theory valid for small apex angles.13 The efforts of our team provided the first extended comparison of experimental results for systematically related wings with calculations from the full linear theory.

Comparison – The results of the study appeared in three detailed reports (originally confidential, later declassified) in late 1947 and mid-1948.14 The plots reproduced here are taken from a later summary presentation. The sampling is a limited one, chosen to highlight the relationship of theory to experiment.

Variation of lift with angle of attack normally follows a straight line, both experimentally and theoretically, at the low angles useful in practice. Figure 4 gives the measured and theoretical slope of these lines for four unswept wings of varying aspect ratio. (Lift is the upward force perpendicular to the direction of the free stream. The quantity CL on the vertical axis is a dimensionless measure of lift.) The wings, illustrated by the sketch with each test point, had a common taper ratio of 1/2; each sketch shows the trace of the Mach cone from the forwardmost point of the wing. In this and later figures, results from linear theory were provided over as wide a range as was possible at the time.

Figure 4. Effect of aspect ratio on lift-curve slope (M0 = 1.53; experiment vs. linear theory, wing alone).

The agreement between experiment and theory is seen to be excellent – too good, in fact, to be strictly true. The theory neglected viscosity and applied to the wing alone; the experiment took place in a viscous gas and involved aerodynamic interference from the slender body needed to support the model (illustrated later in figure 10). It seemed likely that these effects, probably small in the case of lift, just compensated for this family of wings. The theoretical reduction in lift-curve slope at low aspect ratios comes from a loss of lift within the Mach cones that originate at the leading edge of the wing tips; as the aspect ratio decreases, a greater fraction of the planform falls within these Mach cones, with resulting decrease in the calculated lifting effectiveness of the wing. The agreement of theory with experiment implied that such theoretical decrease in fact occurred.
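As a point of reference for the high-aspect-ratio end of figure 4, the two-dimensional limit that the curves must approach is fixed by Ackeret’s linear result for a thin airfoil of infinite span:

$$\left.\frac{dC_L}{d\alpha}\right|_{A\to\infty}=\frac{4}{\sqrt{M_0^{2}-1}}\approx 3.45\ \text{per radian}\approx 0.060\ \text{per degree}\quad(M_0=1.53).$$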

The effect of sweep on the lift-curve slope appears in figure 5 for seven wings, also of taper ratio 1/2. The sweep angle in all cases was measured at the midchord line; the wing of 43° sweepback was chosen to have its leading edge coincident with the Mach cone at M0 = 1.53. A swept-forward wing in each case was obtained from the corresponding swept-back wing by reversing the model in the support body.

The theoretical results proved symmetrical about the vertical axis between ±43°. Such symmetry had been predicted analytically for certain classes of wings, though not the kind here. Shortly after our reports were written, this initially surprising “reversibility theorem” was verified with complete generality, so the theoretical curve could have been extended to -60°. The departure from symmetry in the experimental results was conjectured to be due to aeroelastic deformation, present in the experiments but absent from the theory. Such deformation increases the angle of attack of sections near the wing tips for forward portions of sweptforward wings and decreases it for rearward portions of sweptback wings;


this difference changes the lift in opposite directions, increasing it for sweptforward wings and decreasing it for sweptback. Again, compensation of secondary effects, this aeroelastic deformation plus the two previous ones, was thought to play a role in the almost perfect agreement between experiment and theory for forward sweep.

Validity of linear theory in predicting lift must not be taken to imply validity in the prediction of lift distribution. This can be seen in the two-dimensional results of figure 2. The total lift, as indicated by the area between the upper- and lower-surface curves, is given very closely by linear theory in comparison with nonlinear shock-expansion theory, and, to a somewhat lesser degree, with experiment. The shock-expansion and experimental distributions of lift, however, are concentrated noticeably more toward the leading edge. As a consequence, the corresponding centers of lift are forward of the midchord location given by the uniform linear distribution, somewhat more so in the case of experiment thanks to the shock-wave, boundary-layer interaction near the trailing edge.

Figure 6. Effect of aspect ratio on position of center of lift (M0 = 1.53; experiment vs. linear theory, wing alone).

Observations of this kind helped explain the results of figure 6 for the family of unswept wings. The quantity on the vertical axis (whose strict definition is immaterial here) can be shown to be a close measure of the displacement of the center of lift forward of the geometric centroid of the planform (midchord at midspan for the present wings). Here, as a result again of the loss of lift within Mach cones (not illustrated) at the tips, which would be larger toward the trailing edge, linear theory shows a progressively forward displacement as the aspect ratio A is reduced. In the opposite direction, in the limit of infinite aspect ratio (A → ∞), the tips disappear and the flow over the wing becomes entirely two-dimensional; the theoretical curve must accordingly approach the linear section value of zero (i.e., midchord, cf. fig. 2) in that limit, as indeed it appeared to do. Similarly, if a theoretical curve could be calculated over the entire range of A by a three-dimensional equivalent of the shock-expansion theory, it would have to approach (cf. again fig. 2) a limit forward of midchord; the calculated value for the present isosceles-triangle section is shown in the figure. The fact that the experimental curve appeared to be approaching an asymptote somewhat above this value was consistent with the presence of shock-wave, boundary-layer interaction as before. We inferred therefore that the departure here of experiment from linear theory for all aspect ratios (despite the agreement for overall lift) came from nonlinear pressure effects and shock-wave, boundary-layer interaction through their joint influence on chordwise lift distribution. We were here doing what engineers often find necessary – using experience from a simpler and hence more theoretically analyzable case to interpret (and sometimes anticipate) the problems encountered in applying a necessarily more approximate theory to a more complicated case.

That even this may not be possible is illustrated by figure 7 for the center of lift of the swept wings. The unswept wing here is the aspect-ratio-4 wing of figure 6, for which the departure of experiment from theory was reconcilable as above. The complete disagreement in variation with angle of sweep, however, could not be reconciled on the basis of existing knowledge. Experimental studies of sweep at subsonic speeds had indicated major effects of viscosity on lift distribution, particularly at high sweep angles. Nonlinear pressure effects, however, could not be discounted here. Differences in elastic deformation between forward and backward sweep could also have greater influence on center of lift than on lift itself. As often happens with initial exploration into a new field, the findings here raised more questions than they answered.

Figure 8. Effect of sweep on minimum drag (M0 = 1.53; experiment vs. linear theory, wing alone).

Drag, the force parallel to the free stream, is influenced in a major way by viscous friction on the wing surface. Here a sampling of drag at the small angle of attack at which it is a minimum will serve our purpose. Figure 8 gives a dimensionless measure of this minimum drag for the family of swept wings. The theoretical pressure drag, like the lift-curve slope, proved symmetrical with regard to sweep angle over the range between ±43° within which drag calculations were then feasible. This was in keeping with a then recently proven theorem. The expectation (later confirmed) was that the theoretical curve, when continued, would reach its maximum for sweep in the vicinity of the Mach cone and then fall off with further sweep, again symmetrically. The experimental results showed just such overall behavior, though in view of the complexities likely to arise from viscosity and support-body interference, the near-perfect symmetry here came as a surprise. The experimental – like the theoretical – fall-off at high sweep, however, was expected, in keeping with Jones’s ideas and with what experimentalists were finding, by “stopgap” methods of varying accuracy, at supersonic and high subsonic speeds. With regard to viscous drag, the experimental point for zero sweep showed a reasonable increment beyond the theoretical pressure drag, tending to confirm the theory in this situation. Disappearance of this increment with increasing sweep in either direction, however, suggested that linear theory overestimates the pressure drag for sweep near the Mach cone. All we could do here was to point out the similarity to the then puzzling situation in two-dimensional transonic flow, where the pressure drag from linear supersonic theory becomes unreasonably high (in fact, rises without bound) as the free-stream Mach number approaches 1 from above. The similarity was supported by the fact that at free-stream Mach numbers above 1, the Mach number of the velocity component normal to the Mach cone, which coincided with the leading edge of the wing for sweep of ±43°, is likewise 1. (The difficulty in the two-dimensional case was indeed shown soon after to be due to nonlinear transonic effects inaccessible to linear theory.)
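The two-dimensional form of the difficulty can be stated compactly. For a thin section at angle of attack α, linear (Ackeret) theory gives the pressure drag coefficient

$$c_{d,\mathrm{pressure}}=\frac{4}{\sqrt{M_0^{2}-1}}\Bigl(\alpha^{2}+\overline{g'^{2}}\Bigr),$$

where $\overline{g'^{2}}$ is the mean-square slope of the airfoil surfaces measured from the chord line, averaged over upper and lower surfaces; the factor $1/\sqrt{M_0^{2}-1}$ grows without bound as $M_0$ approaches 1 from above, which is the unbounded rise just mentioned.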

The most significant results concerning drag, however, dealt with that of triangular wings. A reason for the notice given Puckett’s theoretical work appears in the lower curve in figure 9, which shows how the minimum pressure drag of a triangular wing with uncambered double-wedge section and leading edge inside the Mach cone varies as the position of maximum thickness is altered. Results of this kind suggested that the drag of such wings could be lowered significantly by placing the position of maximum thickness (i. e., the ridge line of the double wedge) well forward on the wing. To assess this encouraging finding, our tests were extended to include the two triangular wings shown by the sketches, one with maximum thickness at 50-percent chord, the other at 20-percent.

Figure 9. Effect of position of maximum thickness on minimum drag of triangular wings (M0 = 1.53; uncambered double-wedge section).

As indicated by the small circles, the experimental measurements did not come out as hoped; the 20-percent location, in fact, gave a slightly higher drag than the 50-percent. Repeated tests showed that experimental error could not be blamed, and theoretical estimates indicated that support-body interference could hardly account for the large increment above the pressure drag for either wing. Consideration of viscous friction finally suggested an explanation. As always, two kinds of friction must be considered: laminar friction, due to air flowing in a smooth, lamina-like boundary layer near the wing surface, and turbulent friction, associated with an eddying, more or less chaotic boundary-layer flow. Most wing boundary layers start out laminar and change to turbulent at some point aft of the leading edge; the location of this point is important, since a turbulent layer exerts considerably higher drag than a laminar one. Since the location of the transition point was unknown, however, the best we could do theoretically for the total drag was to add to the curve of pressure drag in figure 9 a uniform friction drag under the assumptions of completely laminar and completely turbulent boundary layers. The positions of the experimental measurements relative to the two resulting curves suggested that the proportion of laminar to turbulent flow on the two wings might be considerably different.
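The size of the stakes can be gauged from the classical flat-plate skin-friction estimates, quoted here in incompressible form for orientation only (the actual estimates would also have carried Mach-number corrections). For a chord Reynolds number $Re_c$,

$$C_{F,\mathrm{laminar}}=\frac{1.328}{\sqrt{Re_c}},\qquad C_{F,\mathrm{turbulent}}\approx\frac{0.074}{Re_c^{1/5}},$$

so that at, say, $Re_c=10^{6}$ the fully turbulent value (≈ 0.0047) is roughly three and a half times the fully laminar one (≈ 0.0013); hence the wide separation between the two bounding curves added to figure 9.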

This seemed as far as we could go until I happened, while browsing in the laboratory library, upon a report by W. E. Gray of the British Royal Aircraft Establishment describing a new “liquid-film” method he was using for experimental location of transition at subsonic speeds.15 In applying this method to our situation (after considerable developmental effort), we sprayed a model with a flat black lacquer and coated it, just before installation in the tunnel, with a liquid mixture containing mainly glycerin. Since evaporation takes place much faster in a turbulent than a laminar region, it was then a simple matter to run the tunnel (sometimes for as much as 20 minutes) until the liquid had disappeared where the boundary layer was turbulent but remained where it was laminar. By dusting the model with talcum powder, which adhered to the moist but not the dry area, the regions of laminar and turbulent flow could then be made visible.

Results for the two triangular wings appear in figure 10. With the maximum thickness at 50-percent chord, turbulent flow (the dark area) takes up only about half of the area aft of the ridge line; for the 20-percent chord location, turbulent flow occupies almost all of the considerably larger area to the rear of the ridge. As a general thing, laminar boundary layers tend to exist in regions in which the surface pressure decreases in the direction of flow, turbulent layers in regions in which it increases. Examination of the theoretically calculated pressure distributions for the two wings showed excellent correlation in both cases between these latter regions of “adverse pressure gradient” and the experimentally indicated regions of turbulent flow. Both the experimental and the detailed theoretical results thus implied a relatively larger viscous drag with the maximum thickness at 20-percent. Support-body interference prevented a decisive comparison between the experimental values of total drag and theoretical values calculated on the basis of the observed areas of laminar and turbulent flow. There could be little doubt, however, why forward displacement of the maximum thickness failed to produce the reduction predicted by inviscid theory.

Figure 10. Results of liquid-film tests on triangular wings at zero lift (minimum drag).

Following appearance of our third report, NACA headquarters in Washington instructed that I prepare a summary for a joint conference of the American Institute of the Aeronautical Sciences and the British Royal Aeronautical Society, to be held in New York City in May 1949.16 My paper (from which figures 2 through 10 here are taken) was one of two from the NACA, the other being by Floyd Thompson of the Langley Laboratory dealing with rocket and falling-body tests at transonic speeds. That headquarters saw fit to declassify some of our results for this purpose suggests an eagerness, for whatever reason, to point to NACA’s competence in the increasingly important field of supersonic (as well as transonic) research. Our full reports were declassified in 1953.

In the end, our research did not provide an immediate tool for design – nor did we expect it to at this early stage in a complicated and unexplored area of engineering knowledge. Comparison of linear theory with experiment did give confidence in the theory’s potential as a quantitative design tool for certain properties of certain classes of wings. For other properties or other wings, differences between experiment and the findings from the linear inviscid approximation could be estimated or otherwise reconciled. In still other instances, the results posed more questions than they answered. In general, a great deal more would need to be done to achieve anything that could be included under the heading, mentioned in the introduction, of “theoretical design methods” – that is, reasonably general methods of quantitative use to the aircraft designer. The outcome overall was what one might expect of a first exploration in so new a field.

THE UNSTABLE SEED OF DESTRUCTION

We now come to a point where we can begin to answer the question addressed at the outset. A history of a nearly punctiform event, conducted with essentially unlimited resources, yields a remarkable document. Freed by wealth to explore at will, the NTSB could mock up aircraft or recreate accidents with sophisticated simulators. Forensic inquiries into metallurgy, fractography, and chemical analysis have allowed extraordinary precision. Investigators have tracked documents and parts back two decades, interviewed hundreds of witnesses, and in some cases ferreted out real-time photographs of the accident in progress. But even when the evidence is in, the trouble only just begins. For deep in the ambition of these investigations lie contradictory aims: inquiry into the myriad of necessary causes keeps any single cause or single cluster of causes from fully explaining the event. At the same time, the drive to regain control over the situation, to present recommendations for the future, to lodge moral and legal responsibility all urge the narrative towards a condensed causal account. Agency is both evaporated and condensed in the investigative process. Within this instability of scale the conflict between undefinable skill and fixed procedure is played out time and again. On the flight deck and in the maintenance hangars, pilots and technicians are asked at one and the same time to use an expansive, protocol-defying judgment and to follow restricted set procedures. Both impulses – towards diffused and localized accounts – are crucial. We find in systemic or network analysis an understanding of the connected nature of institutions, people, philosophies, professional cultures, and objects. We find in localization the prospect of immediate and consequential remediation: problems can be posed and answered by pragmatic engineering. To be clear: I do not have the slightest doubt that procedural changes based on accident reports have saved lives. At the same time, it is essential to recognize, in such inquiries and in technological-scientific history more generally, the inherent strains between these conflicting explanatory impulses.

In part, the impulse towards condensation of cause, agency, and protocol in the final “probable cause” section of the accident report emerges from an odd alliance among the sometimes competing groups that contribute to the report. The airplane industry itself has no desire to see large segments of the system implicated, and pushes for localization both to solve problems and to contain litigation. Following the United 232 crash, General Electric (for example) laid the blame on United’s fluorescent penetrant inspection and ALCOA’s flawed titanium.57 Pilots have a stake in maintaining the status of the captain as fully in control of the flight: their principal protest in the 232 investigation was that the FAA’s doctrine of “extremely improbable” design philosophy was untenable. In particular, the pilots lobbied for a control system for wide-body planes that would function even if all hydraulic fluid escaped.58 But just in the measure that the pilots remain authors of the successful mission, they also have their signatures on the accident, and their recommendation was aimed at ensuring that a local fix be secured that would keep their workplace control uncompromised. Government regulators, too, have an investment in a regulatory structure aimed at local causes admitting local solutions. Insofar as regulations protect safety, the violation of regulations enters as a potential causal element in the explanation of disaster. Powerful as this confluence of stakeholders can be in focusing causality to a point, it is not the whole of the story.

Let us push further. In the 1938 Civil Aviation Act that enjoined the Civil Aeronautics Authority to create accident reports, it is specified that the investigation should culminate in the ascription of a “probable cause” of the accident.59 Here “probable cause” is a legal concept, not a probabilistic one. Indeed, while probability plays a vital role in certain sectors of legal reasoning, “probable cause” is not one of them. Instead, “probable cause” issues directly from the Fourth Amendment of the U.S. Constitution, prohibiting unreasonable searches and seizures, probable cause being needed for the issuance of a warrant. According to Fourth Amendment scholar Wayne R. LaFave, the notion of probable cause is never defined explicitly either in the Amendment itself or in any of the federal statutory provisions; it is a “juridical construct.” In one case of 1925, the court ruled that if a “reasonably discreet and prudent man would be led to believe that there was a commission of the offense charged,” then, indeed, there was “probable cause justifying the issuance of a warrant.”60 Put bluntly in an even older (1813) ruling, probable cause was not “proof” in any legally binding sense; required were only reasonable grounds for belief. “[T]he term ‘probable cause’ … means less than evidence which would justify condemnation.”61

Epistemically and morally, probable cause inculpates but does not convict. It points a finger and demands explanation of the evidence. Within the framework of accidents, however, in only the rarest of cases does malicious intent figure in the explanation, and this very circumstance brings forward the elusive notion of “human error.” Now while the notion of probable cause had its origins in American search and seizure law, international agreements rapidly expanded its scope. Delegates from many countries assembled in Chicago at the height of World War II to create the Convention on International Civil Aviation. Within that legal framework, in 1951 the Council of the International Civil Aviation Organization (ICAO) adopted Annex 13 to the Convention, an agreement specifying standards and practices for aircraft accident inquiries. These were not binding, and considerable variation existed among participating countries.

Significantly, though ICAO documents sometimes referred to “probable cause” and at other times to “cause,” their meanings were very similar – not surprising, since the ICAO reports were so directly modeled on the American standards. ICAO defined “cause,” for example, in 1988 as “action(s), omission(s), event(s), condition(s), or a combination thereof, which led to the accident or incident.”62 Indeed, ICAO moved freely in its documents between “cause” and “probable cause,” and for many years ICAO discussion of cause stood extremely close to (and was no doubt modeled on) American practice.63 But to understand fully the relation between NTSB and ICAO inquiries, it would be ideal to have a case where both investigations inquired into a single crash.

Remarkably, there is such an event, precipitated by the crash of a Simmons Airlines/American Eagle Avions de Transport Regional-72 (ATR-72) on 31 October 1994 in Roselawn, Indiana. On one side, the American NTSB concluded that the probable cause of the accident was a sudden and unexpected aileron hinge-moment reversal, precipitated by a ridge of ice that accumulated beyond the de-ice boots. This, the NTSB investigators argued, took place 1) because ATR failed to notify operators how freezing precipitation could alter stability and control characteristics and associated behaviors of the autopilot; 2) because the French Directorate General for Civil Aviation failed to exert adequate oversight over the ATR-72; and 3) because the French Directorate General for Civil Aviation failed to provide the Federal Aviation Administration with adequate information on previous incidents and accidents with the ATR in icing conditions.64 Immediately the French struck back: it was not the French plane, they argued, it was the American crew. In a separate volume, the Bureau Enquetes Accidents submitted, under the provisions of ICAO Annex 13, a determination of probable cause that, in its content, stood in absolute opposition to the probable cause adduced by the National Transportation Safety Board. As far as the French were concerned, the deadly ridge of ice was due to the crew’s prolonged operation of their flight in a freezing drizzle beyond the aircraft’s certification envelope – with an airspeed and flap configuration altogether incompatible with the Aircraft Operating Manual.65

In both American and French reports we find the same instability of scale that we have already encountered in Air Florida 90 and United 232. On one hand, both Roselawn reports zeroed in on localized causes (though the Americans fastened on a badly designed de-icing system and the French on pilot error), and both reports pulled back out to a wider scale as they each pointed a finger at inadequate oversight and research (though the Americans fastened on the French Directorate General and the French on the American Federal Aviation Administration). For our purposes, adjudicating between the two versions of the past is irrelevant. Rather, I want to emphasize that the tension between localized and diffused causation remains a feature of all these accounts, even though some countries conduct their inquiries through judicial rather than civil authority (and some, such as India, do both). Strikingly, many countries, including the United States, have become increasingly sensitive to the problematic tension between condensed and diffused causation – contrast, for example, the May 1988 and July 1994 versions of Annex 13:

May 1988: “State findings and cause(s) established in the investigation.”

July 1994: “List the findings and causes established in the investigation. The list of causes should include both the immediate and the deeper systemic causes.”66

Australia simply omits a “cause” or “probable cause” section. And in many recent French reports – such as the one analyzing the January 1992 Airbus 320 crash near Strasbourg – causality as such has disappeared. Does this mean that the problem of causal instability has vanished? Not at all. In the French case, the causal conclusion is replaced by two successive sections: one, “Mechanisms of the Accident,” aimed specifically at local conditions; the second, “Context of Use” (“Contexte de l’exploitation”), directed the reader to the wide circle of background conditions.67 The drive outwards and inwards now stood, explicitly, back to back. Scale and agency instability lie deep in the problematic of historical explanation, and they survive even the displacement of the specific term “cause.”

There is enormous legal, economic, and moral pressure to pinpoint cause in a confined spacetime volume (an action, a metal defect, a faulty instrument). A frozen pitot tube, a hard alpha inclusion, an ice-roughened wing, a failure to throttle up, an overextended flap – such confined phenomena bring closure to catastrophe, restrict liability, and lead to clear recommendations for the future. Steven Cushing has written effectively, in his Fatal Words, of phrases, even individual words, that have led to catastrophic misunderstandings.68 “At takeoff,” with its ambiguous reference to a place on the runway and to an action in process, lay behind one of the greatest aircraft calamities when two jumbo jets collided in the Canary Islands. Effectively if not logically, we want the causal chain to end. Causal condensation promises to close the story. As the French Airbus report suggests, over the last twenty-five years the accident reports have reflected a growing interest in moving beyond the individual action, establishing a mesoscopic world in which patterns of behavior and small-group sociology could play a role. In part, this expansion of scope aimed to relieve the tension between diagnoses of error and culpability. To address the dynamics of the small “cockpit culture,” the Safety Board, the FAA, the pilots, and the airlines brought in sociologists and social psychologists. In the Millsian world of CRM that they collectively conjured, the demon of unpredictable action in haste, fear, or boredom is reduced to a problem of information transfer. Inquire when you don’t know, advocate when you do, resolve differences, allocate resources – the psychologists urged a new set of attitudinal corrections that would soften the macho pilot, harden the passive one, and create coordinated systems. Information, once blocked by poisonous bad attitudes, would be freed, and the cockpit society, with its benevolent ruling captain, assertive, clear-thinking officers, and alert radio-present controllers, would outwit disaster. As we saw, under the more sociological form of CRM, it has been possible, even canonical, to re-narrate crashes like Air Florida 90 and United 232 in terms of small-group dynamics. But beyond the cockpit scale of CRM, sociologists have begun to look at larger “organizational cultures.” Diane Vaughan, for example, analyzed the Challenger launch decision not in terms of cold O-rings or even in the language of managerial group dynamics, but rather through organizational structures: faulty competitive, organizational, and regulative norms.69 And James Reason, in his Human Error, invoked a medical model in which ever-present background conditions located in organizations are like pathogens borne by an individual: under certain conditions disease strikes. Reason’s work, according to Barry Strauch, Chief of the Human Performance Division at the NTSB, had a significant effect in bolstering attention to systemic, organizational dynamics as part of the etiology of accidents.70

Just as lines of causation radiate outwards from individual actions through individuals to small collectives, so too is it possible to pull the camera all the way back to a macroanalysis that puts in narrative view the whole of the technological infrastructure. Roughly speaking, this was Charles Perrow’s stance in his Normal Accidents.71 For Perrow, given human limitations, it was simply inevitable that tightly coupled, complex, dangerous technologies would have component parts that interact in unforeseen and threatening ways.

Our narration of accidents slips between these various scales, but the instability goes deeper in two distinct ways. First, it is not simply that the various scales can be studied separately and then added up. Focusing on the cubic millimeter of hard alpha inclusion forces us back to the conditions of its presence, and so to ALCOA, Titanium Metals Inc., General Electric, or United Airlines. The alpha inclusion takes us to government standards for aircraft materials, and eventually to the whole of the economic-regulative environment. This scale-shifting undermines any attempt to fix a single scale as the single “right” position from which to understand the history of these occurrences. It even brings into question whether there is any single metric by which one can divide the “small” from the “large” in historical narration.

Second, throughout these accident reports (and I suspect more generally in historical writing), there is an instability between accounts terminating in persons and those ending with things. At one level, the report of United 232 comes to rest in the hard alpha inclusion buried deep in the titanium. At another level, it fingers the maintenance technician who did not see fluorescent penetrant dye glowing from a crack. Read one way, the report on Air Florida flight 90 spotlights the frozen pitot tube that provided a false thrust indication; read another way, the 737’s collision with the Fourteenth Street Bridge was due to the pilot’s failure to de-ice adequately, to abort the takeoff, or to firewall the throttle at the first sign of stall. Protocol and judgment stood in a precarious and unstable equilibrium. What to the American investigators of the Roselawn ATR-72 crash looked like a technological failure appeared to the French team as a human failing.

Such a duality between the human and the technological is general. It is always possible to trade a human action for a technological one: failure to notice can be swapped against a system failure to make noticeable. Conversely, every technological failure can be tracked back to the actions of those who designed, built, or used that piece of the material world. In a rather different context, Bruno Latour and Michel Callon have suggested that the non-human be accorded equal agency with the human.72 I would rather bracket any fixed division between human and technological in our accounts and put it this way: it is an unavoidable feature of our narratives about human-technological systems that we are always faced with a contested ambiguity between human and material causation.

Though airplane crashes are far from the world of the historian of science and technology, or that of the general historian interested in technology, the problems that engaged the attention of the NTSB investigators are familiar ones. We historians also want to avoid ascribing inarticulate confusion to the historical actors about whom we write – we seek a mode of reasoning in terms that make sense of the actors’ understanding. We try to reconstruct the steps of a derivation of a theorem or the construction of an object just as NTSB investigators struggle to recreate Air Florida 90’s path to the Fourteenth Street Bridge. We interpret the often castaway, fragmentary evidence of an incomplete notebook page or overwritten equation; they argue over the correct interpretation of “really cold” or “that’s not right.”

But the heart of the similarity lies elsewhere, not just in the hermeneutics of interpretation but in the tension between the condensation and diffusion of historical explanation. The NTSB investigators, like historians, face a world that often doesn’t make sense; and our writings seek to find in it a rational kernel of controllability. We know full well how interrelated, how deeply embedded in a broader culture, scientific developments are. At the same time we search desperately to find a narrative that at one moment tracks big events back to small ones, that hunts a Copernican revolution into the lair of Copernicus’s technical objections to the impure equant. And at another moment the scale shifts to Copernicus’s neo-Platonism or his clerical humanism.73 At the micro-scale, we want to find the real source, the tiny anomaly, asymmetry, or industrial demand that eats at the scientific community until it breaks open into a world-changing discovery. Value inverted, from the epoch-defining scientific revolution to the desperate disaster, catastrophe too has its roots in the molecular: in a badly chosen word spoken to the ATC controller, in a too sharp application of force to the yoke, in a tiny, deadly alpha inclusion that spread its flaw for fifteen thousand cycles until it tore a jumbo jet to pieces.

At the end of the day, these remarkable accident reports time and time again produce a double picture printed once with the image of a whole ecological world of causation in which airplanes, crews, government, and physics connect to one another, and printed again, in overstrike, with an image tied to a seed of destruction, what the chief investigator of flight 800 called the “eureka part.” In that seed almost everyone can find satisfaction. All at once it promises that guilty people and failed instruments will be localized, identified, confined, and that those who died will be immortalized through a collective immunization against repetition through regulation, training, simulation. But if there is no seed, if the bramble of cause, agency, and procedure does not issue from a fault nucleus, but is rather unstably perched between scales, between human and non-human, and between protocol and judgment, then the world is a more disordered and dangerous place. These reports, and much of the history we write, struggle, incompletely and unstably, to hold that nightmare at bay.

NACA TRANSONIC AND SUPERSONIC COMPRESSOR RESEARCH: 1945-1955

The need to use axial, instead of centrifugal, compressors in order to attain high levels of thrust in aircraft gas turbine engines had become increasingly clear by the end of World War II.29 Unlike centrifugal compressors, however, axial compressors were proving to be difficult to design with consistency. The base point in aerodynamic design technology that had emerged by 1945 allowed efficient axial compressor stages to be designed,30 but only under the restriction that the aerodynamic demands made on the compressor remained modest. The design method in question was based to a considerable extent on empirical data from tests of some airfoil profiles in cascade31 over a limited aerodynamic range. Specifically, the pressure-rise, turning, and thermodynamic losses had been determined for these airfoils in cascade as functions of incidence conditions in two-dimensional wind-tunnel tests. Compressor blades were then formed by selecting and stacking a sequence of these airfoil profiles radially on top of one another, as if the air flowed through the blade row in a modular series of radially stacked two-dimensional blade passages. Achieving more ambitious levels of compressor performance was going to require this method to be extended, if not modified, and this in turn was going to require a substantial research effort, including extensive wind-tunnel tests of a wider range of airfoils in cascade. The engine companies conducted some research to this end – e.g., P&W carried out their own wind-tunnel airfoil cascade tests. Nevertheless, the main body of research fell to government laboratories like the National Gas Turbine Establishment in England and the National Advisory Committee for Aeronautics in the U.S.
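In outline, the method treated the blade row as a set of independent two-dimensional problems: read the measured cascade characteristics at each section’s design incidence, then stack the sections radially. A minimal sketch of that bookkeeping follows; the cascade numbers, incidences, and radii are all invented for illustration, not taken from any actual test.

```python
from bisect import bisect_left

# Hypothetical cascade test data for one airfoil family:
# incidence (deg) -> (flow turning, deg; total-pressure loss coefficient)
CASCADE_DATA = {
    -5.0: (18.2, 0.031),
     0.0: (21.5, 0.024),
     5.0: (24.1, 0.038),
}

def interp_cascade(incidence_deg):
    """Linear interpolation in the (sparse) cascade test data."""
    keys = sorted(CASCADE_DATA)
    i = min(max(bisect_left(keys, incidence_deg), 1), len(keys) - 1)
    x0, x1 = keys[i - 1], keys[i]
    f = (incidence_deg - x0) / (x1 - x0)
    (t0, l0), (t1, l1) = CASCADE_DATA[x0], CASCADE_DATA[x1]
    return t0 + f * (t1 - t0), l0 + f * (l1 - l0)

# A blade as a radial stack of 2D sections, each evaluated independently,
# as if the annulus were a series of separate two-dimensional passages.
radii_m = [0.20, 0.25, 0.30]           # hypothetical section radii
incidences = [2.0, 0.5, -1.0]          # hypothetical incidences, hub to tip
blade = [
    {"radius_m": r, "turning_and_loss": interp_cascade(i)}
    for r, i in zip(radii_m, incidences)
]
```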

The applied research program on axial compressors carried out by the NACA in the decade following World War II was especially important in advancing the state of the art. This program involved a number of diverse efforts, most of them located at the Lewis Flight Propulsion Laboratory, in Cleveland, though a few were located at the Langley Aeronautical Laboratory, in Virginia, as well. While this research program deserves a historical study unto itself, we will confine ourselves here primarily to results that ended up contributing crucially to the design of the first successful turbofan engines. We say “ended up contributing” because none of this work appears to have been aimed, at the time, at the design of turbofan engines. The goal throughout was to advance the performance of axial compressors in what were then standard aircraft gas turbines.

CONCLUDING REMARKS

I have attempted, using a case study of wings at supersonic speed, to show how research engineers built their knowledge of aerodynamics in the days before electronic computers. As in other advanced fields of engineering, the process involved the comparative, mutually illuminating use of experiment and theory, neither of which could reproduce exactly the actual problem. In theoretical work, limited ability at direct numerical computation required physical approximations and assumptions – in the present case, the customary inviscid gas, plus thin wings at small angles of attack for three-dimensional problems – to bring the calculations within the scope of analytical techniques then available. For the two-dimensional problem of airfoils, the linear, thin-wing approximation could be improved upon; even then, however, the inviscid assumption could be circumvented only by means of qualitative concepts and quantitative estimates from the independent boundary-layer theory. On the experimental side, the effect of the inevitable support-body interference could be estimated in some aspects, but then only roughly. The inaccuracies accompanying experimental measurement, which I have not gone into, also had to be considered. As in much engineering research, experimental ingenuity, theoretical capability, and analytical insight formed essential parts of the total process.

The material here illustrates clearly two parts of the threefold makeup of modern engineering research (and much design and development) pointed out in the introduction. All three parts – theory, experiment, and use – appeared in varying degree in my recent paper on the early development of transonic airfoil theory, research in which I participated later at Ames.17 To quote Constant again in regard to still another example from aeronautics, “the approaches were synergistic: discovery or design progressed faster when the three modes interacted.”18 Use in flight could not be involved in the wing research here, since supersonic aircraft were not yet available; the principle still appeared in the motivation for the study, however. It is this kind of synergism, as much as anything else, I believe, that provides the power of modern engineering generally.

My transonic paper contains a further concept pertinent here: the view of an engineering theory as an artifact – more precisely, a tool – to use in testing the performance of another artifact.19 Here the linear theory was used to test wings on paper in much the same way as the wind tunnel served to test them in a physical environment. In the present application, both tools were put to use in research for knowledge that might someday be employed in design of aircraft. They are also employed regularly side by side in the typical design process. This view – of theory and experiment as analogous artifacts for both research and design – I find useful in thinking analytically about modern engineering. It helps me to focus on theory and experiment in a parallel way in sorting out the synergistic interaction pointed to above. (Use might be looked at similarly, though I have not thought the matter through.)

As pointed out earlier, the wing research did not achieve anything broad and reliable enough to be included under the “theoretical design methods” mentioned in the introduction. Whether and to what extent the research contributed to the accompanying “general understanding” and “ways of thinking” is difficult to know; this would depend on what the audience took away from my New York talk and on how widely and thoroughly our reports were read and thought about. The story does exemplify, however, the kinds of things that make up those categories.20

“Ways of thinking” in my view comprises more or less structured procedures, short of complete calculative methods, for thinking about and analyzing engineering problems. As appears here, aeronautical engineers had long found it useful to regard the aerodynamic force on a wing in terms of lift, center of lift, and drag. This division has the virtue, among other things, of relegating the influence of viscosity, and hence the need to take it into account as a major factor, primarily to drag. Such division has been the practice for so long that engineers dealing with wings take it as almost natural. Designers of axial turbines and compressors, however, because the airfoil-like blades of their machines operate in close proximity to one another, think of the forces on them rather differently. A second example in the present work is the manner of accounting for the various performance characteristics of a wing in terms of the interplay of the inviscid pressure distribution and the viscous boundary layer, both of which can be analyzed, to a first approximation, independently. This procedure too had been around for some time, but the present example, by being fairly clear, may have added something. I have found it useful, at least, in teaching.

“General understanding” consists of the shared, less structured understandings and notions – the basic mental equipment – that engineers carry around to deal with their design and research problems. This and ways of thinking are perhaps best seen as separated, indistinctly bounded portions of a continuum rather than discrete categories. At the time of the present work, the difference in propagation of pressure signals between subsonic and supersonic flow had been understood for many years, and the concept of the Mach cone was becoming well known. Its consequences for the flow over wings were being explored, and the benefits and problems of sweepback were topics of widespread research and discussion. A feeling was developing in at least the research portion of the aerodynamic community that in some semiconscious way we “understood” something of the realities of supersonic flow. In our work at Ames, we contributed to this understanding in a small degree and advanced the knowledge of the powers and limitations of linear theory. After three years of living with supersonic wing problems, our group had acquired some of the mental equipment needed to understand and deal with such problems; the necessary ideas had become incorporated into our technical intuition. Indications later materialized that some of this was picked up from our reports by the aerodynamic community, but how much is anyone’s guess. It is these kinds of knowledge that I see under the rubric “general understanding.”

Other concerns appear here that are treated at some length in my transonic paper.21 I mention them briefly for the reader who may wish to look into them further:

(1) Our story provides examples of both the experimental and theoretical aspects of what scientist-cum-philosopher Michael Polanyi called “systematic technology,” which I take to be the same as what scholars and engineers currently speak of as engineering science. This, in Polanyi’s words, “lies between science and technology” and “can be cultivated in the same way as pure science” (Polanyi’s emphasis) but “would lose all interest and fall into oblivion” if the device or process to which it applies should for some reason cease to be found useful by society.22

(2) Our research at Ames, by requiring as many people as it did, illustrates how engineering advance is characteristically a community activity. I subscribe wholeheartedly to Edward Constant’s contention that communities committed to a given practical problem or problem area form “the central locus of technological cognition” and hence a community of practitioners provides “a primary unit of historical analysis.”23 Here the community we have examined existed entirely at Ames, where our wind-tunnel group depended critically also on the laboratory’s machine-shop, instrumentation, and electrical-maintenance sections. Externally, however, through our reports, my New York talk, and visitors who came to consult us, we were at the same time becoming increasingly a part of the international supersonic research-and-design community that was then forming.

(3) The personal motivation for some in our group came from the fact that the work was part of the job from which they earned a living. For people with greater responsibility, the work offered intellectual and experimental challenge and excitement, heightened by the potential utility of the results – typical research-engineering incentives. (The necessary administrative motive had come from discussions about our section’s overall program with our research superior, the chief of the laboratory’s High-Speed Research Division; the choice was fairly obvious, however, given our new wind tunnel and the existing state of knowledge.) Motivation of the NACA as a governmental institution flowed presumably from its desire to maintain its competitive position vis-a-vis other countries in supplying knowledge to the aircraft industry for design of supersonic airplanes, should they prove practical. Motivation overall was thus a complex mix.

(4) The laboratory’s institutional context for research could scarcely have been improved. Supervision, which was by engineers who had done (or, in the case of our section’s division chief, was still doing) research, was informed, supportive, and free of pressure; interaction with other research sections of the laboratory was encouraged. Skilled service groups provided support when called upon. My fellow research engineers and I didn’t realize how fortunate we were.

In closing, I would make one more point. The process we have seen is now in one respect a thing of the past. Thanks to large-scale digital computers, designers and research engineers today can calculate the flow over complicated shapes in detail without the inviscid assumption, mathematical linearization, or other approximation of any sort. To the categories of physical experiment, analytical theory, and actual use, we can thus add a kind of direct “numerical experiment” as a fourth instrument both in our search for aerodynamic design knowledge and in design itself.24 The resort to mathematical analysis in the way seen here is thus no longer essential. Our ability to incorporate turbulence and turbulent boundary layers into direct calculations, however, still leaves something to be desired. Where such phenomena are important, which includes most practical problems in aerodynamics, comparison between numerical and physical experiment still plays a role. Such comparison can also be important in instances of great geometrical complexity, which computers encourage aerodynamic designers to attempt.

The foregoing statements, I must emphasize, have been entirely about aerodynamics; it should not be assumed that they apply to engineering generally. In fields where analytical and numerical methods are not so advanced, experiment and use may still predominate. Overall, the situation is still very mixed. Other fields and details of the present case aside, however, the point I would emphasize for the topic of the workshop is this: To understand the evolution of flight in the twentieth century, tracing the nature and evolution of research and knowledge may be as necessary as is the study of aircraft and the people and circumstances behind them.

EPILOGUE

The work we have followed found an echo years later at the renowned Lockheed Skunk Works in southern California. In 1975, Richard Scherrer, one of the test engineers in the Ames research, headed a Skunk Works group engaged in preliminary design that would lead to the F-117A “Stealth Fighter.”25 Mathematical studies by one of Scherrer’s group had suggested that the military goal of negligible radar reflection might best be attained by a shape made up of a small number of suitably oriented flat panels. Scherrer’s memory of the Ames tests encouraged him to believe that such a startlingly unorthodox shape might in fact have acceptable aerodynamic performance. His faceted flying wing, laid out along the lines of the double-wedge triangular wings of figure 10, became known to his skeptical Skunk Works colleagues as the “Hopeless Diamond.” The idea, however, proved sound. The largely forgotten research from Ames thus contributed 30 years later to cutting-edge technology that could not have been imagined when the research was done. As in human affairs generally, serendipity plays a role in engineering.

McCOOK FIELD AND THE BEGINNINGS OF MODERN FLIGHT RESEARCH

In March of 1927, in B. Franklin Mahoney’s small San Diego manufacturing plant, the construction of Charles Lindbergh’s Spirit of St. Louis began. Less than three months later, this modest little monoplane touched off a burst of aeronautical enthusiasm that would serve as a catalyst for the nascent American aircraft industry. Just when the first bits of wood and metal that would become the Spirit of St. Louis were being fashioned into shape, another project of significance to the history of American aeronautics commenced. This was the dismantling of the experiment station of the U. S. Air Service’s Engineering Division at McCook Field, Dayton, Ohio.


Figure 1. Aerial view of the Engineering Division’s installation at McCook Field, Dayton, Ohio.


For ten years, this bustling 254-acre installation was the site of an incredible breadth of aeronautical research and development activity. By the mid-1920s, however, the Engineering Division, nestled within the confines of the Great Miami River and the city of Dayton, had literally outgrown its home, McCook Field. In the spring of 1927, the 69 haphazardly constructed wooden buildings that housed the installation were torn down, and the tons of test rigs, machinery, and personal equipment were moved to Wright Field, the Engineering Division’s new, much larger site several miles down the road.1 The move to Wright Field would be followed by further expansion in the 1930s with the addition of Patterson Field. In 1948, these two main sites were formally combined to create the present Wright-Patterson Air Force Base, one of the world’s premier aerospace R&D centers.

Although an event hardly equal to Lindbergh’s epic transatlantic flight, the shutdown of McCook Field offers a useful historical vantage point from which to reflect upon the beginnings of American aerospace research and development. In the 1920s, before American aeronautical R&D matured in the form of places such as Wright-Patterson AFB, basic research philosophies and the roles of the government, the military, and private industry in the development of the new technology of flight were being formulated and fleshed out. Just how research and manufacture of military aeronautical technology would be organized, how aviation was to become a part of overall national defense, and how R&D conducted for the military would influence and be incorporated into civil aviation were still all wide-open questions. The resolution of these issues, along with the passage of several key pieces of regulatory legislation,2 was the foundation of the dramatic expansion of American aviation after 1930. Lindbergh’s flight was a catalyst for this development, a spark of enthusiasm. But the organization of manufacture and the refinement of engineering knowledge and techniques in this period were the substantive underpinnings of future U.S. leadership in aerospace.

The ten-year history of McCook Field is a rich vehicle for studying these origins of aerospace research and manufacture in the United States. The facility was central to the emergence of a body of aeronautical engineering practices that brought aircraft design out of dimly lit hangars and into the drafting rooms of budding aircraft manufacturers. Further, McCook served as a crossroads for three of the primary players in the creation of a thriving American aircraft industry – the government, the military, and private aircraft firms.

A useful way to characterize this period is the “adolescence” of American aerospace development. The decade after the Wrights’ invention of the basic technology in 1903 was dominated by bringing aircraft performance and reliability to a reasonable level of practicality. One might think of this era as the “gestation,” or “birth,” of aeronautics. To continue the metaphor, it can be argued that by the 1930s aviation and aeronautical research and development had reached early “maturity.” The extensive and pervasive aerospace research establishment of the later twentieth century, with its interconnections to industry and government, was in place in recognizable form by this time. It was in the years separating these two stages of development, the late teens and 1920s, that the transition took place from rudimentary flight technology supported by minimal resources to sophisticated R&D carried out by professional engineers and technicians in well-organized institutional settings. In this period of “adolescence,” aeronautical research found its organizational structure and direction, aeronautical engineering practices and knowledge grew and became more formalized, and the relationship between this emerging research enterprise and manufacturing was established. McCook Field was a nexus of this process. In the modest hangars and shops of the Engineering Division, not only were the core problems of aircraft design and performance pursued, but research on the wide range of related equipment and technologies that today are intimately associated with the field of aeronautics was also energetically engaged. The catch-all connotation of “aerospace technology” that undergirds our modern use of the term took shape in the 1920s at facilities such as McCook. Moreover, the administrators and engineers at McCook were at the center of the debate over how the fruits of this research should be incorporated into the burgeoning American aircraft industry and into national defense policy. In large measure, the structure of the United States’ aerospace establishment that matured after World War II came of age in this period, when aerospace was in adolescence.

There were of course several other key centers of early aeronautical R&D beyond McCook Field, most notably the National Advisory Committee for Aeronautics and the Naval Aircraft Factory. Both of these government agencies had significant resources at their command and made important contributions to aeronautics. My focus on McCook is not to suggest that these other organizations were peripheral to the broader theme of the origins of modern flight research. They were not. McCook does, however, as a case study, present a somewhat more illuminating picture than the other facilities because of the broader range of activities conducted there. Moreover, the NACA and the Naval Aircraft Factory are the subjects of several scholarly and popular books. The story of McCook Field remains largely untreated by professional historians. If nothing else, this presentation should demonstrate the need for additional study of this important installation.3

As is often the case, a temporary measure taken in time of emergency ends up serving a permanent function after the crisis has subsided. This was true of the Engineering Division at McCook Field. Established as a stopgap facility to meet some very specific needs when the United States entered World War I, McCook remained in existence after the war and developed into an important research center for the still young technology of flight. (“McCook Field” quickly became the unofficial shorthand reference for the facility and was used interchangeably with “Engineering Division.”)

Heavier-than-air aviation formally entered the American military in 1907 with the creation of an aeronautical division within the U. S. Army Signal Corps.4 In 1909, the Army purchased its first airplane from Wilbur and Orville Wright for $30,000.5 With the acquisition of several others, the Signal Corps began training pilots and exploring the military potential of aircraft in the early teens. Even with these initial steps, however, there was little significant American military aeronautical activity before World War I.

A seemingly ubiquitous feature of human conflict throughout history is the entrepreneur who, when others are weighing the geopolitical and military factors of an impending war, sees a golden opportunity for financial gain. The First World War is a most conspicuous example. In that war, there is likely no better case of extreme private profit at the expense of the government war effort than the activities of the Aircraft Production Board. In the midst of this financial legerdemain, McCook Field was born.

After the United States declared its involvement in the war and the Aircraft Production Board was set up, control of Army aviation quickly settled in Dayton, Ohio. Howard E. Coffin, a leading industrialist in the automobile engineering field, was put in charge of the APB. Coffin appointed to the board another powerful leader of the Dayton-Detroit industrial circle, Edward A. Deeds, general manager of the National Cash Register Company.6 Deeds was given an officer’s commission and headed up the Equipment Division of the aviation section of the Signal Corps. This gave him near complete control over aircraft production.

Earlier, in 1911, Deeds had begun to organize his industrial interests with the formation of the Dayton Engineering Laboratories Company (DELCO). His partners included Charles Kettering and H. E. Talbott. In 1916, when European war clouds were drifting toward the United States, Deeds and his DELCO partners, along with Orville Wright, formed the Dayton-Wright Airplane Company in anticipation of large wartime contracts.7

By the eve of the American declaration of war, Coffin and Deeds had the framework for a government supported aircraft industry in place, organized around their own automotive, engineering, and financial interests and connections. Carefully arranged holding companies obfuscated any obvious conflict of interest, while Coffin and Deeds administered government aircraft contracts with one hand and received the profits from them with the other.8 Having orchestrated this grand profit-making venture in the name of making the world safe for democracy, Coffin crowned the achievement with a rather pretentious comment in June of 1917:

We should not hesitate to sacrifice any number of millions for the sake of the more precious lives which the expenditures of this money will save.9

An easy statement of conviction to make, coming from someone who stood to reap a significant portion of those “any number of millions.”

Ambitious military plans for thousands of U. S.-built aircraft10 quickly pointed to the need for a centralized facility to carry out the design and testing of new aircraft, the reconfiguration of European airframes to accept American powerplants, and the developmental work on the much-lauded Liberty engine project. The Aircraft Production Board was concerned that a “lack of central engineering facilities” was delaying production and requested that “immediate steps be taken to provide proper facilities.”11 Here again, Edward Deeds was at the center of things, succeeding at maneuvering government money into his own pocket.

The engineers of the Equipment Division suggested locating a temporary experiment and design station at South Field, just outside Dayton. This field, not so coincidentally, was owned by Deeds and used by the Dayton-Wright Airplane Company. Charles Kettering and H. E. Talbott, Deeds’ partners, objected to the idea, arguing that they needed South Field for their own experimental work for the government contracts already awarded to Dayton-Wright. Kettering and Talbott suggested a nearby alternative, North Field.12

Found acceptable by the Army, this site was also owned by Deeds, along with Kettering. Deeds conveyed his personal interest in the property to Kettering, who in turn signed the field over to the Dayton Metal Products Company, a firm founded by Deeds, Kettering, and Talbott in 1915. In terms arranged by Deeds, Dayton Metal Products leased North Field to the Army beginning on October 4, 1917, at an initial rate of $12,000 per year.13

As the lease was being negotiated, the Aircraft Production Board adopted a resolution renaming the site McCook Field in honor of the “Fighting McCooks,” a family that had distinguished itself during the Civil War and had owned the land for a long period prior to its acquisition by Deeds.14

Thus, the creation of McCook Field took place amidst a series of complex financial and bureaucratic dealings against a backdrop of world war. The basic result was the centralization of American aeronautical research and production, both financially and physically, in the hands of this tightly integrated, Dayton-based industrial group. During the war, the Aircraft Production Board and the people who controlled it would direct American aeronautical research and production. The issue of the individual roles of government and private industry in aviation, however, would re-emerge and continue to be addressed in the postwar decade. The engineering station at McCook Field would be a principal arena for this process.

The experimental facility at McCook was almost as well known for its numerous reorganizations as it was for the research it conducted. Shortly after the American declaration of war, the meager airplane and engine design sections that comprised the engineering department of the Signal Corps’ aviation section were consolidated and expanded into the Airplane Engineering Department. Headed by Captain Virginius E. Clark, this department was under the Signal Corps’ Equipment Division that Edward Deeds administered.15 The aviation experiment station at McCook would be continually restructured and compartmentalized throughout the war. It officially became known as the Engineering Division in March 1919 when the entire Air Service was totally reorganized.16

The Army’s aeronautical engineering activity in Dayton began even before the facilities at McCook were ready. With wartime emergency at hand, Clark and his people started work in temporary quarters set up in Dayton office buildings. By December 4, 1917, construction at McCook had progressed to the point where Clark and his team could take up residency. Always intended to be a temporary facility, the buildings were simple wooden structures with a minimum of conveniences. They were cold and drafty in the winter and hot and vermin-infested in the summer. A variety of flies, insects, and rodents were constant research companions.17 Upkeep and heating were terribly expensive and the slapdash wooden construction was an ever-present fire hazard.18

In spite of these less than ideal working conditions, the station immersed itself in a massive wartime aeronautical development program. It was quickly realized that if the United States’ aviation effort was to have any impact in Europe at all, it would have to limit attempts at original designs and concentrate on re-working existing European aircraft to suit American engines and production techniques. This scheme, however, proved to be nearly as involved as starting from scratch because of the difference in approach between American mass production and European practice.

During World War I, European manufacturing techniques still involved a good deal of hand crafting. Engine cylinders, for example, were largely hand-fitted, a handicap that became very evident when the need to replace individual cylinders arose at the battle front. Although the production of European airframes was becoming increasingly standardized, each airplane was still built by a single team from start to finish.

American mass production, by contrast, had by this time largely moved away from such hand crafting in many industries. During the nineteenth century, mass production of articles with interchangeable parts became increasingly common in American manufacture. Evolving within industries such as firearms, sewing machines, and bicycles, production with truly interchangeable parts came to fruition with Henry Ford’s automobile assembly line early in the twentieth century.19

By 1917, major American automobile manufacturers were characterized by efficient, genuine mass production. When the U. S. entered World War I, it was hoped that a vast air fleet could be produced in short order by adapting American production techniques and facilities already in place for automobiles to aircraft. The most notable example of this auto-aero production crosslink was the highly touted Liberty engine project.20


Figure 2. The main design and drafting room at McCook.


Figure 3. A biplane being load tested in the Static Testing Laboratory at McCook.


If U. S. assembly line techniques were to be effectively employed, however, accurate, detailed drawings of every component of a particular airplane or engine were required. Consequently, when the engineers at McCook began re-working European designs, huge numbers of production drawings had to be prepared. To produce the American version of the British De Havilland DH-9, for instance, approximately 3000 separate drawings were made. This was exclusive of the engine, machine guns, instruments, and other equipment apart from the airframe. Another principal re-design project, the British Bristol Fighter F-2B, yielded 2500 production drawings for all the parts and assemblies.21 As a result, the time saved re-working European aircraft to take advantage of American assembly line techniques, rather than creating original designs, was minimal.

In addition to adopting assembly line type production, the McCook engineers developed a number of other aids that helped transcend cut-and-try style manufacture. For example, a systematic method of stress analysis using sand bags to determine where and how structures should be reinforced was devised. Also, a fairly sophisticated wind tunnel was constructed enabling the use of models to determine appropriate wing and tail configurations before building the full-size aircraft. (This was the first of two tunnels. The more famous “Five-Foot Wind Tunnel” would be built in 1922.) These and other design tools began to transform the staff at McCook from mere airplane builders into aeronautical engineers.

In the end, even with all the effort to gear up for mass production, American industry produced comparatively few aircraft,22 and did so at a very high cost to the government. But this was due more to corruption in the administration of aircraft production than to the techniques employed.23 Still, the efforts of the engineers at McCook Field were not fruitless. They contributed to bringing aviation into the professional discipline of engineering that had been developing in other fields since the late nineteenth century. Although the American aeronautical effort had little impact in Europe, the approach adopted at McCook was an important long term contribution to the field of aeronautical engineering and aircraft production. It was, in the United States at least, the bridge over which homespun flying machines stepped into the realm of truly engineered aircraft.

Even though it was only intended to serve as a temporary clearinghouse for the wartime aeronautical build-up, McCook Field did not close down after hostilities ended. In fact, it was in the postwar phase of its existence that the station made its most notable contributions. Colonel Thurman Bane took over command from Virginius Clark in January 1919, and under his leadership McCook expanded into an extremely wide-ranging research and development center. During the war, the facility was primarily involved with aircraft design and production problems. Afterward, the Engineering Division continued to design aircraft and engines, but its most significant achievements were in the development of related equipment, materials, testing rigs, and production techniques that enhanced the performance and versatility of aircraft and aided in their manufacture. Virtually none of the thirty-odd airplanes designed by McCook engineers during the 1920s were particularly remarkable machines. (Except, perhaps, for their nearly uniform ugliness.) But in terms of related equipment, materials, and refinement of aeronautical engineering knowledge, the R&D at McCook was cutting edge. The list of McCook firsts is lengthy. The depth and variety of projects tackled by the Engineering Division made it one of the richest sources of engineering research in its day.

Among the most significant contributions made by the Engineering Division were those in the field of aero propulsion. The Liberty engine was a principal project during the war and after. Although fraught with problems early in its development, in its final form the Liberty was one of the best powerplants of the period. It was clearly the single most important technological contribution of the United States’ aeronautical effort during World War I. In addition, it powered the Army’s four Douglas World Cruisers that made the first successful around-the-world flight in 1924.

The Liberty engine was only part of the story. As early as 1921, the Engineering Division had built a very successful 700 hp engine known as the Model W, and was at work on a 1000 hp version.24 These and other engines were developed in what was recognized as the finest propulsion testing laboratory in the country. It featured several very large and sophisticated engine dynamometers. The McCook engineers also built an impressive portable dynamometer mounted on a truck bed. Engine and test bed were driven up mountainsides to simulate high altitude running conditions.25

The Engineering Division had a particularly strong reputation for its propeller research. Some of the most impressive test rigs anywhere operated at McCook. In fact, one of the earliest, first set up in 1918, is still in use at Wright-Patterson AFB. High speed whirling tests were done to determine maximum safe rotation speeds, and water spray tests were conducted to investigate the effects of flying in rain storms. Extensive experimentation with all sorts of woods, adhesives, and construction techniques was also performed. In addition, some of the earliest work with metal and variable-pitch propellers was carried out at McCook. Propulsion research also included work on superchargers, fuel systems, carburetors, ignition systems, and cooling systems. Experimental work with ethylene-glycol as a high temperature coolant that allowed for the reduction in size of bulky radiators was another significant McCook contribution in this field.26

Aerodynamic and structural testing were other key aspects of the Engineering Division’s research program. Alexander Klemin headed what was called the Aeronautical Research Department. Klemin had been the first student in Jerome Hunsaker’s newly established aeronautical engineering course at MIT. So successful had Klemin been that he succeeded Hunsaker as head of the aeronautics program at MIT. When the United States entered the war, he joined the Army and went to McCook.27


Figure 4. The propulsion research at McCook was particularly strong. One of these early propeller test rigs is still in use today at Wright-Patterson Air Force Base.


Figure 5. The propeller shop hand-crafted propellers of all varieties for research and flight test purposes.

Klemin’s work during and after the war centered around bringing theory and practice together in the McCook hangars. The Engineering Division’s wind tunnel work was a prime example. The tunnel built during World War I was superseded by a much larger tunnel built in 1922. Known as the “five foot tunnel,” it was a beautiful creation built up of lathe-turned cedar rings. The McCook tunnel was 96 feet in length and had a maximum smooth airflow diameter of five feet, hence the name.28 Although the National Advisory Committee for Aeronautics’ variable density tunnel completed the following year was the real breakthrough instrument in the field,29 the McCook tunnel provided important data and helped standardize the use of such devices for design purposes.

Among the activities of the Aeronautical Research Department were the famous sand loading tests. Under Klemin’s direction this method of structural analysis was refined to a high degree. Although the NACA became the American leader in aerodynamic testing with its variable density tunnel, McCook led the way in structural analysis.30

Materials research was another area in which the Engineering Division was heavily involved. Great strides were made in their work with aluminum and magnesium alloys. These products found important applications in engines, airframes, propellers, airship structures, and armament. In 1921, the Division was at work on this country’s first all-duralumin airplane.31 Materials research also included developmental work on adhesives and paints, fuels and lubricants, and fabrics, tested for strength and durability for applications in both aircraft coverings and parachutes.32

One of the most often-cited achievements at McCook was the perfecting of the free-fall parachute by Major Edward L. Hoffman. First used at the inception of human flight by late-eighteenth century balloonists, the parachute remained a somewhat dormant technology until after World War I. Prior to Hoffman’s work, bulk and weight concerns overrode the obvious life-saving potential of the device. Hoffman experimented with materials, various shapes and sizes for the canopy, the length of the shroud lines, the harness, vents for controlling the descent, all with an eye toward increased efficiency and reliability. His systematic approach was characteristic of the emerging McCook pattern.


Figure 6. The Flight Test hangar at McCook, showing the range of aircraft types being evaluated by the Engineering Division.


Figure 7. The Five-Foot Wind Tunnel, built in 1922, had a maximum airflow speed of 270 mph.

 

Figure 8. The “pack-on-the-aviator” parachute design that was perfected at McCook.

After numerous tests with dummies, Leslie Irvin made the first human test of Hoffman’s perfected chute on April 28, 1919. Designated the Type A, this was a modern-style “pack-on-the-aviator” design with a ripcord that could be manually activated during free fall. Though the test was completely successful, parachutes did not become mandatory equipment for U. S. Army airmen until 1923, a few months after Lt. Harold Harris was saved by one when his Loening monoplane broke apart in the air on October 20, 1922. Harris’ exploit was the first instance of an emergency use of a free-fall parachute by a U. S. Army pilot.33

Aerial photography was another of the related fields that was significantly advanced during the McCook years. The Air Service had initiated a sizeable photo reconnaissance program during the war. Work in this field continued during the 1920s, and it became one of the most noted contributions of the Engineering Division. Albert Stevens and George Goddard were the central figures of aerial photography and mapping at McCook. Goddard made the first night aerial photographs and developed techniques for processing film on board the aircraft. In 1923, Stevens, with pilot Lt. John Macready, made the first large-scale photographic survey of the United States from the air. Stevens had particular success with his work in high altitude photography. By 1924, Air Service photographers were producing extremely detailed, undistorted images from altitudes above 30,000 feet, covering 20 square miles of territory.34

In addition to the obvious military value of aerial photography, this capability was also being employed in fields such as soil erosion control, tax assessment, contour mapping, forest conservation, and harbor improvements. The fruits of the research at McCook often extended beyond purely aeronautical applications.

The demands of the aerial photography work were also an impetus to other areas of aeronautical research. The need to carry cameras higher and higher stimulated propulsion technology, particularly superchargers. Flight clothing and breathing devices were similarly influenced. Extreme cold and thin air at high altitudes resulted in the development of electrically heated flight suits, non-frosting goggles, and oxygen equipment.35

Several important contributions in the fields of navigation and radio communication that would help spur civil air transport were developed at McCook. The first night airways system in the United States was established between Dayton and Columbus, Ohio. This route was used to develop navigation and landing lights, boundary and obstacle lights, and airport illumination systems. Experimentation with radio beacons and improved wireless telephony was also part of the program. These innovations proved especially valuable when the Department of Commerce inaugurated night airmail service. Advances in the field of aircraft instrumentation included improvements in altimeters, airspeed indicators, venturi tubes, engine tachometers, inclinometers, turn-and-bank indicators, and the earth induction compass, to name a few. Refinement of meteorological data collection also made great strides at McCook. The development of such equipment was essential for the creation of a safe, reliable, efficient, and profitable commercial air transport industry.36


Figure 9. An example of the mapping produced by the aerial photography program conducted by the Engineering Division at McCook Field.

Another significant economic application of aeronautics that saw development at McCook was crop dusting. The advantages of releasing insecticide over wide areas by air compared to hand spraying on the ground were obvious. In the summer of 1921, when a serious outbreak of catalpa sphinx caterpillars occurred in a valuable catalpa grove near Troy, Ohio, the opportunity to demonstrate the effectiveness of aerial spraying presented itself. A dusting hopper designed by E. Dormoy was fitted to a Curtiss JN-6. Lt. Macready flew the airplane over the affected area at an altitude of about 30 feet as he released the insecticide. He accomplished in a few minutes what normally would have taken days.37

Of course, McCook Field was a military installation, and a good deal of its research focused on improving and expanding the uses of aircraft for war. Perhaps the most significant long-term contribution in this area made by the Engineering Division was its work with the heavy bomber. In the early twenties, General William “Billy” Mitchell, assistant chief of the Air Service, began to vociferously promote aerial bombardment as a pivotal instrument of war. The Martin Bomber was the Army’s standard bombing aircraft at the time. The Engineering Division worked with the Glenn L. Martin Company to re-design the aircraft, but was unable to meet General Mitchell’s requirements for a long-range, heavily loaded bomber.

In 1923, the Air Service bought a bomber designed by an English engineer named Walter Barling. Spanning 120 feet and powered by six Liberty engines, the Barling Bomber was the largest airplane yet built in America. So big and heavy was the craft that it could not operate from the confined McCook airfield. Consequently, the Engineering Division had it transported by rail to the nearby Fairfield Air Depot to conduct flying tests. First flown by Lt. Harold Harris in August of 1923, the Barling Bomber proved largely unsuccessful. It was a heavy, ungainly craft that never lived up to expectations. Nevertheless, it in part influenced the Air Service, in terms of both technology and doctrine, toward strategic bombing as a central element of the application of air power.38

Complementary to the development of military aircraft was, of course, armament. McCook engineers turned out a continuous stream of new types of gun mounts, bomb racks, aerial torpedoes, machine gun synchronization devices, bomb sights, and armament handling equipment. Even experiments with bulletproof glass were conducted. The advances in metallurgy that were revolutionizing airframe and engine construction were also being employed in the development of lightweight aircraft weaponry.39

Another distinct avenue of aeronautical research that saw at least limited development at McCook was vertical flight. George de Bothezat, a Russian emigre who had worked on the famous World War I Ilya Muromets bomber, designed a workable helicopter for the U. S. Army in the early 1920s. Built in total secrecy, the complex maze of steel tubing and rotor blades was ready for testing on December 18, 1922. In its first public demonstration the craft stayed aloft for one minute and 42 seconds and reached a maximum altitude of eight feet. Flight testing continued during 1923. On one occasion it carried four people into the air. Although it met with some success, de Bothezat’s helicopter did not live up to its initial expectations and the project was eventually abandoned.40 Still, the vertical flight research, like the heavy bomber, demonstrates McCook’s pioneering role in numerous areas of long-range importance.

Equally important as conducting research is, of course, dissemination of the results. Here again the Engineering Division’s efforts are noteworthy. During the war, the McCook Field Aeronautical Reference Library was created to serve as a clearinghouse for all pertinent aeronautical engineering literature and a repository for original research conducted at the station. By war’s end, the library contained approximately 5000 domestic and foreign technical reports, over 900 reference works, and had subscriptions to 42 aeronautical and technical periodicals. All of the material was cataloged, cross-indexed, and made available to any organization involved in aeronautical engineering. During the war, an in-house periodical called the Bulletin of the Experimental Department, Airplane Engineering Division was published. After 1918, at the urging of the National Advisory Committee for Aeronautics, the Division increased distribution of the journal to over 3000 engineering societies, libraries, schools, and technical institutes. Through these instruments, the research of the Engineering Division was documented and disseminated. McCook proved to be an invaluable information resource to both the military and private manufacturing firms throughout the period.41

In addition, in 1919, the Air Service set up an engineering school at McCook. Carefully selected officers were trained in the rudiments of aircraft design, propulsion theory, and other related technical areas. This school still operates today as the Air Force Institute of Technology.42

The Engineering Division’s role as a technical, professional information resource was complemented by its efforts to keep aviation in the public eye. During the 1920s, Dayton became almost as famous for the aerial exploits of the McCook flying officers as it was for being the home of Wilbur and Orville Wright. Speed and altitude records were being set on a regular basis. These flights were in part integral to the research, but they had a public relations component as well. With the postwar wind-down of government contracts, private investment had to be cultivated. The Engineering Division saw a thriving private aircraft industry that could be tapped in time of war as essential to national security. The publicity garnered from record-setting flights was in part intended to draw support for a domestic industry.

There were hundreds of celebrated flying achievements that originated with the Engineering Division, but two events in particular brought significant notoriety to McCook Field and aviation. In 1923, McCook test pilots Lt. Oakley G. Kelly and Lt. John A. Macready made the first non-stop coast-to-coast flight across the United States. Their Fokker T-2 aircraft was specially prepared for the flight by the Engineering Division. Kelly and Macready departed from Roosevelt Field, Long Island, on May 2, and completed a successful flight with a landing in San Diego, California, in just under 27 hours.43

The following year, the Air Service decided to attempt an around-the-world flight. Again, preparations and prototype testing were done at McCook. Four Douglas-built aircraft were readied and on April 6, 1924, the group took off from Seattle, Washington. Only two of the airplanes completed the entire trip, but it was a technological and logistical triumph nonetheless. The achievement received international acclaim and was one of the most notable flights of the decade.44

This cursory discussion of McCook Field research and development from propulsion to public relations is intended to be merely suggestive of the rich and diversified program administered by the Engineering Division of the U. S. Air Service. McCook is something of an historical Pandora’s box. Once looked into, the list of technological project areas is almost limitless. One program dovetails into the next, and all were carried out with thoroughness and sophistication.

One obvious conclusion that can be drawn from this brief overview is the powerful place McCook Field holds in the maturation of professional, high-level aeronautical engineering in the United States, and its influence on the embryonic American aircraft industry.


Figure 10. Among the more famous aircraft prepared at McCook Field were the Fokker T-2 (center), which made the first non-stop U. S. transcontinental flight in 1923, and the Verville-Sperry Racer (right), which featured retracting landing gear.

Beginning with the World War I experience, aircraft were now studied and developed in terms of their various components. Systematic testing and design had replaced cut-and-try. The organized approach to problems that characterized the Engineering Division’s research program became a model for similar facilities. Many who would later become influential figures in the American aircraft industry were “graduates” of McCook. They took with them the experience and techniques learned at the small Dayton experiment station and helped create an industry that dominated World War II and became essential thereafter. While the Engineering Division was by no means the singular source of aeronautical information and skill in this period, a review of its research activity and style clearly illustrates its extensive contributions to aeronautical engineering knowledge, as well as to the formation of the professional discipline. In these ways aeronautics was transformed from simply a new technology into a new field, a new arena of professional, economic, and political significance.

The crosslink between McCook and private industry involved more than the transfer of technical data and experienced personnel. There was also a philosophical component at work of great importance with respect to how future government-sponsored research would be conducted. Military engineers and private aircraft manufacturers agreed that a well-developed domestic industry was in the best interest of all concerned. Yet each had very different ideas regarding how it should be organized and what their individual roles would be.

McCook Field had, of course, been intimately tied to private industry since its creation. Its initial purpose was to serve as a clearinghouse for America’s hastily gathered aeronautical production resources upon the United States’ entry into World War I. Although the installation had a military commander, it was under the administration of industrial leader Edward Deeds.

During the war, when contracts were sizeable and forthcoming, budding aircraft manufacturers had few problems with the Army’s involvement in design and production. By 1919, however, when heavy government subsidy dried up and contracts were canceled, the interests of the Engineering Division and private manufacturers began to diverge. Throughout the twenties, civilian industry leaders and the military engineers at McCook exchanged accusations concerning responsibilities and prerogatives.

Even though government contracts were severely curtailed after the war, the military was still the primary customer for most private manufacturers. Keenly aware of this, the Army attempted to follow a course that would aid these still relatively small, hard-pressed private firms, as well as meet its own needs for aircraft and equipment. It continually reaffirmed its position that a thriving private industry that could be quickly enlisted in time of national emergency was an essential component of national defense. In a 1924 message to Congress, President Coolidge commented that “the airplane industry in this country at the present time is dependent almost entirely upon Government business. To strengthen this industry is to strengthen our National Defense.”45 Such statements reflected the “pump-priming” attitude toward the aircraft industry that was typical throughout the government, not only among the military. By providing the necessary funds to get private manufacturers on sound footing, government officials felt they were at once bolstering the economy and meeting their mandate of providing national security.46

These sentiments were backed up with action. For example, in 1919, Colonel Bane, head of the Engineering Division, recommended an order be placed with the Glenn L. Martin Company for fifty bombers. The Army needed the airplanes and such an order would at least cover the costs of tooling up and expanding the Martin factory. In addition to supplying aircraft, it was believed that this type of patronage would help create a “satisfactory nucleus … capable of rapid expansion to meet the Government’s needs in an emergency.”47 On the surface, it seemed like a beneficial approach all the way around.

This philosophy, however, met with resistance from the civilian industry. They liked the idea of government contracts, but they felt the Army was playing too large a role in matters of design and the direction the technology should go. They were concerned private manufacturers would become slaves to restrictive military design concepts as a result of their financial dependency on government contracts. Centralizing the design function of aircraft production within the military, they argued, would stifle originality and leave many talented designers idle.48 Moreover, they believed that in a system where private firms merely built aircraft to predetermined Army specifications, they would be in a vulnerable position. They feared the Army would take the credit for successful designs and that they would be blamed for the failures.49 The civilian industry hoped to gain government subsidy, but wanted to do their own developmental work and then provide the Army with what they believed would best serve the nation’s military needs.

The Engineering Division’s response to this philosophical divergence was twofold. First, they asserted that Army engineers were in the best position to assess the Air Service’s needs and having them do the design work was the most efficient way to build up American air defenses. They claimed civilian designers sacrificed ease of production and maintenance for superior flight performance. Key to a military aircraft construction program, it was argued, were designs that were simple enough to mass produce and then maintain in the field by minimally-trained mechanics. When other performance parameters are the primary goal, complexity and expense often creep into the final product. Although performance factors such as speed and maneuverability were certainly important to the Army, utility and practicality remained higher priorities. This difference in outlook was among the principal reasons why the Engineering Division did not want to give up their design and development prerogatives.50

The other divisive issue was the conduct of basic research. The Engineering Division stressed the crucial nature of this type of work with a new technology such as aeronautics. They were concerned that private industry, particularly in light of its troubled financial situation, would be reluctant to undertake fundamental research due to its frequently indefinite results and prohibitive costs. They would, understandably, focus on projects that promised fairly immediate financial return. Leon Cammen, a prominent New York engineer, skillfully summarized the Army’s position in an article that appeared in The Journal of the American Society of Mechanical Engineers. He concluded that “it is obvious that if aeronautics is to be developed in this country there must be some place where investigations into matters pertaining to this new art can be carried on without any regard to immediate commercial returns.” He suggested that place should be McCook Field.51

Throughout the 1920s, the civilian industry assailed the government, and the Engineering Division in particular, for attempting to undercut what they saw as their role in the development of this new field of technological endeavor. Although the military always had the upper hand in the McCook era, industry leaders managed to keep the issues on the table. Pressures on the industry eased somewhat in the 1930s because a sizeable commercial aviation market was emerging and gave private manufacturers a greater degree of financial autonomy. Yet, battles over research and decision-making prerogatives continued to arise whenever government contracts were involved. Although the dollar amounts are higher and the technological and ethical questions more complex, many of the organizational issues of modern, multi-billion-dollar aerospace R&D are not new. The historical point of significance is that it was in the 1920s that such organizational issues were first raised and began to be sorted out. Again, the notion of a field in adolescence, finding its way, establishing its structural patterns for the long term, clearly presents itself. A look at the formative years of the American aircraft industry and government-sponsored aeronautical research shows that these organizational debates were an early feature of aviation in the United States, and that the Engineering Division at McCook Field was an intimate player in this history. Given this, and McCook’s countless contributions to aeronautical engineering, it is perhaps only a slight overstatement to suggest that the beginnings of our current-day aerospace research establishment lie in a small piece of Ohio acreage just west of Interstate 75 where today, among other things, the McCook Bowling Alley resides.

Diffusing Knowledge – The Compressor “Bible”

One product of the NACA research program was a three-volume Confidential Research Memorandum, issued in 1956, often referred to in the industry as the “Compressor Bible.”32 These volumes presented a complete semi-empirical method for designing axial compressors achieving levels of performance far beyond the standard of the mid-1940s. Subsequent advances notwithstanding, including the advent of computer-based analytical techniques in the mid-1950s, this design method remained in use for at least the next quarter century, if not still today. Strikingly, however, while the “bible” often mentions turboprops and turbojets, and expressly lays out compressor design requirements for both, it makes no mention of turbofans.33

The empirical component of the NACA design method was based primarily on a huge number of cascade performance tests of NACA 65-Series airfoils carried out at Langley. Airfoils in cascade perform somewhat differently from isolated airfoils. The two-dimensional wind-tunnel tests determined air deflections, irreversible pressure losses, and airfoil surface pressures as functions of incidence conditions across the family of NACA 65-Series airfoils for a range of cascade stagger angles and solidities (i. e. chord-to-space ratios). These data allowed designers first to select preferred airfoil shapes along a blade to achieve a given design performance, including thermodynamic loss requirements, and then to predict the performance of the airfoils at specified off-design operating conditions.34 In large part because of the availability of this data-base, NACA 65-Series airfoils became the most widely used airfoils in axial compressors.
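The way such a data-base enters design practice can be suggested schematically. The sketch below is hypothetical: the table layout and every number are invented for illustration, standing in for the far more extensive tabulated deflection and loss data described above, and the interpolation is the simplest possible.

```python
# A minimal, hypothetical sketch of consulting cascade data: tabulated loss
# coefficients versus incidence for one airfoil/solidity/stagger combination,
# interpolated at an off-design operating point. All numbers are invented.
import bisect

incidence_deg = [-6.0, -3.0, 0.0, 3.0, 6.0, 9.0]            # tested incidences
loss_coeff    = [0.040, 0.025, 0.018, 0.020, 0.032, 0.060]  # measured losses

def loss_at(incidence):
    """Linearly interpolate the tabulated loss coefficient."""
    i = bisect.bisect_left(incidence_deg, incidence)
    if i == 0 or i == len(incidence_deg):
        raise ValueError("incidence outside tested range")
    x0, x1 = incidence_deg[i - 1], incidence_deg[i]
    y0, y1 = loss_coeff[i - 1], loss_coeff[i]
    return y0 + (y1 - y0) * (incidence - x0) / (x1 - x0)

print(f"{loss_at(4.5):.3f}")  # loss at an off-design incidence between tests
```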

Constructing a Parameter for Blade Loading – The Diffusion Factor

A critical element in the NACA design method was a new parameter, devised by Seymour Lieblein and others in 1953, called the “diffusion factor.” Losses result from many effects, but most important, in the absence of shocks, are viscous losses related to diffusion – i. e., deceleration – acting on boundary layers. As the loading on a given airfoil increases, a point is reached where the losses abruptly increase. Designers needed a non-dimensional parameter that could serve as a measure of the loading, allowing them to anticipate, in the form of a critical value, where the losses abruptly increase. Axial compressor blading had originally been conceptualized on the basis of isolated airfoil theory, using the lift coefficient as a non-dimensionalized measure of loading, but the losses in cascades did not correlate well with it. As a consequence designers did not have an adequate way of anticipating loading limits. Other parameters were tried before the diffusion factor, but with limited success.35

The diffusion factor was derived from a series of simplifying assumptions from boundary layer theory, applied to the suction surface. The basic idea was that the ultimately dominating losses came from turbulence developing in the boundary layer along the rear half of the airfoil suction surface, where the velocity drops from its peak to its discharge value. The problem was that any correlating parameter had to be defined in terms of quantities that could be determined with confidence; this did not include the peak velocity along the suction surface in rotating blade rows. The simplifying assumptions allowed this peak velocity to be replaced by quantities that could be measured upstream and downstream of blade rows:

$$
D \;=\; 1 \;-\; \frac{W_2}{W_1} \;+\; \frac{r_2\,C_{\theta 2} \,-\, r_1\,C_{\theta 1}}{2\, r_m\, \sigma\, W_1}
$$

where W is the relative velocity, Cθ is the absolute tangential velocity, σ is the cascade solidity, the subscripts 1 and 2 designate upstream and downstream of the blade row, and rm is the average of the radii r1 and r2. The multi-term structure of this formula should make clear that Lieblein’s diffusion factor was not an entirely obvious, intuitive parameter. Yet, when assessed against the NACA 65-Series cascade data, it turned out to indicate a clear loading limit criterion.36 This criterion was equally successful when tried with cascade data from other airfoils.37 It has subsequently proved to be applicable to compressor blading quite generally, lending an element of rationality to compressor design much as the lift coefficient did to wing design.38
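To make the definition concrete, here is a minimal sketch of the computation with purely illustrative numbers; none of the values below come from the chapter. The frequently cited critical level of roughly D ≈ 0.6, beyond which cascade losses were found to rise sharply, is drawn from the later compressor literature and should be read as an assumption here.

```python
def diffusion_factor(w1, w2, c_theta1, c_theta2, r1, r2, sigma):
    """Lieblein diffusion factor for a blade row.

    w1, w2             -- relative velocities upstream/downstream of the row
    c_theta1, c_theta2 -- absolute tangential velocities upstream/downstream
    r1, r2             -- radii of the upstream/downstream stations
    sigma              -- cascade solidity (chord-to-spacing ratio)
    """
    r_m = 0.5 * (r1 + r2)  # mean radius
    return 1.0 - w2 / w1 + (r2 * c_theta2 - r1 * c_theta1) / (2.0 * r_m * sigma * w1)

# Hypothetical mid-span numbers for a rotor section (ft/sec, no inlet swirl):
d = diffusion_factor(w1=800.0, w2=560.0, c_theta1=0.0, c_theta2=350.0,
                     r1=1.0, r2=1.0, sigma=1.0)
print(f"D = {d:.2f}")  # ~0.52, under the ~0.6 level where losses rise sharply
```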

The importance of having a clear loading limit criterion is best seen by considering the ramifications of not having one. The obvious way to pursue improvements in performance was by trying to develop new airfoils; and the natural way of trying this was to test airfoils in cascade and then make incremental modifications in shape that promised incremental gains in performance. The problem with this approach in the absence of a clear loading limit criterion was that any incremental modification in shape might well cross some unrecognized barrier, resulting not in an incremental gain, but in a prohibitively large fall-off in performance. The diffusion factor and the empirically determined loading limit expressed in terms of it defined the barrier that the exploration of new airfoil designs needed to remain within.

The diffusion factor did indeed play a key role in the pursuit of higher stage pressure-ratios. The overall pressure-ratio of a compressor amounts to a product of the individual stage pressure-ratios. The pressure-ratio per stage tends to increase as the velocity of the flow relative to the rotating blades increases. As the so-called velocity triangles shown in Figure 6 indicate, if the flow approaching a rotor blade is axial in the absolute frame of reference, then the velocity relative to the rotor blade increases as the blade tip-speed increases. Ultimately, stress considerations limit tip-speed. Aerodynamic considerations, however, were imposing limits on tip-speed far below those imposed by stresses. As the relative incident Mach number at the tip increases, shocks begin to form in the outer portion of the airfoil passages, resulting in a sharp increase in losses. In the case of NACA 65-Series airfoils, the losses rise sharply for incident Mach numbers above 0.8. This limited the pressure-ratio in stages using these airfoils to around 1.15, as we saw earlier.
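To make the velocity-triangle point concrete, here is a minimal numerical sketch. The axial velocity of 500 ft/sec and the sea-level speed of sound of roughly 1117 ft/sec are illustrative assumptions, not values from the chapter; the calculation simply shows the relative tip Mach number crossing the 0.8 loss threshold, and then Mach 1, as tip-speed rises toward the 1100 ft/sec of the transonic compressor discussed below.

```python
import math

def tip_relative_mach(v_axial, tip_speed, speed_of_sound):
    """Relative inlet Mach number at the blade tip for purely axial absolute
    inflow: the relative velocity is the vector sum of the axial velocity and
    the blade speed (the velocity-triangle construction)."""
    return math.hypot(v_axial, tip_speed) / speed_of_sound

# Illustrative values in ft/sec: 500 axial velocity, 1117 speed of sound.
for u in (600.0, 900.0, 1100.0):
    print(f"tip-speed {u:.0f} ft/sec -> relative tip Mach "
          f"{tip_relative_mach(500.0, u, 1117.0):.2f}")
# -> 0.70, 0.92, 1.08: past Mach 0.8 the 65-Series losses rise sharply, and
#    at 1100 ft/sec the tip is already supersonic in the relative frame.
```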

Pushing Blade Loading – Transonic Stages

This restriction, coupled with the goals of achieving higher pressure-ratios per stage in order to use fewer stages, hence saving weight, and higher airflow per unit frontal-area, hence limiting engine drag, led to the research problem of finding airfoil shapes that would allow the incident tip Mach number to rise above 1. That is, the goal was to find airfoil shapes that would permit efficient transonic stages – stages in which the inlet relative velocity is supersonic in the outer portion of the blade and subsonic in the inner portion (which, at the same RPM, is moving at a lower velocity). NACA researchers at Langley and Lewis had been working on the problem of transonic airfoil and stage design from 1947 on as another part of their axial compressor research program. They had achieved some successes before the diffusion factor was identified – e. g. a stage with a 1.1 tip Mach number without excessive losses39 – but not consistently. They began having more success with the diffusion factor in hand by limiting attention to velocity triangles that met the loading limit criterion for this parameter. In particular, they designed an experimental 5-stage transonic compressor with a tip-speed of 1100 ft/sec in which the tip Mach numbers were as high as 1.18. Although the efficiency fell off at 100 percent speed, this compressor did achieve an overall pressure-ratio of 4.6 at an adiabatic efficiency of 85 percent, or, in other words, an average stage pressure-ratio of 1.35.40 Furthermore, the measured performance of the double-circular-arc airfoils used in these and other NACA test stages, along with wind-tunnel testing of double-circular-arc cascades, began to yield a data-base for transonic airfoils, supplementing the NACA-65 Series data-base.
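As a quick consistency check on the product rule for stage pressure-ratios (illustrative arithmetic only):

```python
overall_ratio = 4.6  # reported overall pressure-ratio of the 5-stage compressor
stages = 5
# If all stages contributed equally, each would supply the 5th root:
print(f"average stage pressure-ratio = {overall_ratio ** (1 / stages):.2f}")
# -> 1.36, close to the 1.35 quoted in the text (the stages were not identical)
```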

Save perhaps for the early efforts in the mid-1940s, the NACA work on transonic stages was focused on improving axial compressors, not on fans that could be used in bypass engines. The primary application of the NACA transonic stage research was in the early stages of axial compressors, yielding both higher pressure-ratio per stage and higher airflow per unit frontal-area.41 Nevertheless, as we shall see, NACA’s success in pushing tip Mach numbers well above 1.0 was an important step in the emergence of the turbofan engine. Turbofans with tip Mach numbers below 1 would have offered at most only small gains in performance over turbojets. Once it became clear that the tip Mach number can exceed 1.0 without a large drop-off in performance, the question became, how far above Mach 1 can the tip Mach number go?

Pursuing a Quantum Jump – Supersonic Stages

Another, more radical part of the NACA compressor program proved in hindsight to be even more important to the emergence of the turbofan engine. It explored a somewhat revolutionary way of trying to achieve higher pressure-ratio per stage: actually using the sharp pressure increase across a normal shock to greatly increase the pressure rise in a stage. The idea of a supersonic compressor stage – one in which the incident relative velocity is supersonic along the entire span of the blade – was first proposed in 1937. Arthur Kantrowitz initiated research on supersonic stages at NACA Langley in 1942. Shortly after the War several young engineers joined him, forming a research group at Langley and then at Lewis as well. Their fundamental problem was to control and limit the thermodynamic losses in a supersonic stage. The abrupt pressure-rise across the shock acts as an adverse pressure-gradient at the point where it meets the boundary layer, threatening to cause the boundary layer to separate, resulting in large losses. An example of such shock-induced boundary layer separation is shown in Figure 8, for a Mach number of 1.5.


Figure 8. Shock waves and boundary layer separation in a Mach 1.5 cascade. Note shock waves at blade tips (left). Boundary layer separates on suction (i. e. convex) surface, where the shock intersects it (dark region above each airfoil), with attendant thermodynamic losses. [F. A.E. Breugelmans, “High Speed Cascade Testing and its Application to Axial Flow Supersonic Compressors,” ASME paper 68-GT-10, 1968, p. 6.]

The problem was to find airfoil shapes for which the attendant losses would be greatly outweighed by performance gains. Since analytical methods at the time were worthless for attacking this problem, the only approach was to learn through testing experimental designs.

NACA engineers designed and tested an impressively large number of experimental supersonic stages between 1946 and 1956.42 Virtually all of these research compressors performed poorly when judged by the standards that would have to be met for flight. In the last years of the effort, however, some designs began showing promise. Of particular note was a 1400 ft/sec tip-speed compressor rotor designed by John Klapproth and others, which took into account Lieblein’s diffusion factor. It achieved a pressure-ratio of 2.17 at an adiabatic efficiency (for the rotor alone) of 89 percent, with a tip Mach number of 1.35. As the report describing these results notes, however, its greatest significance lay elsewhere:

Inlet relative Mach numbers were supersonic across the entire blade span for speeds of 90 percent design and above. There were no appreciable effects of Mach number on blade-element losses below 90 percent of design speed. At 90 percent design speed and above, there was an increase in the relative total-pressure losses at the tip. However, based on rotor diffusion factor, these losses for Mach numbers up to 1.35 are comparable with the losses in subsonic and transonic compressors at equivalent values of blade loading.43

This was the first clear evidence that losses continue to correlate with the diffusion factor to much higher Mach numbers than in the tests which had provided the basis for this parameter – a result that was by no means assured a priori.

…. While the derivation of the diffusion factor D was based on incompressible flow, the primary factors influencing performance, that is, over-all diffusion and blade circulation, would not be expected to change for high Mach number applications…. The applicability of the correlation of D should be expected only in cases having similar velocity profiles on the blade suction surface. This similarity existed for the theoretical velocity profiles for this rotor, although the actual distribution was probably altered somewhat by differences between the assumed and real flow. On the basis of [our results], the diffusion factor appears to be a satisfactory loading criterion even for very high Mach number blading when the velocity distribution approximates that of conventional airfoils in cascade.44

In other words, for the first time, the performance in a supersonic blade row correlated continuously – i. e. seamlessly – with the performance achieved in subsonic and transonic compressor stages, up to as high as Mach 1.35. An approach to designing much higher Mach number stages was beginning to emerge.45