
CONTROVERSY OVER SCALE EFFECT

During World War I, many scientists, including science students, were mobilized for weapons development. While Ernest Rutherford and other physicists were engaged in devising a submarine detection system, many Cambridge scholars gathered at the Royal Aircraft Factory to assist in the development of the airplane. George P. Thomson, the son of J. J. Thomson; Francis Aston, the inventor of the mass spectrograph; Geoffrey I. Taylor, a specialist in fluid mechanics; and other excellent students or fresh graduates, including Hermann Glauert and William S. Farren, participated in the war work. The Factory in Farnborough thus became another prominent center of aeronautical research in Britain.

Farnborough approached aeronautical problems differently from the NPL. Whereas the NPL relied on wind tunnel experiments using small-scale airplane models, the Royal Aircraft Factory performed test flights of full-scale aircraft. For example, Oxford physicist Frederick Lindemann performed dangerous spinning flights, and his data were analyzed by G. P. Thomson. The Factory’s primary function was to construct full-scale airplanes and conduct flight tests on them. Cambridge scientists collaborated closely with pilots and aircraft designers in their aeronautical investigations.

Through a number of full-scale flight tests, Factory investigators became aware of discrepancies between model tests and corresponding full-scale tests. They prepared a preliminary report noting the differences in terms of values of drag and lift of the airplane.9 To discuss the problem, a subcommittee was formed in 1917 including among its members representatives from the NPL and the Factory.10 Its official name was the “Scale Effect” subcommittee. The term “scale effect” was enclosed in quotation marks, suggesting that its significance was a matter in question.

A vehement debate arose at the first meeting of the subcommittee. Bairstow, the advocate of model experiments, argued against the Factory conclusion that the discrepancies between the measurements achieved by the two methods were attributable to scale effect. In his report, he referred to various causes of error other than scale effect, including errors in full-scale tests themselves. He even pointed out that a previous Factory report was “illogical” because it neglected the effect of interference on airplane drag. He also mentioned French aeronautical research in which model tests at Eiffel’s laboratory and full-scale tests at St. Cyr showed fairly good correspondence.11

The subcommittee considered a variety of causes for the discrepancies, examining each cause extensively. For example, the full-scale measurement of the drag of an airplane depended on the value of the power of its engine and the efficiency of its propeller. The suggestion was raised at one meeting that the power of the engine measured during flight would be different from that measured on the ground.12 In this case, it appeared that the pressure distribution should be measured by both full-scale and model methods.

Among the causes of errors investigated by the subcommittee, the most notable was the effect of the propeller on full-scale data. To investigate this effect, it was suggested in 1917 that full-scale tests should be made while the airplane was gliding with its engine stopped. Farren at Farnborough objected that airplanes suitable for such gliding tests were no longer available there, having all been sent to the front. Bairstow observed that the Factory should always be able to secure airplanes for experiments.13

Through the discussions and investigations, the subcommittee reduced the original differences between the two sets of test results. Yet, subcommittee members remained divided in their conclusions regarding scale effect. In preparing the subcommittee’s final report, Bairstow insisted that scale effect was not a significant factor. When a Factory report on the collection of full-scale data was circulated among subcommittee members, Bairstow severely criticized the report. Although it did not explicitly refer to the unreliability of model tests, it listed full-scale data as necessary and sufficient for the calculation of drag. This form of presentation, Bairstow contended, would leave the impression that model test results could not be readily applied to full-scale planes. He suggested that the subcommittee should take some steps to correct the “wrong” impression created by the Factory report.14 The subcommittee’s final report therefore carried a statement on the usefulness of model tests, virtually neglecting the significance of the scale effect. Bairstow was willing to support the publication of the complete data only if the final report explicitly stated that observed differences had not been found to be due to scale effect.15

The different positions on the scale effect taken by the Factory and the NPL investigators reflected the different research strategies pursued at the two research facilities. The NPL concentrated on model testing in wind tunnels only, whereas the Factory’s main focus was on full-scale testing using its own planes. Bairstow was apparently afraid that invalidation of the model test results would seriously undermine the significance of aerodynamic investigations in which he had been engaged while at the NPL.

AN ACCIDENT OF HISTORY

We regularly ask after the limits of historical inquiry; we agonize over the right combination of psychological, sociological, and technical explanations. We struggle over how to combine the behavior of machines and practices of their users. Imagine, for a moment, that there was a nearly punctiform scientific-technological event that took place in the very recent past for which an historical understanding was so important that the full resources of the American government bore down upon it. Picture further that every private and public word spoken by the principal actors had been recorded, and that their every significant physical movement had been inscribed on tape. Count on the fact that lives were lost or jeopardized in the hundreds, and that thousands of others might be in the not so distant future. Expect that the solvency of some of the largest industries in the United States was on the line through a billion dollars in liability coverage that would ride, to no small extent, on the causal account given in that history. What form, we can ask, would this high-stakes history take? And what might an inquiry into such histories tell us about the project of – and limits to – historical inquiry more generally, as it is directed to the sphere of science and technology?

There are such events and such histories – the unimaginably violent, destructive, and costly crash of a major passenger-carrying airplane. We can ask: What is the concept of history embedded in the accident investigation that begins while crushed aluminum is still smoldering? Beginning with the Civil Aeronautics Act of 1938, the Civil Aeronautics Authority (a portion of which became today’s National Transportation Safety Board) and its successors have been assigned the task of reporting on each accident, determining what happened, producing a “probable cause” and arriving at recommendations to what is now the Federal Aviation Administration (and through them to industry and government) that would avoid repetition. Quite deliberately, the NTSB report conclusions were disqualified from being used in court: the investigative process was designed to have some freedom both from the FAA and from the courts. Since its establishment, the system of inquiry has evolved in ways I will discuss, but over the last half century there are certain elements that remain basically constant. From these consistencies, and from the training program and manuals of investigation, I believe we can understand the guiding historiographical principles that underlie these extraordinary inquiries. What they say – and do not say – can tell us about the broad system of aviation, its interconnectedness and vulnerabilities, but also, perhaps, something larger about the reconstruction of the intertwined human and machinic world as it slips into the past.



There is a wide literature that aims to re-explain aviation accidents. Such efforts are not my interest here. Instead, I want to explore the form of historical explanation realized in the accident reports. In particular, I will focus on a cluster of closely related instabilities, by which I mean unresolvable tensions between competing norms of explanation. Above all, the reports are pulled at one and the same time towards localizing accounts (causal chains that end at particular sites with a critical action) and towards diffusing accounts (causal chains that spread out to human interactions and organizational cultures). Along the way, two other instabilities will emerge: first, a sharp tension between an insistence on the necessity of following protocol and a simultaneous commitment to the necessary exercise of protocol-defying judgment. Second, there is a recurrent strain between a drive to ascribe final causation to human factors and an equally powerful, countervailing drive to assign agency to technological factors. To approach these and related questions, one needs sources beyond the reports alone. And here an old legislative stricture proves of enormous importance: for each case the NTSB investigates, it is possible to see the background documentation, sometimes amounting to many thousands of pages. From this “docket” emerge transcripts of the background material used to assemble the reports themselves: recordings and data from the flight, metallurgical studies, interviews, psychological analyses. But enough preliminaries. Our first narrative begins in Washington, DC, on a cold Wednesday afternoon, January 13, 1982.

The accident report opened its account at Washington National Airport. Snow was falling so hard that, by 1338, the airport had to shut down for 15 minutes of clearing. At 1359, Air Florida Flight 90, a Boeing 737-222 carrying 5 crewmembers and 74 passengers, requested and received their Instrument Flight Rules clearance. Twenty minutes later, a tug began de-icing the left side of the plane, then halted because of further departure delays. With the left side of the aircraft cleared, a relief operator replaced the initial one, and resumed the spraying of heated glycol-water mixture on the right side. By 1510, the relief operator finished with a final coat of glycol, inspected the plane’s engine intakes and landing gear, and found all surfaces clear of snow and ice. With the airplane stuck in the snow, the captain blasted the engines in reverse for about a minute in a vain effort to free the plane from its deepening prison of water, glycol, ice, and snow. With a new tug in place, the ground crew successfully pulled flight 90 out of the gate at 1535. Planes were backed up in holding patterns up and down the East Coast as they waited for landing clearance. Taxiways jammed: flight 90 was seventeenth in line for takeoff.

When accident investigators dissected the water-soaked, fuel-encrusted cockpit voice recorder (CVR), here is what they transcribed from time code 1538:06 forward. We are in the midst of their “after start” checklist. Captain Larry Michael Wheaton, a 34-year-old captain for Air Florida, speaks first on CAM-1. The first officer is Roger Alan Pettit, a 31-year-old ex-fighter pilot for the Air Force; he is on CAM-2.

1538:06 Wheaton/CAM-1 {my insertions in curly brackets} After start

Pettit/CAM-2 Electrical

Wheaton/CAM-1 Generators

Pettit/CAM-2 Pitot heat {heater for the ram air intake that measures airspeed}

Wheaton/CAM-1 On

Pettit/CAM-2 Anti-ice

Wheaton/CAM-1 {here, because some of the listeners heard “on” and the majority “off”, the tape was sent to FBI Technical Services Division where the word was judged to be “off”.} Off.

Pettit/CAM-2 Air conditioning pressurization

Wheaton/CAM-1 Packs on flight

Pettit/CAM-2 APU {Auxiliary Power Unit}

Wheaton/CAM-1 Running

Pettit/CAM-2 Start levers

Wheaton/CAM-1 Idle [ … ]

Preparation for flight includes these and many other checklist items, each conducted in a format in which the first officer Pettit “challenges” captain Wheaton, who then responds. Throughout this routine, however, the severe weather commanded the flightcrew’s attention more than once as they sat on the taxiway. In the reportorial language of the investigators’ descriptive sections, the following excerpt illustrates the flight crew’s continuing concern about the accumulating ice, snow and slush, as they followed close behind another jet:

At 1540:42, the first officer continued to say, “It’s been a while since we’ve been deiced.” At 1546:21, the captain said, “Tell you what, my windshield will be deiced, don’t know about my wings.” The first officer then commented, “well – all we need is the inside of the wings anyway, the wingtips are gonna speed up on eighty anyway, they’ll shuck all that other stuff.” At 1547:32, the captain commented, “(Gonna) get your wing now.” Five seconds later, the first officer asked, “D’they get yours? Did they get your wingtip over ’er’?” The captain replied, “I got a little on mine.” The first officer then said, “A little, this one’s got about a quarter to half an inch on it all the way.”1

Then, just a little later, the report on voice recordings indicates:

At 1548:59, the first officer asked, “See this difference in that left engine and right one?” The captain replied, “Yeah.” The first officer then commented, “I don’t know why that’s different – less it’s hot air going into that right one, that must be it – from his exhaust – it was doing that at the chocks awhile ago but, ah.”

Which instrument exactly the first officer had in mind is not clear; the NTSB (for reasons that will become apparent shortly) later argued that he was attentive to the fact that, despite similar Engine Pressure Ratios (the ratio of pressure at the intake and exhaust of the jet and therefore a primary measure of thrust), there was a difference in the readout of the other engine instruments. These others are the N1 and N2 gauges – displaying the percent of maximum rpm of the low and high pressure compressors respectively – the Exhaust Gas Temperature gauge (EGT), and the fuel flow gauge that reads in pounds per minute. Apparently satisfied with the first officer’s explanation that there was hot air entering the right engine from the preceding plane, and that somehow this was responsible for the left-right discrepancy, the captain and first officer dropped the topic. But ice and snow continued to accumulate on the wings, as was evident from the cockpit voice recorder tape recorded four minutes later. To understand the first officer’s intervention at 1558:12, you need to know that the “bugs” are hand-set indicators on the airspeed gauge; the first corresponds to V1, the “decision speed” above which the plane has enough speed to accelerate safely to flight on one engine and below which the plane can (theoretically) be stopped on the runway. The second speed is VR, the rotation speed at which the nosewheel is pulled off the ground, and the third, V2, is the optimal climbout speed during the initial ascent, a speed set by pitching the plane to a pre-set angle (here 18°).

1553:21 Pettit/CAM-2 Boy, this is a losing battle here on trying to deice those things, it (gives) you a false sense of security that’s all that does

Wheaton/CAM-1 That, ah, satisfied the Feds

Pettit/CAM-2 Yeah

1558:10 Pettit/CAM-2 EPR all the way two oh four {Engine Pressure Ratio, explained below}

1558:12 Pettit/CAM-2 Indicated airspeed bugs are a thirty-eight, forty, forty four

Wheaton/CAM-1 Set

1558:21 Pettit/CAM-2 Cockpit door

1558:22 Wheaton/CAM-1 Locked

1558:23 Pettit/CAM-2 Takeoff briefing

1558:25 Wheaton/CAM-1 Air Florida standard

1558:26 Pettit/CAM-2 Slushy runway, do you want me to do anything special for this or just go for it?

1558:31 Wheaton/CAM-1 Unless you got anything special you’d like to do

1558:33 Pettit/CAM-2 Unless just takeoff the nosewheel early like a soft field takeoff or something

1558:37 Pettit/CAM-2 I’ll take the nosewheel off and then we’ll let it fly off

1558:39 Pettit/CAM-2 Be out of three two six, climbing to five, I’ll pull it back to about one point five five supposed to be about one six depending on how scared we are.

1558:45 (Laughter)

As in most flights, the captain and first officer were alternating as “pilot flying”; on this leg the first officer had the airplane. For most purposes, and there are significant exceptions, the two essentially switch roles when the captain is the pilot not flying. In the above remarks, the first officer was verifying that he would treat the slushy runway as one typically does any “soft field” – the control wheel is pulled back to keep weight off the front wheel and as soon as the plane produces enough lift to keep the nosewheel off the runway, it is allowed to do so. His next remark restated that the departure plan called for a heading of 326 degrees magnetic, that their first altitude assignment was for 5,000 feet, and that he expected to throttle back from a takeoff thrust (EPR) setting of 2.04 to a climb setting of between 1.55 and 1.6. Takeoff clearance came forty seconds later, with the urgent injunction “no delay.” There was another incoming jet two and a half miles out heading for the same runway. Flight 90’s engines spooled up, and the 737 began its ground roll down runway 36. Note that the curly brackets indicate text I have added to the transcript.

1559:54 {Voice identification unclear} CAM-? Real cold here

1559:55 Pettit/CAM-2 Got ‘em?

1559:56 Wheaton/CAM-1 Real cold

1559:57 Wheaton/CAM-1 Real cold

1559:58 Pettit/CAM-2 God, look at that thing

1600:02 Pettit/CAM-2 That doesn’t seem right does it?

1600:05 Pettit/CAM-2 Ah, that’s not right

1600:07 Pettit/CAM-2 (Well) –

1600:09 Wheaton/CAM-1 Yes it is, there’s eighty {knots indicated airspeed}

1600:10 Pettit/CAM-2 Naw, I don’t think that’s right

1600:19 Pettit/CAM-2 Ah, maybe it is

1600:21 Wheaton/CAM-1 Hundred and twenty

1600:23 Pettit/CAM-2 I don’t know

1600:31 Wheaton/CAM-1 Vee one

1600:33 Wheaton/CAM-1 Easy

1600:37 Wheaton/CAM-1 Vee two

1600:39 CAM (Sound of stickshaker starts and continues to impact)

1600:45 Wheaton/CAM-1 Forward, forward {presumably the plane is over-rotating to too high a pitch attitude}

1600:47 CAM-? Easy

1600:48 Wheaton/CAM-1 We only want five hundred {feet per minute climb}

1600:50 Wheaton/CAM-1 Come on, forward

1600:53 Wheaton/CAM-1 Forward

1600:55 Wheaton/CAM-1 Just barely climb

1600:59 Pettit/CAM-2 (Stalling) we’re (falling)

1601:00 Pettit/CAM-2 Larry we’re going down, Larry

1601:01 Wheaton/CAM-1 I know it

1601:01 ((Sound of impact))

The aircraft struck rush-hour traffic on the Fourteenth Street Bridge, hitting six occupied automobiles and a boom truck, ripping out a 41-foot section of the bridge wall along with 97 feet of railings. The tail section pitched up, throwing the cockpit down towards the river. Torn to pieces by the impact, the airplane ripped and buckled, sending seats into each other amidst the collapsing structure. According to pathologists cited in the NTSB report, seventy passengers, among them three infants, and four crewmembers were fatally injured; seventeen passengers were incapacitated by the crash and could not escape.2 Four people in vehicles died immediately of impact-induced injuries as cars were spun across the bridge. Only the tail section of the plane remained relatively intact, and from it six people were plunged into the 34-degree ice-covered Potomac. The one surviving flight attendant, her hands immobilized by the cold, managed to chew open a plastic bag containing a flotation device and give it to the most seriously injured passenger. Twenty minutes later, a Parks Department helicopter arrived at the scene and rescued four of the five survivors; a bystander swam out to rescue the fifth.3


Figure 1. Flightpath. Source: National Transportation Safety Board, Aircraft Accident Report, Air Florida, Inc., Boeing 737-222, N62AF, Collision with 14th Street Bridge, near Washington National Airport, Washington, D.C., January 13, 1982, p. 7, figure 1. Hereafter, NTSB-90.

THE EVOLUTION OF THE TURBOJET ENGINE: 1945-1956

The first turbojet engines to achieve truly high performance (even by today’s standards) emerged at the end of the 1940s and in the early 1950s. These engines required several advances in technology, including better alloys, especially for blading, and higher turbine inlet temperatures. The most important advance, however, was to raise the overall compressor pressure-ratio from around 5 to 1 – as in General Electric’s J-47, the engine with by far the most flight hours as of 1952 – to more than 10 to 1. Because the average pressure-ratio per stage in axial compressors was then limited to around 1.15, this meant many more stages. It also meant a much smaller annulus area for the flow in the rear stages than in the forward stages. This reduction in annulus area meant that a new, fundamental problem in compressor design had to be solved. When the rotational speed of the compressor was low, the front stages would not compress the flow enough to pass through the smaller annuli of the rear stages, causing these stages to stall and the compressor to go into a violent instability called surge. Consequently, some special provision was needed to enable the engine even to sustain operation at off-design conditions. A second factor exacerbated this problem. As the flow acquires a tangential component of velocity in a stage, a centrifugal force arises in it. A radial pressure gradient balances this force, resulting in radial equilibrium. Unless this pressure gradient is accounted for carefully in design, the flow can become so radially “distorted” by the time it reaches the rear stages that they are forced to operate far off the incidence angles for which they were intended and hence with high thermodynamic losses.

Accordingly, in order to design high pressure-ratio, multistage compressors, the engine companies had to find a solution to the problem of matching the rear stages with the front stages at off-design as well as at design operating conditions. The three engine companies that emerged as dominant in the U.S. and Britain by the early 1960s – Pratt & Whitney Aircraft, General Electric, and Rolls-Royce – solved this problem in three different ways.18

Pratt & Whitney – The Two-Spool Engine

Pratt & Whitney, in spite of decades of experience with reciprocating aircraft engines, entered the turbojet business well behind GE and Rolls-Royce. During the late 1940s P&W invested heavily in jet engine technology, including extensive in-house tests of the performance of compressor airfoil profiles in cascade at off-design incidence angles. P&W received a study contract to design a high-thrust engine for a strategic bomber in 1947.19 They decided that the best way to achieve the requisite compressor pressure-ratio was, in effect, to divide the compressor into two separate compressors, powered by two separate turbines, turning at different speeds. This arrangement (displayed in Figure 1) is called a two-spool engine, with the front compressor serving as a low-pressure compressor and the rear one as a high-pressure compressor. At off-design conditions the low-pressure spool rotates at much lower speed than the high-pressure spool; as a consequence the low-pressure compressor passes less flow into the high-pressure compressor at these conditions. Individually, each of the two compressors requires a comparatively modest number of stages, so that the cumulative effects of radial equilibrium on the back stages of each spool are not that severe.

The two-spool engine P&W designed under its 1947 study contract became the J-57, powering the B-52 bomber, among other aircraft. It was a remarkable engine by any standards, all the more so considering that it was designed between 1947 and 1949, essentially using slide rule methods. The initial version was a 10,000-pound-thrust engine for subsonic flight; with afterburner added, it produced 15,000 pounds of thrust for low supersonic flight. It had an overall compressor pressure-ratio of 12.5 to 1, achieved in a 9-stage low-pressure compressor and a 7-stage high-pressure compressor (for an average pressure-ratio of 1.17 per stage). The J-57 went into service in 1953. Using basically the same design approach, P&W designed a somewhat larger two-spool engine in the early 1950s, the J-75, for Mach 2 flight.20
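The relation between overall and per-stage pressure-ratio quoted here is multiplicative, and the J-57 figures can be checked directly. A minimal sketch, using only the numbers given above:

```python
# Overall compressor pressure-ratio is the product of the stage ratios,
# so the average stage ratio is the (1/stages)-th root of the overall ratio.
overall_ratio = 12.5
stages = 9 + 7          # 9-stage LP compressor + 7-stage HP compressor

avg_stage_ratio = overall_ratio ** (1.0 / stages)
print(round(avg_stage_ratio, 3))   # ~1.171, matching the 1.17 quoted above
```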

Working Around Ignorance

The presence of such parameters points to another layer of engineering knowledge, or lack thereof. A striking feature of this episode is the extent to which the design process revolved around ignorance – more precisely, the recognition of ignorance and ways of compensating for and safeguarding against it. No one working on axial compressors and fans in that era knew what the flow inside a blade row was at any level of detail. It was not just that they could not calculate the detailed flow; they could not even measure it inside the rotating blade rows – only at their inlet and outlet. Rolls-Royce’s way of dealing with this in the case of the bypass flow in the Conway was to use several stages with standard subsonic, low pressure-ratio airfoils whose “black-box” performance had been established empirically. Even though Pratt & Whitney knew that General Electric had achieved a 1.6 pressure-ratio in a single stage, they recognized that they did not know how to do this and opted for two stages. They too used pre-defined, pre-tested airfoils – in their case double-circular-arc airfoils that could be pushed to inlet Mach numbers of 1.15 and a little above. The boundaries of ignorance within P&W had been pushed back somewhat by the mid-1950s compared with those of Rolls-Royce two or three years earlier, but these boundaries still dictated the design.

The boundaries of GE’s ignorance had been pushed even further back, yet most of their design effort was still aimed primarily at compensating for what they did not know. They did not know how to control the effects of shocks, but they recognized that they could get away with not knowing this if they limited the tip Mach number to 1.25, safely below the 1.35 level where the losses had jumped in Klapproth’s NACA rotor. GE had no way of knowing the complicated three-dimensional flow inside their rotor blade row, but they knew they could get away with this so long as their calculated radial and axial velocity distributions were sufficiently similar in key respects to those of conventional airfoils and their diffusion factors remained below the established limiting values. The novel computer program they devised, besides giving them information about the velocity distributions, allowed them to work backwards from these distributions to plausible blade contours. Even so, as their tests showed, the actual flow departed non-trivially from their calculation. Yet they came sufficiently close to the actual flow in crucial respects, most notably the diffusion factor, to achieve a breakthrough in stage performance.

A related point about dealing with ignorance holds for the NACA compressor research program. Its aim was not one of obtaining detailed knowledge of the three-dimensional flow inside a blade row and how to control it. Rather, the aim was to find ways of achieving both consistent and superior designs without having to know the detailed flow. The cascade wind-tunnel tests gave black-box performance of two-dimensional airfoils, and the NACA design method provided ways of compensating for radial effects in using this two-dimensional performance. The transonic research program searched for ways of pushing the boundaries of ignorance back a little, and the supersonic program explored the possibility of pushing them back dramatically. The most striking example of compensating for ignorance, however, is the diffusion factor. The whole idea behind it was to employ quantities that could be measured, at the inlet and outlet of blade rows, to provide an approximation to a feature of the flow inside the blade row that generally could not be measured or calculated with confidence. The diffusion factor enabled higher pressure-ratio stages to be pursued without having to know more about the flow inside the blade row. The rule of thumb it gave for limiting blade loading defined a boundary of ignorance. Reasonable stages could be designed without mastery of the detailed flow inside the blade rows so long as the diffusion factor remained below its empirically determined critical value and velocity distributions did not depart radically from those of the past. The correlation of airfoil profile losses with the diffusion factor, together with a subsequently developed NACA model for calculating shock losses at higher Mach numbers87, allowed designers in the engine companies to live with their ignorance. Engineers do not need to know why something works so long as they know how to stay safely within the bounds of their ignorance and still produce competitive designs.
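The text does not spell out the diffusion factor’s definition; the standard NACA form, due to Lieblein and his colleagues, uses only velocities measurable at the inlet and outlet of a blade row. A minimal sketch of that definition follows; the numerical values are invented for illustration:

```python
def diffusion_factor(v_in, v_out, dv_theta, solidity):
    """Lieblein diffusion factor: an inlet/outlet proxy for blade loading.

    v_in, v_out -- relative velocities at blade-row inlet and outlet
    dv_theta    -- change in tangential velocity across the row
    solidity    -- blade chord divided by blade spacing
    """
    return (1.0 - v_out / v_in) + dv_theta / (2.0 * solidity * v_in)

# Invented numbers; empirically, profile losses were found to climb
# steeply once the diffusion factor exceeded a critical value (~0.6).
print(round(diffusion_factor(v_in=350.0, v_out=250.0,
                             dv_theta=120.0, solidity=1.0), 3))  # 0.457
```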

These practices differ markedly from Vincenti’s characterization of uncertainty in engineering, where engineers usually “did not know as much as they thought they did,” and sometimes “didn’t even know what they didn’t know.”88 In our case, engineers knew rather accurately what they did not know and hence endeavored specifically to work around the boundaries of their ignorance. Nonetheless, their work is well described by Vincenti’s observation that such work often serves “to free engineering from the limitations of science.”89 Although physicists had established the equations of motion for fluid flow more than a century earlier, these equations remained intractable even for flows enormously simpler than those in compressor and fan stages. Engineers could turn to physics for simplified, approximate reformulations of these equations, but engineering judgment then became crucial in deciding which features of the flow could be ignored or represented grossly by an empirically-based model.90 Experiments could be carried out in wind tunnels and measurements could be made on full stages, but again judgment and ingenuity were indispensable in drawing conclusions from data that designers could use. The shortage of science fostered an engineering practice epitomized by the following recommendation, made not in the early 1950s, but in 1978: “No compressor designer should overlook the possibility or underestimate the advantages of scaling an existing compressor geometry of known performance to meet his current design goals.”91 Even when existing designs could not be so used, they served as starting points for incremental advances. The continuous improvement achieved in axial fan and compressor design in the period covered in this paper, and subsequently, has not come from being better able to exploit scientific knowledge of fluid flow, but rather from sophisticated aspects of engineering practice aimed at defining, surmounting, and hence shifting, boundaries of ignorance.

We have shown how the development of turbofan engines, a technology with significant technical preconditions and precedents, emerged out of a disparate but rich set of experiments and designs, working with knowledge of fluid flow very close to its boundaries of uncertainty. How well do the historical phenomena in this analysis apply to engineering epistemology in general? This question can be reformulated: is the role of uncertainty in engineering design exaggerated when one examines cutting-edge aerodynamics, where the physics of turbulence, that paradigm of poorly-understood phenomena, so dominates? Isn’t design in other contexts a more “certain” endeavor? Anecdotal evidence suggests otherwise. A prominent computer scientist and algorithm designer, when recently asked this question, responded emphatically in the negative. Any number of parameters in computer systems, from network behavior to algorithmic complexity, display similar phenomena. We understand, after all, how any individual Newtonian air particle behaves, just as we understand individual transistors. Their sum totals, however, exhibit behaviors currently beyond “the limitations of science.” It is at this boundary, we argue, literally at the border of complexity, that engineering begins.

THE AEROPLANE OF 1930

Bairstow’s coercive behavior in the subcommittee meetings reappeared after the war. A special meeting was held in early 1921 to formulate a postwar research program under the new Aeronautical Research Committee (ARC), the successor of the Advisory Committee for Aeronautics.16 Called “The Discussion of the Aeroplane of 1930,” this unique event aimed at discussing the most important fields of investigation in designing future airplanes. While various conflicting issues emerged, the discussion chiefly focused on establishing the priority of two different research programs: one concentrating on the production of more stable and controllable airplanes and the other directed towards designing faster planes by reducing head resistance. The two programs were advocated by Bairstow and B. Melvill Jones respectively, Bairstow holding the new chair in aeronautics at the Imperial College of Science and Technology and Jones the corresponding chair at Cambridge University.

The meeting of 1921 arose from an idea of Henry R. M. Brooke-Popham, Director of Research of the Air Ministry, who asked Henry Tizard to explore “the most important lines of research which might be expected to lead up to the 1930 aircraft.”17 While preparing his own article on the topic, Tizard asked for opinions on this question from leading aeronautical engineers in Britain. Through the Secretary of the Aeronautical Research Committee, a letter was sent to these engineers in November of 1920, requesting comments and suggestions on Tizard’s article.

About ten aeronautical engineers responded, from the military, industry, and academia. Their answers conformed roughly to the format of Tizard’s questions. The report returned by Jones contained specific research proposals and a methodological discussion on the nature of technological investigations. Jones distinguished between long-term and short-term investigations. In his view the most promising field for long-term investigation was the airplane body, especially the aerodynamic interference between the propeller and the body.18

After Jones and the three other Committee members who had submitted preliminary papers had spoken at the 1921 meeting, Bairstow offered a long criticism of Jones’s proposals. The sharp conflict between Bairstow and Jones became the central issue of the Discussion of the Aeroplane of 1930. Bairstow’s position was clear: continue research into aeronautical control and stability. In arguing for this policy, he stressed three points: the main cause of airplane accidents, the high cost of insurance premiums for commercial aviation, and the need for night flying capability.19 A recent report of the Accidents subcommittee had stated that in order to decrease accidents, the investigation of lateral control and stability was of pressing importance. The report also concluded that “the knowledge of longitudinal motion is in a far more satisfactory condition than knowledge of lateral motion.” On this point, however, Bairstow drew attention to a recent accident of the Tarrant Tabor, the giant experimental airplane which had lost its longitudinal balance while taking off, causing the death of the two crewmen on board.20 Though the real causes of the accident were still unclear, Bairstow emphasized that more investigations on models were needed to secure longitudinal balance in large, manually-controlled aeroplanes.

Tizard attempted to find a compromise between Bairstow’s insistence on stability and control and Jones’s refusal to fragment the study of aerodynamic forces on the airframe. Might not some research be terminated to free resources for a new project? Specifically, he questioned the urgency of an investigation into the stability and control of a twin-engine airplane when one of its engines suddenly stopped. This problem was so complicated, he commented, that by the time it was finally solved, the current type of twin-engined airplanes might be completely outdated. Funds for this research might be better invested in Jones’s plan. But Tizard could not convince the Committee. Instead, Jones was nominated to be chairman of the Stability and Control subcommittee, and was obliged to continue his study on the control of airplanes flying at low speeds. Bairstow’s power prevailed. Research on stability and control continued to dominate for most of the next decade.

Bairstow’s power in this instance can be compared to that of Pasteur, as described in Bruno Latour’s Pasteurization of France. In Latour’s view, Pasteur parlayed his research achievements within the laboratory into power in the outside world.21 In Bairstow’s attempt to combine the inside and the outside, his manner appears coercive. In the controversy over scale effect, for example, his contention was too one-sided. In the discussion of the Aeroplane of 1930, his statement defended his own vested interest rather than reflecting on the best research program for the next decade.

When the controversy over scale effect was finally settled, Bairstow’s argument turned out to be wrong. The complete controversy is cogently summarized by Joseph L. Nayler, longtime Secretary of the Aeronautical Research Committee, in his obituary account of Bairstow:

There was a great controversy in the early days between Bairstow and research staff at the Royal Aircraft Factory that boiled up in the ‘Scale Effect’ subcommittee which gave rise to the Aerodynamics subcommittee. Bairstow maintained that full-scale was inaccurate and model work was dead accurate. This position did not alter much until an ‘international’ aerofoil was sent to laboratories abroad by Richard Southwell for the A.R.C., and a variety of results obtained. That led to the investigation of turbulence in wind tunnels. In another respect Bairstow was at fault. He disagreed about corrections for wind tunnel walls brought forward by Glauert, who had studied Prandtl, and the Aerodynamics Committee actually voted against their inclusion under Bairstow’s influence; but the position changed so rapidly that in a couple of years or so the swing was all the other way.22

The following section traces the story in more detail. It begins with another connection between the inside and the outside of the laboratory.

THE PHYSICS OF FAILURE

Why did flight 90 crash? At a technical level (and as we will see the technical never is purely technical) the NTSB concluded that the answer was twofold: not enough thrust and contaminated wings. Easily said, less easily demonstrated. The crash team mounted three basic arguments. First, from the cockpit voice recorder, investigators could extract and frequency analyze the background noise, noise that was demonstrably dominated by the rotation of the low-pressure compressor. This frequency, which corresponds to the number of blades passing per second (BPF), is closely related to the instrument panel gauge N1 (percentage of maximum rpm for the low pressure compressor) by the following formula:

BPF (blades per second) = (rotations per minute (rpm) x number of blades)/60

or

Percent max rpm (N1) = (BPF x 60 x 100)/(maximum rpm x number of blades)

Applying this formula to the recorded noise, investigators found that until 1600:55 – about six seconds before the crash – N1 remained between 80 and 84 percent of maximum. Normal N1 during standard takeoff thrust was about 90 percent. It appeared that only during these last seconds was the power pushed all the way. So why was N1 so low, so discordant with the relatively high setting of the EPR at 2.04? After all, we heard a moment ago on the CVR that the engines had been set at 2.04, maximum takeoff thrust. How could this be? The report then takes us back to the gauges.
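A minimal sketch of this inference in code; the blade count and maximum rpm below are invented placeholders, not the JT8D’s actual figures:

```python
def n1_from_bpf(bpf_hz, num_blades, max_rpm):
    """Infer N1 (percent of maximum low-pressure-compressor rpm) from the
    blade-passing frequency extracted from the CVR background noise."""
    rpm = bpf_hz * 60.0 / num_blades   # invert BPF = rpm * blades / 60
    return 100.0 * rpm / max_rpm

# Hypothetical values chosen only to land in the report's 80-84% range.
print(round(n1_from_bpf(bpf_hz=4000.0, num_blades=30, max_rpm=9500.0), 1))
```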

The primary instrument for takeoff thrust was the Engine Pressure Ratio gauge, the EPR. In the 737 this gauge was read off of an electronically divided signal in which the engine exhaust pressure given by the probe Pt7 was divided by the inlet engine nose probe pressure Pt2. Normally the Pt2 probe was deiced by the anti-ice bleed air from the engine’s eighth stage compressor. If, however, ice were allowed to form in and block the probe Pt2, the EPR gauge would become completely unreliable. For with Pt2 frozen, pressure measurement took place at the vent (see figure 2) – and the pressure at that vent was significantly lower than the compressed air in the midst of the compressor, making

apparent EPR = Pt7/Pvent > real EPR = Pt7/Pt2.

Since takeoff procedure was governed by throttling up to a fixed EPR reading of 2.04, a falsely high reading of the EPR meant that the “real” EPR could have been much less, and that meant less engine power.
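A toy calculation of the frozen-probe effect; the pressures below are arbitrary, chosen only so that the ratios match the report’s indicated 2.04 versus actual 1.70 readings:

```python
# Illustrative numbers only (arbitrary units), not the NTSB's data.
pt7 = 34.0                 # exhaust pressure at probe Pt7
pt2_true = 20.0            # inlet pressure a working Pt2 probe would sense
p_vent = pt2_true * (1.70 / 2.04)   # lower pressure seen at the vent

real_epr = pt7 / pt2_true           # ~1.70: the thrust actually delivered
apparent_epr = pt7 / p_vent         # ~2.04: gauge reads high on the low denominator
print(round(real_epr, 2), round(apparent_epr, 2))
```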

To test the hypothesis of a frozen low pressure probe, the Boeing Company engineers took a similarly configured 737-200 aircraft with JT8D engines resembling those on the accident flight, and blocked with tape the Pt2 probe on the number one engine (simulating the probe being frozen shut). They left the number two engine probe unblocked (normal). The testers then set the Engine Pressure Ratio indicator for both engines at takeoff power (2.04), and observed the resulting readings on the other instruments for both “frozen” and “normal” cases. This experiment made it clear that the EPR reading for the blocked engine was deceptive – as soon as the tape was removed from Pt2, the EPR revealed not the 2.04 to which it had been set, but a mere 1.70. Strikingly, all the other number one engine gauges – N1, N2, EGT, and Fuel Flow – remained at the level expected for an EPR of 1.70.

Figure 2. Pt2 and Pt7. Source: NTSB-90, p. 25, figure 5.

Figure 3. Instruments for Normal/Blocked Pt2. Source: NTSB-90, p. 26, figure 6.

One thing was now clear: instead of two engines operating at an EPR of 2.04 or 14,500 lbs of thrust each, flight 90 had taken off, hobbled into a stall, and begun falling towards the 14th Street Bridge with two engines delivering an EPR of 1.70, a mere 10,750 lbs of thrust apiece. At that power, the plane was only marginally able
to climb under perfect conditions. And with wings covered with ice and snow, flight 90 was not, on January 13, flying under otherwise perfect conditions.

Finally, in Boeing’s Flight Simulator Center in Renton, Washington, staff unfolded a third stage of inquiry into the power problem. With some custom programming the computer center designed visuals to reproduce the runway at Washington National Airport, the 14th Street Bridge and the railroad bridge. Pilots flying the simulator under “normal” (no-ice configuration) concurred that the simulation resembled the 737s they flew. With normalcy defined by this consensus, the simulator was then set to replicate the 737-200 with wing surface contamination – specifically the coefficient of lift was degraded and that of drag augmented. Now using the results of the engine test and noise spectrum analysis, engineers set the EPR at 1.70 instead of the usual takeoff value of 2.04. While alone the low power was not “fatal” and alone the altered lift and drag were not catastrophic, together the two delivered five flights that did reproduce the flight profile, timing and position of impact of the ill-starred flight 90. Under these flight conditions the last possible time in which recovery appeared possible by application of full power (full EPR = 2.23) was about 15 seconds after takeoff. Beyond that point, no addition of power rescued the plane.4

Up to now the story is as logically straightforward as it is humanly tragic: wing contamination and low thrust resulting from a power setting fixed on the basis of a frozen, malfunctioning gauge drove the 737 into a low-altitude stall. But from this point on in the story that limpid quality clouds. Causal lines radiated every which way like the wires of an old, discarded computer – some terminated, some crossed, some led to regulations, others to hardware; some to training, and others to individual or group psychology. At the same time, this report, like others, began to focus the causal inquiry upon an individual element, or even on an individual person. This dilemma between causal diffusion and causal localization lay at the heart of this and other inquiries. But let us return to the specifics.

The NTSB followed, inter alia, the deicing trucks. Why, the NTSB asked, was the left side of the plane treated without a final overspray of glycol while the right side received it? Why was the glycol mixture wrongly reckoned for the temperature? Why were the engine inlets not properly covered during the spraying? Typical of the ramified causal paths was the one that led to a non-regulation nozzle used by one of the trucks, such that its miscalibration left less glycol in the mixture (18%) than there should have been (30%).5 What does one conclude? That the replacement nozzle killed these men, women and children? That the purchase order clerk who bought it was responsible? That the absence of a “mix monitor” directly registering the glycol-to-water ratio was the seed of destruction?6 And the list of circumstances without which the accident would not have occurred goes on – including the possibility that wing de-icing could have been used on the ground, and that better gate holding procedures would have kept flight 90 from waiting so long between de-icing and takeoff, to name but two others.7

There is within the accident report’s expanding net of counterfactual conditionals a fundamental instability that, I believe, registers in the very conception of these accident investigations. For these reports in general – and this one in particular – systematically turn in two conflicting directions. On one side the reports identify a wide net of necessary causes of the crash, and there are arbitrarily many of these – after all the number of ways in which the accident might not have happened is legion. Human responsibility in such an account disperses over many individuals. On the other side, the reports zero in on sufficient, localizable causes, often the actions of one or two people, a bad part or faulty procedure. Out of the complex net of interactions considered in this particular accident, the condensation was dramatic: the report lodged immediate, local responsibility squarely with the captain.

Fundamentally, there were two charges: that the captain did not reject the takeoff when the first officer pointed out the instrument anomalies, and that, once in the air, the captain did not demand a full-throttle response to the impending stall. Consider the “rejection” issue first. Here it is worth distinguishing between dispersed and individuated causal agency (causal instability), and individual and multiple responsibility (agency instability). There is also a third instability that enters, this one rooted between the view that flight competence stems from craft knowledge and the view that it comes from procedural knowledge (protocol instability).

The NTSB began its discussion of the captain’s decision not to reject by citing the Air Florida Training and Operations Manual:

Under adverse conditions on takeoff, recognition of an engine failure may be difficult. Therefore, close reliable crew coordination is necessary for early recognition.

The captain ALONE makes the decision to “REJECT.”

On the B-737, the engine instruments must be closely monitored by the pilot not flying. The pilot flying should also monitor the engine instruments within his capabilities. Any crewmember will call out any indication of engine problems affecting flight safety. The callout will be the malfunction, e.g., “ENGINE FAILURE,” “ENGINE FIRE,” and appropriate engine number.

The decision is still the captain’s, but he must rely heavily on the first officer.

The initial portion of each takeoff should be performed as if an engine failure were to occur.8

The NTSB report used this training manual excerpt to show that despite the fact that the co-pilot was the “pilot flying,” responsibility for rejection lay squarely and unambiguously with the captain. But intriguingly, this document also pointed in a different direction: that rejection was discussed in the training procedure uniquely in terms of the failure of a single engine. Since engine failure typically made itself known through differences between the two engines’ performance instruments, protocol directed the pilot’s attention to a comparison (cross-check) between the number one and number two engines, and here the two were reading exactly the same way. Now it is true that the NTSB investigators later noted that the reliance on differences could have been part of the problem.9 In the context of training procedures that stressed the cross-check, the absence of a difference between the left and right engines strikes me not as incidental, but rather as central. In particular it may help explain why the first officer saw something as wrong – but not something that fell into the class of expectations. He did not see a set of instruments that protocol suggested would reflect the alternatives “ENGINE FAILURE” or “ENGINE FIRE.”

But even if the first officer or captain unambiguously knew that, say, N1 was low for a thrust setting of the EPR readout of 2.04, the rejection process itself was riddled with problems. Principally, it makes no sense. The airspeed V1 functioned as the speed below which it was supposed to be safe to decelerate to a stop and above which it was safe to proceed to takeoff even with an engine failure. But this speed was so racked with confusion that it is worth discussing. Neil Van Sickle gives a typical definition of V1 in his Modern Airmanship, where he writes that V1 is “The speed at which… should one engine fail, the distance required to complete the takeoff exactly equals the distance required to stop.”10 So before V1, if the engine failed, you could stop in less distance than you could get off the ground. Other sources defined V1 as the speed at which air would pass the rudder rapidly enough for rudder authority to keep a plane with a dead engine from spinning. Whatever its basis, as the Air Florida Flight Operations Manual for the Boeing 737 made clear, pilots were to reject a takeoff if the engine failed before V1; afterwards, presumably, the takeoff ought to be continued. The problem is that, by its use, the speed V1 had come to serve as a marker for the crucial spatial point where the speed of the plane and distance to go made it possible to stop (barely) before overrunning the runway. In the supporting documents of the NTSB report (called the Docket) one finds in the Operations Group “factual report” the following hybrid definition of V1:

[V1 is] the speed at which, if an engine failure occurs, the distance to continue the takeoff to a height of 35 feet will not exceed the usable takeoff distance; or the distance to bring the airplane to a full stop will not exceed the acceleration-stop distance available. V1 must not be greater than the rotation speed, Vr [rejecting after rotation would be enormously dangerous], or less than the ground minimum control speed Vmcg [rejecting before the plane achieves sufficient rudder authority to become controllable would be suicidal].11

Obviously, V1 cannot possibly do the work demanded of it: it is the wrong parameter to be measuring. Suppose the plane accelerated at a slow, constant rate from the threshold to the overrun area, achieving V1 as it began to cross the far end of the runway. That would, by the book, mean it could safely take off, where in reality it would be within a couple of seconds of collapsing into a fuel-soaked fire. The question should be whether V1 has been reached by a certain point on the runway where a maximum stop effort will halt the plane before it runs out of space (a point known elsewhere in the lore as the acceleration-stop distance). If one is going to combine the acceleration-stop distance with the demand that the plane have rudder authority and that it be possible to continue in the space left to an engine-out takeoff, then one way or another, the speed V1 must be achieved at or before a fixed point on the runway. No such procedure existed.

Sadly, as the NTSB admitted, it was technically unfeasible to marry the very precise inertial navigation system (fixing distance) to a simple measurement of time elapsed since the start of acceleration. And planting distance-to-go markers on the runway was dismissed because of the “fear of increasing exposure to unnecessary high-speed aborts and subsequent overruns… .[that might cause] more accidents than they might prevent.”12 With such signs the rolling protocol would presumably demand that the pilots reject any takeoff where V1 was reached after a certain point on the runway. But given the combination of technical limitations and cost-benefit decisions about markers, it was, in fact, impossible to know in a protocol-following way whether V1 had been achieved in time for a safe rejection. This meant that the procedure of rejection by V1 turns out to be completely unreliable in just that case where the airplane is accelerating at a less than normal rate. And it is exactly such a low-acceleration case that we are considering in flight 90. What is demanded of a pilot – a pilot on any flight using V1 as a go-no-go speed – is a judgment, a protocol-defying judgment, that V1 has been reached “early enough” (determined without an instrument or exterior marking) in the takeoff roll and without a significant anomaly. (Given the manifest and recognized dangers of aborting a high-speed roll, “significant” here obviously carries much weight; Air Florida, for example, forbids its pilots from rejecting a takeoff solely on the basis of the illumination of the Master Caution light.)13
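The kinematics behind this point can be sketched in a few lines. Under constant acceleration, the distance consumed in reaching V1 grows as the inverse of the acceleration, so the same V1 callout can come either comfortably early or fatally late. All numbers below are invented for illustration:

```python
V1 = 140.0 * 0.514      # decision speed: 140 knots, converted to m/s
RUNWAY = 2100.0         # usable runway length, m (assumed)
STOP_DIST = 900.0       # assumed distance needed to brake from V1, m

for accel in (2.4, 1.6):            # normal vs. degraded acceleration, m/s^2
    d_to_v1 = V1**2 / (2 * accel)   # distance used up in reaching V1
    safe = d_to_v1 + STOP_DIST <= RUNWAY
    print(f"a = {accel} m/s^2: V1 at {d_to_v1:.0f} m; safe rejection: {safe}")
```

On these invented figures, the normally accelerating plane reaches V1 with room to stop, while the slowly accelerating one reaches the same airspeed too far down the runway: exactly the case the V1 criterion cannot detect without a distance reference.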

The NTSB report “knows” that there is a problem with the V1 rejection criterion, though it knows it in an unstable way:

It is not necessary that a crew completely analyze a problem before rejecting a takeoff on the takeoff roll. An observation that something is not right is sufficient reason to reject a takeoff without further analysis… The Safety Board concludes that there was sufficient doubt about instrument readings early in the takeoff roll to cause the captain to reject the takeoff while the aircraft was still at relatively low speeds; that the doubt was clearly expressed by the first officer; and that the failure of the captain to respond and reject the takeoff was a direct cause of the accident.14

Indeed, after a careful engineering analysis involving speed, reverse thrust, the runway surface, and braking power, the NTSB determined the pilot could have aborted even with a frictional coefficient of 0.1 (sheet ice) – the flight 90 crew should not have had trouble braking to a stop from a speed of 120 knots on the takeoff roll. “Therefore, the Safety Board believes that the runway condition should not have been a factor in any decision to reject the takeoff when the instrument anomaly was noted.”15

What does this mean? What is this concept of agency that takes the theoretical engineering result computed months later and uses it to say “therefore… should not have been a factor”? Is it that the decision that runway condition “should not have been a factor” would have been apparent to a Laplacian computer, an ideal pilot able to compute friction coefficients by sight and from them deceleration distance using weight, wind, braking power, and available reverse thrust? Robert Buck, a highly experienced pilot – a 747 captain, who was given the Air Medal by President Truman – wrote about the NTSB report on flight 90: “How was a pilot to know that [he could have stopped]? No way from training, no way was there any runway coefficient information given the pilot; a typical NTSB after-the-fact, pedantic, unrealistic piece of laboratory-developed information.”16

Once the flight was airborne with the stickshaker vibrating and the stall warning alarm blaring, the NTSB had a different criticism: the pilot did not ram the throttles into a full open position. Here the report has an interesting comment. “The Board believes that the flightcrew hesitated in adding thrust because of the concern about exceeding normal engine limitations which is ingrained through flightcrew training programs.” If power is raised to make the exhaust temperature rise even momentarily above a certain level, then, at bare minimum, the engine has to be completely disassembled and parts replaced. Damage can easily cost hundreds of thousands of dollars, and it is no surprise that firewalling a throttle is an action no trained pilot executes easily. But this line of reasoning can be combined with arguments elsewhere in the report. If the captain believed (as the NTSB argues) that the power delivered was normal takeoff thrust, he might well have seen the stall warning as the result of an over-rotation curable by no more than some forward pressure on the yoke. By the time it became clear that the fast rate of pitch and high angle of attack were not easily controllable (737s notoriously pitch up with contaminated wings), he did apply full power – but given the delay in jet engines between power command and delivery, it was too late. The NTSB recommended changes in “indoctrination” to allow for modification if loss of aircraft is the alternative.17

In the end, the NTSB concluded their analysis with the following statement of probable cause, the bottom line:

The National Transportation Safety Board determines that the probable cause of this accident was the flightcrew’s failure to use engine anti-ice during ground operation and takeoff, their decision to take off with snow/ice on the airfoil surfaces of the aircraft, and the captain’s failure to reject the takeoff during the early stage when his attention was called to anomalous engine instrument readings.18

But there was one more implied step to the account. From an erroneous gauge reading and icy wing surfaces, the Board had driven their “probable cause” back to a localizable faulty human decision. Now they began, tentatively, to put that human decision itself under the microscope. Causal diffusion shifted to causal localization.

General Electric – Variable Geometry

General Electric’s approach to solving the high-pressure-ratio compressor problem, by contrast, was to stay with the single-spool design they had employed on their highly successful earlier engines, and to adopt “variable geometry” in the forward stages of the compressor in order to modulate the flow at off-design operation. Specifically, the stationary blades, or “stator vanes,” in the forward stages were rotated to different stagger angles, depending on the operating point, thereby altering the flow area in these stages in order to maintain favorable incidence angles on the blades at different conditions.21 The first flight-qualified engine GE designed with variable stator vanes was the J-79, which powered the Mach 2.2 B-58 bomber and several Mach 2.2 fighters, including the F-104 and the F-4H.22 The design that evolved into the J-79 was begun in 1951, with the first flight test of the engine in 1955. The J-79 produced 12,000 pounds of thrust without afterburner and 17,000 pounds of thrust with afterburner. Its 17-stage compressor had variable stator vanes in the first 6 stages, as well as variable inlet guide vanes; its overall compressor pressure-ratio was 12 to 1 (for an average pressure-ratio just below 1.16 per stage).23
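
As a quick check on that parenthetical figure: since the stage pressure ratios multiply through the compressor, the average per-stage ratio is simply the seventeenth root of the overall ratio,

\[
r_{\text{stage}} = r_{\text{overall}}^{1/17} = 12^{1/17} \approx 1.157,
\]

indeed just below 1.16.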

ENGINEERING EXPERIMENT AND ENGINEERING THEORY: THE AERODYNAMICS OF WINGS AT SUPERSONIC SPEEDS, 1946-1948

By 1946, though the possibility of supersonic flight had yet to be proven by the Bell XS-1, research engineers had begun to explore the anticipated aerodynamic problems. This paper offers an inside look at a contribution to those early days of supersonic aerodynamics.

Artifacts, by their nature, must work in the real, physical world. Engineers strive, insofar as they can, to bring that world into the design office through general understanding, ways of thinking, theoretical design methods, and design data. Research engineers (and sometimes design engineers themselves) work to develop these means by synergistic recourse to theory, experiment, and use. Edward Constant speaks of these – in reverse order and aeronautical context – as “technological empiricism (building it), careful but empirical testing (trying design families in wind tunnels), and theoretical investigation (development of formal aerodynamic theory).” These “are equal in the sense that they ultimately discover the same world,”1 that is, they are mutually validating and authenticating. The present case study recounts the earliest systematic investigation, by joint theory and experiment in the years 1946-48, to assess the potential of the then newly developing theory as a design tool for airplane wings at supersonic speeds. The story thus shows the typically synergistic use in aerodynamic research of theory and experiment in the pursuit of knowledge, plus the role therein of theory as a kind of “artifact” or “tool” analogous to the wind tunnel. It also illustrates what I intend above by “general understanding” and “ways of thinking,” and exemplifies other concerns of interest for historical analysis.

As in a companion article,2 the story is based – in this case mainly – on my own experiences in the 1940s as a research engineer at the Ames Aeronautical Laboratory of the National Advisory Committee for Aeronautics (NACA) near San Francisco. Though I shall place the work in historical context, activities in which I participated will predominate, and the situation will be described as it appeared to our group at Ames. As in the other article, I shall write as objectively as I can, using my recollection as critically as I would any other historical source. At the same time, I shall try to convey something of the spirit and feel of the activity. To make clear the complexities of the experimental-theoretical interaction, considerable detail will be needed.




THE INTERNATIONAL TRIALS

After the war was over, British aeronautical engineers conceived a project for comparing and subsequently standardizing wind tunnel data. The original idea came from Director of Research Robert Brooke-Popham.23 In a letter to the Aerodynamics subcommittee of February 1920, he referred to a previous comparison between wind tunnel tests at the NPL, Eiffel’s Laboratory, and MIT. It was desirable, he believed, to conduct another set of such comparative tests at representative laboratories in Britain, France, and the United States. For this purpose, he suggested that identical airplane models, airscrews, and stream-lined bodies be tested.24

Accepted by the subcommittee members, the proposal was sent to the Main Committee. The Main Committee approved the proposal and ordered the subcommittee to direct this international project.25 At the same time, the Main Committee sent letters to the four foreign organizations mentioned by Brooke-Popham: the Aerotechnical Institute at St. Cyr, the laboratory of Gustave Eiffel, the Central Aeronautical Institute in Italy, and the NACA. Shortly afterwards, the British Committee received letters of acceptance from all the laboratories together with comments and suggestions on the proposed project.26 The NPL began to construct standard models, and the decade-long “International Trials” project started.

Three other countries subsequently joined this international project. In August of 1921, the Imperial Research Service for Aviation in the Netherlands asked to be included in the International Trials. Once it was learned from the Controller of Information that this institution was a government establishment, the Committee approved the inclusion of the Amsterdam laboratory.27 Likewise, the requests to participate from the Associate Air Research Committee of Canada and the Japanese Imperial Navy were both approved. It was decided that the models be sent to Japan after the completion of tests in Canada.28

By this time, the British Committee had become aware of the possible importance of aerodynamic research at the Göttingen Aerodynamics Institute in Germany.29 Through some fragmentary information, the British had learned that the aerodynamic research at Göttingen lay at the heart of the wartime achievements of German aeronautical research. The Royal Aircraft Establishment (RAE, the former Royal Aircraft Factory) sent two investigators, Hermann Glauert and Robert McKinnon Wood, to visit Prandtl’s laboratory.30 Members of the Aeronautical Research Committee naturally agreed on the desirability of cooperating with the Göttingen Laboratory. The Committee reported to the Air Ministry that it was prepared to ask the allied laboratories about their willingness to cooperate with Prandtl unless there were any official difficulties.31

At the next ARC meeting, however, the Committee members were informed by the Director of Research that the Air Council deemed it undesirable to approach Prandtl to enquire about his laboratory’s participation in the International Trials. The message from the Air Council frustrated some Committee members. Glauert and Wood had just returned from their visit to Göttingen and had submitted a report on the theoretical achievements and the experimental facilities of the Göttingen Aerodynamics Institute. Wood’s report had specifically referred to the desirability of including the Göttingen team in the International Trials, since a discrepancy had been perceived between German and British testing of the same form of wing.32 ARC Chairman Glazebrook restated his belief in the scientific importance of the participation of Prandtl’s Laboratory, calling attention to the excellence of the Göttingen wind tunnel, which possessed a steadiness and uniformity of air flow comparable with that of the NPL wind tunnel. Despite Glazebrook’s appeal, the Director of Research insisted that the matter not be raised again with the Air Council at this time.33 Just why the Air Council was opposed to the contact between the British Committee and the German laboratories is not recorded in its minutes. A later ARC minute indicates that the Council’s opposition sprang from diplomatic reasons.34

The International Trials entailed two different tests:

1. Determination of lift, drag, and center of pressure on a standard airfoil model at various angles of attack.

2. Resistance measurement of an airship model with and without fins.35

Accordingly, the NPL and the RAE constructed an airfoil model of the type R.A.F. 15 and an airship model of the type R.33. These were first measured in two wind tunnels of the NPL and in three tunnels of the RAE.36 They were then sent to France in the spring of 1922 to be measured both at St. Cyr and Eiffel’s Laboratory. After their return from France early in 1923, they were measured again at the NPL to see if their travel or experiments had resulted in any changes. Then they were dispatched to the United States, and measured at the NACA laboratory at Langley Field and at MIT. They returned to England in September 1924. Checked once more, the models were forwarded to Italy. The same procedure was carried out for the Netherlands, Canada, and Japan. This long and cumbersome process took several years to complete.

The same model was used in every test. The model was carefully constructed and repeatedly examined in order to ensure that its size and shape had not altered. Otherwise, all detailed experimental procedures for measuring the forces and moments were left to individual experimenters. Even at the same institution, two different groups may have employed different procedures. At the NPL, for example, two groups, each consisting of two engineers, used the seven-foot wind tunnels #1 and #2, and turned in different reports on their experimental procedures. For example, the group for the #2 tunnel used a specially designed optical device to check the sensitivity of the aerodynamical balance, while the other group did not use such an instrument. Despite these minor differences, they shared the basic procedure, and applied the same corrections for the drag of the wires, the spindle, the sting, and so forth.37

SOCIOLOGY ON THE FLIGHTDECK

In the NTSB’s final judgment of probable cause was an explicit reference to the fact that the captain failed to reject the takeoff “when his attention was called to anomalous engine instrument readings.” Though not formalized in the probable cause assessment, the investigative team did comment elsewhere in the report that the Safety Board strongly believed in training in command decision, resource management, role performance, and assertiveness. As the NTSB pointed out, it had already, in June of 1979 (A-79-47), recommended flightdeck resource management, touting the merits of participative management and assertiveness training for other cockpit crewmembers. Here a new analytical framework entered, in which causal agency fell not to individual but to group (social) psychology. That framework (dubbed Cockpit Resource Management or CRM) was fairly recent and came in the wake of what became a set of canonical accidents. The NTSB-interpreted record of Air Florida flight 90 became a book in that canon.

For United Airlines, the transformation in their view of CRM came following the December 28, 1978 loss of their flight UA 173. Departing Denver with 46,700 pounds of fuel, with 31,900 predicted necessary for the leg to Portland, the DC-8 came in for final approach. When the gear lowered, those in the body of the plane heard a loud noise and sharp jolt. The captain felt that the gear had descended too rapidly, and noted that the gear lights did not illuminate. Asking his second officer to “give us a current card on weight, figure about another fifteen minutes,” he received a query in reply, “fifteen minutes?” To this, the captain responded “Yeah, give us three or four thousand lbs. on top of zero fuel weight.” Second officer: “not enough. Fifteen minutes is really gonna really run us low on fuel here,” then later: “we got about three on the fuel and that’s it.” When the first officer urged, “We’re going to lose an engine,” the captain responded “why?” To which the first officer responded “Fuel!” Within eight minutes the plane was down in the woods outside the city, with a loss of ten lives.19 The canonical interpretation read the accident in terms of a failure of communication: Why, United Airlines personnel wanted to know, was the captain not listening to his officers?

According to United Airlines’ CRM curriculum of the mid-1990s, the conversion of Delta Airlines to CRM came seven years after the United 173 crash, in the aftermath of its own disastrous flight 191. Approaching Dallas-Fort Worth airport on August 2, 1985, Delta’s L-1011 hit a microburst, descended into the ground, and disintegrated. The question raised by investigators was why the otherwise prudent captain had entered an area of known lightning – that is to say a thunderstorm – close to the ground and in a shaft of pounding rain. “Probable cause” included the decision to enter the cumulonimbus area, a lack of training in escape from windshear, and lack of timely windshear warning. Unlike the captains of United 173 and Air Florida 90, the Delta captain was never accused of not listening to his flightcrew. Instead, “given the fact that the captain was described as one who willingly accepted suggestions from flightcrew members,” the Board did not infer that they were intimidated by him. But because neither first nor second officer dissented from the continued approach, the NTSB held the flightcrew responsible for the decision to continue. “Suggestions were not forthcoming,” concluded the investigation, on the basis of which the NTSB argued that air carriers should provide formal cockpit resource management and assertiveness training for their crews.20

When, in the mid-1980s, the airlines began to develop their CRM courses, they invariably turned back to the by-then widely discussed proceedings of a meeting held under NASA’s auspices in San Francisco over 26-28 June 1979. In various ways, that conference set out the outline for hundreds of courses, books, and pamphlets designed to characterize and cure the “dangerous” failures of communication on the flightdeck. Most prominent among the speakers was Robert Helmreich, a social psychologist from the University of Texas at Austin, who came to the problem through his work on Navy and NASA crew training efforts for the space program. Psychology (Helmreich declared at the San Francisco meeting) had so far failed those in the cockpit. On one side, he noted, there was personality psychology, which had concentrated solely on the exclusion of unacceptable candidates, failing utterly to capture the positive individual qualities needed for successful flight. On the other side, Helmreich contended, social psychologists had so far ignored personality and focused on rigorous laboratory experiments only loosely tied to real-life situations. Needed was an approach that joined personality to social interaction. To this end he advocated the representation of an individual’s traits by a point on a two-dimensional graph, with instrumentality on one axis and expressivity on the other. At the far end of instrumentality lay the absolutely focused, goal-oriented pilot; at the extreme end of expressivity lay the pilot most adept at establishing “warmer” and more effective personal relationships. In a crisis (argued the authors of United’s CRM course), being at the high end of both was crucial, and likely to conflict with the “macho pilot” who is high in instrumentality and low in expressivity.21

In various forms, this two-dimensional representation of expressivity and instrumentality crops up in every presentation of CRM that I have seen. Perhaps the most sophisticated reading of the problem came in another plenary session of the 1979 meeting, in the presentation by Lee Bolman from the Harvard Graduate School of Education. Bolman’s idea was to pursue the mutual relations of three different “theories”: first, there was the principals’ short-term “theory of the situation,” which captured their momentary understanding of what was happening, here the pilots’ own view of the local condition of their flight. Second, Bolman considered the individual’s longer-term “theory of practice,” that collection of skills and procedures accumulated over a period of years. Finally, at the most general level, there was a meta-theory, the “theory-in-use,” which contained the general rules by which information was selected and by which causal relationships could be anticipated. In short, the meta-theory provided “core values,” “beliefs,” “skills,” and “expected outcomes.” Deduced from observation, the “theory-in-use” was the predictively successful account of what the subject would actually do in specific situations. But Bolman noted that this “theory-in-use” only partially overlapped with views that the subject might explicitly claim to have (“the espoused theory”). Espoused knowledge was important, Bolman argued, principally insofar as it highlighted errors or gaps in the “theory-in-use”:

Knowledge is “intellectual” when it exists in the espoused theory but not in the theory-in-use: the individual can think about it and talk about it, but cannot do it. Knowledge is “tacit” when it exists in the theory-in-use but not the espoused theory; the person can do it, but cannot explain how it is done. Knowledge is “integrated” when there is synchrony between espoused theory and theory-in-use: the person can both think it and do it.22

Bottom line: Bolman took the highest-level theory (“theory-in-use”) to be extremely hard to revise, since it involved fundamental features of self-image and lifelong habits. The lowest-level theory (“theory of the situation”) might be revised given specific technical inputs (one gauge corrected by the reading of two others), but frequently would be revised only through an alteration in the “theory of practice.” It was therefore at the level of a “theory of practice” that training was most needed: situations were too diverse and patterns of learning too ingrained to be subject to easy alteration. At this level of practice could be found the learnable skills of advocacy, inquiry, management, and role modification. And these, Bolman and the airlines hoped, would contribute to a quicker revision of a faulty “theory of the situation” when one arose. CRM promised to be that panacea.

Textbooks and airlines leaped at the new vocabulary of CRM. Stanley Trollip and Richard Jensen’s widely distributed Human Factors for General Aviation (1991) graphed “relationship orientation” on the y-axis against “task orientation” on the abscissa. High task orientation with low relationship orientation yields the dreadful amalgam: a style that would be “overbearing, autocratic, dictatorial, tyrannical, ruthless, and intimidating.”

According to Trollip and Jensen, who took United 173, Delta 191, and Air Florida 90 as principal examples, the copilot of Air Florida 90 was earnestly inquiring after takeoff procedures when he asked about the slushy runway departure, and was (according to the authors) being mocked by Captain Wheaton in his response “unless you got something special you’d like to do,” a mockery that continued in the silences with which the captain greeted every subsequent intervention by the copilot.23 Such a gloss assumed that copilot Pettit understood that the EPR was faulty, and defined the catastrophe as a failure of his advocacy and the captain’s inquiry. Once again agency and cause were condensed, this time to a social failure, rather than, or in addition to, an individual one. Now this CRM reading may be a way of glossing the evidence, but it is certainly not the only way; Pettit may have noted the discrepancy between the EPR and N1, for example, noted too that both engines were reading identically, and over those few seconds not known what to make of this circumstance. I want here not to correct the NTSB report, but to underline the fragility of these interpretive moments. Play the tape again:

F. O. Pettit (CAM-2): “That’s not right… well …”

Captain Wheaton (CAM-1): “Yes it is, there’s eighty”

Pettit (CAM-2): “Naw, I don’t think that’s right …. Ah, maybe it is.”

Wheaton (CAM-1): “One hundred twenty”

Pettit (CAM-2): “I don’t know.”

Now it might be that in these hesitant, contradictory remarks Pettit is best understood to be advocating a rejected takeoff. But it seems at least worth considering that when Pettit said, “I don’t know,” he meant, in fact, that he did not know.

United Airlines put it only slightly differently than Trollip and Jensen when the company used its instructional materials to tell new captains to analyze themselves and others on the Grid, a matrix putting “concern for people” against “concern for performance” (see Figure 4).


The Grid Approach To Job Performance

A study of how the Grid framework applies to the cockpit can aid individuals in exploring alternative possibilities of behavior which may have been unclear. Understanding these concepts can enable a person to sort out unsound or ineffective behavior and replace it with more effective behaviors.

The Grid below can be used as a frame of reference to study how each crewmember approaches a job.

 

[Grid diagram: a matrix with one axis running from Low to High; the Grid plots “concern for people” against “concern for performance.”]

Figure 4. United CRM Grid. Source: United Airlines training manual, “Introduction to Command/Leadership/Resource Management,” MN-94, 10/95, p. 9.

 

Each of several decision-making elements is then graphed onto the Grid: inquiry, advocacy, conflict resolution, and critique. Inquiry, for example, comes out this way in the (1,9) quadrant: “I look for facts, decisions, and beliefs that suggest all is well; I am not inclined to challenge other crewmembers” and in the (9,1) quadrant as “I investigate my own and others’ facts, decisions, and beliefs in depth in order to be on top of any situation and to reassure myself that others are not making mistakes.”24 United’s gloss on Flight 90’s demise is not much different from that of Trollip and Jensen: the first officer made various non-assertive comments “but he never used the term, ‘Abort!’ The Captain failed to respond to the inquiry and advocacy of the First Officer.”25

Not surprisingly, the 747 pilot I quoted before, Robert Buck, registered, in print, a strenuous disagreement. After lampooning the psychologists who were intruding on his cockpit, Buck dismissed the CRM claim that the accident was a failure of assertiveness. “Almost any pilot listening to the tape would say that was not the case but rather that the crew members were trying to analyze what was going on. To further substantiate this is the fact the copilot was well-known to be an assertive individual who would have said loud and clear if he’d thought they should abort.”26 With snow falling, a following plane on their tail, ATC telling them to hurry, and the raging controversy over V1 still in the air, Buck was not at all surprised that neither pilot aborted the launch.

Again and again we have within the investigation a localized cause in unstable suspension over a sea of diffuse necessary causes.27 We find agency personalized even where the ability to act lies far outside any individual’s control. And finally, we find a strict and yet unstable commitment to protocol even when, in other circumstances, maintenance of that protocol would be equally condemned. In flight 90 the final condemnation fell squarely on the shoulders of the captain. According to the NTSB, Wheaton’s multiple errors of failing to deice properly, failing to abort, and failing to immediately engage full power doomed him and scores of others.

I now want to turn to a very different accident, one in which the captain’s handling of a crippled airliner left him not condemned but celebrated by the NTSB. As we will see even then, the instabilities of localized cause, protocol, and the human/technological boundary pull the narrative into a singular point in space, time, and action, but always against the contrary