NASA'S CONTRIBUTIONS TO AERONAUTICS

Initial NACA-NASA Research

Sudden gusts and their effects upon aircraft have posed a danger to the aviator since the dawn of flight. Otto Lilienthal, the inventor of the hang glider and arguably the most significant aeronautical researcher before the Wright brothers, sustained fatal injuries in an 1896 accident, when a gust lifted his glider skyward, died away, and left him hanging in a stalled flight condition. He plunged to Earth, dying the next day, his last words reputedly being "Opfer müssen gebracht werden"—or "Sacrifices must be made."[19]

NASA's interest in gust and turbulence research can be traced to the earliest days of its predecessor, the NACA. Indeed, the first NACA technical report, issued in 1917, examined the behavior of aircraft in gusts.[20] Over the first decades of flight, the NACA expanded its interest in gust research, looking at the problems of both aircraft and lighter-than-air airships. The latter had profound problems with atmospheric turbulence and instability: the airship Shenandoah was torn apart over Ohio by violent storm winds; the Akron plunged into the Atlantic, possibly from what would now be considered a microburst; and the Macon was doomed when clear air turbulence ripped off a vertical fin and opened its gas cells to the atmosphere. Dozens of airmen lost their lives in these disasters.[21]

During the early part of the interwar years, much research on turbulence and wind behavior was undertaken in Germany, in conjunction with the development of soaring and the long-distance and long-endurance sailplane. Conceived as a means of preserving German aeronautical skills and interest in the wake of the Treaty of Versailles, soaring evolved as both a means of flight and a means to study atmospheric behavior. No airman was closer to the weather, or more dependent upon an understanding of its intricacies, than the pilot of a sailplane, borne aloft only by thermals and the lift of its broad wings. German soaring was always closely tied to the nation's excellent technical institutes and the prestigious aerodynamics research of Ludwig Prandtl and the Prandtl school at Göttingen. Prandtl himself studied thermals, publishing a research paper on vertical air currents in 1921, in the earliest years of soaring development.[22] One of the key figures in German sailplane development was Dr. Walter Georgii, a wartime meteorologist who headed the postwar German Research Establishment for Soaring Flight (Deutsche Forschungsanstalt für Segelflug [DFS]). Speaking before Britain's Royal Aeronautical Society, he proclaimed, "Just as the master of a great liner must serve an apprenticeship in sail craft to learn the secret of sea and wind, so should the air transport pilot practice soaring flights to gain wider knowledge of air currents, to avoid their dangers and adapt them to his service."[23] His DFS championed weather research, and out of German soaring came such concepts as thermal flying and wave flying. Soaring pilot Max Kegel discovered firsthand the power of storm-generated wind currents in 1926. They caused his sailplane to rise like "a piece of paper that was being sucked up a chimney," carrying him almost 35 miles before he could land safely.[24] Used discerningly, thermals transformed unpowered flight from gliding to soaring. Pioneers such as Günther Grönhoff, Wolf Hirth, and Robert Kronfeld set notable records using combinations of ridge lift and thermals. On July 30, 1929, the courageous Grönhoff deliberately flew a sailplane with a barograph into a storm, to measure its turbulence; this flight anticipated much more extensive research that has continued in various nations.[25]

The NACA first began to look at thunderstorms in the 1930s. During that decade, the Agency's flagship laboratory—the Langley Memorial Aeronautical Laboratory in Hampton, VA—performed a series of tests to determine the nature and magnitude of gust loadings that occur in storm systems. The results of these tests, which engineers performed in Langley's signature wind tunnels, helped to improve both civilian and military aircraft.[26] But wind tunnels had various limitations, leading to the use of specially instrumented research airplanes to effectively use the sky as a laboratory and acquire information unobtainable by traditional tunnel research. This process, most notably associated with the post-World War II X-series of research airplanes, led in time to such future NASA research aircraft as the Boeing 737 "flying laboratory" used to study wind shear. Over subsequent decades, the NACA's successor, NASA, would perform much work to help planes withstand turbulence, wind shear, and gust loadings.

From the 1930s to the 1950s, one of the NACA's major areas of research was the nature of the boundary layer and the transition from laminar to turbulent flow around an aircraft. But Langley Laboratory also looked at turbulence more broadly, to include gust research and meteorological turbulence influences upon an aircraft in flight. During the previous decade, experimenters had collected measurements of pressure distribution in wind tunnels and flight, but not until the early 1930s did the NACA begin a systematic program to generate data that could be applied by industry to aircraft design, forming a committee to oversee loads research. Eventually, in the late 1930s, Langley created a separate structures research division with a structures research laboratory. By this time, individuals such as Philip Donely, Walter Walker, and Richard V. Rhode had already undertaken wide-ranging and influential research on flight loads that transformed understanding of the forces acting on aircraft in flight. Rhode, of Langley, won the Wright Brothers Medal in 1935 for his research on gust loads. He pioneered detailed assessments of the maneuvering loads encountered by an airplane in flight. As noted by aerospace historian James Hansen, his concept of the "sharp edge gust" revised previous thinking about gust behavior and the dangers it posed, and it became "the backbone for all gust research."[27] NACA gust loads research influenced the development of both military and civilian aircraft, as did its research on aerodynamically induced flight-surface flutter, a problem of particular concern as aircraft design transformed from the era of the biplane to that of the monoplane. The NACA also investigated the loads and stresses experienced by combat aircraft when undertaking abrupt rolling and pullout maneuvers, such as routinely occurred in aerial dogfighting and in dive-bombing.[28] A dive bomber encountered particularly punishing aerodynamic and structural loads as the pilot executed a pullout: abruptly recovering the airplane from a dive, sending it swooping back into the sky. Researchers developed charts showing the relationships between dive angle, speed, and the angle required for recovery. In 1935, the Navy used these charts to establish design requirements for its dive bombers. The loads program gave the American aeronautics community a much better understanding of load distributions between the wing, fuselage, and tail surfaces of aircraft, including high-performance aircraft, and showed how different extreme maneuvers "loaded" these individual surfaces.
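Rhode's sharp-edge-gust idealization survives today in the familiar textbook gust-load relation. As a point of reference (the standard form, supplied here for illustration rather than quoted from NACA documents), the incremental load factor imposed by a sharp-edged vertical gust is

\[
\Delta n = \frac{\rho\, U\, V\, a}{2\,(W/S)}
\]

where \( \rho \) is the air density, \( U \) the gust velocity, \( V \) the flight speed, \( a \) the wing lift-curve slope, and \( W/S \) the wing loading. The relation makes the central insight plain: the same gust loads a fast, lightly wing-loaded airplane far more severely than a slow or heavily loaded one.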

In his 1939 Wilbur Wright lecture, George W. Lewis, the NACA's legendary Director of Aeronautical Research, enumerated three major questions he believed researchers needed to address:

• What is the nature or structure of atmospheric gusts?

• How do airplanes react to gusts of known structure?

• What is the relation of gusts to weather conditions?[29]

Answering these questions, posed at the close of the biplane era, would consume researchers for much of the next six decades, well into the era of jet airliners and supersonic flight.

The advent of the internally braced monoplane accelerated interest in gust research. The long, increasingly thin, and otherwise unsupported cantilever wing was susceptible to load-induced failure if not well designed. Thus, the stresses caused by wind gusts became an essential factor in aircraft design, particularly for civilian aircraft. Building on this concern, in 1943, Philip Donely and a group of NACA researchers began design of a gust tunnel at Langley to examine aircraft loads produced by atmospheric turbulence and other unpredictable flow phenomena and to develop devices that would alleviate gusts. The tunnel opened in August 1945. It utilized a jet of air for gust simulation, a catapult for launching scaled models into steady flight, curtains for catching the model after its flight through the gust, and instruments for recording the model's responses. For several years, the gust tunnel was useful, "often [revealing] values that were not found by the best known methods of calculation . . . in one instance, for example, the gust tunnel tests showed that it would be safe to design the airplane for load increments 17 to 22 percent less than the previously accepted values."[30]


The experimental Boeing XB-15 bomber was instrumented by the NACA to acquire gust-induced structural loads data. NASA.

As well, gust researchers took to the air. Civilian aircraft—such as the Aeronca C-2 light general-aviation airplane, Martin M-130 flying boat, and the Douglas DC-2 airliner—and military aircraft, such as the Boeing XB-15 experimental bomber, were outfitted with special loads recorders (so-called "v-g recorders," developed by the NACA). Extensive records were made of the weather-induced loads they experienced over various domestic and international air routes.[31]

This work was refined in the postwar era, when new generations of long-range aircraft entered air transport service and were also instrumented to record the loads they experienced during routine airline operation.[32] Gust load effects likewise constituted a major aspect of early transonic and supersonic aircraft testing, for the high loads involved in transiting from subsonic to supersonic speeds already posed a serious challenge to aircraft designers. Any additional loading, whether from a wind gust or shear, or from the blast of a weapon (such as the overpressure blast wave of an atomic weapon), could easily prove fatal to an already highly loaded aircraft.[33] The advent of the long-range jet bomber and transport—a configuration typically having a long and relatively thin swept wing, and large, thin vertical and horizontal tail surfaces—added further complications to gust research, particularly because the penalty for an abrupt gust loading could be a fatal structural failure. Indeed, on one occasion, while flying through gusty air at low altitude, a Boeing B-52 lost much of its vertical fin, though fortunately, its crew was able to recover and land the large bomber.[34]

The emergence of long-endurance, high-altitude reconnaissance aircraft such as the Lockheed U-2 and Martin RB-57D in the 1950s and the long-range ballistic missile further stimulated research on high-altitude gusts and turbulence. Though seemingly unconnected, both the high-altitude jet airplane and the rocket-boosted ballistic missile required understanding of the nature of upper atmosphere turbulence and gusts. Both transited the upper atmospheric region: the airplane cruising in the high stratosphere for hours, and the ballistic missile or space launch vehicle transiting through it within seconds on its way into space. Accordingly, from early 1956 through December 1959, the NACA, in cooperation with the Air Weather Service of the U.S. Air Force, installed gust load recorders on Lockheed U-2 strategic reconnaissance aircraft operating from various domestic and overseas locations, acquiring turbulence data from 20,000 to 75,000 feet over much of the Northern Hemisphere. Researchers concluded that the turbulence problem would not be as severe as previous estimates and high-altitude balloon studies had indicated.[35]

High-altitude loitering aircraft such as the U-2 and RB-57 were followed by high-altitude, high-Mach supersonic cruise aircraft in the early to mid-1960s, typified by Lockheed's YF-12A Blackbird and North American's XB-70A Valkyrie, both used by NASA as Mach 3+ Supersonic Transport (SST) surrogates and supersonic cruise research testbeds. Test crews found their encounters with high-altitude gusts at supersonic speeds more objectionable than their exposure to low-altitude gusts at subsonic speeds, even though the g-loading accelerations caused by gusts were less than those experienced on conventional jet airliners.[36] At the other extreme of aircraft performance, in 1961, the Federal Aviation Agency (FAA) requested NASA assistance to document the gust and maneuver loads and performance of general-aviation aircraft. Until the program was terminated in 1982, over 35,000 flight-hours of data were assembled from 95 airplanes, representing every category of general-aviation airplane, from single-engine personal craft to twin-engine business airplanes and including such specialized types as crop-dusters and aerobatic aircraft.[37]

Along with studies of the upper atmosphere by direct measurement came studies on how to improve turbulence detection and avoidance, and how to measure and simulate the fury of turbulent storms. In 1946–1947, the U.S. Weather Bureau sponsored a study of turbulence as part of a thunderstorm study project. Out of this effort, in 1948, researchers from the NACA and elsewhere concluded that ground radar, if properly used, could detect storms, enabling aircraft to avoid them. Weather radar became a common feature of airliners, their once-metal nose caps replaced by distinctive black radomes.[38] By the late 1970s, most wind shear research was being done by specialists in atmospheric science, geophysical scientists, and those in the emerging field of mesometeorology—the study of small atmospheric phenomena, such as thunderstorms and tornadoes, and the detailed structure of larger weather events.[39] Although turbulent flow in the boundary layer is important to study in the laboratory, the violent phenomenon of microburst wind shear cannot be sufficiently understood without direct contact, investigation, and experimentation.[40]

Microburst loadings constitute a threat to aircraft, particularly during approach and landing. No one knows how many aircraft accidents have been caused by wind shear, though the number is certainly considerable. The NACA had done thunderstorm research during World War II, but its instrumentation was not nearly sophisticated enough to detect microburst (or thunderstorm downdraft) wind shear. NASA would join with the FAA in 1986 to systematically fight wind shear, and it would have only a small pool of existing wind shear research data from which to draw.[41]


The Lockheed L-1011 TriStar uses smoke generators to show its strong wing vortex flow patterns in 1977. NASA.

 


A revealing view taken down the throat of a wingtip vortex, formed by a low-flying crop-duster. NASA.

Wind Shear Emerges as an Urgent Aviation Safety Issue

In 1972, the FAA had instituted a small wind shear research program, with emphasis upon developing sensors that could plot wind speed and direction from ground level up to 2,000 feet above ground level (AGL). Even so, the agency's major focus was on wake vortex impingement. The powerful vortexes streaming behind newer-generation wide-body aircraft could—and sometimes did—flip smaller, lighter aircraft out of control. Serious enough at high altitude, these inadvertent excursions could be disastrous if low over the ground, such as during landing and takeoff, where a pilot had little room to recover. By 1975, the FAA had developed an experimental Wake Vortex Advisory System, which it installed later that year at Chicago's busy O'Hare International Airport. NASA undertook a detailed program of wake vortex studies, both in tunnel tests and with a variety of aircraft, including the Boeing 727 and 747, Lockheed L-1011, and smaller aircraft, such as the Gates Learjet, helicopters, and general-aviation aircraft.

But it was wind shear, not wake vortex impingement, which grew into a major civil aviation concern, and the onset came with stunning and deadly swiftness.[42] Three accidents from 1973 to 1975 highlighted the extreme danger it posed. On the afternoon of December 17, 1973, while making a landing approach in rain and fog, an Iberia Airlines McDonnell-Douglas DC-10 wide-body abruptly sank below the glideslope just seconds before touchdown, impacting amid the approach lights of Runway 33L at Boston's Logan Airport. No one died, but the crash seriously injured 16 of the 151 passengers and crew. The subsequent National Transportation Safety Board (NTSB) report determined "that the captain did not recognize, and may have been unable to recognize an increased rate of descent" triggered "by an encounter with a low-altitude wind shear at a critical point in the landing approach."[43] Then, on June 24, 1975, Eastern Air Lines' Flight 66, a Boeing 727, crashed on approach to John F. Kennedy International Airport's Runway 22L. This time, 113 of the 124 passengers and crew perished. All afternoon, flights had encountered and reported wind shear conditions, and at least one pilot had recommended closing the runway. Another Eastern captain, flying a Lockheed L-1011 TriStar, prudently abandoned his approach and landed instead at Newark. Shortly after the L-1011 diverted, the EAL Boeing 727 impacted almost a half mile short of the runway threshold, again amid the approach lights, breaking apart and bursting into flames. Again, wind shear was to blame, but the NTSB also faulted Kennedy's air traffic controllers for not diverting the 727 to another runway after the EAL TriStar's earlier aborted approach.[44]

Just weeks later, on August 7, Continental Flight 426, another Boeing 727, crashed during a stormy takeoff from Denver's Stapleton International Airport. Just as the airliner began its climb after lifting off the runway, the crewmembers encountered a wind shear so severe that they could not maintain level flight despite application of full power and maintenance of a flight attitude that ensured the wings were producing maximum lift.[45] The plane pancaked in level attitude on flat, open ground, sustaining serious damage. No lives were lost, though 15 of the 134 passengers and crew were injured.

Less than a year later, on June 23, 1976, Allegheny Airlines Flight 121, a Douglas DC-9 twin-engine medium-range jetliner, crashed during an attempted go-around at Philadelphia International Airport. The pilot, confronting "severe horizontal and vertical wind shears near the ground," abandoned his landing approach to Runway 27R. As controllers in the airport tower watched, the straining DC-9 descended in a nose-high attitude, pancaking onto a taxiway and sliding to a stop. The fact that it hit nose-high, wings level, and on flat terrain undoubtedly saved lives. Even so, 86 of the plane's 106 passengers and crew were seriously injured, including the entire crew.[46]

In each of these cases, the culprit was wind shear brought about by thunderstorm downdrafts (microbursts), rather than the milder wind shear produced by gust fronts. This led to a major reinterpretation of the wind shear-causing phenomena that most endangered low-flying planes. Before these accidents, meteorologists believed that gust fronts, or the leading edge of a large dome of rain-cooled air, provided the most dangerous sources of wind shear. Now, using data gathered from the planes that had crashed and from weather radar, scientists, engineers, and designers came to realize that the small, focused, jet-like downdraft columns characteristic of microbursts produced the most threatening kind of wind shear.[47]

Microburst wind shear poses an insidious danger for an aircraft. An aircraft landing will typically encounter the horizontal outflow of a microburst as a headwind, which increases its lift and airspeed, tempting the pilot to reduce power.


Fateful choice: confronting the microburst threat. Richard P. Hallion.

But then the airplane encounters the descending vertical column as an abrupt downdraft, and its speed and altitude both fall. As it continues onward, it will exit the central downflow and experience the horizontal outflow, now as a tailwind. At this point, the airplane is already descending at low speed. The tailwind seals its fate, robbing it of even more airspeed and, hence, lift. It then stalls (that is, loses all lift) and plunges to Earth. As NASA testing would reveal, professional pilots generally need between 10 and 40 seconds of warning to avoid the problems of wind shear.[48]
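The danger can be stated compactly. An airplane's airspeed is its ground speed plus the headwind component (a standard kinematic relation, added here for illustration):

\[
V_{\text{air}} = V_{\text{ground}} + V_{\text{headwind}}
\]

Transiting a microburst swings \( V_{\text{headwind}} \) from roughly \( +H \) (headwind outflow) through zero (downdraft core) to \( -T \) (tailwind outflow), so an airplane that cannot accelerate quickly loses on the order of \( H + T \) knots of airspeed, and with it the lift margin it needs to keep flying.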

Goaded by these accidents and NTSB recommendations that the FAA improve its weather advisory and runway selection procedures, "step up research on methods of detecting the [wind shear] phenomenon," and develop aircrew wind shear training processes, the FAA mandated installation at U.S. airports of a new Low-Level Windshear Alert System (LLWAS), which employed acoustic Doppler radar, technically similar to the FAA's Wake Vortex Advisory System installed at O'Hare.[49] The LLWAS incorporated a variety of equipment that measured wind velocity (wind speed and direction). This equipment included a master station, which had a main computer and system console to monitor LLWAS performance, and a transceiver, which transmitted signals to the system's remote stations. The master station had several visual computer displays and auditory alarms for aircraft controllers. The remote stations had wind sensors made of sonic anemometers mounted on metal pipes. Each remote station was enclosed in a steel box with a radio transceiver, power supplies, and battery backup. Every airport outfitted with this system used multiple anemometer stations to effectively map the nature of wind events in and around the airport's runways.[50]
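The description suggests how a network of anemometers can flag shear: divergent outflow shows up as one station's wind departing sharply from the rest of the network. The sketch below is a minimal illustration of that idea only, assuming a vector-difference test against the network mean; the station names, threshold, and logic are hypothetical and are not the FAA's actual LLWAS algorithm.

```python
import math

# Hypothetical alert threshold in knots; real LLWAS thresholds are not
# given in the text above.
THRESHOLD_KT = 15.0

def to_uv(speed_kt, direction_deg):
    """Convert a wind report to u/v components.

    For simplicity, direction here is the direction the wind blows toward.
    """
    rad = math.radians(direction_deg)
    return speed_kt * math.sin(rad), speed_kt * math.cos(rad)

def shear_alerts(reports):
    """Flag stations whose wind vector departs sharply from the network mean.

    reports: dict mapping station name -> (speed_kt, direction_deg)
    returns: list of (station, vector difference in knots) exceeding the threshold
    """
    winds = {name: to_uv(spd, hdg) for name, (spd, hdg) in reports.items()}
    n = len(winds)
    mean_u = sum(u for u, _ in winds.values()) / n
    mean_v = sum(v for _, v in winds.values()) / n
    alerts = []
    for name, (u, v) in winds.items():
        # Magnitude of this station's vector difference from the mean wind.
        delta = math.hypot(u - mean_u, v - mean_v)
        if delta > THRESHOLD_KT:
            alerts.append((name, round(delta, 1)))
    return alerts

# Example: five stations in light southwesterly flow, one in a strong
# reversed outflow, as a microburst footprint might produce.
reports = {
    "centerfield": (8, 200), "north": (9, 210), "east": (7, 195),
    "south": (35, 20), "west": (10, 205), "northwest": (8, 190),
}
print(shear_alerts(reports))  # the "south" station triggers the alert
```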

At the end of March 1981, over 70 representatives from NASA, the FAA, the military, the airline community, the aerospace industry, and academia met at the University of Tennessee Space Institute in Tullahoma to explore weather-related aviation issues. Out of that came a list of recommendations for further joint research, many of which directly addressed the wind shear issue and the need for better detection and warning systems. As the report summarized:

1. There is a critical need to increase the data base for wind and temperature aloft forecasts both from a more frequent updating of the data as well as improved accuracy in the data, and thus, also in the forecasts which are used in flight planning. This will entail the development of rational definitions of short term variations in intensity and scale length (of turbulence) which will result in more accurate forecasts which should also meet the need to improve numerical forecast modeling requirements relative to winds and temperatures aloft.

2. The development of an on-board system to detect wind-induced turbulence should be beneficial to meeting the requirement for an investigation of the subjective evaluation of turbulence "feel" as a function of motion drive algorithms.

3. More frequent reporting of wind shifts in the terminal area is needed, along with greater accuracy in forecasting.

4. There is a need to investigate the effects of unequal wind components acting across the span of an airfoil.

5. The FAA Simulator Certification Division should monitor the work to be done in conjunction with the JAWS project relative to the effects of wind shear on aircraft performance.

6. Robert Steinberg's ASDAR effort should be utilized as soon as possible; in fact, it should be encouraged or demanded as an operational system beneficial for flight planning, specifically where winds are involved.

7. There is an urgent need to review the way pilots are trained to handle wind shear. The present method, as indicated in the current advisory circular, of immediately pulling to stick shaker on encountering wind shear could be a dangerous procedure. It is suggested the circular be changed to recommend the procedure to hold at whatever airspeed the aircraft is at when the pilot realizes he is encountering a wind shear and apply maximum power, and that he not pull to stick shaker except to flare when encountering ground effect to minimize impact or to land successfully or to effect a go-around.

8. Need to develop a clear non-technical presentation of wind shear which will help to provide improved training for pilots relative to wind shear phenomena. Such training is of particular importance to pilots of high performance, corporate, and commercially used aircraft.

9. Need to develop an ICAO-type standard terminology for describing the effects of wind shear on flight performance.

10. The ATC system should be enhanced to provide operational assistance to pilots regarding hazardous weather areas and, in view of the envisioned controller workloads generated, perfecting automated transmissions containing this type of information to the cockpit as rapidly and as economically as practicable.

11. In order to improve the detection in real time of hazardous weather, it is recommended that FAA, NOAA, NWS, and DOD jointly address the problem of fragmental meteorological collection, processing, and dissemination pursuant to developing a system dedicated to making effective use of perishable weather information. Coupled with this would be the need to conduct a cost-benefit study relative to the benefits that could be realized through the use of such items as a common winds and temperature aloft reporting by use of automated sensors on aircraft.

12. Develop a capability for very accurate four- to six-minute forecasts of wind changes which would require terminal reconfigurations or changing runways.

13. Due to the inadequate detection of clear air turbulence, an investigation is needed to determine what has happened to the promising detection systems that have been reported and recommended in previous workshops.

14. Improve the detection and warning of wind shear by developing on-board sensors as well as continuing the development of emerging technology for ground-based sensors.

15. Need to collect true three- and four-dimensional wind shear data for use in flight simulation programs.

16. Recommend that any systems, whether airborne or ground-based, that can provide advance or immediate alert to pilots and controllers should be pursued.

17. Need to continue the development of Doppler radar technology to detect the wind shear hazard, and that this be continued at an accelerated pace.

18. Need for airplane manufacturers to take into consideration the effect of phenomena such as microbursts which produce strong periodic longitudinal wind perturbations at the aircraft phugoid frequency.

19. Consideration should be given, by manufacturers, to gust alleviation devices on new aircraft to provide a softer ride through turbulence.

20. Need to develop systems to automatically detect hazardous weather phenomena through signature recognition algorithms and automatically data-link alert messages to pilots and air traffic controllers.[51]

Given the subsequent history of NASA's research on the wind shear problem (and others), many of these recommendations presciently forecast the direction of Agency and industry research and development efforts.

Unfortunately, that did not come in time to prevent yet another series of microburst-related accidents. That series of catastrophes effectively elevated microburst wind shear research to the status of a national air safety emergency. By the early 1980s, 58 U.S. airports had installed LLWAS. Although LLWAS constituted a great improvement over verbal observations and warnings by pilots communicated to air traffic controllers, LLWAS sensing technology was not mature or sophisticated enough to remedy the wind shear threat. Early LLWAS sensors were installed without fullest knowledge of microburst characteristics. They were usually installed in too-few numbers, placed too close to the airport (instead of farther out on the approach and departure paths of the runways), and, worst, were optimized to detect gust fronts (the traditional pre-Fujita way of regarding wind shear)—not the columnar downdrafts and horizontal outflows characteristic of the most dangerous shear flows. Thus, wind shear could still strike, and viciously so.

On July 9, 1982, Clipper 759, a Pan American World Airways Boeing 727, took off from the New Orleans airport amid showers and "gusty, variable, and swirling" winds.[52] Almost immediately, it began to descend, having attained an altitude of no more than 150 feet. It hit trees, continued onward for almost another half mile, and then crashed into residential housing, exploding in flames. All 146 passengers and crew died, as did 8 people on the ground; 11 houses were destroyed or "substantially" damaged, and another 16 people on the ground were injured. The NTSB concluded that the probable cause of the accident was "the airplane's encounter during the liftoff and initial climb phase of flight with a microburst-induced wind shear which imposed a downdraft and a decreasing headwind, the effects of which the pilot would have had difficulty recognizing and reacting to in time for the airplane's descent to be arrested before its impact with trees." Significantly, it also noted, "Contributing to the accident was the limited capability of current ground based low level wind shear detection technology [the LLWAS] to provide definitive guidance for controllers and pilots for use in avoiding low level wind shear encounters."[53] This tragic accident impelled Congress to direct the FAA to join with the National Academy of Sciences (NAS) to "study the state of knowledge, alternative approaches and the consequences of wind shear alert and severe weather condition standards relating to take off and landing clearances for commercial and general aviation aircraft."[54]

As the FAA responded to these misfortunes and accelerated its research on wind shear, NASA researchers accelerated their own wind shear research. In the late 1970s, NASA Ames Research Center contracted with Bolt, Beranek, and Newman, Inc., of Cambridge, MA, to perform studies of "the effects of wind-shears on the approach performance of a STOL aircraft . . . using the optimal-control model of the human operator." In layman's terms, this meant that the company used existing data to mathematically simulate the combined pilot/aircraft reaction to various wind shear situations and to deduce and explain how the pilot should manipulate the aircraft for maximum safety in such situations. Although useful, these studies did not eliminate the wind shear problem.[55] Throughout the 1980s, NASA research into thunderstorm phenomena involving wind shear continued. Double-vortex thunderstorms and their potential effects on aviation were of particular interest. Double-vortex storms involve a pair of vortexes present in the storm's dynamic updraft that rotate in opposite directions. This pair forms when the cylindrical thermal updraft of a thunderstorm penetrates the upper-level air and there is a large amount of vertical wind shear between the lower- and upper-level air layers. Researchers produced a numerical tornado prediction scheme based on the movement of the double-vortex thunderstorm. A component of this scheme was the Energy-Shear Index (ESI), which researchers calculated from radiosonde measurements. The index integrated parameters that were representative of thermal instability and the blocking effect.



NASA 809, a Martin B-57B flown by Dryden research crews in 1982 for gust and microburst research. NASA.

It indicated environments appropriate for the development of double-vortex thunderstorms and tornadoes, which would help pilots and flight controllers determine safe flying conditions.[56]

In 1982, in partnership with the National Center for Atmospheric Research (NCAR), the University of Chicago, the National Oceanic and Atmospheric Administration (NOAA), the National Science Foundation (NSF), and the FAA, NASA vigorously supported the Joint Airport Weather Studies (JAWS) effort. NASA research pilots and flight research engineers from the Ames-Dryden Flight Research Facility (now the NASA Dryden Flight Research Center) participated in the JAWS program from mid-May through mid-August 1982, using a specially instrumented Martin B-57B jet bomber. NASA researchers selected the B-57B for its strength, flying it on low-level wind shear research flights around the Sierra Mountains near Edwards Air Force Base (AFB), CA, about the Rockies near Denver, CO, around Marshall Space Flight Center, AL, and near Oklahoma City, OK. Raw data were digitally collected on microbursts, gust fronts, mesocyclones, tornadoes, funnel clouds, and hail storms; converted into engineering format at the Langley Research Center; and then analyzed at Marshall Space Flight Center and the University of Tennessee Space Institute at Tullahoma. Researchers found that some microbursts recorded during the JAWS program created wind shear too extreme for landing or departing airliners to survive if they encountered it at an altitude less than 500 feet.[57] In the most severe case recorded, the B-57B experienced an abrupt 30-knot speed increase within less than 500 feet of distance traveled and then a gradual decrease of 50 knots over 3.2 miles, clear evidence of encountering the headwind outflow of a microburst and then the tailwind outflow as the plane transited through the microburst.[58]
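For scale, the arithmetic (added here for illustration, using the figures just cited) shows how violently those two gradients differ:

\[
\frac{30\ \text{kt}}{500\ \text{ft}} = 60\ \text{kt per 1{,}000 ft}
\qquad\text{versus}\qquad
\frac{50\ \text{kt}}{3.2\ \text{mi} \approx 16{,}900\ \text{ft}} \approx 3\ \text{kt per 1{,}000 ft},
\]

an onset roughly 20 times steeper than the decay, which is precisely what makes a microburst so difficult for a crew to anticipate.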

At the same time, the Center for Turbulence Research (CTR), run jointly by NASA and Stanford University, pioneered using an early parallel computer, the Illiac IV, to perform large turbulence simulations, something previously unachievable. CTR performed the first of these simulations and made the data available to researchers around the globe. Scientists and engineers tested theories, evaluated modeling ideas, and, in some cases, calibrated measuring instruments on the basis of these data. A 5-minute motion picture of simulated turbulent flow provided an attention-catching visual for the scientific community.[59]

In 1984, NASA and FAA representatives met at Langley Research Center to review the status of wind shear research and progress toward developing sensor systems and preventing disastrous accidents. Out of this, researcher Roland L. Bowles conceptualized a joint NASA-FAA program to develop an airborne detector system, perhaps one that would be forward-looking and thus able to furnish real-time warning to an airline crew of wind shear hazards in its path. Unfortunately, before this program could yield beneficial results, yet another wind shear accident followed the dismal succession of its predecessors: the crash of Delta Flight 191 at Dallas-Fort Worth International Airport (DFW) on August 2, 1985.[60]

Delta Flight 191 was a Lockheed L-1011 TriStar wide-body jumbo jet. As it descended toward Runway 17L amid a violent turbulence-producing thunderstorm, a storm cell produced a microburst directly in the airliner's path. The L-1011 entered the fury of the outflow when only 800 feet above ground and at a low speed and energy state. As the L-1011 transitioned through the microburst, a lift-enhancing headwind of 26 knots abruptly dropped to zero and, as the plane sank in the downdraft column, then became a 46-knot tailwind, robbing it of lift. At low altitude, the pilots had insufficient room for recovery, and so, just 38 seconds after beginning its approach, Delta Flight 191 plunged to Earth, a mile short of the runway threshold. It broke up in a fiery heap of wreckage, slewing across a highway and crashing into some water tanks before coming to rest, burning furiously. The accident claimed the lives of 136 passengers and crewmembers and the driver of a passing automobile. Just 24 passengers and 3 of its crew survived; only 2 were without injury.[61] Among the victims were several senior staff members from IBM, including computer pioneer Don Estridge, father of the IBM PC. Once again, the NTSB blamed an "encounter at low altitude with a microburst-induced, severe wind shear" from a rapidly developing thunderstorm on the final approach course. But the accident illustrated as well the immature capabilities of the LLWAS at that time; only after Flight 191 had crashed did the DFW LLWAS detect the fatal microburst.[62]
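Using the airspeed relation sketched earlier, the cost of that transit can be tallied directly (arithmetic added for illustration):

\[
\Delta V \approx 26\ \text{kt (lost headwind)} + 46\ \text{kt (gained tailwind)} = 72\ \text{knots},
\]

a loss of roughly 72 knots of airspeed at an altitude far too low to trade height for speed.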

The Dallas accident resulted in widespread shock because of its large number of fatalities. It particularly affected airline crews, as American Airlines Capt. Wallace M. Gillman recalled vividly at a NASA-sponsored 1990 meeting of international experts in wind shear:

About one week after Delta 191's accident in Dallas, I was taxiing out to take off on Runway 17R at DFW Airport. Everybody was very conscious of wind shear after that accident. I remember there were some storms coming in from the northwest and we were watching it as we were in a line of airplanes waiting to take off. We looked at the wind socks. We were listening to the tower reports from the LLWAS system, the winds at various portions around the airport. I was number 2 for takeoff and I said to my co-pilot, "I'm not going to go on this runway."

But just at that time, the number 1 crew in line, Pan Am, said,

"I’m not going to go.” Then the whole line said, "We’re not going to go” then the tower taxies us all down the runway, took us about 15 minutes, down to the other end. By that time the storm had kind of passed by and we all launched to the north.[63]

Applications Technology Satellite 1 (ATS 1): 1966-1967

Aviation's use of space-based technology was first demonstrated during 1966 and 1967, when the FAA used NASA's Applications Technology Satellite 1 (ATS 1) to relay voice communications over very high frequency (VHF) radio between the ground and an FAA aircraft in flight, with the aim of enabling safer air traffic control over the oceans.[199]

Launched from Cape Canaveral atop an Atlas Agena D rocket on December 7, 1966, the spin-stabilized ATS 1 was injected into geosynchronous orbit to take up a perch 22,300 miles high, directly over the Equator. During this early period in space history, the ATS 1 spacecraft was packed with experiments to demonstrate how satellites could be used to provide the communication, navigation, and weather monitoring that we now take for granted. In fact, the ATS 1's black and white television camera captured the first full-Earth image of the planet's cloud-covered surface.[200]

Eight flight tests were conducted using NASA's ATS 1 to relay voice signals between the ground and an FAA aircraft using VHF-band radio, with the intent of allowing air traffic controllers to speak with pilots flying over an ocean. Measurements were recorded of signal level, signal-plus-noise-to-noise ratio, multipath propagation, voice intelligibility, and adjacent channel interference. In a 1970 FAA report, the author concluded that the "overall communications reliability using the ATS 1 link was considered marginal."[201]

Altogether, the ATS project attempted six satellite launches between 1966 and 1974, with ATS 2 and ATS 4 unable to achieve a useful orbit. ATS 1 and ATS 3 continued the FAA radio relay testing, this time including a specially equipped Pan American Airways 747 as it flew a commercial flight over the ocean. Results were better than when the ATS 1 was tested alone, with a NASA summary of the experiments concluding that

The experiments have shown that geostationary satellites can provide high quality, reliable, un-delayed communications between distant points on the earth and that they can also be used for surveillance. A combination of un-delayed communications and independent surveillance from shore provides the elements necessary for the implementation of effective traffic control for ships and aircraft over oceanic regions. Eventually the same techniques may be applied to continental air traffic control.[202]

Center TRACON Automation System

The computer-based tools used to improve the flow of traffic across the National Airspace System—such as SMS, FACET, and ACES, already discussed—were built upon the historical foundation of another set of tools that are still in use today. Rolled out during the 1990s, the underlying concepts of these tools go back to 1968, when an Ames Research Center scientist, Heinz Erzberger, first explored the idea of introducing air traffic control concepts—such as 4-D trajectory synthesis—and then proposed what was, in fact, developed: the Center TRACON Automation System (CTAS), the Traffic Manager Adviser (TMA), the En Route Descent Adviser (EDA), and the Final Approach Spacing Tool (FAST). Each of the tools provides controllers with advice, information, and some amount of automation—but each tool does this for a different segment of the NAS.[265]

CTAS provides automation tools to help air traffic controllers plan for and manage aircraft arriving at a Terminal Radar Approach Control (TRACON), which is the area within about 40 miles of a major airport. It does this by generating air traffic advisories that are designed to increase fuel efficiency and reduce delays, as well as assist controllers in ensuring that there is an acceptable separation between aircraft and that planes are approaching a given airport in the correct order. CTAS's goals also include improving airport capacity without threatening safety or increasing the workload of controllers.[266]
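The sequencing function at the heart of such tools can be illustrated with a toy scheduler. The sketch below is a simplified reconstruction for illustration only, not NASA's CTAS or TMA code; the separation value and flight names are hypothetical.

```python
# Hypothetical time-based separation at the runway threshold, in seconds.
REQUIRED_SEPARATION_S = 90

def schedule_arrivals(etas):
    """Assign scheduled arrival times and advised delays.

    etas: dict mapping flight -> estimated time of arrival, in seconds.
    Aircraft are taken first come, first served in ETA order, and each is
    pushed back just enough to restore the required spacing.
    """
    schedule = {}
    last_slot = None
    for flight, eta in sorted(etas.items(), key=lambda kv: kv[1]):
        slot = eta if last_slot is None else max(eta, last_slot + REQUIRED_SEPARATION_S)
        schedule[flight] = {"eta": eta, "scheduled": slot, "delay": slot - eta}
        last_slot = slot
    return schedule

# Example: three arrivals bunched within one minute of one another.
print(schedule_arrivals({"DAL101": 0, "AAL202": 30, "UAL303": 60}))
# AAL202 is advised 60 s of delay and UAL303 120 s, restoring 90 s spacing.
```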


Flight controllers test the Traffic Manager Adviser tool at the Denver TRACON. The tool helps manage the flow of air traffic in the area around an airport. National Air and Space Museum.

Bioastronautics, Bioengineering, and Some Hard-Learned Lessons

Over the past 50 years, NASA has indeed encountered many complex human factors issues. Each of these had to be resolved to make possible the space agency's many phenomenal accomplishments. Its initial goal of putting a man into space was quickly accomplished by 1961. But in the years to come, NASA progressed beyond that at warp speed—at least technologically speaking.[344] By 1973, it had put men into orbit around the Earth; sent them outside the relative safety of their orbiting craft to "walk" in space, with only their pressurized suit to protect them; sent them around the far side of the Moon and back; placed them into an orbiting space station, where they would live, function, and perform complex scientific experiments in weightlessness for months at a time; and, certainly most significantly, accomplished mankind's greatest technological feat by landing humans onto the surface of the Moon—not just once, but six times—and bringing them all safely back home to Mother Earth.[345]

NASA's magnificent accomplishments in its piloted space program during the 1960s and 1970s—nearly unfathomable only a few years before—thus occurred in large part as a result of years of dedicated human factors research. In the early years of the piloted space program, researchers from the NASA Environmental Physiology Branch focused on the biodynamics—or more accurately, the bioastronautics—of man in space. This discipline, which studies the biological and medical effects of space flight on man, evaluated such problems as noise, vibration, acceleration and deceleration, weightlessness, radiation, and the physiology, behavioral aspects, and performance of astronauts operating under confined and often stressful conditions.[346] These researchers thus focused on providing life support and ensuring the best possible medical selection and maintenance of the humans who were to fly into space.


Mercury astronauts experiencing weightlessness in a C-131 aircraft flying a "zero-g" trajectory. This was just one of many aspects of piloted space flight that had never before been addressed. NASA.


Also essential for this work to progress was the further development of the technology of biomedical telemetry. This involved monitoring and transmitting a multitude of vital signs from an astronaut in space on a real-time basis to medical personnel on the ground. The comprehensive data collected included such information as body temperature, heart rate and rhythm, blood and pulse pressure, blood oxygen content, respiratory and gastrointestinal functions, muscle size and activity, urinary functions, and varying types of central nervous system activity.[347] Although much work had already been done in this field, particularly in the X-15 program, NASA further perfected it during the Mercury program when the need to carefully monitor the physiological condition of astronauts in space became critical.[348]

Finally, this early era of NASA human factors research included an emphasis on the bioengineering aspects of piloted space flight, or the application of engineering principles in order to satisfy the physiological requirements of humans in space. This included the design and application of life-sustaining equipment to maintain atmospheric pressure, oxygen, and temperature; provide food and water; eliminate metabolic waste products; ensure proper restraint; and combat the many other stresses and hazards of space flight. This research also included finding the most expeditious way of arranging the multitude of dials, switches, knobs, and displays in the spacecraft so that the astronaut could efficiently monitor and operate them.[349]

In addition to the knowledge gained and applied while planning these early space flights was that gleaned from the flights themselves. The data gained and the lessons learned from each flight were essential to further success, and they were continually factored into future piloted space endeavors. Perhaps even more important, however, was the information gained from the failures of this period. They taught NASA researchers many painful but nonetheless important lessons about the cost of neglecting human factors considerations. Perhaps the most glaring example of this was the Apollo 1 fire of January 27, 1967, that killed NASA astronauts Virgil "Gus" Grissom, Roger Chaffee, and Edward White. While the men were sealed in their capsule conducting a launch pad test of the Apollo/Saturn space vehicle that was to be used for the first flight, a flash fire occurred. That such a fire could have happened in such a controlled environment was hard to explain, but the fact that no effective means had been provided for the astronauts' rescue or escape in such an emergency was inexplicable.[350] This tragedy did, however, serve some purpose; it gave impetus to tangible safety and engineering improvements, including the creation of an escape hatch that astronauts could open more quickly to egress during an emergency.[351] Perhaps more importantly, this tragedy caused NASA to step back and reevaluate all of its safety and human engineering procedures.


Apollo 1 astronauts, left to right, Gus Grissom, Ed White, and Roger Chaffee. Their deaths in a January 27, 1967, capsule fire prompted vital changes in NASA’s safety and human engineering policies. NASA.

A New Direction for NASA’s Human Factors Research

By the end of the Apollo program, NASA, though still focused on the many initiatives of its space ventures, began to look in a new direction for its research activities. The impetus for this came from a 1968 Senate Committee on Aeronautical and Space Sciences report recommending that NASA and the recently created Department of Transportation jointly determine which areas of civil aviation might benefit from further research.[352] A subsequent study prompted the President's Office of Science and Technology to direct NASA to begin similar research. The resulting Terminal Configured Vehicle program led to a new focus in NASA human factors research. This included the all-important interface between not only the pilot and airplane, but also the pilot and the air traffic controller.[353]

The goal of this ambitious program was

. . . to provide improvements in the airborne systems (avionics and air vehicle) and operational flight procedures for reducing approach and landing accidents, reducing weather minima, increasing air traffic controller productivity and airport and airway capacity, saving fuel by more efficient terminal area operations, and reducing noise by operational procedures.[354]

With this directive, NASA’s human factors scientists were now officially involved with far more than "just” a piloted space program; they would now have to extend their efforts into the expansive world of aviation.

With these new aviation-oriented research responsibilities, NASA's human factors programs would continue to evolve and increase in complexity throughout the remaining decades of the 20th century and into the present one. This advancement was inevitable, given the growing technology, especially in the realm of computer science and complex computer-managed systems, as well as the changing space and aeronautical needs that arose throughout this period.

During NASA's first three decades, more and more of the increasingly complex aerospace operating systems it was developing for its space initiatives and the aviation industry were composed of multiple subsystems. For this reason, the need arose for a human systems integration (HSI) plan to help maximize their efficiency. HSI is a multidisciplinary approach that stresses human factors considerations, along with other such issues as health, safety, training, and manpower, in the early design of fully integrated systems.[355]

To better address the human factors research needs of the aviation community, NASA formed the Flight Management and Human Factors Division at Ames Research Center, Moffett Field, CA.[356] Its name was later changed to the Human Factors Research & Technology Division; today, it is known as the Human Systems Integration Division (HSID).[357]

For the past three decades, this division and its precursors have sponsored and participated in most of NASA's human factors research affecting both aviation and space flight. HSID describes its goal as "safe, efficient, and cost-effective operations, maintenance, and training, both in space, in flight, and on the ground," in order to "advance human-centered design and operations of complex aerospace systems through analysis, experimentation and modeling of human performance and human-automation interaction to make dramatic improvements in safety, efficiency and mission success."[358] To accomplish this goal, the division, in its own words,

• Studies how humans process information, make decisions, and collaborate with human and machine systems.

• Develops human-centered automation and interfaces, decision support tools, training, and team and organizational practices.

• Develops tools, technologies, and countermeasures for safe and effective space operations.[359]

More specifically, the Human Systems Integration Division focuses on the following three areas:

• Human performance: This research strives to better define how people react and adapt to various types of technology and differing environments to which they are exposed. By analyzing such human reactions as visual, auditory, and tactile senses; eye movement; fatigue; attention; motor control; and such perceptual cognitive processes as memory, it is possible to better predict and ultimately improve human performance.

• Technology interface design: This directly affects human performance, so technology design that is patterned to efficient human use is of utmost importance. Given the complexity and magnitude of modern pilot/aircrew cockpit responsibilities—in commercial, private, and military aircraft, as well as space vehicles—it is essential to simplify and maximize the efficiency of these tasks. Only with cockpit instruments and controls that are easy to operate can human safety and efficiency be maximized. Interface design might include, for example, the development of cockpit instrumentation displays and arrangement, using a graphical user interface.

• Human-computer interaction: This studies the "processes, dialogues, and actions" a person uses to interact with a computer in all types of environments. This interaction allows the user to communicate with the computer by inputting instructions and then receiving responses back from the computer via such mechanisms as conventional monitor displays or head-mounted displays that allow the user to interact with a virtual environment. This interface must be properly adapted to the individual user, task, and environment.[360]

Some of the more important research challenges HSID is addressing and will continue to address are proactive risk management, human performance in virtual environments, distributed air traffic management, computational models of human-automation interaction, cognitive models of complex performance, and human performance in complex operations.[361]

Over the years, NASA's human factors research has covered an almost unbelievably wide array of topics. This work has involved—and benefitted—nearly every aspect of the aviation world, including the FAA, DOD, the airline industry, general aviation, and a multitude of nonaviation areas. To get some idea of the scope of the research with which NASA has been involved, one need only search the NASA Technical Report Server using the term "human factors," which produces more than 3,600 records.[362]


A full-scale aircraft drop test being conducted at the 240-foot-high NASA Langley Impact Dynamics Research Facility. The gantry previously served as the Lunar Landing Research Facility. NASA.

It follows that no single paper or document—and this case study is no exception—could ever comprehensively describe NASA's human factors research. It is possible, however, to get some idea of the impact that NASA human factors research has had on aviation safety and technology by reviewing some of the major programs that have driven the Agency's human factors research over the past decades.

Into the Future

The preceding discussion can serve only as a brief introduction to NASA's massive research contribution to aviation in the realm of human factors. Hopefully, however, it has clearly made the following point: NASA, since its creation in 1958, has been an equally contributing partner with the aeronautical industry in the sharing of new technology and information resulting from their respective human factors research activities.

Because aerospace is but an extension of aeronautics, it is difficult to envision how NASA could have put its first human into space without the knowledge and technology provided by the aeronautical human factors research and development that occurred in the decades leading up to the establishment of NASA and its piloted space program. In return, however, today's high-tech aviation industry is immeasurably more advanced than it would have been without the past half century of dedicated scientific human factors research conducted and shared by the various components of NASA.

Without the thousands of NASA human factors-related research initiatives during this period, many—if not most—of the technologies that are a normal part of today's flight, air traffic control, and aircraft maintenance operations would not exist. The high cost, high risk, and lack of tangible cost-effectiveness of the research and development these advances entailed rendered this kind of research too expensive and speculative for funding by commercial concerns forced to abide by "bottom-line" considerations. As a result of NASA research and the many safety programs and technological innovations it has sponsored for the benefit of all, countless additional lives and dollars were saved as many accidents and losses of efficiency were undoubtedly prevented.

It is clear that NASA is going to remain in the business of improving aviation safety and technology for the long haul. NASA's Aeronautics Research Mission Directorate (ARMD), one of the Agency's four major directorates, will continue improving the safety and efficiency of aviation with its aviation safety, fundamental aeronautics, airspace systems, and aeronautics test programs. Needless to say, a major aspect of these programs will involve human factors research as it pertains to aeronautics.[439]

It is impossible to predict precisely in which direction NASA's human factors research will go in the decades to come; however, based on the Agency's remarkable 50-year history, it seems safe to assume it will continue to contribute to an ever-safer and more efficient world of aviation.

Hovering flight test of a free-flight model of the Hawker P.1127 V/STOL fighter underway in the return passage of the Full-Scale Tunnel. Flying-model demonstrations of the ease of transi­tion to and from forward flight were key in obtaining the British government’s support. NASA.

Spinning

Qualitatively, recovery from the various spin modes is dependent on the type of spin exhibited, the mass distribution of the aircraft, and the sequence of controls applied. Recovering from the steep steady spin tends to be relatively easy because the nose-down orientation of the aircraft control surfaces to the free stream enables at least a portion of the control effectiveness to be retained. In contrast, during a flat spin, the fuselage may be almost horizontal, and the control surfaces are oriented so as to provide little recovery moment, especially a rudder on a conventional vertical tail. In addition to the ineffectiveness of controls for recovery from the flat spin, the rotation of the aircraft about a near-vertical axis near its center of gravity results in extremely high centrifugal forces at the cockpit for configurations with long fuselages. In many cases, the negative ("eyeballs out") g-loads may be so high as to incapacitate the crewmembers and prevent them from escaping from the aircraft.
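The magnitude of these spin loads is straightforward to estimate from the rotation rate and the crew's distance from the spin axis. As a hedged illustration with representative numbers, not drawn from any specific aircraft, the load factor at a station a distance r from the spin axis is

\[ n = \frac{\omega^{2} r}{g} \]

so a flat spin at 150 degrees per second (ω ≈ 2.6 rad/s) with a cockpit 15 feet ahead of the center of gravity gives n ≈ (2.6² × 15)/32.2 ≈ 3.2 g, directed outward, the "eyeballs out" loading described above.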

The NACA and the Wind Tunnel

For the United States, the Great War highlighted the need to achieve parity with Europe in aeronautical development. Part of that effort was the creation of a Government civilian research agency, the NACA, in March 1915. The committee established its first facility, Langley Memorial Aeronautical Laboratory—named in honor of aeronautical experimenter and Smithsonian Secretary Samuel P. Langley—2 years later near Hampton, VA, on the Chesapeake Bay. In June 1920, NACA Wind Tunnel No. 1 became operational. A close copy of a design built at the British National Physical Laboratory a decade earlier, the tunnel produced no data directly applicable to aircraft design.[536]

NACA Wind Tunnel No. 1 with a model of a Curtiss JN-4D Trainer in the test section. NASA.

One of the major obstacles facing the effective use of a wind tunnel was scale effects, meaning the Reynolds number of a model did not match that of the full-scale airplane. Prandtl protege Max Munk proposed the construction of a high-pressure tunnel to solve the problem. His Variable Density Tunnel (VDT) could be used to test a 1/20th-scale model in an airflow pressurized to 20 atmospheres, which would generate Reynolds numbers identical to those of full-scale aircraft. Built in the Newport News shipyards, the VDT was radical in design, with its boilerplate shell and rivets. More important, the data it produced made it a point of departure from previous tunnels.[537]
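The logic of Munk's solution can be stated in a single relation. The Reynolds number is Re = ρVL/μ, and because the viscosity of air is essentially independent of pressure while its density is proportional to pressure, compressing the airflow by the model's scale factor restores the full-scale value:

\[ Re_{\text{model}} = \frac{(20\rho)\,V\,(L/20)}{\mu} = \frac{\rho V L}{\mu} = Re_{\text{full scale}} \]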

The VDT became an indispensable tool for airfoil development and effectively reshaped the subsequent direction of American airfoil research and development after it became operational in 1923. Munk's successor in the VDT, Eastman Jacobs, and his colleagues pioneered airfoil design methods with the pivotal Technical Report 460, which influenced aircraft design for decades after its publication in 1933.[538] Of the 101 distinct airfoil sections employed on modern Army, Navy, and commercial airplanes by 1937, 66 were NACA designs. Those aircraft included the venerable Douglas DC-3 airliner, considered by many to be the first truly "modern" airplane, and the highly successful Boeing B-17 Flying Fortress of World War II.[539]

The NACA also addressed the fundamental problem of incorporating a radial engine into aircraft design in the pioneering Propeller Research Tunnel (PRT). Lightweight, powerful, and considered a revolutionary aeronautical innovation, the radial engine featured a flat frontal configuration that created substantial drag. Engineer Fred E. Weick and his colleagues tested full-size aircraft structures in the tunnel's 20-foot opening. Their solution, called the NACA cowling, arrived at the right moment to increase the performance of new aircraft. Spectacular demonstrations—such as Frank Hawks flying the Texaco Lockheed Air Express, with a NACA cowling installed, from Los Angeles to New York nonstop in a record time of 18 hours 13 minutes in February 1929—led to the organization's first Collier Trophy, in 1929.

With the basic formula for the modern airplane in place, the aeronautical community began to push the limits of conventional aircraft design. The NACA built upon its success with the cowling research in the PRT and concentrated on the aerodynamic testing of full-scale aircraft in wind tunnels. The Full-Scale Tunnel (FST) featured a 30- by 60-foot test section and opened at Langley in 1931. The building was a massive structure at 434 feet long, over 200 feet wide, and 9 stories high. The first aircraft to be tested in the FST was a Navy Vought O3U-1 Corsair observation airplane. Testing in the late 1930s focused on removing as much drag from an airplane in flight as possible. NACA engineers—through an extensive program involving the Navy's first monoplane fighter, the Brewster XF2A-1 Buffalo—showed that attention to details such as air intakes, exhaust pipes, and gun ports effectively reduced drag.

In the mid- to late 1920s, the first generation of university-trained American aeronautical engineers began to enter industry, the Government, and academia. The philanthropic Daniel Guggenheim Fund for the Promotion of Aeronautics created aeronautical engineering schools, complete with wind tunnels, at the California Institute of Technology, Georgia Institute of Technology, Massachusetts Institute of Technology, University of Michigan, New York University, Stanford University, and University of Washington. The creation of these dedicated academic programs ensured that aeronautics would be an institutionalized profession. The university wind tunnels quickly made their mark. The prototype Douglas DC airliner, the DC-1, flew in July 1933. It was, in every sense of the word, a streamlined airplane, owing to the extensive wind tunnel testing at the Guggenheim Aeronautical Laboratory at the California Institute of Technology used in its design.

By the mid-1930s, it was obvious that the sophisticated wind tunnel research program undertaken by the NACA had contributed to a new level of American aeronautical capability. Each of the major American manufacturers built wind tunnels or relied upon a growing number of university facilities to keep up with the rapid pace of innovation. Despite those additions, it was clear in the minds of the editors at the influential trade journal Aviation that the NACA led the field with the grace, style, and coordinated virtuosity of a symphonic orchestra.[540]

World War II stimulated the need for sophisticated aerodynamic testing, and new wind tunnels met the need. Langley's 20-Foot Vertical Spin Tunnel (VST) became operational in March 1941. The major difference between the VST and those that came before was its vertical closed throat and annular return. A variable-speed, three-blade, fixed-pitch fan provided vertical airflow at an approximate velocity of 85 feet per second at atmospheric conditions. Researchers threw dynamically scaled, free-flying aircraft models into the tunnel to evaluate their stability as they spun and tumbled out of control. The installation of remotely actuated control surfaces allowed the study of spin recovery characteristics. The NACA solution to aircraft spin problems was to enlarge the vertical tail, raise the horizontal tail, and extend the length of the ventral fin.[541]
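The "dynamically scaled" models mentioned here follow what is generally known as Froude scaling, in which a model of scale factor n reproduces full-scale motions when lengths, speeds, times, and masses are related as follows (the standard textbook relations, not values particular to the VST):

\[ L_m = n L_a, \qquad V_m = \sqrt{n}\,V_a, \qquad t_m = \sqrt{n}\,t_a, \qquad m_m = n^{3}\,\frac{\rho_m}{\rho_a}\,m_a \]

where subscripts m and a denote the model and the airplane, and the air-density ratio corrects for the difference between tunnel air and air at the simulated flight altitude.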

The NACA founded the Ames Aeronautical Laboratory on December 20, 1939, in anticipation of the need for expanded research and flight-test facilities for the West Coast aviation industry. The NACA leadership wanted to reach parity with European aeronautical research based on the belief that the United States would be entering World War II. The cornerstone facility at Ames was the 40- by 80-foot tunnel, capable of generating airflow of 265 mph for even larger full-scale aircraft when it opened in 1944. Building upon the revolutionary drag reduction studies pioneered in the FST, Ames researchers continued to modify existing aircraft with fillets and innovated dive recovery flaps to offset compressibility, a new problem encountered when aircraft entered high-speed dives.[542]

The NACA also desired a dedicated research facility that specialized in aircraft propulsion systems. Construction of the Aircraft Engine Research Laboratory (AERL) began at Cleveland, OH, in January 1941, with the facility becoming operational in May 1943.[543] The cornerstone facility was the Altitude Wind Tunnel (AWT), which became operational in 1944. The AWT was the only wind tunnel in the world capable of evaluating full-scale aircraft engines in realistic flight conditions that simulated altitudes up to 50,000 feet and speeds up to 500 mph. AERL researchers began first with large radial engines and propellers and continued with the new jet technology on through the postwar decades.[544]

The AERL soon became the center of the NACA's work on alleviating aircraft icing. The Army Air Forces lost over 100 military transports, along with their crews and cargoes, over the "Hump," or the Himalayas, as it tried to supply China by air. The problem was the buildup of ice on wings and control surfaces, which degraded the aircraft's aerodynamic integrity and overloaded the airframe. The challenge was developing de-icing systems that removed or prevented the ice buildup. The Icing Research Tunnel (IRT) was the largest of its kind when it opened in 1944. It featured a 6- by 9-foot test section, a 160-horsepower electric motor capable of generating a 300 mph airstream, and a 2,100-ton refrigeration system that cooled the airflow down to -40 degrees Fahrenheit (°F).[545] The tunnel worked well during the war and the following two decades, before NASA closed it. However, a new generation of icing problems for jet aircraft, rotary wing aircraft, and Vertical/Short Take-Off and Landing (V/STOL) aircraft resulted in the reopening of the IRT in 1978.[546]

During World War II, airplanes ventured into a new aerodynamic regime, the so-called "transonic barrier." American propeller-driven aircraft suffered from aerodynamic problems caused by high-speed flight. Flight-testing of the P-38 Lightning revealed compressibility problems that resulted in the death of a test pilot in November 1941. As the Lightning dove from 30,000 feet, shock waves formed over the wings and hit the tail, causing violent vibration, which caused the airplane to plummet into a vertical, and unrecoverable, dive. At speeds approaching Mach 1, aircraft experienced sudden changes in stability and control, extreme buffeting, and, most importantly, a dramatic increase in drag, which created challenges for the aeronautical community involving propulsion, research facilities, and aerodynamics. Bridging the gap between subsonic and supersonic speeds was a major aerodynamic challenge.[547]

The transonic regime was unknown territory in the 1940s. Because no known wind tunnel could operate and generate data at transonic speeds, four approaches were used in lieu of a tunnel for transonic research: putting full-size aircraft into terminal velocity dives, dropping models from aircraft, mounting miniature wings on flying aircraft, and launching models on rockets. Aeronautical engineers thus faced a daunting challenge rooted in developing new tools and concepts.

NACA Manager John Stack took the lead in American transonic development. As the central NACA researcher in the development of the first research airplane, the Bell X-1, he was well qualified for high-speed research. His part in the first supersonic flight resulted in a joint award of the 1947 Collier Trophy. In spring 1948, he ordered the conversion of the 8- and 16-Foot High-Speed Tunnels to slotted throats to enable research in the transonic regime. Slots in the tunnels' test sections, or throats, enabled smooth operation at high subsonic speeds and low supersonic speeds. The initial conversion was not satisfactory. Physicist Ray Wright and engineers Virgil S. Ritchie and Richard T. Whitcomb hand-shaped the slots based on their visualization of smooth transonic flow. Working directly with Langley woodworkers, they designed and fabricated a channel at the downstream end of the test section that reintroduced air that had traveled through the slots. Their painstaking work led to the inauguration of operations in the newly christened 8-Foot Transonic Tunnel (TT) 7 months later, on October 6, 1950.[548]

Rumors had been circulating throughout the aeronautical community about the NACA's new transonic tunnels: the 8-Foot TT and the 16-Foot TT. The NACA wanted knowledge of their existence to remain confidential among the military and industry. Concerns over secrecy were ultimately deemed less important than acknowledgment of the development of the slotted-throat tunnel, for which John Stack and 19 of his colleagues received a Collier Trophy in 1951. The award specifically recognized the importance of a research tool, a first in the 40-year history of the award. When used with already available wind tunnel components and techniques (the tunnel balance, pressure orifice, tuft surveys, and schlieren photographs), slotted-throat tunnels resulted in a new theoretical understanding of transonic drag. The NACA claimed that its slotted-throat transonic tunnels gave the United States a 2-year lead in the design of supersonic military aircraft.[549] John Stack's leadership shaped the NACA's development of state-of-the-art wind tunnel technology. The researchers inspired by or working under him developed a generation of wind tunnels that, according to Joseph R. Chambers, became "national treasures."[550]

The Path to the Modern Era

A strategy began forming in 1972 with the launch of the Air Force-NASA Long Range Planning Study for Composites (RECAST), which focused priorities for the research projects that would soon begin.[700] That was prelude to what NASA researcher Marvin Dow would later call the "golden age of composites research,"[701] a period stretching from roughly 1975 until funding priorities shifted in 1986. As airlines looked to airframers for help, military aircraft were already making great strides with composite structure. The Grumman F-14 Tomcat, then the McDonnell-Douglas F-15 Eagle, incorporated boron-epoxy composites into the empennage skin, a primary structure.[702] With the first flight of the McDonnell-Douglas AV-8B Harrier in 1978, composite usage had spread to the wing as well. In all, about one-fourth of the AV-8B's weight,[703] including 75 percent of the weight of the wing alone,[704] was made of composite material. Meanwhile, composite materials studies by Air Force engineer Norris Krone opened the door to experimenting with forward-swept wings. NASA responded to Krone's papers in 1976 by launching the X-29 technology demonstrator, which incorporated an all-composite wing.[705]

Air Force engineer Norris Krone prompted NASA to develop the X-29 to prove that high-strength composites were capable of supporting forward-swept wings. NASA.

Composites also found a fertile atmosphere for innovation in the rotorcraft industry during this period. As NASA pushed the commercial aircraft industry forward in the use of composites, the U.S. Army spurred progress among its helicopter suppliers. In 1981, the Army selected Bell Helicopter Textron and Sikorsky to design all-composite airframes under the advanced composite airframe program (ACAP).[706]

Perhaps already eyeing the need for a new light airframe to replace the Bell OH-58 Kiowa scout helicopter, the Army tasked the contractors to design a new utility helicopter under 10,000 pounds that could fly for up to 2 hours 20 minutes.[707] Bell first flew the D-292 in 1984, and Sikorsky flew the S-75 ACAP in 1985.[708] Boeing complemented their efforts by designing the Model 360, an all-composite helicopter airframe with a gross weight of 30,500 pounds.[709] Each of these projects provided the steppingstones needed for all three contractors to fulfill the design goals for both the now-canceled Sikorsky-Boeing RAH-66 Comanche and the Bell-Boeing V-22 Osprey tilt rotor. The latter also drove developments in automated fiber placement technology, relieving the need to lay up by hand about 50 percent of the airframe's weight.[710]

In the midst of this rapid progress, the makers of executive and "general" aircraft required neither the encouragement nor the financial assistance of the Government to move wholesale into composite airframe manufacturing. While Boeing dabbled with composite spoilers, ailerons, and wing covers on its new 767, William P. Lear, founder of LearAvia, was developing the Lear Fan 2100—a twin-engine, nine-seat aircraft powered by a pusher propeller, with a 3,650-pound airframe made almost entirely from a graphite-epoxy composite.[711] About a decade later, Beechcraft unveiled the popular and stylish Starship 1, an 8- to 10-passenger twin turboprop weighing 7,644 pounds empty.[712] Composite materials—mainly graphite-epoxy and NOMEX sandwich panels—accounted for 72 percent of the airframe's weight.[713]

Actual performance fell far short of the original expectations during this period. Dow's NASA colleagues in 1975 had outlined a strategy that should have led to full-scale tests of an all-composite fuselage and wing box for a civil airliner by the late 1980s. Although the dream was delayed by more than a decade, the state of knowledge and understanding of composite materials leaped dramatically during this period. The three major U.S. commercial airframers of the era—Boeing, Lockheed, and McDonnell-Douglas—each made contributions. However, the agenda was led by NASA's $435-million investment in the Aircraft Energy Efficiency (ACEE) program. ACEE's top goal, in terms of funding priority, was to develop an energy-efficient engine. The program also invested heavily in improving how airframers control for laminar flow. But a major pillar of ACEE was to drive the civil industry to fundamentally change its approach to aircraft structures and shift from metal to the new breed of composites then emerging from laboratories. As of 1979, NASA had budgeted $75 million toward achieving that goal,[714] with the manufacturers responsible for providing a 10-percent match.

ACEE proposed a gradual development strategy. The first step was to install a graphite-epoxy composite material called Narmco T300/5208[715] on lightly loaded secondary structures of existing commercial aircraft in operational service. For their parts, Boeing selected the 727 elevator, Lockheed chose the L-1011 inboard aileron, and Douglas opted to change the DC-10 upper aft rudder.[716] From this starting point, NASA engaged the manufacturers to move on to medium-primary components, which became the 737 horizontal stabilizer, the L-1011 vertical fin, and the DC-10 vertical stabilizer.[717] The weight savings for the medium-primary components were estimated to be 23 percent, 30 percent, and 22 percent, respectively.[718]

The leap from secondary to medium-primary components yielded some immediate lessons for what not to do in composite structural design. All three components failed before experiencing ultimate loads in initial ground tests.[719] The problems showed how different composite material could be from the familiar characteristics of metal. Compared to aluminum, an equal amount of composite material can support a heavier load. But, as experience revealed, this was not true in every condition experienced by an aircraft in normal flight. Metals are known to distribute stresses and loads to surrounding structures. In simple terms, they bend more than they break. Composite material does the opposite: it is brittle, stiff, and unyielding to the point of breaking.

Boeing's horizontal stabilizer and Douglas's vertical stabilizer both failed before the predicted ultimate load for similar reasons: the brittle composite structure did not redistribute loads as expected. In the case of the 737 component, Boeing had intentionally removed one lug pin to simulate a fail-safe mode. The structure under the point of stress buckled rather than redistributing the load. Douglas had inadvertently drilled too large a hole for a fastener where the web cover for the rear spar met a cutout for an access hole.[720] It was an error by Douglas's machinists, but one that would have been tolerable had the same structure been designed in metal. Lockheed faced a different kind of problem with the failure of the L-1011 vertical fin during similar ground tests. In this case, a secondary interlaminar stress developed after the fin's aerodynamic cover buckled at the attachment point with the front spar cap. NASA later noted: "Such secondary forces are routinely ignored in current metals design."[721] The design for each of these components was later modified to overcome these unfamiliar weaknesses of composite materials.

In the late 1970s, all three manufacturers began working on the basic technology for the ultimate goal of the ACEE program: designing full-scale, all-composite wings and fuselages. Control surfaces and empennage structures provided important steppingstones, but it was expected that expanding the use of composites to large sections of the fuselage and wing could improve efficiency by an order of magnitude.[722] More specifically, Boeing's design studies estimated a weight savings of 25-30 percent if the 757 fuselage were converted to an all-composite design.[723] Further, an all-composite wing designed with a metal-like allowable strain could reduce weight by as much as 40 percent for a large commercial aircraft, according to NASA's design analysis.[724] Each manufacturer was assigned a different task, with all three collaborating on their results to gain maximum benefit. Lockheed explored design techniques for a wet wing that could contain fuel and survive lightning strikes.[725] Boeing worked on creating a system for defining degrees of damage tolerance for structures[726] and designed wing panels strong enough to endure postimpact compression of 50,000 pounds per square inch (psi) at strains of 0.006.[727] Meanwhile, Douglas concentrated on methods for designing multibolted joints.[728] By 1984, NASA and Lockheed had launched the advanced composite center wing project, aimed at designing an all-composite center wing box for an "advanced" C-130 airlifter. This project, which included fabricating two 35-foot-long structures for static and durability tests, would seek to reduce the weight of the C-130's center wing box by 35 percent and reduce manufacturing costs by 10 percent compared with aluminum structure.[729] Meanwhile, Boeing started work in 1984 to design, fabricate, and test full-scale fuselage panels.[730]

Within a 10-year period, the U.S. commercial aircraft industry had come very far. From the near exclusion of composite structure in the early 1970s, composites had entered the production flow as both secondary and medium-primary components by the mid-1980s. This record of achievement, however, was eclipsed by even greater progress in commercial aircraft technology in Europe, where the then-upstart Airbus consortium had pushed composites technology even further.

While U.S. commercial programs continued to conduct demonstrations, the A300 and A310 production lines introduced an all-composite rudder in 1983 and an all-composite vertical tailfin in 1985. The latter vividly demonstrated the manufacturing efficiencies promised by composite designs. While a metal vertical tail contained more than 2,000 parts, Airbus designed a new structure with a carbon fiber epoxy-honeycomb core sandwich that required fewer than 100 parts, reducing both the weight of the structure and the cost of assembly.[731] A few years later, Airbus unveiled the A320 narrow body, with 28 percent of its structural weight composed of composite materials, including the entire tail structure, fuselage belly skins, trailing-edge flaps, spoilers, ailerons, and nacelles.[732] It would be another decade before a U.S. manufacturer eclipsed Airbus's lead, with the introduction of the Boeing 777 in 1995. Consolidating experience gained as a major structural supplier for the Northrop B-2A bomber program, Boeing designed the 777 with an all-composite empennage, composites accounting for roughly one-tenth of the airframe's weight.[733] By this time, the percentage of composites integrated into a commercial airliner's weight had become a measure of a manufacturer's progress in gaining a competitive edge over a rival, a trend that continues to this day with the emerging Airbus A350/Boeing 787 competition.

As European manufacturers assumed a technical lead over U.S. rivals in composite technology in the 1980s, the U.S. still retained a huge lead in military aircraft technology. With fewer operational concerns about damage tolerance, crash survivability, and manufacturing cost, military aircraft exploited the performance advantages of composite material, particularly its weight savings. The V-22 Osprey tilt rotor employed composites for 70 percent of its structural weight.[734] Meanwhile, Northrop and Boeing used composites extensively on the B-2 stealth bomber, which is 37-percent composite material by weight.

Steady progress on the military side, however, was not enough to sustain momentum for NASA's commercial-oriented technology. The ACEE program folded after 1985, following several years of real progress but before it had achieved all of its goals. The full-scale wing and fuselage test program, which had received a $92-million, 6-year budget from NASA in fiscal year 1984,[735] was deleted from the Agency's spending plans a year later.[736] By 1985, funding available to carry out the goals of the ACEE program had been steadily eroding for several years. The Reagan Administration took office in 1981 with a distinctly different view on the responsibility of Government to support the validation of commercial technologies.[737]

In constant 1988 dollars, ACEE funding dropped from a peak of $300 million in 1980 to $80 million in 1988, with funding for validating high-strength composite materials in flight wiped out entirely.[738] The shift in technology policy corresponded with priority disagreements between aeronautics and space supporters in industry, with the latter favoring boosting support for electronics over pure aeronautics research.[739]

In its 10-year run, the composite structural element of the ACEE program had overcome numerous technical issues. The most serious issue erupted in 1979 and caused NASA to briefly halt further studies until it could be fully analyzed. The story, always expressed in general terms, has become an urban myth in the aircraft composites community. Precise details of the incident appear lost to history, but the consequences of its impact were very real at the time. The legend goes that in the late 1970s, waste fibers from composite materials were dumped into an incinerator. Afterward, whether by cause or coincidence, a nearby electric substation shorted out.[740] Carbon fibers set loose by the incinerator fire were blamed for the malfunction at the substation.

The incident prompted widespread concerns among aviation engineers at a time when NASA was poised to spend hundreds of millions of dollars to transition composite materials from mainly space and military vehicles to large commercial transports. In 1979, NASA halted work on the ACEE program to analyze the risk that future crashes of increasingly composite-laden aircraft would spew blackout-causing fibers onto the Nation's electrical grid.[741]

Few seriously question the potential benefits that composite materials offer society. By the mid-1970s, it was clear that composites could dramatically raise the efficiency of aircraft. The cost of manufacturing the materials was higher, but the lower life-cycle cost of maintaining noncorroding composite structures offered a compelling offset. Concerns about the economic and health risks posed by such a dramatic transition to a different structural material have also been very real.

It was up to the aviation industry, with Government support, to answer these vital questions before composite technology could move further.

With the ACEE program suspended to study concerns about the risks to electrical equipment, both NASA and the U.S. Air Force had launched separate efforts by 1978 to overcome these concerns. In a typical aircraft fire after a crash, the fuel-driven blaze can reach temperatures between 1,800 and 3,600 degrees Fahrenheit (°F). At temperatures higher than 750 °F, the matrix material in a composite structure will burn off, which creates two potential hazards. First, as the matrix polymer transforms into fumes, the underlying chemistry creates a toxic mixture called pyrolysis product, which can be harmful if inhaled. Second, after the matrix material burns away, the carbon fibers are released into the atmosphere.[742]

These liberated fibers, which as natural conductors have the power to short-circuit a power line, could be dispersed over wide areas by wind. This led to concerns that the fibers could come into contact with local power cables or, even worse, exposed power substations, leading to widespread power blackouts as the fibers short-circuited the electrical equipment.[743] In the late 1970s, the U.S. Air Force started a program to study aircraft crashes that involved early-generation composite materials.

Another incident, in 1997, was typical of a different type of concern about the growing use of composite materials for aircraft structures. A U.S. Air Force F-117 flying a routine at the Baltimore airshow crashed when a wing strut failed. Emergency crews who rushed to the scene extinguished fires that destroyed or damaged several dwellings, blanketing the area with a "wax-like" substance that contained the carbon fibers embedded in the F-117's structure, fibers that could otherwise have been released into the atmosphere. Despite these precautions, the same firefighters and paramedics who rushed to the scene later reported becoming "ill from the fumes emitted by the fire. It was believed that some of these fumes resulted from the burning of the resin in the composite materials," according to a U.S. Navy technical paper published in 2003.[744]

Yet another issue has sapped the public's confidence in composite materials for aircraft structures for several decades. As late as 2007, the risk presented by lightning striking a composite section of an aircraft fuselage was the subject of a primetime investigation by Dan Rather, who extensively quoted a retired Boeing Space Shuttle engineer. The question is repeatedly asked: if the aluminum structure of a previous generation of airliners created a natural Faraday cage, how would composite materials, with their far weaker electrical conductivity, respond when struck by lightning?

Technical hazards were not the only threat to the acceptance of composite materials. To be sure, proving that composite material would be safe to operate in commercial service constituted an important endorsement of the technology for subsequent application, as the ACEE projects showed. But the aerospace industry also faced the challenge of establishing, from the ground up, a new industrial infrastructure that could supply vast quantities of composite materials. NASA officials anticipated the magnitude of the infrastructure issue. The shift from wood to metal in the 1930s occurred in an era when airframers acted almost recklessly by today's standards. Making a similar transition in the regulatory and business climate of the late 1970s would be another challenge entirely. Perhaps with an eye on the rapid progress being made by European competitors in commercial aircraft, NASA addressed the issue head-on. In 1980, NASA Deputy Administrator Alan M. Lovelace urged industry to "anticipate this change," adding that he realized "this will take considerable capital, but I do worry that if this is not done then might we not, a decade from now, find ourselves in a position similar to that in which the automobile industry is at the present time?"[745]

Of course, demand drives supply, and the availability of the raw material for making composite aerospace parts grew rapidly throughout the 1980s. For example, 2 years before Lovelace issued his warning to industry, U.S. manufacturers consumed 500,000 pounds of composites every 12 months, with the aerospace industry accounting for half of that amount.[746] Meanwhile, a single supplier of graphite fiber, Union Carbide, had already announced plans to increase annual output to 800,000 pounds by the end of 1981.[747] Throughout the 1980s, U.S. consumption would be driven as much by the automobile industry, which was also struggling to keep up with the innovations of foreign competitors, as by the aerospace industry.

Modeling the Future: Radio-Controlled Lifting Bodies

Robert Dale Reed, an engineer at NASA's Flight Research Center (later renamed NASA Dryden Flight Research Center) at Edwards Air Force Base and an avid radio-controlled (R/C) model airplane hobbyist, was one of the first to recognize the RPRV's potential. Previous drone aircraft had been used for reconnaissance or strike missions, flying a restricted number of maneuvers with the help of an autopilot or radio signals from a ground station. The RPRV, on the other hand, offered a versatile platform for operating in what Reed called "unexplored engineering territory."[883] In 1962, when astronauts returned from space in capsules that splashed down in the ocean, NASA and Air Force engineers were discussing a revolutionary concept for spacecraft reentry vehicles. Wingless lifting bodies—half-cone-shaped vehicles capable of controlled flight using the craft's fuselage shape to produce stability and lift—could be controlled from atmospheric entry to gliding touchdown on a conventional runway. Skeptics believed such craft would require deployable wings and possibly even pop-out jet engines.

Reed believed the basic lifting body concept was sound and set out to convince his peers. His first modest efforts at flight demonstration were confined to hand-launching small paper models in the hallways of the Flight Research Center. His next step involved construction, from balsa wood, of a 24-inch-long free-flight model.

The vehicle's shape was a half-cone design with twin vertical-stabilizer fins with rudders and a bump representing a cockpit canopy. Elevons provided longitudinal trim and turning control. Spring-wired tricycle wheels served as landing gear. Reed adjusted the craft's center of gravity until he was satisfied and began a series of hand-launched flight tests. He began at ground level and finally moved to the top of the NASA Administration building, gradually expanding the performance envelope. Reed found the model had a steep gliding angle but remained upright and landed on its gear.

He soon embarked on a path that presaged eventual testing of a full-scale, piloted vehicle. He attached a thread to the upper part of the nose gear and ran to tow the lifting body aloft, as one would launch a kite. Reed then turned to one of his favorite hobbies: radio-controlled, gas-powered model airplanes. He had previously used R/C models to tow free-flight model gliders with great success. By attaching the towline to the top of the R/C model's fuselage, just at the trailing edge of the wing, he ensured minimum effect on the tow plane from the motions of the lifting body model behind it.

Reed conducted his flight tests at Sterk's Ranch in nearby Lancaster while his wife, Donna, documented the demonstrations with an 8-millimeter motion picture camera. When the R/C tow plane reached a sufficient altitude for extended gliding flight, a vacuum timer released the lifting body model from the towline. The lifting body demonstrated stable flight and landing characteristics, inspiring Reed and other researchers to pursue development of a full-scale, piloted lifting body, dubbed the M2-F1.[884] Reed's R/C model experiments provided a low-cost demonstration capability for a revolutionary concept. Success with the model built confidence in proposals for a full-scale lifting body. Essentially, the model was scaled up to a length of 20 feet, with a span of 14.167 feet. A tubular steel framework provided internal support for the cockpit and landing gear. The outer shell consisted of mahogany ribs and spars covered with plywood and doped cloth skin. As with the small model, the full-scale M2-F1 was towed into the air—first behind a Pontiac convertible and later behind a C-47 transport for extended glide flights. Just as the models paved the way for full-scale, piloted testing, the M2-F1 served as a pathfinder for a series of air-launched heavyweight lifting body vehicles—flown between 1966 and 1975—that provided data eventually used in development of the Space Shuttle and other aerospace vehicles.[885]

Radio-controlled mother ship and models of Hyper III and M2-F2 on lakebed with research staff. Left to right: Richard C. Eldredge, Dale Reed, James O. Newman, and Bob McDonald. NASA.

By 1969, Reed had teamed with Dick Eldredge, one of the original engineers from the M2-F1 project, for a series of studies involving modeling spacecraft-landing techniques. Still seeking alternatives to splashdown, the pair experimented with deployable wings and paraglider concepts. Reed discussed his ideas with Max Faget, director of engineering at the Manned Spacecraft Center (now NASA Johnson Space Center) in Houston, TX. Faget, who had played a major role in designing the Mercury, Gemini, and Apollo spacecraft, had proposed a Gemini-derived vehicle capable of carrying 12 astronauts. Known as the "Big G," it was to be flown to a landing beneath a gliding parachute canopy.

Reed proposed a single-pilot test vehicle to demonstrate paraglider-landing techniques similar to those used with his models. The Parawing demonstrator would be launched from a helicopter and glide to a landing beneath a Rogallo wing, as used in typical hang glider designs. Spacecraft-type viewports would provide visibility for realistic simulation of Big G design characteristics.[886] Faget offered to lend a borrowed Navy SH-3A helicopter—one being used to support the Apollo program—to the Flight Research Center and to provide enough money for several Rogallo parafoils. Hugh Jackson was selected as project pilot, but for safety reasons, Reed suggested that the test vehicle initially be flown by radio control with a dummy on board.

Eldredge designed the Parawing vehicle, incorporating a generic ogival lifting body shape with an aluminum internal support structure, Gemini-style viewing ports, a pilot's seat mounted on surplus shock struts from Apollo crew couches, and landing skids. A general-aviation autopilot servo was used to actuate the parachute control lines. A side stick controller was installed to control the servo. On planned piloted flights, it would be hand-actuated, but in the test configuration, model airplane servos were used to move the side stick. For realism, engineers placed an anthropomorphic dummy in the pilot's seat and tied the dummy's hands in its lap to prevent interference with the controls. The dummy and airframe were instrumented to record accelerations, decelerations, and shock loads as the parachute opened.

The Parawing test vehicle was then mounted on the side of the helicopter using a pneumatic hook release borrowed from the M2-F2 lifting body launch adapter. Donald Mallick and Bruce Peterson flew the SH-3A to an altitude of approximately 10,000 feet and released the Parawing test vehicle above Rosamond Dry Lake. Using his R/C model controls, Reed guided the craft to a safe landing. He and Eldredge conducted 30 successful radio-controlled test flights between February and October 1969. Shortly before the first scheduled piloted tests were to take place, however, officials at the Manned Spacecraft Center canceled the project. The next planned piloted spacecraft, the Space Shuttle orbiter, would be designed to land on a runway like a conventional airplane. There was no need to pursue a paraglider system.[887] This, however, did not spell the end of Reed's paraglider research. A few decades later, he would again find himself involved with paraglider recovery systems for the Spacecraft Autoland Project and the X-38 Crew Return Vehicle technology demonstration.

Hyper III: The First True RPRV

In support of the lifting body program, Dale Reed had built a small fleet of models, including variations on the M2-F2 and FDL-7 concepts. The M2-F2 was a half cone with twin stabilizer fins like the M2-F1 but with the cockpit bulge moved forward from midfuselage to the nose. The full-scale heavyweight M2-F2 suffered some stability problems and eventually crashed, although it was later rebuilt as the M2-F3 with an additional vertical stabilizer. The FDL-7 had a sleek shape, somewhat resembling a flatiron, with four stabilizer fins: two horizontal and two canted outward. Engineers at the Air Force Flight Dynamics Laboratory at Wright-Patterson Air Force Base, OH, designed it with hypersonic-flight characteristics in mind. Variants included wingless versions as well as those equipped with fixed or pop-out wings for extended gliding.[888] Reed launched his creations from a twin-engine R/C model plane he dubbed "Mother," since it served as a mother ship for his lifting body models. With a 10.5-foot wingspan, Mother was capable of lofting models of various sizes to useful altitudes for extended glide flights. By the end of 1968, Reed's mother ship had successfully made 120 drops from an altitude of around 1,000 feet.

One day, Reed asked research pilot Milton O. Thompson if he thought he would be able to control a research airplane from the ground using an attitude-indicator instrument as a reference. Thompson thought this was possible and agreed to try it using Reed's mother ship. Within a month, at a cost of $500, Mother was modified, and Thompson had successfully demonstrated the ability to fly the craft from the ground using the instrument reference.[889] Next, Reed wanted to explore the possibility of flying a full-scale research airplane from a ground cockpit. Because of his interest in lifting bodies, he selected a simplified variant of the FDL-7 configuration based on research accomplished at NASA Langley Research Center. Known as Hyper III—because the shape would have a lift-to-drag (L/D) ratio of 3.0 at hypersonic speeds—the test vehicle had a 32-foot-long fuselage with a narrow delta planform and trapezoidal cross-section, stabilizer fins, and fixed straight wings spanning 18.5 feet to simulate pop-out airfoils that could be used to improve the low-speed glide ratio of a reentry vehicle. The Hyper III RPRV weighed about 1,000 pounds.[890]
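The L/D figure embedded in the vehicle's name translates directly into glide performance. In a steady glide from altitude h, the still-air range is approximately

\[ R \approx \frac{L}{D}\,h, \qquad \tan\gamma = \frac{D}{L} \]

so at the hypersonic L/D of 3.0, each mile of altitude yields roughly 3 miles of range on a glidepath of about 18 degrees (illustrative arithmetic, not program data). The fixed wings simulated the pop-out airfoils that would raise the low-speed glide ratio for landing.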

Reed recruited numerous volunteers for his low-budget, low-priority project. Dick Fischer, a designer of R/C models as well as full-scale homebuilt aircraft, joined the team as operations engineer and designed the vehicle's structure. With previous control-system engineering experience on the X-15, Bill "Pete" Peterson designed a control system for the Hyper III. Reed also recruited aircraft inspector Ed Browne, painter Billy Schuler, crew chief Herman Dorr, and mechanics Willard Dives, Bill Mersereau, and Herb Scott.

The craft was built in the Flight Research Center's fabrication shops. Frank McDonald and Howard Curtis assembled the fuselage, consisting of a Dacron-covered, steel-tube frame with a molded fiberglass nose assembly. LaVern Kelly constructed the stabilizer fins from sheet aluminum. Daniel Garrabrant borrowed and assembled aluminum wings from an HP-11 sailplane kit. The vehicle was built at a cost of just $6,500 and without interfering with the Center's other, higher-priority projects.[891] The team managed to scrounge and recycle a variety of items for the vehicle's control system. These included a Kraft uplink from a model airplane radio-control system and miniature hydraulic pumps from the Air Force's Precision Recovery Including Maneuvering Entry (PRIME) lifting body program. Peterson designed the Hyper III control system to work from either of two Kraft receivers, mounted on the top and bottom of the vehicle, depending on signal strength. If either malfunctioned or suffered interference, an electronic circuit switched control signals to the operating receiver to actuate the elevons. Keith Anderson modified the PRIME hydraulic actuator system for use on the Hyper III.
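In modern terms, Peterson's dual-receiver arrangement is a signal-strength failover. The following minimal sketch illustrates the selection logic in software; all names, values, and the threshold are illustrative assumptions, since the actual Hyper III switching was done by an analog electronic circuit, not code.

# Sketch of a dual-receiver failover analogous to the Hyper III's
# top- and bottom-mounted Kraft receivers. Names and the threshold
# are illustrative assumptions, not details of the original circuit.

FAILOVER_THRESHOLD = 0.25  # assumed minimum usable signal quality (0 to 1)

def select_elevon_command(primary, secondary):
    """Return the elevon command from whichever receiver is healthy.

    Each argument is a (signal_quality, command) tuple. The primary
    receiver is used unless its signal degrades below the threshold,
    in which case control switches to the secondary receiver.
    """
    p_quality, p_command = primary
    s_quality, s_command = secondary
    if p_quality >= FAILOVER_THRESHOLD:
        return p_command
    if s_quality >= FAILOVER_THRESHOLD:
        return s_command
    return 0.0  # neither receiver usable: hold elevons neutral

# Example: the top receiver suffers interference; the bottom takes over.
print(select_elevon_command((0.10, -2.0), (0.90, 1.5)))  # prints 1.5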

The team also developed an emergency-recovery parachute system in case control of the vehicle was lost. Dave Gold, of Northrop, who had helped design the Apollo spacecraft parachute system, and John Rifenberry, of the Flight Research Center life-support shop, designed a system that included a drogue chute and three main parachutes that would safely lower the vehicle to the ground onto its landing skids. Pyrotechnics expert Chester Bergener assumed responsibility for the drogue's firing system.[892] To test the recovery system, technicians mounted the Hyper III on a flatbed truck and fired the drogue-extraction system while racing across the dry lakebed, but weak radio signals kept the three main chutes from deploying. To test the clustered main parachutes, the team dropped a weight equivalent to the vehicle from a helicopter.

Tom McAlister assembled a ground cockpit with instruments identical to those in a fixed-base flight simulator. An attitude indicator displayed roll, pitch, heading, and sideslip. Other instruments showed airspeed, altitude, angle of attack, and control-surface position. Don Yount and Chuck Bailey installed a 12-channel downlink telemetry system to record data and drive the cockpit instruments. The ground cockpit station was designed to be transported to the landing area on a two-wheeled trailer.[893] On December 12, 1969, Bruce Peterson piloted the SH-3A helicopter that towed the Hyper III to an altitude of 10,000 feet above the lakebed. Hanging at the end of a 400-foot cable, the nose of the Hyper III had a disturbing tendency to drift to one side or another. Reed realized later that he should have added a small drag chute to stabilize the craft's heading prior to launch. Peterson started and stopped forward flight several times until the Hyper III stabilized in a forward climb attitude, downwind with a northerly heading.

As soon as Peterson released the hook, Thompson took control of the lifting body. He flew the vehicle north for 3 miles, then reversed course and steered toward the landing site, covering another 3 miles. During each straight course, Thompson performed pitch doublets and oscillations in order to collect aerodynamic data. Since the Hyper III was not equipped with an onboard video camera, Thompson was forced to fly on instruments alone. Gary Layton, in the Flight Research Center control room, watched the radar data showing the vehicle's position and relayed information to Thompson via radio.

Dick Fischer stood beside Thompson, ready to take control of the Hyper III just before the landing flare using the model airplane radio-control box. Several miles away, the Hyper III was invisible in the hazy sky as it descended toward the lakebed. Thompson called out altitude readings as Fischer strained to see the vehicle. Suddenly, he spotted the lifting body on final approach, just 1,000 feet above the ground. Thompson relinquished control, and Fischer commanded a slight left roll to confirm that he had established radio contact. He then leveled the aircraft and executed a landing flare, bringing the Hyper III down softly on its skids.

Thompson found the experience of flying the RPRV exciting and challenging. After the 3-minute flight, he was as physically and emotionally drained as he had been after piloting first flights in piloted research aircraft. Worries that the lack of motion and visual cues might hurt his piloting performance proved unfounded. It seemed as natural to control the Hyper III on gauges, responding solely to instrument readings, as it did any other airplane or simulator. Twice during the flight, he used his experience to compensate for departures from predicted aerodynamic characteristics when the lift-to-drag ratio proved lower than expected, thus demonstrating the value of having a research pilot at the controls.[894]

Ikhana: Awareness in the National Airspace

Military UAVs are easily adapted for civilian research missions. In November 2006, NASA Dryden obtained a civilian version of the General Atomics MQ-9 Reaper that was subsequently modified and instrumented for research. Proposed missions included supporting Earth science research, fabricating advanced aeronautical technology, and developing capabilities for improving the utility of unmanned aerial systems.

The project team named the aircraft Ikhana, a Native American Choctaw word meaning intelligent, conscious, or aware. The choice was considered descriptive of the research goals NASA had established for the aircraft and its related systems, including collecting data to better understand and model environmental conditions and climate and increasing the ability of unpiloted aircraft to perform advanced missions.

The Ikhana was 36 feet long with a 66-foot wingspan and capable of carrying more than 400 pounds of sensors internally and over 2,000 pounds in external pods. Driven by a 950-horsepower turboprop engine, the aircraft had a maximum speed of 220 knots and was capable of reaching altitudes above 40,000 feet with limited endurance.[1038]

Research pilot Mark Pestana flies the Ikhana from a Ground Control Station at NASA Dryden Flight Research Center. NASA.

Initial experiments included the use of fiber optics for wing shape and temperature sensing, as well as control and structural loads measurements. Six hairlike fibers on the upper surfaces of the Ikhana's wings provided 2,000 strain measurements in real time, allowing researchers to study changes in the shape of the wings during flight. Such sensors have numerous applications for future generations of aircraft and spacecraft. They could be used, for example, to enable adaptive wing-shape control to make an aircraft more aerodynamically efficient for specific flight regimes.[1039]

To fly the Ikhana, NASA purchased a Ground Control Station and satellite communication system for uplinking flight commands and downlinking aircraft and mission data. The GCS was installed in a mobile trailer and, in addition to the pilot's remote cockpit, included computer workstations for scientists and engineers. The ground pilot was linked to the aircraft through a C-band line-of-sight (LOS) data link at ranges up to 150 nautical miles. A Ku-band satellite link allowed for over-the-horizon control. A remote video terminal provided real-time imagery from the aircraft, giving the pilot limited visual input.[1040]

Two NASA pilots, Hernan Posada and Mark Pestana, were initially trained to fly the Ikhana. Posada had 10 years of experience flying Predator vehicles for General Atomics before joining NASA as an Ikhana pilot. Pestana, with over 4,000 flight hours in numerous aircraft types, had never flown a UAS prior to his assignment to the Ikhana project. He found the experience an exciting challenge to his abilities because the lack of vestibular cues and peripheral vision hinders situational awareness and eliminates the pilot's ability to experience such sensations as motion and sink rate.[1041]
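The wing-shape measurement described above rests on a simple beam relation: surface bending strain is proportional to local curvature, and integrating curvature twice along the span recovers the deflected shape. The sketch below shows that reduction under deliberately simplified assumptions (uniform sensor spacing, constant section half-depth, a clamped wing root); NASA's actual flight algorithms were considerably more elaborate.

import numpy as np

# Recover spanwise wing deflection from surface bending strain: the
# principle behind the Ikhana fiber-optic shape sensing. Assumed for
# illustration: uniform sensor spacing dx (ft) and a constant section
# half-depth c (ft); the wing root is treated as clamped.
def deflection_from_strain(strain, dx, c):
    curvature = strain / c                # kappa(x) = eps(x) / c
    slope = np.cumsum(curvature) * dx     # first integration: slope
    return np.cumsum(slope) * dx          # second integration: deflection

# Example: 100 stations of constant 1,000-microstrain bending along a
# 33-ft semispan; constant curvature bows the tip steadily upward.
eps = np.full(100, 1.0e-3)
tip = deflection_from_strain(eps, dx=0.33, c=0.5)[-1]
print(f"estimated tip deflection: {tip:.2f} ft")  # roughly 1.1 ft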

Building on experience with the Altair unpiloted aircraft, NASA developed plans to use the Ikhana for a series of Western States Fire Mission flights. The Autonomous Modular Sensor (AMS), developed by Ames, was key to their success. The AMS is a line scanner with a 12-band spectrometer covering the spectral range from the visible to the near infrared for fire detection and mapping. Digitized data are combined with navigational and inertial sensor data to determine the location and orientation of the sensor. In addition, the data are autonomously processed with geo-rectified topographical information to create a fire intensity map.

Data collected with the AMS are processed onboard the aircraft to provide a finished product formatted according to a geographical information systems standard, which makes it accessible with commonly available programs, such as Google Earth. Data telemetry is downlinked via a Ku-band satellite communications system. After quality-control assessment by scientific personnel in the GCS, the information is transferred to NASA Ames and then made available to remote users via the Internet.
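The GIS-standard formatting that made these products usable in programs such as Google Earth amounts to emitting geo-referenced files in open formats such as KML. A toy illustration of the idea follows; the function, field names, and coordinates are invented for this sketch and are not the actual AMS product format.

# Toy example: wrap a detected fire hotspot as a KML placemark so a
# Google Earth-style viewer can display it. Names and coordinates are
# invented for illustration; this is not the AMS product format.
def hotspot_to_kml(name, lat, lon):
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <Point><coordinates>{lon},{lat},0</coordinates></Point>
  </Placemark>
</kml>"""

# KML expects longitude first; the latitude/longitude here are arbitrary.
print(hotspot_to_kml("Hotspot (illustrative)", 34.05, -118.25))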

After the Ikhana was modified to carry the AMS sensor pod on a wing pylon, technicians integrated and tested all associated hardware and systems. Management personnel at Dryden performed a flight readiness review to ensure that all necessary operational and safety concerns had been addressed. Finally, planners had to obtain permission from the FAA to allow the Ikhana to operate in the national airspace.[1042]

The first four Ikhana flights set a benchmark for establishing criteria for future science operations. During these missions, the aircraft traversed eight western U.S. States, collecting critical fire information and relaying data in near real time to fire incident command teams on the ground as well as to the National Interagency Fire Center in Boise, ID. Sensor data were downlinked to the GCS, transferred to a server at Ames, and autonomously redistributed to a Google Earth data visualization capability—the Common Desktop Environment (CDE)—that served as a Decision Support System (DSS) for fire-data integration and information sharing. This system allowed users to see and use data as little as 10 minutes after collection.

The Google Earth DSS CDE also supplied other real-time fire-related information, including satellite weather data, satellite-based fire data, Remote Automated Weather Station readings, lightning-strike detection data, and other critical fire-database source information. Google Earth imagery layers allowed users to see the locations of manmade structures and population centers in the same display as the fire information. Shareable data and information layers, combined into the CDE, allowed incident commanders and others to make real-time strategy decisions on fire management. Personnel throughout the U.S. who were involved in the mission and imaging efforts also accessed the CDE data. Fire incident commanders used the thermal imagery to develop management strategies, redeploy resources, and direct operations to critical areas such as neighborhoods.[1043] The Western States UAS Fire Missions, carried out by team members from NASA, the U.S. Department of Agriculture Forest Service, the National Interagency Fire Center, the NOAA, the FAA, and General Atomics Aeronautical Systems, Inc., were a resounding success and a historic achievement in the field of unpiloted aircraft technology.

In the first milestone of the project, NASA scientists developed improved imaging and communications processes for delivering near-real-time information to firefighters. NASA's Applied Sciences and Airborne Science programs and the Earth Science Technology Office developed the Autonomous Modular Sensor with the intent of demonstrating its capabilities during the WSFM and later transitioning those capabilities to operational agencies.[1044] The WSFM project team repeatedly demonstrated the utility and flexibility of using a UAS as a tool to aid disaster response personnel through the employment of various platform, sensor, and data-dissemination technologies related to improving near-real-time wildfire observations and intelligence-gathering techniques. Each successive flight expanded the capabilities of the previous missions in platform endurance and range, number of observations made, and flexibility in mission and sensing reconfiguration.

Team members worked with the FAA to safely and efficiently integrate the unmanned aircraft system into the national airspace. NASA pilots flew the Ikhana in close coordination with FAA air traffic controllers, allowing it to maintain safe separation from other aircraft.

WSFM project personnel developed extensive contingency management plans to minimize the risk to the aircraft and the public, including the negotiation of emergency landing rights agreements at three Government airfields and the identification and documentation of over 300 potential emergency landing sites.

The missions included coverage of more than 60 wildfires throughout 8 western States. All missions originated and terminated at Edwards Air Force Base and were operated by NASA crews with support from General Atomics. During the mission series, near-real-time data were provided to Incident Command Teams and the National Interagency Fire Center.[1045] Many fires were revisited during some missions to provide data on time-induced fire progression. Whenever possible, long-duration fire events were imaged on multiple missions to provide long-term fire-monitoring capabilities. Postfire burn-assessment imagery was also collected over various fires to aid teams in fire ecosystem rehabilitation. The project Flight Operations team built relationships with other agencies, which enabled the real-time flight plan changes necessary to avoid hazardous weather, adapt to fire priorities, and avoid conflicts with multiple planned military GPS testing/jamming activities.

Critical, near-real-time fire information allowed Incident Command Teams to redeploy fire-fighting resources, assess the effectiveness of containment operations, and move critical resources, personnel, and equipment away from hazardous fire conditions. During instances in which blinding smoke obscured normal observations, geo-rectified thermal-infrared data enabled the use of Geographic Information Systems or data visualization packages such as Google Earth. The images were collected and fully processed onboard the Ikhana and transmitted via a communications satellite to NASA Ames, where the imagery was served on a NASA Web site and provided in the Google Earth-based CDE for quick and easy access by incident commanders.

The Western States UAS Fire Mission series also gathered critical, coincident data with satellite sensor systems orbiting overhead, allowing for comparison and calibration of those resources with the more sensitive instruments on the Ikhana. The Ikhana UAS proved a versatile platform for carrying research payloads. Since the sensor pod could be reconfigured, the Ikhana was adaptable to a variety of research projects.[1046]