
Dynamic Stability: Early Applications and a Lesson Learned

When Langley began operations of its 12-Foot Free-Flight Tunnel in 1939, it placed a high priority on establishing correlation with full-scale flight results. Immediately, requests came from the Army and Navy for correla­tion of model tests with flight results for the North American BT-9, Brewster XF2A-1, Vought-Sikorsky V-173, Naval Aircraft Factory SBN-1, and Vought Sikorsky XF4U-1. Meanwhile, the NACA used a powered model of the Curtiss P-36 fighter for an in-house calibration of the free-flight process.[466]

The results of the P-36 study were, in general, in fair agreement with airplane flight results, but the dynamic longitudinal stability of the model was found to be greater (more damped) than that of the air­plane, and the effectiveness of the model’s ailerons was less than that for the airplane. Both discrepancies were attributed to aerodynamic defi­ciencies of the model caused by the low Reynolds number of the tun­nel test and led to one of the first significant lessons learned with the free-flight technique. Using the wing airfoil shape (NACA 2210) of the full-scale P-36 for the model resulted in poor wing aerodynamic perfor­mance at the low Reynolds number of the model flight tests. The max­imum lift of the model and the angle of attack for maximum lift were both decreased because of scale effects. As a result, the stall occurred at a slightly lower angle of attack for the model. After this experience, researchers conducted an exhaustive investigation of other airfoils that might have more satisfactory performance at low Reynolds numbers. In planning for subsequent tests, the researchers were trained to antic­ipate the potential existence of scale effects for certain airfoils, even at relatively low angles of attack. As a result of this experience, the wing airfoils of free-flight tunnel models were sometimes modified to airfoil shapes that provided better results at low Reynolds number.[467]
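
The source of these scale effects is the Reynolds number itself. As a point of reference (using the standard definition rather than any figures from the NACA study), the Reynolds number and its model-to-airplane ratio are

$$ Re = \frac{\rho V c}{\mu}, \qquad \frac{Re_{\mathrm{model}}}{Re_{\mathrm{airplane}}} = \frac{V_{m}\,c_{m}}{V_{a}\,c_{a}} \quad \text{(same air properties assumed)}, $$

where $V$ is airspeed and $c$ is the wing chord. Because a free-flight model is both much smaller and much slower than the airplane it represents, its Reynolds number is typically one to two orders of magnitude lower, which is why an airfoil such as the NACA 2210 can lose maximum lift and stall at a lower angle of attack on the model than on the full-scale aircraft.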

General-Aviation Spin Technology

The dramatic changes in aircraft configurations after World War II required almost complete commitment of the Spin Tunnel to development programs for the military, leaving research on light personal-owner aircraft stagnant. In subsequent years, designers had to rely on the database and design guidelines that had been developed from wartime experience. Unfortunately, stall/spin accidents in the general aviation community increased at an alarming rate in the early 1970s. Even more troubling, on several occasions aircraft that had been designed according to the NACA tail-damping power factor (TDPF) criterion exhibited unsatisfactory recovery characteristics, and the introduction of features such as advanced general aviation airfoils raised concern over the technical adequacy of the database for general aviation configurations.

Finally, in the early 1970s, the pressure of new military aircraft devel­opment programs eased, permitting NASA to embark on new studies related to spin technology for general aviation aircraft. A NASA General Aviation Spin Research program was initiated at Langley that focused on the use of radio-control and spin tunnel models to assess the impact of design features on spin and recovery characteristics, and to develop testing techniques that could be used by the industry. The program also included the acquisition of several full-scale aircraft that were modi­fied for spin tests to produce data for correlation with model results.[515]

Involved in a study of spinning characteristics of general-aviation configurations in the 1970s were Langley test pilot Jim Patton, center, and researchers Jim Bowman, left, and Todd Burk. NASA.

One of the key objectives of the program was to evaluate the impact of tail geometry on spin characteristics. The approach taken was to design alternate tail configurations so as to produce variability in the TDPF parameter by changing the vertical and horizontal locations of the horizontal tail. A spin tunnel model of a representative low-wing configuration was constructed with four interchangeable tails, and results for the individual tail configurations were compared with predictions based on the tail design criteria. The range of tails tested included conventional cruciform-tail configurations, low horizontal tail locations, and a T-tail configuration.
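
For orientation, the tail-damping power factor is commonly expressed (in the usual NACA formulation, summarized here rather than drawn from the reports cited in this section) as the product of a tail-damping ratio and an unshielded-rudder volume coefficient:

$$ \mathrm{TDPF} = \mathrm{TDR} \times \mathrm{URVC}, \qquad \mathrm{TDR} = \frac{S_{F}L^{2}}{S_{w}(b/2)^{2}}, \qquad \mathrm{URVC} = \frac{S_{R1}L_{1} + S_{R2}L_{2}}{S_{w}(b/2)}, $$

where $S_{F}$ is the fuselage side area beneath the horizontal tail, $S_{R1}$ and $S_{R2}$ are the unshielded rudder areas, $L$, $L_{1}$, and $L_{2}$ are the corresponding moment arms from the center of gravity, $S_{w}$ is the wing area, and $b$ is the wing span. Raising or lowering the horizontal tail changes how much of the rudder and aft fuselage remains unshielded in a spin, which is why the interchangeable tails described above produced a range of TDPF values.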

As expected, results of the spin tunnel testing indicated that tail configuration had a large influence on spin and recovery characteristics, but many other geometric features also influenced the characteristics, including fuselage cross-sectional shape. In addition, seemingly small configuration features such as wing fillets at the wing trailing-edge juncture with the fuselage had large effects. Importantly, the existing TDPF criterion for light airplanes did not correctly predict the spin recovery characteristics of models for some conditions, especially those in which ailerons were deflected. NASA’s report to the industry following the tests stressed that, based on these results, TDPF should not be used to predict spin recovery characteristics; it did, however, provide a recommended “best practice” approach to the overall design of the airplane tail for spin behavior.[516]

As part of its General Aviation Spin Research program, NASA con­tinued to provide information on the design of emergency spin recovery parachute systems.[517] Parachute diameters and riser line lengths were sized based on free-spinning model results for high and low wing con­figurations and a variety of tail configurations. Additionally, guidelines for the design and implementation of the mechanical systems required for parachute deployment (such as mechanical jaws and pyrotechnic deployment) and release of the parachute were documented.

NASA also encouraged industry to use its spin tunnel facility on a fee-paying basis, and several industry teams took the opportunity to conduct proprietary tests of their configurations in the tunnel. For example, the Beech Aircraft Corporation sponsored the first fee-paid test in the Langley Spin Tunnel: free-spinning model tests of its Model 77 “Skipper” trainer.[518] In such proprietary tests, industry provided the models and personnel for joint participation in the testing.

The Advent of Hypersonic Tunnel and Aeroballistic Facilities

John V. Becker at Langley led the way in the development of conventional hypersonic wind tunnels. He built America’s first hypersonic wind tunnel in 1947, with an 11-inch test section and the capability of Mach 6.9 flow. To T. A. Heppenheimer, it was “a major advance in hypersonics,” because Becker had built the discipline’s first research instrument.[582] Becker and Eugene S. Love followed that success with their design of the 20-Inch Hypersonic Tunnel in 1958. Becker, Love, and their colleagues used the tunnel for the investigation of heat transfer, pressure, and forces acting on inlets and complete models at Mach 6. The facility featured an induction drive system that ran for approximately 15 minutes in a nonreturn circuit operating at 220-550 psia (pounds-force per square inch absolute).[583]

The need for higher Mach numbers led to tunnels that did not rely upon the creation of a flow of air by fans. A counterflow tunnel featured a gun that fired a model into a continual onrushing stream of gas or air, which was an effective tool for supersonic and hypersonic testing. An impulse wind tunnel created high temperature and pressure in a test gas through an explosive release of energy. That expanded gas burst through a nozzle at hypersonic speeds and over a model in the test sec­tion in milliseconds. The two types of impulse tunnels—hotshot and shock—introduced the test gas differently and were important steps in reaching ever-higher speeds, but NASA required even faster tunnels.[584]

The companion to a hotshot tunnel was an arc-jet facility, which was capable of evaluating spacecraft heat shield materials under the extreme heat of planetary reentry. An electric arc preheated the test gas in the stilling chamber upstream of the nozzle to temperatures of 10,000-20,000 °F. Injected under pressure into the nozzle, the heated gas created a flow that was sustainable for several minutes at low densities and supersonic Mach numbers. The electric arc required over 100,000 kilowatts of power. Unlike the hotshot, the arc-jet could operate continuously.[585]

NASA combined these different types of nontraditional tunnels into the Ames Hypersonic Ballistic Range Complex in the 1960s.[586] The Ames Vertical Gun Range (1964) simulated planetary impact with various model-launching guns. Ames researchers used the Hypervelocity Free-Flight Aerodynamic Facility (1965) to examine the aerodynamic characteristics of atmospheric entry and hypervelocity vehicle configurations. The research programs investigated Earth atmosphere entry (Mercury, Gemini, Apollo, and Shuttle), planetary entry (Viking, Pioneer-Venus, Galileo, and Mars Science Lab), supersonic and hypersonic flight (X-15), aerobraking configurations, and scramjet propulsion studies. The Electric Arc Shock Tube (1966) enabled the investigation of the effects of radiation and ionization that occurred during high-velocity atmospheric entries. The shock tube fired a gaseous bullet at a light-gas gun, which fired a small model into the onrushing gas.[587]

The NACA also investigated the use of test gases other than air. Designed by Antonio Ferri, Macon C. Ellis, and Clinton E. Brown, the Gas Dynamics Laboratory at Langley became operational in 1951. One facility was a high-pressure shock tube consisting of a constant-area tube 3.75 inches in diameter, a 20-inch test section, a 14-foot-long high-pressure chamber, and a 70-foot-long low-pressure section. The induction drive system consisted of a central 300-psi tank farm that provided heated fluid flow at a maximum speed of Mach 8 in a nonreturn circuit at a pressure of 20 atmospheres. Langley researchers investigated aerodynamic heating and fluid mechanical problems at speeds above the capability of conventional supersonic wind tunnels to simulate hypersonic and space-reentry conditions. For the space program, NASA used pure nitrogen and helium instead of heated air as the test medium to simulate reentry speeds.[588]

NASA built the similar Ames Thermal Protection Laboratory in the early 1960s to solve reentry materials problems for a new generation of craft, whether designed for Earth reentry or the penetration of the atmo­spheres of the outer planets. A central bank of 10 test cells provided the pressurized flow. Specifically, the Thermal Protection Laboratory found solutions for many vexing heat shield problems associated with the Space Shuttle, interplanetary probes, and intercontinental ballistic missiles.

Called the “suicidal wind tunnel” by Donald D. Baals and William R. Corliss because it was self-destructive, the Ames Voitenko Compressor was the only method for replicating the extreme velocities required for the design of interplanetary space probes. It was based on the Voitenko concept of 1965, in which a high-velocity explosive shaped charge developed for military use is employed to accelerate shock waves. Voitenko’s compressor consisted of a shaped charge, a malleable steel plate, and the test gas. At detonation, the shaped charge exerts pressure on the steel plate to drive it and the test gas forward. Researchers at Ames adapted the Voitenko compressor concept to a self-destroying shock tube consisting of a 66-pound shaped charge and a glass-walled tube 1.25 inches in diameter and 6.5 feet long. Observation of the tunnel in action revealed that the shock wave traveled well ahead of the rapidly disintegrating tube. The velocities generated, upward of 220,000 feet per second, could not be reached by any other method.[589]

The Continuous Flow Hypersonic Tunnel at Langley in 1961. NASA.

Langley, building upon a rich history of research in high-speed flight, started work on two tunnels at the moment of transition from the NACA to NASA. Eugene Love designed the Continuous Flow Hypersonic Tunnel for nonstop operation at Mach 10. A series of compressors pushed high-speed air through a 1.25-inch square nozzle into the 31-inch square test section. A 13,000-kilowatt electric resistance heater raised the air temperature to 1,450 °F in the settling chamber, while large water coolers and channels kept the tunnel walls cool. The tunnel became operational in 1962 and was instrumental in the study of aerodynamic performance and heat transfer on winged reentry vehicles such as the Space Shuttle.[590]

The 8-Foot High-Temperature Structures Tunnel, opened in 1967, permitted full-scale testing of hypersonic and spacecraft components. By burning methane in air at high pressure and through a hypersonic nozzle in the tunnel, Langley researchers could test structures at Mach 7 speeds and at temperatures of 3,000 °F. Too late for the 1960s space program, the tunnel was instrumental in the testing of the insulating tiles used on the Space Shuttle.[591]

NASA researchers Richard R. Heldenfels and E. Barton Geer developed the 9- by 6-Foot Thermal Structures Tunnel to test aircraft and missile structural components operating under the combined effects of aerodynamic heating and loading. The tunnel became operational in 1957 and featured a Mach 3 drive system consisting of 600-psia air stored in a tank farm filled by a high-capacity compressor. The spent air simply exhausted to the atmosphere. Modifications included additional air storage (1957), a high-speed digital data system (1959), a subsonic diffuser (1960), a topping compressor (1961), and a boost heater system that generated temperatures of 2,000 °F (1963). NASA closed the 9- by 6-Foot Thermal Structures Tunnel in September 1971 after metal fatigue in the air storage field led to an explosion that destroyed part of the facility and nearby buildings.[592]

NASA’s wind tunnels contributed to the growing refinement of spacecraft technology. The multiple design changes made during the transition from the Mercury program to the Gemini program and the need for more information on the effects of angle of attack, heat transfer, and surface pressure resulted in a new wind tunnel and flight-test program. Wind tunnel tests of the Gemini spacecraft were conducted in the range of Mach 3.51 to 16.8 at the Langley Unitary Plan Wind Tunnel and at tunnels at AEDC and Cornell University. The flight-test program gathered data from the first four launches and reentries of Gemini spacecraft.[593] Correlation revealed that the two independent sets of data were in agreement.[594]

The Propulsion Perspective

Aerodynamics always constituted an important facet of NACA-NASA GA research, but no less significant was flight propulsion, for the aircraft engine is often termed the “heart” of an airplane. In the 1920s and 1930s, NACA research by Fred Weick, Eastman Jacobs, John Stack, and others had profoundly influenced the efficiency of the piston engine-propeller-cowling combination.[800] Agency work in the early jet age had been no less influential in improving the performance of turbojet, turboshaft, and turbofan engines, producing data judged “essential to industry designers.”[801]

The rapid proliferation of turbofan-powered GA aircraft—over 2,100 of which were in service by 1978, with 250 more being added each year—stimulated even greater attention.[802] NASA swiftly supported development of a specialized computer-based program for assessing engine performance and efficiency. In 1977, for example, Ames Research Center funded development of GASP, the General Aviation Synthesis Program, by the Aerophysics Research Corporation, to compute propulsion system performance for engine sizing and studies of overall aircraft performance. GASP consisted of an overall program routine, ENGSZ, to determine appropriate fanjet engine size, with specialized subroutines such as ENGDT and NACDG assessing engine data and nacelle drag. Additional subroutines treated performance for propeller powerplants, including PWEPLT for piston engines, TURBEG for turboprops, ENGDAT and PERFM for propeller characteristics and performance, GEARBX for gearbox cost and weight, and PNOYS for propeller and engine noise.[803]
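
To make that modular structure concrete, the toy driver below chains an engine-sizing step with engine-data and nacelle-drag steps in the spirit of the routines named above. The function names echo GASP’s subroutine names, but every interface, formula, and number is an invented illustration, not NASA’s actual GASP code.

```python
# Illustrative sketch only: a toy "synthesis" driver echoing GASP's structure
# (an overall routine calling engine-sizing and drag subroutines). All
# interfaces and numbers below are invented for illustration.

def engsz(required_thrust_lbf, thrust_per_unit_area=550.0):
    """Size a fanjet: return a notional fan face area (sq ft)."""
    return required_thrust_lbf / thrust_per_unit_area

def engdt(fan_area_sqft, mach):
    """Return a notional installed thrust (lbf) with a simple Mach lapse."""
    return fan_area_sqft * 550.0 * (1.0 - 0.25 * mach)

def nacdg(fan_area_sqft, mach):
    """Return a notional nacelle drag (lbf) that grows with size and Mach."""
    return 0.02 * fan_area_sqft * 550.0 * (1.0 + mach)

def synthesize(required_thrust_lbf, cruise_mach):
    """Overall routine: size the engine, then net out installed thrust and drag."""
    area = engsz(required_thrust_lbf)
    return {"fan_area_sqft": area,
            "net_cruise_thrust_lbf": engdt(area, cruise_mach) - nacdg(area, cruise_mach)}

if __name__ == "__main__":
    print(synthesize(required_thrust_lbf=3300.0, cruise_mach=0.7))
```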

Such study efforts reflected the increasing numbers of noisy turbine-powered aircraft operating into over 14,500 airports and airfields in the United States, most in suburban areas, as well as the growing cost of aviation fuel and the consequent quest for greater engine efficiency. NASA had long been interested in reducing jet engine noise; the Agency’s first efforts to suppress jet noise dated to the final years of the NACA, in 1957. The needs of the space program had necessarily focused Lewis research primarily on space, but the Center returned vigorously to air-breathing propulsion at the conclusion of the Apollo program, spurred by the widespread introduction of turbofan engines for military and civil purposes and by the onset of the first oil crisis in the wake of the 1973 Arab-Israeli War.

Out of this came a variety of cooperative research efforts and programs, including the congressionally mandated ACEE program (for Aircraft Energy Efficiency, launched in 1975), the NASA-industry QCSEE (for Quiet Clean Short-Haul Experimental Engine) study effort, and the QCGAT (Quiet Clean General Aviation Turbofan) program. All benefited future propulsion studies, the latter two particularly so.[804]

QCGAT, launched in 1975, involved awarding initial study contracts to Garrett AiResearch, General Electric, and Avco Lycoming to explore applying large turbofan technology to GA needs. Next, AiResearch and Avco were selected to build a small turbofan demonstrator engine suitable for GA applications that could meet stringent noise, emissions, and fuel consumption standards using an existing gas-generating engine core. AiResearch and Avco took different approaches, the former with a high-thrust engine suitable for long-range, high-speed, and high-altitude GA aircraft (using as a baseline a stretched Lear 35), and the latter with a lower-thrust engine for a lower, slower, intermediate-range design (based upon a Cessna Citation I). Subsequent testing indicated that each company did an excellent job in meeting the QCGAT program goals, each with particular strengths. The Avco engine was quieter, and both engines bettered the QCGAT emissions goals for carbon monoxide and unburned hydrocarbons. While the Avco engine was “right at the goal” for oxides of nitrogen emissions, the AiResearch engine was higher, though much better than the baseline TFE-731-2 turbofan used for comparative purposes. While the AiResearch engine met sea-level takeoff and design cruise thrust goals, the Avco engine missed both, though its measured numbers were nevertheless “quite respectable.” Overall, NASA considered that the QCGAT program, executed on schedule and within budget, constituted “a very successful NASA joint effort with industry,” concluding that it had “demonstrated that noise need not be a major constraint on the future growth of the GA turbofan fleet.”[805] Subsequently, NASA launched GATE (General Aviation Turbine Engines) to explore other opportunities for the application of small turbine technology to GA, awarding study contracts to AiResearch, Detroit Diesel Allison, Teledyne CAE, and Williams Research.[806] GA propulsion study efforts gained renewed impetus through the Advanced General Aviation Transport Experiment (AGATE) program launched in 1994, which is discussed later in this study.

DAST: Exploring the Limits of Aeroelastic Structural Design

In the early 1970s, researchers at Dryden and NASA Langley Research Center sought to expand the use of RPRVs into the transonic realm. The Drones for Aerodynamic and Structural Testing (DAST) program was conceived as a means of conducting high-risk flight experiments using specially modified Teledyne-Ryan BQM-34E/F Firebee II supersonic target drones to test theoretical data under actual flight conditions. Described by NASA engineers as a “wind-tunnel in the sky,” the DAST program merged advances in electronic remote-control systems with advanced airplane-design techniques. The drones were relatively inexpensive and easy to modify for research purposes and, moreover, were readily available from an existing stock of Navy target drones.[929] The unmodified Firebee II had a maximum speed of Mach 1.1 at sea level and almost Mach 1.8 at 45,000 feet, and was capable of 5 g turns. Firebee II drones in the basic configuration provided baseline data. Researchers modified two vehicles, DAST-1 and DAST-2, to test several wing configurations during maneuvers at transonic speeds in order to compare flight results with theoretical and wind tunnel findings. For captive and free flights, the drones were carried aloft beneath a DC-130A or the NB-52B. The DAST vehicles were equipped with remotely augmented digital flight control systems, research instrumentation, an auxiliary fuel tank for extended range, and a MARS recovery system. On the ground, a pilot controlled the DAST vehicle from a remote cockpit while researchers examined flight data transmitted via pulse-mode telemetry. In the event of a ground computer failure, the DAST vehicle could also be flown using a backup control system in the rear cockpit of a Lockheed F-104B chase plane.[930]

The primary flight control system for DAST was remotely augmented. In this configuration, control laws for augmenting the airplane’s flying characteristics were programmed into a general-purpose computer on the ground. Closed-loop operation was achieved through a telemetry uplink/downlink between the ground cockpit and the vehicle. This technique had previously been tested using the F-15 RPRV.[931] Baseline testing was conducted between November 1975 and June 1977, using an unmodified BQM-34F drone. It was carried aloft three times for captive flights, twice by a DC-130A and once by the NB-52B. These flights gave ground pilot William H. Dana a chance to check out the RPRV systems and practice prelaunch procedures. Finally, on July 28, 1977, the Firebee II was launched from the NB-52B for the first time. Dana flew the vehicle using an unaugmented control mode called Babcock-direct. He found the Firebee less controllable in roll than had been indicated in simulations, but overall performance was better than the simulations had predicted.
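
As a schematic illustration of this closed-loop arrangement, the toy simulation below evaluates a control law on the “ground” from downlinked vehicle states and uplinks a surface command each cycle. The gains, dynamics, and rates are invented for illustration and bear no relation to the actual DAST control laws.

```python
# Toy illustration of a remotely augmented control loop: vehicle states are
# "downlinked" to a ground-based control-law routine, which "uplinks" an
# aileron command every frame. All dynamics and gains are invented.

def ground_control_law(roll_deg, roll_rate_dps, stick_cmd_deg):
    # Proportional-plus-damping augmentation evaluated on the ground computer.
    k_phi, k_p = 0.8, 0.3
    return k_phi * (stick_cmd_deg - roll_deg) - k_p * roll_rate_dps

def simulate(seconds=5.0, dt=0.02, stick_cmd_deg=10.0):
    roll, roll_rate = 0.0, 0.0
    for _ in range(int(seconds / dt)):
        aileron = ground_control_law(roll, roll_rate, stick_cmd_deg)  # uplink
        roll_accel = 4.0 * aileron - 0.5 * roll_rate  # toy roll dynamics
        roll_rate += roll_accel * dt
        roll += roll_rate * dt                        # next downlink frame
    return roll

if __name__ == "__main__":
    print(f"Bank angle after 5 seconds: {simulate():.1f} deg")
```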

Dana successfully transferred control of the drone to Vic Horton in the rear seat of an F-104B chase plane. Horton flew the Firebee through the autopilot to evaluate controllability before transferring control back to Dana just prior to recovery.

Technicians then installed instrumented standard wings, known as the Blue Streak configuration. Thomas C. McMurtry flew a mission March 9, 1979, to evaluate onboard systems such as the autopilot and RAV system. Results were generally good, with some minor issues to be addressed prior to flying the DAST-1 vehicle.[932] The DAST researchers were most interested in correlating theoretical predictions and experi­mental flight results of aeroelastic effects in the transonic speed range. Such tests, particularly those involving wing flutter, would be extremely hazardous with a piloted aircraft.

One modified Firebee airframe, which came to be known as DAST-1, was fitted with a set of swept supercritical wings of a shape optimized for a transport-type aircraft capable of Mach 0.98 at 45,000 feet. The ARW-1 aeroelastic research wing, designed and built by Boeing in Wichita, KS, was equipped with an active flutter-suppression system (FSS). Research goals included validation of active controls technology for flutter suppression, enhancement and verification of transonic flutter prediction techniques, and development of a database for aerodynamic-loads prediction techniques for elastic structures.[933] The basic Firebee drone was controlled through collective and differential horizontal stabilizer and rudder deflections because it had no wing control surfaces. The DAST-1 retained this control system, leaving the ailerons free to perform the flutter suppression function. During fabrication of the wings, it became apparent that torsional stiffness was higher than predicted. To ensure that the flutter boundary remained at an acceptable Mach number, 2-pound ballast weights were added to each wingtip. These weights consisted of containers of lead shot that could be jettisoned to aid recovery from inadvertent large-amplitude wing oscillations. Researchers planned to intentionally fly the DAST-1 beyond its flutter boundary to demonstrate the effectiveness of the FSS.[934]

Along with the remote cockpit, there were two other ground-based facilities for monitoring and controlling the progress of DAST flight tests. Dryden’s Control Room contained radar plot boards for monitoring the flight path, strip charts indicating vehicle rigid-body stability and control and operational functions, and communications equipment for coordinating test activities. A research pilot stationed in the Control Room served as flight director. Engineers monitoring the flutter tests were located in the Structural Analysis Facility (SAF). The SAF accommodated six people, one serving as test director to oversee monitoring of the experiments and communicate directly with the ground pilot.[935]

The DAST-1 was launched for the first time October 2, 1979. Following release from the NB-52B, Tom McMurtry guided the vehicle through FSS checkout maneuvers and a subcritical-flutter investigation. An uplink receiver failure resulted in an unplanned MARS recovery about 8 minutes after launch. The second flight was delayed until March 1980. Again only subcritical-flutter data were obtained, this time because of an unexplained oscillation in the left FSS aileron.[936] During the third flight, unknown to test engineers, the FSS was operating at one-half nominal gain. Misleading instrument indications concealed a trend toward violent flutter conditions at speeds beyond Mach 0.8. As the DAST-1 accelerated to Mach 0.825, rapidly divergent oscillations saturated the FSS ailerons. The pilot jettisoned the wingtip masses, but this failed to arrest the flutter. Less than 6 seconds after the oscillations began, the right wing broke apart, and the vehicle crashed near Cuddeback Dry Lake, CA.

Investigators concluded that erroneous gain settings were the primary cause. The error produced a configuration in which the wing became unstable at lower Mach numbers than anticipated, and the vehicle experienced closed-loop flutter. The ARW-1 wing was rebuilt as the ARW-1R and installed in a second DAST vehicle in order to continue the research program.[937] The DAST-2 underwent a captive systems-checkout flight beneath the wing of the NB-52B on October 29, 1982, followed by a subcritical-flutter envelope expansion flight 5 days later. Unfortunately, the flight had to be aborted early because of unexplained wing structural vibrations and control-system problems. The next three flight attempts were also aborted—the first because of a drone engine temperature warning, the second because of loss of telemetry, and a third time for unspecified reasons prior to taxi.[938] Further testing of the DAST-2 vehicle was conducted using a Navy DC-130A launch aircraft. Following two planned captive flights for systems checkout, the vehicle was ready to fly.

On June 1, 1983, the DC-130A departed Edwards as the crew executed a climbing turn over Mojave and California City. Rogers Smith flew the TF-104G with backup pilot Ray Young, while Einar Enevoldson began preflight preparations from the ground cockpit. The airplanes passed abeam of Cuddeback Dry Lake and north of Barstow, then turned west. The launch occurred a few minutes later over Harper Dry Lake. Immediately after separation from the launch pylon, the drone’s recovery-system drag chute deployed, but the main parachute was jettisoned while still packed in its canister.[939] The drone plummeted to the ground in the middle of a farm field west of the lakebed. It was completely destroyed, but other than the loss of a small patch of alfalfa at the impact site, there was no property damage. Much later, when it was possible to joke about such things, a few wags referred to this event as the “alfalfa impact study.”[940]

An investigation board found that a combination of several improbable anomalies—a design flaw, a procedural error, and a hardware failure—simultaneously contributed to loss of the vehicle. These included an uncommanded recovery signal produced by an electrical spike, failure to reset a drag chute timer, and improper grounding of an electrical relay. Another section of the investigation focused on project management issues. Criticism of Dryden’s DAST program management was hotly debated, and several dissenting opinions were filed along with the main report.[941]

Throughout its history, the DAST program was plagued by difficulties. Between December 1973 and November 1983, five different project managers oversaw the program. As early as December 1978, Dryden’s Center Director, Isaac T. Gillam, had requested chief engineer Milton O. Thompson and chief counsel John C. Mathews to investigate management problems associated with the project. This resulted from the project team’s failure to meet an October 1978 flight date for the Blue Streak wing, Langley managers’ concern that Dryden was not properly discharging its project obligations, repeated requests by the project manager for schedule slips, and various other indications that the project was in a general state of confusion. The resulting report indicated that problems had been caused by a lack of effective planning at Dryden, exacerbated by poor internal communication among project personnel.[942] Only 7 flights were achieved in 10 years. Several flights were aborted for various reasons, and two vehicles crashed, problems that drove up testing costs. Meanwhile, flight experiments with higher-profile, better-funded remotely piloted research vehicles took priority over DAST missions at Dryden. Organizational upheaval also took a toll, as Dryden was consolidated with Ames Research Center in 1981 and responsibility for projects was transferred to the Flight Operations Directorate in 1983.

Exceptionally good test data had been obtained through the DAST program but not in an efficient and timely manner. Initially, the Firebee drone was selected for use in the DAST project in the belief that it offered a quick and reasonably inexpensive option for conducting a task too hazardous for a piloted vehicle. Experience proved, however, that using off-the-shelf hardware did not guarantee expected results. Just getting the vehicle to fly was far more difficult and far less successful than originally anticipated.[943] Hardware delays created additional difficulties. The Blue Streak wing was not delivered until mid-1978. The ARW-1 wing arrived in April 1979, 1½ years behind schedule, and was not flown until 6 months later. Following the loss of the DAST-1 vehicle, the program was delayed nearly 2 years until delivery of the ARW-1R wing. After the 1983 crash, the program was terminated.[944]

Airlines and the Jet Age

In the 1930s, the NACA had conducted research on engine cowlings that improved cooling while reducing drag. This led to improvements in airliner speed and economy, which in turn led to increased capacity and greater acceptance by the traveling public; airliners were as fast as the fighters of the early Depression era. In World War II, the NACA shifted its research focus to military needs, the most challenging being the turbojet, which almost doubled potential top speeds. In civil aviation, postwar propeller-driven airliners could span the continent and the oceans, but at 300 mph. Initial attempts to install turbojets in straight-winged airliners failed because of the fuel inefficiency of the jets and the increased drag at jet speeds; the loss of life in the mysterious crashes of three British jet-propelled Comets did not instill confidence. Practical jet airliners had to wait for more efficient engines and a better understanding of high subsonic speeds at high altitudes. NACA aeronautical research of the early 1950s helped provide the latter; the drive toward higher speed in military aircraft provided the impetus for the engine improvements. Boeing’s business gamble in funding the 367-80 demonstrator, which first flew in 1954, triggered the avalanche of jet airliner designs. Airlines began to buy the prospective aircraft by the dozens; because the Civil Aeronautics Board (CAB) mandated all ticket prices in the United States, an airline could not afford to be left behind if its competitors offered travel times significantly shorter than those of its propeller-driven fleet. Once passengers were exposed to the low vibration and noise levels of the turbine powerplants, compared with the dozens of reciprocating cylinders and multiple noisy propellers of the piston-engine airliners, the outcome was further cemented. By the mid-1950s, the jet revolution was imminent in the civil aviation world.

In late 1958, commercial transcontinental and transatlantic jet service began out of New York City, but it was not an easy start. Turbojet noise to ground bystanders during takeoffs and landings was not a concern to the military; it was to the New York City airport authorities. “Organ pipe” sound suppressors were mandated, which reduced engine performance and cost the airlines money; even with them, special flight procedures were required to minimize residential noise footprints, requiring numerous flight demonstrations and even weight limitations for takeoffs. The 707 was larger than the newly redesigned British Comet and hence noisier; final approval to operate the 707 from Idlewild was given at the last minute, and the delay helped give the British aircraft “bragging rights” on transatlantic jet service.[1063]

Other jet characteristics were also a concern to operators and air traffic control (ATC) alike. Higher jet speeds would give the pilots less time to avoid potential collisions if they relied on visual detection alone. A high-profile midair collision between a DC-7 and Constellation over the Grand Canyon in 1956 highlighted this problem. Onboard colli­sion warning systems using either radar or infrared had been in devel­opment since 1954, but no choice had been made for mandatory use. Long-distance jet operations were fuel critical; early jet transatlantic flights frequently had to make unplanned landings en route to refuel. Jets could not endure lengthy waits in holding patterns; hence, ATC had to plan on integrating increasingly dense traffic around popular desti­nations, with some of the traffic traveling at significantly higher speeds and potentially requiring priority. A common solution to the traffic problems was to provide ground radar coverage across the country and to better automate the ATC sequencing of flight traffic. This was being introduced as the jet airliner was introduced; a no-survivors midair col­lision between a United Airlines DC-8 jetliner and a Constellation, this time over Staten Island, NY, was widely televised and emphasized the importance of ATC modernization.11

NACA research by Richard Whitcomb that led to the area rule had been used by Convair in reducing drag on the F-102 so it would go supersonic. It was also used to make the B-58 design more efficient so that it had a significant range at Mach 2, propelled by four afterburning General Electric J79 turbojets. Convair had been busy with these military projects and was late in the jet airliner market. It decided that a smaller, medium-range airliner could carve out a niche. An initial design appeared as the Convair 880 but did not attract much interest. The decision was made to develop a larger aircraft, the Convair 990, which employed non-afterburning J79s with an added aft fan to reap the developing turbofan engines’ advantages of increased fuel efficiency and decreased sideline noise. Furthermore, the aircraft would employ Whitcomb’s area rule concepts (including so-called shock bodies on its wings, something it shared with the Soviet Union’s Tupolev bombers) to allow it to cruise efficiently some 60-80 mph faster than the 707 and the DC-8, leading to a timesavings on long-haul routes.[1064] The aircraft had a higher cruise speed and some limited success in the marketplace, but the military-derived engine had poor fuel economics even with a fan and without an afterburner, was still very noisy, and generated enough black smoke on approach that casual observers often thought the aircraft was on fire (something it shared with its military counterpart, which generated so much smoke that McDonnell F-4 Phantoms often had their position given away by an accusing finger of sooty smoke). The potential trip timesavings was not adequate to compensate for those shortcomings. The lesson the airline industry learned was that, in an age of regulated common airline ticket prices, any speed increase would have to be sufficiently great to produce a significant timesavings and justify a ticket surcharge. The latter was a double-edged sword, because one might lose market share to non-high-speed competitors.[1065]

Sensor Fusion Arrives

Integrating an External Vision System was an overarching goal of the HSR program. The XVS would include advanced television and infrared cameras, passive millimeter-wave radar, and other cutting-edge sensors, fused with an onboard database of navigation information, obstacles, and topography. It would thus furnish a complete, synthetically derived view for the aircrew and associated display symbologies in real time. The pilot would be presented with a visual meteorological conditions view of the world on a large display screen in the flight deck, simulating a front window. Regardless of actual ambient meteorological conditions, the pilot would thus “see” a clear daylight scene, made possible by combining appropriate sensor signals; synthetic scenes derived from the high-resolution terrain, navigation, and obstacle databases; and head-up symbology (airspeed, altitude, velocity vector, etc.) provided by symbol generators. Precise GPS navigation input would complete the system. All of these inputs would be processed and displayed in real time (on the order of 20-30 milliseconds) on the large “virtual window” displays. During the HSR program, Langley did not develop the sensor fusion technology before program termination and, as a result, moved in the direction of integrating the synthetic database-derived view with sophisticated display symbologies, redefining the implementation of the primary flight display and navigation display. Part of the problem with developing the sensor fusion algorithms was the perceived need for large, expensive computers. Langley continued on this path when the Synthetic Vision Systems project was initiated under NASA’s Aviation Safety Program in 1999 and achieved remarkable results in SVS architecture, display development, human factors engineering, and flight deck integration in both GA and CBA domains.[1164]

Simultaneously with these efforts, JSC was developing the X-38 unpiloted lifting body/parafoil recovery reentry vehicle. The X-38 was a technology demonstrator for a proposed orbital crew rescue vehicle
that could, in an emergency, return up to seven astronauts to Earth, a veritable space-based lifeboat. NASA planners had forecast a need for such a rescue craft in the early days of planning for Space Station Freedom (subsequently the International Space Station). Under a Langley study program for the Space Station Freedom Crew Emergency Rescue Vehicle (CERV, later shortened to CRV), Agency engineers and research pilots had undertaken extensive simulation studies of one candidate shape, the HL-20 lifting body, whose design was based on the general aerodynamic shape of the Soviet Union’s BOR-4 subscale spaceplane.[1165] The HL-20 did not proceed beyond these tests and a full-scale mockup. Instead, Agency attention turned to another escape vehicle concept, one essentially identical in shape to the nearly four-decade-old body shape of the Martin SV-5D hypersonic lifting reentry test vehicle, sponsored by NASA’s Johnson Space Center. The Johnson configuration spawned its own two-phase demonstrator research effort, the X-38 program: the first phase covered a series of subsonic drop shapes air-launched from NASA’s NB-52B Stratofortress, and the second an orbital reentry shape to be test-launched from the Space Shuttle from a high-inclination orbit. But while tests of the former did occur at the NASA Dryden Flight Research Center (DFRC) in the late 1990s, the fully developed orbital craft did not proceed to development and orbital test.[1166]

To remotely pilot this vehicle during its flight-testing at Dryden, project engineers were developing a system displaying the required navigation and control data. Television cameras in the nose of the X-38 provided a data link video signal to a control flight deck on the ground. Video signals alone, however, were insufficient for the remote pilot to perform all the test and control maneuvers, including “flap turns” and “heading hold” commands during the parafoil phase of flight. More information on the display monitor would be needed. Further complications arose because of the design of the X-38: the crew members would be lying on their backs, looking at displays on the “ceiling” of the vehicle. Accordingly, a team led by JSC X-38 Deputy Avionics Lead Frank J. Delgado was tasked with developing a display system allowing the pilot to control the X-38 from a perspective 90 degrees to the vehicle direction of travel. On the cockpit design team were NASA astronauts Rick Husband (subsequently lost in the Columbia reentry disaster), Scott Altman, and Ken Ham, and JSC engineer Jeffrey Fox.

Delgado solicited industry assistance with the project. Rapid Imaging Software, Inc., a firm already working with imaginative synthetic vision concepts, received a Phase II Small Business Innovation Research (SBIR) contract to develop the display architecture. RIS subsequently developed LandForm VisualFlight, which blended “the power of a geographic information system with the speed of a flight simulator to transform a user’s desktop computer into a ‘virtual cockpit.’”[1167] It consisted of “symbology fusion” software and 3-D “out-the-window” and NAV display presentations operating using a standard Microsoft Windows-based central processing unit (CPU). JSC and RIS were on the path to developing true sensor fusion in the near future, blending a full SVS database with live video signals. The system required a remote, ground-based control cockpit, so Jeff Fox procured an extended van from the JSC motor pool. This vehicle, officially known as the X-38 Remote Cockpit Van, was nicknamed the “Vomit Van” by those poor souls driving around lying on their backs practicing flying a simulated X-38. By spring 2002, JSC was flying the X-38 from the Remote Cockpit Van using an SVS NAV Display, an SVS out-the-window display, and a video display developed by RIS. NASA astronaut Ken Ham judged it as furnishing the “best seat in the house” during X-38 glide flights.[1168]

Indeed, during the X-38 testing, a serendipitous event demon­strated the value of sensor fusion. After release from the NASA NB-52B Stratofortress, the lens of the onboard X-38 television camera became partially covered in frost, occluding over 50 percent of the FOV. This would have proved problematic for the pilot had orienting symbology
not been available in the displays. Synthetic symbology, including spatial entities identifying keep-out zones and runway outlines, provided the pilot with a synthetic scene replacing the occluded camera image. This foreshadowed the concept of sensor fusion, in which, for example, blossoming as the camera traversed the Sun could be “blended” out, and haze obscuration could be minimized by adjusting the degree of synthetic blend from 0 to 100 percent.[1169]
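
The blending idea reads naturally as a per-pixel weighted sum of live camera video and synthetic imagery. The sketch below is a generic illustration of that 0-to-100-percent blend, not the RIS or Langley implementation.

```python
# Generic per-pixel blend of live camera video with a synthetic scene.
# blend = 0.0 shows only camera video; blend = 1.0 shows only synthetic imagery.
import numpy as np

def blend_frames(camera: np.ndarray, synthetic: np.ndarray, blend: float) -> np.ndarray:
    """Return a weighted mix of two same-shape RGB frames (uint8)."""
    blend = float(np.clip(blend, 0.0, 1.0))
    mixed = (1.0 - blend) * camera.astype(np.float32) + blend * synthetic.astype(np.float32)
    return mixed.astype(np.uint8)

if __name__ == "__main__":
    cam = np.full((480, 640, 3), 40, dtype=np.uint8)    # stand-in "hazy" video frame
    syn = np.full((480, 640, 3), 200, dtype=np.uint8)   # stand-in synthetic scene
    print(blend_frames(cam, syn, 0.6).mean())           # 60 percent synthetic
```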

But then, on April 29, 2002, faced with rising costs for the International Space Station, NASA canceled the X-38 program.[1170] Surprisingly, the cancellation did not have the deleterious impact upon sensor fusion development that might have been anticipated. Instead, program members Jeff Fox and Eric Boe secured temporary support via the Johnson Center Director’s discretionary fund to keep the X-38 Remote Cockpit Van operating. Mike Abernathy, president of RIS, was eager to continue his company’s sensor fusion work. He supported their efforts, as did Patrick Laport of Aerospace Applications North America (AANA). For the next 2 years, Fox and electronics technician James B. Secor continued to improve the van, working on a not-to-interfere basis with their other duties. In July 2004, Fox secured further Agency funding to convert the remote cockpit, now renamed, at Boe’s suggestion, the Advanced Cockpit Evaluation System (ACES). It was rebuilt with a single, upright seat affording a 180-degree FOV visual system with five large surplus monitors. An array of five cameras was mounted on the roof of the van, and its input could be blended in real time with new RIS software to form a complete sensor fusion package for the wraparound monitors or a helmet-mounted display.[1171] Subsequently, tests with this van demonstrated true sensor fusion. Now, the team looked for another flight project it could use to demonstrate the value of SVS.

Its first opportunity came in November 2004, at Creech Air Force Base in Nevada. Formerly known as Indian Springs Auxiliary Air Field, a backwater corner of the Nellis Air Force Base range, Creech had risen
to prominence after the attacks of 9/11, as it was the Air Force’s center of excellence for unmanned aerial vehicle (UAV) operations. It used, as its showcase, the General Atomics Predator UAV. The Predator, modi­fied as a Hellfire-armed attack system, had proven a vital component of the global war on terrorism. With UAVs increasing dramatically in their capabilities, it was natural that the UAV community at Nellis would be interested in the work of the ACES team. Traveling to Nevada to demon­strate its technology to the Air Force, the JSC team used the ACES van in a flight-following mode, receiving downlink video from a Predator UAV. That video was then blended with synthetic terrain database inputs to provide a 180-degree FOV scene for the pilot. The Air Force’s Predator pilots found the ACES system far superior to the narrow-view perspec­tive they then had available for landing the UAV.

In 2005, astronaut Eric Boe began training for a Shuttle flight and left the group, replaced by the author, who had spent over 10 years at Langley as a project or research pilot on all of that Center’s SVS and XVS projects. The author transferred to JSC from Langley in 2004 as a research pilot and learned of the Center’s SVS work from Boe. The author’s involvement with the JSC group linked Langley’s and JSC’s SVS efforts, for he provided the JSC group with his experience with Langley’s SVS research.

That spring, a former X-38 cooperative student—Michael Coffman, now an engineer at the FAA’s Mike Monroney Aeronautical Center in Oklahoma City—serendipitously visited Fox at JSC. They discussed using the sensor fusion technology for the FAA’s flight-check mission. Coffman, Fox, and Boe briefed Thomas C. Accardi, Director of Aviation Systems Standards at FAA Oklahoma City, on the sensor fusion work at JSC, and he was interested in its possibilities. Fox seized this opportu­nity to establish a memorandum of understanding (MOU) among the Johnson Space Center, the Mike Monroney Aeronautical Center, RIS, and AANA. All parties would work on a quid pro quo basis, sharing intellectual and physical resources where appropriate, without fund­ing necessarily changing hands. Signed in July 2005, this arrangement was unique in its scope and, as will be seen, its ability to allow contrac­tors and Government agencies to work together without cost. JSC and FAA Oklahoma City management had complete trust in their employ­ees, and both RIS and AANA were willing to work without compensa­tion, predicated on their faith in their product and the likely potential return on their investment, effectively a Skunk Works approach taken to the extreme. The stage was set for major SVS accomplishments, for
during this same period, huge strides in SVS development had been made at Langley, which is where this narrative now returns.[1172]

Aircraft Ice Protection

The Aircraft Ice Protection program focuses on two main areas: devel­opment of remote sensing technologies to measure nearby icing con­ditions, improve current forecast capabilities, and develop systems to transfer and display that information to flight crews, flight controllers, and dispatchers; and development of systems to monitor and assess aircraft performance, notify the cockpit crew about the state of the
aircraft, and/or automatically alter the aircraft controlling systems to prevent stall or loss of control in an icing environment. Keeping those two focus areas in mind, the Aircraft Ice Protection program is subdi­vided to work on these three goals:

• Provide flight crews with real-time icing weather infor­mation so they can avoid the hazard in the first place or find the quickest way out.[1265]

• Improve the ability of an aircraft to operate safely in icing conditions.[1266]

Подпись: 12Improve icing simulation capabilities by develop­ing better instrumentation and measurement tech­niques to characterize atmospheric icing conditions, which also will provide icing weather validation data­bases, and increase basic knowledge of icing physics.[1267]

In terms of remote sensing, the top level goals of this activity are to develop and field-test two forms of remote sensing system technologies that can reduce the exposure of aircraft to in-flight icing hazards. The first technology would be ground based and provide coverage in
a limited terminal area to protect all vehicles. The second technology would be airborne and provide unrestricted flightpath coverage for a commuter class aircraft. In most cases the icing hazard to aircraft is minimized with either de-icing or anti-icing procedures, or by avoid­ing any known icing or possible icing areas altogether. However, being able to avoid the icing hazard depends much on the quality and timing of the latest observed and forecast weather conditions. And once stuck in a severe icing hazard zone, the pilot must have enough information to know how to get out of the area before the aircraft’s ice protection systems are overwhelmed. One way to address these problem areas is to remotely detect icing potential and present the information to the pilot in a clear, easily understood manner. Such systems would allow the pilot to avoid icing conditions and also allow rapid escape from icing if severe conditions were encountered.[1268]

Fifth Generation: The F-22 Program

The Air Force initiated its Advanced Tactical Fighter (ATF) program in 1985 as an effort to augment and ultimately replace the F-15. During the competitive phase of the program between the Northrop-led YF-23 and the Lockheed-led YF-22 designs, the Air Force established that each team could draw on the facilities and expertise of NASA to build credibility and reduce risk before a competitive fly-off. Lockheed subsequently requested free-flight and spin tests of the YF-22 in the Langley Full-Scale Tunnel and the Langley Spin Tunnel. The relatively
compressed timeframe of the ATF competition would not permit a feasible schedule for the fabrication and testing of a helicopter drop model of the YF-22.

A joint NASA-Lockheed team conducted conventional tunnel tests in the Full-Scale Tunnel in 1989 to measure YF-22 aerodynamic data for high-angle-of-attack conditions, followed by free-flight model studies to determine the low-speed departure resistance of the configuration. Meanwhile, spin tunnel tests obtained information on spin and recovery characteristics as well as the size and location of an emergency spin recovery parachute for the high-angle-of-attack test airplane. In addition, specialized “rotary-balance” tests were conducted in the spin tunnel to obtain aerodynamic data during simulated spin motions. Lockheed incorporated all of the foregoing results in the design process, leading to an impressive display of capabilities by the YF-22 during the competitive flight demonstrations in 1990.

Lockheed formally acknowledged its appreciation of NASA’s participation in the YF-22 program in a letter to NASA, which stated:

On behalf of the Lockheed YF-22 Team, I would like to express our appreciation of the contribution that the people of NASA Langley made to our successful YF-22 flight test program, and provide some feedback on how well the flight test measurements agreed with the predictions from your wind-tunnel measurements. . . . The highlight of the flight test program was the high-angle-of-attack flying qualities. We relied on aerodynamic data obtained in the full-scale wind tunnel to define the low-speed, high-angle-of-attack static and dynamic aerodynamic derivatives; rotary derivatives from your spin tunnel; and free-flight demonstrations in the full-scale tunnel. We expanded the flight envelope from 20° to 60° angle of attack, demonstrating pitch attitude changes and full-stick rolls about the velocity vector in seven calendar days. The reason for this rapid envelope expansion was the quality of the aerodynamic data used in the control law design and pre-flight simulations.[1322]

Free-flight model tests of the YF-22 in the Full-Scale Tunnel accurately predicted the high-alpha maneuverability of the full-scale airplane and provided risk reduction for the F-22 program. NASA.

After the team of Lockheed, Boeing, and General Dynamics was announced as the winner of the ATF competition in April 1991, high-angle-of-attack testing of the final F-22 configuration was conducted in the Full-Scale Tunnel and the Spin Tunnel. Aerodynamic force testing was completed in the Full-Scale Tunnel in 1992, with spin- and rotary-balance tests conducted in 1993. A wind tunnel free-flight model was not fabricated for the F-22 program, but a typical full-scale tunnel model was constructed and used for the aerodynamic tests. A notable contribution from the spin tunnel tests was a relocation of the attachment point for the F-22 emergency spin recovery parachute to clear the exhaust plume of the vectoring engine in 1994. Langley’s contributions to the high-angle-of-attack technologies embodied in the F-22 fighter had been completed well in advance of the aircraft’s first flight in September 1997.[1323]

Fatal Accident #2

The remaining XV-5A was rigged with a pilot-operated rescue hoist, located on the left side of the fuselage just ahead of the wing fan. An evaluation test pilot was fatally injured during the test program while performing a low-speed, steep-descent “pick-up” maneuver at Edwards AFB. The heavily weighted rescue collar was ingested into the left wing fan as the pilot descended and simultaneously played out the collar. The damaged fan continued to rotate, but the resultant loss in fan lift caused the aircraft to roll left and settle toward the ground. The pilot apparently leveled the wings and applied full power and up-collective to correct for the left wing-fan lift loss. The damaged left fan produced enough lift to hold the wings level and somewhat reduce the ensuing descent rate. The pilot elected to eject from the aircraft as it approached the ground in this wings-level attitude. As the pilot released the right-stick displacement and initiated the ejection, the aircraft rolled back to the left, which caused the ejection seat trajectory to veer off onto a path parallel to the ground. The seat impacted the ground, and the pilot did not survive the ejection. Post-accident analysis revealed that despite the ingestion of the rescue collar and its weight, the wing fan continued to operate and produce enough lift force to hold a wings-level roll attitude and reduce the descent rate to a value that might have allowed the pilot to survive the ensuing “emergency landing” had he stayed with the aircraft. This was a grim testimony to the ruggedness of the lift fan.

Tupolev-144 SST on takeoff from Zhukovsky Air Development Center in Russia with a NASA pilot at the controls. NASA.

 
