
The Advent of Hypersonic Tunnel and Aeroballistic Facilities

John V. Becker at Langley led the way in the development of conventional hypersonic wind tunnels. He built America's first hypersonic wind tunnel in 1947, with an 11-inch test section and the capability of Mach 6.9 flow. To T. A. Heppenheimer, it was "a major advance in hypersonics," because Becker had built the discipline's first research instrument.[582] Becker and Eugene S. Love followed that success with their design of the 20-Inch Hypersonic Tunnel in 1958. Becker, Love, and their colleagues used the tunnel for the investigation of heat transfer, pressure, and forces acting on inlets and complete models at Mach 6. The facility featured an induction drive system that ran for approximately 15 minutes in a nonreturn circuit operating at 220-550 psia (pounds-force per square inch absolute).[583]

The need for higher Mach numbers led to tunnels that did not rely upon the creation of a flow of air by fans. A counterflow tunnel featured a gun that fired a model into a continuous onrushing stream of gas or air, an effective tool for supersonic and hypersonic testing. An impulse wind tunnel created high temperature and pressure in a test gas through an explosive release of energy. That expanded gas burst through a nozzle at hypersonic speeds and over a model in the test section in milliseconds. The two types of impulse tunnels—hotshot and shock—introduced the test gas differently and were important steps in reaching ever-higher speeds, but NASA required even faster tunnels.[584]

The companion to a hotshot tunnel was an arc-jet facility, which was capable of evaluating spacecraft heat shield materials under the extreme heat of planetary reentry. An electric arc preheated the test gas in the stilling chamber upstream of the nozzle to temperatures of 10,000-20,000 °F. Injected under pressure into the nozzle, the heated gas created a flow that was sustainable for several minutes at low densities and supersonic Mach numbers. The electric arc required over 100,000 kilowatts of power. Unlike the hotshot, the arc-jet could operate continuously.[585]

NASA combined these different types of nontraditional tunnels into the Ames Hypersonic Ballistic Range Complex in the 1960s.[586] The Ames Vertical Gun Range (1964) simulated planetary impact with various model-launching guns. Ames researchers used the Hypervelocity Free-Flight Aerodynamic Facility (1965) to examine the aerodynamic characteristics of atmospheric entry and hypervelocity vehicle configurations. The research programs investigated Earth atmosphere entry (Mercury, Gemini, Apollo, and Shuttle), planetary entry (Viking, Pioneer-Venus, Galileo, and Mars Science Lab), supersonic and hypersonic flight (X-15), aerobraking configurations, and scramjet propulsion. The Electric Arc Shock Tube (1966) enabled the investigation of the effects of radiation and ionization that occurred during high-velocity atmospheric entries. The shock tube fired a gaseous bullet at a light-gas gun, which fired a small model into the onrushing gas.[587]

The NACA also investigated the use of test gases other than air. Designed by Antonio Ferri, Macon C. Ellis, and Clinton E. Brown, the Gas Dynamics Laboratory at Langley became operational in 1951. One facility was a high-pressure shock tube consisting of a constant-area tube 3.75 inches in diameter, a 20-inch test section, a 14-foot-long high-pressure chamber, and a 70-foot-long low-pressure section. The induction drive system consisted of a central 300-psi tank farm that provided heated fluid flow at a maximum speed of Mach 8 in a nonreturn circuit at a pressure of 20 atmospheres. Langley researchers investigated aerodynamic heating and fluid mechanical problems at speeds above the capability of conventional supersonic wind tunnels in order to simulate hypersonic and space-reentry conditions. For the space program, NASA used pure nitrogen and helium instead of heated air as the test medium to simulate reentry speeds.[588]

NASA built the similar Ames Thermal Protection Laboratory in the early 1960s to solve reentry materials problems for a new generation of craft, whether designed for Earth reentry or penetration of the atmospheres of the outer planets. A central bank of 10 test cells provided the pressurized flow. The Thermal Protection Laboratory found solutions for many vexing heat shield problems associated with the Space Shuttle, interplanetary probes, and intercontinental ballistic missiles.

Called the "suicidal wind tunnel” by Donald D. Baals and William R. Corliss because it was self-destructive, the Ames Voitenko Compressor was the only method for replicating the extreme velocities required for the design of interplanetary space probes. It was based on the Voitenko

The Advent of Hypersonic Tunnel and Aeroballistic Facilities

The Continuous Flow Hypersonic Tunnel at Langley in 1961. NASA.

concept from 1965 that a high-velocity explosive, or shaped, charge developed for military use be used for the acceleration of shock waves. Voitenko’s compressor consisted of a shaped charge, a malleable steel plate, and the test gas. At detonation, the shaped charge exerts pressure on the steel plate to drive it and the test gas forward. Researchers at the Ames Laboratory adapted the Voitenko compressor concept to a self­destroying shock tube comprised of a 66-pound shaped charge and a glass-walled tube 1.25 inches in diameter and 6.5 feet long. Observation of the tunnel in action revealed that the shock wave traveled well ahead of the rapidly disintegrating tube. The velocities generated upward of

220,0 feet per second could not be reached by any other method.[589]

Langley, building upon a rich history of research in high-speed flight, started work on two tunnels at the moment of transition from the NACA to NASA. Eugene Love designed the Continuous Flow Hypersonic Tunnel for nonstop operation at Mach 10. A series of compressors pushed high-speed air through a 1.25-inch square nozzle into the 31-inch square test section. A 13,000-kilowatt electric resistance heater raised the air temperature to 1,450 °F in the settling chamber, while large water coolers and channels kept the tunnel walls cool. The tunnel became operational in 1962 and was instrumental in the study of aerodynamic performance and heat transfer on winged reentry vehicles such as the Space Shuttle.[590]
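A back-of-the-envelope check, using the standard isentropic stagnation-temperature relation rather than anything stated in the source, shows why so large a heater was needed:

\[
\frac{T_0}{T} = 1 + \frac{\gamma - 1}{2}M^2 = 1 + 0.2(10)^2 = 21.
\]

A settling-chamber temperature of 1,450 °F (roughly 1,060 K) thus corresponds to a test-section static temperature near 50 K; expanding unheated, room-temperature air to Mach 10 would drive the flow to roughly 14 K, well below the point at which its constituents liquefy.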

The 8-Foot High-Temperature Structures Tunnel, opened in 1967, permitted full-scale testing of hypersonic and spacecraft components. By burning methane in air at high pressure and expanding the products through a hypersonic nozzle in the tunnel, Langley researchers could test structures at Mach 7 speeds and at temperatures of 3,000 °F. Too late for the 1960s space program, the tunnel was instrumental in the testing of the insulating tiles used on the Space Shuttle.[591]

NASA researchers Richard R. Heldenfels and E. Barton Geer developed the 9- by 6-Foot Thermal Structures Tunnel to test aircraft and missile structural components operating under the combined effects of aerodynamic heating and loading. The tunnel became operational in 1957 and featured a Mach 3 drive system consisting of 600-psia air stored in a tank farm filled by a high-capacity compressor. The spent air simply exhausted to the atmosphere. Modifications included additional air storage (1957), a high-speed digital data system (1959), a subsonic diffuser (1960), a topping compressor (1961), and a boost heater system that generated temperatures of 2,000 °F (1963). NASA closed the 9- by 6-Foot Thermal Structures Tunnel in September 1971 after metal fatigue in the air storage field led to an explosion that destroyed part of the facility and damaged nearby buildings.[592]

NASA's wind tunnels contributed to the growing refinement of spacecraft technology. The multiple design changes made during the transition from the Mercury program to the Gemini program, and the need for more information on the effects of angle of attack, heat transfer, and surface pressure, resulted in a new wind tunnel and flight-test program. Wind tunnel tests of the Gemini spacecraft were conducted in the range of Mach 3.51 to 16.8 at the Langley Unitary Plan Wind Tunnel and at tunnels at AEDC and Cornell University. The flight-test program gathered data from the first four launches and reentries of Gemini spacecraft.[593] Correlation revealed that the two independent sets of data were in agreement.[594]

The Propulsion Perspective

Aerodynamics always constituted an important facet of NACA-NASA GA research, but no less significant is flight propulsion, for the aircraft engine is often termed the "heart" of an airplane. In the 1920s and 1930s, NACA research by Fred Weick, Eastman Jacobs, John Stack, and others had profoundly influenced the efficiency of the piston engine-propeller-cowling combination.[800] Agency work in the early jet age had been no less influential upon improving the performance of turbojet, turboshaft, and turbofan engines, producing data judged "essential to industry designers."[801]

The rapid proliferation of turbofan-powered GA aircraft—over 2,100 of which were in service by 1978, with 250 more being added each year—stimulated even greater attention.[802] NASA swiftly supported development of a specialized computer-based program for assessing engine performance and efficiency. In 1977, for example, Ames Research Center funded development of GASP, the General Aviation Synthesis Program, by the Aerophysics Research Corporation, to compute propulsion system performance for engine sizing and studies of overall aircraft performance. GASP consisted of an overall program routine, ENGSZ, to determine appropriate fanjet engine size, with specialized subroutines such as ENGDT and NACDG assessing engine data and nacelle drag. Additional subroutines treated performance for propeller powerplants, including PWEPLT for piston engines, TURBEG for turboprops, ENGDAT and PERFM for propeller characteristics and performance, GEARBX for gearbox cost and weight, and PNOYS for propeller and engine noise.[803]
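GASP itself was a Fortran-era synthesis code, and its internals are not reproduced in the source. Purely as an illustrative sketch of how such a program chains its sizing and bookkeeping routines, the Python fragment below mirrors the documented routine names with invented placeholder models; nothing here is NASA code.

```python
# Hypothetical sketch echoing GASP's documented routine hierarchy.
# Every model below is an invented placeholder, not the real GASP logic.

def engdt(mach: float, altitude_ft: float) -> float:
    """ENGDT analog: installed-thrust lapse with altitude and Mach (toy model)."""
    return max(0.2, 1.0 - 0.00002 * altitude_ft) * (1.0 - 0.25 * mach)

def nacdg(fan_diameter_ft: float) -> float:
    """NACDG analog: crude nacelle drag proxy scaling with frontal area."""
    return 0.02 * fan_diameter_ft ** 2

def engsz(cruise_thrust_req_lbf: float, mach: float, altitude_ft: float) -> dict:
    """ENGSZ analog: size the engine so cruise thrust meets the requirement."""
    lapse = engdt(mach, altitude_ft)
    sls_thrust = cruise_thrust_req_lbf / lapse    # sea-level-static rating
    fan_diameter = (sls_thrust / 1500.0) ** 0.5   # arbitrary sizing rule
    return {"sls_thrust_lbf": sls_thrust,
            "fan_diameter_ft": fan_diameter,
            "nacelle_drag_proxy": nacdg(fan_diameter)}

if __name__ == "__main__":
    print(engsz(cruise_thrust_req_lbf=900.0, mach=0.7, altitude_ft=41000.0))
```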

Such study efforts reflected the increasing numbers of noisy turbine-powered aircraft operating into over 14,500 airports and airfields in the United States, most in suburban areas, as well as the growing cost of aviation fuel and the consequent quest for greater engine efficiency. NASA had long been interested in reducing jet engine noise; the Agency's first efforts to find means of suppressing jet noise dated to the late NACA in 1957. The needs of the space program had necessarily focused Lewis research primarily on space, but the Center returned vigorously to air-breathing propulsion at the conclusion of the Apollo program, spurred by the widespread introduction of turbofan engines for military and civil purposes and the onset of the first oil crisis in the wake of the 1973 Arab-Israeli War.

Out of this came a variety of cooperative research efforts and programs, including the congressionally mandated ACEE program (for Aircraft Energy Efficiency, launched in 1975), the NASA-industry QCSEE (for Quiet Clean Short-Haul Experimental Engine) study effort, and the QCGAT (Quiet Clean General Aviation Turbofan) program. All benefited future propulsion studies, the latter two particularly so.[804]

QCGAT, launched in 1975, involved awarding initial study contracts to Garrett AiResearch, General Electric, and Avco Lycoming to explore applying large turbofan technology to GA needs. Next, AiResearch and Avco were selected to build small turbofan demonstrator engines suitable for GA applications that could meet stringent noise, emissions, and fuel consumption standards using an existing gas-generating engine core. AiResearch and Avco took different approaches, the former with a high-thrust engine suitable for long-range, high-speed, and high-altitude GA aircraft (using as a baseline a stretched Lear 35), and the latter with a lower-thrust engine for a lower, slower, intermediate-range design (based upon a Cessna Citation I). Subsequent testing indicated that each company did an excellent job in meeting the QCGAT program goals, each having particular strengths. The Avco engine was quieter, and both engines bettered the QCGAT emissions goals for carbon monoxide and unburned hydrocarbons. While the Avco engine was "right at the goal" for oxides of nitrogen emissions, the AiResearch engine was higher, though much better than the baseline TFE-731-2 turbofan used for comparative purposes. While the AiResearch engine met sea-level takeoff and design cruise thrust goals, the Avco engine missed both, though its measured numbers were nevertheless "quite respectable." Overall, NASA considered that the QCGAT program, executed on schedule and within budget, constituted "a very successful NASA joint effort with industry," concluding that it had "demonstrated that noise need not be a major constraint on the future growth of the GA turbofan fleet."[805] Subsequently, NASA launched GATE (General Aviation Turbine Engines) to explore other opportunities for the application of small turbine technology to GA, awarding study contracts to AiResearch, Detroit Diesel Allison, Teledyne CAE, and Williams Research.[806] GA propulsion study efforts gained renewed impetus through the Advanced General Aviation Transport Experiment (AGATE) program launched in 1994, which is discussed later in this study.

DAST: Exploring the Limits of Aeroelastic Structural Design

In the early 1970s, researchers at Dryden and NASA Langley Research Center sought to expand the use of RPRVs into the transonic realm. The Drones for Aerodynamic and Structural Testing (DAST) program was conceived as a means of conducting high-risk flight experiments using specially modified Teledyne-Ryan BQM-34E/F Firebee II supersonic target drones to test theoretical data under actual flight conditions. Described by NASA engineers as a "wind-tunnel in the sky," the DAST program merged advances in electronic remote-control systems with advanced airplane-design techniques. The drones were relatively inexpensive, easy to modify for research purposes, and readily available from an existing stock of Navy target drones.[929] The unmodified Firebee II had a maximum speed of Mach 1.1 at sea level and almost Mach 1.8 at 45,000 feet and was capable of 5 g turns. Firebee II drones in the basic configuration provided baseline data. Researchers modified two vehicles, DAST-1 and DAST-2, to test several wing configurations during maneuvers at transonic speeds in order to compare flight results with theoretical and wind tunnel findings. For captive and free flights, the drones were carried aloft beneath a DC-130A or the NB-52B. The DAST vehicles were equipped with remotely augmented digital flight control systems, research instrumentation, an auxiliary fuel tank for extended range, and a mid-air retrieval system (MARS). On the ground, a pilot controlled the DAST vehicle from a remote cockpit while researchers examined flight data transmitted via pulse-mode telemetry. In the event of a ground computer failure, the DAST vehicle could also be flown using a backup control system in the rear cockpit of a Lockheed F-104B chase plane.[930]

The primary flight control system for DAST was remotely augmented. In this configuration, control laws for augmenting the airplane's flying characteristics were programmed into a general-purpose computer on the ground. Closed-loop operation was achieved through a telemetry uplink/downlink between the ground cockpit and the vehicle. This technique had previously been tested using the F-15 RPRV.[931] Baseline testing was conducted between November 1975 and June 1977, using an unmodified BQM-34F drone. It was carried aloft three times for captive flights, twice by a DC-130A and once by the NB-52B. These flights gave ground pilot William H. Dana a chance to check out the RPRV systems and practice prelaunch procedures. Finally, on July 28, 1977, the Firebee II was launched from the NB-52B for the first time. Dana flew the vehicle using an unaugmented control mode called Babcock-direct. He found the Firebee less controllable in roll than the simulations had indicated, but its overall performance was higher than predicted.

Dana successfully transferred control of the drone to Vic Horton in the rear seat of an F-104B chase plane. Horton flew the Firebee through the autopilot to evaluate controllability before transferring control back to Dana just prior to recovery.

Technicians then installed instrumented standard wings, known as the Blue Streak configuration. Thomas C. McMurtry flew a mission on March 9, 1979, to evaluate onboard systems such as the autopilot and RAV system. Results were generally good, with some minor issues to be addressed prior to flying the DAST-1 vehicle.[932] The DAST researchers were most interested in correlating theoretical predictions and experimental flight results of aeroelastic effects in the transonic speed range. Such tests, particularly those involving wing flutter, would be extremely hazardous with a piloted aircraft.

One modified Firebee airframe, which came to be known as DAST-1, was fitted with a set of swept supercritical wings of a shape optimized for a transport-type aircraft capable of Mach 0.98 at 45,000 feet. The ARW-1 aeroelastic research wing, designed and built by Boeing in Wichita, KS, was equipped with an active flutter-suppression system (FSS). Research goals included validation of active controls technology for flutter suppression, enhancement and verification of transonic flutter prediction techniques, and provision of a database for aerodynamic-loads prediction techniques for elastic structures.[933] The basic Firebee drone was controlled through collective and differential horizontal stabilizer and rudder deflections because it had no wing control surfaces. The DAST-1 retained this control system, leaving the ailerons free to perform the flutter suppression function. During fabrication of the wings, it became apparent that torsional stiffness was higher than predicted. To ensure that the flutter boundary remained at an acceptable Mach number, 2-pound ballast weights were added to each wingtip. These weights consisted of containers of lead shot that could be jettisoned to aid recovery from inadvertent large-amplitude wing oscillations. Researchers planned to intentionally fly the DAST-1 beyond its flutter boundary to demonstrate the effectiveness of the FSS.[934]

Along with the remote cockpit, there were two other ground-based facilities for monitoring and controlling the progress of DAST flight tests. Dryden's Control Room contained radar plot boards for monitoring the flight path, strip charts indicating vehicle rigid-body stability and control and operational functions, and communications equipment for coordinating test activities. A research pilot stationed in the Control Room served as flight director. Engineers monitoring the flutter tests were located in the Structural Analysis Facility (SAF). The SAF accommodated six people, one serving as test director to oversee monitoring of the experiments and communicate directly with the ground pilot.[935]

The DAST-1 was launched for the first time on October 2, 1979. Following release from the NB-52B, Tom McMurtry guided the vehicle through FSS checkout maneuvers and a subcritical-flutter investigation. An uplink receiver failure resulted in an unplanned MARS recovery about 8 minutes after launch. The second flight was delayed until March 1980. Again, only subcritical-flutter data were obtained, this time because of an unexplained oscillation in the left FSS aileron.[936] During the third flight, unknown to test engineers, the FSS was operating at one-half nominal gain. Misleading instrument indications concealed a trend toward violent flutter conditions at speeds beyond Mach 0.8. As the DAST-1 accelerated to Mach 0.825, rapidly divergent oscillations saturated the FSS ailerons. The pilot jettisoned the wingtip masses, but this failed to arrest the flutter. Less than 6 seconds after the oscillations began, the right wing broke apart, and the vehicle crashed near Cuddeback Dry Lake, CA.
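The sequence just described is classic flutter divergence: as dynamic pressure rises, the airstream feeds energy into the wing faster than structural and control-system damping can dissipate it, and beyond the flutter boundary any disturbance grows instead of decaying. The Python sketch below is a minimal one-degree-of-freedom illustration of that behavior; the coefficients are invented, and it models neither the ARW-1 wing nor the FSS control law.

```python
# Illustrative only: a single-degree-of-freedom oscillator whose net
# damping (structural minus aerodynamic) falls as dynamic pressure q rises.
# Past q = c_struct / k_aero the damping goes negative and the response
# diverges, qualitatively like the DAST-1 event near Mach 0.825.

def peak_response(q: float, steps: int = 20000, dt: float = 1e-4) -> float:
    """Integrate x'' + (c_struct - k_aero*q)*x' + k*x = 0; return peak |x|."""
    c_struct, k_aero, k = 2.0, 0.015, 400.0   # invented coefficients
    x, v, peak = 0.01, 0.0, 0.01              # small initial disturbance
    for _ in range(steps):
        a = -(c_struct - k_aero * q) * v - k * x
        x += v * dt
        v += a * dt
        peak = max(peak, abs(x))
    return peak

for q in (50.0, 100.0, 133.0, 150.0):         # dynamic pressure, arbitrary units
    print(f"q = {q:6.1f}  peak amplitude = {peak_response(q):.4f}")
# For these numbers the flutter boundary sits near q = 133; below it the
# disturbance decays, above it the oscillation grows without bound.
```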

Investigators concluded that erroneous gain settings were the primary cause. The error resulted in a configuration that caused the wing to be unstable at lower Mach numbers than anticipated, causing the vehicle to experience closed-loop flutter. The ARW-1 wing was rebuilt as the ARW-1R and installed in a second DAST vehicle in order to continue the research program.[937] The DAST-2 underwent a captive systems-checkout flight beneath the wing of the NB-52B on October 29, 1982, followed by a subcritical-flutter envelope expansion flight 5 days later. Unfortunately, the latter flight had to be aborted early because of unexplained wing structural vibrations and control-system problems. The next three flight attempts were also aborted—the first because of a drone engine temperature warning, the second because of loss of telemetry, and the third for unspecified reasons prior to taxi.[938] Further testing of the DAST-2 vehicle was conducted using a Navy DC-130A launch aircraft. Following two planned captive flights for systems checkout, the vehicle was ready to fly.

On June 1, 1983, the DC-130A departed Edwards as the crew executed a climbing turn over Mojave and California City. Rogers Smith flew the TF-104G chase plane with backup pilot Ray Young, while Einar Enevoldson began preflight preparations from the ground cockpit. The airplanes passed abeam of Cuddeback Dry Lake, passed north of Barstow, and turned west. The launch occurred a few minutes later over Harper Dry Lake. Immediately after separation from the launch pylon, the drone's recovery-system drag chute deployed, but the main parachute was jettisoned while still packed in its canister.[939] The drone plummeted to the ground in the middle of a farm field west of the lakebed. It was completely destroyed, but other than the loss of a small patch of alfalfa at the impact site, there was no property damage. Much later, when it was possible to joke about such things, a few wags referred to this event as the "alfalfa impact study."[940] An investigation board found that a combination of several improbable anomalies—a design flaw, a procedural error, and a hardware failure—simultaneously contributed to loss of the vehicle. These included an uncommanded recovery signal produced by an electrical spike, failure to reset a drag chute timer, and improper grounding of an electrical relay. Another section of the investigation focused on project management issues. Criticism of Dryden's DAST program management was hotly debated, and several dissenting opinions were filed along with the main report.[941]

Throughout its history, the DAST program was plagued by difficulties. Between December 1973 and November 1983, five different project managers oversaw the program. As early as December 1978, Dryden's Center Director, Isaac T. Gillam, had requested chief engineer Milton O. Thompson and chief counsel John C. Mathews to investigate management problems associated with the project. This resulted from the project team's failure to meet an October 1978 flight date for the Blue Streak wing, Langley managers' concern that Dryden was not properly discharging its project obligations, repeated requests by the project manager for schedule slips, and various other indications that the project was in a general state of confusion. The resulting report indicated that problems had been caused by a lack of effective planning at Dryden, exacerbated by poor internal communication among project personnel.[942] Only 7 flights were achieved in 10 years. Several flights were aborted for various reasons, and two vehicles crashed, problems that drove up testing costs. Meanwhile, flight experiments with higher-profile, better-funded remotely piloted research vehicles took priority over DAST missions at Dryden. Organizational upheaval also took a toll, as Dryden was consolidated with Ames Research Center in 1981 and responsibility for projects was transferred to the Flight Operations Directorate in 1983.

Exceptionally good test data had been obtained through the DAST program, but not in an efficient and timely manner. Initially, the Firebee drone was selected for use in the DAST project in the belief that it offered a quick and reasonably inexpensive option for conducting a task too hazardous for a piloted vehicle. Experience proved, however, that using off-the-shelf hardware did not guarantee expected results. Just getting the vehicle to fly was far more difficult and far less successful than originally anticipated.[943] Hardware delays created additional difficulties. The Blue Streak wing was not delivered until mid-1978. The ARW-1 wing arrived in April 1979, 1½ years behind schedule, and was not flown until 6 months later. Following the loss of the DAST-1 vehicle, the program was delayed nearly 2 years until delivery of the ARW-1R wing. After the 1983 crash, the program was terminated.[944]

Airlines and the Jet Age

In the 1930s, the NACA had conducted research on engine cowlings that improved cooling while reducing drag. This led to improvements in airliner speed and economy, which in turn led to increased capacity and more acceptance by the traveling public; airliners were as fast as the fighters of the early Depression era. In World War II, the NACA shifted its research focus to military needs, the most challenging being the turbojet, which almost doubled potential top speeds. In civil aviation, postwar propeller-driven airliners could span the continent and the oceans, but at 300 mph. Initial attempts to install turbojets in straight-winged airliners failed because of the fuel inefficiency of the jets and the increased drag at jet speeds; the loss of life in the mysterious crashes of three British jet-propelled Comets did not instill confidence. Practical airliners had to wait for more efficient engines and a better understanding of high subsonic speeds at high altitudes. NACA aeronautical research of the early 1950s helped provide the latter; the drive toward higher speed in military aircraft provided the impetus for the engine improvements.

Boeing's business gamble in funding the 367-80 demonstrator, which first flew in 1954, triggered the avalanche of jet airliner designs. Airlines began to buy the prospective aircraft by the dozens; because the Civil Aeronautics Board (CAB) mandated all ticket prices in the United States, an airline could not afford to be left behind if its competitors offered travel times significantly less than those of its propeller-driven fleet. Once passengers were exposed to the low vibration and noise levels of the turbine powerplants, compared to the dozens of reciprocating cylinders of the piston engines banging away combined with multiple noisy propellers, the outcome was further cemented. By the mid-1950s, the jet revolution was imminent in the civil aviation world.

In late 1958, commercial transcontinental and transatlantic jet service began out of New York City, but it was not an easy start. Turbojet noise to ground bystanders during takeoffs and landings was not a concern to the military; it was to the New York City airport authorities. "Organ pipe" sound suppressors were mandated, which reduced engine performance and cost the airlines money; even with them, special flight procedures were required to minimize residential noise footprints, requiring numerous flight demonstrations and even weight limitations for takeoffs. The 707 was larger than the newly redesigned British Comet and hence noisier; final approval to operate the 707 from Idlewild was given at the last minute, and the delay helped give the British aircraft "bragging rights" on transatlantic jet service.[1063]

Other jet characteristics were also a concern to operators and air traffic control (ATC) alike. Higher jet speeds would give the pilots less time to avoid potential collisions if they relied on visual detection alone. A high-profile midair collision between a DC-7 and a Constellation over the Grand Canyon in 1956 highlighted this problem. Onboard collision warning systems using either radar or infrared had been in development since 1954, but no choice had been made for mandatory use. Long-distance jet operations were fuel critical; early jet transatlantic flights frequently had to make unplanned landings en route to refuel. Jets could not endure lengthy waits in holding patterns; hence, ATC had to plan on integrating increasingly dense traffic around popular destinations, with some of the traffic traveling at significantly higher speeds and potentially requiring priority. A common solution to the traffic problems was to provide ground radar coverage across the country and to better automate the ATC sequencing of flight traffic. This was being introduced as the jet airliner was introduced; a no-survivors midair collision between a United Airlines DC-8 jetliner and a Constellation, this time over Staten Island, NY, was widely televised and emphasized the importance of ATC modernization.[1064]

NACA research by Richard Whitcomb that led to the area rule had been used by Convair in reducing drag on the F-102 so it would go supersonic. It was also used to make the B-58 design more efficient, so that it had a significant range at Mach 2, propelled by four afterburning General Electric J79 turbojets. Convair had been busy with these military projects and was late to the jet airliner market. It decided that a smaller, medium-range airliner could carve out a niche. An initial design appeared as the Convair 880 but did not attract much interest. The decision was made to develop a larger aircraft, the Convair 990, which employed non-afterburning J79s with an added aft fan to reap the developing turbofan engines' advantages of increased fuel efficiency and decreased sideline noise. Furthermore, the aircraft would employ Whitcomb's area rule concepts (including so-called shock bodies on its wings, something it shared with the Soviet Union's Tupolev bombers) to allow it to cruise efficiently some 60-80 mph faster than the 707 and the DC-8, leading to time savings on long-haul routes. The aircraft had a higher cruise speed and some limited success in the marketplace, but the military-derived engine had poor fuel economics even with a fan and without an afterburner, was still very noisy, and generated enough black smoke on approach that casual observers often thought the aircraft was on fire (something it shared with its military counterpart, which generated so much smoke that McDonnell F-4 Phantoms often had their positions given away by an accusing finger of sooty smoke). The potential trip time savings was not adequate to compensate for those shortcomings. The lesson the airline industry learned was that, in an age of regulated common airline ticket prices, any speed increase would have to be sufficiently great to produce a significant time savings and justify a ticket surcharge. The latter was a double-edged sword, because one might lose market share to competitors that did not offer the higher speed.[1065]

Sensor Fusion Arrives

Integrating an External Vision System was an overarching goal of the HSR program. The XVS would include advanced television and infrared cameras, passive millimeter microwave radar, and other cutting-edge sensors, fused with an onboard database of navigation information, obstacles, and topography. It would thus furnish a complete, synthetically derived view for the aircrew and associated display symbologies in real time. The pilot would be presented with a visual meteorological conditions view of the world on a large display screen in the flight deck simulating a front window. Regardless of actual ambient meteorological conditions, the pilot would thus "see" a clear daylight scene, made possible by combining appropriate sensor signals; synthetic scenes derived from the high-resolution terrain, navigation, and obstacle databases; and head-up symbology (airspeed, altitude, velocity vector, etc.) provided by symbol generators. Precise GPS navigation input would complete the system. All of these inputs would be processed and displayed in real time (on the order of 20-30 milliseconds) on the large "virtual window" displays. During the HSR program, Langley did not develop the sensor fusion technology before program termination and, as a result, moved in the direction of integrating the synthetic-database-derived view with sophisticated display symbologies, redefining the implementation of the primary flight display and navigation display. Part of the problem with developing the sensor fusion algorithms was the perceived need for large, expensive computers. Langley continued on this path when the Synthetic Vision Systems project was initiated under NASA's Aviation Safety Program in 1999 and achieved remarkable results in SVS architecture, display development, human factors engineering, and flight deck integration in both GA and CBA domains.[1164]

Simultaneously with these efforts, JSC was developing the X-38 unpiloted lifting body/parafoil recovery reentry vehicle. The X-38 was a technology demonstrator for a proposed orbital crew rescue vehicle that could, in an emergency, return up to seven astronauts to Earth, a veritable space-based lifeboat. NASA planners had forecasted a need for such a rescue craft in the early days of planning for Space Station Freedom (subsequently the International Space Station). Under a Langley study program for the Space Station Freedom Crew Emergency Rescue Vehicle (CERV, later shortened to CRV), Agency engineers and research pilots had undertaken extensive simulation studies of one candidate shape, the HL-20 lifting body, whose design was based on the general aerodynamic shape of the Soviet Union's BOR-4 subscale spaceplane.[1165] The HL-20 did not proceed beyond these tests and a full-scale mockup. Instead, Agency attention turned to another escape vehicle concept, sponsored by NASA's Johnson Space Center and essentially identical in shape to the nearly four-decade-old Martin SV-5D hypersonic lifting reentry test vehicle. The Johnson configuration spawned its own two-phase demonstrator research effort, the X-38 program: the first phase covered a series of subsonic drop-shapes air-launched from NASA's NB-52B Stratofortress, and the second an orbital reentry shape to be test-launched from the Space Shuttle from a high-inclination orbit. But while tests of the former did occur at the NASA Dryden Flight Research Center (DFRC) in the late 1990s, the fully developed orbital craft did not proceed to development and orbital test.[1166]

To remotely pilot this vehicle during its flight-testing at Dryden, project engineers were developing a system displaying the required navigation and control data. Television cameras in the nose of the X-38 provided a data link video signal to a control flight deck on the ground. Video signals alone, however, were insufficient for the remote pilot to perform all the test and control maneuvers, including "flap turns" and "heading hold" commands during the parafoil phase of flight. More information on the display monitor would be needed. Further complications arose because of the design of the X-38: the crew would be lying on their backs, looking at displays on the "ceiling" of the vehicle. Accordingly, a team led by JSC X-38 Deputy Avionics Lead Frank J. Delgado was tasked with developing a display system allowing the pilot to control the X-38 from a perspective 90 degrees to the vehicle's direction of travel. On the cockpit design team were NASA astronauts Rick Husband (subsequently lost in the Columbia reentry disaster), Scott Altman, and Ken Ham, and JSC engineer Jeffrey Fox.

Delgado solicited industry assistance with the project. Rapid Imaging Software, Inc., a firm already working with imaginative synthetic vision concepts, received a Phase II Small Business Innovation Research (SBIR) contract to develop the display architecture. RIS subsequently developed LandForm VisualFlight, which blended "the power of a geographic information system with the speed of a flight simulator to transform a user's desktop computer into a 'virtual cockpit.'"[1167] It consisted of "symbology fusion" software and 3-D "out-the-window" and NAV display presentations operating on a standard Microsoft Windows-based central processing unit (CPU). JSC and RIS were on the path to developing true sensor fusion in the near future, blending a full SVS database with live video signals. The system required a remote, ground-based control cockpit, so Jeff Fox procured an extended van from the JSC motor pool. This vehicle, officially known as the X-38 Remote Cockpit Van, was nicknamed the "Vomit Van" by those poor souls driving around lying on their backs practicing flying a simulated X-38. By spring 2002, JSC was flying the X-38 from the Remote Cockpit Van using an SVS NAV display, an SVS out-the-window display, and a video display developed by RIS. NASA astronaut Ken Ham judged it as furnishing the "best seat in the house" during X-38 glide flights.[1168]

Indeed, during the X-38 testing, a serendipitous event demonstrated the value of sensor fusion. After release from the NASA NB-52B Stratofortress, the lens of the onboard X-38 television camera became partially covered in frost, occluding over 50 percent of the field of view (FOV). This would have proved problematic for the pilot had orienting symbology not been available in the displays. Synthetic symbology, including spatial entities identifying keep-out zones and runway outlines, provided the pilot with a synthetic scene replacing the occluded camera image. This foreshadowed the concept of sensor fusion, in which, for example, blossoming as the camera traversed the Sun could be "blended" out, and haze obscuration could be minimized by adjusting the degree of synthetic blend from 0 to 100 percent.[1169]
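The 0-to-100-percent blend described above is, at its core, per-pixel alpha compositing of the live camera frame with the synthetically rendered scene. The sketch below illustrates only that idea; the function and array names are invented, and a real XVS/SVS pipeline adds image registration, latency compensation, and symbology overlays.

```python
# Illustrative per-pixel alpha compositing of camera and synthetic frames.
import numpy as np

def blend_frames(camera: np.ndarray, synthetic: np.ndarray,
                 synthetic_fraction: float) -> np.ndarray:
    """0.0 returns the pure camera image, 1.0 the pure synthetic scene."""
    f = min(max(synthetic_fraction, 0.0), 1.0)
    out = (1.0 - f) * camera.astype(np.float32) + f * synthetic.astype(np.float32)
    return out.astype(np.uint8)

# With a partially frosted lens, as on the X-38 flight, the pilot would
# dial the blend well toward the synthetic end.
camera = np.zeros((480, 640, 3), dtype=np.uint8)         # stand-in video frame
synthetic = np.full((480, 640, 3), 128, dtype=np.uint8)  # stand-in rendered scene
composite = blend_frames(camera, synthetic, 0.8)
```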

But then, on April 29, 2002, faced with rising costs for the International Space Station, NASA canceled the X-38 program.[1170] Surprisingly, the cancellation did not have the deleterious impact upon sensor fusion development that might have been anticipated. Instead, program members Jeff Fox and Eric Boe secured temporary support via the Johnson Center Director's discretionary fund to keep the X-38 Remote Cockpit Van operating. Mike Abernathy, president of RIS, was eager to continue his company's sensor fusion work. He supported their efforts, as did Patrick Laport of Aerospace Applications North America (AANA). For the next 2 years, Fox and electronics technician James B. Secor continued to improve the van, working on a not-to-interfere basis with their other duties. In July 2004, Fox secured further Agency funding to convert the remote cockpit, now renamed, at Boe's suggestion, the Advanced Cockpit Evaluation System (ACES). It was rebuilt with a single, upright seat affording a 180-degree FOV visual system with five large surplus monitors. An array of five cameras was mounted on the roof of the van, and its input could be blended in real time with new RIS software to form a complete sensor fusion package for the wraparound monitors or a helmet-mounted display.[1171] Subsequently, tests with this van demonstrated true sensor fusion. Now, the team looked for another flight project it could use to demonstrate the value of SVS.

Its first opportunity came in November 2004, at Creech Air Force Base in Nevada. Formerly known as Indian Springs Auxiliary Air Field, a backwater corner of the Nellis Air Force Base range, Creech had risen to prominence after the attacks of 9/11 as the Air Force's center of excellence for unmanned aerial vehicle (UAV) operations. Its showcase was the General Atomics Predator UAV. The Predator, modified as a Hellfire-armed attack system, had proven a vital component of the global war on terrorism. With UAVs increasing dramatically in their capabilities, it was natural that the UAV community at Nellis would be interested in the work of the ACES team. Traveling to Nevada to demonstrate its technology to the Air Force, the JSC team used the ACES van in a flight-following mode, receiving downlink video from a Predator UAV. That video was then blended with synthetic terrain database inputs to provide a 180-degree FOV scene for the pilot. The Air Force's Predator pilots found the ACES system far superior to the narrow-view perspective they then had available for landing the UAV.

In 2005, astronaut Eric Boe began training for a Shuttle flight and left the group, replaced by the author, who had spent over 10 years at Langley as a project or research pilot on all of that Center's SVS and XVS projects. The author transferred to JSC from Langley in 2004 as a research pilot and learned of the Center's SVS work from Boe. The author's involvement with the JSC group linked Langley's and JSC's SVS efforts, for he provided the JSC group with his experience of Langley's SVS research.

That spring, a former X-38 cooperative student—Michael Coffman, now an engineer at the FAA's Mike Monroney Aeronautical Center in Oklahoma City—serendipitously visited Fox at JSC. They discussed using the sensor fusion technology for the FAA's flight-check mission. Coffman, Fox, and Boe briefed Thomas C. Accardi, Director of Aviation Systems Standards at FAA Oklahoma City, on the sensor fusion work at JSC, and he was interested in its possibilities. Fox seized this opportunity to establish a memorandum of understanding (MOU) among the Johnson Space Center, the Mike Monroney Aeronautical Center, RIS, and AANA. All parties would work on a quid pro quo basis, sharing intellectual and physical resources where appropriate, without funding necessarily changing hands. Signed in July 2005, this arrangement was unique in its scope and, as will be seen, in its ability to allow contractors and Government agencies to work together without cost. JSC and FAA Oklahoma City management had complete trust in their employees, and both RIS and AANA were willing to work without compensation, predicated on their faith in their product and the likely potential return on their investment, effectively a Skunk Works approach taken to the extreme. The stage was set for major SVS accomplishments, for during this same period, huge strides in SVS development had been made at Langley, which is where this narrative now returns.[1172]

Aircraft Ice Protection

The Aircraft Ice Protection program focuses on two main areas: development of remote sensing technologies to measure nearby icing conditions, improve current forecast capabilities, and develop systems to transfer and display that information to flight crews, flight controllers, and dispatchers; and development of systems to monitor and assess aircraft performance, notify the cockpit crew about the state of the aircraft, and/or automatically alter the aircraft controlling systems to prevent stall or loss of control in an icing environment. Keeping those two focus areas in mind, the Aircraft Ice Protection program is subdivided to work on these three goals:

• Provide flight crews with real-time icing weather information so they can avoid the hazard in the first place or find the quickest way out.[1265]

• Improve the ability of an aircraft to operate safely in icing conditions.[1266]

• Improve icing simulation capabilities by developing better instrumentation and measurement techniques to characterize atmospheric icing conditions, which also will provide icing weather validation databases, and increase basic knowledge of icing physics.[1267]

In terms of remote sensing, the top-level goals of this activity are to develop and field-test two forms of remote sensing system technologies that can reduce the exposure of aircraft to in-flight icing hazards. The first technology would be ground-based and provide coverage in a limited terminal area to protect all vehicles. The second technology would be airborne and provide unrestricted flightpath coverage for a commuter-class aircraft. In most cases, the icing hazard to aircraft is minimized with either de-icing or anti-icing procedures, or by avoiding any known icing or possible icing areas altogether. However, being able to avoid the icing hazard depends largely on the quality and timing of the latest observed and forecast weather conditions. Once caught in a severe icing hazard zone, the pilot must have enough information to know how to get out of the area before the aircraft's ice protection systems are overwhelmed. One way to address these problem areas is to remotely detect icing potential and present the information to the pilot in a clear, easily understood manner. Such systems would allow the pilot to avoid icing conditions and also allow rapid escape from icing if severe conditions were encountered.[1268]

Fifth Generation: The F-22 Program

The Air Force initiated its Advanced Tactical Fighter (ATF) program in 1985 as an effort to augment and ultimately replace the F-15. During the competitive phase of the program between the Northrop-led YF-23 and the Lockheed-led YF-22 designs, the Air Force established that each team could draw on the facilities and expertise of NASA for establishing credibility and risk reduction before a competitive fly-off. Lockheed subsequently requested free-flight and spin tests of the YF-22 in the Langley Full-Scale Tunnel and the Langley Spin Tunnel. The relatively compressed timeframe of the ATF competition would not permit a feasible schedule for the fabrication and testing of a helicopter drop model of the YF-22.

A joint NASA-Lockheed team conducted conventional tunnel tests in the Full-Scale Tunnel in 1989 to measure YF-22 aerodynamic data for high-angle-of-attack conditions, followed by free-flight model studies to determine the low-speed departure resistance of the configuration. Meanwhile, spin tunnel tests obtained information on spin and recovery characteristics as well as the size and location of an emergency spin recovery parachute for the high-angle-of-attack test airplane. In addition, specialized "rotary-balance" tests were conducted in the spin tunnel to obtain aerodynamic data during simulated spin motions. Lockheed incorporated all of the foregoing results in the design process, leading to an impressive display of capabilities by the YF-22 during the competitive flight demonstrations in 1990.

Lockheed formally acknowledged its appreciation of NASA’s participation in the YF-22 program in a letter to NASA, which stated:

On behalf of the Lockheed YF-22 Team, I would like to express our appreciation of the contribution that the people of NASA Langley made to our successful YF-22 flight test program, and provide some feedback on how well the flight test measurements agreed with the predictions from your wind-tunnel measurements. . . . The highlight of the flight test program was the high-angle-of-attack flying qualities. We relied on aerodynamic data obtained in the full-scale wind tunnel to define the low-speed, high-angle-of-attack static and dynamic aerodynamic derivatives; rotary derivatives from your spin tunnel; and free-flight demonstrations in the full-scale tunnel. We expanded the flight envelope from 20° to 60° angle of attack, demonstrating pitch attitude changes and full-stick rolls about the velocity vector in seven calendar days. The reason for this rapid envelope expansion was the quality of the aerodynamic data used in the control law design and pre-flight simulations.[1322]

Free-flight model tests of the YF-22 in the Full-Scale Tunnel accurately predicted the high-alpha maneuverability of the full-scale airplane and provided risk reduction for the F-22 program. NASA.

After the team of Lockheed, Boeing, and General Dynamics was announced as the winner of the ATF competition in April 1991, high-angle-of-attack testing of the final F-22 configuration was conducted in the Full-Scale Tunnel and the Spin Tunnel. Aerodynamic force testing was completed in the Full-Scale Tunnel in 1992, with spin- and rotary-balance tests conducted in 1993. A wind tunnel free-flight model was not fabricated for the F-22 program, but a typical full-scale tunnel model was constructed and used for the aerodynamic tests. A notable contribution from the spin tunnel tests was a 1994 relocation of the attachment point for the F-22 emergency spin recovery parachute to clear the exhaust plume of the vectoring engine. Langley's contributions to the high-angle-of-attack technologies embodied in the F-22 fighter had been completed well in advance of the aircraft's first flight in September 1997.[1323]

Fatal Accident #2

The remaining XV-5A was rigged with a pilot-operated rescue hoist, located on the left side of the fuselage just ahead of the wing fan. An evaluation test pilot was fatally injured during the test program while performing a low-speed, steep-descent "pick-up" maneuver at Edwards AFB. The heavily weighted rescue collar was ingested into the left wing fan as the pilot descended and simultaneously played out the collar. The damaged fan continued to rotate, but the resultant loss in fan lift caused the aircraft to roll left and settle toward the ground. The pilot apparently leveled the wings and applied full power and up-collective to correct for the left wing-fan lift loss. The damaged left fan produced enough lift to hold the wings level and somewhat reduce the ensuing descent rate. The pilot elected to eject from the aircraft as it approached the ground in this wings-level attitude. As the pilot released the right-stick displacement and initiated the ejection, the aircraft rolled back to the left, which caused the ejected seat trajectory to veer off to a path parallel to the ground. The seat impacted the ground, and the pilot did not survive the ejection. Post-accident analysis revealed that despite the ingestion of the rescue collar and its weight, the wing fan continued to operate and produce enough lift force to hold a wings-level roll attitude and reduce the descent rate to a value that might have allowed the pilot to survive the ensuing "emergency landing" had he stayed with the aircraft. This was grim testimony to the ruggedness of the lift fan.

Tupolev-144 SST on takeoff from Zhukovsky Air Development Center in Russia with a NASA pilot at the controls. NASA.

 


NASA and Electromagnetic Pulse Research

The phrase "electromagnetic pulse” usually raises visions of a nuclear detonation, because that is the most frequent context in which it is used. While EMP effects upon aircraft certainly would feature in a thermonuclear event, the phenomenon is commonly experienced in and around lightning storms. Lightning can cause a variety of EMP radiations, including radio-frequency pulses. An EMP "fries” electrical circuits by passing a magnetic field past the equipment in one direc­tion, then reversing in an extremely short period—typically a few nano­seconds. Therefore, the magnetic field is generated and collapses within that ephemeral time, creating a focused EMP. It can destroy or render useless any electrical circuit within several feet of impact.

Any survey of lightning-related EMPs brings attention to the phenomenon of "elves," an acronym for Emissions of Light and Very low-frequency perturbations from Electromagnetic pulse Sources. Elves are caused by lightning-generated EMPs, usually occurring above thunderstorms and in the ionosphere, some 300,000 feet above Earth. First recorded on Space Shuttle Mission STS-41 in 1990, elves mostly appear as reddish, expanding flashes that can reach 250 miles in diameter, lasting about 1 millisecond.

EMP research is multifaceted, conducted in laboratories, on airborne aircraft and rockets, and ultimately outside Earth's atmosphere. Research into transient electric fields and high-altitude lightning above thunderstorms has been conducted with sounding rockets launched by Cornell University. In 2000, a Black Brant sounding rocket from White Sands was launched over a storm, attaining a height of nearly 980,000 feet. Onboard equipment, including electric and magnetic field instruments, provided the first direct observation of the parallel electric field within 62 miles horizontal distance from the lightning.[155]

By definition, NASA's NF-106B flights in the 1980s involved EMP research. Among the overlapping goals of the project was quantification of lightning's electromagnetic effects, and Langley's Felix L. Pitts led the program intended to provide airborne data on lightning-strike characteristics. Bruce Fisher and two other NASA pilots (plus four Air Force pilots) conducted the flights. Fisher analyzed the information he collected in addition to the backseat researchers' data. Those flying as flight-test engineers in the two-seat jet included Harold K. Carney, Jr., NASA's lead technician for EMP measurements.

NASA Langley engineers built ultra-wide-bandwidth digital transient recorders carried in a sealed enclosure in the Dart's missile bay. To acquire the fast lightning transients, they adapted or devised electromagnetic sensors based on those used for measurement of nuclear pulse radiation. To aid understanding of the lightning transients recorded on the jet, a team from Electromagnetic Applications, Inc., provided mathematical modeling of the lightning strikes to the aircraft. Owing to the extra hazard of lightning strikes, the F-106 was fueled with JP-5, which is less volatile than the then-standard JP-4. Data compiled from dedicated EMP flights permitted statistical parameters to be established for lightning encounters. The F-106's onboard sensors showed that lightning strikes to aircraft include bursts of pulses that are shorter in duration, but more frequent, than previously thought. Additionally, the bursts are more numerous than the better-known strikes involving cloud-to-Earth flashes.[156]

Rocket-borne sensors provided the first ionospheric observations of lightning-induced electromagnetic waves from ELF through the medium-frequency (MF) bands. The payload consisted of a NASA double-probe electric field sensor borne into the upper atmosphere by a Black Brant sounding rocket that NASA launched over "an extremely active thunderstorm cell." This mission, named Thunderstorm III, measured lightning EMPs up to 2 megahertz (MHz). Below 738,000 feet, a rising whistler wave was found with a nose-whistler wave shape and a propagating frequency near 80 kHz. The results confirmed speculation that the leading intense edge of the lightning EMP was borne on 50-125-kHz waves.[157]

Electromagnetic compatibility is essential to spacecraft performance. The requirement has long been recognized: the insulating surfaces on early geosynchronous satellites were charged by geomagnetic substorms to the point where discharges occurred. The EMPs from such discharges coupled into electronic systems, potentially disrupting satellites. Laboratory tests on insulator charging indicated that discharges could be initiated at insulator edges, where voltage gradients could exist.[158]

Apart from observation and study, detecting electromagnetic pulses is a step toward avoidance. Most lightning detection systems include an antenna that senses atmospheric discharges and a processor to determine whether the flashes are lightning or static charges, based upon their electromagnetic traits. Generally, ground-based weather surveillance is more accurate than an airborne system, owing to the greater number of sensors. For instance, ground-based systems employ numerous antennas hundreds of miles apart to detect a lightning stroke's radio-frequency (RF) pulses. When an RF flash occurs, electromagnetic pulses speed outward from the bolt at essentially the speed of light. Because the antennas cover a large area of Earth's surface, they are able to triangulate the bolt's site of origin from differences in arrival time. Based upon known values, the RF data can determine with considerable accuracy the strength or severity of a lightning bolt.
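The arrival-time triangulation described above can be made concrete with a small numerical sketch. Everything below is illustrative: the station coordinates are invented, and operational networks use far more elaborate processing than this simple Gauss-Newton fit to time differences of arrival.

```python
# Locate a lightning stroke from RF pulse arrival times at four stations.
import numpy as np

C = 299792.458  # propagation speed, km/s (RF pulses travel at light speed)
stations = np.array([[0.0, 0.0], [300.0, 0.0], [0.0, 300.0], [300.0, 300.0]])

def arrival_times(src: np.ndarray, t0: float = 0.0) -> np.ndarray:
    """Simulated arrival time at each station for a strike at src (km)."""
    return t0 + np.linalg.norm(stations - src, axis=1) / C

def locate(times: np.ndarray) -> np.ndarray:
    """Gauss-Newton fit of (x, y) to time differences relative to station 0,
    which eliminates the unknown strike time t0."""
    pos = np.array([150.0, 150.0])            # initial guess: center of network
    for _ in range(25):
        d = np.linalg.norm(stations - pos, axis=1)
        resid = (times - times[0]) - (d - d[0]) / C   # measured minus modeled TDOA
        J = (pos - stations) / d[:, None] / C         # d(tau_i)/d(pos)
        J = J - J[0]                                  # differenced against station 0
        step, *_ = np.linalg.lstsq(J[1:], resid[1:], rcond=None)
        pos = pos + step
    return pos

print(locate(arrival_times(np.array([120.0, 210.0]))))  # recovers ~[120. 210.]
```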

Space-based lightning detection systems require satellites that, while more expensive than ground-based systems, provide instantaneous visual monitoring. Onboard cameras and sensors not only spot lightning bolts but also record them for analysis. NASA launched its first lightning-detection satellite in 1995, and the Lightning Imaging Sensor, which analyzes lightning through rainfall, was launched 2 years later. From approximately 1993, low-Earth orbit (LEO) space vehicles carried increasingly sophisticated equipment requiring increased power levels. Previously, satellites used 28-volt DC power systems as a legacy of the commercial and military aircraft industry. At those voltage levels, plasma interactions in LEO were seldom a concern. But use of high-voltage solar arrays increased concerns with electromagnetic compatibility and the potential effects of EMPs. Consequently, spacecraft design, testing, and performance assumed greater importance.

NASA researchers noted a recurring pattern wherein insulating surfaces on geosynchronous satellites were charged by geomagnetic substorms, building up to electrical discharges. The resultant electromagnetic pulses can couple into satellite electronic systems, creating potentially disruptive results. Reducing power loss received a high priority, and the laboratory tests on insulator charging noted above showed that discharges could be initiated at insulator edges, where voltage gradients could exist. The benefits of such tests, coupled with greater empirical knowledge, afforded greater operating efficiency, partly because of greater EMP protection.[159]

Research into lightning EMPs remains a major focus. In 2008, Stanford's Dr. Robert A. Marshall and his colleagues reported on time-domain modeling techniques to study lightning-induced perturbations of VLF transmitter signals, known as "early VLF events." Marshall explained:

This mechanism involves electron density changes due to electromagnetic pulses from successive in-cloud lightning discharges associated with cloud-to-ground discharges (CGs), which are likely the source of continuing current and much of the charge moment change in CGs. Through time-domain modeling of the EMP we show that a sequence of pulses can produce appreciable density changes in the lower ionosphere, and that these changes are primarily electron losses through dissociative attachment to molecular oxygen. Modeling of the propagating VLF transmitter signal through the disturbed region shows that perturbed regions created by successive horizontal EMPs create measurable amplitude changes.[160]

However, the researchers found that modeling optical signatures was difficult when observation was limited by line of sight, especially for ground-based observers. Observation was further complicated by clouds and distance, because elves (ring-shaped optical flashes high in the ionosphere) and "sprites" (large-scale discharges over thunderclouds) were mostly seen at ranges of 185 to 500 statute miles. Consequently, the originating lightning usually was not visible. But empirical evidence shows that an EMP from lightning is extremely short-lived compared to the time light takes to propagate across an elve's radius. Observers therefore learned to recognize that the illuminated area at a given moment appears as a thin ring rather than as a filled disk.[161]
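
The thin-ring appearance follows from simple geometry; the relation below is an illustrative sketch rather than a result from the cited study. If the EMP expands from the parent stroke as a thin spherical shell of radius ct and excites optical emission where it crosses an ionospheric layer at altitude h, the glowing region at time t is a ring of radius

\[
r(t) = \sqrt{(ct)^2 - h^2}, \qquad t \ge \frac{h}{c},
\]

and because the shell is only as thick as the EMP is brief, only a narrow annulus is luminous at any instant even though the ring itself expands rapidly.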

In addition to the effects of EMPs upon personnel directly engaged with aircraft or space vehicles, concern was voiced about researchers being exposed to simulated pulses. Facilities conducting EMP tests upon avionics and communications equipment were a logical area of investigation, because some EMP simulators had the potential to expose operators and the public to electromagnetic fields of varying intensities, much as naturally generated lightning bolts can. In 1988, a study of bioelectromagnetic effects upon humans, indexed by the NASA Astrophysics Data System, was released. The study stated, "Evidence from the available database does not establish that EMPs represent either an occupational or a public health hazard." Both laboratory research and years of observations on staffs of EMP manufacturing and simulation facilities indicated "no acute or short-term health effects." The study further noted that the occupational exposure guideline for EMPs is 100 kilovolts per meter, "which is far in excess of usual exposures with EMP simulators."[162]

NASA's studies of EMP effects benefited nonaerospace communities. The Lightning Detection and Ranging (LDAR) system that enhanced a safe work environment at Kennedy Space Center was extended to private industry. Cooperation with private enterprises enhances commercial applications not only in aviation but in corporate research, construction, and the electric utility industry. For example, while two-dimensional commercial systems are limited to cloud-to-ground lightning, NASA's three-dimensional LDAR provides precise location and elevation of in-cloud and cloud-to-cloud pulses by measuring the arrival times of their EMPs.

Nuclear- and lightning-caused EMPs share common traits. Nuclear EMPs involve three components, including the "E2" segment, which is similar to lightning. Nuclear EMPs also rise faster than conventional circuit breakers can handle: most are intended to stop millisecond spikes caused by lightning flashes rather than microsecond spikes from a high-altitude nuclear explosion. The connection between ionizing radiation and lightning was readily demonstrated during the "Mike" nuclear test at Eniwetok Atoll in November 1952. The yield was 10.4 megatons, with gamma rays causing at least five lightning flashes in the ionized air around the fireball. The bolts descended almost vertically from the cloud above the fireball to the water. The observation demonstrated that, by causing atmospheric ionization, nuclear radiation can trigger a shorting of the natural vertical electric gradient, resulting in a lightning bolt.[163]

Thus, research overlap between thermonuclear and lightning-generated EMPs is unavoidable. Apart from the Agency's broader charter to conduct lightning-strike research, NASA's workhorse F-106B was employed in a joint NASA-USAF program to compare the electromagnetic effects of lightning and nuclear detonations. In 1984, Felix L. Pitts of NASA Langley proposed a cooperative venture, leading to the Air Force lending Langley an advanced, 10-channel recorder for measuring electromagnetic pulses.

Langley used the recorder on F-106 test flights, vastly expanding its capability to measure magnetic and electrical change rates, as well as currents and voltages on wires inside the Dart. In July 1993, an Air Force researcher flew in the rear seat to operate the advanced equipment, and 72 lightning strikes were recorded. In EMP tests at Kirtland Air Force Base, the F-106 was exposed to a nuclear electromagnetic pulse simulator while mounted on a special test stand and during flybys. NASA's Norman Crabill and Lightning Technologies' J. A. Plumer participated in the Air Force Weapons Laboratory review of the acquired data.[164]

With helicopters becoming ever more complex and increasingly dependent upon electronics, it was natural for researchers to extend the Agency's interest in lightning to rotary-wing craft. Drawing upon the Agency's growing confidence in numerical computational analysis, Langley produced a numerical modeling technique to investigate the response of helicopters to both lightning and nuclear EMPs. Using a UH-60A Black Hawk as the focus, the study derived three-dimensional time-domain finite-difference solutions to Maxwell's equations, computing external currents, internal fields, and cable responses. Analysis indicated that the short-circuit current on internal cables was generally greater for lightning, while the open-circuit voltages were slightly higher for nuclear-generated EMPs. As anticipated, the lightning response was found to be highly dependent upon the rise time of the injected current. Data showed that coupling levels to cables in a helicopter are 20 to 30 decibels (dB) greater than in a fixed-wing aircraft.[165]
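
The finite-difference time-domain (FDTD) approach named above marches Maxwell's equations forward in time on a staggered grid. The one-dimensional sketch below is illustrative only: the Langley study used a full three-dimensional grid with airframe geometry and cable-coupling models, and every constant here is an assumption.

    # Minimal 1-D FDTD sketch in normalized units (Courant number 0.5).
    import numpy as np

    N_CELLS, N_STEPS = 400, 1000
    ez = np.zeros(N_CELLS)       # electric field samples
    hy = np.zeros(N_CELLS - 1)   # magnetic field samples, staggered half a cell

    def source(n):
        # Double-exponential waveform, a common idealization of a lightning
        # injection current (fast rise, slow decay); constants are illustrative.
        return np.exp(-n / 80.0) - np.exp(-n / 10.0)

    for n in range(N_STEPS):
        hy += 0.5 * (ez[1:] - ez[:-1])        # update H from the curl of E
        ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])  # update E from the curl of H
        ez[50] += source(n)                   # hard source at cell 50

    print(f"peak |Ez| on the grid after {N_STEPS} steps: {np.abs(ez).max():.3f}")

The double-exponential drive mimics the fast-rising injected current to which, as the text notes, the computed lightning response proved so sensitive.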

Glass Cockpit

As aircraft systems became more complex and the amount of navigation, weather, and air traffic information available to pilots grew, the nostalgic days of "stick and rudder" men (and women) gave way to "cockpit managers." Mechanical, analog dials showing a single piece of information (e. g., airspeed or altitude) weren't sufficient to give pilots the full status of their increasingly complicated aircraft flying in an increasingly crowded sky. The solution came from engineers at NASA's Langley Research Center in Hampton, VA, who worked with key industry partners to come up with an electronic flight display—what is generally known now as the glass cockpit—that took advantage of powerful, small computers and liquid crystal display (LCD) flat panel technology. Early concepts of the glass cockpit were flight-proven using NASA's Boeing 737 flying laboratory and eventually certified for use by the FAA.[233]

A prototype "glass cockpit" that replaces analog dials and mechanical tapes with digitally driven flat panel displays is installed inside the cabin of NASA's 737 airborne laboratory, which tested the new hardware and won support for the concept in the aviation community. NASA.

According to a NASA fact sheet,

The success of the NASA-led glass cockpit work is reflected in the total acceptance of electronic flight displays beginning with the introduction of the Boeing 767 in 1982. Airlines and their passengers, alike, have benefitted. Safety and efficiency of flight have been increased with improved pilot understanding of the airplane's situation relative to its environment.

The cost of air travel is less than it would be with the old technology and more flights arrive on time.[234]

After developing the first glass cockpits capable of displaying basic flight information, NASA has continued working to make more information available to pilots,[235] while remaining conscious of information overload,[236] of the flight crew's ability to operate the cockpit displays without distraction during critical phases of flight (takeoff and landing),[237] and of the effectiveness of training pilots to use the glass cockpit.[238]