NASA’S CONTRIBUTIONS TO AERONAUTICS

The New Breed

The intense U.S. research and development programs on high-angle-of-attack technology of the 1970s and 1980s ushered in a new era of carefree maneuvering for tactical aircraft. New options for close-in combat were now available to military pilots, and more importantly, departure/spin accidents were dramatically reduced. Design tools had been sharpened, and the widespread introduction of sophisticated digital flight control systems finally permitted the implementation of automatic departure and spin prevention systems. These advances did not go unnoticed by foreign designers, and emerging threat aircraft were rapidly developed and exhibited with comparable high-angle-of-attack capabilities.[1321] As the Air Force and Navy prepared for the next generation of fighters to replace the F-15 and F-14, the integration of superior maneuverability at high angles of attack and other performance- and signature-related capabilities became the new challenge.

HUMAN FACTORS

Human factors played a part in several of the key issues already discussed: confidence in lift fans, concern about approaching the fan-stall boundary, high pilot workload tasks, and conversion controller design.

The human factor issue that concerned the writer the most was the cockpit arrangement. An XV-5A and its pilot were probably lost because of the inadvertent actuation of an incorrectly specified and improperly positioned conversion switch. This tragic lesson must not be repeated, and careful human factor studies must be included in the design of modern lift-fan aircraft such as the SSTOVLF. Human factor considerations should be incorporated early in the design and development of the SSTOVLF, from the first simulation effort on through the introduction of the production aircraft. It is therefore the writer’s hope that SSTOVLF designers will remember the past as they design for the future and take heed of the "lessons learned.”

Fatal Accident #1

One of the two XV-5As being flown at Edwards AFB during an official flight demonstration on the morning of April 27, 1965, crashed onto the lakebed, killing Ryan’s Chief Engineering Test Pilot, Lou Everett. The two aircraft were simultaneously demonstrating the high- and low-speed capabilities of the Vertifan.

During a high-speed pass, Everett’s aircraft pushed over into a 30° dive and never recovered. The accident board concluded that the uncontrolled dive was the result of an accidental actuation of the conversion switch that took place when the aircraft’s speed was far in excess of the safe jet-mode to fan-mode conversion speed limit. The conversion switch (a simple 2-position toggle switch) was, at the time, (improperly) located on the collective for pilot "convenience.” It was speculated that the pilot inadvertently hit the conversion switch during the high-speed pass, which initiated the conversion sequence: 15° of nose-down stabilizer movement was accompanied by actuation of the diverter valves to the fan-mode. The resulting stabilizer pitching moment created an uncontrollable nose-down flight path. (Note: Mr. Everett initiated a low-altitude (rocket) ejection, but tragically, the ejection seat was improperly rigged…another lesson learned!) As a result of this accident, the conversion switch was changed to a lift-lock toggle and relocated on the main instrument panel ahead of the collective lever control.

Spacecraft and Electrodynamic Effects

With the advent of piloted orbital flight, NASA anticipated the potential effects of lightning upon launch vehicles in the Mercury, Gemini, and Apollo programs. Sitting atop immense boosters, the spacecraft were especially vulnerable on their launch pads and in the liftoff phase. One NASA lecturer warned his audience in 1965 that explosive squibs, detonators, vapors, and dust were particularly vulnerable to static electrical detonation; the amount of energy required to initiate detonation was "very small,” and, as a consequence, their triggering was "considerably more frequent than is generally recognized.”[146]

As mentioned briefly, on November 14, 1969, at 11:22 a.m. EST, Apollo 12, crewed by astronauts Charles "Pete” Conrad, Richard F. Gordon, and Alan L. Bean, thundered aloft from Launch Complex 39A at the Kennedy Space Center. Launched amid a torrential downpour, it disappeared from sight almost immediately, swallowed up amid dark, foreboding clouds that cloaked even its immense flaring exhaust. The rain clouds produced an electrical field, setting the stage for a dual lightning strike triggered by the craft itself. As historian Roger Bilstein wrote subsequently:

Within seconds, spectators on the ground were startled to see parallel streaks of lightning flash out of the cloud back to the launch pad. Inside the spacecraft, Conrad exclaimed "I don’t know what happened here. We had everything in the world drop out.” Astronauts Pete Conrad, Richard Gordon, and Alan Bean had seen a brilliant flash of light inside the spacecraft, and instantaneously, red and yellow warning lights all over the command module panels lit up like an electronic Christmas tree. Fuel cells stopped working, circuits went dead, and the electrically operated gyroscopic platform went tumbling out of control.

The spacecraft and rocket had experienced a massive power failure. Fortunately, the emergency lasted only seconds, as backup power systems took over and the instrument unit of the Saturn V launch vehicle kept the rocket operating.[147]

The electrical disturbance triggered the loss of nine solid-state instrumentation sensors, none of which, fortunately, was essential to the safety or completion of the flight. It resulted in the temporary loss of communications, varying between 30 seconds and 3 minutes, depending upon the particular system. Rapid engagement of backup systems permitted the mission to continue, though three fuel cells were automatically (and, as subsequently proved, unnecessarily) shut down. Afterward, NASA incident investigators concluded that though lightning could be triggered by the long combined length of the Saturn V rocket and its associated exhaust plume, "The possibility that the Apollo vehicle might trigger lightning had not been considered previously.”[148]

Apollo 12 constituted a dramatic wake-up call on the hazards of mixing large rockets and lightning. Afterward, the Agency devoted extensive efforts to assessing the nature of the lightning risk and seeking ways to mitigate it. The first fruit of this detailed study effort was the issuance, in August 1970, of revised electrodynamic design criteria for spacecraft. It stipulated various means of spacecraft and launch facility protection, including

1. Ensuring that all metallic sections are connected electrically (bonded) so that the current flow from a lightning stroke is conducted over the skin without any gaps where sparking would occur or current would be carried inside.

2. Protecting objects on the ground, such as buildings, by a system of lightning rods and wires over the outside to carry the lightning stroke to the ground.

3. Providing a cone of protection in the lightning protection plan for Saturn Launch Complex 39.

4. Providing protection devices in critical circuits.

5. Using systems that have no single failure mode; i. e., the Saturn V launch vehicle uses triple-redundant circuitry on the auto-abort system, which requires two out of three of the signals to be correct before abort is initiated.

6. Appropriate shielding of units sensitive to electromagnetic radiation.[149]


A 1973 NASA projection of likely paths taken by lightning striking a composite structure Space Shuttle, showing attachment and exit points. NASA.

The stakes involved in lightning protection increased greatly with the advent of the Space Shuttle program. Officially named the Space Transportation System (STS), NASA’s Space Shuttle was envisioned as a routine space logistical support vehicle and was touted by some as a "space age DC-3,” a reference to the legendary Douglas airliner that had galvanized air transport on a global scale. Large, complex, and expensive, it required careful planning to avoid lightning damage, particularly surface burnthroughs that could constitute a flight hazard (as, alas, the loss of Columbia would tragically demonstrate three decades subsequently). NASA predicated its studies of Shuttle lightning vulnerabilities on two major strokes, one having a peak current of 200 kA at a current rate of change of 100 kA per microsecond, and a second of 100 kA at a rate of change of 50 kA per microsecond. Agency researchers also modeled various intermediate currents of lower energies. Analysis indicated that the Shuttle and its launch stack (consisting of the orbiter, mounted on a liquid-fuel tank flanked by two solid-fuel boosters) would most likely have lightning entry points at the tip of its tankage and boosters, the leading edges of its wings at mid-span and at the wingtip, on its upper nose surface, and (least likely) above the cockpit. Likely exit points were the nozzles of the two solid-fuel boosters, the trailing-edge tip of the vertical fin, the trailing edge of the body flap, the trailing edges of the wing tip, and (least likely) the nozzles of its three liquid-fuel Space Shuttle main engines (SSMEs).[150] Because the Shuttle orbiter was, effectively, a large delta aircraft, data and criteria assembled previously for conventional aircraft furnished a good reference base for Shuttle lightning prediction studies, even studies dating to the early 1940s. As well, Agency researchers undertook extensive tests to guard against inadvertent triggering of the Shuttle’s solid rocket boosters (SRBs), because their premature ignition would be catastrophic.[151]
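The two design strokes lend themselves to a simple worked illustration. The sketch below models a stroke as a linear current ramp at the specified rate of rise up to the peak, followed by an exponential decay; the ramp-and-decay shape and the 50-microsecond decay constant are simplifying assumptions made here for illustration, not part of NASA's criteria.

```python
import math

def stroke_current_ka(t_us, peak_ka=200.0, rise_ka_per_us=100.0, tau_decay_us=50.0):
    """Illustrative lightning return-stroke current, in kA, at time t_us (microseconds).

    The peak (200 kA) and rate of rise (100 kA/us) match the worst-case
    design stroke described in the text; the waveform shape and decay
    constant are assumptions for illustration only.
    """
    if t_us <= 0.0:
        return 0.0
    t_front = peak_ka / rise_ka_per_us   # 2 us front for the worst-case stroke
    if t_us <= t_front:
        return rise_ka_per_us * t_us     # linear rise at the specified rate
    return peak_ka * math.exp(-(t_us - t_front) / tau_decay_us)
```

Under this model the worst-case stroke reaches its 200 kA peak two microseconds after onset, and the second design stroke (100 kA at 50 kA per microsecond) happens to have the same two-microsecond front time.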

Prudently, NASA ensured that the servicing structure on the Shuttle launch complex received an 80-foot lightning mast plus safety wires to guide strikes to the ground rather than through the launch vehicle. Dramatic proof of the system’s effectiveness occurred in August 1983, when lightning struck the launch pad of the Shuttle Challenger before the launch of mission STS-8, commanded by Richard H. Truly. It was the first Shuttle night launch, and it subsequently proceeded as planned.

The hazards of what lightning could do to a flight control system (FCS) were dramatically illustrated on March 26, 1987, when a bolt led to the loss of AC-67, an Atlas-Centaur mission carrying FLTSATCOM 6, a TRW, Inc., communications satellite developed for the Navy’s Fleet Satellite Communications system. Approximately 48 seconds after launch, a cloud-to-ground lightning strike generated a spurious signal into the Centaur launch vehicle’s digital flight control computer, which then sent a hard-over engine command. The resultant abrupt yaw overstressed the vehicle, causing its virtually immediate breakup. Coming after the weather-related loss of the Space Shuttle Challenger the previous year, the loss of AC-67 was particularly disturbing. In both cases, accident investigators found that the two Kennedy teams had not taken adequate account of meteorological conditions at the time of launch.[152]

The accident led to NASA establishing a Lightning Advisory Panel to provide parameters for determining whether a launch should proceed in the presence of electrical activity. As well, it understandably stimulated continuing research on the electrodynamic environment at the Kennedy Space Center and on vulnerabilities of launch vehicles and facilities at the launch site. Vulnerability surveys extended to in-flight hardware, launch and ground support equipment, and ultimately almost any facility in areas of thunderstorm activity. Specific items identified as most vulnerable to lightning strikes were electronic systems, wiring and cables, and critical structures. The engineering challenge was to design methods of protecting those areas and systems without adversely affecting structural integrity or equipment performance.

To improve the fidelity of existing launch models and develop a better understanding of electrodynamic conditions around the Kennedy Space Center, between September 14 and November 4, 1988, NASA flew a modified single-seat single-engine Schweizer powered sailplane, the Special Purpose Test Vehicle (SPTVAR), on 20 missions over the spaceport and its reservation, measuring electrical fields. These trials took place in consultation with the Air Force (Detachment 11 of its 4th Weather Wing had responsibility for Cape lightning forecasting) and the New Mexico Institute of Mining and Technology, which selected candidate cloud forms for study and then monitored the real-time acquisition of field data. Flights ranged from 5,000 to 17,000 feet, averaged over an hour in duration, and took off from late morning to as late as 8 p.m. The SPTVAR aircraft dodged around electrified clouds as high as 35,000 feet, while taking measurements of electrical fields, the net airplane charge, atmospheric liquid water content, ice particle concentrations, sky brightness, accelerations, air temperature and pressure, and basic aircraft parameters, such as heading, roll and pitch angles, and spatial position.[153]

After the Challenger and AC-67 launch accidents, the ongoing Shuttle program remained a particular subject of Agency concern, particularly the danger of lightning currents striking the Shuttle during rollout, on the pad, or upon liftoff. As verified by the SPTVAR survey, large currents (greater than 100 kA) were extremely rare in the operating area. Researchers concluded that worst-case figures for an on-pad strike ran from 0.0026 to 0.11953 percent. Trends evident in the data showed that specific operating procedures could further reduce the likelihood of a lightning strike. For instance, a study of all lightning probabilities at Kennedy Space Center observed, "If the Shuttle rollout did not occur during the evening hours, but during the peak July afternoon hours, the resultant nominal probabilities for a >220 kA and >50 kA lightning strike are 0.04% and 0.21%, respectively. Thus, it does matter ‘when’ the Shuttle is rolled out.”[154] Although estimates for a triggered strike of a Shuttle in ascent were not precisely determined, researchers concluded that a triggered strike (one caused by the moving vehicle itself) of any magnitude on an ascending launch vehicle was 140,000 times likelier than a direct hit on the pad. Because Cape Canaveral constitutes America’s premier space launch center, continued interest in lightning at the Cape and its potential impact upon launch vehicles and facilities will remain a major NASA concern.
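Small per-mission percentages like those quoted above compound over a flight program, which is elementary to work out. The sketch below treats each rollout as an independent trial (an assumption made here purely for illustration) and computes the chance of at least one on-pad strike across a series of missions, using the worst-case upper figure from the text.

```python
def prob_at_least_one_strike(p_per_mission, n_missions):
    """P(at least one strike in n independent missions) = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p_per_mission) ** n_missions

# Worst-case on-pad figure quoted in the text: 0.11953 percent per mission.
p_worst = 0.0011953
risk_100 = prob_at_least_one_strike(p_worst, 100)
# Over 100 missions, the cumulative worst-case risk grows past 10 percent,
# which is why operational choices such as rollout timing mattered.
```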

Aviation Safety Program

After the in-flight explosion and crash of TWA 800 in July 1996, President Bill Clinton established a Commission on Aviation Safety and Security, chaired by Vice President Al Gore. The Commission’s emphasis was to find ways to reduce the number of fatal air-related accidents. Ultimately, the Commission challenged the aviation community to lower the fatal aircraft accident rate by 80 percent in 10 years and 90 percent in 25 years.

NASA’s response to this challenge was to create in 1997 the Aviation Safety Program (AvSP) and, as seen before, partner with the FAA and the DOD to conduct research on a number of fronts.[226]

NASA’s AvSP was set up with three primary objectives: (1) eliminate accidents during targeted phases of flight, (2) increase the chances that passengers would survive an accident, and (3) strengthen the foundation upon which aviation safety technologies are based. From those objectives, NASA established six research areas, some having to do directly with making safer skyways and others pointed at increasing aircraft safety and reliability. All produced results, as noted in the referenced technical papers. Those research areas included accident mitigation,[227] systemwide accident prevention,[228] single aircraft accident prevention,[229] weather accident prevention,[230] synthetic vision,[231] and aviation system modeling and monitoring.[232]

Of particular note is a trio of contributions with lasting influence today: the introduction and incorporation of the glass cockpit into the pilot’s work environment and a pair of programs to gather key data that can be processed into useful, safety-enhancing information.

Ames’s SimLabs

NASA’s Ames Research Center in California is home to some of the more sophisticated and powerful simulation laboratories, which Ames calls SimLabs. The simulators support a range of research, with an emphasis on aerospace vehicles, aerospace systems and operations, human factors, accident investigations, and studies aimed at improving aviation safety. They all have played a role in making new air traffic control concepts and associated technology work. The SimLabs include:

• Future Flight Central, which is a national air traffic control and Air Traffic Management simulation facility dedicated to exploring solutions to the growing problem of traffic congestion and capacity, both in the air and on the ground. The simulator is a two-story facility with a 360-degree, full-scale, real-time simulation of an airport, in which new ideas and technology can be tested or personnel can be trained.[275]

• Vertical Motion Simulator, which is a highly adaptable flight simulator that can be configured to represent any aerospace vehicle, whether real or imagined, and still provide a high-fidelity experience for the pilot. According to a facility fact sheet, existing vehicles that have been simulated include a blimp, helicopters, fighter jets, and the Space Shuttle orbiter. The simulator can be integrated with Future Flight Central or any of the air traffic control simulators to provide real-time interaction.[276]

• Crew-Vehicle Systems Flight Facility,[277] which itself has three major simulators, including a state-of-the-art Boeing 747 motion-based cockpit,[278] an Advanced Concept Flight Simulator,[279] and an Air Traffic Control Simulator consisting of 10 PC-based computer workstations that can be used in a variety of modes.[280]


A full-sized Air Traffic Control Simulator with a 360-degree panorama display, called Future Flight Central, is available to test new systems or train controllers in extremely realistic scenarios. NASA.

Crew Factors and Resource Management Program

After a series of airline accidents in the 1970s involving aircraft with no apparent problems, findings were presented at a 1979 NASA workshop indicating that most aviation accidents were indeed caused by human error, rather than mechanical malfunctions or weather. Specifically, communication, leadership, and decision-making failures within the cockpit were causing accidents.[385] The concept of Cockpit Resource Management (now often referred to as Crew Resource Management, or CRM) was thus introduced. It describes the process of helping aircrews reduce errors in the cockpit by improving crew coordination and better utilizing all available resources on the flight deck, including information, equipment, and people.[386] Such training has been shown to improve the performance of aircrew members and thus increase efficiency and safety.[387] It is considered so successful in reducing accidents caused by human error that the aviation industry has almost universally adopted CRM training. Such training is now considered mandatory not only by NASA, but also by the FAA, the airlines, the military, and even a variety of nonaviation fields, such as medicine and emergency services.[388] Most recently, measures have been taken to further expand mandatory CRM training to all U.S. Federal Aviation Regulations Part 135 operators, including commuter aircraft. Also included is Single-Pilot Resource Management (SRM) training for on-demand pilots who fly without additional crewmembers.[389]

Presently, the NASA Ames Human Systems Integration Division’s Flight Cognition Laboratory is involved with evaluating the thought processes that determine the behavior of aircrews, controllers, and others involved with flight operations. Among the areas under study are prospective memory, concurrent task management, stress, and visual search. As always, the Agency actively shares this information with other governmental and nongovernmental aviation organizations, with the goal of increasing flight safety.[390]

Dynamic Stability: Early Applications and a Lesson Learned

When Langley began operations of its 12-Foot Free-Flight Tunnel in 1939, it placed a high priority on establishing correlation with full-scale flight results. Immediately, requests came from the Army and Navy for correlation of model tests with flight results for the North American BT-9, Brewster XF2A-1, Vought-Sikorsky V-173, Naval Aircraft Factory SBN-1, and Vought-Sikorsky XF4U-1. Meanwhile, the NACA used a powered model of the Curtiss P-36 fighter for an in-house calibration of the free-flight process.[466]

The results of the P-36 study were, in general, in fair agreement with airplane flight results, but the dynamic longitudinal stability of the model was found to be greater (more damped) than that of the airplane, and the effectiveness of the model’s ailerons was less than that of the airplane. Both discrepancies were attributed to aerodynamic deficiencies of the model caused by the low Reynolds number of the tunnel test and led to one of the first significant lessons learned with the free-flight technique. Using the wing airfoil shape (NACA 2210) of the full-scale P-36 for the model resulted in poor wing aerodynamic performance at the low Reynolds number of the model flight tests. The maximum lift of the model and the angle of attack for maximum lift were both decreased because of scale effects. As a result, the stall occurred at a slightly lower angle of attack for the model. After this experience, researchers conducted an exhaustive investigation of other airfoils that might have more satisfactory performance at low Reynolds numbers. In planning for subsequent tests, the researchers were trained to anticipate the potential existence of scale effects for certain airfoils, even at relatively low angles of attack. As a result of this experience, the wing airfoils of free-flight tunnel models were sometimes modified to airfoil shapes that provided better results at low Reynolds number.[467]
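The scale effect at issue is driven by Reynolds number, Re = V·c/ν (flight speed times wing chord over kinematic viscosity). The figures below are hypothetical, chosen only to show the order of magnitude of the mismatch between a small free-flight model and a full-scale airplane; they are not measured P-36 or tunnel values.

```python
def reynolds_number(speed_mps, chord_m, nu_m2_per_s=1.46e-5):
    """Chord-based Reynolds number Re = V * c / nu, for sea-level air."""
    return speed_mps * chord_m / nu_m2_per_s

# Hypothetical, illustrative figures: a free-flight model flies far slower
# and has a much smaller chord than the airplane it represents.
re_airplane = reynolds_number(90.0, 2.0)   # full scale: ~90 m/s, ~2 m chord
re_model    = reynolds_number(12.0, 0.2)   # model: ~12 m/s, ~0.2 m chord

# With these numbers the model operates at a Reynolds number roughly 75 times
# lower, in the range where airfoil maximum lift and stall angle degrade.
```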

General-Aviation Spin Technology

The dramatic changes in aircraft configurations after World War II required almost complete commitment of the Spin Tunnel to development programs for the military, resulting in stagnation of any research for light personal-owner-type aircraft. In subsequent years, designers had to rely on the database and design guidelines that had been developed based on experiences during the war. Unfortunately, stall/spin accidents in the early 1970s in the general aviation community increased at an alarming rate. Even more troublesome, on several occasions aircraft that had been designed according to the NACA tail-damping power factor criterion had exhibited unsatisfactory recovery characteristics, and the introduction of features such as advanced general aviation airfoils resulted in concern over the technical adequacy and state of the database for general aviation configurations.

Finally, in the early 1970s, the pressure of new military aircraft development programs eased, permitting NASA to embark on new studies related to spin technology for general aviation aircraft. A NASA General Aviation Spin Research program was initiated at Langley that focused on the use of radio-control and spin tunnel models to assess the impact of design features on spin and recovery characteristics, and to develop testing techniques that could be used by the industry. The program also included the acquisition of several full-scale aircraft that were modified for spin tests to produce data for correlation with model results.[515]

One of the key objectives of the program was to evaluate the impact of tail geometry on spin characteristics. The approach taken was to design alternate tail configurations so as to produce variability in the TDPF parameter by changing the vertical and horizontal locations of the horizontal tail. A spin tunnel model of a representative low-wing configuration was constructed with four interchangeable tails, and results for the individual tail configurations were compared with predictions based on the tail design criteria. The range of tails tested included conventional cruciform-tail configurations, low horizontal tail locations, and a T-tail configuration.

Involved in a study of spinning characteristics of general-aviation configurations in the 1970s were Langley test pilot Jim Patton, center, and researchers Jim Bowman, left, and Todd Burk. NASA.

As expected, results of the spin tunnel testing indicated that tail configuration had a large influence on spin and recovery characteristics, but many other geometric features also influenced the characteristics, including fuselage cross-sectional shape. In addition, seemingly small configuration features such as wing fillets at the wing trailing-edge juncture with the fuselage had large effects. Importantly, the existing TDPF criterion for light airplanes did not correctly predict the spin recovery characteristics of models for some conditions, especially those in which ailerons were deflected. NASA’s report to the industry following the tests stressed that, based on these results, TDPF should not be used to predict spin recovery characteristics. The report did, however, provide a recommended "best practice” approach to the overall design of the airplane’s tail for spin behavior.[516]

As part of its General Aviation Spin Research program, NASA continued to provide information on the design of emergency spin recovery parachute systems.[517] Parachute diameters and riser line lengths were sized based on free-spinning model results for high- and low-wing configurations and a variety of tail configurations. Additionally, guidelines for the design and implementation of the mechanical systems required for parachute deployment (such as mechanical jaws and pyrotechnic deployment) and release of the parachute were documented.

NASA also encouraged industry to use its spin tunnel facility on a fee-paying basis. Several industry teams used the opportunity to conduct proprietary tests of their configurations in the tunnel. For example, the Beech Aircraft Corporation sponsored the first fee-paid test in the Langley Spin Tunnel, for free-spinning model tests of its Model 77 "Skipper” trainer.[518] In such proprietary tests, the industry provided models and personnel for joint participation in the testing experience.

The Advent of Hypersonic Tunnel and Aeroballistic Facilities

John V. Becker at Langley led the way in the development of conventional hypersonic wind tunnels. He built America’s first hypersonic wind tunnel in 1947, with an 11-inch test section and the capability of Mach 6.9 flow. To T. A. Heppenheimer, it is "a major advance in hypersonics,” because Becker had built the discipline’s first research instrument.[582] Becker and Eugene S. Love followed that success with their design of the 20-Inch Hypersonic Tunnel in 1958. Becker, Love, and their colleagues used the tunnel for the investigation of heat transfer, pressure, and forces acting on inlets and complete models at Mach 6. The facility featured an induction drive system that ran for approximately 15 minutes in a nonreturn circuit operating at 220-550 psia (pounds-force per square inch absolute).[583]

The need for higher Mach numbers led to tunnels that did not rely upon the creation of a flow of air by fans. A counterflow tunnel featured a gun that fired a model into a continuous onrushing stream of gas or air, an effective tool for supersonic and hypersonic testing. An impulse wind tunnel created high temperature and pressure in a test gas through an explosive release of energy. That expanded gas burst through a nozzle at hypersonic speeds and over a model in the test section in milliseconds. The two types of impulse tunnels—hotshot and shock—introduced the test gas differently and were important steps in reaching ever-higher speeds, but NASA required even faster tunnels.[584]

The companion to a hotshot tunnel was an arc-jet facility, which was capable of evaluating spacecraft heat shield materials under the extreme heat of planetary reentry. An electric arc preheated the test gas in the stilling chamber upstream of the nozzle to temperatures of 10,000-20,000 °F. Injected under pressure into the nozzle, the heated gas created a flow that was sustainable for several minutes at low densities and supersonic Mach numbers. The electric arc required over 100,000 kilowatts of power. Unlike the hotshot, the arc-jet could operate continually.[585]

NASA combined these different types of nontraditional tunnels into the Ames Hypersonic Ballistic Range Complex in the 1960s.[586] The Ames Vertical Gun Range (1964) simulated planetary impact with various model-launching guns. Ames researchers used the Hypervelocity Free-Flight Aerodynamic Facility (1965) to examine the aerodynamic characteristics of atmospheric entry and hypervelocity vehicle configurations. The research programs investigated Earth atmosphere entry (Mercury, Gemini, Apollo, and Shuttle), planetary entry (Viking, Pioneer-Venus, Galileo, and Mars Science Lab), supersonic and hypersonic flight (X-15), aerobraking configurations, and scramjet propulsion studies. The Electric Arc Shock Tube (1966) enabled the investigation of the effects of radiation and ionization that occurred during high-velocity atmospheric entries. The shock tube fired a gaseous bullet at a light-gas gun, which fired a small model into the onrushing gas.[587]

The NACA also investigated the use of test gases other than air. Designed by Antonio Ferri, Macon C. Ellis, and Clinton E. Brown, the Gas Dynamics Laboratory at Langley became operational in 1951. One facility was a high-pressure shock tube consisting of a constant-area tube 3.75 inches in diameter, a 20-inch test section, a 14-foot-long high-pressure chamber, and a 70-foot-long low-pressure section. The induction drive system consisted of a central 300-psi tank farm that provided heated fluid flow at a maximum speed of Mach 8 in a nonreturn circuit at a pressure of 20 atmospheres. Langley researchers investigated aerodynamic heating and fluid mechanical problems at speeds above the capability of conventional supersonic wind tunnels to simulate hypersonic and space-reentry conditions. For the space program, NASA used pure nitrogen and helium instead of heated air as the test medium to simulate reentry speeds.[588]

NASA built the similar Ames Thermal Protection Laboratory in the early 1960s to solve reentry materials problems for a new generation of craft, whether designed for Earth reentry or the penetration of the atmospheres of the outer planets. A central bank of 10 test cells provided the pressurized flow. Specifically, the Thermal Protection Laboratory found solutions for many vexing heat shield problems associated with the Space Shuttle, interplanetary probes, and intercontinental ballistic missiles.

The Advent of Hypersonic Tunnel and Aeroballistic Facilities

The Continuous Flow Hypersonic Tunnel at Langley in 1961. NASA.

Called the "suicidal wind tunnel” by Donald D. Baals and William R. Corliss because it was self-destructive, the Ames Voitenko Compressor was the only method for replicating the extreme velocities required for the design of interplanetary space probes. It was based on the Voitenko concept from 1965 that a high-velocity explosive, or shaped, charge developed for military use be used for the acceleration of shock waves. Voitenko’s compressor consisted of a shaped charge, a malleable steel plate, and the test gas. At detonation, the shaped charge exerts pressure on the steel plate to drive it and the test gas forward. Researchers at the Ames Laboratory adapted the Voitenko compressor concept to a self-destroying shock tube comprising a 66-pound shaped charge and a glass-walled tube 1.25 inches in diameter and 6.5 feet long. Observation of the tunnel in action revealed that the shock wave traveled well ahead of the rapidly disintegrating tube. The velocities generated, upward of 220,000 feet per second, could not be reached by any other method.[589]
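For perspective on that figure, a quick unit conversion helps. The arithmetic below is an illustrative calculation added here, not part of the original account, and the escape-speed comparison value is a standard reference figure:

```python
# Convert the Voitenko compressor's quoted shock speed (~220,000 ft/s)
# to SI units and compare it with Earth escape speed. Illustrative
# arithmetic only; the comparison value is a standard reference figure,
# not from the source text.
FT_PER_M = 3.28084                     # feet per meter

shock_speed_ms = 220_000 / FT_PER_M    # ~67,000 m/s, i.e., ~67 km/s
earth_escape_ms = 11_186.0             # Earth escape speed at the surface, m/s

print(f"shock speed ≈ {shock_speed_ms / 1000:.1f} km/s, "
      f"about {shock_speed_ms / earth_escape_ms:.1f}× Earth escape speed")
```

At roughly 67 kilometers per second, about six times Earth escape speed, the figure sits squarely in the regime of outer-planet atmospheric entry.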

Langley, building upon a rich history of research in high-speed flight, started work on two tunnels at the moment of transition from the NACA to NASA. Eugene Love designed the Continuous Flow Hypersonic Tunnel for nonstop operation at Mach 10. A series of compressors pushed high-speed air through a 1.25-inch square nozzle into the 31-inch square test section. A 13,000-kilowatt electric resistance heater raised the air temperature to 1,450 °F in the settling chamber, while large water coolers and channels kept the tunnel walls cool. The tunnel became operational in 1962 and proved instrumental in the study of aerodynamic performance and heat transfer on winged reentry vehicles such as the Space Shuttle.[590]
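The scale of that heating requirement follows from standard compressible-flow theory: expanding air isentropically to Mach 10 drops its static temperature by a factor of 21, so without substantial preheating the test gas would approach liquefaction in the test section. The short check below is an illustrative calculation using the textbook perfect-gas relation, not a figure taken from the source:

```python
# Isentropic-flow estimate of test-section static temperature for the
# quoted Continuous Flow Hypersonic Tunnel conditions (Mach 10 flow,
# 1,450 °F settling-chamber temperature). Standard compressible-flow
# relation for a perfect gas with gamma = 1.4 (air).

GAMMA = 1.4

def fahrenheit_to_kelvin(t_f: float) -> float:
    return (t_f - 32.0) * 5.0 / 9.0 + 273.15

def static_temperature(t0_kelvin: float, mach: float, gamma: float = GAMMA) -> float:
    """T = T0 / (1 + (gamma - 1)/2 * M^2), valid for adiabatic perfect-gas flow."""
    return t0_kelvin / (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)

t0 = fahrenheit_to_kelvin(1450.0)   # settling-chamber total temperature, ~1061 K
t = static_temperature(t0, 10.0)    # test-section static temperature, ~51 K
print(f"T0 = {t0:.0f} K, static T at Mach 10 = {t:.0f} K")
```

With the quoted 1,450 °F settling-chamber temperature, the static temperature in the Mach 10 test section comes out near 51 K, illustrating why such high-powered heaters were integral to hypersonic tunnel design.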

The 8-Foot High-Temperature Structures Tunnel, opened in 1967, permitted full-scale testing of components for hypersonic vehicles and spacecraft. By burning methane in air at high pressure and expanding the exhaust through a hypersonic nozzle, Langley researchers could test structures at Mach 7 and at temperatures of 3,000 °F. Arriving too late for the 1960s space program, the tunnel was instrumental in the testing of the insulating tiles used on the Space Shuttle.[591]

NASA researchers Richard R. Heldenfels and E. Barton Geer developed the 9- by 6-Foot Thermal Structures Tunnel to test aircraft and missile structural components operating under the combined effects of aerodynamic heating and loading. The tunnel became operational in 1957 and featured a Mach 3 drive system consisting of 600-psia air stored in a tank farm filled by a high-capacity compressor. The spent air simply exhausted to the atmosphere. Modifications included additional air storage (1957), a high-speed digital data system (1959), a subsonic diffuser (1960), a topping compressor (1961), and a boost heater system that generated 2,000 °F of heat (1963). NASA closed the 9- by 6-Foot Thermal Structures Tunnel in September 1971 after metal fatigue in the air storage field led to an explosion that destroyed part of the facility and nearby buildings.[592]

NASA’s wind tunnels contributed to the growing refinement of spacecraft technology. The multiple design changes made during the transition from the Mercury program to the Gemini program, and the need for more information on the effects of angle of attack, heat transfer, and surface pressure, resulted in a new wind tunnel and flight-test program. Wind tunnel tests of the Gemini spacecraft were conducted in the range of Mach 3.51 to 16.8 at the Langley Unitary Plan facility and at tunnels at AEDC and Cornell University. The flight-test program gathered data from the first four launches and reentries of Gemini spacecraft.[593] Correlation showed that the two independent sets of data agreed.[594]

The Propulsion Perspective

Aerodynamics always constituted an important facet of NACA-NASA GA research, but flight propulsion was no less significant, for the aircraft engine is often termed the "heart” of an airplane. In the 1920s and 1930s, NACA research by Fred Weick, Eastman Jacobs, John Stack, and others had profoundly influenced the efficiency of the piston engine-propeller-cowling combination.[800] Agency work in the early jet age had been no less influential upon improving the performance of turbojet, turboshaft, and turbofan engines, producing data judged "essential to industry designers.”[801]

The rapid proliferation of turbofan-powered GA aircraft—over 2,100 of which were in service by 1978, with 250 more being added each year—stimulated even greater attention.[802] NASA swiftly supported development of a specialized computer-based program for assessing engine performance and efficiency. In 1977, for example, Ames Research Center funded development of GASP, the General Aviation Synthesis Program, by the Aerophysics Research Corporation, to compute propulsion system performance for engine sizing and studies of overall aircraft performance. GASP consisted of an overall program routine, ENGSZ, to determine appropriate fanjet engine size, with specialized subroutines such as ENGDT and NACDG assessing engine data and nacelle drag. Additional subroutines treated performance for propeller powerplants, including PWEPLT for piston engines, TURBEG for turboprops, ENGDAT and PERFM for propeller characteristics and performance, GEARBX for gearbox cost and weight, and PNOYS for propeller and engine noise.[803]
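The paragraph above describes a modular call structure: a top-level sizing routine (ENGSZ) drawing on specialized subroutines for engine data (ENGDT) and nacelle drag (NACDG). The sketch below illustrates that kind of organization in Python. It is a hypothetical schematic only; GASP itself was a FORTRAN program, and the function signatures and placeholder numbers here are invented for illustration, not taken from GASP.

```python
# Schematic illustration of a synthesis-program structure like GASP's:
# a top-level sizing routine (ENGSZ analogue) delegating to engine-data
# and nacelle-drag subroutines (ENGDT, NACDG analogues). All signatures
# and numbers are hypothetical placeholders, not GASP's actual methods.
from dataclasses import dataclass

@dataclass
class EngineData:
    thrust_lbf: float   # rated sea-level static thrust, lbf
    sfc: float          # specific fuel consumption, lb/hr/lbf

def engdt(scale: float) -> EngineData:
    """ENGDT analogue: return engine data scaled from a baseline turbofan."""
    baseline = EngineData(thrust_lbf=3500.0, sfc=0.5)   # placeholder baseline
    return EngineData(baseline.thrust_lbf * scale, baseline.sfc)

def nacdg(thrust_lbf: float) -> float:
    """NACDG analogue: crude nacelle-drag placeholder scaling with engine size."""
    return 0.01 * thrust_lbf

def engsz(required_thrust_lbf: float) -> tuple[EngineData, float]:
    """ENGSZ analogue: size the engine to the thrust requirement, then
    pass the sized engine to the nacelle-drag routine."""
    scale = required_thrust_lbf / engdt(1.0).thrust_lbf
    engine = engdt(scale)
    drag = nacdg(engine.thrust_lbf)
    return engine, drag

engine, drag = engsz(4200.0)
print(f"sized thrust = {engine.thrust_lbf:.0f} lbf, nacelle drag ≈ {drag:.0f} lbf")
```

The design point is the one the source describes: a single driver routine owns the sizing loop, while narrowly scoped subroutines each answer one question (engine data, drag, noise), so components can be swapped without disturbing the overall synthesis.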

Such study efforts reflected the increasing numbers of noisy turbine-powered aircraft operating into over 14,500 airports and airfields in the United States, most in suburban areas, as well as the growing cost of aviation fuel and the consequent quest for greater engine efficiency. NASA had long been interested in reducing jet engine noise, and the Agency’s first efforts to find means of suppressing jet noise dated to the NACA’s final years, in 1957. The needs of the space program had necessarily focused Lewis research primarily on space, but the Center returned vigorously to air-breathing propulsion at the conclusion of the Apollo program, spurred by the widespread introduction of turbofan engines for military and civil purposes and the onset of the first oil crisis in the wake of the 1973 Arab-Israeli War.

Out of this came a variety of cooperative research efforts and programs, including the congressionally mandated ACEE program (Aircraft Energy Efficiency, launched in 1975), the NASA-industry QCSEE (Quiet Clean Short-Haul Experimental Engine) study effort, and the QCGAT (Quiet Clean General Aviation Turbofan) program. All benefited future propulsion studies, the latter two particularly so.[804]

QCGAT, launched in 1975, involved awarding initial study contracts to Garrett AiResearch, General Electric, and Avco Lycoming to explore applying large turbofan technology to GA needs. Next, AiResearch and Avco were selected to build a small turbofan demonstrator engine suitable for GA applications that could meet stringent noise, emissions, and fuel consumption standards using an existing gas-generating engine core. AiResearch and Avco took different approaches, the former with a high-thrust engine suitable for long-range, high-speed, high-altitude GA aircraft (using as a baseline a stretched Lear 35), and the latter with a lower-thrust engine for a lower, slower, intermediate-range design (based upon a Cessna Citation I). Subsequent testing indicated that each company did an excellent job in meeting the QCGAT program goals, each having various strengths. The Avco engine was quieter, and both engines bettered the QCGAT emissions goals for carbon monoxide and unburned hydrocarbons. While the Avco engine was "right at the goal” for oxides of nitrogen, the AiResearch engine was higher, though much better than the baseline TFE-731-2 turbofan used for comparative purposes.
While the AiResearch engine met sea-level takeoff and design cruise thrust goals, the Avco engine missed both, though its measured numbers were nevertheless "quite respectable.” Overall, NASA considered that the QCGAT program, executed on schedule and within budget, constituted "a very successful NASA joint effort with industry,” concluding that it had "demonstrated that noise need not be a major constraint on the future growth of the GA turbofan fleet.”[805] Subsequently, NASA launched GATE (General Aviation Turbine Engines) to explore other opportunities for the application of small turbine technology to GA, awarding study contracts to AiResearch, Detroit Diesel Allison, Teledyne CAE, and Williams Research.[806] GA propulsion study efforts gained renewed impetus through the Advanced General Aviation Transport Experiment (AGATE) program launched in 1994, which is discussed later in this study.