NASA’S CONTRIBUTIONS TO AERONAUTICS

The Quest for Safety Amid Crowded Skies

James Banke

Since 1926 and the passage of the Air Commerce Act, the Federal Government has had a vital commitment to aviation safety. Even before this, however, the NACA championed regulation of aeronautics, the establishment of licensing procedures for pilots and aircraft, and the definition of technical criteria to enhance the safety of air operations. NASA has worked closely with the FAA and other aviation organizations to ensure the safety of America’s air transport network.

WHEN THE FIRST AIRPLANE LIFTED OFF from the sands of Kitty Hawk during 1903, there was no concern of a midair collision with another airplane. The Wright brothers had the North Carolina skies all to themselves. But as more and more aircraft found their way off the ground and then began to share the increasing number of new airfields, the need to coordinate movements among pilots quickly grew. As flight technology matured to allow cross-country trips, methods to improve safe navigation between airports evolved as well. Initially, bonfires lit the airways. Then came light towers, two-way radio, omnidirectional beacons, radar, and—ultimately—Global Positioning System (GPS) navigation signals from space.[181]

Today, the skies are crowded, and the potential for catastrophic loss of life is ever present, as more than 87,000 flights take place each day over the United States. Despite repeated reports of computer crashes or bad weather slowing an overburdened national airspace system, air-related fatalities remain historically low, thanks in large part to the technical advances developed by the National Aeronautics and Space Administration (NASA), but especially to the daily efforts of some 15,000 air traffic controllers keeping a close eye on all of those airplanes.[182]


From an Australian government slide show in 1956, the basic concepts of an emerging air traffic control system are explained to the public. Airways Museum & Civil Aviation Historical Society, Melbourne, Australia (www.airwaysmuseum.com).

All of those controllers work for, or are under contract to, the Federal Aviation Administration (FAA), which is the Federal agency responsible for keeping U.S. skyways safe by setting and enforcing regulations. Before the FAA (formed in 1958), it was the Civil Aeronautics Administration (formed in 1941), and even earlier than that, it was the Department of Commerce’s Aeronautics Bureau (formed in 1926). That this administrative job today is not part of NASA’s duties is the result of decisions made by the White House, Congress, and NASA’s predecessor organization, the National Advisory Committee for Aeronautics (NACA), during 1920.[183]

At the time (specifically 1919), the International Commission for Air Navigation had been created to develop the world’s first set of rules for governing air traffic. But the United States did not sign on to the convention. Instead, U.S. officials turned to the NACA and other organizations to determine how best to organize the Government for handling all aspects of this new transportation system. The NACA in 1920 already was the focal point of aviation research in the Nation, and many thought it only natural, and best, that the Committee be the Government’s all-inclusive home for aviation matters. A similar organizational model existed in Europe but did not appear to some within the NACA to be an ideal solution. This sentiment was most clearly expressed by John F. Hayford, a charter member of the NACA and a Northwestern University engineer, who said during a meeting, "The NACA is adapted to function well as an advisory committee but not to function satisfactorily as an administrative body."[184]

So, in a way, NASA’s earliest contribution to making safer skyways was to shed itself of the responsibility for overseeing improvements to and regulating the operation of the national airspace. With the FAA secure in that management role, NASA has been free to continue to play to its strengths as a research organization. It has provided technical innovation to enhance safety in the cockpits; increase efficiencies along the air routes; introduce reliable automation, navigation, and communication systems for the many air traffic control (ATC) facilities that dot the Nation; and manage complex safety reporting systems that have required creation of new data-crunching capabilities.

This case study will present a survey in a more-or-less chronological order of NASA’s efforts to assist the FAA in making safer skyways. An overview of key NASA programs, as seen through the eyes of the FAA until 1996, will be presented first. NASA’s contributions to air traffic safety after the 1997 establishment of national goals for reducing fatal air accidents will be highlighted next. The case study will continue with a survey of NASA’s current programs and facilities related to airspace safety and conclude with an introduction of the NextGen Air Transportation System, which is to be in place by 2025.

Commercial Aviation Safety Team (CAST)

NASA’s work with improving the National Airspace System has won the Agency two Collier Trophies: one in 2007 for its work with developing the new next-generation ADS-B instrumentation, and one in 2008 as part of the Commercial Aviation Safety Team, which helped improve air safety during the past decade. NASA.

When NASA’s Aviation Safety Program was begun in 1997, the agency joined with a large group of aviation-related organizations from Government, industry, and academia in forming a Commercial Aviation Safety Team (CAST) to help reduce the U.S. commercial aviation fatal accident rate by 80 percent in 10 years. During those 10 years, the group analyzed data from some 500 accidents and thousands of safety incidents and helped develop 47 safety enhancements.[249] In 2008, the group could boast that the rate had been reduced by 83 percent, and for that, CAST was awarded aviation’s most prestigious honor, the Robert J. Collier Trophy.

The Altitude Problem

The interface between humans and technology was no less important for those early pioneers, who, for the first time in history, were starting to reach for the sky. Human factors research in aeronautics did not, however, begin with the Wright brothers’ first powered flight in 1903; it began more than a century earlier.

Much of this early work dealt with the effects of high altitude on humans. At greater heights above the Earth, barometric pressure decreases. This allows the air to expand and become thinner. The net effect is diminished breathable oxygen at higher altitudes. In humans operating high above sea level without supplemental oxygen, this translates to a medical condition known as hypoxia. The untoward effects on humans of hypoxia, or altitude sickness, had been known for centuries—long before man ever took to the skies. It was a well-known entity to ancient explorers traversing high mountains, thus the still commonly used term mountain sickness.[298]
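The physics can be made concrete with a standard approximation (an illustrative aside, not from the original text): in an isothermal atmosphere, pressure falls off roughly exponentially with altitude,

\[
P(h) \approx P_0\, e^{-h/H}, \qquad P_{\mathrm{O_2}}(h) \approx 0.21\, P(h),
\]

where \(P_0 \approx 14.7\) psi is sea-level pressure and \(H \approx 26{,}000\) ft is a representative scale height. At roughly 18,000 feet, \(P\) has fallen to about half its sea-level value, and the partial pressure of oxygen available to the lungs falls with it, which is why supplemental oxygen becomes a requirement well below the altitudes later flown by pressurized aircraft.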

The world’s first aeronauts—the early balloonists—soon noticed this phenomenon when ascending to higher altitudes; eventually, some of the early flying scientists began to study it. As early as 1784, American physician John Jeffries ascended to more than 9,000 feet over London with French balloonist Jean Pierre Blanchard.[299] During this flight, they recorded changes in temperature and barometric pressure and became perhaps the first to record an "aeromedical” problem, in the form of ear pain associated with altitude changes.[300] Another early flying doctor, British physician John Shelton, also wrote of the detrimental effects of high-altitude flight on humans.[301]

During the 1870s—with mankind’s first powered, winged human flight still decades in the future—French physiologist Paul Bert conducted important research on the manner in which high-altitude flight affects living organisms. Using the world’s first pressure chamber, he studied the effects of varying barometric pressure and oxygen levels on dogs and later humans—himself included. He conducted 670 experiments at simulated altitudes of up to 36,000 feet. His findings clarified the effects of high-altitude conditions on humans and established the requirement for supplemental oxygen at higher altitudes.[302] Later studies by other researchers followed, so that by the time piloted flight in powered aircraft became a reality at Kitty Hawk, NC, on December 17, 1903, the scientific community already had a substantial amount of knowledge concerning the physiology of high-altitude flight. Even so, there was much more to be learned, and additional research in this important area would continue in the decades to come.

Taking Human Factors Technology into the 21st Century

From the foregoing, it is clear that NASA’s human factors research has over the past decades specifically focused on aviation safety. This work, however, has also maintained an equally strong focus on improving the human-machine interface of aviation professionals, both in the air and on the ground. NASA has accomplished this through its many highly developed programs that have emphasized human-centered considerations in the design and engineering of increasingly complex flight systems.

These human factors considerations in systems design and integration have directly translated to increased human performance and efficiency and, indirectly, to greater flight safety.[421] [422] The scope of these contributions is best illustrated by briefly discussing a representative sampling of NASA programs that have benefitted aviation in various ways, including the Man-Machine Integration Design and Analysis System (MIDAS), Controller-Pilot Data Link Communications (CPDLC), NASA’s High-Speed Research (HSR) program, the Advanced Air Transportation Technologies (AATT) program, and the Agency’s Vision Science and Technology effort.

Safe Return: Space Capsules

The selection of blunt capsule designs for the Mercury, Gemini, and Apollo programs resulted in numerous investigations of the dynamic stability and recovery of such shapes. Nonlinear, unstable variations of aerodynamic forces and moments with angle of attack and sideslip were known to exist for these configurations, and extensive conventional force tests, dynamic free-flight model tests, and analytical studies were conducted to define the nature of potential problems that might be encountered during atmospheric reentry. At Ames, the supersonic and hypersonic free-flight aerodynamic facilities have been used to observe dynamic stability characteristics, extract aerodynamic data from flight tests, provide stabilizing concepts, and develop mathematical models for flight simulation at hypersonic and supersonic speeds.

Meanwhile, at Langley, researchers in the Spin Tunnel were conducting dynamic stability investigations of the Mercury, Gemini, and Apollo capsules in vertically descending subsonic flight.[492]

Results of these studies dramatically illustrated potential dynamic stability issues during the spacecraft recovery procedure. For example, the Gemini capsule model was very unstable; it would at various times oscillate, tumble, or spin about a vertical axis with its symmetrical axis tilted as much as 90 degrees from the vertical. However, the deployment of a drogue parachute during any spinning or tumbling motions quickly terminated these unstable motions at subsonic speeds. Extensive tests of various drogue-parachute configurations resulted in definitions of acceptable parachute bridle-line lengths and attachment points. Spin Tunnel results for the Apollo command module configuration were even more dramatic. The Apollo capsule with blunt end forward was dynamically unstable and displayed violent gyrations, including large oscillations, tumbling, and spinning motions. With the apex end forward, the capsule was dynamically stable and would trim at an angle of attack of about 40 degrees and glide in large circles. Once again, the use of a drogue parachute stabilized the capsule, and the researchers also found that retention of the launch escape system, with either a drogue parachute or canard surfaces attached to it, would prevent an unacceptable apex-forward trim condition during launch abort.

Following the Apollo program, NASA conducted a considerable effort on unpiloted space probes and planetary exploration. In the Langley Spin Tunnel, several planetary-entry capsule configurations were tested to evaluate their dynamic stability during descent, with a priority in simulating descent in the Martian atmosphere.[493] Studies also included assessments of the Pioneer Venus probe in the 1970s. These tests provided considerable design information on the dynamic stability of a variety of potential planetary exploration capsule shapes.


Photograph of a free-flight model of the Project Mercury capsule in vertical descent in the Spin Tunnel with drogue parachute deployed. Tests to improve the dynamic stability characteristics of capsules have continued to this day. NASA.

Additional studies of the stability characteristics of blunt, large-angle capsules were conducted in the late 1990s in the Spin Tunnel.

As the new millennium began, NASA’s interests in piloted and unpiloted planetary exploration resulted in additional studies of dynamic stability in the Spin Tunnel. Currently, the tunnel and its dynamic model testing techniques are supporting NASA’s Constellation program for lunar exploration. Included in the dynamic stability testing are the Orion launch abort vehicle, the crew module, and alternate launch abort systems.[494]

Forcing Factors

One of the more impressive advances in aerospace capability in the last few years has been the acceptance and accelerated development of remotely piloted unmanned aerial vehicles (UAVs) by the military. The progress in innovative hardware and software products to support this focus has truly been impressive and warrants a consideration that properly scaled free-flight models have reached the appropriate limits of development. In comparison to today’s capabilities, the past equipment used by the NACA and NASA seems primitive. It is difficult to anticipate hardware breakthroughs in free-flight model technologies beyond those currently employed, but NASA’s most valuable contributions have come from the applications of the models to specific aerospace issues—especially those that require years of difficult research and participation in model-to-flight correlation studies.

Changes in the world situation are now having an impact on aeronautics, with a trickle-down effect on technical areas such as free-flight testing. The end of the Cold War and industrial mergers have resulted in a dramatic reduction in new aircraft designs, especially for unconventional configurations that would benefit from free-flight testing. Reductions in research budgets for industry and NASA have further aggravated the situation.

These factors have led to a slowdown in requirements for the ongoing NASA capabilities in free-flight testing at a time when rollover changes in the NASA workforce are resulting in the retirements of specialists in this and other technologies without adequate transfer of knowledge and mentoring to the new research staffs. In addition, planned closures of key NASA facilities will challenge new generations of researchers to reinvent the free-flight capabilities discussed herein. For example, the planned demolition of the Langley Full-Scale Tunnel in 2009 will terminate that historic 78-year-old facility’s role in providing free-flight testing capability, and although exploratory free-flight tests have been conducted in the much smaller test section of the Langley 14- by 22-Foot Tunnel, it remains to be seen if the technique will continue as a testing capability. Based on the foregoing observations, NASA will be challenged to provide the facilities and expertise required to continue to provide the Nation with contributions from free-flight models.

The Wind Tunnel’s Future

Is the wind tunnel obsolete? In a word, no. But the value and merit of the tunnel in the early 21st century must be evaluated in the light of the manifold other techniques that researchers can now employ. The range of these new techniques, particularly CFD, coupled with the seeming maturity of the airplane, has led some observers to conclude that there is little need for extensive investment in research, development, and infrastructure.[630] That facile assumption has been carried over into the question of whether there is a continued need for wind tunnels, and it brings into question the role of the wind tunnel in contemporary aerospace research and development.

A 1988 New York Times article titled "In the Space Age, the Old Wind Tunnel Is Being Left Behind" proclaimed that "aerospace engineers have hit a dead end in conventional efforts to test designs for the next generation of spaceships, planetary probes and other futuristic flying machines." The technology for the anticipated next generation in spacecraft technology that would appear in the 21st century included speeds in the escape velocity range and the ability to maneuver in and out of planetary atmospheres rather than the now-familiar single-direction and uncontrolled descents of today. At the core of the problem was getting realistic flight data from a "nineteenth century invention used by the Wright brothers," the wind tunnel. William I. Scallion of NASA Langley asserted, "We’ve pushed beyond the capacity of most of our ground facilities." NASA, the Air Force, and various national universities began work on methods to simulate the speeds, temperatures, stress, forces, and vibration challenging the success of these new craft. The proposed solutions were improved wind tunnels capable of higher speeds, the firing of small-scale models atop rockets into the atmosphere, and the dropping of small test vehicles from the Space Shuttle while in orbit.[631]

The need for new testing methods and facilities reflected the changing nature of aerospace craft missions and design. Several programs perceived to be pathways to the future in the 1980s exemplified the need for new testing facilities. Proponents of the X-30 aerospace plane believed it would be able to take off and fly directly into space by reaching Mach 25, or 17,000 mph, while being powered by air-breathing engines. In 1988, wind tunnels could only simulate speeds up to Mach 12.5. NASA intended the Aeromaneuvering Orbit Transfer Vehicle to be a low-cost "space tug" that could move payloads between high- and low-Earth orbits beginning in the late 1990s. The vehicle slowed itself in orbit by grazing the Earth’s outer atmosphere with an aerobrake, or a lightweight shield, rather than relying upon heavy retrorockets, a technique that was impossible to replicate in a wind tunnel. NASA planned to launch small models from the Space Shuttle for evaluation. The final program concerned new interplanetary probes destined for Mars; Jupiter; Saturn’s moon, Titan; and their atmospheres, which were much unlike Earth’s. Such craft no longer just dropped back into Earth’s or another planet’s atmosphere from space; they required maneuverability and flexibility, as incorporated into the Space Shuttle, for better economy.[632]

NASA allocated funds for the demolition of unused facilities for the first time in the long history of the Agency in 2003. The process required that each of the Research Centers submit listings of target facilities.[633] NASA’s Assistant Inspector General for Auditing conducted a survey of the utilization of NASA’s wind tunnels at three Centers in 2003 and reported the findings to the directors of Langley, Ames, and Lewis and to the Associate Administrator for Aerospace Technology. Private industry and the Department of Defense spent approximately 28,000 hours in NASA tunnels in 2002. The number dwindled to 10,000 hours in 2003 and was projected to dip to about 2,500 hours by 2008. NASA managers acknowledged there was a direct correlation between a higher user fee schedule introduced in 2002 and the decline in usage. The audit also included the first complete list of tunnel closures for the Agency. Of the 19 closed facilities, NASA classified 5 as having been "mothballed," with the remaining 14 being "abandoned."[634]

Budget pressures also forced NASA to close running facilities. Unfortunately, NASA’s operation of the National Full-Scale Aerodynamics Complex (NFAC) proved short-lived: the Agency closed the facility in 2003. Recognizing the need for full-scale testing of rotorcraft and powered-lift V/STOL aircraft, the Air Force leased the facility in 2006 for use by the AEDC. The NFAC became operational again in 2008. Besides aircraft, the schedule at the NFAC accommodated nontraditional test subjects, including wind turbines, parachutes, and trucks.[635]

In 2005, NASA announced its plan to reduce its aeronautics budget by 20 percent over the following 5 years. The budget cuts included the closing of wind tunnels and other research facilities and the elimination of hundreds of jobs. NASA had spread thin what was left of the aeronautics budget (down $54 million to $852 million) over too many programs. NASA did receive a small increase in its overall budget to cover the costs of the new Moon-Mars initiative, which meant cuts in aviation-related research. In a hearing before the House Science Subcommittee on Space and Aeronautics to discuss the budget cuts, aerospace industry experts and politicians commented on the future of fundamental aeronautics research in the United States. Dr. John M. Klineberg, a former NASA official and industry executive, asserted that the NASA aeronautics program was "on its way to becoming irrelevant to the future of aeronautics in this country and in the world." Representative Dennis Kucinich, whose district included Cleveland, the home of NASA Glenn, warned that the United States was "going to take the ‘A’ out" of NASA and that the new Agency was "just going to be the National Space Administration."[636]

Philip S. Anton, Director of the RAND Corporation’s Acquisition and Technology Policy Center, spoke before the Committee. RAND had concluded a 3-year investigation that revealed that only 2 of NASA’s 31 wind tunnels warranted closure.[637] As to the lingering question of the supremacy of CFD, Anton asserted that NASA should pursue wind tunnel facilities, CFD, and flight-testing together to meet national testing needs. RAND recommended a veritable laundry list of suggested improvements that ranged from the practical—the establishment of a minimum set of facilities that could serve national needs and the financial support to keep them running—to the visionary—continued investment in CFD and focus on the challenge of hypersonic air-breathing research.

RAND analysts had concluded in 2004 that NASA’s wind tunnel facilities remained important to continued American competitiveness in the military, commercial, and space sectors of the world aerospace industry, even as "management issues" were "creating real risks." NASA needed a clear aeronautics test technology vision based on the idea of a national test facility plan that identified and maintained a minimum set of facilities.

For RAND, the bottom line was the establishment of shared financial support that kept NASA’s underutilized but essential facilities from crumbling into ruin.[638] Anton found the alternative—the use of foreign tunnels, a practice many of the leading aerospace manufacturers embraced—problematic because of the myriad of security, access, and availability challenges.[639]

NASA’s wind tunnel heritage and the Agency’s viability in the international aerospace community came to a head in 2009. Those issues centered on the planned demolition of the most famous, recognizable, and oldest operating research facility at Langley, the 30- by 60-Foot Tunnel, in 2009 or 2010. Better known by its NACA name, the Full-Scale Tunnel was, according to many, "old, inefficient and not designed for the computer age" in 2009.[640] The Deputy of NASA’s Aeronautics Test Program, Tim Marshall, explained that the Agency decided "to focus its abilities on things that are strategically more important to the nation." NASA’s focus was supersonic and hypersonic research that required smaller, faster tunnels for experiments on new technologies such as scramjets, not subsonic testing. The FST’s last operator, Old Dominion University, had an important mission: refining the aerodynamics of motor trucks at a time of high fuel prices. But it was told that economics, NASA’s strategic mission, and the desire of the Agency’s landlord, the U.S. Air Force, to regain the land, even if only for a parking lot in a flood zone, overrode its desire to continue using the FST for landlocked aerodynamic research.[641]

In conclusion, wind tunnels have been a central element in the success of NACA and NASA research throughout the century of flight. They are the physical representation of the rich and dynamic legacy of the organization. Their evolution, shaped by the innovative minds at Langley, Ames, and Glenn, paralleled the continual development of aircraft and spacecraft as national, economic, and technological missions shaped both. As newer, smaller, and cheaper digital technologies emerged in the late 20th century, wind tunnels and the testing methodologies pioneered in them still retained a place in the aerospace engineer’s toolbox, no matter how low-tech they appeared. What resulted was a richer fabric of opportunities and modes of research that continued to contribute to the future of flight.

Crash Impact Research

In support of the Apollo lunar landing program, engineers at the Langley Research Center had constructed a huge steel A-frame gantry structure, the Lunar Landing Research Facility (LLRF). Longer than a football field and nearly half as high as the Washington Monument, this facility proved less useful for its intended purposes than the free-flight jet- and rocket-powered training vehicles tested and flown at Edwards and Houston. In serendipitous fashion, however, it proved of tremendous value for aviation safety after being resurrected as a crash-impact test facility, the Impact Dynamics Research Facility (IDRF), in 1974, coincident with the conclusion of the Apollo program.[851]

Test Director Victor Vaughan studies the results of one 1974 crash impact test at the Langley Impact Dynamics Research Facility. NASA.

Over its first three decades, the IDRF was used to conduct 41 full-scale crash tests of GA aircraft and approximately 125 other impact tests of helicopters and aircraft components. The IDRF could pendulum-sling aircraft and components into the ground at precise impact angles and velocities, simulating the dynamic conditions of a full-scale accident or impact.[852] In the first 10 years of its existence, the IDRF served as the focal point for a joint NASA-FAA-GA industry study to improve the crashworthiness of light aircraft. It was a case of making the best of a bad situation: a flood had rendered a sizeable portion of Piper’s single- and twin-engine GA production at its Lock Haven, PA, plant unfit for sale and service.[853] Rather than simply scrap the aircraft, NASA and Piper worked together to turn them to the benefit of the GA industry and user communities. A variety of Piper Aztecs, Cherokees, and Navajos, and later some Cessna 172s, some adorned with colorful names like "Born to Lose," were instrumented, suspended from cable harnesses, and then "crashed" at various impact angles, attitudes, velocities, and sink rates, and against hard and soft surfaces. To gain greater fidelity, some were accelerated during their drop by small solid-fuel rockets installed in their engine nacelles.[854]

Later tests, undertaken in 1995 as part of the Advanced General Aviation Transport Experiment (AGATE) study effort (discussed subsequently), tested Beech Starship, Cirrus SR-20, Lear Fan 2100, and Lancair aircraft.[855] The rapid maturation of computerized analysis programs led to their swift adoption for crash impact research. In partnership with NASA, researchers at the Grumman Corporation Research Center developed DYCAST (DYnamic Crash Analysis of STructures) to analyze structural response during crashes. DYCAST, a finite element program, was qualified during extensive NASA testing for light aircraft component testing, including seat and fuselage section analysis, and then made available for broader aviation community use in 1987.[856] Application of computational methodologies to crash impact research expanded so greatly that by the early 1990s, NASA, in partnership with the University of Virginia Center for Computational Structures Technology, held a seminal workshop on advances in the field.[857] Out of all of this testing came better understanding of the dynamics of an accident and the behavior of aircraft at and after impact, quantitative data applicable to the design of new and more survivable aircraft structures, better seats and restraint systems, comparative data on the relative merits of conventional versus composite construction, and computational methodologies for ever more precise and informed analysis of crashworthiness.
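DYCAST itself was a full finite element code; purely to suggest the flavor of such crash-response calculations, here is a deliberately simplified, hypothetical one-degree-of-freedom sketch in Python (all values assumed, none taken from NASA testing):

```python
# Illustrative only: a 1-DOF stand-in for the kind of structural crash
# response that finite element codes such as DYCAST compute in full
# detail. A fuselage section is modeled as a nonlinear-free spring-damper
# striking the ground at a given sink rate; explicit time integration
# recovers the peak deceleration felt by the occupants.

def crash_response(mass=450.0,      # kg, fuselage section (assumed)
                   v_impact=9.0,    # m/s sink rate at touchdown (assumed)
                   k=2.0e5,         # N/m crush stiffness (assumed)
                   c=4.0e3,         # N*s/m structural damping (assumed)
                   dt=1e-5):        # s, integration time step
    """Integrate m*x'' = -k*x - c*x' from impact until rebound;
    return peak deceleration in g's."""
    g = 9.81
    x, v = 0.0, v_impact            # crush depth (m) and crush rate (m/s)
    peak_a = 0.0
    while True:
        a = (-k * x - c * v) / mass  # spring-damper reaction acceleration
        v += a * dt
        x += v * dt
        peak_a = max(peak_a, abs(a))
        if x <= 0.0:                 # structure has rebounded clear
            break
    return peak_a / g

if __name__ == "__main__":
    print(f"Peak deceleration: {crash_response():.1f} g")
```

Even this toy model shows the design trade the full codes quantify: a stiffer structure crushes less but delivers a sharper deceleration pulse to seats and restraints.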

Toward Precision Autonomous Spacecraft Recovery

From October 1991 to December 1996, a research program known as the Spacecraft Autoland Project was conducted at Dryden to determine the feasibility of autonomous spacecraft recovery using a ram-air parafoil system for the final stages of flight, including a precision landing. The latter characteristic was the focus of a portion of the project that called for development of a system for precision cargo delivery. NASA Johnson Space Center and the U.S. Army also participated in various phases of the program, with the Charles Stark Draper Laboratory of Cambridge, MA, developing Precision Guided Airdrop Software (PGAS) under contract to the Army.[989] Four generic spacecraft models (each called a Spacewedge, or simply Wedge) were built to test the concept’s feasibility. The project demonstrated precision flare and landing into the wind at a predetermined location, proving that a flexible, deployable system that entailed autonomous navigation and landing was a viable and practical way to recover spacecraft.

Key personnel included R. Dale Reed, who participated in flight-test operations. Alexander Sim managed the project and documented the results. James Murray served as the principal Dryden investigator and as lead for all systems integration for Phases I and II. He designed and fabricated much of the instrumentation for Phase II and was the lead for flight data retrieval and analysis in Phases II and III. David Neufeld performed mechanical integration for the Wedge vehicles’ systems during all three phases and served as parachute rigger, among other duties. Philip Hattis of the Charles Stark Draper Laboratory served as the project technical director for Phase III. For the Army, Richard Benney was the technical point of contact, while Rob Meyerson served as the technical point of contact for NASA Johnson and provided the specifications for the Spacewedges.[990] The Spacewedge configuration consisted of a flattened biconic airframe joined to a ram-air parafoil with a custom harness. In the manual control mode, the vehicle was flown using radio uplink. In the autonomous mode, it was controlled using a small computer that received inputs from onboard sensors. Selected sensor data were recorded onto several onboard data loggers.

Two Spacewedge shapes, resembling half cones with a flattened bottom, were used for four airframes that represented generic hypersonic vehicle configurations. Wedge 1 and Wedge 2 had sloping sides, and the underside of the nose sloped up slightly. Wedge 3 had flattened sides, to create a larger internal volume for instrumentation. The Spacewedge vehicles were 48 inches long, 30 inches wide, and 21 inches in height. The basic weight was 120 pounds, although various configurations ranged from 127 to 184 pounds during the course of the test program. Wedge 1 had a tubular steel structure, covered with plywood on the rear and underside, that could withstand hard landings. It had a fiberglass-covered wooden nose and removable aluminum upper and side skins. Wedge 2, originally uninstrumented, was later configured with instrumentation. It had a fiberglass outer shell, with plywood internal bulkheads and bottom structure. Wedge 3 was constructed as a two-piece fiberglass shell, with a plywood and aluminum shelf for instrumentation.[991] A commercially available 288-square-foot ram-air parafoil of a type commonly used by sport parachutists was selected for Phase I tests. The docile flight characteristics, low wing loading, and proven design allowed the project team to concentrate on developing the vehicle rather than the parachute. With the exception of lengthened control lines, the parachute was not modified. Its large size allowed the vehicle to land without flaring and without sustaining damage. For Phases II and III, a smaller (88-square-foot) parafoil was used to allow for a wing loading more representative of space vehicle or cargo applications.
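As a back-of-the-envelope check (the 176-pound figure below is an assumed mid-range test weight, not a value from the report), wing loading is simply suspended weight over canopy area:

\[
\frac{W}{S} = \frac{120\ \text{lb}}{288\ \text{ft}^2} \approx 0.42\ \text{lb/ft}^2
\quad\text{versus}\quad
\frac{176\ \text{lb}}{88\ \text{ft}^2} = 2.0\ \text{lb/ft}^2,
\]

the latter matching the 2 lb/ft² wing loading cited for the Phase II parafoil below.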

Spacewedge Phase I and II instrumentation system architecture was driven by cost, hardware availability, and program evolution. Essential items consisted of the uplink receiver, Global Positioning System (GPS) receiver and antenna, barometric altimeter, flight control computer, servo-actuators, electronic compass, and ultrasonic altimeter. NASA technicians integrated additional off-the-shelf components such as a camcorder, control position transducers, a data logger, and a pocket personal computer. Wedge 3 instrumentation was considerably more complex in order to accommodate the PGAS system.[992] Spacewedge control systems had programming, manual, and autonomous flight modes. The programming mode was used to initialize and configure the flight control computer. The manual mode incorporated a radio-control model receiver and uplink transmitter, configured to allow the ground pilot to enter either brake (pitch) or turn (yaw) commands. The vehicle reverted to manual mode whenever the transmitter controls were moved, even when the autonomous mode was selected. Flight in the autonomous mode included four primary elements and three decision altitudes. This mode allowed the vehicle to navigate to the landing point, maintain the holding pattern while descending, enter the landing pattern, and initiate the flare maneuver. The three decision altitudes were at the start of the landing pattern, the turn to final approach, and the flare initiation.
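A minimal sketch, in Python and entirely hypothetical in its names and altitude values, of how three decision altitudes could sequence the four autonomous-mode elements just described:

```python
# Hypothetical sketch of Spacewedge autonomous-mode sequencing:
# navigate to the landing point, hold while descending, fly the landing
# pattern, turn final, then flare. Altitude values are illustrative
# placeholders, not project numbers.

PATTERN_ALT = 1000.0  # ft AGL: decision 1, enter landing pattern (assumed)
FINAL_ALT   = 400.0   # ft AGL: decision 2, turn to final approach (assumed)
FLARE_ALT   = 50.0    # ft AGL: decision 3, initiate flare (assumed)

def autonomous_mode(altitude_agl: float, at_landing_point: bool) -> str:
    """Return the active guidance element for the current state."""
    if altitude_agl <= FLARE_ALT:
        return "FLARE"              # decision altitude 3
    if altitude_agl <= FINAL_ALT:
        return "FINAL_APPROACH"     # decision altitude 2
    if altitude_agl <= PATTERN_ALT:
        return "LANDING_PATTERN"    # decision altitude 1
    return "HOLDING" if at_landing_point else "NAVIGATE_TO_TARGET"

# Example: descending over the target at 700 ft AGL -> landing pattern
print(autonomous_mode(700.0, at_landing_point=True))
```

The ordering of the checks mirrors the descent itself: each threshold, once crossed, commits the vehicle to the next element, while the manual-override behavior described above would sit outside this logic in the uplink handler.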

NASA researchers initially launched Wedge 1 from a hillside near the town of Tehachapi, in the mountains northwest of Edwards, to evaluate general flying qualities, including gentle turns and landing flare. Two of these slope soar flights were made April 23, 1992, with approximately 15-knot winds, achieving altitudes of 10 to 50 feet. The test program was then moved to Rogers Dry Lake at Edwards and to a sport parachute drop zone at California City.[993] A second vehicle (known as Inert Spacewedge, or Wedge 2) was fabricated with the same external geometry and weight as Wedge 1. It was initially used to validate parachute deployment, harness design, and drop separation characteristics. Wedge 2 was inexpensive, lacked internal components, and was considered expendable. It was first dropped from a Cessna U-206 Stationair on June 10, 1992. A second drop of Wedge 2 verified repeatability of the parachute deployment system. The Wedge 2 vehicle was also used for the first drop from a Rans S-12 ultralight modified as an RPV on August 14, 1992. Wedge 2 was later instrumented and used for ground tests while mounted on top of a van, becoming the primary Phase I test vehicle.[994] Thirty-six flight tests were conducted during Phase I, the last taking place February 12, 1993. These flights, 11 of which were remotely controlled, verified the vehicle’s manual and autonomous landing systems. Most were launched from the Cessna U-206 Stationair. Only two flights were launched from the Rans S-12 RPV.

Phase II of the program, from March 1993 to March 1995, encompassed 45 flights using a smaller parafoil for higher wing loading (2 lb/ft²) and incorporating a new guidance, control, and instrumentation system developed at Dryden. The remaining 34 Phase III flights evaluated the PGAS system using Wedge 3 from June 1995 to December 1996. The software was written by the Charles Stark Draper Laboratory under contract to the U.S. Army to provide a guidance system for precision offset cargo delivery. The Wedge 3 vehicle was 4 feet long and was dropped at weights varying from 127 to 184 pounds.[995] Technology developed in the Spacewedge program has numerous civil and military applications. Potential NASA users for a deployable, precision, autonomous landing system include proposed piloted vehicles as well as planetary probes and booster-recovery systems. Military applications of autonomous gliding-parachute systems include recovery of aircraft ejection seats and high-altitude, offset delivery of cargo to minimize danger to aircraft and crews. Such a cargo delivery system could also be used for providing humanitarian aid.[996] In August 1995, R. Dale Reed incorporated a 75-square-foot Spacewedge-type parafoil on a 48-inch-long, 150-pound lifting body model called ACRV-X. During a series of 13 flights at the California City drop zone, he assessed the landing characteristics of Johnson Space Center’s proposed Assured Crew Return Vehicle design (essentially a lifeboat for the International Space Station). The instrumented R/C model exhibited good flight control and stable ground slide-out characteristics, paving the way for a larger, heavyweight test vehicle known as the X-38.[997]

XB-70 Supersonic Cruise Program Takes to the Air

Despite the AV-1 aircraft limitations, the XB-70 test program proceeded, now with NASA directing the effort with USAF support. Eleven flights were flown under NASA direction as Phase II of the original XB-70 planned flight-test program, ending January 31, 1967. Nine of the flights were primarily dedicated to the National Sonic Boom Program (NSBP). As the XB-70 was the only aircraft in the world with the speed, altitude capability, and weight of the U.S. SST, priority was given to aspects that supported that program. The sonic boom promised to be a factor that was drastically different from current jet airliner operations and one whose initial impact was underrated. It was thought that a rapid climb to high altitude before going supersonic would muffle the initial strong normal shock; once at high altitude, even at higher Mach numbers, the boom would be sufficiently attenuated by distance from the ground, and by the shock wave inclination "laying back" as Mach number increased, to not be a disturbance to ground observers. This proved not to be the case, as overflights by B-58s and the XB-70 demonstrated. Another case study in this volume provides details on sonic boom research by NASA. Overpressure measurements on the ground during XB-70 overflights, as well as the observer questionnaires and measurements in instrumented homes constructed at Edwards AFB, indicated that overland supersonic cruise would produce unacceptable annoyance to the public on the ground. Overpressure beneath the flight path reached values of 1.5 to 2 pounds per square foot. A lower limit goal of not more than 0.5 pounds per square foot to preclude ground disturbance seemed unachievable with current designs and technology.[1084]

Supersonic cruise test missions proved challenging for pilots and flight-test engineers alike. Ideally, the test conductor on the ground would be in constant contact with the test pilots to assist in most efficient use of test time. But an aircraft traveling 25-30 miles per minute rapidly disappeared over the horizon from test mission control. Fortunately, NASA had installed a 450-mile "high range" extending to Utah, with additional tracking radars, telemetry receivers, and radio relays for the hypersonic X-15 research rocket plane. The X-15 was typically released from the B-52 at the north end of the range and was back on the ground within 15 minutes. The high range provided extended mission command and control and data collection but was not optimized for the missions flown by the XB-70 and YF-12.

The XB-70 ground track presented a different problem for mission planners. The author flew the SR-71 Blackbird from Southern California for 5 years and faced the same problems in establishing a test ground track. The test aircraft would take over 200 miles to get to the test cruise speed and altitude. Then it would remain at test conditions, collecting data for 30-40 minutes. It then required an additional 200-250 miles to slow to "normal" subsonic flight. Ground tracks had to be established that would provide data collection legs while flying straight or performing planned turning maneuvers, and avoiding areas that would be sensitive to the increasingly contentious sonic booms. Examples of the areas included built-up cities and towns; the "avoidance radius" was generally 30 nautical miles. Less obvious areas included mink farms and large poultry ranches, as unexplained sudden loud noises could apparently interfere with breeding habits and egg-laying practices. The Western United States fortunately had a considerably lower population density than the area east of the Mississippi River, and test tracks could be established on a generally north-south orientation.

The presence of Canada to the north and Mexico to the south, not to mention the densely populated Los Angeles/San Diego corridor and the "island" of Las Vegas, set further bounding limits. Planning a test profile that accounted for the limits and avoidance areas could be a challenge, as the turn radius of a Mach 3 aircraft at 30 degrees of bank was over 65 nautical miles. Experience and the sonic boom research showed that a sonic boom laid down by a turning or descending supersonic aircraft would "focus" the boom on the ground, decreasing the area affected but increasing the overpressure within a smaller region. Because planning ground tracks was so complicated and arduous, once a track was established, it tended to be used numerous times. This in turn increased the frequency of residents being subjected to sudden loud noises, and complaints often appeared only after a track had been used several times. The USAF 9th Reconnaissance Wing operating the Mach 3+ SR-71 at Beale Air Force Base near Sacramento, CA, had the same problem as NASA flight-testing when developing training routes (but without the constraints of maintaining telemetry contact with a test control), and it soon discovered another category for avoidance areas: congressional complaints relayed from the Office of the Secretary of the Air Force.
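That turn radius figure is easy to verify with the standard level-turn relation (an illustrative check; the roughly 2,900 ft/s true airspeed for Mach 3 at cruise altitude is an assumption, not a number from the text):

\[
r = \frac{V^2}{g \tan\phi} \approx \frac{(2{,}900\ \text{ft/s})^2}{(32.2\ \text{ft/s}^2)(\tan 30^\circ)} \approx 4.5 \times 10^5\ \text{ft} \approx 74\ \text{nmi},
\]

comfortably over the 65 nautical miles cited.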

For the limited XB-70 test program, a ground track was established that remained within radio and telemetry range of Edwards. As a result, the aircraft at high Mach would only fly straight and level for 20 minutes at best, requiring careful sequencing of the test points. The profile included California, Nevada, and Utah.[1085]

This planning experience was a forerunner of the problems a fleet of Supersonic Transports would face on overland long-distance flights if they used their design speed. What was a factor to be overcome in supersonic cruise flight test would be critical to a supersonic airliner. Pending development of sonic boom reduction for an aircraft, the impact of off-design-speed operation over land would have to be factored into SST designs. This would affect both range performance and economics.

The flight tests conducted on the XB-70 missions collected data on many areas besides sonic boom impact. The research data were generally focused on areas that were a byproduct of the aeronautical technology inherent in a large airplane designed to go very fast for a long distance with a large payload. An instrumentation package was developed to record research data.[1086] Later, boundary layer rakes were installed to measure boundary layer growth on the long fuselage at high Mach at 70,000 feet altitude; this would influence the drag and hence the range performance of a design. The long flexible fuselage of the XB-70 produced some interesting aeroelastic effects when in turbulence, not to mention taxiing over a rough taxiway, similar to the pilot being on a diving board. Two 8-inch exciter vane "miniature canards" were mounted near the cockpit as part of the Identically Located Acceleration and Force (ILAF) experiment for the final XB-70 flight-test sorties. These vanes could be programmed to oscillate to induce frequencies in the fuselage to explore its response. Additionally, frequencies could be produced to cancel accelerations induced by turbulence or gusts, leading to a smoother ride for pilots and ultimately SST passengers. This system was demonstrated to be effective.[1087] A similar system was employed in the Rockwell B-1 Lancer bomber, the Air Force bomber eventually built instead of the B-70.

Inlet performance would have a critical effect on the specific fuel consumption, which had a direct effect on range achieved. In addition to collecting inlet data on all supersonic cruise sorties, numerous test sorties involved investigating inlet unstarts deliberately induced by pilot action, as well as the "unplanned" events. This was important for future aircraft, as the Valkyrie used a two-dimensional (rectangular) inlet with mixed external (to the inlet)/internal compression, with one inlet feeding multiple engines. As a comparison, the A-12/SR-71 used an axisymmetric (round) inlet, also with external/internal compression, feeding a single engine. There was a considerable debate in the propulsion community in general, and among the Boeing and Lockheed competitive SST designers in particular, as to which configuration was better. Theoretical values of pressure recovery had been tested in propulsion installations in wind tunnels, but the XB-70 presented an opportunity to collect data and verify wind tunnel results in extended supersonic free-flight operations, including "off-design" conditions during unstart operations. These data were also important as an operational SST factor, as inlet unstarts were disconcerting to pilots, not to mention prospective passengers.

Traditional aircraft flight-test data on performance, stability, control, and handling qualities were collected, although AV-1 was limited to Mach 2.5 and eventually Mach 2.6. Data to Mach 3 were sometimes also available from AV-2 flights. As USAF-NASA test pilot Fitzhugh Fulton reported in a paper presented to the Society of Automotive Engineers (SAE) in 1968 in Anaheim, CA, on test results as applied to SST operations, the XB-70 flew well, although there were numerous deficiencies that would have to be corrected.[1088] The airplane’s large size and delta wing high-incidence landing attitude required pilot adjustments in takeoff, approach, and landing techniques but nothing extraordinary. High Mach cruise was controllable, but the lack of an autopilot in the XB-70 and the need for the pilot to "hand-fly" the airplane brought out another pilot interface problem: at a speed of nearly 3,000 feet per second, a change in pitch attitude of only 1 degree would produce a healthy climb or descent rate of 3,000 feet per minute (50 feet per second). Maintaining a precise altitude was difficult. Various expanded instrument displays were used to assist the task, but the inherent lag in Pitot-static instruments relying on measuring tiny pressure differentials (outside static pressure approximately 0.5 pounds per square inch [psi]) to indicate altitude change meant the pilot was often playing catchup.
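That altitude-hold sensitivity follows directly from geometry (a quick check using the numbers quoted above):

\[
\dot{h} \approx V \sin\theta = (3{,}000\ \text{ft/s})(\sin 1^\circ) \approx 52\ \text{ft/s} \approx 3{,}100\ \text{ft/min},
\]

in line with the roughly 3,000 feet per minute the text cites for a single degree of pitch change.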

High Mach cruise at 70,000 feet may have become routine, but it required much more careful flight planning than do contemporary subsonic jet operations. The high fuel flows at high Mach numbers meant that fuel reserves were critical in the event of unplanned excursions in flight. Weather forecasts at the extreme altitudes were important, as temperature differences at cruise had a disproportionate influence on fuel flows at a given Mach and altitude; 10 °F hotter than a standard day at altitude could reduce range, requiring an additional fuel stop, unless it was factored into the flight plan. (Early jet operations over the North Atlantic had similar problems; better weather forecasts and larger aircraft with larger fuel reserves rectified this within several years.) Supersonic cruise platforms traveling at 25-30 miles per minute had an additional problem. Although the atmosphere is generally portrayed as a "layer cake," pilots in the XB-70 and Mach 3 Blackbird discovered it was more like a "carrot cake," as there were localized regions of hot and cold air that were quickly traversed by high Mach aircraft. This could lead to range performance concerns and autopilot instabilities in Mach hold because of the temperature changes encountered. The increase in stagnation temperatures on a hot day could require the aircraft to slow because of engine compressor inlet temperature (CIT) limitations, further degrading range performance.

Fuel criticality and the over 200 miles required to achieve and descend from the optimum cruise conditions meant that the SST could brook no air traffic control delays, so merging SST operations with subsonic traffic would stress traffic flow into SST airports. Similar concerns about subsonic jet airliner traffic in the mid-1950s resulted in revamping the ATC system to provide nationwide radar coverage and better automate traffic handoffs. To gather contemporary data on this problem for SST concerns, NASA test pilots flew a Mach 2 North American A-5A (former A3J-1) Vigilante on supersonic entry profiles into Los Angeles International Airport. The limited test program flying into Los Angeles showed that the piloting task was easy and that the ATC system was capable of integrating the supersonic aircraft into the subsonic flow.[1089]

One result mentioned in test pilot Fulton’s paper had serious implications not only for the SST but also for supersonic research. The XB-70 had been designed using the latest NASA theories (compression lift) and NASA wind tunnels. Nevertheless, the XB-70 as flown fell short of its design range by approximately 25 percent. What was the cause of the deficiency? Some theorized that thermal expansion in such a large aircraft at cruise Mach, unaccounted for in the wind tunnels, increased the size of the aircraft to the point where the reference areas for the theoretical calculations were incorrect. Others thought the flexibility of the large aircraft was unaccounted for in the wind tunnel model configuration. Another possibility was that the skin friction drag on the large surface area at high Mach was higher than estimated. Yet another was that the compression lift assumption of up to 30-percent enhancement of lift at cruise speed was incorrect.

The limited duration of the XB-70 test program meant that further flight tests could not be flown to investigate the discrepancy. Flight-test engineer William Schweikhard proposed a reverse investigation. He structured a program that would use specific flight-test conditions from the program and duplicate them in wind tunnels using high-fidelity models of the XB-70 built to represent the configuration of the aircraft as it was estimated to exist at Mach 2.5. The flight-test data would thus serve as a truth source for the tunnel results.[1090] This comparison showed good correlation between the flight-test data and the wind tunnel, with the exception of a transonic drag estimate that was 20 percent too low, mainly caused by an incorrect estimate of the control surface deflection necessary to trim the aircraft at transonic speeds. It was doubtful that this would account for the range discrepancy, because the aircraft spent little time at that speed.

The NASA test program with the XB-70 extended from June 16, 1966, to January 22, 1969, with the final flight being a subsonic flight to the Air Force Museum at Wright-Patterson Air Force Base in Dayton, OH. Thirty-four sorties were flown during the program. The original funding agreement with the USAF to provide B-58 chase support and maintenance was due to expire at the end of 1968, and the XB-70 would require extensive depot-level maintenance at the end of the envisioned 180-hour test program. NASA research program goals had essentially been reached, and because of the high costs of operating a one-aircraft fleet, the program was not extended. The X-15 program was also terminated at this time.

The legacy of the XB-70 program was in the archived mountains of data and the almost 100 technical reports written using that data. As late as 1992, the sonic boom test data generated in the NSBP flights were transferred to modern digital data files for use by researchers of high-speed transports.[1091] But it was fitting that the XB-70’s final supersonic test sortie included collecting ozone data at high altitudes. The United States SST program that would use supersonic cruise research data was about to encounter something that the engineers had not considered: the increasing interest of both decision makers and the public in the social consequences of high technology, exemplified by the rise of the modern environmental movement. This would have an impact on the direction of NASA supersonic cruise research. Never again in the 20th century would such a large aircraft fly as fast as the Valkyrie.