NASA’S CONTRIBUTIONS TO AERONAUTICS

Assessing NASA’s Wind Shear Research Effort

NASA’s wind shear research effort involved complex, cooperative relationships between the FAA, industry manufacturers, and several NASA Langley directorates, with significant political oversight, scrutiny, and public interest. It faced many significant technical challenges, not the least of which were potentially dangerous flight tests and evaluations.[91] Yet, during a 7-year effort, NASA, along with industry technicians and researchers, rose to the challenge. Like many classic NACA research projects, it was tightly focused and mission-oriented, taking “a proven, significant threat to aviation and air transportation and [developing] new technology that could defeat it.”[92] It drew on technical capabilities and expertise from across the Agency—in meteorology, flight systems, aeronautics, engineering, and electronics—and from researchers in industry, academia, and agencies such as the National Center for Atmospheric Research. This collaborative effort spawned several important breakthroughs and discoveries, particularly the derivation of the F-Factor and the invention of Langley’s forward-looking Doppler microwave radar wind shear detector. As a result of this Government-industry-academic partnership, the risk of microburst wind shear could at last be mitigated.[93]
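The F-Factor mentioned above is worth making concrete. In its commonly published form, it estimates the fraction of an airplane’s available performance consumed by the wind field; the sketch below assumes that standard definition, and the encounter numbers are illustrative values chosen here, not figures from the NASA program.

```python
G = 9.81  # gravitational acceleration, m/s^2

def f_factor(horiz_wind_rate, vertical_wind, true_airspeed):
    """Wind shear hazard index F.  Positive values indicate a
    performance-decreasing (hazardous) shear.

    horiz_wind_rate -- rate of change of horizontal wind along the
                       flight path, m/s^2 (increasing tailwind -> positive)
    vertical_wind   -- vertical wind component, m/s (downdraft -> negative)
    true_airspeed   -- aircraft true airspeed, m/s
    """
    return horiz_wind_rate / G - vertical_wind / true_airspeed

# Illustrative microburst encounter: 75 m/s airspeed, tailwind growing
# at 0.75 m/s^2, and a 5 m/s downdraft.
f = f_factor(0.75, -5.0, 75.0)
# f ≈ 0.143, above the roughly 0.1 level generally treated as hazardous
```

A sustained F above about 0.1 means the shear is consuming a large share of the climb capability a transport jet holds in reserve, which is why the averaged F-Factor, rather than raw wind measurements, became the basis for alerting.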

In 1992, the NASA-FAA Airborne Windshear Research Program was nominated for the Robert J. Collier Trophy, aviation’s most prestigious honor. Industry evaluations described the project as “the perfect role for NASA in support of national needs” and “NASA at its best.” Langley’s Jeremiah Creedon said, “We might get that good again, but we can’t get any better.”[94] In any other year, the program might easily have won, but it was the NASA-FAA team’s ill luck to be competing that year with the revolutionary Global Positioning System, which had proven its value in spectacular fashion during the Gulf War of 1991. Not surprisingly, then, it was GPS, not the wind shear program, that was awarded the Collier Trophy. But if the wind shear team members lost their shot at this prestigious award, they could nevertheless take satisfaction in knowing that together, their agencies had developed and demonstrated a “technology base” enabling the manufacture of many subsequent wind shear detection and prediction systems, to the safety and undoubted benefit of the traveling public and airmen everywhere.[95]

NASA engineers had coordinated their research with commercial manufacturers from the start of wind shear research and detector devel­opment, so its subsequent transfer to the private sector occurred quickly and effectively. Annual conferences hosted jointly by NASA Langley and the FAA during the project’s evolution provided a ready forum for manufacturers to review new technology and for NASA researchers to obtain a better understanding of the issues that manufacturers were
encountering as they developed airborne equipment to meet FAA cer­tification requirements. The fifth and final combined manufacturers’ and technologists’ airborne wind shear conference was held at NASA Langley on September 28-30, 1993, marking an end to what NASA and the FAA jointly recognized as "the highly successful wind shear experi­ments conducted by government, academic institutions, and industry.” From this point onward, emphasis would shift to certification, regula­tion, and implementation as the technology transitioned into commer­cial service.[96] There were some minor issues among NASA, the airlines, and plane manufacturers about how to calibrate and where to place the various components of the system for maximum effectiveness. Sometimes, the airlines would begin testing installed systems before NASA finished its testing. Airline representatives said that they were pleased with the system, but they noted that their pilots were highly trained profession­als who, historically, had often avoided wind shear on their own. Pilots, who of course had direct control over plane performance, wished to have detailed information about the system’s technical components. Airline rep­resentatives debated the necessity of considering the performance spec­ifications of particular aircraft when installing the airborne system but ultimately went with a single Doppler radar system that could work with all passenger airliners.[97] Through all this, Langley researchers worked with the FAA and industry to develop certification standards for the wind shear sensors. These standards involved the wind shear hazard, the cock­pit interface, alerts given to flight crews, and sensor performance levels. NASA research, as it had in other aspects of aeronautics over the history of American civil aviation, formed the basis for these specifications.[98]

Although its airborne sensor development effort garnered the greatest attention during the 1980s and 1990s, NASA Langley also developed several ground-based wind shear detection systems. One was the low-level wind shear alert system installed at over 100 United States airports. By 1994, ground-based Terminal Doppler Weather Radar systems that could predict when such shears would come were being installed at major airports, but plane-based systems continued to be necessary because not all of the thousands of airports around the world had such systems. Of plane-based systems, NASA’s forward-looking predictive radar worked best.[99]

The end of the tyranny of the microburst did not come without one last serious accident that had its own consequences for wind shear alleviation. On July 2, 1994, US Air Flight 1016, a twin-engine Douglas DC-9, crashed and burned after flying through a microburst during a missed approach at Charlotte-Douglas International Airport. The crew had realized too late that conditions were not favorable for landing on Runway 18R, had tried to go around, and had been caught by a violent microburst that sent the airplane into trees and a home. Of the 57 passengers and crew, 37 perished, and the rest were injured, 16 seriously. The NTSB faulted the crew for continuing its approach “into severe convective activity that was conducive to a microburst,” for “failure to recognize a windshear situation in a timely manner,” and for “failure to establish and maintain the proper airplane attitude and thrust setting necessary to escape the windshear.” As well, it blamed a “lack of real-time adverse weather and windshear hazard information dissemination from air traffic control.”[100] Several factors came together to make the accident more tragic. In 1991, US Air had installed a Honeywell wind shear detector in the plane that could furnish the crew with both a visual warning light and an audible “wind shear, wind shear, wind shear” warning once an airplane entered a wind shear. But it failed to function during this encounter. Its operating algorithms were designed to minimize “nuisance alerts,” such as routine changes in aircraft motions induced by flap movement. When Flight 1016 encountered its fatal shear, the plane’s landing flaps were in transition as the crew executed its missed approach, and this likely played a role in the system’s failure to function. As well, Charlotte had been scheduled to be the fifth airport to receive Terminal Doppler Weather Radar, a highly sensitive and precise wind shear detection system. But a land dispute involving the cost of property that the airport was trying to purchase for the radar site bumped it from fifth to 38th on the list to get the new TDWR. Thus, when the accident occurred, Charlotte only had the far less capable LLWAS in service.[101] Clearly, to survive the dangers of wind shear, airline crews needed aircraft equipped with forward-looking predictive wind shear warning systems, airports equipped with up-to-date precise wind shear Doppler radar detection systems, and air traffic controllers cognizant of the problem and willing to unhesitatingly shift flights away from potential wind shear threats. Finally, pilots needed to exercise extreme prudence when operating in conditions conducive to wind shear formation.

Not quite 5 months later, on November 30, 1994, Continental Airlines Flight 1637, a Boeing 737 jetliner, lifted off from Washington-Reagan Airport, Washington, DC, bound for Cleveland. It is doubtful whether any passengers realized that they were helping usher in a new chapter in the history of aviation safety. This flight marked the introduction of a commercial airliner equipped with a forward-looking sensor for detecting and predicting wind shear. The sensor was a Bendix RDR-4B developed by Allied Signal Commercial Avionic Systems of Fort Lauderdale, FL. The RDR-4B was the first of the predictive Doppler microwave radar wind shear detection systems based upon NASA Langley’s research to gain FAA certification, achieving this milestone on September 1, 1994. It consisted of an antenna, a receiver-transmitter, and a Planned Position Indicator (PPI), which showed the direction and distance of a wind shear microburst alongside the regular weather display. Since then, the number of wind shear accidents has dropped precipitously, reflecting the proliferation and synergistic benefits accruing from both air- and land-based advanced wind shear sensors.[102]

In the mid-1990s, as part of NASA’s Terminal Area Productivity Program, Langley researchers used numerical modeling to predict weather in the area of airport terminals. Their large-eddy simulation (LES) model had a meteorological framework that allowed the prediction and depiction of the interaction of an airplane’s wake vortexes (the rotating turbulence that streams from an aircraft’s wingtips as it passes through the air) with environments containing crosswind shear, stratification, atmospheric turbulence, and humidity. Meteorological effects can, to a large degree, determine the behavior of wake vortexes. Turbulence can gradually weaken the rotation of a vortex, robbing it of strength, and other dynamic instabilities can cause a vortex to collapse. Results from the numerical simulations helped engineers develop useful algorithms for how aircraft should be spaced in the narrow approach corridors surrounding the airport terminal when wake turbulence is present. The models utilized both two and three dimensions to obtain the broadest possible picture of phenomena interaction and provided a solid basis for the development of the Aircraft Vortex Spacing System (AVOSS), which safely increased airport capacity.[103]
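AVOSS itself rested on far richer physics than any toy formula, but the core spacing idea described above can be sketched with a deliberately simplified model: assume a vortex’s circulation decays exponentially under ambient turbulence (the decay timescale here stands in for everything the LES actually computed), and admit the following aircraft only after circulation falls below a safe threshold. Every number and function name below is an illustrative assumption, not an AVOSS value.

```python
import math

def vortex_circulation(gamma0, t, tau):
    """Circulation (m^2/s) of a trailing vortex after time t (s),
    assuming simple exponential decay with timescale tau (s).
    Real decay depends on turbulence, stratification, and ground
    effect, which the LES models resolved explicitly."""
    return gamma0 * math.exp(-t / tau)

def min_spacing_time(gamma0, gamma_safe, tau):
    """Time (s) for circulation to decay below a safe threshold,
    i.e. the earliest admissible arrival of the following aircraft."""
    return tau * math.log(gamma0 / gamma_safe)

# Illustrative numbers only: initial circulation 400 m^2/s, safe
# threshold 100 m^2/s, decay timescale 60 s in strong turbulence.
t_wait = min_spacing_time(400.0, 100.0, 60.0)
# roughly 83 s before the next aircraft may follow on this path
```

The practical payoff, as the text notes, is that in turbulent or stratified conditions the safe interval shrinks, so a weather-aware system can space aircraft closer than a fixed worst-case rule allows.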

In 1999, researchers at NASA’s Goddard Space Flight Center in Greenbelt, MD, concluded a 20-year experiment on wind-stress simulations and equatorial dynamics. The use of existing datasets and the creation of models that paired atmosphere and ocean forecasts of changes in sea surface temperatures helped the researchers to obtain predictions of climatic conditions of large areas of Earth, even months and years in advance. Researchers found that these conditions affect the speed and timing of the transition from laminar to turbulent airflow in a plane’s boundary layer, and their work contributed to a more sophisticated understanding of aerodynamics.[104]

In 2008, researchers at NASA Goddard compared various NASA satellite datasets and global analyses from the National Centers for Environmental Prediction to characterize properties of the Saharan Air Layer (SAL), a layer of dry, dusty, warm air that moves westward off the Sahara Desert of Africa and over the tropical Atlantic. The researchers also examined the effects of the SAL on hurricane development. Although the SAL causes a degree of low-level vertical wind shear that pilots have to be cognizant of, the researchers concluded that the SAL’s effects on hurricane and microburst formation were negligible.[105]

Advanced research into turbulence will be a vital part of the aerospace sciences as long as vehicles move through the atmosphere. Since 1997, Stanford has been one of five universities sponsored by the U.S. Department of Energy as a national Advanced Simulation and Computing Center. Today, researchers at Stanford’s Center for Turbulence Research use computer clusters that are many times more powerful than the pioneering ILLIAC IV. For large-scale turbulence research projects, they also have access to cutting-edge computational facilities at the National Laboratories, including the Columbia computer at NASA Ames Research Center, which has 10,000 processors. Such advanced research into turbulent flow continues to help steer aerodynamics developments as the aerospace community confronts the challenges of the 21st century.[106]

In 2003, President George W. Bush signed the Vision 100 Century of Aviation Reauthorization Act.[107] This initiative established within the FAA a joint planning and development office to oversee and manage the Next Generation Air Transportation System (NextGen). NextGen incorporated seven goals:

1. Improve the level of safety, security, efficiency, qual­ity, and affordability of the National Airspace System and aviation services.

2. Take advantage of data from emerging ground-based and space-based communications, navigation, and surveillance technologies.

3. Integrate data streams from multiple agencies and sources to enable situational awareness and seam­less global operations for all appropriate users of the system, including users responsible for civil aviation, homeland security, and national security.

4. Leverage investments in civil aviation, homeland security, and national security and build upon cur­rent air traffic management and infrastructure ini­tiatives to meet system performance requirements for all system uses.

5. Be scalable to accommodate and encourage substan­tial growth in domestic and international transpor­tation and anticipate and accommodate continuing technology upgrades and advances.

6. Accommodate a range of aircraft operations, including airlines, air taxis, helicopters, general aviation, and unmanned aerial vehicles.

7. Take into consideration, to the greatest extent practicable, the design of airport approach and departure flight paths to reduce the exposure of affected residents to noise and emissions pollution.[108]

NASA is now working with the FAA, industry, the academic community, the Departments of Commerce, Defense, Homeland Security, and Transportation, and the Office of Science and Technology Policy to turn the ambitious goals of NextGen into air transport reality. Continual improvement of Terminal Doppler Weather Radar and the Low-Level Windshear Alert System is an essential element of the reduced weather impact goals within the NextGen initiatives. Service life extension programs are underway to maintain and improve airport TDWR and the older LLWAS capabilities.[109] There are LLWAS at 116 airports worldwide, and an improvement plan for the program was completed in 2008, consisting of updating system algorithms and creating new information/alert displays to increase wind shear detection capabilities, reduce the number of false alarms, and lower maintenance costs.[110]

FAA and NASA researchers and engineers have not been content to rest on their accomplishment and have continued to perfect the wind shear prediction systems they pioneered in the 1980s and 1990s. Building upon this fruitful NASA-FAA turbulence and wind shear partnership effort, the FAA has developed Graphical Turbulence Guidance (GTG), which provides clear air turbulence forecasts out to 12 hours in advance for planes flying at altitudes of 20,000 feet and higher. An improved system, GTG-2, will enable forecasts out to 12 hours for planes flying at lower altitudes down to 10,000 feet.[111] As of 2010, forward-looking
predictive Doppler microwave radar systems of the type pioneered by Langley are installed on most passenger aircraft.

This introduction to NASA research on the hazards of turbulence, gusts, and wind shear offers but a glimpse of the detailed work undertaken by Agency staff. However brief, it furnishes yet another example of how NASA, and the NACA before it, has contributed to aviation safety. This is due, in no small measure, to the unique qualities of its professional staff. The enthusiasm and dedication of those who worked NASA’s wind shear research programs, and the gust and turbulence studies of the NACA earlier, have been evident throughout the history of both agencies. Their work has helped the air traveler evade the hazards of wild winds, turbulence, and storm, to the benefit of all who journey through the world’s skies.

Microwave Landing System: 1976

As soon as it was possible to join the new inventions of the airplane and the radio in a practical way, it was done. Pilots found themselves “flying the beam” to navigate from one city to another and lining up with the runway, even in poor visibility, using the Instrument Landing System (ILS). ILS could tell the pilots if they were left or right of the runway centerline and if they were higher or lower than the established glide slope during the final approach. ILS required straight-in approaches and separation between aircraft, which limited the number of landings allowed each hour at the busiest airports. To improve upon this, the FAA, NASA, and the Department of Defense (DOD) in 1971 began developing the Microwave Landing System (MLS), which promised, among other things, to increase the frequency of landings by allowing multiple approach paths to be used at the same time. Five years later, the FAA took delivery of a prototype system and had it installed at the FAA’s National Aviation Facilities Experimental Center in Atlantic City, NJ, and at NASA’s Wallops Flight Research Facility in Virginia.[210]

Between 1976 and 1994, NASA was actively involved in understand­ing how MLS could be integrated into the national airspace system. Configuration and operation of aircraft instrumentation,[211] pilot proce­dures and workload,[212] air traffic controller procedures,[213] use of MLS with helicopters,[214] effects of local terrain on the MLS signal,[215] and the deter­mination to what extent MLS could be used to automate air traffic con­trol[216] were among the topics NASA researchers tackled as the FAA made plans to employ MLS at airports around the Nation.

NASA’s Applications Technology Satellite program had proven that space-based communication and navigation were more than feasible, though the FAA’s 1982 National Airspace System Plan had stopped short of endorsing the use of satellites. Nevertheless, in 1994 the FAA dropped the MLS program to pursue GPS technology, which was just beginning to work its way into the public consciousness. GPS signals, when enhanced by a ground-based system known as the Wide Area Augmentation System (WAAS), would provide more accurate position information and do so in a more efficient and potentially less costly manner than deploying MLS around the Nation.[217]

Although never widely deployed in the United States for civilian use, MLS remains a tool of the Air Force at its airbases. NASA has employed a version of the system, called the Microwave Scan Beam Landing System, at its Space Shuttle landing sites in Florida and California. Moreover, Europe has embraced MLS in recent years, and an increasing number of airports there are being equipped with the system, with London’s Heathrow Airport among the first to roll it out.[218]

En Route Descent Adviser

The National Airspace System relies on a complex set of actions with thousands of variables. If one aircraft is so much as 5 minutes out of position as it approaches a major airport, the error could trigger a domino effect that results in traffic congestion in the air, too many airplanes on the ground needing to use the same taxiway at the same time, late arrivals to the gate, and missed connections. One specific tool created by NASA to avoid this is the En Route Descent Adviser (EDA). Using data from the Center/TRACON Automation System (CTAS), the Traffic Management Advisor (TMA), and live radar updates, the EDA software generates specific traffic control instructions for each aircraft approaching a TRACON (Terminal Radar Approach Control facility) so that it crosses an exact navigation fix in the sky at the precise time set by the TMA tool. The EDA tool does this with all ATC constraints in mind and with maneuvers that are as fuel efficient as possible for the type of aircraft.[269]
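The real EDA couples full aircraft performance models with every ATC constraint, but the core idea of meeting a TMA-scheduled fix-crossing time can be sketched in miniature: given the distance to the metering fix and the scheduled crossing time, compute the required ground speed and check it against the aircraft’s allowable range. The function names, speed limits, and numbers below are illustrative assumptions, not values from the actual tool.

```python
def required_groundspeed(distance_nm, time_to_fix_s):
    """Ground speed (knots) needed to cross the metering fix at the
    scheduled time.  Illustrative only -- the real EDA solves this
    with full performance models and all ATC constraints."""
    return distance_nm / (time_to_fix_s / 3600.0)

def speed_advisory(distance_nm, time_to_fix_s, min_kt, max_kt):
    """Clamp the required speed to the aircraft's allowable range;
    a speed outside the range would require a path change instead."""
    gs = required_groundspeed(distance_nm, time_to_fix_s)
    if gs < min_kt:
        return min_kt, "path stretch needed (too early)"
    if gs > max_kt:
        return max_kt, "cannot make scheduled time (too late)"
    return gs, "speed adjustment only"

# 80 nm from the fix, scheduled to cross in 12 minutes:
gs, action = speed_advisory(80.0, 12 * 60, 250.0, 470.0)
# gs = 400 kt, inside the assumed envelope, so a speed change suffices
```

When the required speed falls outside the envelope, the real system resorts to path stretching or altitude changes, which is why EDA plans the descent as early in the approach as practical.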

Improving the efficient flow of air traffic through the TRACON to the airport by using EDA as early in the approach as practical makes it possible for the airport to receive traffic in a constant feed, avoiding the need for aircraft to waste time and fuel by circling in a parking orbit before taking their turn to approach the field. Another benefit: EDA allows controllers during certain high-workload periods to concentrate less on timing and more on dealing with variables such as changing weather and airspace conditions or handling special requests from pilots.[270]

Landing Impact and Aircraft Crashworthiness/Survivability Research

Among NASA’s earliest research conducted primarily in the interest of aviation safety was its Aircraft Crash Test program. Aircraft crash survivability has been a serious concern almost since the beginning of flight. On September 17, 1908, U.S. Army Lt. Thomas E. Selfridge became powered aviation’s first fatality, after the aircraft in which he was a passenger crashed at Fort Myer, VA. His pilot, Orville Wright, survived the crash.[363] Since then, untold thousands of humans have perished in aviation accidents. To address this grim aspect of flight, NASA Langley Research Center began in the early 1970s to investigate ways to increase the human survivability of aircraft crashes. This important series of studies has been instrumental in the development of important safety improvements in commercial, general aviation, and military aircraft, as well as NASA space vehicles.[364]

These unique experiments involved dropping various types and components of aircraft from a 240-foot-high gantry structure at NASA Langley. This towering structure had been built in the 1960s as the Lunar Landing Research Facility to provide a realistic setting for Apollo astronauts to train for lunar landings. At the end of the Apollo program in 1972, the gantry was converted for use as a full-scale crash test facility. The goal was to learn more about the effects of crash impact on aircraft structures and their occupants, and to evaluate seat and restraint systems. At this time, the gantry was renamed the Impact Dynamics Research Facility (IDRF).[365]

This aircraft test site was the only such testing facility in the coun­try capable of slinging a full-scale aircraft into the ground, similar to the way it would impact during a real crash. To add to the realism, many of the aircraft dropped during these tests carried instrumented anthropo­morphic test dummies to simulate passengers and crew. The gantry was able to support aircraft weighing up to 30,000 pounds and drop them from as high as 200 feet above the ground. Each crash was recorded and evaluated using both external and internal cameras, as well as an array of onboard scientific instrumentation.[366]

Since 1974, NASA has conducted crash tests on a variety of aircraft, including high- and low-wing, single- and twin-engine general-aviation aircraft and fuselage sections, military rotorcraft, and a variety of other aviation and space components. During the 30-year period after the first full-scale crash test in February 1974, this system was employed to conduct 41 crash/impact tests on full-sized general-aviation aircraft and 11 full-scale rotorcraft tests. It also provided for 48 Wire Strike Protection System (WSPS) Army helicopter qualification tests, 3 Boeing 707 fuselage section vertical drop tests, and at least 60 drop tests of the F-111 crew escape module.[367]

The massive amount of data collected in these tests has been used to determine what types of crashes are survivable. More specifically, this information has been used to establish guidelines for aircraft seat design that are still used by the FAA as its standard for certification. It has also contributed to new technologies, such as energy-absorbing seats, and to improving the impact characteristics of new advanced composite mate­rials, cabin floors, engine support fittings, and other aircraft components and equipment.[368] Indeed, much of today’s aircraft safety technology can trace its roots to NASA’s pioneering landing impact research.

Birthing the Testing Techniques

The development and use of free-flying model techniques within the NACA originated in the 1920s at the Langley Memorial Aeronautical Laboratory at Hampton, VA. The early efforts had been stimulated by concerns over a critical lack of understanding and design criteria for methods to improve aircraft spin behavior.[441] Although early aviation pioneers had frequently used flying models to demonstrate concepts for flying machines, many of the applications had not adhered to the proper scaling procedures required for realistic simulation of full-scale aircraft motions. The NACA researchers were very aware that certain model features other than geometrical shape required application of scaling factors to ensure that the flight motions of the model would replicate those of the aircraft during flight. In particular, the requirements to scale the mass and the distribution of mass within the model were very specific.[442] The fundamental theories and derivation of scaling factors for free-flight models are based on the science known as dimensional analysis. Briefly, dynamic free-flight models are constructed so that the linear and angular motions and rates of the model can be readily scaled to full-scale values. For example, a dynamically scaled 1/9-scale model will have a wingspan 1/9 that of the airplane and a weight 1/729 that of the airplane. Of more importance is the fact that the scaled model will exhibit angular velocities that are three times faster than those of the airplane, creating a potential challenge for a remotely located human pilot to control its rapid motions.
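The scaling relationships quoted above follow directly from dimensional analysis and can be tabulated for any scale factor. The sketch below assumes the common case of testing at the same air density as full-scale flight; it reproduces the 1/9-scale figures in the text (wingspan 1/9, weight 1/729, angular rates 3 times faster).

```python
import math

def dynamic_scale_factors(n):
    """Dynamic-scaling factors for a 1/n-scale free-flight model
    tested at the same air density as the full-scale aircraft.
    Each returned value is (model quantity) / (full-scale quantity)."""
    return {
        "length": 1.0 / n,                     # all linear dimensions
        "weight": 1.0 / n**3,                  # mass scales with volume
        "linear_velocity": 1.0 / math.sqrt(n), # speeds are slower
        "time": 1.0 / math.sqrt(n),            # events happen faster
        "angular_velocity": math.sqrt(n),      # model rotates faster
    }

f = dynamic_scale_factors(9)
# f["length"] = 1/9, f["weight"] = 1/729, f["angular_velocity"] = 3.0
```

The angular-velocity factor is the troublesome one for a remote pilot: a 1/9-scale model spins and tumbles three times as fast as the airplane it represents, exactly as the text describes.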

Initial NACA testing of dynamically scaled models consisted of spin tests of biplane models that were hand-launched by a researcher or catapulted from a platform about 100 feet above the ground in an airship hangar at Langley Field.[443] As the unpowered model spun toward the ground, its path was tracked and followed by a pair of researchers holding a retrieval net similar to those used in fire rescues. To an observer, the testing technique contained all the elements of an old silent movie, including the dash for the falling object. The information provided by this free-spin test technique was valuable and provided confidence (or lack thereof) in the ability of the model to predict full-scale behavior, but the briefness of the test and the inevitable delays caused by damage to the model left much to be desired.

The free-flight model testing at Langley was accompanied by other forms of analysis, including a 5-foot vertical wind tunnel in which the aerodynamic characteristics of the models could be measured during simulated spinning motions while attached to a motor-driven spinning apparatus. The aerodynamic data gathered in the Langley 5-Foot Vertical Tunnel were used for analyses of spin modes, the effects of various airplane components in spins, and the impact of configuration changes. The airstream in the tunnel was directed downward, however, so free-spinning tests could not be conducted.[444]

Meanwhile, in England, the Royal Aircraft Establishment (RAE) was aware of the NACA’s airship hangar free-spinning technique and had been inspired to explore the use of similar catapulted model spin tests in a large building. The RAE experience led to the same unsatisfac­tory conclusions and redirected its interest to experiments with a novel 2-foot-diameter vertical free-spinning tunnel. The positive results of tests of very small models (wingspans of a few inches) in the apparatus led the British to construct a 12-foot vertical spin tunnel that became operational in 1932.[445] Tests in the facility were conducted with the model launched into a vertically rising airstream, with the model’s weight being supported by its aerodynamic drag in the rising airstream. The mod­el’s vertical position in the test section could be reasonably maintained within the view of an observer by precise and rapid control of the tun­nel speed, and the resulting test time could be much longer than that obtained with catapulted models. The advantages of this technique were very apparent to the international research community, and the facility features of the RAE tunnel have influenced the design of all other ver­tical spin tunnels to this day.

[Figure] This cross-sectional view of the Langley 20-Foot Vertical Spin Tunnel shows the closed-return tunnel configuration, the location of the drive fan at the top of the facility, and the locations of safety nets above and below the test section to restrain and retrieve models. Labeled components include the turning vanes, test section, honeycomb, documentation camera, and data acquisition cameras (2 of 8). NASA.

When the NACA learned of the new British tunnel, Charles H. Zimmerman of the Langley staff led the design of a similar tunnel known as the Langley 15-Foot Free-Spinning Wind Tunnel, which became operational in 1935.[446] The use of clockwork delayed-action mechanisms to move the control surfaces of the model during the spin enabled the researchers to evaluate the effectiveness of various combinations of spin recovery techniques. The tunnel was immediately used to accumulate design data for satisfactory spin characteristics, and its workload increased dramatically.

Langley replaced its 15-Foot Free-Spinning Wind Tunnel in 1941 with a 20-foot spin tunnel that produced higher test speeds to support scaled models of the heavier aircraft emerging at the time. Control inputs for spin recovery were actuated at the command of a researcher rather than the preset clockwork mechanisms of the previous tunnel. Copper coils placed around the periphery of the tunnel set up a magnetic field in the tunnel when energized, and the magnetic field actuated a magnetic device in the model to operate the model’s aerodynamic control surfaces.[447]

The Langley 20-Foot Vertical Spin Tunnel has since continued to serve the Nation as the most active facility for spinning experiments and other studies requiring a vertical airstream. Data acquisition is based on a model space positioning system that uses retro-reflective targets attached to the model to determine model position, and results include spin rate, model attitudes, and control positions.[448] The Spin Tunnel has supported the development of nearly all U.S. military fighter and attack aircraft, trainers, and bombers during its 68-year history, with nearly 600 projects conducted for different aerospace configurations to date.

Quest for Guidelines: Tail Damping Power Factor

An empirical criterion based on the projected side area and mass distribution of the airplane was derived in England, and the Langley staff proposed a design criterion in 1939 based solely on the geometry of aircraft tail surfaces. Known as the tail-damping power factor (TDPF), it was touted as a rapid estimation method for determining whether a new design was likely to comply with the minimum requirements for safety in spinning.[508]

The beginning of World War II and the introduction of a new Langley 20-Foot Spin Tunnel in 1941 resulted in a tremendous demand for spinning tests of high-priority military aircraft. The workload of the staff increased dramatically, and a tremendous amount of data was gathered for a large number of different configurations. Military requests for spin tunnel tests filled all available tunnel test times, leaving no time for general research. At the same time, configurations were tested with radical differences in geometry and mass distribution. Tailless aircraft with their masses distributed in a primarily spanwise direction were introduced, along with twin-engine bombers and other unconventional designs with moderately swept wings and canards.

In the 1950s, advances in aircraft performance provided by the introduction of jet propulsion resulted in radical changes in aircraft configurations, creating new challenges for spin technology. Military fighters no longer resembled the aircraft of World War II, as the intro­duction of swept wings and long, pointed fuselages became common­place. Suddenly, certain factors, such as mass distribution, became even more important, and airflow around the unconventional, long fuselage shapes during spins dominated the spin behavior of some configurations. At the same time, fighter aircraft became larger and heavier, resulting in much higher masses relative to the atmospheric density, especially during flight at high altitudes.
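The growing importance of mass relative to atmospheric density is often captured in spin studies by the relative-density parameter, mu = m / (rho * S * b). The sketch below assumes that definition, standard-atmosphere densities, and a hypothetical fighter's figures to show how sharply the parameter rises at altitude.

```python
RHO_SEA_LEVEL = 1.225   # ISA air density at sea level, kg/m^3
RHO_40000_FT = 0.302    # approximate ISA air density at 40,000 ft, kg/m^3

def relative_density(mass_kg, rho_kg_m3, wing_area_m2, span_m):
    """Spin relative-density parameter: mu = m / (rho * S * b)."""
    return mass_kg / (rho_kg_m3 * wing_area_m2 * span_m)

# Hypothetical 1950s jet fighter: 10,000 kg, 30 m^2 wing, 10 m span
mu_low = relative_density(10_000, RHO_SEA_LEVEL, 30, 10)
mu_high = relative_density(10_000, RHO_40000_FT, 30, 10)
print(round(mu_low, 1), round(mu_high, 1))  # mu roughly quadruples at altitude
```

The same airplane that spins conventionally near the ground thus behaves, at 40,000 feet, like one several times as dense, which is why inertial (mass-distribution) effects came to dominate spin behavior in the jet age.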

The Unitary Plan Tunnels

In the aftermath of World War II and the early days of the Cold War, the Air Force, Army, Navy, and the NACA evaluated what the aeronautical industry needed to continue leadership and innovation in aircraft and missile development. Specifically, the United States needed more transonic and supersonic tunnels. The joint evaluation resulted in a proposal called the Unitary Plan. President Harry S. Truman's Air Policy Commission urged the passage of the Unitary Plan in January 1948. The draft plan, distributed to the press at the White House, proposed the installation of the first 16 wind tunnels "as quickly as possible," with the remainder to follow quickly.[551]

Congress passed the Unitary Wind Tunnel Plan Act, and President Truman signed it October 27, 1949. The act authorized the construction of a group of wind tunnels at U. S. Air Force and NACA installations for the testing of supersonic aircraft and missiles and for the high-speed and high-altitude evaluation of engines. The wind tunnel system was to benefit industry, the military, and other Government agencies.[552]

The portion of the Unitary Plan assigned to the U. S. Air Force led to the creation of the Arnold Engineering Development Center (AEDC) at Tullahoma, TN. Dedicated in June 1951, the AEDC took advantage of abundant hydroelectric power provided by the nearby Tennessee Valley Authority. The Air Force erected facilities, such as the Propulsion Wind Tunnel and two individual 16-Foot wind tunnels that covered the range of Mach 0.2 to Mach 4.75, for the evaluation of full-scale jet and rocket engines in simulated aircraft and missile applications. Starting with 2 wind tunnels and an engine test facility, the research equipment at the AEDC expanded to 58 aerodynamic and propulsion wind tunnels.[553] The Aeropropulsion Systems Test Facility, operational in 1985, was the fin­ishing touch, which made the AEDC, in the words of one observer, "the world’s most complete aerospace ground test complex.”[554]

The sole focus of the AEDC on military aeronautics led the NACA to focus on commercial aeronautics. The Unitary Plan provided two benefits for the NACA. First, it upgraded and repowered the NACA's existing wind tunnel facilities. Second, and more importantly, the Unitary Plan provided for a new tunnel at each of the three NACA laboratories at a cost of $75 million. Overall, those three tunnels represented, to one observer, "a landmark in wind tunnel design by any criterion—size, cost, performance, or complexity."[555]

The NACA provided a manual for users of the Unitary Plan Wind Tunnel system in 1956, after the facilities became operational. The docu­ment allowed aircraft manufacturers, the military, and other Government agencies to plan development testing. Two general classes of work could be conducted in the Unitary Plan wind tunnels: company or Government projects. Industrial clients were responsible for renting the facility, which amounted to between $25,000 and $35,000 per week (approximately $190,000 to $265,000 in modern currency), depending on the tunnel, the utility costs required to power the facility, and the labor, materials, and overhead related to the creation of the basic test report. The test report consisted of plotted curves, tabulated data, and a description of the methods and procedures that allowed the company to properly interpret the data. The NACA kept the original report in a secure file for 2 years to protect the interests of the company. There were no fees for work initiated by Government agencies.[556]

The Langley Unitary Plan Wind Tunnel began operations in 1955. NACA researcher Herbert Wilson led a design team that created a closed-circuit, continuous-flow, variable-density supersonic tunnel with two test sections. The test sections, each measuring 4 by 4 feet and 7 feet long, covered a low Mach range (1.5 to 2.9) and a high Mach range (2.3 to 4.6). Tests in the Langley Unitary Plan Tunnel included force and moment measurements, surface pressure measurement and distribution, visualization of on- and off-surface airflow patterns, and heat transfer. The tunnel operated at 150 °F, with the capability of generating 300 to 400 °F in short bursts for heat transfer studies. Built at an initial cost of $15.4 million, the Langley facility was the cheapest of the three NACA Unitary Plan wind tunnels.[557]

The original intention of the Langley Unitary Plan tunnel was missile development. A long series of missile tests addressed high-speed performance, stability and control, maneuverability, jet-exhaust effects, and other factors. NACA researchers quickly placed models of the McDonnell F-4 Phantom II in the tunnel in 1956, and soon after, various models of the North American X-15, the General Dynamics F-111 Aardvark, proposed supersonic transport configurations, and spacecraft appeared in the tunnel.[558]

A model of the Apollo Launch Escape System in the Unitary Wind Tunnel at NASA Ames. NASA.

The Ames Unitary Plan Wind Tunnel opened in 1956. It featured three test sections: an 11- by 11-foot transonic section (Mach 0.3 to 1.5) and two supersonic sections that measured 9 by 7 feet (Mach 1.5 to 2.6) and 8 by 7 feet (Mach 2.5 to 3.5). Tunnel personnel could adjust the air­flow to simulate flying conditions at various altitudes in each section.[559]

The power and magnitude of the tunnel facility called for unprecedented design and construction. The 11-stage axial-flow compressor featured a 20-foot diameter and was capable of moving air at 3.2 million cubic feet per minute. The complete assembly, which included over 2,000 rotor and stator blades, weighed 445 tons. A flow diversion valve allowed the compressor to drive either the 9- by 7-foot or the 8- by 7-foot supersonic test section. At 24 feet in diameter, the valve was the largest of its kind in the world in 1956, yet it took only 3.5 minutes to switch between the two wind tunnels. Four main drive rotors, weighing 150 tons each, powered the facility. They could generate 180,000 horsepower on a continuous basis and 216,000 horsepower in 1-hour intervals. Crews used 10,000 cubic yards of concrete for the foundation and 7,500 tons of steel plate for the major structural components. Workers expended 100 tons of welding rods during construction. When the facility began operations in 1956, the project had cost the NACA $35 million.[560]

The personnel of the Ames Unitary Plan Wind Tunnel evaluated every major craft in the American aerospace industry from the late 1950s to the late 20th century. In aeronautics, models of nearly every commercial transport and military fighter underwent testing. For the space program, the Unitary Plan Wind Tunnel was crucial to the design of the landmark Mercury, Gemini, and Apollo spacecraft, and the Space Shuttle. That record led NASA to assert that the facility was a "unique national asset of vital importance to the nation’s defense and its competitive position in the world aerospace market.” It also reflected the fact that the Unitary Plan facility was NASA’s most heavily used wind tunnel, with over 1,000 test programs conducted during 60,000 hours of operation by 1994.[561]

SAMPLE AEROSPACE VEHICLES EVALUATED IN THE UNITARY PLAN WIND TUNNEL

MILITARY: Convair B-58; Lockheed A-12/YF-12/SR-71; Lockheed F-104; North American XB-70; Rockwell International B-1; General Dynamics F-111; McDonnell-Douglas F/A-18; Northrop/McDonnell-Douglas YF-23

COMMERCIAL: McDonnell-Douglas DC-8; McDonnell-Douglas DC-10; Boeing 727; Boeing 767

SPACE: Mercury spacecraft; Gemini spacecraft; Apollo Command Module; Space Shuttle orbiter

The National Park Service designated the Ames Unitary Plan Wind Tunnel Facility a national historic landmark in 1985. The Unitary Plan Wind Tunnel represented "the logical crossover point from NACA to NASA” and "contributed equally to both the development of advanced American aircraft and manned spacecraft.”[562]

The Unitary Plan facility at Lewis Research Center allowed the observation and development of full-scale jet and rocket engines in a 10- by 10-foot supersonic wind tunnel that cost $24.6 million. Designed by Abe Silverstein and Eugene Wasielewski, the test section featured a flexible wall made up of 10-foot-wide polished stainless steel plates, almost 1.5 inches thick and 76 feet long. Hydraulic jacks changed the shape of the plates to form nozzle shapes covering the range of Mach 2 to Mach 3.5. Silverstein and Wasielewski also incorporated both open and closed operation. For propulsion tests, air entered the tunnel, passed through the test section, and exited continually on the other side. In the aerodynamic mode, the same air circulated repeatedly to maintain a higher pressure, desired temperature, or moisture content. The Lewis Unitary Plan Wind Tunnel contributed to the development of the General Electric F110 and Pratt & Whitney TF30 jet engines intended for the Grumman F-14 Tomcat and the liquid-fueled rocket engines destined for the Space Shuttle.[563]
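The flexible-wall approach works because, in supersonic operation, the test-section Mach number is fixed by the nozzle's area ratio through the standard isentropic area-Mach relation. This is a generic sketch of that relation for air (gamma = 1.4), not Lewis's actual wall-contour schedule.

```python
import math

GAMMA = 1.4  # ratio of specific heats for air

def area_ratio(mach):
    """Isentropic A/A*: nozzle-to-throat area ratio needed for a given Mach."""
    term = (2 / (GAMMA + 1)) * (1 + (GAMMA - 1) / 2 * mach ** 2)
    return term ** ((GAMMA + 1) / (2 * (GAMMA - 1))) / mach

print(round(area_ratio(2.0), 4))  # 1.6875: Mach 2 needs a 1.69:1 area ratio
print(round(area_ratio(3.5), 2))  # 6.79: Mach 3.5 needs a far larger ratio
```

Sweeping the hydraulic jacks between those two contours is, in effect, sweeping A/A* between roughly 1.7 and 6.8, which is how one test section could cover the whole Mach 2 to 3.5 range.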

Many NACA tunnels found long-term use with NASA. After NASA made modifications in the 1950s, the 20-Foot VST allowed the study of spacecraft and recovery devices in vertical descent. In the early 21st century, researchers used the 20-Foot VST to test the free-fall and dynamic stability characteristics of spacecraft models. It remains one of only two operational spin tunnels in the world.[564]


The 8-Foot Transonic Pressure Tunnel (TPT). NASA.

Toward the Future

NASA remains active in the pursuit of new materials that will support fresh objectives for enabling a step change in efficiency for commercial aircraft of the next few decades. A key element of NASA's strategy is to promote the transition from conventional fuselage-and-wing designs for large commercial aircraft to flying wing designs, with the Boeing X-48 Blended Wing-Body subscale demonstrator as the model. The concept assumes many changes in current approaches to flight controls, propulsion, and, indeed, expectations for the passenger experience. Among the many innovations needed to maximize efficiency, such flying wing airliners must also be supported by a radical new look at how composite materials are produced and incorporated in aircraft design.

NASA's Langley Research Center started experimenting with this stitching machine in the early 1990s. The machine stitches carbon, Kevlar, and fiberglass composite preforms before they are infused with epoxy resin through the resin transfer molding process. The machine was limited to stitching only small and nearly flat panels. NASA.

To support the structural technology for the BWB, Boeing faces the challenge of manufacturing an aircraft with a flat bottom, no constant section, and a diversity of shapes across the outer mold line.[772] To meet these challenges, Boeing is returning to the stitching method, although with a different concept, called the pultruded rod stitched efficient unitized structure (PRSEUS). Aviation Week & Space Technology described the idea: "This stitches the composite frames and stringers to the skin to produce a fail-safe structure. The frames and stringers provide continuous load paths and the nylon stitching stops cracks. The design allows the use of minimum-gauge post-buckled skins, and Boeing estimates a PRSEUS pressure vessel will be 28% lighter than a composite sandwich structure."[773]

Under a NASA contract, Boeing is building a 4-foot by 8-foot pressure box with multiple frames and a 30-foot-wide test article of the double-deck BWB airframe. The manufacturing process resembles past experience with the advanced stitching machine. Structure laid up from dry fabric is stitched before a machine pulls carbon fiber rods through pickets in the stringers. The process locks the structure and stringers into a preform without the need for a mold-line tool. The parts are cured in an oven, not an autoclave.[774]

The dream of designing a commercially viable, large transport aircraft made entirely out of plastic may finally soon be realized. The all-composite fuselages of the Boeing 787 and the proposed Airbus A350 are only the latest markers in progress toward this objective. But the next generation of both commercial and military transports will be the first to benefit from composite materials that may be produced and assembled nearly as efficiently as are aluminum and steel.

Extending the Vision: The Evolution of Mini-Sniffer

The Mini-Sniffer program was initiated in 1975 to develop a small, unpi­loted, propeller-driven aircraft with which to conduct research on tur­bulence, natural particulates, and manmade pollutants in the upper atmosphere. Unencumbered and flying at speeds of around 45 mph, the craft was designed to reach a maximum altitude of 90,000 feet. The Mini-Sniffer was capable of carrying a 25-pound instrument package to 70,000 feet and cruising there for about 1 hour within a 200-mile range.

The Aircraft Propulsion Division of NASA's Office of Aeronautics and Space Technology sponsored the project, and a team at the Flight Research Center, led by R. Dale Reed, was charged with designing and testing the airplane. Researchers at Johnson Space Center developed a hydrazine-fueled engine for use at high altitudes, where oxygen is scarce. To avoid delays while waiting for the revolutionary new engine, Reed's team built two Mini-Sniffer aircraft powered by conventional gasoline engines. These were used for validating the airplane's structure, aerodynamics, handling qualities, guidance and control systems, and operational techniques.[899] As Reed worked on the airframe design, he built small, hand-launched balsa wood gliders for qualitative evaluation of different configurations. He decided from the outset that the Mini-Sniffer should have a pusher engine to leave the nose-mounted payload free to collect air samples without disruption or contamination from the engine. Climb performance was given priority over cruise performance.

Eventually, Reed’s team constructed three configurations. The first two—using the same airframe—were powered by a single two-stroke, gasoline-fueled go-cart engine driving a 22-inch-diameter propeller. The third was powered by a hydrazine-fueled engine developed by James W. Akkerman, a propulsion engineer at Johnson Space Center. Thirty-three flights were completed with the three airplanes, each of which provided experimental research results. Thanks to the use of a six-degree-of-freedom simulator, none of the Mini-Sniffer flights had to be devoted to training. Simulation also proved useful for designing the control system and, when compared with flight results, proved an accurate representation of the vehicle’s flight characteristics.

The Mini-Sniffer I featured an 18-foot-span, aft-mounted wing and a nose-mounted canard. Initially, it was flown via a model airplane radio-control box. Dual-redundant batteries supplied power, and fail-safe units were provided to put the airplane into a gliding turn for landing descent in the event of a transmitter failure. After 12 test flights, Reed abandoned the flying-wing canard configuration for one with substantially greater stability.[900] The Mini-Sniffer II design had a 22-foot wingspan with twin tail booms supporting a horizontal stabilizer. This configuration was less susceptible to the flat spin encountered with the Mini-Sniffer I on its final flight, when the ground pilot's timing between right and left yaw pulses coupled the adverse yaw characteristics of the ailerons with the vehicle's Dutch roll motions. The ensuing unrecoverable spin resulted in only minor damage to the airplane, as the landing gear absorbed most of the impact forces. It took 3 weeks to restore the airframe to flying condition and convert it to the Mini-Sniffer II configuration. Dihedral wingtips provided additional roll control.

The modified craft was flown 20 times, including 10 flights using wing-mounted ailerons to evaluate their effectiveness in controlling the aircraft. Simulations showed that summing a yaw-rate gyro and pilot inputs to the rudders gave automatic wing leveling at all altitudes and yaw damping at altitudes above 60,000 feet. Subsequently, the ailerons were locked, and a turn-rate command system was introduced in which the ground controller needed only to turn a knob to achieve the desired turning radius. Flight-testing indicated that the Mini-Sniffer II had a high static-stability margin, making the aircraft very easy to trim and minimizing the effects of altering nose shapes and sizes or adding pods of various shapes and sizes under the fuselage to accommodate instrumentation. A highly damped short-period longitudinal oscillation resulted in rapid recovery from turbulence or upset. When an inadvertent hard-over rudder command rolled the airplane inverted, the ground pilot simply turned the yaw damper on, and the vehicle recovered automatically, losing just 200 feet of altitude.[901]

The Mini-Sniffer III was a completely new airframe, similar in configuration to the Mini-Sniffer II but with a lengthened forward fuselage. An 18-inch nose extension provided better balance and greater payload capacity—up to 50 pounds plus telemetry equipment, radar transponder, radio-control gear, instrumentation, and sensors for stability and control investigations. Technicians at a sailplane repair company constructed the fuselage and wings from fiberglass and plastic foam, and they built tail surfaces from Kevlar and carbon fiber. Metal workers at Dryden fashioned an aluminum tail assembly, while a manufacturer of mini-RPVs designed and constructed an aluminum hydrazine tank to be integral with the fuselage. The Mini-Sniffer III was assembled at Dryden and integrated with Akkerman's engine.

The 15-horsepower, hydrazine-fueled piston engine drove a 38-inch-diameter, 4-bladed propeller. Plans called for eventually using a 6-foot-diameter, 2-bladed propeller for high-altitude flights. A slightly pressurized tank fed liquid hydrazine into a fuel pump, which raised its pressure to 850 pounds per square inch (psi). A fuel valve then routed some of the pressurized hydrazine to a gas generator, where the liquid fuel was converted to hot gas at 1,700 degrees Fahrenheit (°F). Expansion of the hot gas drove the piston.[902] Because hydrazine decomposes without needing to be mixed with oxygen, it is highly suited to use in the thin upper atmosphere. This led to a proposal to send a hydrazine-powered aircraft, based on the Mini-Sniffer concept, to Mars, where it would fly in the thin Martian atmosphere while collecting data and transmitting it back to scientists on Earth. Regrettably, such a vehicle has yet to be built.

Ground crew for the Mini-Sniffer III wore self-contained suits and oxygen tanks because the engine was fueled with hydrazine. NASA.
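As a quick cross-check of the fuel-system figures above, the pressure and temperature convert to SI units with simple arithmetic; the conversion factors below are standard, not data from the program.

```python
def psi_to_kpa(psi):
    """Convert pounds per square inch to kilopascals (1 psi = 6.894757 kPa)."""
    return psi * 6.894757

def fahrenheit_to_celsius(deg_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32) * 5 / 9

print(round(psi_to_kpa(850)))              # 850 psi is about 5,861 kPa (5.9 MPa)
print(round(fahrenheit_to_celsius(1700)))  # 1,700 deg F is about 927 deg C
```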

During a 1-hour shakedown flight on November 23, 1976, the Mini-Sniffer III reached an altitude of 20,000 feet. Power fluctuations prevented the airplane from attaining the planned altitude of 40,000 feet, but otherwise, the engine performed well. About 34 minutes into the flight, fuel tank pressure was near zero, so the ground pilot closed the throttle and initiated a gliding descent. Some 30 minutes later, the Mini-Sniffer III touched down on the dry lakebed. The retrieval crew, wearing
protective garments to prevent contact with the toxic and highly flammable fuel, found that there had been a hydrazine leak. This in itself did not account for the power reduction, however. Investigators suggested a possible fuel line blockage or valve malfunction might have been to blame.[903] Although the mission successfully demonstrated the operational characteristics of a hydrazine-fueled, non-air-breathing aircraft, the Mini-Sniffer III never flew again. Funding for tests with the variable-pitch propeller needed for flights at higher altitudes was not forthcoming, although interest in a Mars exploration airplane resurfaced from time to time over the next few decades.[904]

The Mini-Sniffer project yielded a great deal of useful information for application to future RPRV efforts. One area of interest concerned procedures for controlling the vehicle. On the first flights of Mini-Sniffer I, ordinary model radio-control gear was used. This was later replaced with a custom-made, multichannel radio-control system offering greater range and equipped with built-in fail-safe circuits to retain control when more than one transmitter was in use. The onboard receiver was designed to respond only to the strongest signal. To demonstrate this feature, one of the vehicles was flown over two operating transmitter units located 50 feet apart on the ground. As the Mini-Sniffer passed overhead, the controller of the transmitter nearest the airplane took command from the other controller, with both transmitters broadcasting on the same frequency. With typical model radio-control gear, interference from two simultaneously operating transmitters usually results in loss of control regardless of relative signal strength.[905] A chase truck was used during developmental flights to collect early data on control issues. A controller, called the visual pilot, operated the airplane from the truck bed while observing its response to commands.
Speed and trim curves were plotted based on the truck's speed and a recording of the pilot's inputs. During later flights, a remote pilot controlled the Mini-Sniffer from a chase helicopter. Technicians installed a telemetering system and radar transponder in the airplane so that it could be controlled at altitude from the NASA Mission Control Room at Dryden. Plot boards at the control station displayed position and altitude, airspeed, turn rate, elevator trim, and engine data. A miniature television camera provided a visual reference for the pilot. In most cases, a visual pilot took control for landing while directly observing the airplane from a vantage point adjacent to the landing area. Reed, however, also demonstrated a solo flight, which he controlled unassisted from takeoff to landing.

"I got a bigger thrill from doing this than from my first flight in a light plane as a teenager,” he said, "probably because I felt more was at stake.”[906]


NASA and Supersonic Cruise

William Flanagan

For an aircraft to attain supersonic cruise, or the capability to fly faster than sound for a significant portion of time, the designer must balance lift, drag, and thrust to achieve the performance requirements, which in turn affect the weight. Although supersonic flight was achieved over 60 years ago, successful piloted supersonic cruise aircraft have been rare. NASA has been involved in developing the required technology for those rare designs, despite periodically shifting national priorities.

In the 1930s and early 1940s, investigation of flight at speeds faster than sound began to assume increasing importance, thanks initially to the "compressibility" problems encountered by rapidly rotating propeller tips and then to the dangerous trim changes and buffeting encountered by diving aircraft. Researchers at the National Advisory Committee for Aeronautics (NACA) began to focus on this new and troublesome area. The concept of Mach number (the ratio of a body's speed to the speed of sound in the air at the body's location) swiftly became a familiar term to researchers. At first, the subject seemed heavily theoretical. But then, with the increasing prospect of American involvement in the Second World War, NACA research had to shift to the shorter-term objective of improving American warplane performance, notably by reducing drag and refining the Agency's symmetrical low-drag airfoil sections. With the development of fighter aircraft whose engines produced 1,500 to 2,000 horsepower and which were capable of diving in excess of Mach 0.75, however, supersonic flight became an issue of paramount military importance. Fighter aircraft in steep power-on dives from combat altitudes over 25,000 feet could reach 450 mph, corresponding to Mach numbers over 0.7. Unusual flight characteristics could then manifest themselves, such as severe buffeting, uncommanded increases in dive angle, and unusually high stick forces.
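The Mach-number definition above can be turned into a worked example. The sketch below assumes the International Standard Atmosphere troposphere; whether a quoted wartime dive speed is indicated or true airspeed shifts the result, so the figures are illustrative only.

```python
import math

# ISA constants (assumed standard values)
T0 = 288.15       # sea-level temperature, K
LAPSE = 0.0065    # tropospheric lapse rate, K/m
GAMMA = 1.4       # ratio of specific heats for air
R = 287.05        # specific gas constant for air, J/(kg*K)
FT_TO_M = 0.3048
MPH_TO_MS = 0.44704

def speed_of_sound_ms(alt_ft):
    """ISA speed of sound, valid in the troposphere (below ~36,089 ft)."""
    temp = T0 - LAPSE * alt_ft * FT_TO_M
    return math.sqrt(GAMMA * R * temp)

def mach(speed_mph, alt_ft):
    """Mach number: the body's speed divided by the local speed of sound."""
    return speed_mph * MPH_TO_MS / speed_of_sound_ms(alt_ft)

print(round(mach(450, 25000), 2))  # 450 mph true airspeed at 25,000 ft -> ~0.65
print(round(mach(500, 25000), 2))  # a dive through 500 mph there -> ~0.72
```

Because the speed of sound falls with temperature, the same airspeed corresponds to a higher Mach number at altitude, which is why high-altitude dives were where compressibility trouble first appeared.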

The sleek, twin-engine, high-altitude Lockheed P-38 showed these characteristics early in the war, and a crash effort by the manufacturer, aided by the NACA, showed that although the aircraft was not "supersonic," i.e., flying faster than the speed of sound at its altitude, the airflow at the thickest part of the wing was at that speed, producing shock waves that were unaccounted for in the design of the flight control surfaces. The shock waves were a thin region of high pressure where the supersonic airflow around the body began to slow toward its customary subsonic speed. This shock region considerably increased drag on the vehicle and altered the lift distribution on the wing and control surfaces. An expedient fix, in the form of a dive flap activated by the pilot, was installed on the P-38, but the episode introduced the aviation industry to the concept of a "critical Mach number": the flight speed at which supersonic flow first appears on the wing and fuselage. Newer high-speed, propeller-driven fighters, such as the P-51D with its thin laminar-flow wing, had critical Mach numbers near 0.75, which allowed an adequate combat envelope, but the looming turbojet revolution removed the self-governing speed limit imposed by reduced thrust from supersonic propeller tips. Investigation of supersonic aircraft was no longer a theoretical exercise.[1054]