NASA'S CONTRIBUTIONS TO AERONAUTICS

Opportunities

After the results of the NASA HATP project in 1996 and the F/A-18E/F wing-drop and AWS programs were disseminated, it was widely recognized that computational fluid dynamics had tremendous potential as an additional tool in the designer's toolkit for high-angle-of-attack flight conditions. However, it was also appreciated that the complexity of the physics of flow separation, the enormous computational resources required for accurate predictions, and the fundamental issues regarding representation of key characteristics such as turbulence would be formidable barriers to progress. Even more important, the lack of communication between the experimental test and evaluation community and the CFD community was apparent. More specifically, the T&E community placed its trust in design methods it routinely used for high-angle-of-attack analysis—namely, the wind tunnel and experimental methods. Furthermore, a majority of the T&E engineers were not willing to accept what they regarded as an aggressive "oversell" of CFD capabilities without many examples that the computer could reliably predict aircraft stability and control parameters at high angles of attack. Meanwhile, the CFD community had continued its focus on applications related to aircraft performance, with little or no awareness of the aerodynamic problems faced by the T&E community for high-angle-of-attack predictions. One example of the different cultures of the communities was that a typical CFD expert was used to striving for accuracies within a few percent for performance-related estimates, whereas the T&E analyst was, in many cases, elated to know simply whether parameters at high angles of attack were positive or negative.
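The "positive or negative" mindset can be made concrete with a small sketch. One parameter commonly evaluated this way at high angles of attack is the dynamic directional stability derivative, Cn-beta,dyn; the formula is standard, but its use here, the function names, and the sample derivative values are illustrative assumptions, not data from this chapter.

```python
import math

def cn_beta_dyn(cn_beta, cl_beta, iz_over_ix, alpha_deg):
    """Dynamic directional stability parameter:
    Cn_beta,dyn = Cn_beta*cos(alpha) - (Iz/Ix)*Cl_beta*sin(alpha).
    The T&E analyst often cares chiefly about its sign."""
    a = math.radians(alpha_deg)
    return cn_beta * math.cos(a) - iz_over_ix * cl_beta * math.sin(a)

# Hypothetical sample values: weak weathercock stability (small positive
# Cn_beta), stabilizing dihedral effect (negative Cl_beta), slender-aircraft
# inertia ratio, and a 35-degree angle of attack.
value = cn_beta_dyn(cn_beta=0.001, cl_beta=-0.002, iz_over_ix=8.0, alpha_deg=35.0)
print("directionally stable" if value > 0 else "directionally unstable")
```

In this illustrative case, the stabilizing dihedral effect outweighs the weak weathercock stability, so the analyst would simply record the configuration as directionally stable at that angle of attack.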

Stimulated to bring these two groups together for discussions, Langley conceived a plan for a project known as Computational Methods for Stability and Control (COMSAC), which could potentially spin off focused joint programs to assess, modify, and calibrate computational codes for the prediction of critical aircraft stability and control parameters for high-angle-of-attack conditions.[1328] Many envisioned the start of another HATP-like effort, with similar outlooks for success. In 2004, Langley hosted a COMSAC Workshop, which was well-attended by representatives of the military and civil aviation industries, DOD, and academia. As expected, controversy was widespread regarding the probability of success in applying CFD to high-angle-of-attack stability and control predictions. Stability and control attendees expressed their "show me that it works" philosophy regarding CFD, while the CFD experts were alarmed by the complexity of typical experimental aerodynamic data for high-angle-of-attack flight conditions. Nonetheless, the main objective of establishing communications between the two scientific communities was accomplished, and NASA's follow-on plans for establishing research efforts in this area were eagerly awaited.

Unfortunately, changes in NASA priorities and funding distributions terminated the COMSAC planning activity after the workshop. However, several attendees returned to their organizations to initiate CFD studies to evaluate the ability of existing computer codes to predict stability and control at high angles of attack. Experts at the Naval Air Systems Command have had notable success using the F/A-18E as a test configuration.[1329]

Despite the inability to generate a sustainable NASA research effort to advance the powerful CFD methods for stability and control, the COMSAC experience did inspire other organizations to venture into the area. It appears that such an effort is urgently needed, especially in view of the shortcomings in the design process.

HSR and the Genesis of the Tu-144 Flight Experiments

NASA's High-Speed Research program was initiated in 1990 to investigate a number of technical challenges involved with developing a Mach 2+ High-Speed Civil Transport (HSCT). This followed several years of NASA-sponsored studies in response to a White House Office of Science and Technology Policy call for research into promoting long-range, high-speed aircraft. The speed spectrum for these initial studies spanned the supersonic to transatmospheric regions, and the areas of interest included economic, environmental, and technical considerations. The studies suggested a viable speed for a proposed aircraft in the Mach 2 to Mach 3.2 range, and this led to the conceptual model for the HSR program. The initial goal was to determine if major environmental obstacles—including ozone depletion, community noise, and sonic boom generation—could be overcome. NASA selected the Langley Research Center in Hampton, VA, to lead the effort, but all NASA aeronautics Centers became deeply involved in this enormous program. During this Phase I period, NASA and its industry partners determined that the state of the art in high-speed design would allow mitigation of the ozone and noise issues, but sonic boom alleviation remained a daunting challenge.[1460]

Encouraged by these assessments, NASA began Phase II of the HSR program in 1995 in partnership with Boeing Commercial Airplane Group, McDonnell-Douglas Aerospace, Rockwell North American Aircraft Division, General Electric Aircraft Engines, and Pratt & Whitney. By this time, a baseline concept had emerged for a Mach 2.4 aircraft, known as the Reference H model and capable of carrying 300 passengers nonstop across the Pacific Ocean. A comprehensive list of technical issues was slated for investigation, including sonic boom effects, ozone depletion, aeroacoustics and community noise, airframe/propulsion integration, high lift, and flight deck design. Of high interest to NASA Langley Research Center engineers was the concept of Supersonic Laminar Flow Control (SLFC). Maintaining laminar flow of the supersonic airstream across the wing surface for as long as possible would lead to much higher cruise efficiencies. NASA Langley investigated SLFC using wind tunnel, computational fluid dynamics, and flight-test experiments, including the use of NASA's two F-16XL research aircraft flown at NASA Langley and NASA Dryden Flight Research Centers. Unfortunately, the relatively small size of the unique, swept wing F-16XL led to contamination of the laminar flow by shock waves emanating from the nose and canopy of the aircraft. Clearly, a larger airplane was needed.[1461]

That larger airplane seemed more and more likely to be the Tupolev Tu-144, as proposals emerged from a number of disparate sources and a variety of serendipitous circumstances aligned in the early 1990s to make that a reality. Aware of the HSR program, the Tupolev Aircraft Design Bureau as early as 1990 proposed a Tu-144 as a flying laboratory for supersonic research. In 1992, NASA Langley's Dennis Bushnell discussed with Tupolev this possibility of returning to flight one of the few remaining Tu-144 SSTs as a supersonic research aircraft. Pursuing Bushnell's initial inquiries, Joseph R. Chambers, Chief of Langley's Flight Applications Division, and Kenneth Szalai, NASA's Dryden Flight Research Center Director, developed a formal proposal for NASA Headquarters suggesting the use of a Tu-144 for SLFC research. Szalai discussed this idea with his friend Lou Williams, of the HSR Program Office at NASA Headquarters, who became very interested in the Tu-144 concept. NASA Headquarters had, in the meantime, already been considering using a Tu-144 for HSR research and had contracted Rockwell North American Aircraft Division to conduct a feasibility study. NASA and Tupolev officials, including Ken Szalai, Lou Williams, and Tupolev chief engineer Alexander Pukhov, first directly discussed the details of a joint program at the Paris Air Show in 1993, after Szalai and Williams had requested to meet with Tupolev officials the previous day.[1462] The synergistic force ultimately uniting all of this varied interest was the 1993 U.S.-Russian Joint Commission on Economic and Technological Cooperation. Looking at peaceful means of technological cooperation in the wake of the Cold War, the two former adversaries now pursued programs of mutual interest. Spurred by the Commission, NASA, industry, and Tupolev managers and researchers evaluated the potential benefits of a joint flight experiment with a refurbished Tu-144 and developed a prioritized list of potential experiments. With positive responses from NASA and Tupolev, a cooperative Tu-144 flight research project was initiated and an agreement signed in 1994 in Vancouver, Canada, between Russian Prime Minister Viktor Chernomyrdin and Vice President Al Gore. Ironically, Langley's interest in SLFC was not included in the list of experiments to be addressed in this largest joint aeronautics research project between the two former adversaries.[1463] Ultimately, seven flight experiments were funded and accomplished by NASA, Tupolev, and Boeing personnel (Boeing acquired McDonnell-Douglas and Rockwell's aerospace division in December 1996).
Overcoming large distances, language and political barriers, cultural differences, and even different approaches to technical and engineering problems, these dedicated researchers, test pilots, and technicians accomplished 27 successful test flights in 2 years.

Avionics

Lightning effects on avionics can be disastrous, as illustrated by the account of the loss of AC-67. Composite aircraft with internal radio antennas require fiberglass composite "windows" in the lightning-strike mesh near the antenna. (Fiberglass composites are employed because of their transparency to radio frequencies, unlike carbon fiber.) Lightning protection and avoidance are important for planning and conducting flight tests. Consequently, NASA's development of lightning warning and detection systems has been a priority in furthering fly-by-wire (FBW) systems. Early digital computers in flight control systems encountered conditions in which their processors could be adversely affected by lightning-generated electrical pulses. Subsequently, design processes were developed to protect electronic equipment from lightning strikes. As a study by the North Atlantic Treaty Organization (NATO) noted, such protection is "particularly important on aircraft with composite structures. Although equipment bench tests can be used to demonstrate equipment resistance to lightning strikes and EMP, it is now often considered necessary to perform whole aircraft lightning-strike tests to validate the design and clearance process."[173]

Celeste M. Belcastro of Langley contrasted laboratory, ground-based, and in-flight testing of electromagnetic environmental effects, noting:

Laboratory tests are primarily open-loop and static at a few operating points over the performance envelope of the equipment and do not consider system level effects. Full-aircraft tests are also static with the aircraft situated on the ground and equipment powered on during exposure to electromagnetic energy. These tests do not provide a means of validating system performance over the operating envelope or under various flight conditions. . . .

The assessment process is a combination of analysis, simulation, and tests and is currently under development for demonstration at the NASA Langley Research Center. The assessment process is comprehensive in that it addresses (i) closed-loop operation of the controller under test, (ii) real-time dynamic detection of controller malfunctions that occur due to the effects of electromagnetic disturbances caused by lightning, HIRF, and electromagnetic interference and incompatibilities, and (iii) the resulting effects on the aircraft relative to the stage of flight, flight conditions, and required operational performance.[174]

A prime example of full-system assessment is the F-16 Fighting Falcon, nicknamed "the electric jet" because of its fly-by-wire flight control system. Like any operational aircraft, F-16s have received lightning strikes, the effects of which demonstrate FCS durability. Anecdotal evidence within the F-16 community contains references to multiple lightning strikes on multiple aircraft—as many as four at a time in close formation. In another instance, the leader of a two-plane section was struck, and the bolt leapt from his wing to the wingman's canopy.

Aircraft are inherently sensor and weapons platforms, and so the lightning threat to external ordnance is serious and requires examination. In 1977, the Air Force conducted tests on the susceptibility of AIM-9 missiles to lightning strikes. The main concern was whether the Sidewinders, mounted on wingtip rails, could attract strokes that could enter the airframe via the missiles. The evaluators concluded that the optical dome of the missile was vulnerable to simulated lightning strikes even at moderate currents. The AIM-9's dome was shattered, and burn marks were left on the zinc-coated fiberglass housing. However, there was no evidence of internal arcing, and the test concluded that "it is unlikely that lightning will directly enter the F-16 via AIM-9 missiles."[175] Quite clearly, lightning had the potential of damaging the sensitive optics and sensors of missiles, thus rendering an aircraft impotent. With the increasing digitization and integration of electronic engine controls, in addition to airframes and avionics, engine management systems are now a significant area for lightning resistance research.

National Aviation Operations Monitoring Service

A further contribution to the Aviation Safety Monitoring and Modeling project provided yet another method for gathering data and crunching numbers in the name of making the Nation's airspace safer amid increasingly crowded skies. Whereas the Aviation Safety Reporting System involved volunteered safety reports and the Performance Data Analysis and Reporting System took its input in real time from digital data sources, the National Aviation Operations Monitoring Service was a scientifically designed survey of the aviation community to generate statistically valid reports about the number and frequency of incidents that might compromise safety.[242]

After a survey was developed that would gather credible data from anonymous volunteers, an initial field trial of the NAOMS was held in 2000, followed by the launch of the program in 2001. Initially, the surveyors only sought out air carrier pilots, who were randomly chosen from the FAA Airman's Medical Database. Researchers characterized the response to the NAOMS survey as enthusiastic. Between April 2001 and December 2004, nearly 30,000 pilot interviews were completed, with a remarkable 83-percent return rate, before the project ran short of funds and had to stop. The level of response was enough to achieve statistical validity and prove that NAOMS could be used as a permanent tool for managers to assess the operational health of the ATC system and suggest changes before they were actually needed. Although NASA and the FAA desired that the project continue, it was shut down on January 31, 2008.[243]

It's worth mentioning that the NAOMS briefly became the subject of public controversy in 2007, when NASA received a Freedom of Information Act request from a reporter for the data obtained in the NAOMS survey. NASA denied the request, using language that then-NASA Administrator Mike Griffin said left an "unfortunate impression" that the Agency was not acting in the best interest of the public. NASA eventually released the data after ensuring the anonymity originally guaranteed to those who were surveyed. In a January 14, 2008, letter from Griffin to all NASA employees, the Administrator summed up the experience by writing: "As usual in such circumstances, there are lessons to be learned, remembered, and applied. The NAOMS case demonstrates again, if such demonstrations were needed, the importance of peer review, scientific integrity, admitting mistakes when they are made, correcting them as best we can, and keeping our word, despite the criticism that can ensue."[244]

The Science of Human Factors

To be clear, however, NASA did not invent the science of human factors. Not only had the term been in use long before NASA existed, but the concept it describes has existed since the beginning of mankind. Human factors research encompasses nearly all aspects of science and technology and therefore has been described with several different names. In simplest terms, human factors studies the interface between humans and the machines they operate. One of the pioneers of this science, Dr. Alphonse Chapanis, provided a more inclusive and descriptive definition: "Human factors discovers and applies information about human behavior, abilities, limitations, and other characteristics to the design of tools, machines, systems, tasks, jobs, and environments for productive, safe, comfortable, and effective human use."[292] The goal of human factors research, therefore, is to reduce error while increasing productivity, safety, and comfort in the interaction between humans and the tools with which they work.[293]

As already suggested, the study of human factors involves a myriad of disciplines. These include medicine, physiology, applied psychology, engineering, sociology, anthropology, biology, and education.[294] These in turn interact with one another and with other technical and scientific fields as they relate to behavior and usage of technology. Human factors issues are also described by many similar—though not necessarily synonymous—terms, such as human engineering, human factors engineering, human factors integration, human systems integration, ergonomics, usability, engineering psychology, applied experimental psychology, biomechanics, biotechnology, man-machine design (or integration), and human-centered design.[295]

Automation Design

Automation technology is an important factor in helping aircrew members to perform more wide-ranging and complicated cockpit activities. NASA engineers and psychologists have long been actively engaged in developing automated cockpit displays and other technologies.[400] These will be essential to pilots in order for them to safely and effectively operate within a new air traffic system being developed by NASA and others, called Free Flight. This system will use technically advanced aircraft computer systems to reduce the need for air traffic controllers and allow pilots to choose their path and speed, while allowing the computers to ensure proper aircraft separation. It is anticipated that Free Flight will in the upcoming decades become incorporated into the Next Generation Air Transportation System.[401]

Out of the Box: V/STOL Configurations

International interest in Vertical Take-Off and Landing (VTOL) and Vertical/Short Take-Off and Landing (V/STOL) configurations escalated during the 1950s and persisted through the mid-1960s, with a huge number of radical propulsion/aircraft combinations proposed and evaluated throughout industry, DOD, the NACA, and NASA. The configurations included an amazing variety of propulsion concepts to achieve hovering flight and the conversion to and from conventional forward flight. However, all these aircraft concepts were plagued with common issues regarding stability, control, and handling qualities.[476]

The first VTOL nonhelicopter concept to capture the interest of the U.S. military was the vertical-attitude tail-sitter concept. In 1947, the Air Force and Navy initiated an activity known as Project Hummingbird, which requested design approaches for VTOL aircraft. At Langley, discussions with Navy managers led to exploratory NACA free-flight studies in 1949 of simplified tail-sitter models to evaluate stability and control during hovering flight. Conducted in a large open area within a building, powered-model testing enabled researchers to explore the dynamic stability and control of such configurations.[477] The test results provided valuable information on the relative severity of unstable oscillations encountered during hovering flight. The instabilities in roll and pitch were caused by aerodynamic interactions of the propeller during forward or sideward translation, but the period of the growing oscillations was sufficiently long to permit relatively easy control. The model flight tests also provided guidance regarding the level of control power required for satisfactory maneuvering during hovering flight.

Navy interest in the tail-sitter concept led to contracts for the development of the Consolidated-Vultee (later Convair) XFY-1 "Pogo" and the Lockheed XFV-1 "Salmon" tail-sitter aircraft in 1951. The Navy asked Langley to conduct dynamic stability and control investigations of both configurations using its free-flight model test techniques. In 1952, hovering flights of the Pogo were conducted within the huge return passage of the Langley Full-Scale Tunnel, followed by transition flights from hovering to forward flight in the tunnel test section during a brief break in the tunnel's busy test schedule.[478] Observed by Convair personnel (including the XFY-1 test pilot), the flight tests provided encouragement and confidence to the visitors and the Navy.

Without doubt, the most successful NASA application of free-flight models for VTOL research was in support of the British P.1127 vectored-thrust fighter program. As the British Hawker Aircraft Company matured its design of the revolutionary P.1127 in the late 1950s, Langley's senior manager, John P. Stack, became a staunch supporter of the activity and directed that tests in the 16-Foot Transonic Tunnel and free-flight research activities in the Full-Scale Tunnel be used for cooperative development work.[479]

In response to the directive, a one-sixth-scale free-flight model was flown in the Full-Scale Tunnel to examine the hovering and transition behavior of the design. Results of the free-flight tests, witnessed by Hawker staff members including the test pilot slated to conduct the first transition flights, were very impressive. The NASA researchers regarded the P.1127 model as the most docile V/STOL configuration ever flown during their extensive experience with free-flight VTOL designs. As was the case for many free-flight model projects, the motion-picture segments showing successful transitions from hovering to conventional flight in the Full-Scale Tunnel were a powerful influence in convincing critics that the concept was feasible. In this case, the model flight demonstrations helped sway a doubtful British government to fund the project. Refined versions of the P.1127 design were subsequently developed into today's British Harrier and Boeing AV-8 fighter/attack aircraft.

The NACA and NASA also conducted pioneering free-flight model research on tilt wing aircraft for V/STOL missions. In the early 1950s, several generic free-flight propeller-powered models were flown to evaluate some of the stability and control issues that were anticipated to limit the feasibility of the concept.[480] The fundamental principle used by the tilt wing concept to convert from hovering to forward flight involves reorienting the wing from a vertical position for takeoff to a conventional position for forward flight. However, this simple conversion of the wing angle relative to the fuselage brings major challenges. For example, the wing experiences large changes in its angle of attack relative to the flight path during the transition, and areas of wing stall may be encountered during the maneuver. The asymmetric loss of wing lift during stall can result in wing-dropping, wallowing motions, and uncommanded transient maneuvers. Therefore, the wing must be carefully designed to minimize or eliminate flow separation that would otherwise result in degraded or unsatisfactory stability and control characteristics. Extensive wind tunnel and flight research on many generic NACA and NASA models, as well as the Hiller X-18, Vertol VZ-2, and Ling-Temco-Vought XC-142A tilt wing configurations at Langley, included a series of free-flight model tests in the Full-Scale Tunnel.[481]

Coordinated closely with full-scale flight tests, the model testing initially focused on providing early information on dynamic stability and the adequacy of control power in hovering and transition flight for the configurations. However, all projects quickly encountered the anticipated problem of wing stall, especially in reduced-power descending flight maneuvers. Tilt wing aircraft depend on the high-energy slipstream of large propellers to prevent local wing stall by reducing the effective angle of attack across the wingspan. For reduced-power conditions, which are required for steep descents to accomplish short-field missions, the energy of the slipstream is severely reduced, and wing stall is experienced. Large uncontrolled dynamic motions may be exhibited by the configuration for such conditions, and the undesirable motions can limit the descent capability (or safety) of the airplane. Flying model tests provided valuable information on the acceptability of uncontrolled motions such as wing dropping and lateral-directional wallowing during descent, and the test technique was used to evaluate the effectiveness of aircraft modifications such as wing flaps or slats, which were ultimately adopted by full-scale aircraft such as the XC-142A.
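The slipstream mechanism lends itself to a simple momentum-theory sketch. The geometry, function names, and numbers below are illustrative assumptions, not taken from the chapter: with the propeller axis taken along the wing chord, adding slipstream velocity along the chord rotates the local flow vector toward the chord line and lowers the local angle of attack, while reducing power for a steep descent removes that protection.

```python
import math

def local_wing_alpha_deg(v_flight, wing_alpha_deg, dv_slipstream):
    """Local angle of attack of a slipstream-washed wing section.
    v_flight: flight speed; wing_alpha_deg: wing angle to the flight path;
    dv_slipstream: slipstream velocity increment along the chord."""
    a = math.radians(wing_alpha_deg)
    # Velocity components along (vx) and normal to (vz) the wing chord.
    vx = v_flight * math.cos(a) + dv_slipstream
    vz = v_flight * math.sin(a)
    return math.degrees(math.atan2(vz, vx))

# Hypothetical mid-transition condition: 40 kt flight speed, wing tilted
# 60 deg to the flight path.
print(local_wing_alpha_deg(40.0, 60.0, 80.0))  # high power, roughly 19 deg
print(local_wing_alpha_deg(40.0, 60.0, 10.0))  # reduced power, roughly 49 deg
```

At high power the slipstream keeps the local angle of attack well below the stall, but at the reduced power needed for a steep descent the local angle of attack rises sharply, consistent with the wing stall problem the model tests explored.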

As the 1960s drew to a close, the worldwide engineering community began to appreciate that the weight and complexity required for VTOL missions presented significant penalties in aircraft design. It therefore turned its attention to the possibility of providing less demanding STOL capability with fewer penalties, particularly for large military transport aircraft. Langley researchers had begun to explore methods of using propeller or jet exhaust flows to induce additional lift on wing surfaces in the 1950s, and although the magnitude of lift augmentation was relatively high, practical propulsion limitations stymied the application of most concepts.

A particularly promising concept known as the externally blown flap (EBF) used the redirected jet engine exhausts from conventional pod-mounted engines to induce additional circulation lift at low speeds for takeoff and landing.[482] However, the relatively hot exhaust temperatures of turbojets of the 1950s were much too high for structural integrity and feasible applications. Nonetheless, Langley continued to explore and mature such ideas, known as powered-lift concepts. These research studies embodied conventional powered model tests in several wind tunnels, including free-flight investigations of the dynamic stability and control of multiengine EBF configurations in the Full-Scale Tunnel, with emphasis on providing satisfactory lateral control and lateral-directional trim after the failure of an engine. Other powered-lift concepts were also explored, including the upper-surface-blowing (USB) configuration, in which the engine exhaust is directed over the upper surface of the wing to induce additional circulation and lift.[483] Advantages of this approach included potential noise shielding and flow-turning efficiency.

While Langley continued its fundamental research on EBF and USB configurations, in the early 1970s an enabling technology leap occurred with the introduction of turbofan engines, which inherently produce relatively cool exhaust fan flows.[484] The turbofan was the perfect match for these STOL concepts, and industry's awareness and participation in the basic NASA research program matured the state of the art for design data for powered-lift aircraft. The free-flight model results, coupled with NASA piloted simulator studies of full-scale aircraft STOL missions, helped provide the fundamental knowledge and data required to reduce risk in development programs. Ultimately applied to the McDonnell-Douglas YC-15 and Boeing YC-14 prototype transports in the 1970s and to today's Boeing C-17, the EBF and USB concepts were the result of over 30 years of NASA research and development, including many valuable studies of free-flight models in the Full-Scale Tunnel.[485]

John P. Campbell, Jr., left, inventor of the externally blown flap, and Gerald G. Kayten of NASA Headquarters pose with a free-flight model of an STOL configuration at the Full-Scale Tunnel. Slotted trailing-edge flaps were used to deflect the exhaust flows of turbofan engines. Campbell was awarded a patent for his invention. NASA.

General-Aviation Configurations

As part of its General Aviation Spin Research program in the 1970s, Langley included the development of a testing technique using powered radio-controlled models to study spin resistance, spin entry, and spin recovery during the incipient phase of the spin.[521] Equally important was a focus on developing a reliable, low-cost model testing technique that could be used by the industry for spin predictions in early design stages. The dynamically scaled models, which were about 1/5-scale (wingspan of about 4-5 feet), were powered and flown with hobby equipment. Although resembling conventional radio-control models flown by hobbyists, the scaling process discussed earlier resulted in models that were much heavier (about 15-20 pounds) than conventional hobby models (about 6-8 pounds).
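The weight difference follows from the dynamic (Froude-number) scaling relations commonly used for free-flight models. The relations are standard, but the sample airplane below is a hypothetical illustration, not a configuration from the program:

```python
import math

def dynamically_scaled_model(full_weight_lb, full_speed_kt, n):
    """Froude-number dynamic scaling for a 1/n-scale model tested at the
    same air density: weight scales as 1/n^3, speed as 1/sqrt(n).
    Returns (model_weight_lb, model_speed_kt)."""
    return full_weight_lb / n**3, full_speed_kt / math.sqrt(n)

# Hypothetical light airplane: 2,000 lb, flying near 55 kt; 1/5-scale model.
weight_lb, speed_kt = dynamically_scaled_model(2000.0, 55.0, 5)
print(f"model weight: {weight_lb:.0f} lb, model speed: {speed_kt:.1f} kt")
```

A 2,000-pound airplane thus scales to a 16-pound model, consistent with the 15- to 20-pound weights quoted above and noticeably heavier than a hobby model of similar size built without regard to dynamic similarity.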

The radio-controlled model activities in the Langley program consisted of three distinct phases. Initially, model testing and analysis was directed at producing timely data for correlation with spin tunnel and full-scale flight results, to establish the accuracy of the model results in predicting spin and recovery characteristics and to gain experience with the testing technique. The second phase of the radio-controlled model program involved assessments of the effectiveness of NASA-developed wing leading-edge modifications to enhance the spin resistance of several general-aviation configurations. The focus of this research was a concept consisting of a drooped leading edge on the outboard wing panel with a sharp discontinuity at the inboard edge of the droop. The third phase of radio-controlled model testing involved cooperative studies of specific general-aviation designs with industry. In this segment of the program, studies centered on industry's assessment of the radio-controlled model technique.

Direct correlation of results for radio-controlled model tests and full-scale airplane results for a low wing NASA configuration was very good, especially with regard to susceptibility of the design to enter a fast, flat spin with poor or no recovery.[522] In addition, the effects of various control input strategies agreed very well. For example, with normal pro-spin controls and any use of ailerons, the radio-controlled model and the airplane were both reluctant to enter the flat spin mode that had been predicted by spin tunnel tests; they only exhibited steeper spins from which recovery could still be accomplished. Subsequently, the test pilot and flight-test engineers of the full-scale airplane developed a unique control scheme during spin tests that would aggravate the steeper spin and propel the airplane into a flat spin requiring the emergency parachute for recovery. When a similar control technique was used on the radio-controlled model, it too would enter the flat spin, likewise requiring its parachute for recovery.

Some of the more impressive results of the radio-controlled model program for the low wing configuration related to the ability of the model to demonstrate effects of the discontinuous leading-edge droop concept that had been developed by Langley for improved spin resistance.[523] Several wing-leading-edge droop configurations had been derived in wind tunnel tests with the objective of delaying wing autorotation and spin entry to high angles of attack. Tests with the radio-controlled model when modified with a full-span droop indicated better stall characteristics than the basic configuration did, but the resistance of the model to entering the unrecoverable flat spin was significantly degraded. The flat spin could be obtained on virtually every flight if pro-spin controls were maintained beyond about three turns after stall.

In contrast to this result, when the discontinuous droop was applied to the outer wing, the model would enter a very steep spin from which recovery could be obtained by simply neutralizing controls. When the discontinuity on the inboard edge of the droop was faired over, the model reverted to the same characteristics that had been displayed with the full-span droop and could easily be flown into the flat spin. Correlation between the radio-controlled model and aircraft results in this phase of the project was outstanding. The agreement was particularly noteworthy in view of the large differences between the model and full-scale flight Reynolds numbers. All of the important stall/spin characteristics displayed by the low wing, radio-controlled model with the full-span droop configuration and the outboard droop configuration (with and without the fairing on the discontinuous juncture) were nearly identical to those exhibited by the full-scale aircraft, including stall characteristics, spin modes, spin resistance, and recovery characteristics.[524]

While researchers were pursuing the technical objectives of the radio-controlled model program, an effort was directed at developing test techniques that might be used by industry for relatively low-cost testing. Innovative instrumentation techniques were developed that used relatively inexpensive hobby-type onboard sensors to measure control positions, angle of attack, airspeed, angular rates, and other variables. Data output from the sensors was transmitted to a low-cost ground-based data acquisition station by modifying a conventional seven-channel radio-control model transmitter. The ground station consisted of separate receivers for monitoring angle of attack, angle of sideslip, and control commands. The receivers operated servos to drive potentiometers, whose signals were recorded on an oscillograph recorder. Tracking equipment and cameras were also developed. Other facets of the test technique development included the design and operational deployment of emergency spin recovery parachutes for the models.

One particularly innovative testing technique demonstrated by NASA in the radio-controlled model flight programs was the use of miniature auxiliary rockets mounted on the wingtips of models to artificially promote flat spins. This approach was particularly useful in determining the potential existence of dangerous flat spins that were difficult to enter from conventional flight. In this application, the pilot remotely ignited one of the rockets during a spin entry, resulting in extremely high spin rates and a transition to very high angles of attack and flat-spin attitudes. After the "spin up” maneuver was complete, the rocket thrust subsided, and the model either remained in a stable flat spin or pitched down to a steeper spin mode. Beech Aircraft used this technique in its subsequent applications to radio-controlled models.

General-aviation manufacturers maintained a close liaison with Langley researchers during the NASA stall/spin program, absorbing data produced by the coordinated testing of models and full-scale aircraft. The radio-controlled testing technique was of great interest, and following frequent interactions with Langley’s test team, industry conducted its own evaluations of radio-controlled models for spin testing. In the mid-1970s, Beech Aircraft conducted radio-controlled testing of its T-34 trainer aircraft, the Model 77 Skipper trainer, and the twin-engine Model 76 Duchess.[525] Piper Aircraft also conducted radio-controlled model testing to explore the spin entry, developed spin, and recovery characteristics of a light twin-engine configuration.[526] Later, in the 1980s, a joint program was conducted with the DeVore Aviation Corporation to evaluate the spin resistance of a model of a high wing trainer design that incorporated the NASA-developed leading-edge droop concept.[527]

As a result of these cooperative ventures, industry obtained valuable experience in model construction techniques, spin recovery parachute system technology, methods of measuring moments of inertia and scaling engine thrust, the cost and time required to conduct such programs, and correlation with full-scale flight-test results.
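Free-flight spin models of this kind are typically built to dynamic (Froude) similarity so that model motions can be converted to full-scale predictions. The sketch below lists the standard Froude scaling relations; the 1/5 scale factor and the 2,000-pound airplane are hypothetical illustrations, not values from the NASA or industry programs described in the text.

```python
import math

# Standard Froude (dynamic) scaling relations for free-flight spin models.
# Hypothetical example values; not data from the programs described above.

def froude_scale(n, rho_ratio=1.0):
    """Multipliers converting full-scale quantities to model scale for a
    1/n-scale dynamically similar model. rho_ratio is air density at the
    model test altitude divided by density at the full-scale altitude."""
    return {
        "length": 1 / n,
        "mass": rho_ratio / n**3,              # scales with volume
        "moment_of_inertia": rho_ratio / n**5,
        "velocity": 1 / math.sqrt(n),
        "time": 1 / math.sqrt(n),
        "angular_rate": math.sqrt(n),          # model spins sqrt(n) times faster
        "thrust": rho_ratio / n**3,            # scales like weight
    }

# Hypothetical 1/5-scale model of a 2,000-lb light airplane.
s = froude_scale(5)
print(f"model weight: {2000 * s['mass']:.0f} lb")         # -> 16 lb
print(f"spin-rate multiplier: {s['angular_rate']:.2f}x")  # -> 2.24x
```

Because the model spins faster than the airplane in scaled time, observed model spin rates and recovery turns must be converted through these factors before comparison with full-scale flight results.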

The Future of the Tunnel in the Era of CFD

A longstanding flaw with wind tunnels was the aerodynamic interference caused by the "sting,” or the connection between the model and the test instrumentation. Researchers around the world experimented with magnetic suspension systems beginning in the late 1950s. Langley, in collaboration with the AEDC, constructed the 13-Inch Magnetic Suspension and Balance System (MSBS). The transparent test section measured about 12.6 inches high and 10.7 inches wide. Five powerful electromagnets installed in the test section suspended the model and provided lift, drag, side forces, and pitching and yaw moments. Control of the iron-cored model over these five axes removed the need for a model support. The lift force of the system enabled the suspension of a 6-pound iron-cored model. The rest of the tunnel was conventional: a continual-flow, closed-throat, open-circuit design capable of speeds up to Mach 0.5.[614]

When the 13-Inch MSBS became operational in 1965, NASA used the tunnel for wake studies and general research. Persistent problems with the system led to its closing in 1970. New technology and renewed interest revived the tunnel in 1979, and it ran until the early 1990s.[615]

NASA’s work on magnetic suspension and balance systems led, in 1971, to a newfound interest in a wind tunnel capable of generating cryogenic test temperatures. Testing a model at temperatures below -150 °F theoretically permitted an increase in Reynolds number. There was a precedent for a cryogenic wind tunnel: R. Smelt at the Royal Aircraft Establishment at Farnborough had conducted an investigation into the use of airflow at cryogenic temperatures in a wind tunnel. His work revealed that a cryogenic wind tunnel could be reduced in size and required less power as compared with a similar ambient-temperature wind tunnel operated at the same pressure, Mach number, and Reynolds number.[616]

The state of the art in cooling techniques and structural materials required to build a cryogenic tunnel did not exist in the 1940s. American and European interest in the development of a transonic tunnel that generated high Reynolds numbers, combined with advances in cryogenics and structures in the 1960s, revived interest in Smelt’s findings. A team of Langley researchers led by Robert A. Kilgore initiated a study of the viability of a cryogenic wind tunnel. The first experiment with a low-speed tunnel during summer 1972 resulted in an extension of the program into the transonic regime. Kilgore and his team began design of the tunnel in December 1972, and the Langley Pilot Transonic Cryogenic Tunnel became operational in September 1973.[617]

The pilot tunnel was a continual-flow, fan-driven tunnel with a slotted octagonal test section, 0.3 meters (1 foot) across the flats, and was constructed almost entirely of aluminum alloy. The normal test medium was gaseous nitrogen, but air could be used at ambient temperatures. The experimental tunnel provided true simulation of full-scale transonic Reynolds numbers (up to 100 × 10⁶ per foot) from Mach 0.1 to 0.9 and was a departure from conventional wind tunnel design. The key was decreasing the air temperature, which increased the density in the numerator of the Reynolds number and decreased the viscosity in its denominator. The result was the simulation of full-scale flight conditions at transonic speeds with great accuracy.[618]
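The temperature effect described above can be sketched numerically. The snippet below is an illustrative estimate, not NASA’s actual analysis: it assumes ideal-gas density, Sutherland’s-law viscosity for nitrogen using standard handbook constants, and a hypothetical 150 K cryogenic test point compared with room temperature at the same pressure and Mach number.

```python
import math

# Illustrative sketch of why cooling the test gas raises Reynolds number
# at fixed pressure, Mach number, and model size. Gas constants are
# standard handbook values for nitrogen; the 150 K test point is assumed.

R_N2 = 296.8   # specific gas constant for N2, J/(kg*K)
GAMMA = 1.4    # ratio of specific heats

def viscosity(T):
    """Dynamic viscosity of N2 via Sutherland's law, Pa*s."""
    mu_ref, T_ref, S = 1.663e-5, 273.0, 107.0
    return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

def reynolds_per_meter(p, T, mach):
    """Unit Reynolds number rho*V/mu at pressure p (Pa), temperature T (K)."""
    rho = p / (R_N2 * T)                     # ideal-gas density rises as T falls
    V = mach * math.sqrt(GAMMA * R_N2 * T)   # flow speed at the given Mach number
    return rho * V / viscosity(T)

p, mach = 101_325.0, 0.8
re_ambient = reynolds_per_meter(p, 300.0, mach)  # room temperature
re_cryo = reynolds_per_meter(p, 150.0, mach)     # cryogenic, about -190 F
print(f"Re/m at 300 K: {re_ambient:.3g}")
print(f"Re/m at 150 K: {re_cryo:.3g}")
print(f"gain from cooling: {re_cryo / re_ambient:.2f}x")
```

Under these assumptions, halving the absolute temperature alone yields roughly a 2.5-fold gain in unit Reynolds number; the actual facilities compounded this with increased operating pressure.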

Kilgore and his team’s work generated fundamental conclusions about cryogenic tunnels. First, cooling with liquid nitrogen was practical at the power levels required for transonic testing. It was also simple to operate. Researchers could predict accurately the amount of time required to cool the tunnel, a basic operational parameter, and the amount of liquid nitrogen needed for testing. Through the use of a simple liquid nitrogen injection system, tunnel personnel could control and evenly distribute the temperature. Finally, the cryogenic tunnel was quieter than an identical tunnel operating at ambient temperature. The experiment was such a success and generated such promising results that NASA reclassified the temporary tunnel as a "permanent” facility and renamed it the 0.3-Meter Transonic Cryogenic Tunnel (TCT).[619]


The 0.3-Meter Transonic Cryogenic Tunnel. NASA.

After 6 years of operation, NASA researchers shared their experiences at the First International Symposium on Cryogenic Wind Tunnels at the University of Southampton, England, in 1979. Their operation of the 0.3-Meter TCT demonstrated that there were no insurmountable problems associated with a variety of aerodynamic tests with gaseous nitrogen at transonic Mach numbers and high Reynolds numbers. The team found that the injection of liquid nitrogen into the tunnel circuit to induce cryogenic cooling caused no problems with temperature distribution or dynamic response characteristics. Not everything, however, was known about cryogenic tunnels. There would be a significant learning process, which included the challenges of tunnel control, run logic, economics, instrumentation, and model technology.[620]

Developments in computer technology in the mid-1980s allowed continual improvement in transonic data collection in the 0.3-Meter TCT, which alleviated a long-term problem with all wind tunnels. The walls, floor, and ceiling of all tunnels provided artificial constraints on flight simulation. The installation of computer-controlled adaptive, or "smart,” tunnel walls in March 1986 lessened airflow disturbances because they allowed the addition or expulsion of air through the expansion and contraction along the length, width, and height of the tunnel walls. The result was a more realistic simulation of an aircraft flying in the open atmosphere. The 0.3-Meter TCT’s computer system also automatically tailored Mach number, pressure, temperature, and angle of attack to a specific test program and monitored the drive, electrical, lubrication, hydraulic, cooling, and pneumatic systems for dangerous leaks and failures. The success of the 0.3-Meter TCT led to further investigation of smart walls at Langley and Lewis.[621]

NASA’s success with the 0.3-Meter Transonic Cryogenic Tunnel led to the creation of the National Transonic Facility (NTF) at Langley. Both NASA and the Air Force had been considering the construction of a large transonic wind tunnel. NASA proposed a larger cryogenic tunnel, and the Air Force wanted a Ludwieg-tube tunnel. The Federal Government decided in 1974 to fund a facility to meet commercial, military, and scientific needs based on NASA’s pioneering operation of the cryogenic tunnel. Contractors built the tunnel on the site of the 4-Foot Supersonic Pressure Tunnel and incorporated the old tunnel’s drive motors, support buildings, and cooling towers.[622]

Becoming operational in 1983, the NTF was a high-pressure, cryogenic, closed-circuit wind tunnel with a Mach number range from 0.1 to 1.2 and a Reynolds number range of 4 × 10⁶ to 145 × 10⁶ per foot. It featured a 2.5-meter test section with 12 slots and 14 reentry flaps in the ceiling and floor. Langley personnel designed a drive system to include a fan with variable inlet guide vanes for precise Mach number control. Injected as super-cold liquid and evaporated into a gas, nitrogen is the primary test medium. Air is the test gas in the ambient temperature mode, while a heat exchanger maintains the tunnel temperature. Thermal insulation of the tunnel’s pressure shell ensured minimal energy consumption. The NTF continues to be one of Langley’s more advanced facilities as researchers evaluate the stability and control, cruise performance, stall buffet onset, and aerodynamic configurations of model aircraft and airfoil sections.[623]

The movement toward the establishment of national aeronautical facilities led NASA to expand the operational flexibility of the highly successful subsonic 40- by 80-foot wind tunnel at Ames Research Center. A major renovation project added an additional 80- by 120-foot test section capable of testing a full-size Boeing 737 airliner, making it the world’s largest wind tunnel. A central drive system featuring fans almost 4 stories tall and electric motors capable of generating 135,000 horsepower created the airflow for both sections; movable vanes directed the air through either section. The 40- by 80-foot test section acted as a closed circuit up to 345 mph. The air driven through the 80- by 120-foot test section traveled up to 115 mph before exhausting into the atmosphere. Each section incorporated a range of model supports to facilitate a variety of experiments. The two sections became operational in 1987 (40- by 80-foot) and 1988 (80- by 120-foot). NASA christened the tunnel the National Full-Scale Aerodynamics Complex (NFAC) at Ames Research Center.[624]


A Pathfinder I advanced transport model being prepared for a test in the super-cold nitrogen and high-pressure environment of the National Transonic Facility (NTF) in 1986. NASA.

Spin Research

One of the areas of greatest interest has been that of spin behavior. When an airplane stalls, it may enter a spin, typically following a steeply descending flightpath accompanied by a rotational motion (sometimes with other rolling and pitching motions) that is highly disorienting to a pilot. Depending on the dynamics of the entry and the design of the aircraft, a spin may be easily recoverable, difficult to recover from, or irrecoverable. Spins were a killer in the early days of aviation, when their onset and recovery phenomena were imperfectly understood, and they have remained a dangerous problem ever since.[840] Using specialized vertical spin tunnels, the NACA, and later NASA, undertook extensive research on aircraft spin performance, looking at the dynamics of spins, the inertial characteristics of aircraft, the influence of aircraft design (such as tail placement and volume), corrective control input, and the like.[841]

As noted, spins have remained an area of concern as aviation has progressed because of the strong influence of aircraft configuration upon spin behavior. During the early jet age, for example, the coupled motion dynamics of high-performance, low-aspect-ratio, high-fineness-ratio jet fighters triggered intense interest in their departure and spin characteristics, which differed significantly from those of earlier aircraft because their mass was now distributed primarily along the longitudinal, not the lateral, axis of the aircraft.[842] Because spins were not a normal part of GA flying operations, GA pilots often lacked the skills to recognize and cope with spin onset, and GA aircraft themselves were often inadequately designed to deal with out-of-balance or out-of-trim conditions that might force a spin entry. If encountered at low altitude, such as on approach to landing, the consequences could be disastrous. Indeed, landing accidents accounted for more than half of all GA accidents, and of these, as one NASA document noted, "the largest single factor in General Aviation fatal accidents is the stall/spin.”[843]

The Flight Research Center’s 1966 study of the comparative handling qualities and behavior of a range of GA aircraft had underscored the continuing need to study stall-spin behavior. Accordingly, in the 1970s, NASA devoted particular attention to studying GA spins (and continued studying the spins of high-performance aircraft as well), marking "the most progressive era of NASA stall/spin research for general aviation configurations.”[844] Langley researchers James S. Bowman, Jr.; James M. Patton, Jr.; and Sanger M. Burk oversaw a broad program of stall/spin research. They and other investigators evaluated tail location and its influence upon spin recovery behavior using both spin-tunnel models[845] and free-flight tests of radio-controlled models and actual aircraft at the Wallops Flight Center, on the Virginia coast of the Delmarva Peninsula.[846] Between 1977 and 1989, NASA instrumented and modified four aircraft of differing configuration for spin research: an experimental low-wing Piper design with a T-tail, a Grumman American AA-1 Yankee modified so that researchers could evaluate three different horizontal tail positions, a low-wing Beech Sundowner equipped with wingtip rockets to aid in stopping spin rotation, and a high-wing Cessna C-172. Overall, the tests revealed the critical importance of designers ensuring that the vertical fin and rudder of their new GA aircraft be in active airflow during a spin, so as to ensure their effectiveness in spin recovery. To do that, the horizontal tail needed to be located in such a position on the aft fuselage or fin as not to shield the vertical fin and rudder from active flow. The program was not without danger and incident. Mission planners prudently equipped the four aircraft with an emergency 10.5-foot-diameter spin-recovery parachute. Over that time, the ‘chute had to be deployed on 29 occasions when the test aircraft entered unrecoverable spins; each of the four aircraft deployed the ‘chute at least twice, a measure of the risk inherent in stall-spin testing.[847]


Aircraft entering wake vortex flow encountered a series of dangers, ranging from upset to structural failure, depending on their approach to the turbulent flow. From NASA SP-409 (1977).

NASA’s work in stall-spin research has continued, but at a lower level of effort than in the heyday of the late 1970s and 1980s, reflecting changes in the Agency’s research priorities but also the fact that NASA’s work had materially aided the understanding of spins and hence had influenced the data and experience base available to designers shaping the GA aircraft of the future. As well, the widespread advent of electronic flight controls and computer-aided flight has dramatically improved spin behavior. Newer designs exhibit a degree of flying ease and safety unknown to earlier generations of GA aircraft. This does not mean that the spin is a danger of the past—only that it is under control. In the present and future, as in the past, ensuring that GA aircraft have safe stall/spin behavior will continue to require high-order analysis, engineering, and test.