
Birthing the Testing Techniques

The development and use of free-flying model techniques within the NACA originated in the 1920s at the Langley Memorial Aeronautical Laboratory at Hampton, VA. The early efforts were stimulated by concerns over a critical lack of understanding and design criteria for methods to improve aircraft spin behavior.[441] Although early aviation pioneers had frequently used flying models to demonstrate concepts for flying machines, many of the applications had not adhered to the proper scaling procedures required for realistic simulation of full-scale aircraft motions. The NACA researchers were well aware that certain model features other than geometrical shape required application of scaling factors to ensure that the flight motions of the model would replicate those of the aircraft during flight. In particular, the requirements to scale the mass and the distribution of mass within the model were very specific.[442] The fundamental theories and derivation of scaling factors for free-flight models are based on the science known as dimensional analysis. Briefly, dynamic free-flight models are constructed so that the linear and angular motions and rates of the model can be readily scaled to full-scale values. For example, a dynamically scaled 1/9-scale model will have a wingspan 1/9 that of the airplane and a weight 1/729 that of the airplane. More important, the scaled model will exhibit angular velocities three times faster than those of the airplane, creating a potential challenge for a remotely located human pilot attempting to control its rapid motions.
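To make the scaling arithmetic concrete, the short sketch below computes the standard dynamic-scaling ratios from a chosen linear scale factor. It is illustrative only: the function name is invented here, and the relations assume the Froude-number-based scaling customarily used for dynamically scaled free-flight models, with model and airplane operating at the same air density.

```python
# Illustrative sketch (not from the source): dynamic (Froude) scaling
# ratios for a free-flight model, assuming model and full-scale airplane
# operate at equal air density. The helper name is hypothetical.
def dynamic_scale_ratios(n: float) -> dict:
    """Model-to-airplane ratios for a linear scale factor n (e.g., 1/9)."""
    return {
        "length": n,                    # wingspan, fuselage length
        "weight": n ** 3,               # mass scales with volume at equal density
        "linear_velocity": n ** 0.5,    # model flies slower than the airplane
        "angular_velocity": n ** -0.5,  # model rotates faster than the airplane
        "time": n ** 0.5,               # model motions unfold on a compressed clock
    }

ratios = dynamic_scale_ratios(1 / 9)
print(ratios["weight"])            # 1/729, as cited for a 1/9-scale model
print(ratios["angular_velocity"])  # 3.0 -> spin rates three times full scale
```

With n = 1/9, the weight ratio is (1/9)³ = 1/729 and the angular-rate ratio is √9 = 3, matching the figures cited above.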

Initial NACA testing of dynamically scaled models consisted of spin tests of biplane models that were hand-launched by a researcher or catapulted from a platform about 100 feet above the ground in an airship hangar at Langley Field.[443] As the unpowered model spun toward the ground, its path was tracked by a pair of researchers holding a retrieval net similar to those used in fire rescues. To an observer, the testing technique contained all the elements of an old silent movie, including the dash for the falling object. The information provided by this free-spin test technique was valuable and provided confidence (or lack thereof) in the ability of the model to predict full-scale behavior, but the briefness of each test and the inevitable delays caused by damage to the model left much to be desired.

The free-flight model testing at Langley was accompanied by other forms of analysis, including a 5-foot vertical wind tunnel in which the aerodynamic characteristics of the models could be measured during simulated spinning motions while attached to a motor-driven spinning apparatus. The aerodynamic data gathered in the Langley 5-Foot Vertical Tunnel were used for analyses of spin modes, the effects of various airplane components in spins, and the impact of configuration changes. Because the airstream in the tunnel was directed downward, however, free-spinning tests could not be conducted.[444]

Meanwhile, in England, the Royal Aircraft Establishment (RAE) was aware of the NACA’s airship hangar free-spinning technique and had been inspired to explore the use of similar catapulted model spin tests in a large building. The RAE experience led to the same unsatisfactory conclusions and redirected its interest to experiments with a novel 2-foot-diameter vertical free-spinning tunnel. The positive results of tests of very small models (wingspans of a few inches) in the apparatus led the British to construct a 12-foot vertical spin tunnel that became operational in 1932.[445] Tests in the facility were conducted with the model launched into a vertically rising airstream, with the model’s weight supported by its aerodynamic drag in the rising airstream. The model’s vertical position in the test section could be reasonably maintained within the view of an observer by precise and rapid control of the tunnel speed, and the resulting test time could be much longer than that obtained with catapulted models. The advantages of this technique were very apparent to the international research community, and the facility features of the RAE tunnel have influenced the design of all other vertical spin tunnels to this day.

This cross-sectional view of the Langley 20-Foot Vertical Spin Tunnel shows the closed-return tunnel configuration, the location of the drive fan at the top of the facility, and the locations of safety nets above and below the test section to restrain and retrieve models. Labeled components include the turning vanes, honeycomb, test section, documentation camera, and data acquisition cameras (2 of 8). NASA.

When the NACA learned of the new British tunnel, Charles H. Zimmerman of the Langley staff led the design of a similar tunnel known as the Langley 15-Foot Free-Spinning Wind Tunnel, which became operational in 1935.[446] The use of clockwork delayed-action mechanisms to move the control surfaces of the model during the spin enabled the researchers to evaluate the effectiveness of various combinations of spin recovery techniques. The tunnel was immediately used to accumulate design data for satisfactory spin characteristics, and its workload increased dramatically.

Langley replaced its 15-Foot Free-Spinning Wind Tunnel in 1941 with a 20-foot spin tunnel that produced higher test speeds to support scaled models of the heavier aircraft emerging at the time. Control inputs for spin recovery were actuated at the command of a researcher rather than by the preset clockwork mechanisms of the previous tunnel. Copper coils placed around the periphery of the tunnel, when energized, set up a magnetic field that actuated a magnetic device in the model to operate the model’s aerodynamic control surfaces.[447]

The Langley 20-Foot Vertical Spin Tunnel has since continued to serve the Nation as the most active facility for spinning experiments and other studies requiring a vertical airstream. Data acquisition is based on a model space positioning system that uses retro-reflective targets attached to the model for determining model position, and results include spin rate, model attitudes, and control positions.[448] The Spin Tunnel has supported the development of nearly all U.S. military fighter and attack aircraft, trainers, and bombers during its 68-year history, with nearly 600 projects conducted for different aerospace configurations to date.

Quest for Guidelines: Tail Damping Power Factor

An empirical criterion based on the projected side area and mass distribution of the airplane was derived in England, and the Langley staff proposed a design criterion in 1939 based solely on the geometry of aircraft tail surfaces. Known as the tail-damping power factor (TDPF), it was touted as a rapid estimation method for determining whether a new design was likely to comply with the minimum requirements for safety in spinning.[508]

The beginning of World War II and the introduction of a new Langley 20-Foot Spin Tunnel in 1941 resulted in a tremendous demand for spinning tests of high-priority military aircraft. The workload of the staff increased dramatically, and a tremendous amount of data was gathered for a large number of different configurations. Military requests for spin tunnel tests filled all available tunnel test time, leaving none for general research. At the same time, configurations were tested with radical differences in geometry and mass distribution. Tailless aircraft with their masses distributed in a primarily spanwise direction were introduced, along with twin-engine bombers and other unconventional designs with moderately swept wings and canards.

In the 1950s, advances in aircraft performance provided by the introduction of jet propulsion resulted in radical changes in aircraft configurations, creating new challenges for spin technology. Military fighters no longer resembled the aircraft of World War II, as swept wings and long, pointed fuselages became commonplace. Suddenly, certain factors, such as mass distribution, became even more important, and airflow around the unconventional, long fuselage shapes during spins dominated the spin behavior of some configurations. At the same time, fighter aircraft became larger and heavier, resulting in much higher masses relative to the atmospheric density, especially during flight at high altitudes.

The Unitary Plan Tunnels

In the aftermath of World War II and the early days of the Cold War, the Air Force, Army, Navy, and the NACA evaluated what the aeronautical industry needed to continue leadership and innovation in aircraft and missile development. Specifically, the United States needed more transonic and supersonic tunnels. The joint evaluation resulted in a proposal called the Unitary Plan. President Harry S. Truman’s Air Policy Commission urged the passage of the Unitary Plan in January 1948. The draft plan, distributed to the press at the White House, proposed the installation of the first 16 wind tunnels "as quickly as possible,” with the remainder to follow quickly.[551]

Congress passed the Unitary Wind Tunnel Plan Act, and President Truman signed it on October 27, 1949. The act authorized the construction of a group of wind tunnels at U.S. Air Force and NACA installations for the testing of supersonic aircraft and missiles and for the high-speed and high-altitude evaluation of engines. The wind tunnel system was to benefit industry, the military, and other Government agencies.[552]

The portion of the Unitary Plan assigned to the U.S. Air Force led to the creation of the Arnold Engineering Development Center (AEDC) at Tullahoma, TN. Dedicated in June 1951, the AEDC took advantage of abundant hydroelectric power provided by the nearby Tennessee Valley Authority. The Air Force erected facilities, such as the Propulsion Wind Tunnel and two individual 16-foot wind tunnels that covered the range of Mach 0.2 to Mach 4.75, for the evaluation of full-scale jet and rocket engines in simulated aircraft and missile applications. From an initial 2 wind tunnels and an engine test facility, the AEDC’s research equipment expanded to 58 aerodynamic and propulsion wind tunnels.[553] The Aeropropulsion Systems Test Facility, operational in 1985, was the finishing touch, which made the AEDC, in the words of one observer, "the world’s most complete aerospace ground test complex.”[554]

The sole focus of the AEDC on military aeronautics led the NACA to focus on commercial aeronautics. The Unitary Plan provided two benefits for the NACA. First, it upgraded and repowered the NACA’s existing wind tunnel facilities. Second, and more importantly, the Unitary Plan provided for a new tunnel at each of the three NACA laboratories at a cost of $75 million. Overall, those three tunnels represented, to one observer, "a landmark in wind tunnel design by any criterion—size, cost, performance, or complexity.”[555]

The NACA provided a manual for users of the Unitary Plan Wind Tunnel system in 1956, after the facilities became operational. The document allowed aircraft manufacturers, the military, and other Government agencies to plan development testing. Two general classes of work could be conducted in the Unitary Plan wind tunnels: company or Government projects. Industrial clients were responsible for renting the facility, which cost between $25,000 and $35,000 per week (approximately $190,000 to $265,000 in modern currency), depending on the tunnel, the utility costs required to power the facility, and the labor, materials, and overhead related to the creation of the basic test report. The test report consisted of plotted curves, tabulated data, and a description of the methods and procedures that allowed the company to properly interpret the data. The NACA kept the original report in a secure file for 2 years to protect the interests of the company. There were no fees for work initiated by Government agencies.[556]

The Langley Unitary Plan Wind Tunnel began operations in 1955. NACA researcher Herbert Wilson led a design team that created a closed-circuit, continuous-flow, variable-density supersonic tunnel with two test sections. The test sections, each measuring 4 by 4 feet and 7 feet long, covered a low Mach range (1.5 to 2.9) and a high Mach range (2.3 to 4.6). Tests in the Langley Unitary Plan Tunnel included force and moment measurements, surface pressure measurements and distributions, visualization of on- and off-surface airflow patterns, and heat transfer. The tunnel operated at 150 °F, with the capability of generating 300-400 °F in short bursts for heat transfer studies. Built at an initial cost of $15.4 million, the Langley facility was the cheapest of the three NACA Unitary Plan wind tunnels.[557]

The original intention of the Langley Unitary Plan tunnel was missile development. A long series of missile tests addressed high-speed performance, stability and control, maneuverability, jet-exhaust effects, and other factors. NACA researchers quickly placed models of the McDonnell-Douglas F-4 Phantom II in the tunnel in 1956, and soon after, various models of the North American X-15, the General Dynamics F-111 Aardvark, proposed supersonic transport configurations, and spacecraft appeared in the tunnel.[558]

A model of the Apollo Launch Escape System in the Unitary Wind Tunnel at NASA Ames. NASA.

The Ames Unitary Plan Wind Tunnel opened in 1956. It featured three test sections: an 11- by 11-foot transonic section (Mach 0.3 to 1.5) and two supersonic sections that measured 9 by 7 feet (Mach 1.5 to 2.6) and 8 by 7 feet (Mach 2.5 to 3.5). Tunnel personnel could adjust the airflow to simulate flying conditions at various altitudes in each section.[559]

The power and magnitude of the tunnel facility called for unprecedented design and construction. The 11-stage axial-flow compressor featured a 20-foot diameter and was capable of moving air at 3.2 million cubic feet per minute. The complete assembly, which included over 2,000 rotor and stator blades, weighed 445 tons. A flow diversion valve allowed the compressor to drive either the 9- by 7-foot or the 8- by 7-foot supersonic wind tunnel. At 24 feet in diameter, the valve was the largest of its kind in the world in 1956, yet it took only 3.5 minutes to switch the airflow between the two wind tunnels. Four main drive motors, weighing 150 tons each, powered the facility. They could generate 180,000 horsepower on a continual basis and 216,000 horsepower for 1-hour periods. Crews used 10,000 cubic yards of concrete for the foundation and 7,500 tons of steel plate for the major structural components. Workers expended 100 tons of welding rods during construction. When the facility began operations in 1956, the project had cost the NACA $35 million.[560]

The personnel of the Ames Unitary Plan Wind Tunnel evaluated every major craft in the American aerospace industry from the late 1950s to the late 20th century. In aeronautics, models of nearly every commercial transport and military fighter underwent testing. For the space program, the Unitary Plan Wind Tunnel was crucial to the design of the landmark Mercury, Gemini, and Apollo spacecraft, and the Space Shuttle. That record led NASA to assert that the facility was a "unique national asset of vital importance to the nation’s defense and its competitive position in the world aerospace market.” It also reflected the fact that the Unitary Plan facility was NASA’s most heavily used wind tunnel, with over 1,000 test programs conducted during 60,000 hours of operation by 1994.[561]

SAMPLE AEROSPACE VEHICLES EVALUATED IN THE UNITARY PLAN WIND TUNNEL

MILITARY: Convair B-58; Lockheed A-12/YF-12/SR-71; Lockheed F-104; North American XB-70; Rockwell International B-1; General Dynamics F-111; McDonnell-Douglas F/A-18; Northrop/McDonnell-Douglas YF-23

COMMERCIAL: McDonnell-Douglas DC-8; McDonnell-Douglas DC-10; Boeing 727; Boeing 767

SPACE: Mercury spacecraft; Gemini spacecraft; Apollo Command Module; Space Shuttle orbiter

The National Park Service designated the Ames Unitary Plan Wind Tunnel Facility a national historic landmark in 1985. The Unitary Plan Wind Tunnel represented "the logical crossover point from NACA to NASA” and "contributed equally to both the development of advanced American aircraft and manned spacecraft.”[562]

The Unitary Plan facility at Lewis Research Center allowed the observation and development of full-scale jet and rocket engines in a 10- by 10-foot supersonic wind tunnel that cost $24.6 million. Designed by Abe Silverstein and Eugene Wasielewski, the test section featured a flexible wall made up of 10-foot-wide polished stainless steel plates, almost 1.5 inches thick and 76 feet long. Hydraulic jacks changed the shape of the plates to simulate nozzle shapes covering the range of Mach 2 to Mach 3.5. Silverstein and Wasielewski also incorporated both open and closed operation. For propulsion tests, air entered the tunnel and exited on the other side of the test section continuously. In the aerodynamic mode, the same air circulated repeatedly to maintain a higher atmospheric pressure, desired temperature, or moisture content. The Lewis Unitary Plan Wind Tunnel contributed to the development of the General Electric F110 and Pratt & Whitney TF30 jet engines intended for the Grumman F-14 Tomcat and the liquid-fueled rocket engines destined for the Space Shuttle.[563]

Many NACA tunnels found long-term use with NASA. After NASA made modifications in the 1950s, the 20-Foot VST allowed the study of spacecraft and recovery devices in vertical descent. In the early 21st century, researchers used the 20-Foot VST to test the free-fall and dynamic stability characteristics of spacecraft models. It remains one of only two operational spin tunnels in the world.[564]


The 8-Foot Transonic Pressure Tunnel (TPT). NASA.

Toward the Future

NASA remains active in the pursuit of new materials that will support fresh objectives for enabling a step change in efficiency for commercial aircraft of the next few decades. A key element of NASA’s strategy is to promote the transition from conventional, fuselage-and-wing designs for large commercial aircraft to flying wing designs, with the Boeing X-48 Blended Wing-Body subscale demonstrator as the model. The concept assumes many changes in current approaches to flight controls, propulsion, and, indeed, expectations for the passenger experience. Among the many innovations to maximize efficiency, such flying wing airliners also must be supported by a radical new look at how composite materials are produced and incorporated in aircraft design.

NASA’s Langley Research Center started experimenting with this stitching machine in the early 1990s. The machine stitches carbon, Kevlar, and fiberglass composite preforms before they are infused with plastic epoxy through the resin transfer molding process. The machine was limited to stitching only small and nearly flat panels. NASA.

To support the structural technology for the BWB, Boeing faces the challenge of manufacturing an aircraft with a flat bottom, no constant section, and a diversity of shapes across the outer mold line.[772] To meet these challenges, Boeing is returning to the stitching method, although with a different concept. Boeing’s concept is called pultruded rod stitched efficient unitized structure (PRSEUS). Aviation Week & Space Technology described the idea: "This stitches the composite frames and stringers to the skin to produce a fail-safe structure. The frames and stringers provide continuous load paths and the nylon stitching stops cracks. The design allows the use of minimum-gauge post-buckled skins, and Boeing estimates a PRSEUS pressure vessel will be 28% lighter than a composite sandwich structure.”[773]

Under a NASA contract, Boeing is building a 4-foot by 8-foot pressure box with multiple frames and a 30-foot-wide test article of the double-deck BWB airframe. The manufacturing process resembles past experience with the advanced stitching machine. Structure laid up from dry fabric is stitched before a machine pulls carbon fiber rods through pockets in the stringers. The process locks the structure and stringers into a preform without the need for a mold-line tool. The parts are cured in an oven, not an autoclave.[774]

The dream of designing a commercially viable, large transport aircraft made entirely out of plastic may finally soon be realized. The all-composite fuselage of the Boeing 787 and the proposed Airbus A350 are only the latest markers in progress toward this objective. But the next generation of both commercial and military transports will be the first to benefit from composite materials that may be produced and assembled nearly as efficiently as are aluminum and steel.

Extending the Vision: The Evolution of Mini-Sniffer

The Mini-Sniffer program was initiated in 1975 to develop a small, unpiloted, propeller-driven aircraft with which to conduct research on turbulence, natural particulates, and manmade pollutants in the upper atmosphere. Unencumbered and flying at speeds of around 45 mph, the craft was designed to reach a maximum altitude of 90,000 feet. The Mini-Sniffer was capable of carrying a 25-pound instrument package to 70,000 feet and cruising there for about 1 hour within a 200-mile range.

The Aircraft Propulsion Division of NASA’s Office of Aeronautics and Space Technology sponsored the project, and a team at the Flight Research Center, led by R. Dale Reed, was charged with designing and testing the airplane. Researchers at Johnson Space Center developed a hydrazine-fueled engine for use at high altitudes, where oxygen is scarce. To avoid delays while waiting for the revolutionary new engine, Reed’s team built two Mini-Sniffer aircraft powered by conventional gasoline engines. These were used for validating the airplane’s structure, aerodynamics, handling qualities, guidance and control systems, and operational techniques.[899] As Reed worked on the airframe design, he built small, hand-launched balsa wood gliders for qualitative evaluation of different configurations. He decided from the outset that the Mini-Sniffer should have a pusher engine to leave the nose-mounted payload free to collect air samples without disruption or contamination from the engine. Climb performance was given priority over cruise performance.

Eventually, Reed’s team constructed three configurations. The first two—using the same airframe—were powered by a single two-stroke, gasoline-fueled go-cart engine driving a 22-inch-diameter propeller. The third was powered by a hydrazine-fueled engine developed by James W. Akkerman, a propulsion engineer at Johnson Space Center. Thirty-three flights were completed with the three airplanes, each of which provided experimental research results. Thanks to the use of a six-degree-of-freedom simulator, none of the Mini-Sniffer flights had to be devoted to training. Simulation also proved useful for designing the control system and, when compared with flight results, proved an accurate representation of the vehicle’s flight characteristics.

The Mini-Sniffer I featured an 18-foot-span, aft-mounted wing and a nose-mounted canard. Initially, it was flown via a model airplane radio-control box. Dual-redundant batteries supplied power, and fail-safe units were provided to put the airplane into a gliding turn for landing descent in the event of a transmitter failure. After 12 test flights, Reed abandoned the flying-wing canard configuration for one with substantially greater stability.[900] The Mini-Sniffer II design had a 22-foot wingspan with twin tail booms supporting a horizontal stabilizer. This configuration was less susceptible to the flat spin encountered with the Mini-Sniffer I on its final flight, when the ground pilot’s timing between right and left yaw pulses coupled the adverse yaw characteristics of the ailerons with the vehicle’s Dutch roll motions. The ensuing unrecoverable spin resulted in only minor damage to the airplane, as the landing gear absorbed most of the impact forces. It took 3 weeks to restore the airframe to flying condition and convert it to the Mini-Sniffer II configuration. Dihedral wingtips provided additional roll control.

The modified craft was flown 20 times, including 10 flights using wing-mounted ailerons to evaluate their effectiveness in controlling the aircraft. Simulations showed that summing a yaw-rate gyro with pilot inputs to the rudders gave automatic wing leveling at all altitudes and yaw damping at altitudes above 60,000 feet. Subsequently, the ailerons were locked and a turn-rate command system introduced in which the ground controller needed only to turn a knob to achieve the desired turning radius. Flight-testing indicated that the Mini-Sniffer II had a high static-stability margin, making the aircraft very easy to trim and minimizing the effects of altering nose shapes and sizes or adding pods of various shapes and sizes under the fuselage to accommodate instrumentation. A highly damped short-period longitudinal oscillation resulted in rapid recovery from turbulence or upset. When an inadvertent hard-over rudder command rolled the airplane inverted, the ground pilot simply turned the yaw damper on, and the vehicle recovered automatically, losing just 200 feet of altitude.[901]

The Mini-Sniffer III was a completely new airframe, similar in configuration to the Mini-Sniffer II but with a lengthened forward fuselage. An 18-inch nose extension provided better balance and greater payload capacity—up to 50 pounds plus telemetry equipment, radar transponder, radio-control gear, instrumentation, and sensors for stability and control investigations. Technicians at a sailplane repair company constructed the fuselage and wings from fiberglass and plastic foam, and they built tail surfaces from Kevlar and carbon fiber. Metal workers at Dryden fashioned an aluminum tail assembly, while a manufacturer of mini-RPVs designed and constructed an aluminum hydrazine tank to be integral with the fuselage. The Mini-Sniffer III was assembled at Dryden and integrated with Akkerman’s engine.

The 15-horsepower, hydrazine-fueled piston engine drove a 38-inch-diameter, 4-bladed propeller. Plans called for eventually using a 6-foot-diameter, 2-bladed propeller for high-altitude flights. A slightly pressurized tank fed liquid hydrazine into a fuel pump, where it became pressurized to 850 pounds per square inch (psi). A fuel valve then routed some of the pressurized hydrazine to a gas generator, where the liquid fuel was converted to hot gas at 1,700 degrees Fahrenheit (°F). Expansion of the hot gas drove the piston.[902] Since hydrazine doesn’t need to be mixed with oxygen for combustion, it is highly suited to use in the thin upper atmosphere. This led to a proposal to send a hydrazine-powered aircraft, based on the Mini-Sniffer concept, to Mars, where it would be flown in the thin Martian atmosphere while collecting data and transmitting it back to scientists on Earth. Regrettably, such a vehicle has yet to be built.

Ground crew for the Mini-Sniffer III wore self-contained suits and oxygen tanks because the engine was fueled with hydrazine. NASA.

During a 1-hour shakedown flight on November 23, 1976, the Mini-Sniffer III reached an altitude of 20,000 feet. Power fluctuations prevented the airplane from attaining the planned altitude of 40,000 feet, but otherwise, the engine performed well. About 34 minutes into the flight, fuel tank pressure was near zero, so the ground pilot closed the throttle and initiated a gliding descent. Some 30 minutes later, the Mini-Sniffer III touched down on the dry lakebed. The retrieval crew, wearing protective garments to prevent contact with the toxic and highly flammable fuel, found that there had been a hydrazine leak. This in itself did not account for the power reduction, however. Investigators suggested a possible fuel line blockage or valve malfunction might have been to blame.[903] Although the mission successfully demonstrated the operational characteristics of a hydrazine-fueled, non-air-breathing aircraft, the Mini-Sniffer III never flew again. Funding for tests with a variable-pitch propeller needed for flights at higher altitudes was not forthcoming, although interest in a Mars exploration airplane resurfaced from time to time over the next few decades.[904]

The Mini-Sniffer project yielded a great deal of useful information for application to future RPRV efforts. One area of interest concerned procedures for controlling the vehicle. On the first flights of Mini-Sniffer I, ordinary model radio-control gear was used. This was later replaced with a custom-made, multichannel radio-control system that offered greater range and was equipped with built-in fail-safe circuits to retain control when more than one transmitter was used. The onboard receiver was designed to respond only to the strongest signal. To demonstrate this feature, one of the vehicles was flown over two operating transmitter units located 50 feet apart on the ground. As the Mini-Sniffer passed overhead, the controller of the transmitter nearest the airplane took command from the other controller, with both transmitters broadcasting on the same frequency. With typical model radio-control gear, interference from two simultaneously operating transmitters usually results in loss of control regardless of relative signal strength.[905]

A chase truck was used during developmental flights to collect early data on control issues. A controller, called the visual pilot, operated the airplane from the truck bed while observing its response to commands. Speed and trim curves were plotted based on the truck’s speed and a recording of the pilot’s inputs. During later flights, a remote pilot controlled the Mini-Sniffer from a chase helicopter. Technicians installed a telemetering system and radar transponder in the airplane so that it could be controlled at altitude from the NASA Mission Control Room at Dryden. Plot boards at the control station displayed position and altitude, airspeed, turn rate, elevator trim, and engine data. A miniature television camera provided a visual reference for the pilot. In most cases, a visual pilot took control for landing while directly observing the airplane from a vantage point adjacent to the landing area. Reed, however, also demonstrated a solo flight, which he controlled unassisted from takeoff to landing.

"I got a bigger thrill from doing this than from my first flight in a light plane as a teenager,” he said, "probably because I felt more was at stake.”[906]


NASA and Supersonic Cruise

William Flanagan

For an aircraft to attain supersonic cruise, or the capability to fly faster than sound for a significant portion of time, the designer must balance lift, drag, and thrust to achieve the performance requirements, which in turn will affect the weight. Although supersonic flight was achieved over 60 years ago, successful piloted supersonic cruise aircraft have been rare. NASA has been involved in developing the required technology for those rare designs, despite periodically shifting national priorities.

IN THE 1930S AND EARLY 1940S, investigation of flight at speeds faster than sound began to assume increasing importance, thanks initially to the "compressibility” problems encountered by rapidly rotating propeller tips but then to the dangerous trim changes and buffeting encountered by diving aircraft. Researchers at the National Advisory Committee for Aeronautics (NACA) began to focus on this new and troublesome area. The concept of Mach number (the ratio of a body’s speed to the speed of sound in air at the body’s location) swiftly became a familiar term to researchers. At first, the subject seemed heavily theoretical. But then, with the increasing prospect of American involvement in the Second World War, NACA research had to shift to the shorter-term objectives of improving American warplane performance, notably by reducing drag and refining the Agency’s symmetrical low-drag airfoil sections. But with the development of fighter aircraft powered by engines of 1,500 to 2,000 horsepower and capable of diving in excess of Mach 0.75, supersonic flight became an issue of paramount military importance. Fighter aircraft in steep power-on dives from combat altitudes over 25,000 feet could reach 450 mph, corresponding to Mach numbers over 0.7. Unusual flight characteristics could then manifest themselves, such as severe buffeting, uncommanded increases in dive angle, and unusually high stick forces.

The sleek, twin-engine, high-altitude Lockheed P-38 showed these characteristics early in the war, and a crash effort by the manufacturer, aided by the NACA, showed that although the aircraft was not "supersonic,” i.e., flying faster than the speed of sound at its altitude, the airflow at the thickest part of the wing was at that speed, producing shock waves that were unaccounted for in the design of the flight control surfaces. The shock waves were a thin area of high pressure, where the supersonic airflow around the body began to slow toward its customary subsonic speed. This shock region increased drag on the vehicle considerably, as well as altered the lift distribution on the wing and control surfaces. An expedient fix, in the form of a dive flap activated by the pilot, was installed on the P-38, but the concept of a "critical Mach number” was introduced to the aviation industry: the aircraft flight speed at which supersonic flow could be present on the wing and fuselage. Newer high-speed, propeller-driven fighters, such as the P-51D with its thin laminar flow wing, had critical Mach numbers of 0.75, which allowed an adequate combat envelope, but the looming turbojet revolution removed the self-governing speed limit imposed by the thrust loss of supersonic propeller tips. Investigation of supersonic aircraft was no longer a theoretical exercise.[1054]
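As a rough illustration of the Mach numbers discussed above, the sketch below evaluates M = V/a, with the local speed of sound computed from the standard-atmosphere temperature lapse. The constants and the dive scenario are illustrative assumptions, not values taken from the source.

```python
import math

# Illustrative sketch (assumptions, not from the source): Mach number
# M = V / a, where a = sqrt(gamma * R * T) and T follows the standard-
# atmosphere lapse rate, valid in the troposphere (below about 11 km).
GAMMA = 1.4           # ratio of specific heats for air
R_AIR = 287.05        # specific gas constant for air, J/(kg*K)
T_SEA_LEVEL = 288.15  # standard sea-level temperature, K
LAPSE_RATE = 0.0065   # temperature drop per meter of altitude, K/m

def mach_number(speed_mps: float, altitude_m: float) -> float:
    temp = T_SEA_LEVEL - LAPSE_RATE * altitude_m
    speed_of_sound = math.sqrt(GAMMA * R_AIR * temp)  # local a, in m/s
    return speed_mps / speed_of_sound

# A WWII-era dive: roughly 450 mph true airspeed at 25,000 feet
speed = 450 * 0.44704       # mph -> m/s
altitude = 25_000 * 0.3048  # ft -> m
print(f"M = {mach_number(speed, altitude):.2f}")  # ~0.65 under these
# assumptions; the compressibility troubles described above set in as
# fighters pushed toward and beyond Mach 0.7
```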

Introducing Synthetic Vision to the Cockpit

Robert A. Rivers

The evolution of flight has witnessed the steady advancement of instrumentation to improve safety and efficiency. Providing revolutionary enhancements to aircraft instrument panels for improved situational awareness, efficiency of operation, and mitigation of hazards has been a NASA priority for over 30 years. NASA’s heritage of research in synthetic vision has generated useful concepts, demonstrations of key technological breakthroughs, and prototype systems and architectures.

THE CONNECTION OF THE NATIONAL AERONAUTICS AND SPACE ADMINISTRATION (NASA) to improving instrument displays dates to the advent of instrument flying, when James H. Doolittle conducted his "blind flying” experiments with the Guggenheim Flight Laboratory in 1929, in the era of the Ford Tri-Motor transport.[1121] Doolittle became the first pilot to take off, fly, and land entirely by instruments, his visibility being totally obscured by a canvas hood. At the time of this flight, Doolittle was already a world-famous airman, who had earned a doctorate in aeronautical engineering from the Massachusetts Institute of Technology and whose research on accelerations in flight constituted one of the most important contributions to interwar aeronautics. His formal association with the National Advisory Committee for Aeronautics (NACA) Langley Aeronautical Laboratory began in 1928. In the late 1950s, Doolittle became the last Chairman of the NACA and helped guide its transition into NASA.

The capabilities of air transport aircraft increased dramatically between the era of the Ford Tri-Motor of the late 1920s and the jetliners of the late 1960s. Passenger capacity increased thirtyfold, range by a factor of ten, and speed by a factor of five.[1122] But little changed in one basic area: cockpit presentations and the pilot-aircraft interface. As NASA Ames Research Center test pilot George E. Cooper noted at a seminal November 1971 conference held at Langley Research Center (LaRC) on technologies for future civil aircraft:

Controls, selectors, and dial and needle instruments which were in use over thirty years ago are still common in the majority of civil aircraft presently in use. By comparing the cockpit of a 30-year-old three-engine transport with that of a current four-engine jet transport, this similarity can be seen. However, the cockpit of the jet transport has become much more complicated than that of the older transport because of the evolutionary process of adding information by more instruments, controls, and selectors to provide increased capability or to overcome deficiencies. This trend toward complexity in the cockpit can be attributed to the use of more complex aircraft systems and the desire to extend the aircraft operating conditions to overcome limitations due to environmental constraints of weather (e.g., poor visibility, low ceiling, etc.) and of congested air traffic. System complexity arises from adding more propulsion units, stability and control augmentation, control automation, sophisticated guidance and navigation systems, and a means for monitoring the status of various aircraft systems.[1123]

Assessing the state of available technology, human factors, and potential improvement, Cooper issued a bold challenge to NASA and the larger aeronautical community, noting: "A major advance during the 1970s must be the development of more effective means for systematically evaluating the available technology for improving the pilot-aircraft interface if major innovations in the cockpit are to be obtained during the 1980s.”[1124] To illustrate his point, Cooper included two drawings, one representative of the dial-intensive contemporary jetliner cockpit presentation and the other of what might be achieved with advanced multifunction display approaches over the next decade.

The pilot-aircraft interface, as seen by NASA pilot George E. Cooper, circa 1971. Note the predominance of gauges and dials. NASA.

Cooper’s concept of an advanced multifunction electronic cockpit. Note the flightpath "highway in the sky” presentation. NASA.

At the same conference, Langley Director Edgar M. Cortright noted that, in the 6 years from 1964 through 1969, airline costs incurred by congestion-caused terminal area traffic delays had risen from less than $40 million to $160 million per year. He said that it was "symptomatic of the inability of many terminals to handle more traffic,” but that "improved ground and airborne electronic systems, coupled with acceptable aircraft characteristics, would improve all-weather operations, permit a wider variety of approach paths and closer spacing, and thereby increase airport capacity by about 100 percent if dual runways were provided.”[1125] Langley avionics researcher G. Barry Graves noted the potentiality of revolutionary breakthroughs in cockpit avionics to improve the pilot-aircraft interface and take aviation operations and safety to a new level, particularly the use of "computer-centered digital systems for both flight management and advanced control applications, automated communications, [and] systems for wide-area navigation and surveillance.”[1126]

But this early work generated little immediate response from the aviation community, as requisite supporting technologies were not sufficiently mature to permit their practical exploitation. It was not until the 1980s, when the pace of computer graphics and simulation development accelerated, that a heightened interest developed in improving pilot performance in poor visibility conditions. Accordingly, researchers increasingly studied the application of artificial intelligence (AI) to flight deck functions, working closely with professional pilots from the airlines, military, and flight-test community. While many exaggerated claims were made—given the relative immaturity of the computer and AI field at that time—researchers nevertheless recognized, as Sheldon Baron and Carl Feehrer wrote, "one can conceive of a wide range of possible applications in the area of intelligent aids for flight crew.”[1127] Interviews with pilots revealed that "descent and approach phases accounted for the greatest amounts of workload when averaged across all system management categories,” stimulating efforts to develop what was then popularly termed a "pilot’s associate” AI system.[1128]

Подпись: 11In this growing climate of interest, John D. Shaughnessy and Hugh

P. Bergeron’s Single Pilot Instrument Flight Rules (SPIFR) project con­stituted a notable first step, inasmuch as SPIFR’s novel "follow-me box” showed promise as an intuitive aid for inexperienced pilots fly­ing in instrument conditions. Subsequently, Langley’s James J. Adams conducted simulator evaluations of the display, confirming its poten­tial.[1129] Building on these "follow-me box” developments, Langley’s Eric C. Stewart developed a concept for portraying an aircraft’s cur­rent and future desired positions. He created a synthetic display similar, to the scene a driver experiences while driving a car, combining it with
highway-in-the-sky (HITS) displays.[1130] This so-called "E-Z Fly” project was incorporated into Langley’s General-Aviation Stall/Spin Program, a major contemporary study to improve the safety of general-aviation (GA) pilots and passengers. Numerous test subjects, from nonpilots to highly experienced test pilots, evaluated Stewart’s concept of HITS implemen­tation. NASA flight-test reports illustrated both the challenges and the opportunities that the HITS/E-Z Fly combination offered.[1131]

E-Z Fly decoupled the flight controls of a Cessna 402 twin-engine, general-aviation aircraft simulated in Langley’s GA Simulator, and HITS offered a system of guidance to the pilot. This decoupling, while making the simulated airplane "easy to fly,” also reduced its responsiveness. Providing this level of HITS technology in a low-end GA aircraft posed a range of technical, economic, implementation, and operational challenges. As stated in a flight-test report, "The concept of placing inexperienced pilots in the National Airspace System has many disadvantages. Certainly, system failures could have disastrous consequences.”[1132] Nevertheless, the basic technology was sound and helped set the stage for future projects. NASA Langley was developing the infrastructure in the early 1990s to support wide-ranging research into synthetically driven flight deck displays for GA, commercial and business aircraft (CBA), and NASA’s High-Speed Civil Transport (HSCT).[1133] The initial limited idea of easing the workload for low-time pilots would lead to sophisticated display systems that would revolutionize the flight deck. Ultimately, in 1999, a dedicated, well-funded Synthetic Vision Systems Project was created, headed by Daniel G. Baize under NASA’s Aviation Safety Program (AvSP). Inspired by Langley researcher Russell V. Parrish, researchers accomplished a number of comprehensive and successful GA and CBA flight and simulation experiments before the project ended in 2005. These complex, highly organized, and efficiently interrelated experiments pushed the state of the art in aircraft guidance, display, and navigation systems.

Significant work on synthetic vision systems and sensor fusion issues was also undertaken at the NASA Johnson Space Center (JSC) in the late 1990s, as researchers grappled with the challenge of developing displays for ground-based pilots to control the proposed X-38 reentry test vehicle. As subsequently discussed, through a NASA-contractor partnership, they developed a highly efficient sensor fusion technique whereby real-time video signals could be blended with synthetically derived scenes using a laptop computer. After cancellation of the X-38 program, JSC engineer Jeffrey L. Fox and Michael Abernathy of Rapid Imaging Software, Inc. (RIS, which developed the sensor fusion technology for the X-38 program under a small business contract), continued to expand these initial successes, together with Michael L. Coffman of the Federal Aviation Administration (FAA). Later joined by astronaut Eric C. Boe and the author (formerly a project pilot on a number of LaRC Synthetic Vision Systems (SVS) programs), this partnership accomplished four significant flight-test experiments using JSC and FAA aircraft, motivated by a unifying belief in the value of Synthetic Vision Systems technology for increasing flight safety and efficiency.

Synthetic Vision Systems research at NASA continues today at various levels. After the SVS project ended in 2005, almost all team members continued building upon its accomplishments, transitioning to the new Integrated Intelligent Flight Deck Technologies (IIFDT) project, "a multi-disciplinary research effort to develop flight deck technologies that mitigate operator-, automation-, and environment-induced hazards.”[1134] IIFDT constituted both a major element of NASA’s Aviation Safety Program and a crucial underpinning of the Next Generation Air Transportation System (NGATS), and it was itself dependent upon the maturation of SVS begun within the project that concluded in 2005. While much work remains to be done to fulfill the vision, expectations, and promise of NGATS, the principles and practicality of SVS and its application to the cockpit have been clearly demonstrated.[1135] The following account traces SVS research, as seen from the perspective of a NASA research pilot who participated in key efforts that demonstrated its potential and value for professional civil, military, and general-aviation pilots alike.

Flaming Out on Ice

And just when the aircraft icing community thought it had seen everything—clear ice, rime ice, glazed ice, SLDs, tail plane icing, and freezing rain encountered within the coldest atmospheric conditions possible—a new icing concern was recently discovered in the least likely of places: the interior of jet engines, where parts are often several hundred degrees above freezing. Almost nothing is known about the mechanism behind engine core ice accretion, except that the problem does cause loss of power, even complete flameouts. According to data compiled by Boeing and cited in a number of news media stories and Government reports, there have been more than 100 dramatic power drops or midair engine stoppages since the mid-1990s, including 14 instances since 2002 of dual-engine flameouts in which engine core ice accretion turned a twin-engine jetliner into a glider. "It’s not happening in one particular type of engine and it’s not happening on one particular type of airframe,” said Tom Ratvasky, an icing flight research engineer at GRC. "The problem can be found on aircraft as big as large commercial airliners, all the way down to business-sized jet aircraft.”[1255]

The problem came to light in 2004, when the first documented dual-engine flameout occurred with a U.S. business jet due to core ice accretion. The incident was noted by the NTSB, and during the next 2 years Jim Hookey, an NTSB propulsion expert, watched as two more Beechjets lost engine power despite no evidence of mechanical problems or pilot error. One of those incidents took place over Florida in 2005, when both engines failed within 10 seconds of each other at 38,000 feet. Despite three failed attempts to restart the engines, the pilots were able to safely glide in to a Jacksonville airport, dodging thunderstorms and threatening clouds all the way down. Hookey took the unusual step of interviewing the pilots and became convinced that the power failures were due to an environmental condition. It was shortly after that realization that both the NTSB and the FAA began pursuing icing as a cause.[1256]

Hookey employed some commonsense investigative techniques to find commonality among the incidents he was aware of and others that were suspect. He contacted the engine manufacturers to request they take another look at the detailed technical reports of engines that had failed and then also look at the archived weather data to see if any patterns emerged. By May 2006, the FAA began to argue that the engine problems were being caused by ice crystals being ingested into the engine. The NTSB concurred and suggested how ice crystals can build up inside engines even if the interior temperatures are way above freezing. The theory is that ice particles from nearby storms melt in the hot engine air, and as more ice is ingested, some of the crystals stick to the wet surfaces, cooling them down. Eventually enough ice accretes to cause a problem, usually without warning. In August 2006, the NTSB sent a letter to the FAA detailing the problem as it was then understood and advising the FAA to take action.[1257]

Part of the action the FAA is taking to continue to learn more about the phenomenon, its cause, and potential mitigation strategies is to partner with NASA and others in conducting an in-flight research program. "If we can find ways of detecting this condition and keeping aircraft out of it, that’s something we’re interested in doing,” said Ratvasky, who will help lead the NASA portion of the research program. Considering the number and type of sensors required, the weight and volume of the associated research equipment, the potentially higher loads that may stress the aircraft as it flies in and around fairly large warm-weather thunderstorms, the required range, and the number of people who would like to be on site for the research, NASA won’t be able to use its workhorse Twin Otter icing research aircraft. A twin-turbofan Lockheed S-3B Viking aircraft provided to NASA by the U.S. Navy originally was proposed for this icing research program, but the program requirements outgrew the jet’s capabilities. As of early 2010, the Agency still was considering its options for a host aircraft, although it was possible that the NASA DC-8 airborne science laboratory based at the Dryden Flight Research Center (DFRC) might be pressed into service. In any case, it’s going to take some time to put together the plan, prepare the aircraft, and test the equipment. It may be 2012 before the flight research begins. "It’s a fairly significant process to make sure we are going to be doing this program in a safe way, while at the same time we meet all the research requirements. What we’re doing right now is getting the instrumentation integrated onto the aircraft and then doing the appropriate testing to qualify the instrumentation before we go fly all the way across the world and make the measurements we want to make,” Ratvasky said. In addition to NASA, organizations providing support for this research include the FAA, NCAR, Boeing, Environment Canada, the Australian Bureau of Meteorology, and the National Research Council of Canada.[1258]

In the meantime, ground-based research has been underway, and safety advisories involving jet engines built by General Electric and Rolls-Royce have resulted in those companies making changes in their designs and operations to prevent the chance of any interior ice buildup that could lead to engine failure. Efforts to unlock the science behind internal engine icing are also taking place at Drexel University in Pennsylvania, where researchers are building computer models for use in better understanding the mechanics of how ice crystals can accrete within turbofan engines at high altitude.[1259]

While few technical papers have been published on this subject—none yet appear in NASA’s archive of technical reports—expect the topic of engine ingestion of ice crystals and its detrimental effect on safe operations to get a lot of attention during the next decade as more is learned, rules are rewritten, and potential design changes in jet engines are ordered, built, and deployed into the air fleet.

Challenging Technology: The X-29 Program

Meetings between Defense Advanced Research Projects Agency (DARPA) and NASA Langley personnel in early 1980 initiated planning for support of an advanced forward-swept wing (FSW) research aircraft project with numerous objectives, including assessments and demonstration of superior high-angle-of-attack maneuverability and departure resistance resulting from the aerodynamic behavior of the FSW at high angles of attack. Langley was a major participant in the subsequent program and conducted high-angle-of-attack wind tunnel tests of models of the competing designs by General Dynamics, Rockwell, and Grumman during 1980 and 1981. When Grumman was selected to develop the X-29 research aircraft in December 1981, NASA was a major partner with DARPA and initiated several high-angle-of-attack/stall/spin/departure studies of the X-29, including dynamic force-testing and free-flight model tests in the Full-Scale Tunnel, spinning tests in the Spin Tunnel, initial high-angle-of-attack control system concept development and assessment in the DMS, and assessments of spin entry and poststall motions using a radio-controlled drop model.[1302]

The X-29 flies at high angle of attack during studies of the flow-field shed by the fuselage forebody. Note the smoke injected into the flow for visualization and the emergency spin parachute structure on the rear fuselage. NASA.

Early in the test program, Langley researchers encountered an unanticipated aerodynamic phenomenon for the X-29 at high angles of attack. It had been expected that the FSW configuration would maintain satisfactory airflow on the outer wing panels at high angle of attack; however, dynamic wind tunnel testing to measure the aerodynamic roll damping of an X-29 model in the Full-Scale Tunnel indicated that the configuration would exhibit unstable roll damping and a tendency for oscillatory large-amplitude wing-rocking motions for angles of attack above about 25 degrees. After additional testing and analysis, it was determined that the FSW of the aircraft worked as well as expected, but aerodynamic interactions between the vortical flow shed by the fuselage forebody and the wing were the cause of the undesirable wing rock. When the free-flight model was subsequently flown, the wing rock was encountered as predicted by the earlier force test, resulting in large roll fluctuations at high angles of attack. However, the control effectiveness of the wing trailing-edge flapperon used for artificial damping on the full-scale X-29 was extremely powerful, and the model motions quickly damped out when the system was replicated and engaged for the model.

Obtaining reliable aerodynamic data for high-angle-of-attack tests of subscale models at Langley included high Reynolds number tests in the NASA Ames 12-Foot Pressure Tunnel, where it was found that significant aerodynamic differences could exist for certain configurations between model and full-scale airplane test conditions. Wherever possible, artificial devices such as nose strakes were used on the models to more accurately replicate full-scale aerodynamic phenomena. In lieu of approaches to correct Reynolds number effects for all test models, a conservative approach was used in the design of the flight control system to accommodate variability in system gains and logic to mitigate problems demonstrated by the subscale testing.[1303]

In the area of spin and recovery, the Langley spin tunnel staff members conducted tests to identify the spin modes that might be exhibited by the X-29 and the size of emergency spin recovery parachute recommended for the flight-test vehicles. They also investigated a growing concern within the airplane development program that the inherently unstable configuration might exhibit longitudinal tumbling during maneuvers involving low speeds and extreme angles of attack (such as during recovery from a "zoom climb” to zero airspeed). This concern fell into the general category of ensuring that aircraft motions could not overpower the relatively ineffective aerodynamic controls of configurations with relaxed stability at low-speed conditions.

Using a unique, single-degree-of-freedom test apparatus, the research team demonstrated that tumbling might be encountered but that the aft-fuselage strake flaps—intended to be only trimming devices—could be used to prevent uncontrollable tumbling.[1304] As a result of these tests, the airplane’s control system was modified to use the flaps as active control devices, and with this modification, subsequent flight tests of the X-29 demonstrated a high degree of resistance to tumbling.

In 1987, Langley conducted high-angle-of-attack and poststall assessments of the X-29 using the Langley helicopter drop-model technique that had been applied to numerous configurations since the early 1960s. However, the inherent aerodynamic longitudinal instability and sophisticated flight control architecture of the X-29 required an extensive upgrade to Langley’s test technique. The test program was considered the most challenging drop-model project ever conducted by Langley to that time. Among several highlights of the study was a demonstration that the large-amplitude wing rock exhibited earlier by the unaugmented wind tunnel free-flight model also existed for the drop model. In fact, when the angle of attack was increased beyond 30 degrees, the roll oscillations became divergent, and the model exhibited uncontrollable 360-degree rolls that resulted in severe poststall gyrations. When the active wing-rock roll control system of the airplane was simulated, the roll motions were damped and controllable to extreme angles of attack.[1305]

Two X-29 research aircraft conducted joint DARPA-NASA-Grumman flight tests at NASA Dryden from 1984 to 1992.[1306] The first aircraft was used to verify the benefits of advanced technologies and expand the envelope to an angle of attack of about 23 degrees and to a Mach number of about 1.5. The second X-29 was equipped with hardware and software modifications for low-speed flight conditions at angles of attack up to about 70 degrees. The test program for X-29 No. 2 was planned and accomplished using collated results from wind tunnel tests, drop-model tests, simulator results, and results obtained from X-29 No. 1 at lower angles of attack. Dryden and the Air Force Flight Test Center designed flight control system modifications, and Grumman made the modifications. The high-angle-of-attack flight program included 120 flights between 1989 and 1991. Dryden researchers conducted a series of aerodynamic investigations in mid-1991 to assess the symmetry of flow from the fuselage forebody, the flow separation patterns on the wing as angle of attack was increased, and the flow quality at the vertical tail location.[1307] In 1992, the Air Force conducted an additional 60 flights to evaluate the effectiveness of forebody vortex flow control using blowing.

The results of the high-angle-of-attack X-29 program were extremely impressive. Using only aerodynamic controls and no thrust vectoring, X-29 No. 2 demonstrated positive and precise pitch-pointing capability at angles of attack as high as 70 degrees, and all-axis maneuverability for 1 g flight up to an angle of attack of 45 degrees with lateral-directional control maintained. The wing-rock characteristic predicted by the Langley model tests was observed for angles of attack greater than about 35 degrees, but the motions were much milder than those exhibited by the models. It was concluded that the Reynolds number effects observed between model testing and full-scale flight tests were responsible for the discrepancy, as flight-test values were an order of magnitude greater than those of the subscale tests.

LIFT-FAN LIMITATIONS

It is recommended that a nose-mounted lift-fan NOT be incorporated into the design of the SSTOVLF for pitch attitude control. XV-5A flight tests demonstrated that although the pitch-fan proved to be effective for pitch attitude control, fan ram drag forces caused adverse handling qualities and reduced the conversion airspeed corridor. It is thus recommended that a reaction control system be incorporated.

The X-14A roll-control lift-fan tests revealed that control of rolling moment by varying fan rpm was unacceptable due to poor fan rpm response characteristics, even when closed-loop control techniques were employed. Thus, this method should not be considered for the SSTOVLF. However, lift-fan thrust spoiling proved to be successful in the XV-5 and is recommended for the SSTOVLF.

Avoidance of the fan stall boundary placed significant operational limitations on the XV-5 and had the potential of doing the same with the SSTOVLF. Fan stall, like wing stall, must be avoided, and a well-defined safety margin is required. Approach to the fan stall boundary proved to be a particular problem in the XV-5B, especially when performing steep terminal area maneuvers during simulated or real instrument landing approaches. The SSTOVLF preliminary designers must account for anticipated fan stall limitations and allow for adequate safety margins when determining SSTOVLF configurations and flight profile specifications.