Category AERONAUTICS

Fly-By-Wire: Fulfilling Promise and Navigating Around Nuance

As designers and flightcrews became more comfortable with electronic flight control systems and the systems became more reliable, the idea of removing the extra weight of the pilot's mechanical control system began to emerge. Pilots resisted the idea because electrical systems do fail, and the pilots (especially military pilots) wanted a "get-me-home" capability. One flight-test program received little attention but contributed a great deal to the acceptance of fly-by-wire technology. The Air Force initiated a program to demonstrate that a properly designed fly-by-wire control system could be more reliable and survivable than a mechanical system. The F-4 Survivable Flight Control System (SFCS) program was initiated in the early 1970s. Many of the then-current accepted practices for flight control installations were revised to improve survivability. Four independent analog computer systems provided fail-operational/fail-operational (FOFO) redundancy. A self-adaptive gain changer was also included in the control logic (similar to the MH-96 in the X-15). Redundant computers, gyros, and accelerometers were eventually mounted in separate locations in the airplane, as were power supplies. Flight control system wire bundles for redundant channels were separated and routed through different parts of the airplane. Individual surface actuators (one aileron, for example) could be operated to maintain control when the opposite control surface was inoperative. The result was a flight control system that was lighter yet more robust than a mechanical system (which could be disabled by a single failure of a pushrod or cable). After development flight-testing of the SFCS airplane was completed, the standard F-4 mechanical backup system was removed, and the airplane was flown in a completely fly-by-wire configuration.[700]

The first production fly-by-wire airplane was the YF-16. It used four redundant analog computers with FOFO capability. The airplane was not only the first production aircraft to use FBW control, it was also the first airplane intentionally designed to be unstable in the pitch axis while flying at subsonic speeds ("relaxed static stability"). The YF-16 prototype test program allowed the Air Force and General Dynamics to iron out the quirks of the FBW control system as well as the airplane aerodynamics before entering the full-scale development of the F-16A/B. The high gains required for flying the unstable airplane resulted in some structural resonance and limit-cycle problems. The addition of external stores (tanks, bombs, and rockets) altered the structural mode frequencies and required fine-tuning of the control laws. Researchers and designers learned that flight control system design and aircraft interactions in the emergent FBW era were clearly far more complex and nuanced than control system design in the era of direct mechanical feedback and the augmented hydromechanical era that had followed.[701]

Configuration Influence upon Stall and Departure Behavior

Another maneuver that can lead to loss of control is a stall. An aircraft "stalls" when the wing's angle of attack exceeds a critical angle beyond which the wing can no longer generate the lift necessary to support the airplane. A typical stall consists of some pre-stall warning buffet as the flow over the wing begins to break down, followed by stall onset, usually accompanied by an uncommanded nose-down pitching rotation of the aircraft, as gravity takes over and the airplane naturally tries to regain lost airspeed. The loss of control for a normal stall is quite brief and can usually be overcome, or prevented, by proper control application at the time of pre-stall warning. Design features of some aircraft, however, result in quite different stall characteristics. A stall may be a straightforward, gentle wings-level drop (typically leading to a swift and smooth recovery), a sharply abrupt break, or an unsymmetrical wing drop leading to a spin entry. The latter can be quite hazardous.
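The relation described above (at the critical angle of attack the wing can no longer generate lift equal to the airplane's weight) can be sketched with the standard lift equation. The numbers below are purely illustrative assumptions, not figures for any aircraft discussed here:

```python
import math

def stall_speed(weight_n, wing_area_m2, cl_max, rho=1.225):
    """Speed below which the wing cannot generate lift equal to weight.

    In steady level flight, L = 0.5 * rho * V^2 * S * CL. At the
    critical angle of attack CL reaches its maximum, CL_max, so the
    minimum (stall) speed is V_s = sqrt(2 * W / (rho * S * CL_max)).
    """
    return math.sqrt(2.0 * weight_n / (rho * wing_area_m2 * cl_max))

# Illustrative numbers only: a 10,000 kg airplane with a 30 m^2 wing
# and a CL_max of 1.4, at sea-level air density.
v_s = stall_speed(10_000 * 9.81, 30.0, 1.4)
print(f"stall speed ~ {v_s:.1f} m/s")
```

Anything that reduces attainable CL_max, such as the disrupted flow over a swept or very thin wing, raises this minimum speed and moves the stall boundary closer to normal flight conditions.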

High-performance T-tail aircraft are particularly vulnerable to abnormal stall effects. Lockheed's sleek F-104 Starfighter incorporated a T-tail operating behind a short, stubby, and extremely thin wing. As the wing approached the critical stall angle, the wingtip vortexes impinged on the horizontal tail, creating an abrupt nose-up pitching moment, commonly referred to as a "pitch-up." The pitch-up placed the airplane in an uncontrollable flight environment: either a highly oscillatory spin or a deep stall (a stable condition in which the airplane remains locked in a high-angle-of-attack vertical descent). To prevent inadvertent pitch-ups, the aircraft was equipped with a "stick shaker" and a "stick kicker." The stick shaker created an artificial vibration of the stick, simulating stall buffet, as the airplane approached a stall. The stick kicker applied a sharp nose-down command to the horizontal tail when the airplane reached the critical condition for an impending pitch-up. A similar situation developed for the McDonnell F-101 Voodoo (also a T-tail behind a short, stubby wing). Stick shakers and kickers were quite successful in allowing these airplanes to operate safely throughout their operational lifespans. Overall, however, the T-tail layout was largely discredited for high-performance fighter and attack aircraft, the most successful postwar fighters being those with low-placed horizontal tails. Such a configuration, typified by the F-100, F-105, F-5, F-14, F-15, F-16, F/A-18, F-22, F-35, and a host of foreign aircraft, is now a design standard for tailed transonic and supersonic military aircraft. It was a direct outgrowth of the extensive testing the NACA did in the late 1940s and early 1950s on such aircraft as the D-558-2, the North American F-86, and the Bell X-5, all of which, to greater or lesser extents, suffered from pitch-up.

The advent of the swept wing brought its own challenges. In 1935, German aerodynamicist Adolf Busemann discovered that aircraft could operate at higher speeds, and closer to the speed of sound (Mach 1), by using swept wings. By the end of the Second World War, American NACA researcher Robert T. Jones of Langley Memorial Aeronautical Laboratory had independently discovered their benefits as well. The swept wing subsequently transformed postwar military and civil aircraft design, but it was not without its own quite serious problems. The airflow over a swept wing tends to move aft and outboard, toward the tip, which results in the wingtip stalling before the rest of the wing. Because the wingtip is aft of the wing root, the loss of lift at the tip causes an uncommanded nose-rise as the airplane approaches a stall. This nose-rise is similar to a pitch-up but not nearly as abrupt. It can be controlled by the pilot, and most swept wing airplanes have no control system features specifically intended to correct nose-rise problems. Understanding the manifestations of swept wing stall and swept wing pitch-up commanded a great deal of NACA and Air Force interest in the early years of the jet age, for reasons of both safety and combat effectiveness. Much of the NACA's research program on its three swept wing Douglas D-558-2 Skyrockets involved examination of these problems. Research included analysis of a variety of technological "fixes," such as sawtooth leading edge extensions, wing fences, and fixed and retracting slots. Since then, various combinations of flaps, flow direction fences, wing twist, and other design features have been used to overcome the tip-stall characteristic in modern swept wing airplanes, which, of course, include most commercial airliners.[748]

Three-Dimensional Flows and Hypersonic Vehicles

Three-dimensional flow-field calculation was, for decades, a frustrating impossibility. I recall colleagues in the 1960s who would have sold their children (at least they said) to be able to calculate three-dimensional flow fields. The number of grid points required for such calculations simply exceeded the capability of any computer at that time. With the advent of supercomputers, however, the practical calculation of three-dimensional flow fields became realizable. Once again, NASA researchers led the way. The first truly three-dimensional flow calculation of real importance was carried out by K. J. Weilmuenster in 1983 at the NASA Langley Research Center. He calculated the inviscid flow over a Shuttle-like body at angle of attack, including the shape and location of the three-dimensional bow shock wave. This was no small feat at the time, and it proved to the CFD community that the time had come for such three-dimensional calculations.[780]

This was followed by an even more spectacular success. In 1986, using the predictor-corrector method conceived by NASA Ames Research Center's Robert MacCormack, Joseph S. Shang and S. J. Scherr of the Air Force Flight Dynamics Laboratory (AFFDL) published the first Navier-Stokes calculation of the flow field around a complete airplane. The airplane was the "X-24C," a proposed (though never completed) rocket-powered Mach 6+ hypersonic test vehicle conceived by the AFFDL, and the calculation was made for flow conditions at Mach 5.95. The mesh system consisted of 475,200 grid points throughout the flow field, and the explicit time-marching procedure took days of computational time on a Cray computer. But it was the first such calculation and a genuine watershed in the advancement of computational fluid dynamics.[781]

[Figure: X-24C computed surface streamlines. From author's collection.]

Note that both of these pioneering three-dimensional calculations were carried out for hypersonic vehicles, once again underscoring both the importance of hypersonic aerodynamics as a major driving force behind the development of computational fluid dynamics and the leading role played by NASA in driving the whole field of hypersonics.[782]

Jet Propulsion Laboratory

The Jet Propulsion Laboratory (JPL) began as an informal group of students and staff from the California Institute of Technology (Caltech) who experimented with rockets before and during World War II; evolved afterward into the Nation's center for unpiloted exploration of the solar system and deep space, operating the related tracking and data acquisition systems; and was managed for NASA by Caltech.[890] Dr. Theodore von Kármán, then head of Caltech's Guggenheim Aeronautical Laboratory, shepherded this group into becoming a center of rocket research for the Army. Upon NASA's formation in 1958, JPL came under NASA's responsibility.[891]

Consistent with its origins and Caltech's continuing role in its management, JPL's orientation has always emphasized advanced experimental and analytical research in various disciplines, including structures. JPL developed efficiency improvements for NASTRAN as early as 1971.[892] Other JPL research included basic finite element techniques, high-velocity impact effects, the effect of spin on structural dynamics, geometrically nonlinear structures (i.e., structures that deflect enough to significantly alter their structural properties), rocket engine structural dynamics, flexible manipulators, system identification, random processes, and optimization. The most notable products of this work are VISCEL, TEXLESP-S, and PID (AU-FREDI and MODE-ID).[893]

VISCEL (for Visco-Elastic and Hyperelastic Structures) and TEXLESP-S treat special classes of materials that general-purpose finite element codes typically cannot handle. VISCEL treats visco-elastic problems, in which materials exhibit viscosity (normally a fluid characteristic) as well as elasticity. VISCEL was introduced in 1971 and was adopted by industry over the next decade.[894] In 1982, the Shell Oil Company used VISCEL to validate a proprietary code then in development for the design of plastic products.[895] In 1984, AiResearch was using VISCEL to analyze seals and similar components in aircraft auxiliary power units (APUs).[896]

JPL has been leading research in the structural dynamics of solid rockets almost since the laboratory was first established. TEXLESP-S was developed specifically for the analysis of solid rocket fuels, which may be polymeric materials exhibiting hyperelastic behavior. TEXLESP-S is a finite element code developed for large-strain (hyperelastic) problems, in which materials may be purely elastic but exhibit such large strain deformations that the geometric configuration of the structure is significantly altered. (This is distinct from the small-strain, large-deflection situations that can occur, for example, with long flexible booms on spacecraft.)[897]

System Identification/Parameter Identification (PID, including AU-FREDI and MODE-ID) is the use of empirical data to build or tune a mathematical model of a system. PID is used in many disciplines, including automatic control, flight-testing, and structural analysis.[898] Ideally, the system is excited by systematically driving specific modes. Such controlled excitation is not always practical, however, and even under the best of circumstances there is some uncertainty in the interpretation of the data. The MODE-ID program was developed in 1988 to estimate not only the modal parameters of a structure, but also the level of uncertainty with which those parameters have been estimated:

Such a methodology is presented which allows the precision of the estimates of the model parameters to be computed.

It also leads to a guiding principle in applications. Namely, when selecting a single model from a given class of models, one should take the most probable model in the class based on the experimental data. Practical applications of this principle are given which are based on the utilization of measured seismic motions in large civil structures. Examples include the application of a computer program MODE-ID to identify modal properties directly from seismic excitation and response time histories from a nine-story steel-frame building at JPL and from a freeway overpass bridge.[899]

Another system identification program, Autonomous Frequency Domain Identification (AU-FREDI), was developed for the identification of structural dynamic parameters and the development of control laws for large and/or flexible space structures. It was intended for online design and tuning of robust controllers, i.e., for developing control laws in real time, although it could be modified for offline use as well. AU-FREDI was developed in 1989, validated in the Caltech/Jet Propulsion Laboratory Large Spacecraft Control Laboratory, and made publicly available.[900] This is just a small sample of the research that JPL has conducted and sponsored in system identification, control of flexible structures, integrated control/structural design, and related fields. While intended primarily for space structures, this research also has relevance for medicine, manufacturing technology, and the design and construction of large, ground-based structures.
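As a toy illustration of the system identification idea described in this section, the sketch below recovers the frequency and damping of a single simulated structural mode from its free-decay response alone, using the classical log-decrement method. It is not drawn from MODE-ID or AU-FREDI themselves, whose algorithms handle multiple modes, noisy records, and uncertainty estimates; all numbers are illustrative:

```python
import math

# Simulate a free-decay response of one structural mode, then recover
# its frequency and damping from the "measured" signal alone.
f_true, zeta_true = 2.0, 0.03          # natural frequency (Hz), damping ratio
dt = 0.001                             # sample interval (s)
wn = 2 * math.pi * f_true              # natural circular frequency
wd = wn * math.sqrt(1 - zeta_true**2)  # damped circular frequency
signal = [math.exp(-zeta_true * wn * i * dt) * math.cos(wd * i * dt)
          for i in range(20_000)]

# Locate successive positive peaks of the decaying oscillation.
peaks = [(i, x) for i, x in enumerate(signal[1:-1], 1)
         if x > signal[i - 1] and x > signal[i + 1] and x > 0]
(i1, p1), (i2, p2) = peaks[0], peaks[1]

# The interval between peaks gives the frequency; the logarithmic
# decrement of their amplitudes gives the damping ratio.
f_est = 1.0 / ((i2 - i1) * dt)
delta = math.log(p1 / p2)
zeta_est = delta / math.sqrt(4 * math.pi**2 + delta**2)
print(f"identified f = {f_est:.3f} Hz, zeta = {zeta_est:.4f}")
```

Real modal identification codes fit many such parameters simultaneously to measured excitation and response histories, and, as the quoted passage notes, MODE-ID additionally quantifies how precisely each parameter has been estimated.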

NPLOT (Goddard, 1982)

NPLOT was a product of research into the visualization of finite element models, which had been ongoing at Goddard since the introduction of NASTRAN. A fast hidden-line algorithm was developed in 1982 and became the basis for the NPLOT plotting program for NASTRAN, publicly released initially in 1985 and in improved versions into the 1990s.[987]

Integrated Modeling of Optical Systems (IMOS) (Goddard and JPL, 1990s)

A combined multidisciplinary code, IMOS was developed during the 1990s by Goddard and JPL: "Integrated Modeling of Optical Systems (IMOS) is a finite element-based code combining structural, thermal, and optical ray-tracing capabilities in a single environment for analysis of space-based optical systems."[988] IMOS represents a recent step in the continuing evolution of Structural-Thermal-Optical analysis capability, which has been an important activity at the Space Flight Centers since the early 1970s.

Materials Research and Development: The NASP Legacy

The National Aero-Space Plane (NASP) program had much to contribute to metallurgy, with titanium being a particular beneficiary. Titanium has lately come to the fore in aircraft construction because of its high strength-to-density ratio, high corrosion resistance, and ability to withstand moderately high temperatures without creeping. Careful redesign is required to introduce it, and it appears only in limited quantities in aircraft that are out of production. But newer aircraft have made increasing use of it, including those of the two largest manufacturers of medium- and long-range commercial jetliners, Boeing and Airbus, whose aircraft and the weight of titanium they use are shown in Table 4.

TABLE 4: BOEING AND AIRBUS AIRCRAFT MAKING SIGNIFICANT USE OF TITANIUM

AIRCRAFT (INCLUDING THEIR ENGINES)    WEIGHT OF TI, IN METRIC TONS
Boeing 787                            134
Boeing 777                             59
Boeing 747                             45
Boeing 737                             18
Airbus A380                           145
Airbus A350                            74
Airbus A340                            32
Airbus A330                            18
Airbus A320                            12

These numbers offer ample evidence of the increasing prevalence of titanium as a mainstream (and hence no longer "exotic") aviation material, mirroring its use elsewhere in the commercial sector. For example, in the 1970s, the Parker Pen Company used titanium in its T-1 line of ballpoint pens and rollerballs, which it introduced in 1971; production stopped in 1972 because of the high cost of the metal. Hammerheads fabricated of titanium entered service in 1999. Their light weight allows a longer handle, which increases the speed of the head and delivers more energy to the nail while decreasing arm fatigue. Titanium also substantially diminishes the shock transferred to the user because it generates much less recoil than a steel hammerhead.

In advancing titanium's use, techniques of powder metallurgy have been at the forefront. These methods give direct control of the microstructure of metals by forming them from powder, with the grains of powder sintering or welding together by being pressed in a mold at high temperature. A manufacturer can control the grain size independently of any heat-treating process. Powder metallurgy also overcomes restrictions on alloying by mixing in the desired additives as powdered ingredients.

Several techniques exist to produce the powders. Grinding a metal slab to sawdust is the simplest, though it yields relatively coarse grains. "Splat-cooling" gives better control. It extrudes molten metal onto the chilled rim of a rotating wheel that cools it instantly into a thin ribbon. This represents a quenching process that produces a fine-grained microstructure in the metal. The ribbon then is chemically treated with hydrogen, which makes it brittle so that it can be ground into a fine powder. Heating the powder then drives off the hydrogen.

The Plasma Rotating Electrode Process, developed by the firm of Nuclear Metals, has shown particular promise. The parent metal is shaped into a cylinder that rotates at up to 30,000 revolutions per minute (rpm) and serves as an electrode. An electric arc melts the spinning metal, which throws off droplets within an atmosphere of cool inert helium. The droplets plummet in temperature by thousands of degrees within milliseconds, and their microstructures are so fine as to approach an amorphous state. Their molecules do not form crystals, even tiny ones, but arrange themselves in formless patterns. This process, called "rapid solidification," has brought particular gains in high-temperature strength.

Standard titanium alloys lose strength at temperatures above 700 to 900 °F. By using rapid solidification, McDonnell Douglas raised this limit to 1,100 °F prior to 1986, when NASP got underway. Philip Parrish, the manager of powder metallurgy at the Defense Advanced Research Projects Agency (DARPA), notes that his agency spent some $30 million on rapid-solidification technology in the decade after 1975. In 1986, he described it as "an established technology. This technology now can stand alongside such traditional methods as ingot casting or drop forging."[1095]

Eleven hundred degrees nevertheless was not enough. But after 1990, the advent of new baseline configurations for the X-30 led to an appreciation that the pertinent areas of the vehicle would face temperatures no higher than 1,500 °F. At that temperature, advanced titanium alloys could serve in metal matrix composites (MMCs), with thin-gauge metals being reinforced with fibers.

A particular composition came from the firm of Titanium Metals and was designated Beta-21S. That company developed it specifically for the X-30 and patented it in 1989. It consisted of Ti along with 15Mo+2.8Cb+3Al+0.2Si. Resistance to oxidation proved to be its strong suit, with this alloy showing resistance two orders of magnitude greater than that of conventional aircraft titanium. Tests showed that it could also be exposed repeatedly to leaks of gaseous hydrogen without being subject to embrittlement. Moreover, it lent itself readily to being rolled to foil-gauge thicknesses of 4 to 5 mils in the fabrication of MMCs.[1096]

There also was interest in using carbon-carbon for primary structure. Here the property that counted was not its heat resistance but its light weight. In an important experiment, the firm of LTV fabricated half of an entire wing box of this material. An airplane's wing box is a major element of aircraft structure that joins the wings and provides a solid base for attachment of the fuselage fore and aft. Indeed, one could compare it with the keel of a ship. It extends to left and right of the aircraft centerline, with LTV's box constituting the portion to the left of this line. Built at full scale, it represented a hot-structure wing proposed by General Dynamics. It measured 5 by 8 feet, with a maximum thickness of 16 inches. Three spars ran along its length, five ribs were mounted transversely, and the complete assembly weighed 802 pounds.

The test plan called for the wing box to be pulled upward at the tip to reproduce the bending loads of a wing in flight. Torsion, or twisting, was to be applied by pulling more strongly on the front or rear spar. The maximum load corresponded to having the X-30 execute a pullup maneuver at Mach 2.2 with the wing box at room temperature. With the ascent continuing and the vehicle undergoing aerodynamic heating, the next key event brought the maximum difference between the temperatures of the top and bottom of the wing box, with the former at 994 °F and the latter at 1,671 °F. At that moment, the load on the wing box corresponded to 34 percent of the Mach 2.2 maximum. Later in the flight, the wing box was to reach its peak temperature, 1,925 °F, on the lower surface. These three points were to be reproduced through mechanical forces applied at the ends of the spars and through the use of graphite heaters.

But several important parts delaminated during their fabrication, which seriously compromised the ability of the wing box to bear its specified loads. Plans to impose the peak or Mach 2.2 load were abandoned, with the maximum planned load being reduced to the 34 percent associated with the maximum temperature difference. For the same reason, the application of torsion was deleted from the test program. Amid these reductions in the scope of the structural tests, two exercises went forward during December 1991. The first took place at room temperature and successfully reached the mark of 34 percent without causing further damage to the wing box.

The second test, a week later, reproduced the condition of peak temperature difference while briefly applying the calculated load of 34 percent. The plan then called for further heating to the peak temperature of 1,925 °F. As the wing box approached this value, a difficulty arose because of the use of metal fasteners in its assembly. Some were made from coated columbium and were rated for 2,300 °F, but most were a nickel alloy that had a permissible temperature of 2,000 °F. However, an instrumented nickel-alloy fastener overheated and reached 2,147 °F. The wing box showed a maximum temperature of 1,917 °F at that moment, and the test was terminated because the strength of the fasteners was now in question. This test nevertheless counted as a success because it had come within 8 degrees of the specified temperature.[1097]

Both tests thus were marked as having achieved their goals, but their merits were largely in the mind of the beholder. The entire project would have been far more impressive if it had avoided delamination, had successfully achieved the Mach 2.2 peak load, and had subjected the wing box to repeated cycles of bending, torsion, and heating. This effort stood as a bold leap toward a future in which carbon-carbon might take its place as a mainstream material, but it was clear that this future would not arrive during the NASP program. However, the all-carbon-composite airplane, as distinct from one of carbon-carbon, has now become a reality. Carbon-carbon retains its strength at high temperature, whereas ordinary carbon composite, with its resin matrix, burns or melts readily. The airplane that showcases carbon composites is the White Knight 2, built by Burt Rutan's Scaled Composites firm as part of the Virgin Galactic venture that is to achieve commercial space flight. As of this writing, White Knight 2 is the world's largest all-carbon-composite aircraft in service; even its control wires are carbon composite. Its 140-foot-span wing is the longest single carbon composite aviation component ever fabricated.[1098]

Far below this rarefied world of transatmospheric tourism, carbon composites are becoming the standard material for commercial aviation. Aluminum had held this role since 1930 for aircraft flying at speeds up to Mach 2, but after 80 years, Boeing is challenging this practice with its 787 airliner. By weight, the 787 is 50 percent composite, 20 percent aluminum, 15 percent titanium, 10 percent steel, and 5 percent other materials; by volume, it is 80 percent composite. Each 787 contains 35 tons of composite reinforced with 23 tons of carbon fiber. Composites are used on the fuselage, wings, tail, doors, and interior. Aluminum appears at the wing and tail leading edges, with titanium used mainly in the engines. The extensive application of composites promotes light weight and long range: the 787 can fly nonstop from New York City to Beijing. The makeup of the 787 contrasts notably with that of the Boeing 777. Itself considered revolutionary when it entered service in 1995, the 777 nevertheless had a structure that was 50 percent aluminum and only 12 percent composite. The course to all-composite construction is clear; if the path is not yet fully trodden, the goal is plainly in sight. As 1930 marked the point when all-metal structures first predominated in American commercial aviation, so 2010 marks the same point for the evolution of commercial composite aircraft.

The NASP program also dealt with beryllium. This metal had only two-thirds the density of aluminum and possessed good strength, but its temperature range was restricted. The conventional metal had a limit of some 850 °F, while an alloy from Lockheed called Lockalloy, which contained 38 percent aluminum, was rated only for 600 °F. It had never become a mainstream material like titanium, but, for the X-30, it offered the advantage of high thermal conductivity. Work with titanium had greatly increased its temperatures of use, and there was hope of achieving similar results with beryllium.

Initial efforts used rapid-solidification techniques and sought temperature limits as high as 1,500 °F. These attempts bore no fruit, and from 1988 onward the temperature goal fell lower and lower. In May 1990, a program review shifted the emphasis away from high-temperature formulations toward the development of beryllium as a metal suitable for use at cryogenic temperatures. Standard forms of this metal became unacceptably brittle when only slightly colder than -100 °F, but cryoberyllium proved to be out of reach as well. By 1992, investigators were working with ductile alloys of beryllium, sacrificing all prospect of use at temperatures beyond a few hundred degrees while winning only modest improvements in low-temperature capability. Terence Ronald, the NASP materials director, wrote in 1995 of rapid-solidification versions with temperature limits as low as 500 °F, which was not what the X-30 needed to reach orbit.[1099]

In sum, the NASP materials effort scored a major advance with Beta-21S, but the genuinely radical possibilities failed to emerge. These included carbon-carbon as primary structure along with alloys of beryllium rated for temperatures well above 1,000 °F. The latter, if available, might have led to a primary structure with the strength and temperature resistance of Beta-21S but with less than half the weight. Indeed, such weight savings would have ramified throughout the entire design, leading to a configuration that would have been smaller and lighter overall.

Generally, work with materials fell well short of its goals. In dealing with structures and materials, the contractors and the National Program Office established 19 program milestones that were to be accomplished by September 1993. A General Accounting Office program review, issued in December 1992, noted that only six of them would indeed be completed.[1100] This slow progress encouraged conservatism in drawing up the bill of materials, but this conservatism carried a penalty.

When the scramjets faltered in their calculated performance and the X-30 gained weight while falling short of orbit, designers lacked recourse to new and very light materials, such as beryllium and carbon-carbon, that might have saved the situation. With this, NASP spiraled to its end. The future belonged to other, less ambitious but more attainable programs, such as the X-43 and X-51. They, too, would press the frontier of aerothermodynamic structural design as they pioneered the hypersonic frontier.

The Lightweight Fighter Program and the YF-16

In addition to the NASA F-8 DFBW program, several other highly noteworthy efforts involving the use of computer-controlled fly-by-wire flight control technology occurred during the 1970s. The Air Force had initiated the Lightweight Fighter program in early 1972. Its purpose was "to determine the feasibility of developing a small, light-weight, low-cost fighter, to establish what such an aircraft can do, and to evaluate its possible operational feasibility."[1167] The LWF effort was focused on demonstrating technologies that provided a direct contribution to performance, were of moderate risk (but sufficiently advanced to require prototyping to reduce risk), and helped hold both procurement and operating costs down. Two companies, General Dynamics (GD) and Northrop, were selected, and each was given a contract to build two flight-test prototypes. These would be known as the YF-16 and the YF-17. In its YF-16 design, GD chose to use an analog-computer-based quadruplex fly-by-wire flight control system with no mechanical backup. The aircraft had been designed with a negative longitudinal static stability margin of between 7 percent and 10 percent in subsonic flight; that is, its center of gravity was aft of the aerodynamic center by a distance of 7 to 10 percent of the mean aerodynamic chord of the wing. A high-speed, computer-controlled fly-by-wire flight control system was essential to provide the artificial stability that made the YF-16 flyable. The aircraft also incorporated electronically activated and electrically actuated leading edge maneuvering flaps that were automatically configured by the flight control system to optimize lift-to-drag ratio based on angle of attack, Mach number, and aircraft pitch rate. A side stick controller was used in place of a conventional control column.[1168]
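The static margin arithmetic described above can be sketched as follows. The positions and chord length are illustrative assumptions, not YF-16 data; only the 7 to 10 percent negative margin comes from the text, and the sign convention (positive margin for a CG ahead of the aerodynamic center) is the conventional one:

```python
def static_margin(x_cg, x_ac, mac):
    """Static margin as a fraction of mean aerodynamic chord (MAC).

    Positive when the center of gravity lies ahead of the aerodynamic
    center (statically stable); negative when it lies behind, as on the
    YF-16 in subsonic flight ("relaxed static stability"). Positions
    are measured from the same datum, in consistent units.
    """
    return (x_ac - x_cg) / mac

# Illustrative numbers only: aerodynamic center 5.0 m from the nose
# datum, MAC of 3.4 m, and a CG 0.3 m behind the aerodynamic center,
# giving roughly the negative margin described in the text.
sm = static_margin(x_cg=5.3, x_ac=5.0, mac=3.4)
print(f"static margin = {sm:.1%}")
```

A negative result means any pitch disturbance tends to grow rather than damp out, which is why the flight control computers had to apply continuous corrective commands faster than a pilot could.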

Following an exceptionally rapid development effort, the first of the two highly maneuverable YF-16 technology demonstrator aircraft (USAF serial No. 72-1567) officially made its first flight in February 1974, piloted by General Dynamics test pilot Phil Oestricher. However, an unintentional first flight had actually occurred several weeks earlier, an event discussed in a following section as it relates to developmental issues with the YF-16 fly-by-wire flight control system. During its development, NASA provided major assistance to GD and the Air Force on the YF-16 in many technical areas. Fly-by-wire technology and the side stick controller concept originally developed by NASA were incorporated in the YF-16 design. The NASA Dryden DFBW F-8 was used as a flight testbed to validate the YF-16 side stick controller design. NASA Langley also helped solve numerous developmental challenges involving aerodynamics and control laws for the fly-by-wire flight control system. The aerodynamic configuration had been in development at GD since 1968. Initially, a sharp-edged strake fuselage forebody had been eliminated from consideration because it led to flow separation; however, rounded forward fuselage cross sections caused significant directional instability at high angles of attack. NASA aerodynamicists conducted wind tunnel tests at NASA Langley that showed the vortexes generated by sharp forebody strakes produced a more stable flow pattern with increased lift and improved directional stability. This and NASA research into leading- and trailing-edge flaps were used by GD in the development of the final YF-16 configuration, which was intensively tested in the Langley Full-Scale Wind Tunnel at high angle-of-attack conditions.[1169]

During NASA wind tunnel tests, deficiencies in stability and control, deep stall, and spin recovery were identified, even though GD had predicted the configuration to be controllable at angles of attack up to 36 degrees. NASA wind tunnel testing revealed serious loss of directional stability at angles of attack higher than 25 degrees. As a result, an automatic angle of attack limiter was incorporated into the YF-16 flight control system, along with other changes designed to address deep stall and spin issues. Ensuring adequate controllability at higher angles of attack also required further research on the ability of the YF-16's fly-by-wire flight control system to automatically limit certain other flight parameters during energetic air combat maneuvering. The YF-16's all-moving horizontal tails provided pitch control and were also designed to operate differentially to assist the wing flaperons in rolling the aircraft. The ability of the horizontal tails and longitudinal control system to limit the aircraft's angle of attack during maneuvers with high roll rates at low airspeeds was critically important. Rapid rolling maneuvers at low airspeeds and high angles of attack were found to create large nose-up trim changes because of inertial effects at the same time that the aerodynamic effectiveness of the horizontal tails was reduced.[1170]
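The kind of automatic angle-of-attack limiting described above can be sketched simply. All thresholds and gains below are invented for illustration; they are not the actual YF-16 schedule:

```python
# Hypothetical sketch of an angle-of-attack limiter: above an onset
# threshold, the pilot's nose-up pitch command is progressively
# overridden by a nose-down bias proportional to the AoA overshoot.
# Numbers are assumed for illustration only.

AOA_ONSET_DEG = 20.0   # assumed onset of limiting
AOA_MAX_DEG = 25.0     # assumed hard limit
K_ALPHA = 0.5          # assumed feedback gain (command units per degree)

def limited_pitch_command(pilot_cmd: float, aoa_deg: float) -> float:
    """Reduce nose-up command (positive) as AoA exceeds the onset value."""
    if aoa_deg <= AOA_ONSET_DEG:
        return pilot_cmd
    overshoot = min(aoa_deg, AOA_MAX_DEG) - AOA_ONSET_DEG
    # Subtract a nose-down bias proportional to the overshoot; the bias
    # can drive the net command nose-down regardless of pilot input.
    return pilot_cmd - K_ALPHA * overshoot

print(limited_pitch_command(1.0, 18.0))  # below onset: command passes through (1.0)
print(limited_pitch_command(1.0, 24.0))  # limiting active: net command is nose-down (-1.0)
```

In a real system, the limiter would also be scheduled against roll rate and airspeed, since, as noted above, rapid rolls at low airspeed aggravated the nose-up trim change.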

An important aspect of NASA's support to the YF-16 flight control system development involved piloted simulator studies in the NASA Langley Differential Maneuvering Simulator (DMS). The DMS provided a realistic means of simulating two aircraft or spacecraft operating with (or against) each other (for example, spacecraft conducting docking maneuvers or fighters engaged in aerial combat against each other). The DMS consisted of two identical fixed-base cockpits and projection systems, each housed inside a 40-foot-diameter spherical projection screen. Each projection system consisted of a sky-Earth projector to provide a horizon reference and a system for target-image generation and projection. The projectors and image generators were gimbaled to allow visual simulation with completely unrestricted freedom of motion. The cockpits contained typical fighter cockpit instruments, a programmable buffet mechanism, and programmable control forces, plus a g-suit that activated automatically during maneuvering.[1171] Extensive evaluations of the YF-16 flight control system were conducted in the DMS using pilots from NASA, GD, and the Air Force, including those who would later fly the aircraft. These studies verified the effectiveness of the YF-16 fly-by-wire flight control system and helped to identify critical flight control system components, timing schedules, and feedback gains necessary to stabilize the aircraft during high angle-of-attack maneuvering. As a result, gains in the flight control system were modified, and new control elements—such as a yaw rate limiter, a rudder command fadeout, and a roll rate limiter—were developed and evaluated.[1172]

Despite the use of the DMS and the somewhat similar GD Fort Worth domed simulator to develop and refine the YF-16 flight control system, nearly all flight control functions, including the roll stick force gradient, were initially too sensitive. This contributed to the unintentional YF-16 first flight by Phil Oestricher at Edwards AFB on January 20, 1974. The intent of the scheduled test mission that day was to evaluate the aircraft's pretakeoff handling characteristics. Oestricher rotated the YF-16 to a nose-up attitude of about 10 degrees when he reached 130 knots, with the airplane still accelerating slightly. He made small lateral stick inputs to get a feel for the roll response but initially got no response, presumably because the main gear were still on the ground. At that point, he slightly increased angle of attack, and the YF-16 lifted off the ground. The left wing then dropped rather rapidly. After a right roll command was applied, the aircraft went into a high-frequency pilot-induced oscillation. Before the roll oscillation could be stopped, the aft fin of the inert AIM-9 missile on the left wingtip lightly touched the runway, the right horizontal tail struck the ground, and the aircraft bounced on its landing gear several times, heading toward the edge of the runway. Believing it impossible to stay on the runway, Oestricher decided to take off. He touched down 6 minutes later and reported: "The roll control was too sensitive, too much roll rate as a function of stick force. Every time I tried to correct the oscillation, I got full-rate roll.” The roll control sensitivity problem was corrected with adjustments to the control gain logic. Stick force gradients and control gains continued to be refined during the flight-test program, with the YF-16 subsequently demonstrating generally excellent control characteristics.
Oestricher later said that the YF-16 control problem would have been discovered before the first flight if better visual displays had been available for flight simulators in the early 1970s.[1173] Lessons from the YF-16 and DFBW F-8 simulation experiences helped NASA, the Air Force, and industry refine the way that preflight simulation was structured to support new fly-by-wire flight control system development. Another flight control issue that arose during the YF-16 flight-test program involved an instability caused by interaction of the active fly-by-wire flight control system with the aeroelastic properties of the airframe. Flutter analysis had not accounted for the effects of active flight control. Closed loop control system testing on the ground had used simulated aircraft dynamics based on a rigid airframe modeling assumption. In flight, the roll sensors detected aeroelastic vibrations in the wings, and the active flight control system attempted to apply corrective roll commands. However, at times these actually amplified the airframe vibrations. This problem was corrected by reducing the gain in the roll control loop and adding a filter in the feedback path that suppressed the high-frequency signals from structural vibrations. The fact that this problem was also rapidly corrected added confidence in the ability of the fly-by-wire flight control system to be reconfigured. Another change made as a result of flight test was to fit a modified side stick controller that provided the pilot with some small degree of motion (although the control inputs to the flight control system were still determined by the amount of force being exerted on the side stick, not by its position).[1174]
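The filtering fix described above can be sketched in a few lines. A real installation would use a carefully designed notch or structural filter; the first-order low-pass filter below, with invented sample rate and frequencies, only illustrates the principle of keeping rigid-body motion in the feedback while rejecting structural vibration:

```python
# Sketch (hypothetical numbers): low-pass filter the roll-rate feedback
# so high-frequency structural vibration sensed by the roll gyros does
# not feed back into roll commands.

import math

DT = 0.01  # assumed 100 Hz control loop sample time

def lowpass(signal, cutoff_hz):
    """First-order low-pass filter: y += a * (x - y)."""
    a = DT / (DT + 1.0 / (2.0 * math.pi * cutoff_hz))
    y, out = 0.0, []
    for x in signal:
        y += a * (x - y)
        out.append(y)
    return out

# Simulated roll-rate sensor signal: slow rigid-body roll (1 Hz) plus a
# 25 Hz structural vibration component picked up by the sensors.
t = [i * DT for i in range(200)]
raw = [math.sin(2 * math.pi * 1.0 * ti) + 0.5 * math.sin(2 * math.pi * 25.0 * ti)
       for ti in t]
filtered = lowpass(raw, cutoff_hz=3.0)

# The filtered signal retains the rigid-body roll motion but strongly
# attenuates the 25 Hz structural content that was driving the loop.
```

Reducing the loop gain, the other half of the fix, simply scales down how aggressively the filtered roll-rate signal is converted into surface commands.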

Three days after its first official flight on February 2, 1974, the YF-16 demonstrated supersonic windup turns at Mach 1.2. By March 11, it had flown 20 times and achieved Mach 2.0 in an outstanding demonstration of the high systems reliability and excellent performance that could be achieved with a fly-by-wire flight control system. By the time the 12-month flight-test program ended January 31, 1975, the two YF-16s had flown a total of 439 flight hours in 347 flights, with the YF-16 Joint Test Force averaging over 30 sorties per month. Open communications between NASA, the Air Force, and GD had been critical to the success of the YF-16 development program. In highlighting this success, Harry J. Hillaker, GD Vice President and Deputy Program Director for the F-16, noted the vital importance of the "free exchange of experience from the U.S. Air Force Laboratories and McDonnell-Douglas 680J projects on the F-4 and from NASA's F-8 fly-by-wire research program.”[1175] The YF-16 would serve as the basis for the extremely successful family of F-16 multinational fighters; over 4,400 were delivered from assembly lines in five countries by 2009, and production is expected to continue to 2015. While initial versions of the production F-16 (the A and B models) used analog computers, later versions (starting with the F-16C) incorporated digital computers in their flight control systems.[1176] Fly-by-wire and relaxed static stability gave the F-16 a major advantage in air combat capability over conventional fighters when it was introduced, and this technology still makes it a viable competitor today, 35 years after its first flight.

The F-16's main international competition for sales at the time was another statically unstable, full fly-by-wire fighter, the French Dassault Mirage 2000, which first flew in 1978. Despite the F-16 being selected for European coproduction, over 600 Mirage 2000s would also eventually be built and operated by a number of foreign air forces. The other technology demonstrator developed under the LWF program was the Northrop YF-17. It featured a conventional mechanical/hydraulic flight control system and was statically stable. When the Navy decided to build the McDonnell-Douglas F/A-18, the YF-17 was scaled up to meet fleet requirements. Positive longitudinal static stability was retained, and a primary fly-by-wire flight control system was incorporated into the F/A-18's design. The flight control system also had an electric backup that enabled the pilot to transmit control inputs directly to the control surfaces, bypassing the flight control computer but using electrical rather than mechanical transmission of signals. A second backup provided a mechanical linkage to the horizontal tails only. These backup systems were possible because the F/A-18, like the YF-17, was statically stable about all axes.[1177]
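The layered fallback just described can be pictured as a simple selection chain. This is a hypothetical illustration of the concept, not the actual F/A-18 mode logic:

```python
# Hypothetical sketch of a layered flight control fallback: primary
# computed fly-by-wire, then a direct electrical link that bypasses
# the flight control computers, then a mechanical linkage to the
# horizontal tails only. Mode names and logic are invented for
# illustration.

from enum import Enum

class ControlMode(Enum):
    PRIMARY_FBW = 1       # computers shape pilot inputs via control laws
    DIRECT_ELECTRIC = 2   # pilot inputs sent electrically, no computer shaping
    MECHANICAL = 3        # mechanical link, horizontal tails only

def select_mode(computers_ok: bool, electric_path_ok: bool) -> ControlMode:
    """Fall back through the chain as each layer becomes unavailable."""
    if computers_ok:
        return ControlMode.PRIMARY_FBW
    if electric_path_ok:
        return ControlMode.DIRECT_ELECTRIC
    return ControlMode.MECHANICAL

print(select_mode(True, True))    # primary fly-by-wire
print(select_mode(False, True))   # direct electrical backup
print(select_mode(False, False))  # mechanical backup
```

The point the text makes is that the last two layers are only viable because the airframe is statically stable; an unstable design like the F-16 cannot fall back to unaugmented control.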

Japanese CCV T-2

In Japan, the CCV approach involved modification of a Mitsubishi T-2 jet training aircraft. Horizontal canards were fitted to reduce static stability, and an all-movable vertical surface was added to the forward fuselage to enable direct side force control investigations. The existing wing-mounted flaps were modified to enable direct lift control and maneuver load control studies. A triply redundant digital fly-by-wire flight control system was installed, with quadruplex force sensors to sense stick and rudder pedal inputs. Aircraft motion sensors (such as pitch, roll, and yaw rate gyros, and vertical and lateral acceleration sensors) were also quadruplex. The original mechanical flight control system was retained as a backup mode. Three identical digital computers processed sensor signals, and the resultant command signals were used to control the horizontal stabilizer, leading and trailing edge flaps, rudder, and vertical canard. Electrohydraulic actuators converted electrical signals into mechanical inputs for the control surface actuators. The CCV T-2 first flew in August 1983. After 24 flights by Mitsubishi, the aircraft was delivered to the Japanese Technical Research and Development Institute (TRDI) at Gifu Air Base in March 1984 for government flight-testing, which was completed in March 1986.[1220]
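Quadruplex sensing of the kind fitted to the CCV T-2 raises the question of how four redundant signals become one. A common technique in redundant flight control systems (offered here as a general illustration, not the documented T-2 implementation) is mid-value selection, which masks a single failed channel without needing to identify it:

```python
# Illustrative sketch of redundant-sensor consolidation by mid-value
# selection: discard the highest and lowest of four samples and average
# the middle two. A single hard-over or stuck channel is always one of
# the discarded extremes, so it cannot corrupt the output.

def mid_value_select(samples: list[float]) -> float:
    """Return the mean of the two middle values of four redundant samples."""
    s = sorted(samples)
    return (s[1] + s[2]) / 2.0

healthy = [5.01, 4.99, 5.02, 5.00]       # four channels in agreement
one_failed = [5.01, 4.99, 85.0, 5.00]    # one channel hard-over

print(mid_value_select(healthy))     # ~5.005
print(mid_value_select(one_failed))  # ~5.005, the failure is masked
```

With the bad channel masked rather than merely detected, the system keeps flying on the consensus value while monitoring logic flags the disagreeing channel for exclusion.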

These research programs (along with the Soviet Projekt 100LDU testbed discussed earlier) provided invaluable hands-on experience with state-of-the-art flight control technologies. Data from the Jaguar ACT and the CCV F-104G supported the Experimental Aircraft Program (EAP) and contributed to the technology base for the Anglo-German-Italian-Spanish Eurofighter multirole fighter, now known as the Typhoon. Many other advanced aircraft development programs, including the French Rafale, the Mitsubishi F-2 fighter, the Russian Su-27 family of fighters and attack aircraft, and the entire family of Airbus airliners, were beneficiaries of these research efforts. In addition, the importance of the infusion of technology made possible by open dissemination of NASA technical publications should not be underestimated.

Quiet Clean Short Haul Experimental Engine

A second wave of engine-improvement programs was initiated in 1969 and continued throughout the 1970s, as the noise around airports continued to be a social and political issue and the FAA tightened its environmental regulations. Moreover, with the oil crisis and energy shortage later in the decade adding to the forces requiring change, the airline industry once again turned to NASA for help in identifying new technology.

At the same time, the airline industry was studying the feasibility of introducing a new generation of commuter airliners to fly between cities along the Northeast corridor of the United States. To make these routes attractive to potential passengers, new airports would have to be built close to the centers of cities such as Boston, New York, and Philadelphia. For aircraft to fly into such airports, which would have shorter runways and strict noise requirements, the airliners would have to be capable of making steep climbs after takeoff, quick turns without losing control, and steep descents on approach to landing, accommodating short runways and meeting the standards for Stage 2 noise levels.[1301]

In terms of advancing propulsion technology, NASA's answer to all of these requirements was the Quiet Clean Short Haul Experimental Engine. Contracts were awarded to GE to design, build, and test two types of high-bypass fanjet engines: an over-the-wing engine and an under-the-wing engine. Self-descriptive as to their place on the airplane, both turbofans were based on the same core as GE's military F101 engine. Improvements to the design included noise-reduction features evolved from the Quiet Engine program; a drive-reduction gear to make the fan spin slower than the central shaft; a low-pressure turbine; advanced composite construction for the inlet, fan frame, and fan exhaust duct; and a new digital control system that allowed flight computers to monitor and control the jet engine's operation with more precision and quicker response than a pilot could.[1302]

In addition to those "standard” features on each engine, the under-the-wing engine tried out a variable pitch composite low-pressure fan and a 12-to-1 bypass ratio; both features were thought to be valuable in reducing noise, although the variable pitch proved challenging for the GE team leading the research. Two pitch change mechanisms were tested, one by GE and the other by Hamilton Standard. Both worked well in controlled test conditions but would need considerable work before they could go into production.[1303]

The over-the-wing engine incorporated a higher fan pressure ratio and a 10-to-1 bypass ratio, a fixed pitch fan, a variable area D-shaped fan exhaust nozzle, and low tip speeds on the fans. Both engines directed their exhaust along the surface of the wing, which required modifications to handle the hot gas and increase lift performance.[1304]

The under-the-wing engine was test-fired for 153 hours before it was delivered to NASA in August 1978, while the over-the-wing engine received 58 hours of testing and was delivered to NASA in July 1977. Results of the tests proved that the technology was sound and, when configured to generate 40,000 pounds of thrust, showed a reduction in noise of 8 to 12 decibels, or about 60- to 75-percent quieter than the quietest engines flying on commercial airliners at that time. The new technologies also resulted in sharp reductions in emissions of carbon monoxide and unburned hydrocarbons.[1305]

Unfortunately, the new generation of Short Take-Off and Landing (STOL) commuter airliners and small airports near city centers never materialized, so the new engine technology research managed and paid for by NASA but conducted mostly by its industry partners never found a direct commercial application. But there were many valuable lessons learned about the large-diameter turbofans and their nacelles, information that was put to good use by GE years later in the design and fabrication of the GE90 engine that powers the Boeing 777 aircraft.[1306]

Aircraft Materials and Structures

While refinements in engine design have been the cornerstone of NASA's efforts to improve fuel efficiency, the Agency has also sought to improve airframe structures and materials. The ACEE included not only propulsion improvement programs but also efforts to develop lightweight composite airframe materials and new aerodynamic structures that would increase fuel efficiency. Composite materials, which consist of a strong fiber such as glass and a resin that binds the fibers together, hold the potential to dramatically reduce the weight of aircraft and thereby improve their fuel efficiency.

Initially, Boeing began to investigate composite materials, using fiberglass for major parts such as the radome on the 707 and 747 commercial airliners.[1450] Starting around 1962, composite sandwich parts comprised of fiberglass-epoxy materials were applied to aircraft such as the Boeing 727 in a highly labor-intensive process.[1451] The next advance in composites was the use of graphite composites for secondary aircraft structures, such as wing control surfaces, wing trailing and leading edges, vertical fin and stabilizer control surfaces, and landing gear doors.[1452]

NASA research on composite materials began to gain momentum in 1972, when NASA and the Air Force undertook a study known as the Long Range Planning Study for Composites (RECAST) to examine the state of existing composites research. The RECAST study found two major obstacles to the use of composites: high costs and lack of confidence in the materials.[1453]

However, by 1976, interest in composite materials had picked up steam because they are lighter than aluminum and therefore have the potential to increase aircraft fuel efficiency. Research on composites was formally wrapped into ACEE in the form of the Composite Primary Aircraft Structures program. NASA hoped that research on composites would yield fuel savings of 15 percent for large aircraft by the 1990s.

NASA's efforts under ACEE ultimately led the aircraft manufacturing industry to normalize the use of composites in its manufacturing processes, driving down costs and making composites far more common in aircraft structures. "Ever since the ACEE program has existed, manufacturers have been encouraged by the leap forward they have been able to make in composites,” Jeffrey Ethell, the late aviation author and analyst, wrote in his 1983 account of NASA's fuel-efficiency programs. "They have moved from what were expensive, exotic materials to routine manufacture by workers inexperienced in composite structures.”[1454] Today, composite materials have widely replaced metallic materials on parts of an aircraft's tail, wings, fuselage, engine cowlings, and landing gear doors.[1455]

NASA research under ACEE also led to the development of improved aerodynamic structures and active controls. This aspect of ACEE was known as the Energy Efficient Transport (EET) program. Aerodynamic structures can improve the way that the aircraft's geometry affects the airflow over its entire surface. Active controls are flight control systems that use computers and sensors to move aircraft surfaces to limit unwanted motion or aerodynamic loads on the aircraft structure and to increase stability. Active controls lighten the weight of the aircraft because they replace heavy hydraulic lines, rods, and hinges. They also allow for reductions in the size and weight of the wing and tail. Both aerodynamic structures and active controls can increase fuel efficiency because they reduce weight and drag.[1456]

One highly significant aerodynamic structure that was explored under ACEE was the supercritical wing. During the 1960s and 1970s, Richard Whitcomb, an aeronautical engineer at NASA Langley Research Center, led the development of the new airfoil shape, which has a flattened top surface to reduce drag and tends to be rounder on the bottom, with a downward curve at the trailing edge to increase lift. ACEE research at NASA Dryden led to the finding that the supercritical wing could lead to increased cruising speed and flight range, as well as an
increase in fuel efficiency of about 15 percent over conventional-wing aircraft. Supercritical wings are now in widespread use on modern subsonic commercial transport aircraft.[1457]

Whitcomb also conducted research on winglets, which are vertical extensions of wingtips that can improve an aircraft's fuel efficiency and range. He predicted that adding winglets to transport-size aircraft would lead to improved cruising efficiencies of between 6 and 9 percent. In 1979 and 1980, flight tests involving a U.S. Air Force KC-135 aerial refueling tanker demonstrated an increased mileage rate of 6.5 percent.[1458] The first big commercial aircraft to feature winglets was the MD-11, built by McDonnell-Douglas, which is now a part of Boeing. Today, winglets are commonly found on many U.S.- and foreign-made commercial airliners.[1459]

Laminar flow is another important fuel-saving aircraft concept spearheaded by NASA. Aircraft designed to maximize laminar flow offer the potential for as much as a 30-percent decrease in fuel usage, a benefit that can be traded for increases in range and endurance. The idea behind laminar flow is to minimize turbulence in the boundary layer, the thin layer of air that skims over the aircraft's surface. The amount of turbulence in the boundary layer increases with the speed of the aircraft and the distance air travels along its surface. The more turbulence, the more frictional drag the aircraft will experience. In a subsonic transport aircraft, about half the fuel required to maintain level flight in cruise goes to overcoming frictional drag in the boundary layer.[1460]
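The payoff of keeping the boundary layer laminar can be seen from standard flat-plate skin-friction correlations. These are textbook estimates, not figures from the NASA programs described here:

```python
# Rough flat-plate comparison of average skin-friction coefficient for
# laminar vs. turbulent boundary layers at the same Reynolds number,
# using standard textbook correlations. Re value is an assumed,
# representative chord Reynolds number for a transport wing.

def cf_laminar(re: float) -> float:
    """Blasius flat-plate result: Cf = 1.328 / sqrt(Re)."""
    return 1.328 / re ** 0.5

def cf_turbulent(re: float) -> float:
    """Common empirical correlation: Cf = 0.074 / Re^(1/5)."""
    return 0.074 / re ** 0.2

re = 1.0e7
ratio = cf_laminar(re) / cf_turbulent(re)
print(f"laminar/turbulent friction ratio at Re=1e7: {ratio:.2f}")
# prints "laminar/turbulent friction ratio at Re=1e7: 0.14"
```

A laminar boundary layer thus generates only a small fraction of the friction drag of a turbulent one at the same conditions, which is why even partial laminarization of a wing is worth the trouble of suction systems and careful contouring.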

There are two methods used to achieve laminar flow: active and passive. Active Laminar Flow Control (LFC) seeks to reduce turbulence in the boundary layer by removing a small amount of fluid (air) from the boundary layer. Active LFC test sections on an aircraft wing contain tiny holes or slots that siphon off the most turbulent air by using an internal suction system. Passive laminar flow does not involve a suction system to remove turbulent air; instead, it relies on careful contouring of the wing's surface to reduce turbulence.[1461]

An F-16XL flow visualization test. This F-16 Scamp model was tested in the NASA Langley Research Center Basic Aerodynamics Research Tunnel. This was a basic flow visualization test using a laser light sheet to illuminate the smoke. NASA.

In 1990, NASA and Boeing sponsored flight tests of a Boeing 757 that used a hybrid of both active and passive LFC. The holes or slots used in active LFC can get clogged with bugs. As a result, NASA and Boeing used a hybrid LFC system on the 757 that limited the air extraction system to the leading edge of the wing, followed by a run of natural laminar flow.[1462] Based on the flight tests, engineers calculated that the application of hybrid LFC on a 300-passenger, long-range subsonic transport could provide a 15-percent reduction in fuel burned, compared with a conventional equivalent.[1463]

NASA laminar flow research continued to evolve, with NASA Dryden conducting flight tests on two F-16 test aircraft known as the F-16XL-1 and F-16XL-2 in the early and mid-1990s. The purpose was to test the application of active and passive laminar flow at supersonic speeds. Technical data from the tests are available to inform the development of future high-speed aircraft, including commercial transports.[1464]

Today, laminar flow research continues, although active LFC, required for large transport aircraft, has not yet made its way into widespread use on commercial aircraft. However, NASA is continuing work in this area. NASA's subsonic fixed wing project, the largest of its four aeronautics programs, is working on projects to reduce noise, emissions, and fuel burn on commercial-transport-size aircraft by employing several technology concepts, including laminar flow control. The Agency is hoping to develop technology to reduce fuel burn for both a next generation of narrow-body aircraft (N+1) and a next generation of hybrid wing body aircraft (N+2).[1465] NASA is expected to conduct wind tunnel tests of two hybrid wing body (also known as blended wing body) aircraft, known as N2A and N2B, in 2011. Those aircraft, which will incorporate hybrid LFC, are expected to reduce fuel burn by as much as 40 percent.[1466]

Together with this research on emissions and fuel burn has come a heightened awareness of reducing aircraft noise. One example of a very beneficial technical "fix” to the noise problem is the chevron exhaust nozzle, so called because it has a serrated edge resembling a circular saw blade, or a series of interlinked chevrons. The exhaust nozzle chevron has become a feature of recent aircraft design, though how best to configure chevron shapes to achieve maximum noise-reduction benefit without losing important propulsive efficiencies is not yet a refined science. The takeoff noise reduction benefits, when "traded off” against potential losses in cruise efficiency, clearly require continued study, in much the same fashion that, in the piston-engine era, earlier NACA engineers grappled with assessing the benefits of the controllable-pitch propeller and the best way to configure early radial engine cowlings. As that work resulted in the emergence of the NACA cowling as a staple, and indeed a design standard, for future aircraft design, so too, presumably, will NASA's work lead to better understanding of the benefits and design tradeoffs that must be made for chevron design.[1467]