
Self-Adaptive Flight Control Systems

One of the more sophisticated electronic control system concepts was funded by the Air Force Flight Dynamics Laboratory and created by Minneapolis Honeywell in the late 1950s for use in the Air Force-NASA-Boeing X-20 Dyna-Soar reentry glider. The extreme environment associated with a reentry from space (across a large range of dynamic pressures and Mach numbers) caused engineers to seek a better way of adjusting the feedback gains than stored programs and direct measurements of the atmospheric variables. The concept was based on increasing the electrical gain until a small limit-cycle was measured at the control surface, then alternately lowering and raising the electrical gain to maintain a small continuous, but controlled, limit-cycle throughout the flight. This allowed the total loop gains to remain at their highest safe value but avoided the need to accurately predict (or measure) the aerodynamic gains (control surface effectiveness).
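The gain-changer logic at the heart of this concept can be sketched in a few lines of code. The sketch below is a hypothetical reconstruction for illustration only, not the MH-96 implementation; the function names, target amplitude, and step sizes are invented for the example.

```python
def update_gain(gain, amplitude, target=0.5, band=0.1,
                step=0.02, g_min=0.1, g_max=10.0):
    """One cycle of a limit-cycle-seeking gain changer (illustrative).

    amplitude: measured limit-cycle amplitude at the control surface,
    e.g., from a bandpass filter at the limit-cycle frequency.
    The gain is nudged up when the oscillation is too small and nudged
    down when it is too large, holding a small, controlled limit-cycle
    and thus keeping the total loop gain near its highest safe value.
    """
    if amplitude < target - band:
        gain += step    # oscillation too small: raise electrical gain
    elif amplitude > target + band:
        gain -= step    # oscillation too large: lower electrical gain
    return min(max(gain, g_min), g_max)  # clamp to design limits
```

Note the built-in hazard, which figures in the accident described below: if the control surfaces are held against their stops, no limit-cycle can develop, the measured amplitude stays near zero, and logic of this kind drives the gain to its upper limit regardless of flight condition.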

This system, the MH-96 Adaptive Flight Control System (AFCS), was installed in a McDonnell F-101 Voodoo testbed and flown successfully by Minneapolis Honeywell in 1959-1960. It proved to be fairly robust in flight, and system development continued even after the cancellation of the X-20 Dyna-Soar program in 1963. After a ground-test explosion during an engine run with the third X-15 in June 1960, NASA and the Air Force decided to install the MH-96 in the hypersonic research aircraft when it was rebuilt. The system was expanded to include several autopilot features, as well as a blending of the aerodynamic and reaction controls for the entry environment. The system was triply redundant, thus providing fail-operational, fail-safe capability. This was an improvement over the other two X-15s, which had only fail-safe features. Because of the added features of the MH-96, and the additional redundancy it provided, NASA and the Air Force used the third X-15 for all planned high-altitude flights (above 250,000 feet) after an initial envelope expansion program to validate the aircraft’s basic performance.[689]

Unfortunately, on November 15, 1967, the third X-15 crashed, killing its pilot, Major Michael J. Adams. The loss of X-15 No. 3 was related to the MH-96 Adaptive Flight Control System design, along with several other factors. The aircraft began a drift off its heading and then entered a spin at high altitude (where dynamic pressure—"q" in engineering shorthand—is very low). The flight control system gain was at its maximum when the spin started. The control surfaces were all deflected to their respective stops attempting to counter the spin, so no limit-cycle motion—4 hertz (Hz) for this airplane—was being detected by the gain changer. Thus, it remained at maximum gain, even though the dynamic pressure (and hence the structural loading) was increasing rapidly during entry. When the spin finally broke and the airplane returned to a normal angle of attack, the gain was well above normal, and the system commanded maximum pitch rate response from the all-moving elevon surface actuators. With the surface actuators operating at their maximum rate, there was still no 4-Hz limit-cycle being sensed by the gain changer, and the gain remained at the maximum value, driving the airplane into structural failure at approximately 60,000 feet and at a velocity of Mach 3.93.[690]

As the accident to the third X-15 indicated, the self-adaptive control system concept, although used successfully for several years, had some subtle yet profound difficulties that resulted in it being used in only one subsequent production aircraft, the General Dynamics F-111 multipurpose strike aircraft. One characteristic common to most of the model-following systems was a disturbing tendency to mask deteriorating handling qualities. The system was capable of providing good handling qualities to the pilot right up until the system became saturated, resulting in an instantaneous loss of control without the typical warning a pilot would receive from any of the traditional signs of impending loss of control, such as lightening of control forces and the beginning of control reversal.[691] A second serious drawback that affected the F-111 was the relative ease with which the self-adaptive system's gain changer could be "fooled," as with the accident to the third X-15. During early testing of the self-adaptive flight control system on the F-111, testers discovered that, while the plane was flying in very still air, the gain changer in the flight control system could drive the gain to quite high values before the limit-cycle was observed. Then a divergent limit-cycle would occur for several seconds while the gain changer stepped the gain back to the proper levels. The solution was to install a "thumper" in the system that periodically introduced a small bump in the control system to start an oscillation that the gain changer could recognize. These oscillations were small and not detectable by the pilot, and thus, by inducing a little "acceptable" perturbation, the danger of encountering an unexpected larger one was avoided.

For most current airplane applications, flight control systems use stored gain schedules as a function of measured flight conditions (altitude, airspeed, etc.). The air data measurement systems are already installed on the airplane for pilot displays and navigational purposes, so the additional complication of a self-adaptive feature is considered unnecessary. As the third X-15's accident indicated, even a well-designed adaptive flight control system can be fooled, resulting in tragic consequences.[692] The "lesson learned," of course (or, more properly, the "lesson relearned") is that the more complex the system, the harder it is to identify the potential hazards. It is a lesson that engineers and designers might profitably take to heart, no matter what their specialty.
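A minimal sketch of this conventional alternative, with purely illustrative numbers: the flight control computer simply interpolates a precomputed gain table using measured air data.

```python
import numpy as np

# Hypothetical gain schedule: rows are altitudes (ft), columns are Mach.
altitudes = np.array([0.0, 20_000.0, 40_000.0])
machs = np.array([0.3, 0.8, 2.0])
gain_table = np.array([[4.0, 2.5, 1.2],   # sea level
                       [5.0, 3.2, 1.6],   # 20,000 ft
                       [6.5, 4.0, 2.2]])  # 40,000 ft

def scheduled_gain(alt_ft, mach):
    """Bilinear interpolation of the stored schedule at the measured
    flight condition supplied by the air data system."""
    gains_at_alt = [np.interp(mach, machs, row) for row in gain_table]
    return float(np.interp(alt_ft, altitudes, gains_at_alt))

print(scheduled_gain(15_000.0, 0.6))  # gain at an intermediate condition
```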

Flight Control Coupling

Flight control coupling is a slow loss of control of an airplane because of a unique combination of static stability and control effectiveness. Day described control coupling—the second mode of dynamic coupling—as "a coupling of static yaw and roll stability and control moments which can produce untrimmability, control reversal, or pilot-induced oscillation (PIO)."[742] So-called "adverse yaw" is a common phenomenon associated with control of an aircraft equipped with ailerons. The down-going aileron creates an increase in lift and drag for one wing, while the up-going aileron creates a decrease in lift and drag for the opposite wing. The change in lift causes the airplane to roll toward the up-going aileron. The change in drag, however, results in the nose of the airplane swinging away from the direction of the roll (adverse yaw). If the airplane exhibits strong dihedral effect (roll produced by sideslip, a quality more pronounced in a swept wing design), the sideslip produced by the aileron deflections will tend to detract from the commanded roll. In the extreme case, with high dihedral effect and strong adverse yaw, the roll can actually reverse, and the airplane will roll in the opposite direction to that commanded by the pilot—as sometimes happened with the Boeing B-47, though by aeroelastic twisting of a wing because of air loads. If the pilot responds by adding more aileron deflection, the roll reversal and sideslip will increase, and the airplane could go out of control.

As discussed previously, the most dramatic incident of control coupling occurred during the last flight of the X-2 rocket-powered research airplane in September 1956. The dihedral effect for the X-2 was quite strong because of the influence of wing sweep rather than the existence of actual wing dihedral. Dihedral effect because of wing sweep is nonexistent at zero lift but increases proportionally as the angle of attack of the wing increases. After the rocket burned out, which occurred at the end of a ballistic, zero-lift trajectory, the pilot started a gradual turn by applying aileron. He also increased the angle of attack slightly to facilitate the turn, and the airplane entered a region of roll reversal. The sideslip increased until the airplane went out of control, tumbling violently. The data from this accident were fully recovered, and the maneuver was analyzed extensively by the NACA, resulting in a better understanding of the control-coupling phenomenon. The concept of a control parameter was subsequently created by the NACA and introduced to the industry. This was a simple equation that predicted the boundary conditions for aileron reversal based on four stability derivatives. When the yawing moment due to sideslip divided by the yawing moment due to aileron is equal to the rolling moment due to sideslip divided by the rolling moment due to aileron, the airplane remains in balance and aileron deflection will not cause the airplane to roll in either direction.[743]
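In standard stability-derivative notation (the source states the relation only in words, so the symbols here are an assumption), the balance condition is

$$\frac{C_{n_\beta}}{C_{n_{\delta_a}}} = \frac{C_{l_\beta}}{C_{l_{\delta_a}}}$$

where $C_{n_\beta}$ and $C_{l_\beta}$ are the yawing and rolling moments due to sideslip, and $C_{n_{\delta_a}}$ and $C_{l_{\delta_a}}$ are the yawing and rolling moments due to aileron deflection. Equality marks the boundary at which aileron deflection produces no net roll; on one side of it the commanded roll prevails, on the other the airplane rolls against the command.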

CFD and Transonic Airfoils

The analysis of transonic flows suffers from the same problems as those for the supersonic blunt body discussed above. Even when the flow is considered inviscid, the governing Euler equations are highly nonlinear for both transonic and hypersonic flows. From the numerical point of view, both flow fields are mixed regions of locally subsonic and supersonic flows. Thus, the numerical solution of transonic flows originally encountered the same problem as that for the supersonic blunt body: whatever worked in the subsonic region did not work in the supersonic region, and vice versa. Ultimately, this problem was solved from two points of view. Historically, the first truly successful CFD solution for the inviscid transonic flow over an airfoil was carried out in 1971 by Earll Murman and Julian Cole of Boeing Scientific Research Laboratories, whose collaborative research began at the urging of Arnold "Bud" Goldburg, then Chief Scientist of Boeing.[776] They treated a simplified version of the Euler equations called the small-perturbation velocity potential equation. This limited their solutions to the flows over thin airfoils at small angles of attack. Nevertheless, Murman and Cole introduced the concept of writing the finite differences in the equations such that they reached in both the upstream and downstream directions in the subsonic region but reached in only the upwind direction in the supersonic regions. This reflects the physics: in subsonic flow, disturbances propagate in all directions, but in supersonic flow, disturbances propagate only in the downstream direction. Thus it is proper to form the finite differences in the supersonic region such that they take information only from the upstream side of the grid point.
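The Murman-Cole switch can be illustrated in one dimension. The sketch below is a simplified, assumption-laden illustration of the differencing idea only; the actual scheme operates on the transonic small-disturbance equation in two dimensions.

```python
def d2phi_dx2(phi, i, dx, locally_supersonic):
    """Second difference of the perturbation potential at grid point i.

    Subsonic point: central difference, reaching both upstream (i-1)
    and downstream (i+1), because disturbances travel in all directions.
    Supersonic point: one-sided upwind difference using only upstream
    points (i-2, i-1, i), because disturbances cannot travel upstream.
    """
    if locally_supersonic:
        return (phi[i] - 2.0 * phi[i - 1] + phi[i - 2]) / dx**2
    return (phi[i + 1] - 2.0 * phi[i] + phi[i - 1]) / dx**2
```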

Today, this approach is called "upwinding" and is part of many modern CFD algorithms in use for all kinds of flows. In 1971, this idea was groundbreaking, and it allowed Murman and Cole to obtain the first successful numerical solutions of the transonic flow over a body. In addition to the restriction of thin airfoils at small angles of attack, however, their use of the small-perturbation velocity potential equation also limited their solutions to isentropic flows. This meant that, although their solution captured the semblance of a shock wave in the flow, the location of, and flow changes across, a shock wave were not accurate. Because many transonic flows involve shock waves embedded in the flow, this was a significant limitation. The solution to this problem involved the numerical treatment of the Euler equations, which, as discussed earlier in this article, accurately pertain to any inviscid flow, not just one with small perturbations and free of shocks.

Some of the finest such CFD solutions were developed by Antony Jameson, then a professor at Princeton University (and now at Stanford), whose work was heavily sponsored by the NASA Langley Research Center. Using the concept of time marching in combination with a Runge-Kutta time integration of the unsteady equations, Jameson constructed a series of outstanding transonic airfoil codes under the general name of the FLO codes. These codes entered standard use in many aircraft companies and laboratories. Once again, NASA had been responsible for a major advancement in CFD, helping to develop transonic flow codes that advanced the design of many airfoil shapes used today on modern commercial jet transports.[777]
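The time-marching idea behind such codes can be sketched as follows: the steady transonic solution is obtained as the large-time limit of an unsteady problem, advanced here by a generic multistage Runge-Kutta-type scheme. The coefficients and interfaces below are illustrative assumptions, not the internals of the FLO codes.

```python
def multistage_step(u, residual, dt, alphas=(0.25, 1/3, 0.5, 1.0)):
    """One multistage pseudo-time step of du/dt = -R(u).

    u: flow solution array (e.g., a NumPy array); residual(u) returns
    the discretized spatial terms R(u).  Each stage restarts from the
    step's initial state u0; the steady state is reached as R(u) -> 0.
    """
    u0 = u.copy()
    for a in alphas:
        u = u0 - a * dt * residual(u)
    return u
```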

Glenn (Formerly Lewis) Research Center

Glenn is the primary Center for research on all aspects of aircraft and spacecraft propulsion, including engine-related structures. The structures area has typically consisted of approximately 50 researchers (not counting materials).[866] Structures research topics include: structures subjected to thermal loading, dynamic loading, and cyclic loading; spinning structures; coupled thermo-fluid-structural problems; structures with local plasticity and time-varying properties; probabilistic methods and reliability; analysis of practically every part of a turbine engine; Space Shuttle Main Engine (SSME) components; propeller and propfan flutter; failed blade containment analysis; and bird impact analysis. Some of the impact analysis research has been collaborative with Marshall Space Flight Center, which was interested in meteor and space debris impact effects on spacecraft.[867] Glenn has also collaborated extensively with Langley. In 1987, there was a joint Lewis-Langley Workshop on Computational Structural Mechanics (CSM) "to encourage a cooperative Langley-Lewis CSM program in which Lewis concentrates on engine structures applications, Langley concentrates on airframe and space structures applications, and all participants share technology of mutual interest."[868]

Glenn has been involved in NASTRAN improvements since NASTRAN was introduced in 1970, and it hosted the sixth NASTRAN Users' Colloquium. Many of the projects at Glenn built supplemental capability for NASTRAN to handle the unique problems of propulsion system structural analysis: "The NASA Lewis Research Center has sponsored the development of a number of related analytical/computational capabilities for the finite element analysis program, NASTRAN. This development is based on a unified approach to representing and integrating the structural, aerodynamic, and aeroelastic aspects of the static and dynamic stability and response problems of turbomachines."[869]

The aircraft and spacecraft engine industries are naturally the primary customers of Glenn technology. However, no attempt is made here to document this technology transfer in detail. Other essays in this volume address advances in propulsion technology and high-temperature materials. Instead, attention is given here to those projects at Glenn that have advanced the general state of the art in computational structures methods and that have found other applications in addition to aerospace propulsion. These include SPAR, NESSUS, SCARE/CARES (and derivatives), ICAN, and MAC.

SPAR was a finite-element structural analysis system developed initially at NASA Lewis in the early 1970s and upgraded extensively through the 1980s. SPAR was less powerful than NASTRAN but relatively interactive and easy to use for tasks involving iterative design and analysis. Chrysler Corporation used SPAR for designing body panels, starting in the 1980s.[870] NASA Langley has made improvements to SPAR and has used it for many projects, including structural optimization, in conjunction with the Ames CONMIN program.[871] SPAR evolved into the EAL program, which was used for the structural portion of structural-optical analyses at Marshall.[872] Dryden Flight Research Center has used SPAR for Space Shuttle reentry thermal modeling.

Numerical Evaluation of Stochastic Structures under Stress (NESSUS) was the product of a Probabilistic Structural Analysis Methods (PSAM) project initiated in 1984 for probabilistic structural analysis of Shuttle and future spacecraft propulsion system components. The prime contractor was Southwest Research Institute (SwRI). NESSUS was designed for solving problems in which the loads, boundary conditions, and/or the material properties involved are best described by statistical distributions of values, rather than by deterministic (known, single) values. PSAM was officially completed in 1995 with the delivery of NESSUS Version 6.2. SwRI was awarded another contract in 2002 for enhancements to NESSUS, leading to the release of Version 8.2 to NASA in December 2004 and commercially in 2005. Los Alamos National Laboratory has used NESSUS for weapon-reliability analysis under its Stockpile Stewardship program. Other applications included automotive collision analysis and prediction of the probability of spinal injuries during aircraft ejections, carrier landings, or emergency water landings. NESSUS is used in teaching and research at the University of Texas at San Antonio.[873] In some applications, NESSUS is coupled with commercially available deterministic codes offering greater structural analysis capability, with NESSUS providing the statistically derived inputs.[874]
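NESSUS itself employs specialized fast probability integration techniques, but the underlying idea of probabilistic structural analysis can be conveyed with a brute-force Monte Carlo sketch; every distribution and number below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 200_000

# Load and strength as statistical distributions rather than single
# deterministic values (hypothetical parameters, in MPa).
applied_stress = rng.normal(loc=400.0, scale=40.0, size=n)
strength = rng.lognormal(mean=np.log(600.0), sigma=0.08, size=n)

# Estimated probability that applied stress exceeds strength.
p_failure = np.mean(applied_stress > strength)
print(f"estimated failure probability: {p_failure:.2e}")
```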

Ceramics Analysis and Reliability Evaluation of Structures (SCARE/CARES) was introduced as SCARE in 1985 and later renamed CARES. This program performed fast-fracture reliability and failure probability analysis of ceramic components. SCARE was built as a postprocessor to MSC/NASTRAN. Using MSC/NASTRAN output of the stress state in a component, SCARE performed the crack growth and structural reliability analysis of the component.[875] Upgrades and a very comprehensive program description and user's guide were introduced in 1990.[876] In 1993, an extension, CARES/LIFE, was developed to calculate the time dependence of the reliability of a component as it is subjected to testing or use. This was accomplished by including the effects of subcritical crack growth over time.[877] Another 1993 upgrade, CCARES (for CMC CARES), added the capability to analyze components made from ceramic matrix composite (CMC) materials, rather than just macroscopically isotropic materials.[878] CARES/PC, introduced in 1994 and made publicly available through COSMIC, ran on a personal computer but offered a more limited capability (it did not include fast-fracture calculations).[879]
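Fast-fracture reliability analysis of brittle ceramics is conventionally built on Weibull statistics. A representative form (illustrative, not the specific CARES formulation) for the failure probability of a component under a stress field $\sigma(\mathbf{x})$ is

$$P_f = 1 - \exp\left[-\int_V \left(\frac{\sigma(\mathbf{x})}{\sigma_0}\right)^{m} dV\right]$$

where $m$ is the Weibull modulus and $\sigma_0$ a scaling strength. The volume integral is evaluated over the stress state supplied by the finite element solution, which is why such a program works naturally as a postprocessor to MSC/NASTRAN.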

R&D Magazine gave an R&D 100 Award jointly to NASA Lewis and to Philips Display Components for application of CARES/Life to the development of an improved television picture tube in 1995. "CARES/Life has been in high demand world-wide, although present technology transfer efforts are entirely focused on U.S.-based organizations. Success stories can be cited in numerous industrial sectors, including aerospace, automotive, biomedical, electronic, glass, nuclear, and conventional power-generation industries."[880]

Integrated Composite Analyzer (ICAN) was developed in the early 1980s to perform design and analysis of multilayered fiber composites. ICAN considered hygrothermal (humidity-temperature) conditions as well as mechanical loads and provided results for stresses, stress concentrations, and locations of probable delamination.[881] ICAN was used extensively for design and analysis of composite space antennas and for analysis of engine components. Upgrades were developed, including new capabilities and a version that ran on a PC in the early 1990s.[882] ICAN was adapted (as ICAN/PART) to analyze building materials under a cost-sharing agreement with Master Builders, Inc., in 1995.[883]

Goodyear began working with Glenn in 1995 to apply Glenn's Micromechanics Analysis Code (MAC) to tire design. The relationship was formed, in part, as a result of Glenn's involvement with the Great Lakes Industrial Technology Center (GLITeC) and the Consortium for the Design and Analysis of Composite Materials. NASA worked with Goodyear to tailor the code to Goodyear's needs and provided onsite training. MAC was used to assess the effects of cord spacing, ply and belt configurations, and other tire design parameters. By 2002, Goodyear had several tires in production that had benefited from the MAC design analysis capabilities. Dr. Steven Arnold was the Glenn point of contact in this effort.[884]

TRansfer ANalysis Code to Interface Thermal and Structural (3D TRANCITS, Glenn, 1985)

Transfer of data between different analysis codes has always been one of the challenges of multidisciplinary design, analysis, and optimization. Even if input and output formats can be standardized, different types of analysis often require different types of information or different mesh densities, globally or locally. TRANCITS was developed to translate between heat transfer and structural analysis codes: "TRANCITS has the capability to couple finite difference and finite element heat transfer analysis codes to linear and nonlinear finite element structural analysis codes. TRANCITS currently supports the output of SINDA and MARC heat transfer codes directly. It will also format the thermal data output directly so that it is compatible with the input requirements of the NASTRAN and MARC structural analysis codes. . . . The transfer module can handle different elemental mesh densities for the heat transfer analysis and the structural analysis."[982] MARC is a commercial, general-purpose, nonlinear finite element code introduced by MARC Analysis and Research Corp. in the late 1970s. Because of its nonlinear analysis capabilities, MARC was used extensively at Glenn for engine component analyses and for other applications, such as the analysis of a space station strongback for launch loads in 1992.[983] Other commercial finite element codes used at Glenn included MSC/NASTRAN, which was used along with NASA's COSMIC version of NASTRAN.
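A minimal sketch of the kind of translation such a module performs, reduced to the simplest possible case: nodal temperatures computed on a coarse heat-transfer mesh are interpolated onto a finer structural mesh. One-dimensional linear interpolation stands in for the general mapping; TRANCITS's actual algorithms are not shown.

```python
import numpy as np

# Coarse heat-transfer mesh: node locations and computed temperatures.
thermal_x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
thermal_T = np.array([300.0, 420.0, 510.0, 480.0, 350.0])

# Finer structural mesh needing temperatures at its own nodes.
struct_x = np.linspace(0.0, 1.0, 21)
struct_T = np.interp(struct_x, thermal_x, thermal_T)

# struct_T would then be written in the structural code's input format.
print(struct_T[:5])
```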

Turbine Blades

Turbine blades operate at speeds well below hypersonic, but the topic involves the same exotic metals used for flight structures at the highest speeds. It is necessary to consider how such blades use coatings to stay cool, which represents another form of cooling. It also is necessary to consider directionally solidified and single-crystal castings for blades.

The British firm Rolls-Royce has traditionally possessed a strong standing in this field, and The Economist has noted its activity:

The best place to start is the surprisingly small, almost underwhelming, turbine blades that make up the heart of the giant engines slung beneath the wings of the world's biggest planes. These are not the huge fan blades you see when boarding, but are buried deep in the engines. Each turbine blade can fit in the hand like an oversized steak knife. At first glance it may not seem much more difficult to make. Yet they cost about $10,000 each. Rolls-Royce's executives like to point out that their big engines, of almost six tonnes, are worth their weight in silver—and that the average car is worth its weight in hamburger.[1084]

Turbine blades are difficult to make because they have to survive high temperatures and huge stresses. The air inside big jet engines reaches about 2,900 °F in places, 750 degrees hotter than the melting point of the metal from which the turbine blades are made. Each blade is grown from a single crystal of alloy for strength and then coated with tough ceramics. A network of tiny air holes then creates a thin blanket of cool air that stops it from melting.

The study of turbine blades brings in the topic of thermal barrier coatings (TBC). By attaching an adherent layer of a material of low thermal conductivity to the surface of an internally cooled turbine blade, a temperature drop is induced across the thickness of the layer. This results in a drop in the temperature of the metal blade. Using this approach, temperature reductions of up to 340 °F at the metal surface have been estimated for 150-micron-thick yttria-stabilized zirconia coatings. The rest of the temperature decrease is obtained by cooling the blade using air from the compressor that is ducted downstream to the turbine.
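The temperature drop across the coating follows from one-dimensional steady heat conduction: for a layer of thickness $t$ and thermal conductivity $k$ carrying a heat flux $q''$,

$$\Delta T = \frac{q''\, t}{k}$$

For illustration only, with assumed values: taking $k \approx 1$ W/m·K (a typical order of magnitude for yttria-stabilized zirconia) and $t = 150$ microns, a heat flux near 1 MW/m² gives $\Delta T \approx 150$ K, or roughly 270 °F, the same order as the 340 °F reduction quoted above.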

The cited temperature reductions reduce the oxidation rate of the bond coat applied to the blades and so delay failure by oxidation. They also retard the onset of thermal fatigue. One should note that such coatings are currently used only to extend the life of components. They are not used to increase the operating temperature of the engine.

Modern TBCs are required not only to limit heat transfer through the coating but also to protect engine components from oxidation and hot corrosion. No single coating composition appears able to satisfy these requirements. As a result, a "coating system" has evolved. Research in the last 20 years has led to a preferred coating system consisting of four separate layers to achieve long-term effectiveness in the high-temperature, oxidative, and corrosive environment in which the blades must function. At the bottom is the substrate, a nickel- or cobalt-based superalloy that is cooled from the inside using compressor air. Overlaying it is the bond coat, an oxidation-resistant layer with thickness of 75-150 microns that is typically of a NiCrAlY or NiCoCrAlY alloy. It essentially dictates the spallation failure of the blade. Though it resists oxidation, it does not avoid it; oxidation of this coating forms a third layer, the thermally grown oxide, with a thickness of 1 to 10 microns. It forms as Al2O3. The topmost layer, the ceramic topcoat, provides thermal insulation. It is typically of yttria-stabilized ZrO2. Its thickness is characteristically about 300 microns when deposited by air plasma spray and 125 microns when deposited by electron beam physical vapor deposition (EB-PVD).[1085]

Yttria-stabilized zirconia has become the preferred TBC layer material for use in jet engines because of its low thermal conductivity and its relatively high thermal expansion coefficient, compared with many other ceramics. This reduces the thermal expansion mismatch with the metals of high thermal expansion coefficient to which it is applied. It also has good erosion resistance, which is important because of the entrainment of high-velocity particles in the engine gases. Robert Miller, a leading specialist, notes that NASA and the NACA, its predecessor, have played a leading role in TBC development since 1942. Flame-sprayed Rokide coatings, which extended the life of the X-15 main engine combustion chamber, represented an early success. Magnesia-stabilized zirconia later found use aboard the SR-71, allowing continuous use of the afterburner and sustained flight above Mach 3. By 1970, plasma-sprayed TBCs were in use in commercial combustors.[1086]

These applications involved components that had no moving parts. For turbines, the mid-1970s brought the first "modern" thermal spray coating. It used yttria as a zirconia stabilizer and a bond coat that contained MCrAlY, and it demonstrated that blade TBCs were feasible.

C. W. Goward of Pratt & Whitney (P&W), writing of TBC experience with the firm's J75 engine, noted: "Although the engine was run at relatively low pressures, the gas turbine engine community was sufficiently impressed to prompt an explosive increase in development funds and programs to attempt to achieve practical utilization of the coatings on turbine airfoils."[1087]

But tests in 1977 on the more advanced JT9D, also conducted at P&W, brought more mixed results. The early TBC remained intact on lower-temperature regions of the blade but spalled at high temperatures. This meant that further development was required. Stefan Stecura reported an optimum concentration of Y2O3 in ZrO2 of 6-8 percent, which is still the state of the art. H. G. Scott reported that the optimum phase of zirconia was t'-ZrO2. In 1987, Stecura showed that ytterbia-stabilized zirconia on a ytterbium-containing bond coat doubled the blade life, taking it from 300 1-hour cycles to 600 cycles. Also at that time, P&W used a zirconia-yttria TBC to address a problem with endurance of vane platforms. A metallic platform, with no thermal barrier, showed burn-through and cracking from thermal-mechanical fatigue after 1,500 test cycles. Use of a TBC extended the service life to 18,000 hours, or 2,778 test cycles, and left platforms that were clean, uncracked, and unburned. P&W shared these results with NASA, which led to the TBC task in the Hot Section Technology (HOST) program. NASA collaborated with P&W and four other firms as it set out to predict TBC lifetimes. A preliminary NASA model showed good agreement between experiment and calculation. P&W identified major degradation modes and gave data that also showed good correlation between measured and modeled lives. Other important contributions came from Garrett Turbine Co. and General Electric. The late 1980s brought physical vapor deposition (PVD) blade coatings that showed failure when they were nearly out of the manufacturer's box. EB-PVD blades resolved this issue and first entered service in 1989 on South African Airways 747s. They flew from Johannesburg, a high-altitude airport with high mean temperatures, where an airliner needed a heavy fuel load to reach London. EB-PVD TBCs remain the coating of choice for first-row blades, which see the hottest combustion gases. TBC research continues to this day, both at NASA and its contractors. Fundamental studies in aeronautics are important, with emphasis on erosion of turbine components. This work has been oriented toward rotorcraft and has brought the first EB-PVD coating for their blades. There also has been an emphasis on damping of vibration amplitudes. A new effort has dealt with environmental barrier coatings (EBCs), which Miller describes as "ceramic coatings, such as SiC, on top of ceramics."[1088]

Important collaborations have included work on coatings for diesels, where thick TBCs permit higher operating temperatures that yield increased fuel economy and cleaner exhaust. This work has proceeded with Caterpillar Tractor Co. and the Army Research Laboratory.[1089]

Studies of supersonic engines have involved cooperation with P&W and GE, an industrial interaction that Miller described as "a useful reality check."[1090] NASA has also pursued the Ultra Efficient Engine Technology program. Miller stated that it has not yet introduced engines for routine service but has led to experimental versions. This work has involved EBCs, as well as a search for low thermal conductivity. The latter can increase engine-operating temperatures and reduce cooling requirements, thereby achieving higher engine efficiency and lower emissions. At NASA Glenn, Miller and Dong-ming Zhu have built a test facility that uses a 3-kilowatt CO2 laser with a wavelength of 10.6 microns. They also have complemented conventional ZrO2-Y2O3 coatings with other rare-earth oxides, including Nd2O3-Yb2O3 and Gd2O3-Yb2O3.[1091]

Can thermal conductivity be reduced further? A promising approach involves development of new deposition techniques that give better control of TBC pore morphology. Air plasma spray deposition creates many intersplat pores between initially molten droplets, in what Miller described as "a messy stack of pancakes." By contrast, TBC layers produced by EB-PVD have a columnar microstructure with elongated intercolumnar pores that align perpendicular to the plane of the coating. Alternate deposition methods include sputtering, chemical vapor deposition (CVD), and sol-gel approaches. But these approaches involve low deposition rates that are unsuitable for economic production of coated blades. CVD and sol-gel techniques also require the use of dangerous and costly precursor materials. In addition, none of these approaches permits the precise control and manipulation of pore morphology. Thus, improved deposition methods that control this morphology do not now exist.

DFBW F-8: Phase II

On November 16, 1973, the DFBW team received a NASA group achievement award for its highly impressive accomplishments during the Phase I effort. By that time, planning was well underway for the Phase II effort, with the first version of the software specification having already been issued in April 1973. Whereas Phase I had verified the feasibility of flight control using a digital computer, Phase II was intended to develop a more practical approach to the implementation of digital flight control, one that could be used to justify the incorporation of digital technology into production designs for both military and commercial use. In the Phase II design, the single-channel Apollo computer-based flight control system was replaced with a triply redundant flight control system approach using three International Business Machines (IBM) AP-101 digital computers. The challenge was how to program this multicomputer system to act as a single computer in processing flight control laws and directing aircraft maneuvers while functioning independently for purposes of fault tolerance.[1160] The 32-bit IBM AP-101 computer had been selected for use in the Space Shuttle. It consumed 370 watts of power, weighed about 50 pounds, and had 32,000 words of memory.[1161] The DFBW program decided to also use the AP-101 computer in its Phase II effort, and a purchase contract with IBM was signed in August 1973. However, the reliability of the AP-101 computer, as measured by mean time between failures, left much to be desired. The computer would turn out to require major redesign, and it never came close to meeting its reliability projections. As Ken Szalai recently commented: "the IBM AP-101 computer was one of the last of the 'beasts.' It was big and ran hot. The circuit boards tended to fail as temperatures increased. This was found to be due to thermal expansion causing the layers within the circuit boards to separate, breaking their electrical connections." Szalai recounted that he notified the Space Shuttle team as soon as the issue was discovered. They were surprised, as they had never seen a similar problem with the AP-101. The reason soon became apparent. The AP-101s installed in the F-8 Iron Bird were being tested in a non-air-conditioned hangar; Space Shuttle flight control system testing had been in a laboratory environment cooled to 50 degrees Fahrenheit (°F). When the Space Shuttle was tested on the flight line in typical outside temperatures encountered at Dryden, similar reliability problems were encountered. IBM subsequently changed the thermal coating process used in the manufacture of the AP-101 circuit boards, a measure that partly resolved the AP-101's reliability problems.[1162]
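One standard way to make three computers "act as a single computer" is mid-value selection with miscompare monitoring. The sketch below is a generic illustration of that technique, not the DFBW or AP-101 implementation; the names and threshold are assumptions.

```python
def midvalue_select(a, b, c, miscompare=0.05):
    """Select the middle of three redundant channel outputs.

    The mid-value is immune to any single channel failing hard high or
    low.  A channel disagreeing with the selected value by more than
    the miscompare threshold is flagged for fault isolation, giving
    fail-operational behavior after a first failure.
    """
    selected = sorted((a, b, c))[1]
    faults = [i for i, v in enumerate((a, b, c))
              if abs(v - selected) > miscompare]
    return selected, faults

# Channel 2 has failed hard high; the surface command is unaffected.
cmd, faults = midvalue_select(1.02, 0.98, 9.99)
print(cmd, faults)  # 1.02 [2]
```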

Software for Phase II was also larger and more complex than that used in Phase I because of the need for new pilot interface devices. Flight control modes still included the direct (DIR) mode, the stability augmentation (SAS) mode, and the control-augmentation (CAS) mode. A pitch maneuver-load-control feature was added to the CAS mode, and a digital autopilot was fitted that incorporated Mach-hold, altitude-hold, and heading-hold selections. The software gradually matured to the point where pilots could begin verification evaluations in the Iron Bird simulator in early 1976. By July, no anomalies were reported in the latest software release, with the direct and stability-augmentation modes considered flight-ready. The autopilot and control-augmentation mode still required more development, but they were not necessary for first flight.

The backup analog flight control system was also redesigned for Phase II, and the secondary actuators were upgraded. Sperry supplied an updated version of the Phase I Backup Control System using the same technology that had been used in the Air Force's YF-4E project. Signals from the analog computers were now force-summed when they reached the actuators, resulting in a quicker response. The redesigned secondary actuators provided 20 percent more force, and they were also more reliable. The hydraulic actuators used in Phase I had two sources of hydraulic pressure for the actuators; in those chosen for Phase II, there were three hydraulic sources that corresponded with the three channels in each of the primary and secondary flight control systems. The secondary electronic actuators had three channels, with one dedicated to each computer in the primary system. The actuators were shared by the analog computer bypass system in the event of failure of the primary digital system.

The final Phase II design review occurred in late May 1975, with both the Iron Bird and aircraft 802 undergoing modification well into 1976. By early April, Gary Krier was able to fly the Iron Bird simulator with flight hardware and software. Handling qualities were generally rated as very good, but actuator anomalies and transients were noted, as were some problems with the latest software releases. After these issues were resolved, a flight qualification review was completed on August 20. High-speed taxi tests began 3 days later; then, on August 27, 1976, Gary Krier took off on the first flight of the Phase II program. On the second Phase II flight, one of the AP-101 computers failed with the aircraft at supersonic speed. An uneventful landing was accomplished with the flight control system remaining in the primary flight control mode. This was in accordance with the established flight-test procedure in the event of a failure of one of the primary computers. Flight-testing was halted, and all AP-101s were sent back to IBM for refurbishment. After 4 months, the AP-101s were back at Dryden, but another AP-101 computer failure occurred on the very next flight. Again, the primary digital flight control system handled the failure well, and flights were soon being accomplished without incident, providing ever-increasing confidence in the system.

In the spring of 1977, the DFBW F-8 was modified to support the Space Shuttle program. It flew eight times with the Shuttle Backup Flight System's software test package running in parallel with the F-8 flight control software. Data from this package were downlinked as the F-8 pilots flew a series of simulated Shuttle landing profiles. Later in 1977, the unpowered Space Shuttle Enterprise was being used to evaluate the flight characteristics of the Space Shuttle during approach and landing in preparation for full-up Shuttle missions. During the Shuttle Approach and Landing Test (ALT) program, the Enterprise was carried aloft atop the NASA 747 Shuttle carrier aircraft. After release, the Shuttle's handling qualities and the responsiveness of its digital fly-by-wire system were evaluated. On the fifth and last of the Shuttle ALT flights in October 1977, a pilot-induced oscillation developed just as the Enterprise was landing. The DFBW F-8C was then used in a project oriented to duplicating the PIO problem encountered on the Shuttle during a series of flights in 1978 that were initially flown by Krier and McMurtry. They were joined by Einar K. Enevoldson and John A. Manke, who had extensive experience flying NASA lifting body vehicles. The lifting body vehicles used side stick controllers and had approach characteristics that were similar to those of the Space Shuttle.

Flying simulated Shuttle landing profiles with the DFBW F-8, the pilots gathered extremely valuable data that supported the Shuttle program in establishing sampling rates and control law execution limits. The DFBW F-8 flight control software had been modified to enable the pilot to vary transport delay times to evaluate their effect on control response. Transport delay is the elapsed time between pilot movement of his cockpit control and the actual movement of the flight control surfaces. It is a function of several factors, including the time needed to do analog-to-digital conversion, the time required to execute the appropriate flight control law, the length of the electrical wires to the actuators, and the lag in response of the hydraulic system. If transport delay is too long, the pilot may direct additional control surface movement while his initial commands are in the process of being executed by the flight control system. This can result in overcontrol. Subsequent attempts to correct the overshoot can lead to a series of alternating overshoots or oscillations that are commonly referred to as a PIO. The range of transport delay times within which the Shuttle would be unlikely to encounter a PIO was determined using the DFBW F-8, enabling Dryden to develop a PIO suppression filter for the Shuttle. The PIO suppression filter was successfully evaluated in the F-8, installed in the Shuttle prior to its first mission into space, and proved to effectively eliminate the PIO issue.[1163]
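The components named above can be written as a simple budget (the symbols are introduced here for illustration):

$$t_{\text{transport}} = t_{A/D} + t_{\text{control law}} + t_{\text{transmission}} + t_{\text{hydraulic}}$$

The F-8 experiments varied the total delay artificially to map the range within which PIO tendencies remained acceptable.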

During Phase II, 169 flights were accomplished, with several other test pilots joining the program, including Stephen D. Ishmael, Rogers Smith, and Edward Schneider. In addition to its previously noted accomplishments, the DFBW F-8 successfully evaluated adaptive control law approaches that would later become standard in many FBW aircraft. It was used in the Optimum Trajectory Research Experiment (OPTRE). This involved testing data uplink and downlink between the F-8 and a computer in the then-new Remotely Piloted Vehicle Facility. This experiment demonstrated that an aircraft equipped with a digital flight control system could be flown using control laws that were operating in ground-based digital computers. The F-8 conducted the first in-flight evaluations of an automatic angle-of-attack limiter and maneuvering flaps. These features are now commonly used on nearly all military and commercial aircraft with fly-by-wire flight controls. The DFBW F-8 also successfully tested an approach that used a backup software system known as the Resident Backup System (REBUS) to survive potential software faults that could cause all three primary flight control system computers to fail. The REBUS concept was later used in other experimental aircraft, as well as in production fly-by-wire flight control systems. The final flight-test effort of the DFBW program involved the development of a methodology called analytical redundancy management. In this concept, dynamic and kinematic relationships between dissimilar sensors and measurements were used to detect and isolate sensor failures.[1164]

U.K. Jaguar ACT

In the U.K., the Royal Aircraft Establishment began an effort oriented to producing a CCV testbed in 1977. For this purpose, an Anglo-French Jaguar strike fighter was modified by British Aerospace (BAe) to prove the feasibility of active control technology. Known as the Jaguar Active Control Technology (ACT) aircraft, it had its mechanical flight control system entirely removed and replaced with a quad-redundant digital fly-by-wire control system that used electrical channels to relay instructions to the flight control surfaces. The initial flight of the Jaguar ACT with the digital FBW system was in October 1981. As with the CCV F-104G, ballast was added to the aft fuselage to move the center of gravity aft and destabilize the aircraft. In 1984, the Jaguar ACT was fitted with rounded, oversized leading-edge strakes to move the center of lift of the aircraft forward, further contributing to pitch instability. It first flew in this configuration in March 1984. Marconi developed the Jaguar ACT flight control system. It included an optically coupled data transmission link that was essentially similar to the one that the company had developed for the U.S. Air Force YC-14 program (an interesting example of the rapid proliferation of advanced aerospace technology between nations).[1218]

Flight-testing began in 1981, with the test program ending in 1984 after 96 flights.[1219]

Advancing Propulsive Technology

James Banke

Ensuring proper aircraft propulsion has been a powerful stimulus. In the interwar years, the NACA researched propellers, fuels, engine cooling, supercharging, and nacelle and cowling design. In the postwar years, the Agency refined gas turbine propulsion technology. NASA now leads research in advancing environmentally friendly and fuel-conserving propulsion, thanks to the Agency's strengths in aerodynamic and thermodynamic analysis, composite structures, and other areas.

Each day, our skies fill with general aviation aircraft, business jets, and commercial airliners. Every 24 hours, some 2 million passengers worldwide are moved from one airport to the next, almost all of them propelled by relatively quiet, fuel-efficient, and safe jet engines.[1291]

And whether the driving force moving these vehicles through the air comes from piston-driven propellers, turboprops, turbojets, turbofans—even rocket engines or scramjets—the National Aeronautics and Space Administration (NASA) during the past 50 years has played a significant role in advancing the propulsion technology the public counts on every day.

Many of the advances seen in today's aircraft powerplants can trace their origins to NASA programs that began during the 1960s, when the Agency responded to public demand that the Government apply major resources to tackling the problems of noise pollution near major airports. Highlights of some of the more noteworthy research programs to reduce noise and other pollution, prolong engine life, and increase fuel efficiency will be described in this case study.

But efforts to improve engine efficiency and curb unwanted noise actually predate NASA's origins in 1958, when its predecessor, the National Advisory Committee for Aeronautics (NACA), served as the Nation's preeminent laboratory for aviation research. It was during the 1920s that the NACA invented a cowling to surround the front of an airplane and its radial engine, smoothing the aerodynamic flow around the aircraft while also helping to keep the engine cool. In 1929, the NACA won its first Collier Trophy for the breakthrough in engine and aerodynamic technology.[1292]

During World War II, the NACA produced new ways to fix problems discovered in higher-powered piston engines being mass-produced for wartime bombers. NACA research into centrifugal superchargers was particularly useful, especially on the R-1820 Cyclone engines intended for use on the Boeing B-17 Flying Fortress, and later with the Wright R-3350 Duplex Cyclone engines that powered the B-29.

Basic research on aircraft engine noise was conducted by NACA engineers, who reported their findings in a paper presented in 1956 to the 51st Meeting of the Acoustical Society of America in Cambridge, MA. Measurements seemed to back up the prediction that the noise level of a spinning propeller depends on several variables, including the propeller diameter, how fast it is turning, and how far away the recording device is from the engine.[1293]

As the jet engine made its way from Europe to the United States and designs for the basic turboprop, turbojet, and turbofan were refined, the NACA during the early 1950s began one of the earliest noise-reduction programs, installing multitube nozzles of increasing complexity at the back of the engines to, in effect, act as mufflers. These engines were tested in a wind tunnel at Langley Research Center in Hampton, VA. But the effort was not effective enough to prevent a growing public sentiment that commercial jet airliners should be seen and not heard.

In fact, a 1952 Presidential commission chaired by the legendary pilot James H. Doolittle predicted that aircraft noise would soon turn into a problem for airport managers and planners. The NACA's response was to form a Special Subcommittee on Aircraft Noise and pursue a three-part program to understand better what makes a jet noisy, how to quiet it, and what, if any, impact the noise might have on the aircraft's structure.[1294]

As the NACA on September 30, 1958, turned overnight into the National Aeronautics and Space Administration on October 1, the new space agency soon found itself with more work to do than just beating the Soviet Union to the Moon.

Advanced Subsonic Technology Program and UEET

NASA started a project in the mid-1990s known as the Advanced Subsonic Technology (AST) program. Like the High-Speed Research (HSR) program before it, the AST focused heavily on reducing emissions through new combustor technology. The overall objective of the AST was to spur technology innovation to ensure U.S. leadership in developing civil transport aircraft. That meant lowering NOx emissions, which not only raised concern in local airport communities but also by this time had become a global concern because of potential damage to the ozone layer. The AST sought to spur the development of new low-emissions combustors that could achieve at least a 50-percent reduction in NOx from 1996 International Civil Aviation Organization standards. The AST program also sought to develop techniques that would better measure how NOx impacts the environment.[1417]

GE, P&W, Allison Engines, and AlliedSignal all participated in the project.[1418] Once again, the challenge for these companies was to control combustion in such a way that it would minimize emissions. This required carefully managing the way fuel and air mix inside the combustor to avoid the extremely hot temperatures at which NOx is created, or at least reducing the length of time that the gases are at their hottest point.

Ultimately, the AST emissions reduction project achieved its goal of reducing NOx emissions by more than 50 percent over the ICAO standard, a feat that was accomplished not with actual engine demonstrators but with a "piloted airblast fuel preparation chamber."[1419]

Despite their relative success, however, NASA's efforts to improve engine efficiency and reduce emissions began to face budget cuts in 2000. Funding for NASA's Atmospheric Effects of Aviation project, which was the only Government program to assess the effects of aircraft emissions at cruise altitudes on climate change, was canceled in 2000.[1420] Investments in the AST and the HSR also came to an end. However, NASA did manage to salvage parts of the AST aimed at reducing emissions by rolling those projects into the new Ultra Efficient Engine Technology program in 2000.[1421]

UEET was a 6-year, nearly $300 million program managed by NASA Glenn that began in October 1999 and included participation from NASA Centers Ames, Goddard, and Langley; engine companies GE Aircraft Engines, Pratt & Whitney, Honeywell, Allison/Rolls Royce, and Williams International; and airplane manufacturers Boeing and Lockheed Martin.[1422]

UEET sought to develop new engine technologies that would dramatically increase turbine performance and efficiency. It sought to reduce NOx emissions by 70 percent within 10 years and 80 percent within 25 years, using the 1996 International Civil Aviation Organization guidelines as a baseline.[1423] The UEET project also sought to reduce carbon dioxide emissions by 20 percent and 50 percent in the same timeframes, using 1997 subsonic aircraft technology as a baseline.[1424] The dual goals posed a major challenge because current aircraft engine technologies typically require a tradeoff between NOx and carbon emissions; when engines are designed to minimize carbon dioxide emissions, they tend to generate more NOx.

In the case of the UEET project, improving fuel efficiency was expected to lead to a reduction in carbon dioxide emissions of at least 8 percent: the less fuel burned, the less carbon dioxide released.[1425] The UEET program was expected to maximize fuel efficiency, requiring engine operation at pressure ratios as high as 55 to 1 and turbine inlet temperatures of 3,100 degrees Fahrenheit (°F).[1426] However, highly efficient engines tend to run at very hot temperatures, which leads to the generation of more NOx. Therefore, in order to reduce NOx, the UEET program also sought to develop new fuel/air mixing processes and separate engine component technologies that would reduce NOx emissions 70 percent from 1996 ICAO standards for takeoff and landing conditions and also minimize NOx impact during cruise to avoid harming Earth's ozone layer.
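The pursuit of such high pressure ratios follows directly from ideal-cycle thermodynamics. For an ideal Brayton cycle (a textbook idealization offered for illustration, not a UEET analysis), the thermal efficiency at a pressure ratio PR is

$$\eta_{\text{ideal}} = 1 - \left(\frac{1}{\text{PR}}\right)^{(\gamma - 1)/\gamma} \approx 1 - 55^{-0.286} \approx 0.68 \qquad (\gamma = 1.4,\ \text{PR} = 55)$$

Real engines fall well short of this figure, but the trend explains why higher pressure ratios, and the hotter internal temperatures that accompany them, are attractive for fuel efficiency.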

Under UEET, NASA worked on ceramic matrix composite (CMC) combustor liners and other engine parts that can withstand the high temperatures required to maximize energy efficiency and reduce carbon emissions while also lowering NOx emissions. These engine parts, particularly combustor liners, would need to endure the high temperatures at which engines operate most efficiently without the benefit of cooling air. Cooling air, which is normally used to cool the hottest parts of an engine, is unacceptable in an engine designed to minimize NOx because it would create stoichiometric fuel-air mixtures—mixtures in which the proportions of fuel and air are exactly matched, so the gases burn at their hottest—thereby producing high levels of NOx in regions close to the combustor liner.[1427]

NASA's sponsorship of the AST and the UEET also fed into the development of two game-changing combustor concepts that can lead to a significant reduction in NOx emissions. These are the Lean Pre-mixed, Pre-vaporized (LPP) and Rich, Quick Mix, Lean (RQL) combustor concepts. P&W and GE have since adopted these concepts to develop combustors for their own engine product lines. Both concepts focus on improving the way fuel and air mix inside the engine to ensure that core temperatures do not get so high that they produce NOx emissions.

GE has drawn from the LPP combustor concept to develop its Twin Annular Pre-mixing Swirler (TAPS) combustor. Under the LPP concept, air from the high-pressure compressor comes into the combustor through two swirlers adjacent to the fuel nozzles. The swirlers premix the fuel and combustion air upstream from the combustion zone, creating a lean (more air than fuel) homogeneous mixture that can combust inside the engine without reaching the hottest temperatures, at which NOx is created.[1428]

NASA support also helped lay the groundwork for P&W's Technology for Advanced Low Nitrogen Oxide (TALON) low-emissions combustor, which reduces NOx emissions through the RQL process. The front end of the combustor burns very rich (more fuel than air), a process that suppresses the formation of NOx. The combustor then transitions in milliseconds to burning lean. The air must mix very rapidly with the combustion products from the rich first stage to prevent NOx formation as the rich gases are diluted.[1429] The goal is to spend almost no time at extremely hot temperatures, at which air and fuel particles are evenly matched, because this produces NOx.[1430]
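Both concepts are conveniently described by the fuel-air equivalence ratio, a standard combustion parameter introduced here for illustration:

$$\phi = \frac{(m_{\text{fuel}}/m_{\text{air}})}{(m_{\text{fuel}}/m_{\text{air}})_{\text{stoich}}}$$

A mixture is lean for $\phi < 1$, stoichiometric at $\phi = 1$ (the hottest flame and thus the most thermal NOx), and rich for $\phi > 1$. LPP holds the combustion zone lean throughout, while RQL jumps from rich to lean quickly enough that little time is spent near $\phi = 1$.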

Today, NASA continues to study the difficult problem of increasing fuel efficiency and reducing NOx, carbon dioxide, and other emissions. At NASA Glenn, researchers are using an Advanced Subsonic Combustion Rig (ASCR), which simulates gas turbine combustion, to engage in ongoing emissions testing. P&W, GE, Rolls Royce, and United Technologies Corporation are continuing contracts with NASA to work on low-emissions combustor concepts.

"The [ICAO] regulations for NOx keep getting more stringent,” said Dan Bulzan, NASA’s associate principle investigator for the sub­sonic fixed wing and supersonic aeronautics project. "You can’t just sit there with your old combustor and expect to meet the NOx emissions regulations. The Europeans are quite aggressive and active in this area as well. There is a competition on who can produce the lowest emissions combustor.”[1431]