
Elastic Aerostructural Effects

The distortion of the shape of an airplane structure because of applied loads also creates a static aerodynamic interaction. When air loads are applied to an aerodynamic surface, it will bend or twist proportional to the applied load, just like a spring. Depending on the surface configuration, the distorted shape can produce different aerodynamic properties when compared with the rigid shape. A swept wing, for example, will bend upward at the tip and may also twist as it is loaded.

This new shape may exhibit higher dihedral effect and altered spanwise lift distribution when compared with a rigid shape, impacting the performance of the aircraft. Because virtually all fighter aircraft have short wings and can withstand 7 to 9 g, their aeroelastic deformation is relatively small. In contrast, bomber, cargo, or high-altitude reconnaissance airplanes are typically designed for lower g levels, and the resulting structure, particularly its long, high aspect ratio wings, is often quite limber.

Notice that this is not a dynamic, oscillatory event, but a static condition that alters the steady-state handling qualities of the airplane. The prediction of these aeroelastic effects is a complex and not altogether accurate process, though the trends are usually correct. Because the effect is a static condition, the boundaries for safe flight can usually be determined during the buildup flight-test program, and, if necessary, placards can be applied to avoid serious incidents once the aircraft enters operational service.

The six-engine Boeing B-47 Stratojet was the first airplane designed with a highly swept, relatively thin, high aspect ratio wing. At higher transonic Mach numbers, deflection of the ailerons would cause the wing to twist sufficiently to cancel, and eventually exceed, the rolling moment produced by the aileron, thus producing an aileron reversal. (In effect, the aileron was acting like a big trim tab, twisting the wing and causing the exact opposite of what the pilot intended.) Aerodynamic loads are proportional to dynamic pressure, so the aeroelastic effects are usually more pronounced at high airspeed and low altitude, and this combination caused several fatal accidents with the B-47 during its flight-testing and early deployment. After flight-testing determined the magnitude and region of reduced roll effectiveness, the airplane was placarded to 425 knots to avoid roll reversal. In sum, then, an aeroelastic problem forced the limiting of the maximum performance achievable by the airplane, rendering it more vulnerable to enemy defenses. The B-47's successor, the B-52, had a much thicker wing root and more robust structure to avoid the problems its predecessor had encountered.

The Mach 3.2+ Lockheed SR-71 Blackbird, designed to cruise at supersonic speeds at very high altitude, was another aircraft that exhibited significant aeroelastic structural deformation.[716] The Blackbird's structure was quite limber, and the aeroelastic predictions for its behavior at cruise conditions were in error for the pitch axis. The SR-71 was a blended wing-body design with chines running along the forward sides of the fuselage and the engine nacelles, then blending smoothly into the rounded delta wing. These chines added lift to the airplane and, because they were well forward of the center of gravity, added a significant amount of pitching moment (much like a canard surface on an airplane such as the Wright Flyer or the Saab AJ-37 Viggen). Flight-testing revealed that the airplane required more "nose-up" elevon deflection at cruise than predicted, adding a substantial amount of trim drag. This reduced the range the Blackbird could attain, degrading its operational performance. To correct the problem, a small shim was added to the production fuselage break just forward of the cockpit. The shim tilted the forebody nose cone and its attached chine surfaces slightly upward, producing a nose-up pitching moment. This allowed the elevons to be returned to their faired trim position at cruise flight conditions, thus regaining the lost range capability.

Sadly, the missed prediction of the aeroelastic effects also contributed to the loss of one of the early SR-71s. While the nose cone forebody shim was being designed and manufactured, the contractor desired to demonstrate that the airplane could attain its desired range if the elevons were faired. To achieve this, Lockheed technicians added trim-altering ballast to the third production SR-71, then being used for systems and sensor testing. The ballast shifted the center of gravity about 2 percent aft of its normal position, to the aft design limit for the airplane. The engineers calculated that this would permit the elevons to be set in their faired position at cruise conditions for this one flight so that the SR-71 could meet its desired range performance. Instead, the aft cg, combined with the nonlinear aerodynamics and aeroelastic bending of the fuselage, resulted in the airplane going out of control at the start of a turn at a cruise Mach number. The airplane broke in half, catapulting the pilot, who survived, from the cockpit. Unfortunately, his flight-test engineer/navigator perished.[717] Shim installation, together with other minor changes to the control system and engine inlets, subsequently enabled the SR-71 to meet its performance goals, and it became a mainstay of America's national reconnaissance fleet until its retirement in early 1990.

Lockheed, the Air Force, and NASA continued to study Blackbird aeroelastic dynamics. In 1970, Lockheed proposed installation of a Loads Alleviation and Mode Suppression (LAMS) system on the YF-12A, installing very small canards called "exciter vanes" or "shaker vanes" on the forebody to induce in-flight motions and subsequent suppression techniques that could be compared with analytical models, particularly NASA's NASTRAN and Boeing's FLEXSTAB computerized load prediction and response tools. The LAMS testing complemented Air Force-NASA research on other canard-configured aircraft such as the Mach 3+ North American XB-70A Valkyrie, a surrogate for large transport-sized supersonic cruise aircraft. The fruits of this research could be found on the trim canards used on the Rockwell B-1A and B-1B strategic bombers, the latter of which entered service in the late 1980s; the canards notably improved high-speed "on the deck" ride qualities, compared with their three low-altitude predecessors, the Boeing B-52 Stratofortress, Convair B-58 Hustler, and General Dynamics FB-111.[718]

Active Cooling Approaches

There are other proposed methods for protecting vehicles from high temperature while flying at high speed or during reentry. Several active cooling concepts have been proposed in which a liquid is circulated through a hot area, then through a radiator to dissipate the heat. These concepts are quite complex, and the risk is very high: failure of an active cooling system could result in the loss of a hypersonic vehicle within a few seconds. None has been demonstrated in flight. Although work is continuing on active cooling concepts, their application will probably not be realized for many years.

As we look ahead to the future of aviation, it is easy to merely assess the current fleet of successful aircraft or spacecraft, and decide on what improvements we can provide, without considering the history and evolution that produced these vehicles. The danger is that some of the past problems will reappear unless the design and test communities are aware of their history. This paper has attempted to summarize some of the problems that have been encountered, and resolved, during the technology explosion in aviation that has occurred over the last 60 years. The manner in which the problems were discovered, the methods used to determine causes, and the final resolution or correction that was implemented have been presented. Hopefully, these brief summaries of historical events will stimulate further research by our younger engineers and historians into the various subjects covered, and to that end, the following works are particularly relevant.

Digital Computation Triggers Automated Structural Analysis

In 1946, the ENIAC, "commonly accepted as the first successful high-speed electronic digital computer," became operational at the University of Pennsylvania.[800] It took up as much floor space as a medium-sized house and had to be "programmed" by physically rearranging its control connections. Many advances followed rapidly: storing instructions in memory, conditional control transfer, random access memory, magnetic core memory, and the transistor-circuit element. With these and other advances, digital computers progressed from large and ungainly experimental devices to programmable, useful, commercially available (albeit expensive) machines by the mid-1950s.[801]

Simple example of discretized structure and single element. NASA.

The FORTRAN programming language was also developed in the mid-1950s and rapidly gained acceptance in technical communities. This was a "high level language," which allowed programming instructions to be written in terms that an engineer or analyst could understand; a compiler handled the translation into "machine language" that the computer could understand. International Business Machines (IBM) developed the original FORTRAN language and also some of the early practical digital computers. Other early digital computers were produced by Control Data Corporation (CDC) and UNIVAC. These developments made it possible to take the new methods of structural analysis that were emerging and implement them in an automated, repeatable manner.

The essence of these new methods was to treat a structure as a finite number of discrete elastic elements, rather than as a continuum. Reactions (forces and moments) and deflections are only calculated at specific points, called "nodes." Elements connect the nodes. The stress and strain fields in the regions between the nodes do not need to be solved in the global analysis. They only need to be solved when developing the element-level solution, and once this is done for a particular type of element, that element is available as a prepackaged building block. Complex shapes and structures can then be built up from the simple elements. A simple example—using straight beam elements to model a curved beam structure—is illustrated here.

To find, for example, the relationship between the displacements of the nodes and the corresponding reactions, one could do the following (called the unit displacement method). First, a hypothetical unit displacement of one node in one degree of freedom (d.o.f.) only is assumed. This displacement is transposed into the local element coordinate systems of all affected elements. (In the corresponding figure, this would entail the relatively simple transformation between global horizontal and vertical displacements, and element axial and transverse displacements. The angular displacements would require no transformation, except in some cases a sign change.) The predetermined element stiffness matrices are used to find the element-level reactions. The element reactions are then translated back into global coordinates and summed to give the total structure reactions—to the single hypothetical displacement. This set of global reactions, plus zeroes for all forces unaffected by the assumed displacement, constitutes one column in the "stiffness matrix." By repeating the exercise for every degree of freedom of every node, the stiffness matrix can be built. Then the reactions to any set of nodal displacements may be found by multiplying the stiffness matrix by the displacement vector, i.e., the ordered list of displacements. This entails difficult bookkeeping but simple math.
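To make that bookkeeping concrete, here is a minimal sketch of the unit displacement method in Python (with NumPy), using two straight, axial-force-only elements to approximate a shallow arc, in the spirit of the figure. The three-node geometry, the EA value, and the function names are hypothetical, chosen only to illustrate the transform-react-sum procedure described above; a production finite element code is organized quite differently.

```python
import numpy as np

def element_stiffness_global(xi, xj, EA):
    """4x4 stiffness of a 2-D axial (truss) element in GLOBAL coordinates.

    This packages the transformation the text describes: a global
    displacement is resolved into the element's axial direction, the
    axial stiffness EA/L gives the reaction, and the reaction is
    resolved back into global components.
    """
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    L = np.hypot(dx, dy)
    c, s = dx / L, dy / L                      # direction cosines
    T = np.array([[c, s, 0, 0],
                  [0, 0, c, s]])               # global -> element axial d.o.f.
    k_local = (EA / L) * np.array([[1.0, -1.0],
                                   [-1.0, 1.0]])  # element-level stiffness
    return T.T @ k_local @ T                   # back to global coordinates

def assemble_stiffness(nodes, elements, EA):
    """Build the global stiffness matrix column by column: one
    hypothetical unit displacement per degree of freedom, reactions
    summed over all affected elements, exactly as in the text."""
    ndof = 2 * len(nodes)
    K = np.zeros((ndof, ndof))
    for j in range(ndof):                      # unit displacement in d.o.f. j
        u = np.zeros(ndof)
        u[j] = 1.0
        for (a, b) in elements:
            ke = element_stiffness_global(nodes[a], nodes[b], EA)
            dofs = [2 * a, 2 * a + 1, 2 * b, 2 * b + 1]
            K[dofs, j] += ke @ u[dofs]         # summed reactions -> column j
    return K

# Three nodes approximating a shallow arc with two straight elements:
nodes = [(0.0, 0.0), (1.0, 0.3), (2.0, 0.0)]
elements = [(0, 1), (1, 2)]
K = assemble_stiffness(nodes, elements, EA=1.0e7)
print(K.round(1))
```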

It is more common in engineering, however, to have to find unknown displacements and stresses from known applied forces. That answer cannot be obtained so directly. (That is, if the process just described seems direct to you. If it does, you are probably an engineer. If it seems too trivial to have even mentioned, then you are probably a mathematician.)

Instead, after the stiffness matrix is found, it must be inverted to obtain the flexibility matrix. The inversion of large matrices is a science in itself. But it can be done, using a computer, if one has time to wait. Most of the science lies in improving the efficiency of the process. Another important output is the stress distribution throughout the structure. But this problem has already been solved at the element level for a hypothetical set of element nodal displacements. Scaling the generic stress distribution by the actual displacements, for all elements, yields the stress state throughout the structure.
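Continuing the hypothetical sketch above, the following lines apply boundary conditions, solve for the unknown displacements under an invented load (numerically equivalent to applying the flexibility matrix, i.e., the inverse of the reduced stiffness matrix), and recover element-level axial forces by scaling the element solution by the computed displacements. The supports and load are illustrative only.

```python
fixed = [0, 1, 4, 5]                       # pin both end nodes (x and y d.o.f.)
free = [d for d in range(K.shape[0]) if d not in fixed]

f = np.zeros(K.shape[0])
f[3] = -1000.0                             # downward load at the middle node

Kff = K[np.ix_(free, free)]                # reduced stiffness matrix
u = np.zeros(K.shape[0])
u[free] = np.linalg.solve(Kff, f[free])    # flexibility-matrix step, done as a solve

EA = 1.0e7                                 # same value used in the assembly above
for (a, b) in elements:
    xi, xj = np.array(nodes[a]), np.array(nodes[b])
    L = np.linalg.norm(xj - xi)
    c, s = (xj - xi) / L
    stretch = c * (u[2*b] - u[2*a]) + s * (u[2*b + 1] - u[2*a + 1])
    print(f"element {a}-{b}: axial force = {EA / L * stretch:,.1f}")
```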

There are, of course, many variations on this theme and many complexities that cannot be addressed here. The important point is that we have gone from an insoluble differential equation to a soluble matrix arithmetic problem. This, in turn, has enabled a change from individual analyses by hand of local portions of a structure to a modeling effort followed by an automated calculation of the stresses and deflections of the entire structure.

Pioneering papers on discretization of structures were published by Alexander Hrennikoff in 1941 at the Massachusetts Institute of Technology and by Richard Courant in 1943 at the mathematics institute he founded at New York University that would later bear his name. These papers did not lead to immediate application, in part perhaps because they were ahead of the necessary computational technology and in part because they were still somewhat theoretical and had not yet developed a well-formed practical implementation. The first example of what we now call the finite element method (FEM) is commonly considered to be a paper by M. J. Turner (Boeing), R. W. Clough (University of California at Berkeley, Civil Engineering Department), H. C. Martin (University of Washington, Aeronautical Engineering Department), and L. J. Topp in 1956.[802] This paper presented a method for plane stress problems, using triangular elements. John Argyris at the University of Stuttgart, Germany, also made important early contributions. The term "finite element method" was actually coined by Clough in 1960. The Civil Engineering Department at Berkeley became a major center of early finite element methods development.[803]

By the mid-1960s, aircraft companies, computing companies, universities, and Government research centers were beginning to explore the possibilities—although the method allegedly suffered some initial lack of interest in the academic world, because it bypassed elegant mathematical solutions in favor of numerical brute force.[804] However, the practical value could not long be ignored. The following insightful comment, made by a research team at the University of Denver in 1966 (working under NASA sponsorship), sums up the expectation of the period: "It is certain that this concept is going to become one of the most important tools of engineering in the future as structures become more complex and computers more versatile and available."[805]

Modern Rotor Aerodynamic Limits Survey

The Modern Rotor Aerodynamic Limits Survey was a 10-year program launched in 1984, which encompassed flight efforts in 1987 and 1993-1994. In 1987, a Sikorsky UH-60A Black Hawk was tested with conventional structural instrumentation installed on the rotor blades. Then:

. . . Sikorsky Aircraft was [subsequently] contracted to build a set of highly instrumented blades for the Black Hawk test aircraft: a pressure blade with 242 absolute pressure transducers and a strain-gauge blade with an extensive suite of strain gauges and accelerometers. . . approximately 30 gigabytes of data were obtained in 1993-94 and installed in an electronic database that was immediately accessible to the domestic rotorcraft industry.[948]

NASTRAN model and NASTRAN to static test comparison. NASA.

The two types of measurement systems are complementary. Strain gauges give an indication of the total load in a member, but little insight into the details of where and how the load is generated. The pressure taps show the distribution of the applied aerodynamic load, but only at given stations, so the total load estimate depends on how one interpolates the data through the unmeasured regions between the pressure transducers. The combination of both types of data is most useful to researchers trying to correlate computational loads predictions with the test data.
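The dependence of the total load estimate on how the gaps between transducers are filled is easy to demonstrate. The short sketch below, with entirely hypothetical tap stations and pressure-coefficient differences, integrates the same seven "measurements" two defensible ways and obtains two different sectional normal-force coefficients:

```python
import numpy as np

x_c = np.array([0.02, 0.05, 0.10, 0.25, 0.50, 0.75, 0.95])  # tap stations, x/c
dCp = np.array([4.1, 3.0, 2.2, 1.3, 0.7, 0.3, 0.1])         # lower-minus-upper Cp

# Trapezoidal rule between taps -- one defensible choice:
cn_trap = np.sum((dCp[:-1] + dCp[1:]) / 2.0 * np.diff(x_c))

# Midpoint (rectangle) rule, extending the end panels to x/c = 0 and 1 --
# an equally simple choice that gives a noticeably different total:
edges = np.concatenate(([0.0], (x_c[:-1] + x_c[1:]) / 2.0, [1.0]))
cn_rect = np.sum(dCp * np.diff(edges))

print(f"c_n (trapezoid) = {cn_trap:.3f}, c_n (midpoint) = {cn_rect:.3f}")
```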

HiMAT

HiMAT was a small, unpiloted aircraft (23.5 feet long, with a 15.6-foot wingspan and a weight just over 3,000 pounds) somewhat representative of a fighter-type configuration, flown between 1979 and 1983 and developed to evaluate the following set of technologies and features:

• Close-coupled canard.

• Winglets.

• Digital fly-by-wire flight control.

• Composite structure.

• Aeroelastic tailoring.

• Supercritical airfoil.

It was intended that the benefits of these collected advances be shown together rather than separately, and on an unpiloted platform, so that the vehicle could be tested more aggressively without danger to a pilot.[949]

HiMAT Electro-Optical Flight Deflection Measurement System. NASA.

"Aeroelastic tailoring” refers to the design of a structure to achieve aerodynamically favorable deformation under load, rather than the more traditional approach of simply minimizing deformation. The goal of aero­elastic tailoring on the HiMAT ". . . was to achieve an aero-dynamically favorable spanwise twist distribution for maneuvering flight conditions” in the canard and the outboard wing. "The NASTRAN program was used to compute structural deflections at each model grid point. Verification of these deflections was accomplished by performing a loads test prior to delivery of the vehicle to NASA.” The ground-test loads were based on a sustained 8-g turn at Mach 0.9, which was one of the key performance design points of the aircraft. The NASTRAN model and a comparison between predicted and measured deflections are shown in the accompa­nying figure. Canard and wing twist were less than predicted. The differ­ence was attributed to insufficient understanding of the matrix-dominated laminate material properties.[950]

The vehicle was also equipped with a system to measure deflections of the wing surface in flight. Light emitting diodes (LEDs)—referred to as targets—on the wing upper surface were detected by a photodiode array mounted on the fuselage, at a location overlooking the wing. Three inboard targets were used to determine a reference plane, from which the deflection of the remaining targets could be measured. To measure wing twist, targets were positioned primarily in pairs along the front and rear wing spars.[951] The HiMAT wing had a relatively small number of targets—only two pairs besides the inboard reference set—so the in-flight measurements were not a detailed survey of the wing by any means. Rather, they provided measurement at a few key points, which could then be compared with the NASTRAN data and the ground loads test data. Target and receiver locations are illustrated here, together with a sample of the deflection data at the 8-g maneuver condition. In-flight deflection data showed similar twist to the ground-test data, indicating that the aerodynamic loads were well predicted.[952]
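The geometry of such a data reduction can be sketched briefly. In the hypothetical example below (the coordinates, the plane-fitting helper, and the single spar pair are invented stand-ins, not actual HiMAT data), three inboard targets define the reference plane, deflection is the signed distance of a target from that plane, and twist follows from the difference across a front/rear spar pair:

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    """Unit normal and anchor point of the plane through three targets."""
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n), p1

def deflection(target, normal, point):
    """Signed distance of a target from the reference plane."""
    return np.dot(target - point, normal)

# Three inboard reference targets (x aft, y spanwise, z up), meters:
ref = [np.array(p) for p in [(0.2, 0.3, 0.0), (0.8, 0.3, 0.0), (0.5, 0.6, 0.0)]]
normal, point = plane_from_points(*ref)

# One front/rear spar target pair near the tip, displaced under load:
front = np.array([0.30, 2.2, 0.110])
rear = np.array([0.95, 2.2, 0.072])

dz_front = deflection(front, normal, point)
dz_rear = deflection(rear, normal, point)
chord_gap = rear[0] - front[0]

twist_deg = np.degrees(np.arctan2(dz_rear - dz_front, chord_gap))
print(f"bending deflection ~ {dz_front:.3f} m, twist ~ {twist_deg:.2f} deg")
```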

The HiMAT was an early step in the development of aeroelastic tailoring capability, providing a set of NASTRAN data, static load test data, and flight-test data for surface deflection at a given loading condition. The project also proved out the electro-optical system for in-flight deflection measurements, which would later be used in the X-29 project.

High-Temperature Structures and Materials

T. A. Heppenheimer

Taking fullest advantage of the high-speed potential of rocket and airbreathing propulsion systems required higher-temperature structures. Researchers recognized that aerothermodynamics involved linking aerodynamic and thermodynamic understanding with the mechanics of thermal loading and deformation of structures. This drove use of new structural materials. NASA and other engineers would experiment with active and passive thermal protection systems, metals, and materials.

IN AEROSPACE ENGINEERING, high-temperature structures and materials solve two problems. They are used in flight above Mach 2 to overcome the elevated temperatures that occur naturally at such speeds. They also are extensively used at subsonic velocities, in building high-quality turbofan engines, and for the protection of structures exposed to heating.

Aluminum loses strength when exposed to temperatures above 210 degrees Fahrenheit (°F). This is why the Concorde airliner, which was built of this material, cruised at Mach 2.1 but did not go faster.[1013] Materials requirements come to the forefront at higher speeds and escalate sharply as airplanes’ speeds increase. The standard solutions have been to use titanium and nickel, and a review of history shows what this has meant.
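The aluminum ceiling can be checked with the standard stagnation temperature relation for air. The short sketch below assumes a standard stratosphere ambient temperature (216.65 K); actual skin recovery temperatures run somewhat below the stagnation value, and localized heating can run higher, but the trend makes the point:

```python
# Stagnation temperature T0 = T * (1 + 0.2 * M^2) for air (gamma = 1.4),
# converted to degrees Fahrenheit. Ambient value assumes the standard
# stratosphere; the flight conditions are illustrative only.
def stagnation_temp_F(mach, ambient_K=216.65):
    t0_K = ambient_K * (1.0 + 0.2 * mach**2)
    return t0_K * 9.0 / 5.0 - 459.67

for m in (0.9, 2.1, 3.2):
    print(f"Mach {m}: stagnation temperature ~ {stagnation_temp_F(m):.0f} F")
# Mach 0.9 -> about -7 F; Mach 2.1 -> about 274 F, past the aluminum
# limit at the stagnation point; Mach 3.2 -> about 729 F.
```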

Many people wrote about titanium during the 1950s, but to reduce it to practice was another matter. Alexander "Sasha" Kartveli, chief designer at Republic Aviation, proposed a titanium F-103 fighter, but his vision outreached his technology, and although started, it never flew. North American Aviation's contemporaneous Navaho missile program introduced chemical milling (etching out unwanted material) for aluminum as well as for titanium and steel, and was the first to use titanium skin in an aircraft. However, the version of Navaho that was to use these processes never flew, as the program was canceled in 1957.[1014]

The Lockheed Blackbird experienced a wide range of upper surface temperatures, up to 600 °F. NASA.

The Lockheed A-12 Blackbird, progenitor of a family of exotic Mach 3.2 cruisers that included the SR-71, encountered temperatures as high as 1,050 °F, which required that 93 percent of its structural weight be titanium. The version selected was B-120 (Ti-13V-11Cr-3Al), which has the tensile strength of stainless steel but weighs only half as much. But titanium is not compatible with chlorine, cadmium, or fluorine, which led to difficulties. A line drawn on a sheet of titanium with a pen would eat a hole into it in a few hours. Boltheads tended to fall away from assemblies; this proved to result from tiny cadmium deposits made by tools. This brought removal of all cadmium-plated tools from toolboxes. Spot-welded panels produced during the summer tended to fail because the local water supply was heavily chlorinated to kill algae. The managers took to washing the parts in distilled water, and the problem went away.[1015]

The SR-71 was a success. Its shop-floor practice with titanium at first was classified but now has entered the aerospace mainstream. Today’s commercial airliners—notably the Boeing 787 and the Airbus A-380, together with their engines—use titanium as a matter of routine. That is because this metal saves weight.

Beyond Mach 4, titanium falters, and designers must turn instead to alternatives. The X-15 was built to top Mach 6 and to reach 1,200 °F. In competing for the contract, Douglas Aircraft proposed a design that was to use magnesium, whose properties were so favorable that the aircraft would only reach 600 °F. But this concept missed the point, for managers wanted a vehicle that would cope successfully with temperatures of 1,200 °F. Hence it was built of Inconel X, a nickel alloy.[1016]

High-speed flight represents one application of advanced metals. Another involves turbofans for subsonic flight. This application lacks the drama of Mach-breaking speeds but is far more common. Such engines use turbine blades, with the blade itself being fabricated from a single-crystal superalloy and insulated with ceramics. Small holes in the blade promote a circulation of cooler gas that is ducted downstream from high-pressure stages of the compressor. The arrangement can readily allow turbines to run at temperatures 750 °F above the melting point of the superalloy itself.[1017]
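A rough illustration of why this works: if cooling effectiveness is defined as eta = (T_gas - T_metal) / (T_gas - T_coolant), then the tolerable gas temperature rises rapidly with eta. The numbers below are hypothetical, not data for any particular engine:

```python
# Solving the effectiveness definition for the gas temperature gives
# T_gas = (T_metal - eta * T_coolant) / (1 - eta). Values are invented
# placeholders for illustration.
def max_gas_temp_K(t_metal_K, t_coolant_K, eta):
    return (t_metal_K - eta * t_coolant_K) / (1.0 - eta)

t_metal = 1250.0   # superalloy limit, K (hypothetical)
t_cool = 900.0     # compressor bleed temperature, K (hypothetical)
for eta in (0.5, 0.6, 0.7):
    margin = max_gas_temp_K(t_metal, t_cool, eta) - t_metal
    print(f"eta = {eta}: gas can run {margin:.0f} K above the metal limit")
```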

The Air Force JB-47E Fly-By-Wire Project

The USAF Flight Dynamics Laboratory at Wright-Patterson Air Force Base (AFB), OH, sponsored a number of technology efforts and flight-test programs intended to increase the survivability of aircraft flight control system components such as fly-by-wire hydraulic actuators. Beginning in 1966, a Boeing B-47E bomber was progressively modified (being redesignated JB-47E) to incorporate analog computer-controlled fly-by-wire actuators for both pitch and roll control, with pilot inputs being provided via a side stick controller. The program spanned three phases. For Phase I testing, the JB-47E only included fly-by-wire in its pitch axis. This axis was chosen because the flight control system in the standard B-47E was known to have a slow response in pitch because of the long control cables to the elevator stretching under load. Control signals to the pitch axis FBW actuator were generated by a transducer attached to the pilot's control column. The pilot had a simple switch in the cockpit that allowed him to switch between the standard hydromechanical flight control system (which was retained as a backup) and the computer-controlled FBW system. Modified thus, the JB-47E flew for the first time in December 1967. Test pilots reported that the modified B-47 had better handling qualities than were attainable with the standard B-47E elevator control system, especially in high-speed, low-level flight.[1134]

Phase II of the JB-47E program added fly-by-wire roll control and a side stick controller that used potentiometers to measure pilot input. By the end of the flight-test program, over 40 pilots had flown the FBW JB-47E. The Air Force chief test pilot during Phase II, Col. Frank Geisler, reported: "In ease of control there is no comparison between the standard system and the fly-by-wire. The fly-by-wire is superior in every aspect concerning ease of control. . . . It is positive, it is rapid—it responds well—and best of all the feel is good."[1135] Before the JB-47E Phase III flight-test program ended in early 1969, a highly reliable four-channel redundant electrohydraulic actuator had been installed in the pitch axis and successfully evaluated.[1136] By this time, the Air Force had already initiated Project 680J, the Survivable Flight Control System (SFCS), which resulted in the prototype McDonnell-Douglas YF-4E Phantom aircraft being modified into a testbed to evaluate the potential benefits of fly-by-wire in a high-performance, fighter-type aircraft.[1137] The SFCS YF-4E was intended to validate the concept that dispersed, redundant fly-by-wire flight control elements would be less vulnerable to battle damage, as well as to improve the performance of the flight control system and increase overall mission effectiveness.

Strike Technology Testbed

In the summer of 1991, a flight-test effort oriented to close air support and battlefield air interdiction began. The focus was to demonstrate technologies to locate and destroy ground targets day or night, in good weather or bad, while maneuvering at low altitudes. The AFTI/F-16 was modified with two forward-looking infrared sensors mounted in turrets on the upper fuselage ahead of the canopy. The pilot was equipped with a helmet-mounted sight that was integrated with the infrared sensors. As he moved his head, they followed his line of sight and transmitted their images to eyepieces mounted in his helmet. The nose-mounted canards used in earlier AFTI/F-16 testing were removed. Testing emphasized giving pilots the capability to fly their aircraft and attack targets in darkness or bad weather. To assist in this task, a digital terrain map was stored in the aircraft computer. Advanced terrain following was also evaluated. This used the AFTI/F-16's radar to scan terrain ahead of the aircraft and automatically fly over or around obstacles. The pilot could select minimum altitudes for his mission. The system would automatically detect when the aircraft was about to descend below this altitude and initiate a 5 g pullup maneuver. The advanced terrain following system was connected to the Automated Maneuvering Attack System, enabling the pilot to deliver weapons from altitudes as low as 500 feet in a 5 g turn. An automatic Pilot Activated Recovery System was integrated with the flight control system. If the pilot became disoriented at night or in bad weather, he could activate a switch on his side controller. This caused the flight control computer to automatically recover the aircraft, putting it into a wings-level climb. Many of these technologies have subsequently transitioned into upgrades to existing fighter/attack aircraft.[1187]
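The altitude-floor arithmetic behind such a pullup cue can be sketched simply: in a constant-load-factor pullup the flight path follows an arc of radius R = V^2 / (g(n - 1)), so a dive at angle gamma consumes R(1 - cos gamma) of altitude before the flight path returns to level. The example below uses hypothetical numbers and ignores pilot and system reaction time, which a real system must add:

```python
import math

def pullup_altitude_loss_ft(speed_kt, dive_deg, load_factor_g):
    """Altitude lost during a constant-load-factor pullup from a dive."""
    g = 32.174                       # ft/s^2
    v = speed_kt * 1.68781           # knots -> ft/s
    r = v**2 / (g * (load_factor_g - 1.0))          # arc radius, ft
    return r * (1.0 - math.cos(math.radians(dive_deg)))

floor_ft = 500.0                     # pilot-selected minimum altitude
loss = pullup_altitude_loss_ft(speed_kt=480, dive_deg=10, load_factor_g=5.0)
print(f"begin the 5 g pullup ~{loss:.0f} ft above the {floor_ft:.0f}-ft floor")
```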

The final incarnation of this unique aircraft would be as the AFTI/F-16 power-by-wire flight technology demonstrator.

Performance Seeking Control

The Performance Seeking Control (PSC) effort followed the Adaptive Engine Control System project. Previous engine control modes utilized on the HIDEC aircraft used stored schedules of optimum engine pressure ratios based on an average engine on a normal standard day. Using digital flight control, inlet control, and engine control systems, PSC used highly advanced computational techniques and control laws to identify the actual condition of the engine components and optimize the overall propulsion system for best efficiency based on the actual engine and flight conditions that the aircraft was encountering, ensuring the highest engine and maneuvering performance in all flight environments. PSC testing with the HIDEC aircraft began in 1990. Results of flight-testing with PSC included increased fuel efficiency, improved engine thrust during accelerations and climbs, and increased engine service life achieved by reductions in turbine inlet temperature. Flight-testing demonstrated turbine inlet temperature reductions of more than 160 °F. Such large operating temperature reductions can significantly extend the life of jet engines. Additionally, improvements in thrust of between 9 percent and 15 percent were observed in various flight conditions, including acceleration and climb.[1265] PSC also included the development of methodologies within the digital engine control system designed to detect engine wear and impending failure of certain engine components. Such information, coupled with normal preventative maintenance, could assist in implementing future fail-safe propulsion systems.[1266] The flight demonstration and evaluation of the PSC system at NASA Dryden directly contributed to the rapid transition of the technology into operational use. For example, PSC technology has been applied to the F100 engine used in the F-15 Eagle, the F119 engine in the F-22 Raptor, and the F135 engine for the F-35 Lightning II.

Numerical Propulsion System Simulation

NASA and its contractor colleagues soon found another use for computers to help improve engine performance. In fact, looking back at the history of NASA's involvement with improving propulsion technology, a trilogy of major categories of advances can be suggested, based on the development of the computer and the evolving role that electronic thinkers have played in our culture.

Part one of this story includes all the improvements NASA and its industry partners have made with jet engines before the computer came along. Having arrived at a basic operational design for a turbojet engine—and its relations, the turboprop and turbofan—engineers sought to improve fuel efficiency, reduce noise, decrease wear, and otherwise reduce the cost of maintaining the engines. They did this through such efforts as the Quiet Clean Short Haul Experimental Engine and Aircraft Energy Efficiency program, detailed earlier in this case study. By tinkering with the individual components and testing the engines on the ground and in the air for thousands of hours, incremental advances were made.[1338]

Part two of the story introduces the capabilities made available to engineers as computers became powerful enough and small enough to be incorporated into the engine design. Instead of requiring the pilot to manually make occasional adjustments to the engine operation in flight depending on what the instruments read, a small digital computer built into the engine senses thousands of measurements per minute and causes an equal number of adjustments to be made to keep the powerplant performing at peak efficiency. With the Digital Electronic Engine Control, engines designed years before behaved as though they were fresh off the drawing boards, thanks to their increased capabilities.[1339]

Having taken engine designs about as far as it was thought possible, the need for even more fuel-efficient, quieter, and capable engines continued. Unfortunately, developing a new engine from scratch, building it, and testing it in flight can cost millions of dollars and take years to accomplish. What the aerospace industry needed was a way to take advantage of the powerful computers available at the dawn of the 21st century to make the engine development process less expensive and timelier. The result was part three of NASA's overarching story of engine development: the Numerical Propulsion System Simulation (NPSS) program.[1340]

Working with the aerospace industry and academia, NASA's Glenn Research Center led the collaborative effort to create the NPSS program, which was funded and operated as part of the High Performance Computing and Communications program. The idea was to use modern simulation techniques and create a virtual engine and test stand within a virtual wind tunnel, where new designs could be tried out, adjustments made, and the refinements exercised again without costly and time-consuming tests in the "real" world. As stated in a 1999 industry review of the program, the NPSS was built around inclusion of three main elements: "Engineering models that enable multi-disciplinary analysis of large subsystems and systems at various levels of detail, a simulation environment that maximizes designer productivity and a cost-effective, high-performance computing platform."[1341]

In explaining to the industry the potential value of the program during a 2006 American Society of Mechanical Engineers conference in Spain, a NASA briefer from Glenn suggested that if a standard turbojet development program for the military—such as the F100—took 10 years, $1.5 billion, construction of 14 ground-test engines, 9 flight-test engines, and more than 11,000 hours of engine tests, the NPSS program could realize a:

• 50-percent reduction in tooling cost.

• 33-percent reduction in the average development engine cost.

• 30-percent reduction in the cost of fabricating, assembling, and testing rig hardware.

• 36-percent reduction in the number of development engines.

• 60-percent reduction in total hardware cost.[1342]

A key—and groundbreaking—feature of NPSS was its ability to integrate simulated tests of different engine components and features, and run them as a whole, fully modeling all aspects of a turbojet's operation. The program did this through the use of the Common Object Request Broker Architecture (CORBA), which essentially provided a shared language among the objects and disciplines (mechanical, thermodynamics, structures, gas flow, etc.) being tested so the resulting data could be analyzed in an "apples to apples" manner. Through the creation of an NPSS developer's kit, researchers had tools to customize the software for individual needs, share secure data, and distribute the simulations for use on multiple computer operating systems. The kit also provided for the use of CORBA to "zoom" in on the data to see specific information with higher fidelity.[1343]

Begun in 1997, the NPSS team consisted of propulsion experts and software engineers from GE, Pratt & Whitney, Boeing, Honeywell, Rolls-Royce, Williams International, Teledyne Ryan Aeronautical, Arnold Engineering Development Center, Wright-Patterson AFB, and NASA's Glenn Research Center. By the end of the 2000 fiscal year, the NPSS team had released Version 1.0.0 on schedule. According to a summary of the program produced that year:

(The new software) can be used as an aero-thermodynamic zero-dimensional cycle simulation tool. The capabilities include text-based input syntax, a sophisticated solver, steady-state and transient operation, report generation, a built-in object-oriented programming language for user-definable components and functions, support for distributed running of external codes via CORBA, test data reduction, interactive debug capability and customer deck generation.[1344]
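For readers unfamiliar with the term, a zero-dimensional cycle simulation treats each engine component as a station-to-station thermodynamic process with no spatial detail. The sketch below is not NPSS code (NPSS has its own input language); it is a generic, textbook-style ideal turbojet cycle with invented input values, shown only to illustrate the kind of calculation such a tool automates:

```python
import math

def ideal_turbojet(M0, T0_K, pi_c, Tt4_K, gamma=1.4, cp=1004.5, h_pr=42.8e6):
    """Ideal (loss-free) turbojet cycle, per standard propulsion texts."""
    a0 = math.sqrt((gamma - 1.0) * cp * T0_K)        # ambient speed of sound, m/s
    tau_r = 1.0 + 0.5 * (gamma - 1.0) * M0**2        # ram temperature ratio
    tau_c = pi_c ** ((gamma - 1.0) / gamma)          # compressor temperature ratio
    tau_l = Tt4_K / T0_K                             # cycle temperature limit
    tau_t = 1.0 - tau_r * (tau_c - 1.0) / tau_l      # turbine work = compressor work
    v9_a0 = math.sqrt(2.0 / (gamma - 1.0) * tau_l / (tau_r * tau_c)
                      * (tau_r * tau_c * tau_t - 1.0))
    spec_thrust = a0 * (v9_a0 - M0)                  # N per (kg/s) of airflow
    f = cp * T0_K * (tau_l - tau_r * tau_c) / h_pr   # fuel-air ratio
    tsfc = f / spec_thrust * 1e6                     # mg of fuel per N*s
    return spec_thrust, tsfc

# Hypothetical design point: Mach 0.8 in the stratosphere, pressure
# ratio 20, turbine inlet temperature 1,600 K:
st, tsfc = ideal_turbojet(M0=0.8, T0_K=216.65, pi_c=20.0, Tt4_K=1600.0)
print(f"specific thrust ~ {st:.0f} N/(kg/s), TSFC ~ {tsfc:.1f} mg/(N*s)")
```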

Additional capabilities were added in 2001, including the ability to support development of space transportation technologies. At the same time, the initial NPSS software quickly found applications in aviation safety, ground-based power, and alternative energy devices, such as fuel cells. Moreover, project officials at the time suggested that with the further development of the software, other applications could be found for the program in the areas of nuclear power, water treatment, biomedicine, chemical processing, and marine propulsion. NPSS proved to be so capable and promising of future applications that NASA designated the program a cowinner of the NASA Software of the Year Award for 2001.[1345]

Work to improve the capabilities and expand the applications of the software continued, and, in 2008, NASA transferred NPSS to a consortium of industry partners, and, through a Space Act Agreement, it is currently offered commercially by Wolverine Ventures, Inc., of Jupiter, FL. Now at Version 1.6.5, NPSS's features include the ability to model all types of complex systems, plug-and-play interfaces for fluid properties, a built-in plotting package, an interface to higher-fidelity legacy codes, multiple model views, a command language interpreter with a language-sensitive text editor, a comprehensive component solver, and variable setup controls. It also can operate on Linux, Windows, and UNIX platforms.[1346]

Originally begun as a virtual tool for designing new turbojet engines, NPSS has since found uses in testing rocket engines, fuel cells, analog controls, combined cycle engines, thermal management systems, preliminary design of airframe vehicles, and commercial and military engines.[1347]

Ultra Efficient Engine Technology Program

With the NPSS tool firmly in place and some four decades of experience incrementally improving the design, operation, and maintenance of the jet engine, it was time to go for broke and assemble an ultrabright team of engineers to come up with nothing short of the best jet engine possible.

Building on the success of technology development programs such as the Quiet Clean Short Haul Experimental Engine and Energy Efficient Engine project—all of which led directly to the improvements and production of turbojet engines now propelling today's commercial airliners—NASA approached the start of the 21st century with plans to push jet engine design to even more impressive feats. In 1999, the Aeronautics Directorate of NASA began the Ultra Efficient Engine Technology (UEET) program—a 5-year, $300-million effort—with two primary goals. The first was to find ways to enable further improvements in engine efficiency that would reduce fuel burn and, as a result, carbon dioxide emissions by yet another 15 percent. The second was to continue developing new materials and configuration schemes in the engine's combustor to reduce emissions of nitrogen oxides (NOx) during takeoff and landings by 70 percent relative to the standards detailed in 1996 by the International Civil Aviation Organization.[1348]

NASA’s Glenn Research Center led the program, with participation from three other NASA Centers: Ames, Langley, and the Goddard Space Flight Center in Greenbelt, MD. Also involved were GE, Pratt & Whitney, Honeywell, Allison/Rolls-Royce, Williams International, Boeing, and Lockheed Martin.[1349]

The program comprised seven major projects, each of which addressed particular technology needs and exploitation opportunities.[1350] The Propulsion Systems Integration and Assessment project examined overall component technology issues relevant to the UEET program to help furnish overall program guidance and identify technology shortfalls.[1351] The Emissions Reduction project sought to significantly reduce NOx and other emissions, using new combustor concepts and technologies such as lean-burning combustors with advanced controls and high-temperature ceramic matrix composite materials.[1352] The Highly Loaded Turbomachinery project sought to design lighter-weight, reduced-stage cores, low-pressure spools, and propulsors for more efficient and environmentally friendly engines, and advanced fan concepts for quieter, lighter, and more efficient fans.[1353] The Materials and Structures for High Performance project sought to develop and demonstrate high-temperature material concepts such as ceramic matrix composite combustor liners and turbine vanes, advanced disk alloys, turbine airfoil material systems, high-temperature polymer matrix composites, and innovative lightweight materials and structures for static engine structures.[1354] The Propulsion-Airframe Integration project studied propulsion systems and engine locations that could furnish improved engine and environmental benefits without compromising the aerodynamic performance of the airplane; because lowering aircraft drag is itself a highly desirable means of reducing fuel burn and, hence, CO2 emissions, the project developed advanced technologies to yield lower-drag integration of the propulsion system with the airframe for a wide range of vehicle classes. Decreasing drag improves air vehicle performance and efficiency, which reduces the fuel burned to accomplish a particular mission, thereby reducing CO2 emissions.[1355] The Intelligent Propulsion Controls project sought to capitalize upon breakthroughs in electronic control technology to improve propulsion system life and enhance flight safety by integrating information, propulsion, and integrated flight propulsion control technologies.[1356] Finally, the Integrated Component Technology Demonstrations project sought to evaluate the benefits of off-the-shelf propulsion systems integration on NASA, Department of Defense, and aeropropulsion industry partnership efforts, including both the UEET and the military's Integrated High Performance Turbine Engine Technology (IHPTET) programs.[1357]

By 2003, the seven project areas had come up with 10 specific technology areas that UEET would investigate and incorporate into an engine that would meet the program's goals for reducing pollution and increasing fuel efficiency. The technology goals included:

1. Advanced low-NOx combustor design that would feature a lean burning concept.

2. A highly loaded compressor that would lower system weight, improve overall performance, and result in lower fuel burn and carbon dioxide emissions.

3. A highly loaded, high-pressure turbine that could allow a reduction in the number of high-pressure stages, parts count, and cooling requirements, all of which could improve fuel burn and lower carbon dioxide emissions.

4. A highly loaded, low-pressure turbine and aggressive transition duct using flow control techniques to reduce the number of low-pressure stages within the engine.

5. Use of a ceramic matrix composite turbine vane that would allow high-pressure vanes to operate at a higher inlet temperature, which would reduce the amount of engine cooling necessary and result in lower carbon dioxide emissions.

6. The same ceramic matrix composite material would be used to line the combustor walls so it could operate at a higher temperature and reduce NOx emissions.

7. Coat the turbine airfoils with a ceramic thermal barrier material to allow the turbines to operate at a higher tem­perature and thus reduce carbon dioxide emissions.

8. Use advanced materials in the construction of the turbine airfoil and disk. Specifically, use a lightweight single crystal superalloy to allow the turbine blades and vanes to operate at a higher temperature and reduce carbon dioxide emissions, as well as a dual microstructure nickel-base superalloy to manufacture turbine disks tailored to meet the demands of the higher-temperature environment.

9. Determine advanced materials and structural concepts for an improved, lighter-weight, impact-damage-tolerant, and noise-reducing fan containment case.

10. Develop active tip clearance control technology for use in the fan, compressor, and turbine to improve each component's efficiency and reduce carbon dioxide emissions.[1358]

In 2003, the UEET program was integrated into NASA's Vehicle Systems program to enable the engine work to be coordinated with research into improving other areas of overall aircraft technology. But in the wake of policy changes associated with the 2004 decision to redirect NASA's space program to retire the Space Shuttle and return humans to the Moon, the Agency was forced to redirect some of its funding to Exploration, forcing the Aeronautics Directorate to give up the $21.6 million budgeted for UEET in fiscal year 2005, effectively canceling the biggest and most complicated jet engine research program ever attempted. At the same time, NASA was directed to realign its jet engine research to concentrate on further reducing noise.[1359]

Nevertheless, results from tests of UEET hardware showed promise that a large, subsonic aircraft equipped with some of the technologies detailed above would have a "very high probability" of achieving the program goals laid out for reducing emissions of carbon dioxide and other pollutants. The data remain for application to future aircraft and engine schemes.[72]

1973 RANN Symposium Sponsored by the National Science Foundation

In reviewing the current status and potential of wind energy, Ronald Thomas and Joseph M. Savino, both from NASA's Lewis Research Center, presented a paper in November 1973 at the Research Applied to National Needs Symposium in Washington, DC, sponsored by the National Science Foundation. The paper reviewed past experience with wind generators, problems to be overcome, the feasibility of wind power to help meet energy needs, and the planned Wind Energy Program. Thomas and Savino pointed out that the Dutch had used windmills for years to provide power for pumping water and grinding grain; that the Russians built a 100-kilowatt generator at Balaclava in 1931 that fed into a power network; that the Danes used wind as a major source of power for many years, including the building of the 200-kilowatt Gedser mill system that operated from 1957 through 1968; that the British built several large wind generators in the early 1950s; that the Smith-Putnam wind turbine built in Vermont in 1941 supplied power into a hydroelectric power grid; and that the Germans did fine work in the 1950s and 1960s building and testing machines of 10 and 100 kilowatts. The two NASA engineers noted, however, that in 1973, no large wind turbines were in operation.

Thomas and Savino concluded that preliminary estimates indicated that wind could supply a significant amount of the Nation's electricity needs and that utilizing energy from the wind was technically feasible, as evidenced by the past development of wind generators. They added, however, that a sustained development effort was needed to obtain economical systems. They noted that the effects of wind variability could be reduced by storage systems or connecting wind generators to fossil fuel or hydroelectric systems, or dispersing the generated electricity throughout a large grid system. Thomas and Savino[1497] recommended a number of steps that the NASA and National Science Foundation program should take, including: (1) designing, building, and testing modern machines for actual applications in order to provide baseline information for assessing the potential of wind energy as an electric power source, (2) operating wind generators in selected applications for determining actual power costs, and (3) identifying subsystems and components that might be further reduced in costs.[1498]