Category AERONAUTICS

Shuttle Aerodynamics and Structures

The Shuttle was one of the last major aircraft to rely almost entirely on wind tunnels for studies of its aerodynamics. There was much interest in an alternative: the use of supercomputers to derive aerodynamic data through solution of the governing equations of airflow, known as the Navier-Stokes equations. Solution of the complete equations was out of the question, for they carried the full physics of turbulence, with turbulent eddies that spanned a range of sizes covering several orders of magnitude. But during the 1970s, investigators made headway by dropping the terms within these equations that contained viscosity, thereby suppressing turbulence.[629]

People pursued numerical simulation because it offered hope of overcoming the limitations of wind tunnels. Such facilities usually tested small models that failed to capture important details of the aerodynamics of full-scale aircraft. Other errors arose from tunnel walls and model supports. Hypersonic flight brought its own restrictions. No installation had the power to accommodate a large model, realistic in size, at the velocity and temperatures of reentry.[630]

By piecing together results from specialized facilities, it was possible to gain insights into flows at near-orbital speeds. The Shuttle reentered at Mach 27. NASA Langley had a pair of wind tunnels that used helium, which expands to very high flow velocities. These attained Mach 20, Mach 26, and even Mach 50. But their test models were only a few inches in size, and their flows were very cold and could not duplicate the high temperatures of atmosphere entry. Shock tunnels, which heated and compressed air using shock waves, gave true temperature up to Mach 17 while accommodating somewhat larger models. Yet their flow durations were measured in milliseconds.[631]

During the 1970s, the largest commercially available mainframe computers included the Control Data 7600 and the IBM 370-195.[632] These sufficed to treat complete aircraft—but only at the lowest level of approximation, which used linearized equations and treated the airflow over an airplane as a small disturbance within a uniform free stream. The full Navier-Stokes equations contained 60 partial derivatives; the linearized approximation retained only 3 of these terms. It nevertheless gave good accuracy in computing lift, successfully treating such complex configurations as a Shuttle orbiter mated to its 747. The next level of approximation restored the most important nonlinear terms and treated transonic and hypersonic flows, which were particularly difficult to simulate in wind tunnels. The inadequacies of wind tunnel work had brought such errors as faulty predictions of the location of shock waves along the wings of the C-141, an Air Force transport. In flight test, this plane tended to nose downward, and its design had to be modified at considerable expense.

Computers such as the 7600 could not treat complete aircraft in transonic flow, for the equations were more complex and the computation requirements more severe. HiMAT, a highly maneuverable NASA experimental aircraft, flew at Dryden and showed excessive drag at Mach 0.9. Redesign of its wing used a transonic-flow computational code and approached the design point. The same program, used to reshape the wing of the Grumman Gulfstream, gave considerable increases in range and fuel economy while reducing the takeoff distance and landing speed.[633]

During the 1970s, NASA's most powerful computer was the Illiac IV, at Ames Research Center. It used parallel processing and had 64 processing units, achieving speeds up to 25 million operations per second. Built by Burroughs Corporation with support from the Pentagon, this machine was one of a kind. It entered service at Ames in 1973 and soon showed that it could run flow-simulation codes an order of magnitude more rapidly than a 7600. Indeed, its performance foreshadowed the Cray-1, a true supercomputer that became commercially available only after 1976.

The Illiac IV was a research tool, not an instrument of mainstream Shuttle development. It extended the reach of flow codes, treating three-dimensional inviscid problems while supporting simulations of viscous flows that used approximate equations to model the turbulence.[634] In the realm of Space Shuttle studies, Ames's Walter Reinhardt used it to run a three-dimensional inviscid code that included equations of atmospheric chemistry. Near peak entry heating, the Shuttle would be surrounded by dissociated air that was chemically reacting and not in chemical equilibrium. Reinhardt's code treated the full-scale orbiter during entry and gave a fine example of the computational simulation of flows that were impossible to reproduce in ground facilities.[635]

Such exercises gave tantalizing hints of what would be done with computers of the next generation. Still, the Shuttle program was at least a decade too early to use computational simulations both routinely and effectively. NASA therefore used its wind tunnels. The wind tunnel program gave close attention to low-speed flight, which included approach and landing as well as separation from the 747 during the 1977 flight tests of Enterprise.

In 1975, Rockwell built a $1 million model of the orbiter at 0.36 scale, lemon yellow in color and marked with the blue NASA logo. It went into the 40- by 80-foot test section of Ames's largest tunnel, which was easily visible from the adjacent freeway. It gave parameters for the astronauts' flight simulators, which previously had used data from models at 3-percent scale. The big one had grooves in its surface that simulated the gaps between thermal protection tiles, permitting assessment of the consequences of the resulting roughness of the skin. It calibrated and tested systems for making aerodynamic measurements during flight test and verified the design of the elevons and other flight control surfaces as well as of their actuators.[636]

Other wind tunnel work strongly influenced design changes that occurred early in development. The most important was the introduction of the lightweight delta wing late in 1972, which reduced the size of the solid boosters and chopped 1 million pounds from the overall weight. Additional results changed the front of the external tank from a cone to an ogive and moved the solid boosters rearward, placing their nozzles farther from the orbiter. The modifications reduced drag, minimized aerodynamic interference on the orbiter, and increased stability by moving the aerodynamic center aft.

The activity disclosed and addressed problems that initially had not been known to exist. Because both the liquid main engines and the solids had nozzles that gimbaled, it was clear that they had enough power to provide control during ascent. Aerodynamic control would not be necessary, and managers believed that the orbiter could set its elevons in a single position through the entire flight to orbit. But work in wind tunnels subsequently showed that aerodynamic forces during ascent would impose excessive loads on the wings, which required the elevons to move during powered flight to relieve these loads. Uncertainties in the wind tunnel data then broadened this requirement to incorporate an active system that prevented overloading the elevon actuators. This system also helped the Shuttle to fly a variety of ascent trajectories, which imposed different elevon loads from one flight to the next.[637]

Much wind tunnel work involved issues of separation: Enterprise from its carrier aircraft, solid boosters from the external tank after burnout. At NASA Ames, a 14-foot transonic tunnel investigated problems of Enterprise and its 747. Using the same equipment, engineers addressed the separation of an orbiter from its external tank. This was supposed to occur in near-vacuum, but it posed aerodynamic problems during an abort.

The solid boosters brought their own special issues and nuances. They had to separate cleanly; under no circumstances could a heavy steel casing strike a wing. Small solid rocket motors, mounted fore and aft on each booster, were to push them away safely. It then was necessary to understand the behavior of their exhaust plumes, for these small motors were to blast into onrushing airflow that could blow their plumes against the orbiter's sensitive tiles or the delicate aluminum skin of the external tank. Wind tunnel tests helped to define appropriate angles of fire while also showing that a short, sharp burst from the motors was best.[638]

Prior to the first orbital flight in 1981, the program racked up 46,000 wind tunnel hours: 24,900 hours for the orbiter, 17,200 for the mated launch configuration, and 3,900 for the carrier aircraft program. During the 9 years from contract award to first flight, this was equivalent to operating a facility 16 hours a day, 6 days a week. Specialized projects demanded unusual effort, such as an ongoing attempt to minimize model-to-model and tunnel-to-tunnel discrepancies. This work alone involved 28 test series and used 14 wind tunnels.[639]

Structural tests complemented the work in aerodynamics. The mathematics of structural analysis was well developed, with a computer program called NASTRAN that dealt with strength under load while addressing issues of vibration, bending, and flexing. The equations of NASTRAN were linear and algebraic, which meant that in principle they were easy to solve. The problem was that there were too many of them, for the most detailed mathematical model of the orbiter's structure had some 50,000 degrees of freedom. Analysts introduced abridged versions that cut this number to 1,000 and then relied on experimental tests for data that could be compared with the predictions of the computers.[640]

There were numerous modes of vibration, with frequencies that changed as the Shuttle burned its propellants. Knowledge of these frequencies was essential, particularly in dealing with "pogo." This involved a longitudinal oscillation like that of a pogo stick, with propellant flowing in periodic surges within its main feed line. Such surges arose when their frequency matched that of one of the structural modes, producing resonance. The consequent variations in propellant-flow rate then caused the engine thrust to oscillate at that same rate. This turned the engines into sledgehammers, striking the vehicle structure at its resonant frequency and making the pogo stronger. It weakened only when consumption of propellant brought a further change in the structural frequency that broke the resonance, allowing the surges to die out.
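The resonance mechanism described above can be sketched with the classic driven damped oscillator: the response is largest when the propellant-surge frequency matches a structural mode, and it collapses once propellant consumption shifts that mode. The frequencies and damping ratio below are illustrative assumptions, not Shuttle values.

```python
import numpy as np

def steady_state_amplitude(omega_n, omega_f, zeta=0.02, force=1.0):
    """Steady-state amplitude of a damped oscillator driven at omega_f.

    Classic result for x'' + 2*zeta*omega_n*x' + omega_n**2 * x = force*sin(omega_f*t).
    """
    return force / np.sqrt((omega_n**2 - omega_f**2)**2
                           + (2.0 * zeta * omega_n * omega_f)**2)

omega_n = 20.0  # hypothetical structural mode, rad/s
at_resonance = steady_state_amplitude(omega_n, omega_f=20.0)
detuned = steady_state_amplitude(omega_n, omega_f=24.0)  # propellant burn shifts the mode

print(at_resonance / detuned)  # response is far larger when the frequencies match
```

With these numbers the matched case responds more than ten times as strongly as the detuned one, which is why breaking the frequency match (by burning propellant, or by an accumulator that shifts the feed-line frequency) kills the oscillation.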

Pogo was common; it had been present on earlier launch vehicles. It had brought vibrations with acceleration of 9 g’s in a Titan II, which was unacceptably severe. Engineering changes cut this to below 0.25 g, which enabled this rocket to launch the manned Gemini spacecraft. Pogo reappeared in Apollo during the flight of a test Saturn V in 1968. For the Shuttle, the cure was relatively simple, calling for installation of a gas-filled accumulator within the main oxygen line. This damped the pogo oscillations, though design of this accumulator called for close understanding of the pertinent frequencies.[641]

The most important structural tests used actual flight hardware, including the orbiter Enterprise and STA-099, a full-size test article that later became the Challenger. In 1978, Enterprise went to NASA Marshall, where the work now included studies on the external tank. For vibrational tests, engineers assembled a complete Shuttle by mating Enterprise to such a tank and to a pair of dummy solid boosters. One problem that these models addressed came at lift-off. The ignition of the three main engines imposes a sudden load of more than 1 million pounds of thrust. This force bends the solid boosters, placing considerable stress at their forward attachments to the tank. If the solid boosters were to ignite at that moment, their thrust would add to the stress.

To reduce the force on the attachment, analysts took advantage of the fact that the solid boosters would not only bend but would sway back and forth somewhat slowly, like an upright fishing rod. The strain on the attachment would increase and decrease with the sway, and it was possible to have the solid boosters ignite at an instant of minimum load. This called for delaying their ignition by 2.7 seconds, which cut the total load by 25 percent. The main engines fired during this interval, which consumed propellant, cutting the payload by 600 pounds. Still, this was acceptable.[642]
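A back-of-envelope sketch of the timing trick: if the attachment load is modeled as a static term plus a slow sinusoidal sway, igniting the solids half a sway cycle after main-engine start catches the sway at its minimum. The normalized load values, and the 5.4-second period obtained by reading the 2.7-second delay as a half cycle, are hypothetical illustrations, not documented Shuttle figures.

```python
import math

# Hypothetical model: bending load at the forward attachment is a static
# part plus a slow sinusoidal sway started by main-engine ignition at t = 0.
static_load = 1.0        # normalized
sway_amplitude = 0.5     # normalized sway contribution (illustrative)
period = 5.4             # seconds per sway cycle, if 2.7 s is a half cycle

def attachment_load(t):
    # Sway starts at its maximum deflection when the main engines light.
    return static_load + sway_amplitude * math.cos(2.0 * math.pi * t / period)

load_igniting_at_once = attachment_load(0.0)
load_after_delay = attachment_load(2.7)   # wait half a cycle: sway at its minimum
print(load_igniting_at_once, load_after_delay)
```

In this toy model the sway term flips sign after the half-cycle wait, so the booster thrust is added at the moment the bending contribution subtracts rather than adds.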

While Enterprise underwent vibration tests, STA-099 showed the orbiter's structural strength by standing up to applied forces. Like a newborn baby that lacks hair, this nascent form of Challenger had no thermal-protection tiles. Built of aluminum, it looked like a large fighter plane. For the structural tests, tiles were not only unnecessary; they were counterproductive. The tiles had no structural strength of their own that had to be taken into account, and they would have received severe damage from the hydraulic jacks that applied the loads and forces.

STA-099 and Columbia had both been designed to accommodate a set of loads defined by a database designated 5.1. In 1978, there was a new database, 5.4, and STA-099 had to withstand its loads without acquiring strains or deformations that would render it unfit for flight. Yet in an important respect, this vehicle was untestable; it was not possible to validate the strength of its structural design merely by applying loads with those jacks. The Shuttle structure had evolved under such strong emphasis on saving weight that it was necessary to take full account of thermal stresses that resulted from temperature differences across structural elements during reentry. No facility existed that could impose thermal stresses on so large an object as STA-099, for that would have required heating the entire vehicle.

STA-099 and Columbia had both been designed to withstand ultimate loads of 140 percent of those in the 5.1 database, a safety factor of 1.4. The structural tests on STA-099 now had to validate this safety factor for the new 5.4 database. Unfortunately, a test to 140 percent of the 5.4 loads threatened to produce permanent deformations in the structure. This was unacceptable, for STA-099 was slated for refurbishment into Challenger. Moreover, because thermal stresses could not be reproduced over the entire vehicle, a test to 140 percent would sacrifice the prospect of building Challenger while still leaving questions as to whether an orbiter could meet the safety factor of 140 percent.

NASA managers shaped the tests accordingly. For the entire vehicle, they used the jacks to apply stresses only up to 120 percent of the 5.4 loads. When the observed strains proved to match closely the values predicted by stress analysis, the 140 percent safety factor was deemed to be validated. In addition, the forward fuselage underwent the most severe aerodynamic heating, yet it was relatively small. It was subjected to a combination of thermal and mechanical loads that simulated the complete reentry stress environment in at least this limited region. STA-099 then was given a detailed and well-documented posttest inspection. After these tests, STA-099 was readied as the flight vehicle Challenger, joining Columbia as part of NASA's growing Shuttle fleet.[643]

Elastic Aerostructural Effects

The distortion of the shape of an airplane structure because of applied loads also creates a static aerodynamic interaction. When air loads are applied to an aerodynamic surface, it will bend or twist proportional to the applied load, just like a spring. Depending on the surface configuration, the distorted shape can produce different aerodynamic properties when compared with the rigid shape. A swept wing, for example, will bend upward at the tip and may also twist as it is loaded.

This new shape may exhibit higher dihedral effect and altered spanwise lift distribution when compared with a rigid shape, impacting the performance of the aircraft. Because virtually all fighter aircraft have short wings and can withstand 7 to 9 g, their aeroelastic deformation is relatively small. In contrast, bomber, cargo, or high-altitude reconnaissance airplanes are typically designed for lower g levels, and the resulting structure, particularly its long, high aspect ratio wings, is often quite limber.

Notice that this is not a dynamic, oscillatory event, but a static condition that alters the steady-state handling qualities of the airplane. The prediction of these aeroelastic effects is a complex and not altogether accurate process, though the trends are usually correct. Because the effect is a static condition, the boundaries for safe flight can usually be determined during the buildup flight-test program, and, if necessary, placards can be applied to avoid serious incidents once the aircraft enters operational service.

The six-engine Boeing B-47 Stratojet was the first airplane designed with a highly swept, relatively thin, high aspect ratio wing. At higher transonic Mach numbers, deflection of the ailerons would cause the wing to twist sufficiently to cancel, and eventually exceed, the rolling moment produced by the aileron, thus producing an aileron reversal. (In effect, the aileron was acting like a big trim tab, twisting the wing and causing the exact opposite of what the pilot intended.) Aerodynamic loads are proportional to dynamic pressure, so the aeroelastic effects are usually more pronounced at high airspeed and low altitude, and this combination caused several fatal accidents with the B-47 during its flight-testing and early deployment. After flight-testing determined the magnitude and region of reduced roll effectiveness, the airplane was placarded to 425 knots to avoid roll reversal. In sum, then, an aeroelastic problem forced the limiting of the maximum performance achievable by the airplane, rendering it more vulnerable to enemy defenses. The B-47's successor, the B-52, had a much thicker wing root and more robust structure to avoid the problems its predecessor had encountered.
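The B-47's loss of roll power can be illustrated with the standard linear-aeroelasticity result: aileron effectiveness decays roughly as 1 - q/q_rev with dynamic pressure q and changes sign at the reversal dynamic pressure q_rev. A minimal sketch with normalized numbers, not B-47 data:

```python
def roll_effectiveness(q, q_reversal):
    """Fraction of the rigid-wing rolling moment retained at dynamic pressure q.

    Classic linear result: effectiveness falls as 1 - q/q_rev and changes
    sign (aileron reversal) once q exceeds the reversal dynamic pressure.
    """
    return 1.0 - q / q_reversal

q_rev = 1.0  # normalized reversal dynamic pressure
half = roll_effectiveness(0.5 * q_rev, q_rev)           # half the rigid-wing response
reversed_roll = roll_effectiveness(1.2 * q_rev, q_rev)  # opposite of pilot input
print(half, reversed_roll)
```

Because dynamic pressure grows with airspeed at a given altitude, placarding the airplane to 425 knots amounts to capping q safely below q_rev.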

The Mach 3.2+ Lockheed SR-71 Blackbird, designed to cruise at supersonic speeds at very high altitude, was another aircraft that exhibited significant aeroelastic structural deformation.[716] The Blackbird's structure was quite limber, and the aeroelastic predictions for its behavior at cruise conditions were in error for the pitch axis. The SR-71 was a blended wing-body design with chines running along the forward sides of the fuselage and the engine nacelles, then blending smoothly into the rounded delta wing. These chines added lift to the airplane, and because they were well forward of the center of gravity, added a significant amount of pitching moment (much like a canard surface on an airplane such as the Wright Flyer or the Saab AJ-37 Viggen). Flight-testing revealed that the airplane required more "nose-up" elevon deflection at cruise than predicted, adding a substantial amount of trim drag. This reduced the range the Blackbird could attain, degrading its operational performance. To correct the problem, a small shim was added to the production fuselage break just forward of the cockpit. The shim tilted the forebody nose cone and its attached chine surfaces slightly upward, producing a nose-up pitching moment. This allowed the elevons to be returned to their trim faired position at cruise flight conditions, thus regaining the lost range capability.

Sadly, the missed prediction of the aeroelastic effects also contributed to the loss of one of the early SR-71s. While the nose cone forebody shim was being designed and manufactured, the contractor desired to demonstrate that the airplane could attain its desired range if the elevons were faired. To achieve this, Lockheed technicians added trim-altering ballast to the third production SR-71, then being used for systems and sensor testing. The ballast shifted the center of gravity about 2 percent aft from its normal position and at the aft design limit for the airplane. The engineers calculated that this would permit the elevons to be set in their faired position at cruise conditions for this one flight so that the SR-71 could meet its desired range performance. Instead, the aft cg, combined with the nonlinear aerodynamics and aeroelastic bending of the fuselage, resulted in the airplane going out of control at the start of a turn at cruise Mach number. The airplane broke in half, catapulting the pilot, who survived, from the cockpit. Unfortunately, his flight-test engineer/navigator perished.[717] Shim installation, together with other minor changes to the control system and engine inlets, subsequently enabled the SR-71 to meet its performance goals, and it became a mainstay of America's national reconnaissance fleet until its retirement in early 1990.

Lockheed, the Air Force, and NASA continued to study Blackbird aeroelastic dynamics. In 1970, Lockheed proposed installation of a Loads Alleviation and Mode Suppression (LAMS) system on the YF-12A, installing very small canards called "exciter-" or "shaker-vanes" on the forebody to induce in-flight motions and subsequent suppression techniques that could be compared with analytical models, particularly NASA's NASTRAN and Boeing's FLEXSTAB computerized load prediction and response tools. The LAMS testing complemented Air Force-NASA research on other canard-configured aircraft such as the Mach 3+ North American XB-70A Valkyrie, a surrogate for large transport-sized supersonic cruise aircraft. The fruits of this research could be found on the trim canards used on the Rockwell B-1A and B-1B strategic bombers, which entered service in the late 1980s and notably improved their high-speed "on the deck" ride qualities, compared with their three low-altitude predecessors, the Boeing B-52 Stratofortress, Convair B-58 Hustler, and General Dynamics FB-111.[718]

Active Cooling Approaches

There are other proposed methods for protecting vehicles from high temperature while flying at high speed or during reentry. Several active cooling concepts have been proposed where liquid is circulated through a hot area, then through a radiator to dissipate the heat. These concepts are quite complex and the risk is very high: failure of an active cooling system could result in loss of a hypersonic vehicle within a few seconds. None has been demonstrated in flight. Although work is continuing on active cooling concepts, their application will probably not be realized for many years.

As we look ahead to the future of aviation, it is easy to merely assess the current fleet of successful aircraft or spacecraft, and decide on what improvements we can provide, without considering the history and evolution that produced these vehicles. The danger is that some of the past problems will reappear unless the design and test communities are aware of their history. This paper has attempted to summarize some of the problems that have been encountered, and resolved, during the technology explosion in aviation that has occurred over the last 60 years. The manner in which the problems were discovered, the methods used to determine causes, and the final resolution or correction that was implemented have been presented. Hopefully these brief summaries of historical events will stimulate further research by our younger engineers and historians into the various subjects covered, and to that end, the following works are particularly relevant.

Digital Computation Triggers Automated Structural Analysis

In 1946, the ENIAC, "commonly accepted as the first successful high-speed electronic digital computer," became operational at the University of Pennsylvania.[800] It took up as much floor space as a medium-sized house and had to be "programmed" by physically rearranging its control connections. Many advances followed rapidly: storing instructions in memory, conditional control transfer, random access memory, magnetic core memory, and the transistor-circuit element. With these and other advances, digital computers progressed from large and ungainly experimental devices to programmable, useful, commercially available (albeit expensive) machines by the mid-1950s.[801]

Simple example of discretized structure and single element. NASA.

The FORTRAN programming language was also developed in the mid-1950s and rapidly gained acceptance in technical communities. This was a "high level language," which allowed programming instructions to be written in terms that an engineer or analyst could understand; a compiler handled the translation into "machine language" that the computer could understand. International Business Machines (IBM) developed the original FORTRAN language and also some of the early practical digital computers. Other early digital computers were produced by Control Data Corporation (CDC) and UNIVAC. These developments made it possible to take the new methods of structural analysis that were emerging and implement them in an automated, repeatable manner.

The essence of these new methods was to treat a structure as a finite number of discrete elastic elements, rather than as a continuum. Reactions (forces and moments) and deflections are only calculated at specific points, called "nodes." Elements connect the nodes. The stress and strain fields in the regions between the nodes do not need to be solved in the global analysis. They only need to be solved when developing the element-level solution, and once this is done for a particular type of element, that element is available as a prepackaged building block. Complex shapes and structures can then be built up from the simple elements. A simple example—using straight beam elements to model a curved beam structure—is illustrated here.

To find, for example, the relationship between the displacements of the nodes and the corresponding reactions, one could do the following (called the unit displacement method). First, a hypothetical unit displacement of one node in one degree of freedom (d.o.f.) only is assumed. This displacement is transposed into the local element coordinate systems of all affected elements. (In the corresponding figure, this would entail the relatively simple transformation between global horizontal and vertical displacements, and element axial and transverse displacements. The angular displacements would require no transformation, except in some cases a sign change.) The predetermined element stiffness matrices are used to find the element-level reactions. The element reactions are then translated back into global coordinates and summed to give the total structure reactions—to the single hypothetical displacement. This set of global reactions, plus zeroes for all forces unaffected by the assumed displacement, constitutes one column in the "stiffness matrix." By repeating the exercise for every degree of freedom of every node, the stiffness matrix can be built. Then the reactions to any set of nodal displacements may be found by multiplying the stiffness matrix by the displacement vector, i.e., the ordered list of displacements. This entails difficult bookkeeping but simple math.
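The column-by-column bookkeeping just described can be made concrete with the simplest possible structure: two axial spring (bar) elements in series, with one degree of freedom per node. The element stiffnesses are arbitrary illustrative values; this is a sketch of the unit displacement method, not of any NASTRAN internals.

```python
import numpy as np

k_elems = [2.0, 3.0]           # element stiffnesses (illustrative values)
elem_nodes = [(0, 1), (1, 2)]  # nodes 0-1-2 in a chain, one d.o.f. each
n_dof = 3

K = np.zeros((n_dof, n_dof))
for dof in range(n_dof):       # one hypothetical unit displacement per d.o.f.
    u = np.zeros(n_dof)
    u[dof] = 1.0
    reactions = np.zeros(n_dof)
    for (i, j), k in zip(elem_nodes, k_elems):
        # Element-level reactions for an axial spring: force = k * stretch,
        # translated back onto the global d.o.f.s of its two nodes.
        reactions[i] += k * (u[i] - u[j])
        reactions[j] += k * (u[j] - u[i])
    K[:, dof] = reactions      # one column of the stiffness matrix

print(K)
```

Each pass through the outer loop is exactly one "hypothetical unit displacement," and its summed global reactions form one column; the resulting matrix is symmetric, as a stiffness matrix must be.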

It is more common in engineering, however, to have to find unknown displacements and stresses from known applied forces. This answer is not possible to obtain so directly. (That is, if the process just described seems direct to you. If it does, you are probably an engineer. If it seems too trivial to have even mentioned, then you are probably a mathematician.)

Instead, after the stiffness matrix is found, it must be inverted to obtain the flexibility matrix. The inversion of large matrices is a science in itself. But it can be done, using a computer, if one has time to wait. Most of the science lies in improving the efficiency of the process. Another important output is the stress distribution throughout the structure. But this problem has already been solved at the element level for a hypothetical set of element nodal displacements. Scaling the generic stress distribution by the actual displacements, for all elements, yields the stress state throughout the structure.
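Continuing the hypothetical two-spring example (stiffnesses 2 and 3, node 0 fixed, a unit load on the end node), solving K u = f recovers the unknown displacements, and inverting K gives the flexibility matrix the text mentions. In practice a solver routine is preferred over forming the inverse explicitly, which is much of the "efficiency" science referred to above.

```python
import numpy as np

# Free d.o.f.s of the two-spring chain (k1 = 2, k2 = 3) with node 0 fixed.
K = np.array([[5.0, -3.0],
              [-3.0, 3.0]])
f = np.array([0.0, 1.0])        # unit load applied at the end node

u = np.linalg.solve(K, f)       # displacements from known applied forces
flexibility = np.linalg.inv(K)  # inverse of stiffness: the flexibility matrix

print(u)  # the end node moves farthest
```

The solve and the inverse agree: multiplying the flexibility matrix by the force vector returns the same displacements.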

There are, of course, many variations on this theme and many complexities that cannot be addressed here. The important point is that we have gone from an insoluble differential equation to a soluble matrix arithmetic problem. This, in turn, has enabled a change from individual analyses by hand of local portions of a structure to a modeling effort followed by an automated calculation of the stresses and deflections of the entire structure.

Pioneering papers on discretization of structures were published by Alexander Hrennikoff in 1941 at the Massachusetts Institute of Technology and by Richard Courant in 1943 at the mathematics institute he founded at New York University that would later bear his name. These papers did not lead to immediate application, in part perhaps because they were ahead of the necessary computational technology and in part because they were still somewhat theoretical and had not yet developed a well-formed practical implementation. The first example of what we now call the finite element method (FEM) is commonly considered to be a paper by M. J. Turner (Boeing), R. W. Clough (University of California at Berkeley, Civil Engineering Department), H. C. Martin (University of Washington, Aeronautical Engineering Department), and L. J. Topp in 1956.[802] This paper presented a method for plane stress problems, using triangular elements. John Argyris at the University of Stuttgart, Germany, also made important early contributions. The term "finite element method" was actually coined by Clough in 1960. The Civil Engineering Department at Berkeley became a major center of early finite element methods development.[803]

By the mid-1960s, aircraft companies, computing companies, universities, and Government research centers were beginning to explore the possibilities—although the method allegedly suffered some initial lack of interest in the academic world, because it bypassed elegant mathematical solutions in favor of numerical brute force.[804] However, the practical value could not long be ignored. The following insightful comment, made by a research team at the University of Denver in 1966 (working under NASA sponsorship), sums up the expectation of the period: "It is certain that this concept is going to become one of the most important tools of engineering in the future as structures become more complex and computers more versatile and available."[805]

Modern Rotor Aerodynamic Limits Survey

The Modern Rotor Aerodynamic Limits Survey was a 10-year program launched in 1984, which encompassed flight efforts in 1987 and 1993–1994. In 1987, a Sikorsky UH-60A Black Hawk was tested with conventional structural instrumentation installed on the rotor blades. Then:

. . . Sikorsky Aircraft was [subsequently] contracted to build a set of highly instrumented blades for the Black Hawk test aircraft: a pressure blade with 242 absolute pressure transducers and a strain-gauge blade with an extensive suite of strain gauges and accelerometers. . . approximately 30 gigabytes of data were obtained in 1993-94 and installed in an electronic database that was immediately accessible to the domestic rotorcraft industry.[948]

NASTRAN model and NASTRAN to static test comparison. NASA.

The two types of measurement systems are complementary. Strain gauges give an indication of the total load in a member but little insight into the details of where and how the load is generated. The pressure taps show the distribution of the applied aerodynamic load, but only at given stations, so the total load estimate depends on how one interpolates through the unknown regions between the pressure transducers. The combination of both types of data is most useful to researchers trying to correlate computational loads predictions with the test data.
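The dependence of the total-load estimate on how the unmeasured regions are filled in is easy to demonstrate. The sketch below integrates a made-up chordwise pressure distribution three plausible ways; the tap locations, pressure values, and chord are illustrative, not MRALS data:

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integration of samples y at locations x."""
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

# Hypothetical tap readings at one blade station: locations in x/c
# and differential pressures in pascals.
x = np.array([0.05, 0.20, 0.40, 0.60, 0.80, 0.95])
dp = np.array([4200.0, 2600.0, 1700.0, 1100.0, 600.0, 200.0])
chord = 0.53  # approximate UH-60A blade chord, m

# Three defensible treatments of the unmeasured leading- and
# trailing-edge regions give three different section loads:
x_ext = np.concatenate(([0.0], x, [1.0]))
load_measured = trapz(dp, x) * chord                               # ignore edges
load_const = trapz(np.concatenate(([dp[0]], dp, [dp[-1]])), x_ext) * chord
load_zero = trapz(np.concatenate(([0.0], dp, [0.0])), x_ext) * chord

for name, val in [("ignore edge regions", load_measured),
                  ("hold end values", load_const),
                  ("taper to zero", load_zero)]:
    print(f"{name:20s} {val:6.1f} N per m of span")
```

The three answers differ by roughly 15 percent here, which is exactly the ambiguity the text describes: the taps constrain the load only at their own stations.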

HiMAT

HiMAT was a small, unpiloted aircraft (23.5 feet long, with a 15.6-foot wingspan and a weight just over 3,000 pounds), somewhat representative of a fighter-type configuration, flown between 1979 and 1983 and developed to evaluate the following set of technologies and features:

• Close-coupled canard.

• Winglets.

• Digital fly-by-wire flight control.

• Composite structure.

• Aeroelastic tailoring.

• Supercritical airfoil.

It was intended that the benefits of these collected advances be shown together rather than separately, and on an unpiloted platform, so that the vehicle could be tested more aggressively without danger to a pilot.[949]

HiMAT Electro-Optical Flight Deflection Measurement System. NASA.

"Aeroelastic tailoring” refers to the design of a structure to achieve aerodynamically favorable deformation under load, rather than the more traditional approach of simply minimizing deformation. The goal of aero­elastic tailoring on the HiMAT ". . . was to achieve an aero-dynamically favorable spanwise twist distribution for maneuvering flight conditions” in the canard and the outboard wing. "The NASTRAN program was used to compute structural deflections at each model grid point. Verification of these deflections was accomplished by performing a loads test prior to delivery of the vehicle to NASA.” The ground-test loads were based on a sustained 8-g turn at Mach 0.9, which was one of the key performance design points of the aircraft. The NASTRAN model and a comparison between predicted and measured deflections are shown in the accompa­nying figure. Canard and wing twist were less than predicted. The differ­ence was attributed to insufficient understanding of the matrix-dominated laminate material properties.[950]

The vehicle was also equipped with a system to measure deflections of the wing surface in flight. Light emitting diodes (LEDs)—referred to as targets—on the wing upper surface were detected by a photodiode array mounted on the fuselage at a location overlooking the wing. Three inboard targets were used to determine a reference plane, from which the deflection of the remaining targets could be measured. To measure wing twist, targets were positioned primarily in pairs along the front and rear wing spars.[951] The HiMAT wing had a relatively small number of targets—only two pairs besides the inboard reference set—so the in-flight measurements were not a detailed survey of the wing by any means. Rather, they provided measurement at a few key points, which could then be compared with the NASTRAN data and the ground loads test data. Target and receiver locations are illustrated here, together with a sample of the deflection data at the 8-g maneuver condition. In-flight deflection data showed similar twist to the ground-test data, indicating that the aerodynamic loads were well predicted.[952]
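The reference-plane scheme can be sketched directly: fit a plane through the three inboard targets, measure the other targets' heights above that plane, and convert a front/rear spar pair into a local twist angle. All coordinates below are hypothetical, not HiMAT measurements:

```python
import numpy as np

def wing_twist_deg(ref_pts, front, rear):
    """Estimate local wing twist from target positions.
    ref_pts: 3x3 array of (x, y, z) for the inboard reference targets;
    front, rear: (x, y, z) of a target pair on the front and rear spars
    at one span station. Illustrative geometry, not HiMAT data."""
    p0, p1, p2 = ref_pts
    n = np.cross(p1 - p0, p2 - p0)               # reference-plane normal
    n = n / np.linalg.norm(n)

    def deflection(p):                            # height of p above the plane
        return float(np.dot(p - p0, n))

    dz = deflection(front) - deflection(rear)
    chord = np.linalg.norm(front[:2] - rear[:2])  # in-plane separation
    return float(np.degrees(np.arctan2(dz, chord)))

ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.5, 0.0]])
front = np.array([0.4, 2.0, 0.010])   # front-spar target, slight up-bend
rear = np.array([1.2, 2.0, 0.045])    # rear-spar target deflects more
print(f"twist: {wing_twist_deg(ref, front, rear):+.2f} deg")
```

With the rear spar deflecting more than the front, the section twists nose-down under load, which is the sense of twist shown in the flight data plot.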

The HiMAT was an early step in the development of aeroelastic tailoring capability, providing a set of NASTRAN data, static load test data, and flight-test data for surface deflection at a given loading condition. The project also proved out the electro-optical system for in-flight deflection measurements, which would later be used in the X-29 project.

High-Temperature Structures and Materials

T. A. Heppenheimer

Taking fullest advantage of the high-speed potential of rocket and airbreathing propulsion systems required higher-temperature structures. Researchers recognized that aerothermodynamics involved linking aerodynamic and thermodynamic understanding with the mechanics of thermal loading and deformation of structures. This drove the use of new structural materials. NASA and other engineers would experiment with active and passive thermal protection systems, metals, and materials.

IN AEROSPACE ENGINEERING, high-temperature structures and materials solve two problems. They are used in flight above Mach 2 to overcome the elevated temperatures that occur naturally at such speeds. They also are extensively used at subsonic velocities, in building high-quality turbofan engines, and for the protection of structures exposed to heating.

Aluminum loses strength when exposed to temperatures above 210 degrees Fahrenheit (°F). This is why the Concorde airliner, which was built of this material, cruised at Mach 2.1 but did not go faster.[1013] Materials requirements come to the forefront at higher speeds and escalate sharply as airplanes’ speeds increase. The standard solutions have been to use titanium and nickel, and a review of history shows what this has meant.
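The connection between speed and skin temperature follows from the standard recovery-temperature relation. A rough estimate, assuming ISA stratosphere ambient conditions and a turbulent recovery factor of about 0.9 (these are average skin values; localized stagnation regions run hotter):

```python
def skin_temp_f(mach, ambient_k=216.65, recovery=0.9, gamma=1.4):
    """Approximate equilibrium skin temperature from the standard
    recovery-temperature relation T_r = T * (1 + r*(g-1)/2 * M^2).
    Ambient is the ISA stratosphere; recovery ~0.9 is typical of a
    turbulent boundary layer. Illustrative estimate, not flight data."""
    t_r = ambient_k * (1 + recovery * (gamma - 1) / 2 * mach**2)
    return t_r * 9 / 5 - 459.67   # kelvin -> degrees Fahrenheit

for m in (2.1, 3.2):
    print(f"Mach {m}: ~{skin_temp_f(m):.0f} deg F")
```

At Mach 2.1 the estimate is already around 240 °F, past aluminum's comfortable range, and at Mach 3.2 it is several hundred degrees higher, which is why the jump from Concorde-class aluminum to Blackbird-class titanium was unavoidable.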

Many people wrote about titanium during the 1950s, but to reduce it to practice was another matter. Alexander "Sasha" Kartveli, chief designer at Republic Aviation, proposed a titanium F-103 fighter, but his vision outreached his technology, and although started, the aircraft never flew. North American Aviation's contemporaneous Navaho missile program introduced chemical milling (etching out unwanted material) for aluminum as well as for titanium and steel, and was the first to use titanium skin in an aircraft. However, the version of Navaho that

The Lockheed Blackbird experienced a wide range of upper surface temperatures, up to 600 °F. NASA.

was to use these processes never flew, as the program was canceled in 1957.[1014]

The Lockheed A-12 Blackbird, progenitor of a family of exotic Mach 3.2 cruisers that included the SR-71, encountered temperatures as high as 1,050 °F, which required that 93 percent of its structural weight be titanium. The version selected was B-120 (Ti-13V-11Cr-3Al), which has the tensile strength of stainless steel but weighs only half as much. But titanium is not compatible with chlorine, cadmium, or fluorine, which led to difficulties. A line drawn on a sheet of titanium with a pen would eat a hole into it in a few hours. Boltheads tended to fall away from assemblies; this proved to result from tiny cadmium deposits made by tools. This brought removal of all cadmium-plated tools from toolboxes. Spot-welded panels produced during the summer tended to fail because the local water supply was heavily chlorinated to kill algae. The managers took to washing the parts in distilled water, and the problem went away.[1015]

The SR-71 was a success. Its shop-floor practice with titanium at first was classified but now has entered the aerospace mainstream. Today’s commercial airliners—notably the Boeing 787 and the Airbus A-380, together with their engines—use titanium as a matter of routine. That is because this metal saves weight.

Beyond Mach 4, titanium falters and designers must turn instead to alternatives. The X-15 was built to top Mach 6 and to reach 1,200 °F. In competing for the contract, Douglas Aircraft proposed a design that was to use magnesium, whose properties were so favorable that the aircraft would only reach 600 °F. But this concept missed the point, for managers wanted a vehicle that would cope successfully with temperatures of 1,200 °F. Hence it was built of Inconel X, a nickel alloy.[1016]

High-speed flight represents one application of advanced metals. Another involves turbofans for subsonic flight. This application lacks the drama of Mach-breaking speeds but is far more common. Such engines use turbine blades, with the blade itself being fabricated from a single-crystal superalloy and insulated with ceramics. Small holes in the blade promote a circulation of cooler gas that is ducted downstream from high-pressure stages of the compressor. The arrangement can readily allow turbines to run at temperatures 750 °F above the melting point of the superalloy itself.[1017]
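How a blade survives gas hotter than its own melting point can be seen from the standard cooling-effectiveness relation. The temperatures and effectiveness used below are round illustrative numbers, not data for any particular engine:

```python
def blade_metal_temp(t_gas, t_coolant, effectiveness):
    """Standard cooling-effectiveness relation:
    T_metal = T_gas - eta * (T_gas - T_coolant).
    All temperatures in degrees Fahrenheit; values are illustrative."""
    return t_gas - effectiveness * (t_gas - t_coolant)

t_gas = 3150.0    # hypothetical turbine-inlet gas temperature, deg F
t_cool = 1200.0   # compressor bleed air used for cooling, deg F
melt = 2400.0     # rough superalloy melting point, deg F

t_metal = blade_metal_temp(t_gas, t_cool, effectiveness=0.6)
print(f"gas runs {t_gas - melt:.0f} F above the melt point; "
      f"metal sits at {t_metal:.0f} F")
```

With these assumed numbers the gas is 750 °F above the melting point (the margin the text cites), yet the cooled metal stays several hundred degrees below it, which is the whole purpose of the film-cooling holes.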

The Air Force JB-47E Fly-By-Wire Project

The USAF Flight Dynamics Laboratory at Wright-Patterson Air Force Base (AFB), OH, sponsored a number of technology efforts and flight-test programs intended to increase the survivability of aircraft flight control system components such as fly-by-wire hydraulic actuators. Beginning in 1966, a Boeing B-47E bomber was progressively modified (being redesignated JB-47E) to incorporate analog computer-controlled fly-by-wire actuators for both pitch and roll control, with pilot inputs being provided via a side stick controller. The program spanned three phases. For Phase I testing, the JB-47E only included fly-by-wire in its pitch axis. This axis was chosen because the flight control system in the standard B-47E was known to have a slow response in pitch because of the long control cables to the elevator stretching under load. Control signals to the pitch axis FBW actuator were generated by a transducer attached to the pilot's control column. The pilot had a simple switch in the cockpit that allowed him to switch between the standard hydromechanical flight control system (which was retained as a backup) and the computer-controlled FBW system. Modified thus, the JB-47E flew for the first time in December 1967. Test pilots reported that the modified B-47 had better handling qualities than were attainable with the standard B-47E elevator control system, especially in high-speed, low-level flight.[1134]

Phase II of the JB-47E program added fly-by-wire roll control and a side stick controller that used potentiometers to measure pilot input. By the end of the flight-test program, over 40 pilots had flown the FBW JB-47E. The Air Force chief test pilot during Phase II, Col. Frank Geisler, reported: "In ease of control there is no comparison between the standard system and the fly-by-wire. The fly-by-wire is superior in every aspect concerning ease of control. . . . It is positive, it is rapid—it responds well—and best of all the feel is good."[1135] Before the JB-47E Phase III flight-test program ended in early 1969, a highly reliable four-channel redundant electrohydraulic actuator had been installed in the pitch axis and successfully evaluated.[1136] By this time, the Air Force had already initiated Project 680J, the Survivable Flight Control System (SFCS), which resulted in the prototype McDonnell-Douglas YF-4E Phantom aircraft being modified into a testbed to evaluate the potential benefits of fly-by-wire in a high-performance, fighter-type aircraft.[1137] The SFCS YF-4E was intended to validate the concept that dispersed, redundant fly-by-wire flight control elements would be less vulnerable to battle damage, as well as to improve the performance of the flight control system and increase overall mission effectiveness.
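The source does not describe the JB-47E actuator's voting logic, but a common way a redundant four-channel set tolerates a failed channel is mid-value selection, sketched here with hypothetical elevator commands:

```python
def mid_value_select(channels):
    """Illustrative mid-value voter for a redundant FBW channel set
    (a common scheme; the JB-47E's actual logic is not described in
    the source). With an even channel count, average the two middle
    values so a single hardover failure cannot steer the output."""
    s = sorted(channels)
    n = len(s)
    if n % 2:
        return s[n // 2]
    return (s[n // 2 - 1] + s[n // 2]) / 2

good = [2.01, 1.99, 2.00, 2.02]     # channel commands, deg of elevator
failed = [2.01, 1.99, 2.00, 25.0]   # one channel hardover
print(mid_value_select(good), mid_value_select(failed))
```

The voted output is essentially unchanged by the hardover channel, which is what lets a redundant actuator keep flying through a single failure.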

Strike Technology Testbed

In the summer of 1991, a flight-test effort oriented to close air support and battlefield air interdiction began. The focus was to demonstrate technologies to locate and destroy ground targets day or night, in good weather or bad, while maneuvering at low altitudes. The AFTI/F-16 was modified with two forward-looking infrared sensors mounted in turrets on the upper fuselage ahead of the canopy. The pilot was equipped with a helmet-mounted sight that was integrated with the infrared sensors. As he moved his head, they followed his line of sight and transmitted their images to eyepieces mounted in his helmet. The nose-mounted canards used in earlier AFTI/F-16 testing were removed. Testing emphasized giving pilots the capability to fly their aircraft and attack targets in darkness or bad weather. To assist in this task, a digital terrain map was stored in the aircraft computer. Advanced terrain following was also evaluated. This used the AFTI/F-16's radar to scan terrain ahead of the aircraft and automatically fly over or around obstacles. The pilot could select minimum altitudes for his mission. The system would automatically determine when the aircraft was about to descend below this altitude and initiate a 5 g pullup maneuver. The advanced terrain following system was connected to the Automated Maneuvering Attack System, enabling the pilot to deliver weapons from altitudes as low as 500 feet in a 5 g turn. An automatic Pilot Activated Recovery System was integrated with the flight control system. If the pilot became disoriented at night or in bad weather, he could activate a switch on his side controller. This caused the flight control computer to automatically recover the aircraft, putting it into a wings-level climb. Many of these technologies have subsequently transitioned into upgrades to existing fighter/attack aircraft.[1187]

The final incarnation of this unique aircraft would be as the AFTI/F-16 power-by-wire flight technology demonstrator.

Performance Seeking Control

The Performance Seeking Control (PSC) effort followed the Adaptive Electronic Control System project. Previous engine control modes utilized on the HIDEC aircraft used stored schedules of optimum engine pressure ratios based on an average engine on a standard day. Using digital flight control, inlet control, and engine control systems, PSC used highly advanced computational techniques and control laws to identify the actual condition of the engine components and optimize the overall propulsion system for best efficiency based on the actual engine and flight conditions that the aircraft was encountering, ensuring the highest engine and maneuvering performance in all flight environments. PSC testing with the HIDEC aircraft began in 1990. Results of flight-testing with PSC included increased fuel efficiency, improved engine thrust during accelerations and climbs, and increased engine service life achieved by reductions in turbine inlet temperature. Flight-testing demonstrated turbine inlet temperature reductions of more than 160 °F. Such large operating temperature reductions can significantly extend the life of jet engines. Additionally, improvements in thrust of between 9 percent and 15 percent were observed in various flight conditions, including acceleration and climb.[1265] PSC also included the development of methodologies within the digital engine control system designed to detect engine wear and impending failure of certain engine components. Such information, coupled with normal preventative maintenance, could assist in implementing future fail-safe propulsion systems.[1266] The flight demonstration and evaluation of the PSC system at NASA Dryden directly contributed to the rapid transition of the technology into operational use. For example, PSC technology has been applied to the F100 engine

used in the F-15 Eagle, the F119 engine in the F-22 Raptor, and the F135 engine for the F-35 Lightning II.
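The core PSC idea, searching for the control setting that meets a thrust demand at minimum fuel flow, can be illustrated with a toy model. The quadratic "engine" below is entirely made up; the real system identified models of the actual engine in flight:

```python
# Toy "performance seeking" trim, illustrative only. Search a single
# hypothetical trim parameter for the setting that meets a thrust
# requirement with the least fuel flow.

def engine_model(trim):
    """Made-up steady-state model: returns (thrust_lbf, fuel_pph)."""
    thrust = 10000.0 + 1500.0 * trim - 900.0 * trim**2
    fuel = 5000.0 + 700.0 * trim**2
    return thrust, fuel

def seek_best_trim(thrust_required, lo=-1.0, hi=1.0, steps=2001):
    """Grid search: lowest fuel flow among settings meeting thrust."""
    best = None
    for i in range(steps):
        trim = lo + (hi - lo) * i / (steps - 1)
        thrust, fuel = engine_model(trim)
        if thrust >= thrust_required and (best is None or fuel < best[1]):
            best = (trim, fuel)
    return best

trim, fuel = seek_best_trim(10500.0)
print(f"trim {trim:+.3f}, fuel {fuel:.0f} pph")
```

The optimizer settles at the lowest-fuel point on the feasible edge of the thrust constraint, the same trade the real PSC made continuously in flight using identified models of the degraded, actual engine rather than an average one.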

Numerical Propulsion System Simulation

NASA and its contractor colleagues soon found another use for computers to help improve engine performance. In fact, looking back at the history of NASA's involvement with improving propulsion technology, a trilogy of major categories of advances can be suggested, based on the development of the computer and the evolving role that these electronic thinkers have played in our culture.

Part one of this story includes all the improvements NASA and its industry partners have made with jet engines before the computer came along. Having arrived at a basic operational design for a turbojet engine—and its relations, the turboprop and turbofan—engineers sought to improve fuel efficiency, reduce noise, decrease wear, and otherwise reduce the cost of maintaining the engines. They did this through such efforts as the Quiet Clean Short Haul Experimental Engine and Aircraft Energy Efficiency program, detailed earlier in this case study. By tinkering with the individual components and testing the engines on the ground and in the air for thousands of hours, incremental advances were made.[1338]

Part two of the story introduces the capabilities made available to engineers as computers became powerful enough and small enough to be incorporated into the engine design. Instead of requiring the pilot to manually make occasional adjustments to the engine operation in flight depending on what the instruments read, a small digital computer built into the engine senses thousands of measurements per minute and causes an equal number of adjustments to be made to keep the powerplant performing at peak efficiency. With the Digital Electronic Engine Control, engines designed years before behaved as though they were fresh off the drawing boards, thanks to their increased capabilities.[1339]

Having taken engine designs about as far as it was thought possible, the need for even more fuel-efficient, quieter, and capable engines continued. Unfortunately, developing a new engine from scratch, building it, and testing it in flight can cost millions of dollars and take years to accomplish. What the aerospace industry needed was a way to take advantage of the powerful computers available at the dawn of the 21st century to make the engine development process less expensive and more timely. The result was part three of NASA's overarching story of engine development: the Numerical Propulsion System Simulation (NPSS) program.[1340]

Working with the aerospace industry and academia, NASA's Glenn Research Center led the collaborative effort to create the NPSS program, which was funded and operated as part of the High Performance Computing and Communications program. The idea was to use modern simulation techniques to create a virtual engine and test stand within a virtual wind tunnel, where new designs could be tried out, adjustments made, and the refinements exercised again without costly and time-consuming tests in the "real" world. As stated in a 1999 industry review of the program, the NPSS was built around three main elements: "Engineering models that enable multi-disciplinary analysis of large subsystems and systems at various levels of detail, a simulation environment that maximizes designer productivity and a cost-effective, high-performance computing platform."[1341]

In explaining to the industry the potential value of the program during a 2006 American Society of Mechanical Engineers conference in Spain, a NASA briefer from Glenn suggested that if a standard turbojet development program for the military—such as the F100—took 10 years, $1.5 billion, construction of 14 ground-test engines and 9 flight-test engines, and more than 11,000 hours of engine tests, the NPSS program could realize a:

• 50-percent reduction in tooling cost.

• 33-percent reduction in the average development engine cost.

• 30-percent reduction in the cost of fabricating, assembling, and testing rig hardware.

• 36-percent reduction in the number of development engines.

• 60-percent reduction in total hardware cost.[1342]

A key—and groundbreaking—feature of NPSS was its ability to integrate simulated tests of different engine components and features, and run them as a whole, fully modeling all aspects of a turbojet's operation. The program did this through the use of the Common Object Request Broker Architecture (CORBA), which essentially provided a shared language among the objects and disciplines (mechanical, thermodynamics, structures, gas flow, etc.) being tested so the resulting data could be analyzed in an "apples to apples" manner. Through the creation of an NPSS developer's kit, researchers had tools to customize the software for individual needs, share secure data, and distribute the simulations for use on multiple computer operating systems. The kit also provided for the use of CORBA to "zoom" in on the data to see specific information with higher fidelity.[1343]

Begun in 1997, the NPSS team consisted of propulsion experts and software engineers from GE, Pratt & Whitney, Boeing, Honeywell, Rolls-Royce, Williams International, Teledyne Ryan Aeronautical, Arnold Engineering Development Center, Wright-Patterson AFB, and NASA's

Glenn Research Center. By the end of the 2000 fiscal year, the NPSS team had released Version 1.0.0 on schedule. According to a summary of the program produced that year:

[The new software] can be used as an aero-thermodynamic zero-dimensional cycle simulation tool. The capabilities include text-based input syntax, a sophisticated solver, steady-state and transient operation, report generation, a built-in object-oriented programming language for user-definable components and functions, support for distributed running of external codes via CORBA, test data reduction, interactive debug capability and customer deck generation.[1344]
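The "zero-dimensional cycle simulation" the summary describes treats the engine as a chain of station-to-station thermodynamic calculations rather than a spatial flow field. A greatly simplified ideal-turbojet sketch of that style of model (ideal gas, ideal components; the cycle numbers are illustrative, not NPSS output):

```python
import math

# Minimal zero-dimensional cycle sketch in the spirit of a component
# model. Stations: 0 ambient, 3 compressor exit, 4 turbine inlet.
gamma, cp = 1.4, 1004.5          # ideal gas, cp in J/(kg K)
T0, M0 = 216.65, 0.8             # ISA-stratosphere cruise ambient
pr_c = 12.0                      # assumed overall pressure ratio
T4 = 1500.0                      # assumed turbine inlet temp, K

Tt0 = T0 * (1 + (gamma - 1) / 2 * M0**2)           # ram recovery
Tt3 = Tt0 * pr_c ** ((gamma - 1) / gamma)          # ideal compressor
w_c = cp * (Tt3 - Tt0)                             # compressor work
Tt5 = T4 - w_c / cp                                # turbine balances it
# Ideal expansion of the remaining enthalpy back to ambient pressure
pr_total = pr_c * (Tt0 / T0) ** (gamma / (gamma - 1))
T8 = Tt5 * (1 / pr_total) ** ((gamma - 1) / gamma)
v9 = math.sqrt(2 * cp * (Tt5 - T8))                # exhaust velocity
v0 = M0 * math.sqrt(gamma * 287.0 * T0)            # flight velocity
print(f"specific thrust ~ {v9 - v0:.0f} N per kg/s of airflow")
```

NPSS wrapped calculations of this general kind in reusable component objects, added real-component maps and losses, and let CORBA tie them to higher-fidelity codes; the sketch only shows the "cycle deck" core of the idea.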

Additional capabilities were added in 2001, including the ability to support development of space transportation technologies. At the same time, the initial NPSS software quickly found applications in aviation safety, ground-based power, and alternative energy devices, such as fuel cells. Moreover, project officials at the time suggested that with the further development of the software, other applications could be found for the program in the areas of nuclear power, water treatment, biomedicine, chemical processing, and marine propulsion. NPSS proved to be so capable and promising of future applications that NASA designated the program a cowinner of the NASA Software of the Year Award for 2001.[1345]

Work to improve the capabilities and expand the applications of the software continued, and, in 2008, NASA transferred NPSS to a consortium of industry partners; through a Space Act Agreement, it is currently offered commercially by Wolverine Ventures, Inc., of Jupiter, FL. Now at Version 1.6.5, NPSS's features include the ability to model all types of complex systems, plug-and-play interfaces for fluid properties, a built-in plotting package, an interface to higher-fidelity legacy codes, multiple model views, a command language interpreter with a language-sensitive text editor, a comprehensive component solver, and variable setup controls. It also can operate on Linux, Windows, and UNIX platforms.[1346]

Originally begun as a virtual tool for designing new turbojet engines, NPSS has since found uses in testing rocket engines, fuel cells, analog controls, combined cycle engines, thermal management systems, airframe vehicle preliminary design, and commercial and military engines.[1347]

Ultra Efficient Engine Technology Program

With the NPSS tool firmly in place and some four decades of experience incrementally improving the design, operation, and maintenance of the jet engine, it was time to go for broke and assemble an ultrabright team of engineers to come up with nothing short of the best jet engine possible.

Building on the success of technology development programs such as the Quiet Clean Short Haul Experimental Engine and the Energy Efficient Engine project—all of which led directly to improvements in, and production of, the turbojet engines now propelling today's commercial airliners—NASA approached the start of the 21st century with plans to take jet engine design to even more impressive feats. In 1999, the Aeronautics Directorate of NASA began the Ultra Efficient Engine Technology (UEET) program—a 5-year, $300-million effort—with two primary goals. The first was to find ways to enable further improvements in engine efficiency to reduce fuel burn and, as a result, carbon dioxide emissions by yet another 15 percent. The second was to continue developing new materials and configuration schemes in the engine's combustor to reduce emissions of nitrogen oxides (NOx) during takeoff and landings by 70 percent relative to the standards detailed in 1996 by the International Civil Aviation Organization.[1348]

NASA’s Glenn Research Center led the program, with participation from three other NASA Centers: Ames, Langley, and the Goddard Space Flight Center in Greenbelt, MD. Also involved were GE, Pratt & Whitney, Honeywell, Allison/Rolls-Royce, Williams International, Boeing, and Lockheed Martin.[1349]

The program comprised seven major projects, each of which addressed particular technology needs and exploitation opportunities.[1350] The Propulsion Systems Integration and Assessment project examined overall component technology issues relevant to the UEET program to help furnish overall program guidance and identify technology shortfalls.[1351] The Emissions Reduction project sought to significantly reduce NOx and other emissions, using new combustor concepts and technologies such as lean-burning combustors with advanced controls and high-temperature ceramic matrix composite materials.[1352] The Highly Loaded Turbomachinery project sought to design lighter-weight, reduced-stage cores, low-pressure spools, and propulsors for more efficient and environmentally friendly engines, and advanced fan concepts for quieter, lighter, and more efficient fans.[1353] The Materials and Structures for High Performance project sought to develop and demonstrate high-temperature material concepts such as ceramic matrix composite combustor liners and turbine vanes, advanced disk alloys, turbine airfoil material systems, high-temperature polymer matrix composites, and innovative lightweight materials and structures for static engine structures.[1354] The Propulsion-Airframe Integration project studied propulsion systems and engine locations that could furnish improved engine and environmental benefits without compromising the aerodynamic performance of the airplane, developing advanced technologies to yield lower-drag integration of the propulsion system with the airframe for a wide range of vehicle classes; lowering aircraft drag is itself a highly desirable means of reducing fuel burn, because decreasing drag improves air vehicle performance and efficiency, which reduces the fuel burned to accomplish a particular mission and thereby reduces CO2 emissions.[1355] The Intelligent Propulsion Controls project sought to capitalize upon breakthroughs in electronic control technology to improve propulsion system life and enhance flight safety by integrating information, propulsion, and integrated flight propulsion control technologies.[1356] Finally, the Integrated Component Technology Demonstrations project sought to evaluate the benefits of off-the-shelf propulsion systems integration on NASA, Department of Defense, and aeropropulsion industry partnership efforts, including both the UEET and the military's Integrated High Performance Turbine Engine Technology (IHPTET) programs.[1357]

By 2003, the seven project areas had identified 10 specific technologies that UEET would investigate and incorporate into an engine that would meet the program's goals for reducing pollution and improving fuel efficiency. The technology goals included:

1. Advanced low-NOx combustor design that would feature a lean burning concept.

2. A highly loaded compressor that would lower system weight, improve overall performance, and result in lower fuel burn and carbon dioxide emissions.

3. A highly loaded, high-pressure turbine that could allow a reduction in the number of high-pressure stages, parts count, and cooling requirements, all of which could improve fuel burn and lower carbon dioxide emissions.

4. A highly loaded, low-pressure turbine and aggressive transition duct that would use flow control techniques to reduce the number of low-pressure stages within the engine.

5. Use of a ceramic matrix composite turbine vane that would allow high-pressure vanes to operate at a higher
inlet temperature, which would reduce the amount of engine cooling necessary and result in lower carbon dioxide emissions.

6. The same ceramic matrix composite material would be used to line the combustor walls so it could operate at a higher temperature and reduce NOx emissions.

7. Coat the turbine airfoils with a ceramic thermal barrier material to allow the turbines to operate at a higher temperature and thus reduce carbon dioxide emissions.

8. Use advanced materials in the construction of the turbine airfoil and disk. Specifically, use a lightweight single crystal superalloy to allow the turbine blades and vanes to operate at a higher temperature and reduce carbon dioxide emissions, as well as a dual microstructure nickel-base superalloy to manufacture turbine disks tailored to meet the demands of the higher-temperature environment.

9. Determine advanced materials and structural concepts for an improved, lighter-weight, impact-damage-tolerant, and noise-reducing fan containment case.

10. Develop active tip clearance control technology for use in the fan, compressor, and turbine to improve each component's efficiency and reduce carbon dioxide emissions.[1358]
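The link between the fuel-burn goal and the carbon dioxide goal is direct, since CO2 output scales with fuel burned (roughly 3.16 kg of CO2 per kg of jet fuel). A back-of-the-envelope illustration with a made-up flight fuel load:

```python
# Back-of-the-envelope link between the UEET fuel-burn goal and CO2.
# Jet-fuel combustion yields ~3.16 kg of CO2 per kg of fuel, so a
# 15-percent fuel-burn reduction cuts CO2 by the same fraction.
# The fuel load below is illustrative, not from the program.

CO2_PER_KG_FUEL = 3.16
baseline_fuel_kg = 70000.0   # hypothetical long-haul flight fuel load
reduction = 0.15             # the UEET fuel-burn/CO2 goal

co2_saved = baseline_fuel_kg * reduction * CO2_PER_KG_FUEL
print(f"CO2 avoided per flight: ~{co2_saved / 1000:.1f} metric tons")
```

This is why nearly every item in the list above is justified in terms of carbon dioxide: any technology that trims fuel burn trims CO2 by the same proportion.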

In 2003, the UEET program was integrated into NASA's Vehicle Systems program to enable the engine work to be coordinated with research into improving other areas of overall aircraft technology. But in the wake of policy changes associated with the 2004 decision to redirect NASA's space program to retire the Space Shuttle and return humans to the Moon, the Agency was forced to redirect some of its funding to Exploration, forcing the Aeronautics Directorate to give up the $21.6 million budgeted for UEET in fiscal year 2005 and effectively canceling the biggest and most complicated jet engine research program ever attempted. At the same time, NASA was directed to realign its jet engine research to concentrate on further reducing noise.[1359]

Nevertheless, results from tests of UEET hardware showed promise that a large, subsonic aircraft equipped with some of the technologies detailed above would have a "very high probability" of achieving the program goals laid out for reducing emissions of carbon dioxide and other pollutants. The data remain for application to future aircraft and engine schemes.[72]