
Digital Computer Simulation

The mathematical models for the early simulators mentioned previously were implemented on analog computers, which were capable of solving complex differential equations in real time. The digital computers available in the 1950s were extremely slow and not capable of the rapid integration that was required for simulation. One difficulty with analog computers was the existence of electronic noise within the equipment, which caused the solutions to drift and become inaccurate after several minutes of operation. For short simulation exercises (such as a 10-minute X-15 flight), the results were quite acceptable. A second difficulty was storing data, such as aerodynamic functions.

The X-20 Dyna-Soar program mentioned previously posed a challenge to the field of simulation. The shortest flight was to be a once-around orbital flight with a flight time of over 90 minutes. A large volume of aerodynamic data needed to be stored, covering a very large range of Mach numbers and angles of attack. The analog inaccuracy problem was tackled by University of Michigan researchers, who revised the standard equations of motion so that the reference point for integration was a 300-mile circular orbit, rather than the starting Earth coordinates at takeoff. These equations greatly improved the accuracy of analog simulations of orbiting vehicles. As the AFFTC and NASA began to prepare for testing of the X-20, an analog simulation was created at Edwards that was used to develop test techniques and to train pilots. Comparing the real-time simulation solutions with non-real-time digital solutions showed that the closure after 90 minutes was within about 20,000 feet—probably adequate for training, but accuracy requirements still dictated that the mission be broken into segments. The solution was the creation of a hybrid computer simulation that solved the three rotational equations using analog computers but solved the three translational equations at a slower rate using digital computers. The hybrid computer equipment was purchased for installation at the AFFTC before the X-20 program was canceled in 1963. When the system was delivered, it was reprogrammed to represent the X-15A-2, a rebuilt variant of the second X-15 intended for possible flight to Mach 7, carrying a scramjet aerodynamic test article on a stub ventral fin.[737] Although quite complex (it necessitated a myriad of analog-to-digital and digital-to-analog conversions), this hybrid system was subsequently used in the AFFTC simulation lab to successfully simulate several other airplanes, including the C-5, F-15, and SR-71, as well as the M2-F2 and X-24A/B Lifting Bodies and the Space Shuttle orbiter.
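
The multi-rate character of that hybrid arrangement, with fast rotational dynamics and slower translational dynamics integrated at different step sizes, can be illustrated with a brief sketch. The dynamics, step sizes, and numerical values below are placeholders chosen only to show the structure of such a loop; they are not the X-15A-2 equations or the AFFTC implementation.

```python
# Minimal sketch of a multi-rate integration loop in the spirit of the AFFTC
# hybrid simulator: fast "analog-like" integration of the rotational states and
# slower "digital" integration of the translational states. Placeholder dynamics.
import numpy as np

def rotational_derivs(omega, moments, inertia):
    # Simplified Euler equations about principal axes (placeholder inertias).
    Ix, Iy, Iz = inertia
    p, q, r = omega
    return np.array([
        (moments[0] - (Iz - Iy) * q * r) / Ix,
        (moments[1] - (Ix - Iz) * p * r) / Iy,
        (moments[2] - (Iy - Ix) * p * q) / Iz,
    ])

def translational_derivs(vel, accel_body):
    # Placeholder: velocity rate equals the applied specific force.
    return accel_body

dt_fast, dt_slow = 0.001, 0.02            # rotational vs. translational steps, s
steps_per_slow = round(dt_slow / dt_fast)
omega = np.zeros(3)                       # body rates p, q, r
vel = np.array([2000.0, 0.0, 0.0])        # ft/s, placeholder initial velocity
inertia = (3600.0, 80000.0, 82000.0)      # slug-ft^2, placeholder values

for slow_step in range(500):              # 10 s of simulated time
    moments = np.array([50.0, -20.0, 5.0])     # placeholder control/aero moments
    for _ in range(steps_per_slow):            # fast loop: rotational update
        omega += dt_fast * rotational_derivs(omega, moments, inertia)
    accel = np.array([-1.0, 0.0, -32.2])       # placeholder specific force, ft/s^2
    vel += dt_slow * translational_derivs(vel, accel)   # slow loop: translational update
```

In the actual facility the fast loop ran on analog hardware in true real time; here both loops are simply digital integrations at different rates.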

The speed of digital computers increased rapidly in the 1970s, and soon all real-time simulation was being done with digital equipment. Out-of-the-window visual displays also improved dramatically and began to be used in conjunction with the cockpit instruments to provide very realistic training for flight crews. One of the last features to be developed in the field of visual displays was the accurate representation of the terrain surface during the last few feet of descent before touchdown.

Simulation has now become a primary tool for designers, flight-test engineers, and pilots during the design, development, and flight-testing of new aircraft and spacecraft.

Some Seminal Solutions and Applications

We have discussed the historical evolution of the governing flow equations, the first essential element of CFD. We then discussed the evolution of the high-speed digital computer, the second essential element of CFD. We now come to the crux of this article, the actual CFD flow-field solutions, their evolution, and their importance. Computational fluid dynamics has grown exponentially in the past four decades, rendering any selective examination of applications problematical. This case study examines four applications that have driven the development of CFD to its present place of prominence: the supersonic blunt body problem, transonic airfoils and wings, Navier-Stokes solutions, and hypersonic vehicles.

Ames Research Center

The Ames Research Center—with research responsibilities within aerodynamics, aeronautical and space vehicle studies, reentry and thermal protection systems, simulation, biomedical research, human factors, nanotechnology, and information technology—is one of the world’s premier aerospace research establishments. It was the second NACA laboratory, established in 1939 as war loomed in Europe. The Center was built initially to provide for expansion of wind tunnel facilities beyond the space and power generation capacity available at Langley. Accordingly, in the computer age, Ames became a major center for computational fluid dynamics methods development.[850] Ames also developed a large and active structures effort, with approximately 50 to 100 researchers involved in the structural disciplines at any given time.[851] Areas of research include structural dynamics, hypersonic flight and reentry, rotorcraft, and multidisciplinary design/analysis/optimization. These last two are discussed briefly below.

In the early 1970s, a joint NASA-U.S. Army rotorcraft program led to a significant amount of rotorcraft flight research at Ames. "The flight research activity initially concentrated on control and handling issues. . . . Later on, rotor aerodynamics, acoustics, vibration, loads, advanced concepts, and human factors research would be included as important elements in the joint program activity.”[852] As is typically the case, this effort impacted the direction of analytical work as well in rotor aeroelastics, aeroservoelastics, acoustics, rotor-body coupling, rotor air loads prediction, etc. For example, a "comprehensive analytical model” completed in 1980 combined structural, inertial, and aerodynamic models to calculate rotor performance, loads, noise, vibration, gust response, flight dynamics, handling qualities, and aeroelastic stability of rotorcraft.[853] Other efforts were less comprehensive and produced specialized methods for treating various aspects of the rotorcraft problem, such as blade aeroelasticity.[854] The General Rotorcraft Aeromechanical Stability Program (GRASP) combined finite elements with concepts used in spacecraft multibody dynamics problems, treating the helicopter as a structure with flexible, rotating substructures.[855]

Rotorcraft analysis has to be multidisciplinary because of the many types of coupling that are active. Fixed wing aircraft have not always been treated with a multidisciplinary perspective, but the multidisciplinary analysis and optimization of aircraft is a growing field and one in which Ames has made many valuable contributions. The Advanced Concepts Branch, not directly associated with Structures & Loads but responsible for multidisciplinary vehicle design and optimization studies, has performed and/or sponsored much of this work.

A general-purpose optimization program, CONMIN, was developed jointly by Ames and by the U.S. Army Air Mobility Research & Development Laboratory in 1973[856] and had been used extensively by NASA Centers and contractors through the 1990s. Garret Vanderplaats was the principal developer. Because it is a generic mathematical function minimization program, it can in principle drive any design/analysis process toward an optimum. CONMIN has been coupled with many different types of analysis programs, including NASTRAN.[857]
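
The coupling pattern described above, in which a generic optimizer repeatedly calls an external analysis and steers the design variables toward an optimum, can be sketched briefly. The snippet below uses scipy's SLSQP routine merely as a modern stand-in for CONMIN, and the two-variable "analysis" is a toy sizing function invented for illustration, not NASTRAN or any NASA code.

```python
# Sketch of a generic minimizer driving an external analysis, in the spirit of
# coupling CONMIN to a structural code. scipy's SLSQP stands in for CONMIN, and
# the "analysis" is a toy two-member sizing problem, not a real structural model.
from scipy.optimize import minimize

def analysis(x):
    # Toy analysis: member areas x -> structural weight and a stress margin.
    a1, a2 = x
    weight = 10.0 * a1 + 14.0 * a2                        # proportional to volume
    stress_margin = 1.0 - (2.0 / a1 + 3.0 / a2) / 25.0    # >= 0 means feasible
    return weight, stress_margin

objective = lambda x: analysis(x)[0]
constraints = [{"type": "ineq", "fun": lambda x: analysis(x)[1]}]
bounds = [(0.05, 5.0), (0.05, 5.0)]

result = minimize(objective, x0=[1.0, 1.0], method="SLSQP",
                  bounds=bounds, constraints=constraints)
print(result.x, result.fun)
```

Any analysis routine with the same call pattern (design variables in, objective and constraints out) could be substituted, which is the sense in which such a driver can steer "any design/analysis process toward an optimum."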

Aircraft Synthesis (ACSYNT) was an early example of a multidisciplinary aircraft sizing and conceptual design code. Like many early (and some current) total-vehicle sizing and synthesis tools, ACSYNT did not actually perform structural analysis but instead used empirically based equations to estimate the weight of airframe structure. ACSYNT was initially released in the 1970s and has been widely used in the aircraft industry and at universities. Collaboration between Ames and the Virginia Polytechnic Institute’s CAD Laboratory, to develop a computer-aided design (CAD) interface for ACSYNT, eventually led to the commercialization of ACSYNT and the creation of Phoenix Integration, Inc., in 1995.[858] Phoenix Integration is currently a major supplier of analysis integration and multidisciplinary optimization software.

Tools such as ACSYNT are very practical, but it has also been a goal at Ames to couple the prediction of aerodynamic forces and loads to more rigorous structural design and analysis, which would give more insight into the effects of new materials or novel vehicle configurations. To this end, a code called ENSAERO was developed, combining finite element structural analysis capability with high-fidelity Euler (inviscid) and Navier-Stokes (viscous) aerodynamics solutions. "The code is capable of computing unsteady flows on flexible wings with vortical flows,”[859] and provisions were made to include control or thermal effects as well. ENSAERO was introduced in 1990 and developed and used throughout the 1990s.

In a cooperative project with Virginia Tech and McDonnell-Douglas Aerospace, ENSAERO was eventually coupled with NASTRAN to provide higher structural fidelity than the relatively limited structural capability intrinsic to ENSAERO.[860] Guru Guruswamy was the principal developer.

In the late 1990s, Juan Alonso, James Reuther, and Joaquim Martins, with other researchers at Ames, applied the adjoint method to the problem of combined aerostructural design optimization. The adjoint method, first applied to purely aerodynamic shape optimization in the late 1980s by Dr. Antony Jameson, is an approach to optimization that provides revolutionary gains in efficiency relative to traditional methods, especially when there are a large number of design variables. It is not an exaggeration to say that adjoint methods have revolutionized the art of aerodynamic optimization. Technical conferences often contain whole sessions on applications of adjoint methods, and several aircraft companies have made practical applications of the technique to the aerodynamic design of aircraft that are now in production.[861] Bringing this approach to aerostructural optimization is extremely significant.
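
The efficiency argument can be stated compactly in the generic discrete-adjoint form; this is a textbook statement of the idea rather than the specific formulation used at Ames. For an objective I(w, x) depending on the flow solution w and design variables x, with the flow governed by the residual equations R(w, x) = 0,

\[
\left(\frac{\partial R}{\partial w}\right)^{T}\psi \;=\; \left(\frac{\partial I}{\partial w}\right)^{T},
\qquad
\frac{dI}{dx} \;=\; \frac{\partial I}{\partial x} \;-\; \psi^{T}\,\frac{\partial R}{\partial x}.
\]

One flow solution and one adjoint solution for ψ yield the complete gradient dI/dx, so the cost of obtaining design sensitivities is essentially independent of the number of design variables, which is what makes the approach so attractive when hundreds or thousands of shape and structural variables are present.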

ANSYMP Computer Program (Glenn Research Center, 1983)

ANSYMP was developed to capture the key elements of local plastic behavior without the overhead of a full nonlinear finite element analysis. "Nonlinear, finite-element computer programs are too costly to use in the early design stages for hot-section components of aircraft gas turbine engines. . . . This study was conducted to develop a computer program for performing a simplified nonlinear structural analysis using only an elastic solution as input data. The simplified method was based on the assumption that the inelastic regions in the structure are constrained against stress redistribution by the surrounding elastic material. Therefore the total strain history can be defined by an elastic analysis. . . . [ANSYMP] was created to predict the stress-strain history at the critical fatigue location of a thermomechanically cycled structure from elastic input data. . . . Effective [inelastic] stresses and plastic strains are approximated by an iterative and incremental solution procedure.” ANSYMP was verified by comparison to a full nonlinear finite element code (MARC). Cyclic hysteresis loops and mean stresses from ANSYMP "were in generally good agreement with the MARC results. In a typical problem, ANSYMP used less than 1 percent of the central processor unit (CPU) time required by MARC to compute the inelastic solution.”[980]
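
The central assumption quoted above, that the elastically computed total strain can be held fixed while the local stress relaxes onto the material's nonlinear stress-strain behavior, can be illustrated with a short sketch. The Ramberg-Osgood curve, Newton iteration, and material constants below are illustrative choices only; they are not ANSYMP's actual constitutive model or solution procedure, which is incremental and handles full thermomechanical cycling.

```python
# Sketch of a strain-constrained plasticity correction: the total strain from an
# elastic analysis is treated as fixed, and the stress is iteratively relaxed onto
# a nonlinear stress-strain curve. Illustrative only; not ANSYMP's algorithm.

E, K, n = 28.0e6, 170.0e3, 0.12          # psi; placeholder material constants

def total_strain(sigma):
    return sigma / E + (sigma / K) ** (1.0 / n)       # elastic + plastic strain

def d_total_strain(sigma):
    return 1.0 / E + (1.0 / (n * K)) * (sigma / K) ** (1.0 / n - 1.0)

def inelastic_stress(sigma_elastic):
    """Find the stress whose total strain equals the elastic-analysis strain."""
    target = sigma_elastic / E            # total strain fixed by the elastic solution
    sigma = min(sigma_elastic, 0.8 * K)   # starting guess
    for _ in range(50):                   # Newton iteration on the strain residual
        f = total_strain(sigma) - target
        sigma -= f / d_total_strain(sigma)
        if abs(f) < 1.0e-10:
            break
    return sigma

for s_el in (40.0e3, 80.0e3, 120.0e3):    # elastic stresses at the critical location
    print(s_el, inelastic_stress(s_el))
```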

Reusable Surface Insulation

Early in the 1960s, researchers at Lockheed introduced an entirely different approach to thermal protection, which in time became the standard. Ablatives were unrivalled for once-only use, but during that decade the hot structure continued to stand out as the preferred approach for reusable craft such as Dyna-Soar. As noted, it used an insulated primary or load-bearing structure with a skin of outer panels. These emitted heat by radiation, maintaining a temperature that was high but steady.

Strength versus temperature for various superalloys, including Rene 41, the primary structural material used on the X-20 Dyna-Soar. NASA.

Metal fittings supported these panels, and while the insulation could be high in quality, these fittings unavoidably leaked heat to the underlying structure. This raised difficulties in crafting this structure of aluminum and even of titanium, which had greater heat resistance. On Dyna-Soar, only Rene 41 would do.[1062]

Ablatives avoided such heat leaks while being sufficiently capable as insulators to permit the use of aluminum. In principle, a third approach combined the best features of hot structures and ablatives. It called for the use of temperature-resistant tiles, made perhaps of ceramic, which could cover the vehicle skin. Like hot-structure panels, they would radiate heat, while remaining cool enough to avoid thermal damage. In addition, they were to be reusable. They also were to offer the excellent insulating properties of good ablators, preventing heat from reaching the underlying structure—which once more might be of aluminum. This concept became known as reusable surface insulation (RSI). In time, it gave rise to the thermal protection of the Shuttle.

RSI grew out of ongoing work with ceramics for thermal protection. Ceramics had excellent temperature resistance, light weight, and good insulating properties. But they were brittle and cracked rather than stretched in response to the flexing under load of an underlying metal primary structure. Ceramics also were sensitive to thermal shock, as when heated glass breaks when plunged into cold water. In flight, such thermal shock resulted from rapid temperature changes during reentry.[1063]

Monolithic blocks of the ceramic zirconia had been specified for the nose cap of Dyna-Soar, but a different point of departure used mats of solid fiber in lieu of the solid blocks. The background to the Shuttle’s tiles lay in work with such mats that took place early in the 1960s at Lockheed Missiles and Space Company. Key people included R. M. Beasley, Ronald Banas, Douglas Izu, and Wilson Schramm. A Lockheed patent disclosure of December 1960 gave the first presentation of a reusable insulation made of ceramic fibers for use as a heat shield. Initial research dealt with casting fibrous layers from a slurry and bonding the fibers together.

Related work involved filament-wound structures that used long continuous strands. Silica fibers showed promise and led to an early success: a conical radome of 32-inch diameter built for Apollo in 1962. Designed for reentry, it had a filament-wound external shell and a lightweight layer of internal insulation cast from short fibers of silica. The two sections were densified with a colloid of silica particles and sintered into a composite. This gave a nonablative structure of silica composite reinforced with fiber. It never flew, as design requirements changed during the development of Apollo. Even so, it introduced silica fiber into the realm of reentry design.

Another early research effort, Lockheat, fabricated test versions of fibrous mats that had controlled porosity and microstructure. These were impregnated with organic fillers such as Plexiglas (methyl methacrylate). These composites resembled ablative materials, though the filler did not char. Instead it evaporated or volatilized, producing an outward flow of cool gas that protected the heat shield at high heat-transfer rates. The Lockheat studies investigated a range of fibers that included silica, alumina, and boria. Researchers constructed multilayer composite structures of filament-wound and short-fiber materials that resembled the Apollo radome. Impregnated densities were 40 to 60 lb/ft3, the higher density being close to that of water. Thicknesses of no more than an inch gave acceptably low back-face temperatures during simulations of reentry.

This work with silica-fiber ceramics was well underway during 1962. Three years later, a specific formulation of bonded silica fibers was ready for further development. Known as LI-1500, it was 89 percent porous and had a density of 15 lb/ft3, one-fourth that of water. Its external surface was impregnated with filler to a predetermined depth, again to provide additional protection during the most severe reentry heating. By the time this filler was depleted, the heat shield was to have entered a zone of more moderate heating, where the fibrous insulation alone could provide protection.

Initial versions of LI-1500, with impregnant, were intended for use with small space vehicles similar to Dyna-Soar that had high heating rates. Space Shuttle concepts were already attracting attention—the January 1964 issue of Astronautics & Aeronautics, the journal of the American Institute of Aeronautics and Astronautics, presents the thinking of the day—and in 1965 a Lockheed specialist, Maxwell Hunter, introduced an influential configuration called Star Clipper. His design employed LI-1500 for thermal protection.

Like other Shuttle concepts, Star Clipper was to fly repeatedly, but the need for an impregnant in LI-1500 compromised its reusability. However, in contrast to earlier entry vehicle concepts, Star Clipper was large, offering exposed surfaces that were sufficiently blunt to benefit from H. Julian Allen’s blunt-body principle. They had lower temperatures and heating rates, which made it possible to dispense with the impregnant. An unfilled version of LI-1500, which was inherently reusable, now could serve.

Here was the first concept of a flight vehicle with reusable insulation, bonded to the skin, which could reradiate heat in the fashion of a hot structure. However, the matted silica by itself was white and had low thermal emissivity, making it a poor radiator of heat. This brought excessive surface temperatures that called for thick layers of the silica insulation, adding weight. To reduce the temperatures and the thickness, the silica needed a coating that could turn it black for high emissivity. It then would radiate well and remain cooler.
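
The effect of emissivity on surface temperature follows directly from radiative equilibrium, in which the reradiated heat, εσT⁴, balances the incoming convective flux. The brief calculation below uses a placeholder heating rate chosen only to show the trend; it does not represent Star Clipper or Shuttle entry conditions.

```python
# Back-of-the-envelope radiative equilibrium, illustrating why a high-emissivity
# coating lowers the surface temperature of a reradiating tile. The heat flux is
# a placeholder value, not an actual entry condition.
SIGMA = 5.67e-8                      # Stefan-Boltzmann constant, W/(m^2 K^4)

def equilibrium_temp_K(q_w_per_m2, emissivity):
    # Surface temperature at which reradiated heat balances the incoming flux.
    return (q_w_per_m2 / (emissivity * SIGMA)) ** 0.25

q = 2.0e5                            # W/m^2, placeholder entry heating rate
for eps in (0.3, 0.85):
    T = equilibrium_temp_K(q, eps)
    print(f"emissivity {eps}: {T:.0f} K ({T * 9 / 5 - 459.67:.0f} F)")
```

Raising the emissivity from 0.3 to 0.85 in this example drops the equilibrium surface temperature by several hundred degrees, which is why a black, high-emissivity coating allowed thinner, lighter insulation.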

The selected coating was a borosilicate glass, initially with an admixture of Cr2O3 and later with silicon carbide, which further raised the emissivity. The glass coating and the silica substrate were both silicon dioxide; this assured a match of their coefficients of thermal expansion, to prevent the coating from developing cracks under the temperature changes of reentry. The glass coating could soften at very high temperatures to heal minor nicks or scratches. It also offered true reusability, surviving repeated cycles to 2,500 °F. A flight test came in 1968 as NASA Langley investigators mounted a panel of LI-1500 to a Pacemaker reentry test vehicle along with several candidate ablators. This vehicle carried instruments and was recovered. Its trajectory reproduced the peak heating rates and temperatures of a reentering Star Clipper. The LI-1500 test panel reached 2,300 °F and did not crack, melt, or shrink. This proof-of-concept test gave further support to the concept of high-emittance reradiative tiles of coated silica for thermal protection.[1064]

Lockheed conducted further studies at its Palo Alto Research Center. Investigators cut the weight of RSI by raising its porosity from the 89 percent of LI-1500 to 93 percent. The material that resulted, LI-900, weighed only 9 pounds per cubic foot, one-seventh the density of water.[1065] There also was much fundamental work on materials. Silica exists in three crystalline forms: quartz, cristobalite, and tridymite. These not only have high coefficients of thermal expansion but also show sudden expansion or contraction with temperature because of solid-state phase changes. Cristobalite is particularly noteworthy; above 400 °F, it expands by more than 1 percent as it transforms from one phase to another. Silica fibers for RSI were to be glass, an amorphous rather than a crystalline state with a very low coefficient of thermal expansion and an absence of phase changes. The glassy form thus offered superb resistance to thermal stress and thermal shock, which would recur repeatedly during each return from orbit.[1066]

The raw silica fiber came from Johns Manville, which produced it from high-purity sand. At elevated temperatures, it tended to undergo "devitrification,” transforming from a glass into a crystalline state. Then, when cooling, it passed through phase-change temperatures and the fiber suddenly shrank, producing large internal tensile stresses. Some fibers broke, giving rise to internal cracking within the RSI and degradation of its properties. These problems threatened to grow worse during subsequent cycles of reentry heating.

To prevent devitrification, Lockheed worked to remove impurities from the raw fiber. Company specialists raised the purity of the silica to 99.9 percent while reducing contaminating alkalis to as low as 6 parts per million. Lockheed proceeded to do these things not only in the laboratory but also in a pilot plant. This plant took the silica from raw material to finished tile, applying 140 process controls along the way. Established in 1970, the pilot plant was expanded in 1971 to attain a true manufacturing capability. Within this facility, Lockheed produced tiles of LI-1500 and LI-900 for use in extensive programs of test and evaluation. In turn, the increasing availability of these tiles encouraged their selection for Shuttle protection in lieu of a hot-structure approach.[1067]

General Electric (GE) also became actively involved, studying types of RSI made from zirconia and from mullite, 3Al2O3+2SiO2, as well as from silica. The raw fibers were commercial grade, with the zirconia coming from Union Carbide and the mullite from Babcock and Wilcox. Devitrification was a problem, but whereas Lockheed had addressed it by purifying its fiber, GE took the raw silica from Johns Manville and tried to use it with little change. The basic fiber, the Q-felt of Dyna-Soar, also had served as insulation on the X-15. It contained 19 different elements as impurities. Some were present at a few parts per million, but others—aluminum, calcium, copper, lead, magnesium, potassium, sodium—ran from 100 to 1,000 parts per million. In total, up to 0.3 percent was impurity.

General Electric treated this fiber with a silicone resin that served as a binder, pyrolyzing the resin and causing it to break down at high temperatures. This transformed the fiber into a composite, sheathing each strand with a layer of amorphous silica that had a purity of 99.98 percent or more. This high purity resulted from that of the resin. The amorphous silica bound the fibers together while inhibiting their devitrification. General Electric’s RSI had a density of 11.5 lb/ft3, midway between that of LI-900 and LI-1500.[1068]

Many Shuttle managers had supported hot structures, but by mid-1971 they were in trouble. In Washington, the Office of Management and Budget (OMB) now was making it clear that it expected to impose stringent limits on funding for the Shuttle, which brought a demand for new configurations that could cut the cost of development. Within weeks, the contractors did a major turnabout. They abandoned hot structures and embraced RSI. Managers were aware that it might take time to develop for operational use, but they were prepared to use ablatives for interim thermal protection and to switch to RSI once it was ready.[1069]

What brought this dramatic change? The advent of RSI production at Lockheed was critical. This drew attention from Max Faget, a longtime NACA-NASA leader who had kept his hand in the field of Shuttle design, offering a succession of conceptual design configurations that had helped to guide the work of the contractors. His most important concept, designated MSC-040, came out in September 1971 and served as a point of reference. It used RSI and proposed to build the Shuttle of aluminum rather than Rene 41 or anything similar.[1070]

Why aluminum? "My history has always been to take the most conservative approach,” Faget explained subsequently. Everyone knew how to work with aluminum, for it was the most familiar of materials, but everything else carried large question marks. Titanium, for one, was literally a black art. Much of the pertinent shop-floor experience had been gained within the SR-71 program and was classified. Few machine shops had pertinent background, for only Lockheed had constructed an airplane—the SR-71—that used titanium hot structure. The situation was worse for columbium and the superalloys, for these metals had been used mostly in turbine blades. Lockheed had encountered serious difficulties as its machinists and metallurgists wrestled with titanium. With the Shuttle facing the OMB’s cost constraints, no one cared to risk an overrun while machinists struggled with the problems of other new materials.[1071]

NASA Langley had worked to build a columbium heat shield for the Shuttle and had gained a particularly clear view of its difficulties. It was heavier than RSI but offered no advantage in temperature resistance.

In addition, coatings posed serious problems. Silicides showed promise of reusability and long life, but they were fragile and easily damaged. A localized loss of coating could result in rapid oxygen embrittlement at high temperatures. Unprotected columbium oxidized readily, and above the melting point of its oxide, 2,730 °F, it could burst into flame.[1072] "The least little scratch in the coating, the shingle would be destroyed during reentry,” Faget said. Charles Donlan, the Shuttle Program Manager at NASA Headquarters, placed this in a broader perspective in 1983:

Phase B was the first really extensive effort to put together studies related to the completely reusable vehicle. As we went along, it became increasingly evident that there were some problems. And then as we looked at the development problems, they became pretty expensive. We learned also that the metallic heat shield, of which the wings were to be made, was by no means ready for use. The slightest scratch and you are in trouble.[1073]

Other refractory metals offered alternatives to columbium, but even when proposing to use them, the complexity of a hot structure also militated against its selection. As a mechanical installation, it called for large numbers of clips, brackets, standoffs, frames, beams, and fasteners. Structural analysis loomed as a formidable task. Each of many panel geometries needed its own analysis, to show with confidence that the panels would not fail through creep, buckling, flutter, or stress under load. Yet this confidence might be fragile, for hot structures had limited ability to resist over-temperatures. They also faced the continuing issue of sealing panel edges against ingestion of hot gas during reentry.[1074]

In this fashion, having taken a long look at hot structures, NASA did an about-face as it turned toward the RSI that Lockheed’s Max Hunter had recommended as early as 1965. Then, in January 1972, President Richard Nixon gave his approval to the Space Shuttle program, thereby raising it to the level of a Presidential initiative. Within days, NASA’s Dale Myers spoke to a conference in Houston and stated that the Agency had made the basic decision to use RSI. Requests for proposal soon went out, inviting leading aerospace corporations to bid for the prime contract on the Shuttle orbiter, and North American won this $2.6-billion prize in July. However, the RSI wasn’t Lockheed’s. The proposal specified mullite RSI for the undersurface and forward fuselage, a design feature that had been held over from the company’s studies of a fully reusable orbiter during the previous year.[1075]

Still, was mullite RSI truly the one to choose? It came from General Electric and had lower emissivity than the silica RSI of Lockheed but could withstand higher temperatures. Yet the true basis for selection lay in the ability to withstand 100 reentries as simulated in ground test. NASA conducted these tests during the last 5 months of 1972, using facilities at its Ames, Johnson, and Kennedy Centers, with support from Battelle Memorial Institute.

The main series of tests ran from August to November and gave a clear advantage to Lockheed. That firm’s LI-900 and LI-1500 went through 100 cycles to 2,300 °F and met specified requirements for maintenance of low back-face temperature and minimal thermal conductivity. The mullite showed excessive back-face temperatures and higher thermal conductivity, particularly at elevated temperatures. As test conditions increased in severity, the mullite also developed coating cracks and gave indications of substrate failure.

The tests then introduced acoustic loads, with each cycle of the simulation now subjecting the RSI to loud roars of rocket flight along with the heating of reentry. LI-1500 continued to show promise. By mid-November, it demonstrated the equivalent of 20 cycles to 160 decibels, the acoustic level of a large launch vehicle, and 2,300 °F. A month later, NASA conducted what Lockheed describes as a "sudden death shootout”: a new series of thermal-acoustic tests, in which the contending materials went into a single large 24-tile array at NASA Johnson. After 20 cycles, only Lockheed’s LI-900 and LI-1500 remained intact. In separate tests, LI-1500 withstood 100 cycles to 2,500 °F and survived a thermal overshoot to 3,000 °F, as well as an acoustic overshoot to 174 dB. Clearly, this was the material NASA wanted.[1076]

Thermal protection system for the proposed National Hypersonic Flight Research Facility, 1978. NASA.

As insulation, the tiles were astonishing. A researcher could heat one in a furnace until it was white hot, remove it, allow its surface to cool for a couple of minutes, and pick it up at its edges using his or her fingers, with its interior still at white heat. Lockheed won the thermal-protection subcontract in 1973, with NASA specifying LI-900 as the baseline RSI. The firm responded with preparations for a full-scale production facility in Sunnyvale, CA. With this, tiles entered the mainstream of thermal protection.

The NASA Digital Fly-By-Wire F-8 Program

A former Navy F-8C Crusader fighter was chosen for modification, with the goal being both to validate the benefits of a digital fly-by-wire aircraft flight control system and to provide additional confidence in its use. Mel Burke had worked with the Navy to arrange for the transfer of four LTV F-8C Crusader supersonic fighters to the Flight Research Center. One would be modified for the F-8 Supercritical Wing project, one was converted into the F-8 DFBW Iron Bird ground simulator, another was modified as the DFBW F-8, and one was retained in its basic service configuration and used for pilot familiarization training and general proficiency flying. When Burke left for a job at NASA Headquarters, Cal Jarvis, a highly experienced engineer who had worked on fly-by-wire systems on the X-15 and LLRV programs, took over as program manager. In March 1971, modifications began to create the F-8 DFBW Iron Bird simulator. The Iron Bird effort was planned to ensure that development of the ground simulator always kept ahead of conversion efforts on the DFBW flight-test aircraft. This, the very first F-8C built for the Navy in 1958 (Bureau No. 145546), carried the NASA tail No. 802 along with a "DIGITAL FLY-BY-WIRE” logo painted in blue on its fuselage sides.

Highly Maneuverable Aircraft Technology

The Highly Maneuverable Aircraft Technology (HiMAT) program provides an interesting perspective on the use of unmanned research aircraft equipped with digital fly-by-wire flight control systems, one that is perhaps most relevant to today’s rapidly expanding fleet of unpiloted aircraft whose use has proliferated throughout the military services over the past decade. HiMAT research was conducted jointly by NASA and the Air Force Flight Dynamics Laboratory at NASA Dryden between 1979 and 1983. The project began in 1973, and, in August 1975, Rockwell International was awarded a contract to construct two HiMAT vehicles based on the use of advanced technologies applicable to future highly maneuverable fighter aircraft. Designed to provide a level of maneuverability that would enable a sustained 8-g turn at Mach 0.9 at an altitude of 25,000 feet, the HiMAT vehicles were approximately half the size of an F-16. Wingspan was about 16 feet, and length was 23.5 feet. A GE J85 turbojet that produced 5,000 pounds of static thrust at sea level powered the vehicle, which could attain about Mach 1.4. Launched from the NASA B-52 carrier aircraft, the HiMAT weighed about 4,000 pounds, including 660 pounds of fuel. About 30 percent of the airframe consisted of experimental composite materials, mainly fiberglass and graphite epoxy. Rear-mounted swept wings, a digital flight control system, and controllable forward canards enabled exceptional maneuverability, with a turn radius about half that of a conventional piloted fighter.

Research on the HiMAT remotely piloted test vehicle was conducted by NASA and the Air Force Flight Dynamics Laboratory between 1979 and 1983. NASA.

For example, at Mach 0.9 at 25,000 feet, the HiMAT could sustain an 8-g turn, while F-16 capability under the same conditions is about 4.5 g.[1211]

Ground-based, digital fly-by-wire control systems, developed at Dryden on programs such as the DFBW F-8, were vital to the success of the HiMAT remotely piloted research vehicle approach. NASA Ames Research Center and Dryden worked closely with Rockwell International in the design and development of the two HiMAT vehicles and their ground control system, rapidly bringing the test vehicles to flight status. Many tests that would have been required for a more conventional piloted research aircraft were eliminated, an approach largely made possible by extensive use of computational aerodynamic design tools developed at Ames. This resulted in drastic reductions in wind tunnel testing but made it necessary to devote several HiMAT flights to obtaining the stability and control data needed to refine the digital flight control system.[1212]

The HiMAT flight-test maneuver autopilot was based on a design developed by Teledyne Ryan Aeronautical, then a well-known manufacturer of target drones and remotely piloted aircraft. Teledyne also developed the backup flight control system.[1213] Refining the vehicle control laws was an extremely challenging task. Dryden engineers and test pilots evaluated the contractor-developed flight control laws in a ground simulation facility and then tested them in flight, making adjustments until the flight control system performed properly. The HiMAT flight-test maneuver autopilot provided precise, repeatable control, enabling large quantities of reliable test data to be quickly gathered. It proved to be a broadly applicable technique for use in future flight research programs.[1214]

Launched from the NASA B-52 at 45,000 feet at Mach 0.68, the HiMAT vehicle was remotely controlled by a NASA research pilot in a ground station at Dryden, using control techniques similar to those in conventional aircraft. The flight control system used a ground-based computer interlinked with the HiMAT vehicle through an uplink and downlink telemetry system. The pilot used proportional stick and rudder inputs to command the computer in the primary flight control system. A television camera mounted in the cockpit provided visual cues to the pilot. A two-seat Lockheed TF-104G aircraft was used to chase each HiMAT mission. The TF-104G was equipped with remote control capability, and it could take control of the HiMAT vehicle if problems developed at the ground control site. A set of retractable skids was deployed for landing, which was accomplished on the dry lakebed adjacent to Dryden. Stopping distance was about 4,500 feet. During one of the HiMAT flight tests, a problem was encountered that resulted in a landing with the skids retracted. A timing change had been made in the ground-based HiMAT control system and in the onboard software that used the uplinked landing gear deployment command to extend the skids. Additionally, an onboard failure of one uplink receiver contributed to the anomaly. The timing change had been thoroughly tested with the onboard flight software. However, subsequent testing determined that the flight software operated differently when an uplink failure was present.[1215]

HiMAT research also brought about advances in digital flight control systems used to monitor and automatically reconfigure aircraft flight control surfaces to compensate for in-flight failures. HiMAT provided valuable information on a number of other advanced design features. These included integrated computerized flight control systems, aeroelastic tailoring, close-coupled canards and winglets, new composite airframe materials, and a digital integrated propulsion control system. Most importantly, the complex interactions of this set of then-new technologies to enhance overall vehicle performance were closely evaluated. The first HiMAT flight occurred July 27, 1979. The research program ended in January 1983, with the two vehicles completing a total of 26 flights, during which 11 hours of flying time were recorded.[1216] The two HiMAT research vehicles are today on exhibit at the NASA Ames Research Center and the Smithsonian Institution National Air and Space Museum.

Intelligent Flight Control System

Beginning in 1999, the NF-15B supported the Intelligent Flight Control System (IFCS) neural network project. This was oriented to developing a flight control system that could identify aircraft characteristics through the use of neural network technology in order to optimize performance and compensate for in-flight failures by automatically reconfiguring the flight control system. IFCS is an extension of the digital fly-by-wire flight control system and is intended to maintain positive aircraft control under certain failure conditions that would normally lead to loss of control. IFCS would automatically vary engine thrust and reconfigure flight control surfaces to compensate for in-flight failures. This is accomplished through the use of upgrades to the digital flight control system software that incorporate self-learning neural network technology. A neural network that could train itself to analyze the flight properties of an aircraft was developed, integrated into the NASA NF-15B, and evaluated in flight testing. The neural network "learns” aircraft flight characteristics in real time, using inputs from the aircraft sensors and from error corrections provided by the primary flight control computer. It uses this information to create different aircraft flight characteristic models. The neural network learns to recognize when the aircraft is in a stable flight condition. If one of the flight control surfaces becomes damaged or nonresponsive, the IFCS detects this fault and changes the flight characteristic model for the aircraft. The neural network then drives the error between the reference model and the actual aircraft state to zero. Dryden test pilot Jim Smolka flew the first IFCS test mission on March 19, 1999, with test engineer Gerard Schkolnik in the rear cockpit.[1278]

The NF-15B IFCS test program provided the opportunity for a limited flight evaluation of a direct adaptive neural network-based flight control system.[1279] This effort was led by the Dryden Flight Research Center, with collaboration from the Ames Research Center, Boeing, the Institute for Scientific Research at West Virginia University, and the Georgia Institute of Technology.[1280] John Bosworth was the NASA Dryden IFCS chief engineer. Flight-testing of the direct adaptive neural network-based flight control system began in 2003 and evaluated the outputs of the neural network. The neural network had been pretrained using flight characteristics obtained for the F-15 S/MTD aircraft from wind tunnel testing. During this phase of testing, the neural network did not actually provide any flight control inputs in flight. The outputs of the neural network were run directly to instrumentation for data collection purposes only.

In 2005, a fully integrated direct adaptive neural-network-based flight control system demonstrated that it could continuously provide error corrections and measure the effects of these corrections in order to learn new flight models or adjust existing ones. To measure the aircraft state, the neural network took a large number of inputs from the roll, pitch, and yaw axes and the aircraft’s control surfaces. If differences were detected between the measured aircraft state and the flight model, the neural network adjusted the outputs from the primary flight computer
to bring the differences to zero before they were sent to the actuator control electronics that moved the control surfaces.[1281] IFCS software evaluations with the NF-15B included aircraft handling qualities maneuvers, envelope boundary maneuvers, control surface excitations for real-time parameter identification that included pitch, roll, and yaw doublets, and neural network performance assessments.[1282] During NF-15B flight-testing, a simulated failure was introduced into the right horizontal stabilizer that simulated a frozen pitch control surface. Handling qualities were evaluated with and without neural network adaptation. The performance of the adaptation system was assessed in terms of its ability to decouple roll and pitch response and reestablish good onboard model tracking. Flight-testing with the simulated stabilator failure and the adaptive neural network flight control system adaptation showed general improvement in pitch response. However, a tendency for pilot-induced roll oscillations was encountered.[1283]
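
The error-driven adjustment described above can be illustrated, in greatly simplified form, with a single-axis model-reference sketch. The plant, reference model, gains, and adaptation law below are generic textbook choices assumed for illustration; they are not the IFCS neural network, which used many more inputs and a far more elaborate architecture.

```python
# Greatly simplified single-axis illustration of error-driven adaptation: an
# adaptive correction is added to the baseline command so that the measured
# response tracks a reference model. All values are placeholders.
import numpy as np

dt = 0.01
a_m, b_m = 4.0, 4.0          # reference model: q_ref_dot = -a_m*q_ref + b_m*cmd
a_p, b_p = 1.5, 2.0          # "degraded" plant, unknown to the adaptive element
gamma = 5.0                  # adaptation gain

q, q_ref = 0.0, 0.0
W = np.zeros(2)              # adaptive weights on the regressor phi = [q, cmd]

for k in range(3000):        # 30 seconds of simulated flight
    cmd = 1.0 if (k // 500) % 2 == 0 else -1.0   # square-wave pilot command
    phi = np.array([q, cmd])
    u = cmd + W @ phi                            # baseline plus adaptive correction
    q += dt * (-a_p * q + b_p * u)               # measured aircraft response
    q_ref += dt * (-a_m * q_ref + b_m * cmd)     # reference-model response
    e = q - q_ref                                # model-tracking error
    W -= dt * gamma * e * phi                    # drive the tracking error toward zero
```

The essential behavior matches the description in the text: the correction added to the baseline command is adjusted continuously so that the error between the measured response and the reference model is driven toward zero.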

Concurrent with NF-15B IFCS flight-testing, NASA Ames conducted a similar neural network flight research program using a remotely controlled Experimental Air Vehicle (EAV) equipped with an Intelligent Flight Controller (IFC). Aerodynamically, the EAV was a one-quarter-scale model of the widely used Cessna 182 Skylane general aviation aircraft. The EAV was equipped with two electrical power supplies, one for the digital flight control system that incorporated the neural-network IFC capability and one for the avionics installation that included three video cameras to assist the pilots with situation awareness. Several pilots flew the EAV during the test program. Differences in individual pilot control techniques were found to have a noticeable effect on the performance of the Intelligent Flight Controller. Interestingly, IFCS flight-testing with the NF-15B aircraft uncovered many of the same issues related to the controller that the EAV program found. IFCS was determined to provide increased stability margins in the presence of large destabilizing failures. The adaptive system provided better closed-loop behavior with improved matching of the onboard reference model. However, the convergent properties of the controller were found to require improvement because continued maneuvering caused continued adaptation change. During ground simulator evaluation of the IFCS, a trained light-plane pilot was able to successfully land a heavily damaged large jet airliner despite the fact that he had no experience with such an aircraft. Test data from the IFCS program provided a basis for analysis and understanding of neural network-based adaptive flight control system technology as an option for implementation into future aircraft.[1284]

After a 35-year career, during which it had flown with McDonnell-Douglas, the Air Force, and NASA, the NF-15B was retired following its final flight, on January 30, 2009. During its 14 years at NASA Dryden, the aircraft had flown 251 times. The NF-15B will be on permanent display with a group of other retired NASA research aircraft at Dryden.[1285]

Lean and Clean Propulsion Systems

NASA’s efforts to improve engine design stand out as the Agency’s greatest breakthroughs in "lean and green” engine development because of their continuing relevance today. Engineers are constantly seeking to increase efficiency to make their engines more attractive to commercial airlines: with increased efficiency come reduced fuel costs and increased performance in terms of speed, range, or payload.[1396] Emissions have also remained a concern for commercial aviation. The International Civil Aviation Organization (ICAO) has released increasingly strict standards for NOx emissions since 1981.[1397] The Environmental Protection Agency has adopted emissions standards to match those of ICAO and also has issued emissions standards for aircraft and aircraft engines under the Clean Air Act.[1398]

NASA’s most important contribution to fuel-efficient aircraft technology to date has arguably been E Cubed, a program focused on improving propulsion systems mainly to increase fuel efficiency. The end goal was not to produce a production-ready fuel-efficient engine, but rather to develop technologies that could—and did—result in propulsion efficiency breakthroughs at major U.S. engine companies. These breakthroughs included advances in thermal and propulsive efficiency, as well as improvements in the design of component engine parts. Today, General Electric and Pratt & Whitney (P&W) continue to produce engines and evaluate propulsion system designs based on research conducted under the E Cubed program.

The U.S. Government’s high expectations for E Cubed were reflected in the program’s budget, which stood at about $250 million in 1979 dollars.[1399] The money was divided between P&W and GE, which each used the funding to sweep its most cutting-edge technology into a demonstrator engine that would showcase the latest technology for conserving fuel, reducing emissions, and mitigating noise. Lawmakers funded E Cubed with the expectation that it would lead to a dramatic 12-percent reduction in specific fuel consumption (SFC), a term to describe the mass of fuel needed to provide a certain amount of thrust for a given period.[1400] Other E Cubed goals included a 5-percent reduction in direct operating costs, a 50-percent reduction in the rate of performance deterioration, and further reductions in noise and emissions levels compared to other turbofan engines at the time.[1401]
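
Expressed as a formula (a standard definition consistent with the description above), thrust-specific fuel consumption is the fuel mass flow per unit of thrust, and through the Breguet range equation it maps almost directly into mission fuel burn:

\[
\mathrm{TSFC} = \frac{\dot m_{\text{fuel}}}{F}, \qquad
R = \frac{V}{g\,\mathrm{TSFC}}\,\frac{L}{D}\,\ln\frac{W_{\text{initial}}}{W_{\text{final}}}.
\]

Because range for a given fuel load is inversely proportional to TSFC, a 12-percent reduction in SFC translates into roughly a 12-percent fuel saving on a fixed mission, which is what made the goal so attractive to airlines.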

The investment paid off in spades. What began as a proposal on Capitol Hill in 1975 to improve aircraft engine efficiency ended in 1983[1402] with GE and P&W testing engine demonstrators that improved SFC between 14 and 15 percent, exceeding the 12-percent goal. The demonstrators were also able to achieve a reduction in emissions. A NASA report from 1984 hailed E Cubed for helping to "keep American engine technology at the forefront of the world market.”[1403] Engineers involved in E Cubed at both GE and P&W said the technology advances were game changing for the aircraft propulsion industry.

Подпись: 12"The E Cubed program is probably the single biggest impact that NASA has ever had on aircraft propulsion,” GE’s John Baughman said. "The improvements in fuel efficiency and noise and emissions that have evolved from the E Cubed program are going to be with us for years to come.”[1404] Ed Crow, former Senior Vice President of Engineering at P&W, agreed that E Cubed marked the pinnacle of NASA’s involvement in improving aircraft fuel efficiency. "This was a huge program,” he said. "It was NASA and the Government’s attempt to make a huge step forward.”[1405]

E Cubed spurred propulsion research that led to improved fuel efficiency in three fundamental ways:

First, E Cubed allowed both GE and P&W to improve the thermal efficiency of their engine designs. Company engineers were able to significantly increase the engine pressure ratio, which means the pressure inside the combustor becomes much higher than atmospheric pressure. They were able to achieve the higher pressure ratio by improving the efficiency of the engine’s compressor, which compresses air and forces it into the combustor.

In fact, one of the most significant outcomes of the E Cubed program was GE’s development of a new "E Cubed compressor” that dramatically increased the pressure ratio while significantly reducing the number of compression stages. If there are too many stages, the engine can become big, heavy, and long; what is gained in fuel efficiency may be lost in the weight and cost of the engine. GE’s answer to that problem was to develop a compressor that had only 10 stages and produced a pressure ratio of about 23 to 1, compared to the company’s previous compressors, which had 14 stages and produced a pressure ratio of 14 to 1.[1406] That compressor is still in use today in GE’s latest engines, including the GE-90.[1407]

P&W’s E Cubed demonstrator had a bigger, 14-stage compressor, but the company was able to increase the pressure ratio by modifying the compressor blades to allow for increased loading per stage. P&W’s engines prior to E Cubed had pressure ratios around 20 to 1; P&W’s E Cubed demonstrator took pressure ratios to about 33 to 1, according to Crow.[1408]
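
The "loading per stage" point can be made concrete by taking the n-th root of the overall pressure ratios quoted above, which gives the average pressure ratio each stage must deliver. The short calculation below uses only the stage counts and overall ratios stated in the text.

```python
# Average pressure ratio per stage implied by the figures quoted in the text:
# the n-th root of the overall compressor pressure ratio for an n-stage machine.
cases = [
    ("GE pre-E Cubed compressor", 14, 14.0),
    ("GE E Cubed compressor", 10, 23.0),
    ("P&W E Cubed demonstrator", 14, 33.0),
]
for name, stages, overall in cases:
    per_stage = overall ** (1.0 / stages)
    print(f"{name}: {stages} stages, {overall:.0f}:1 overall -> {per_stage:.2f} per stage")
```

The GE E Cubed compressor thus had to average roughly 1.37 per stage versus about 1.21 for its 14-stage predecessor, which is the sense in which each stage was more heavily loaded.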

The second major improvement enabled by E Cubed research was a substantial increase in propulsive efficiency. Air moves most efficiently through an engine when its velocity doesn’t change much. The way to ensure that the velocity remains relatively constant is to maximize the engine’s bypass ratio: in other words, a relatively large mass of air must bypass the engine core—where air is mixed with fuel—and go straight out the back of the engine at a relatively low exhaust speed. Both GE and P&W employed more efficient turbines and improved aerodynamics on the fan blades to increase the bypass ratio to about 7 to 1 (compared with about 4 to 1 on P&W’s older engines).[1409]
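
The underlying relationship is the standard expression for the propulsive efficiency of a jet, quoted here as general background rather than as an E Cubed design equation: for flight speed V0 and mean exhaust (jet) velocity Vj,

\[
\eta_p \;=\; \frac{F\,V_0}{\tfrac{1}{2}\,\dot m\left(V_j^{2}-V_0^{2}\right)} \;=\; \frac{2}{1+V_j/V_0},
\qquad F=\dot m\,(V_j-V_0).
\]

A higher bypass ratio spreads the same thrust over a larger airflow, so Vj drops closer to V0 and the efficiency approaches one, which is why the move from roughly 4:1 to 7:1 bypass paid off in fuel burn.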

Finally, E Cubed enabled major improvements in engine component parts. This was critical, because other efficiencies can’t be maximized unless the engine parts are lightweight, durable, and aerodynamic. Increasing the pressure ratio, for example, leads to very high temperatures that can stress the engine. Both P&W and GE developed materials and cooling systems to ensure that engine components did not become too hot.

In addition to efforts to improve fuel efficiency, E Cubed gave both GE and P&W opportunities to build combustors that would reduce emissions. E Cubed emissions goals were based on the Environmental Protection Agency’s 1981 guidelines and called for reductions in carbon monoxide, hydrocarbons, NOx, and smoke. Both companies developed their emissions-curbing combustor technology under NASA’s Experimental Clean Combustor program, which ran from 1972 to 1976. Their main efforts were focused on controlling where and in what proportions air and fuel were mixed inside the combustor. Managing the fuel/air mix inside the combustor is critical to maximize combustion efficiency (and reduce carbon dioxide emissions as a natural byproduct) and to ensure that temperatures do not get so high that NOx is generated. GE tackled the mixing issue by developing a dual annular
combustor, while P&W went with a two-stage combustor that had two in-line combustor zones to control emissions.[1410]

Ultimately, E Cubed provided the financial backing required for both GE and P&W to pursue propulsion technology that has fed into their biggest engine lines. GE’s E Cubed compressor technology is used to power three types of GE engines, including the GE90-115B, which powers the Boeing 777-300ER and holds the world record for thrust.[1411] Other GE engines incorporating the E Cubed compressor include the GP-7200, which recently went into service on the Airbus A380, and the GE-NX, which is about to enter service on the Boeing 787.[1412] P&W also got some mileage out of the technologies developed under E Cubed. The company’s E Cubed demonstrator engine served as the inspiration for the PW2037, which fed into other engine designs that today power the Boeing 757 commercial airliner (the engine is designated PW2000) and the U.S. military’s C-17 cargo aircraft (the engine is designated F117).[1413]

NASA’s Wind Turbine Supporting Research and Technology Contributions

A very significant NASA Lewis contribution to wind turbine development involved the Center’s Supporting Research and Technology (SR&T) program. The primary objectives of this component of NASA’s overall wind energy program were to gather and report new experimental data on various aspects of wind turbine operation and to provide more accurate analytical methods for predicting wind turbine operation and performance. The research and technology activity covered the four following areas: (1) aerodynamics, (2) structural dynamics and aeroelasticity, (3) composite materials, and (4) multiple wind turbine system interaction. In the area of aerodynamics, NASA testing indicated that rounded blade tips improved rotor performance as compared with square rotor tips, resulting in an increase in peak rotor efficiency of approximately 10 percent. Also in the aerodynamics area, significant improvements were made in the design and fabrication of the rotor blades. Early NASA rotor blades used standard airfoil shapes from the aircraft industry, but wind turbine rotors operated over a significantly wider range of angles of attack (the angle between the blade chord line and the incoming airstream). The rotor blades also needed to be designed to last up to 20 or 30 years, which represented a challenging problem because of the extremely high number of cyclic loads involved in operating wind turbines. To help solve these problems, NASA awarded development grants to the Ohio State University to design and wind tunnel test various blade models, and to the University of Wichita to wind tunnel test a rotor airfoil with ailerons.[1516]

In the structural dynamics area, NASA was presented with problems related to wind loading conditions, including wind shear (variation of wind velocity with altitude), nonuniform wind gusts over the swept rotor area, and directional changes in the wind velocity vector field. NASA addressed these problems by developing a variable-speed generator system that permitted the rotor speed to vary with the wind conditions while still producing constant power.

Development work on the blade component of the wind turbine systems, including selecting the material for fabrication of the blades, represents another example of supporting technology. As noted above, NASA Lewis brought considerable structural design expertise in this area to the wind energy program as a result of previous work on helicopter rotor blades. Early in the program, NASA tested blades made of steel, aluminum, and wood. For the 2-megawatt Mod-1 phase of the program, however, NASA Lewis decided to contract with the Kaman Aerospace Corporation for the design, manufacture, and ground-testing of two 100-foot fiberglass composite blades. NASA provided the general design parameters, as well as the static and fatigue load information, required for Kaman to complete the structural design of the blades. As noted in Kaman’s report on the project, the use of fiberglass, which later became the preferred material for most wind turbine blades, had a number of advantages, including nearly unlimited design flexibility in adopting optimum planform tapers, wall thickness taper, twist, and natural frequency control; resistance to corrosion and other environmental effects; low notch sensitivity with slow failure propagation rate; low television interference; and low cost potential because of adaptability to highly automated production methods.[1517]

The above efforts resulted in a significant number of technical reports, analytical tests and studies, and computer models based upon the contributions of a number of NASA, university, and industry engineers and technicians. Many of the findings grew out of tests conducted on the Mod-0 testbed wind turbine at Plum Brook Station. One example is the work done in aerodynamics by Larry A. Viterna, a senior NASA Lewis engineer working on the wind energy project. In studying wind turbine performance at high angles of attack, he developed a method (often referred to as the Viterna method or model) that is widely used throughout the wind turbine industry and is integrated into design codes that are available from the Department of Energy. The codes have been approved for worldwide certification of wind turbines. Tests with the Mod-0 and Gedser wind turbines formed the basis for his work on this analytical model, which, while not widely accepted at the time, later gained wide acceptance. Twenty-five years later, in 2006, NASA recognized Larry Viterna and Bob Corrigan, who assisted Viterna on data testing, with the Agency’s Space Act Award from the Inventions and Contributions Board.[1518]
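
As background for readers unfamiliar with it, the Viterna model extrapolates an airfoil's lift and drag coefficients from the stall point out to 90 degrees angle of attack, the regime that matters for stall-controlled rotors. The sketch below implements the commonly published form of the extrapolation; the stall-point values and aspect ratio are placeholders, not data from the Mod-0 or Gedser tests.

```python
# Sketch of the Viterna post-stall extrapolation in the form widely used by the
# wind turbine community. Stall-point inputs are placeholders for illustration.
import math

def viterna(alpha_deg, alpha_s_deg, cl_s, cd_s, aspect_ratio):
    """Extrapolate lift and drag coefficients from the stall point to 90 deg."""
    a = math.radians(alpha_deg)
    a_s = math.radians(alpha_s_deg)
    cd_max = 1.11 + 0.018 * aspect_ratio if aspect_ratio <= 50 else 2.01
    kl = (cl_s - cd_max * math.sin(a_s) * math.cos(a_s)) * math.sin(a_s) / math.cos(a_s) ** 2
    kd = (cd_s - cd_max * math.sin(a_s) ** 2) / math.cos(a_s)
    cl = 0.5 * cd_max * math.sin(2.0 * a) + kl * math.cos(a) ** 2 / math.sin(a)
    cd = cd_max * math.sin(a) ** 2 + kd * math.cos(a)
    return cl, cd

# Placeholder stall-point data: stall at 15 deg, CL = 1.1, CD = 0.03, aspect ratio 10.
for alpha in (20.0, 30.0, 45.0, 60.0, 90.0):
    cl, cd = viterna(alpha, 15.0, 1.1, 0.03, 10.0)
    print(f"alpha {alpha:5.1f} deg: CL = {cl:5.2f}, CD = {cd:5.2f}")
```

At 90 degrees the extrapolation returns zero lift and a drag coefficient equal to CD_max, as expected for a fully stalled, flat-plate-like blade section.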