
Self-Repairing Flight Control System

The Self-Repairing Flight Control System (SRFCS) consists of software integrated into an aircraft's digital flight control system that is used to detect failures or damage to the aircraft control surfaces. In the event of control surface damage, the remaining control surfaces are automatically reconfigured to maintain control, enabling pilots to complete their mission and land safely. The program, sponsored by the U. S. Air Force, demonstrated the ability of a flight control system to identify the failure of a control surface and reconfigure commands to other control devices, such as ailerons, rudders, elevators, and flaps, to continue the aircraft's mission or allow it to be landed safely. As an example, if the horizontal elevator were damaged or failed in flight, the SRFCS would diagnose the failure and determine how the remaining flight control surfaces could be repositioned to compensate for the damaged or inoperable control surface. A visual warning to the pilot was used to explain the type of failure that occurred. It also provided revised aircraft flight limits, such as reduced airspeed, angle of attack, and maneuvering loads. The SRFCS also had the capability of identifying failures in electrical, hydraulic, and mechanical systems. Built-in test and sensor data provided a diagnostic capability and identified failed components or system faults for subsequent ground maintenance repair. System malfunctions on an aircraft with a SRFCS can be identified and isolated at the time they occur and then repaired as soon as the aircraft is on the ground, eliminating lengthy postflight maintenance troubleshooting.[1267]
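The reconfiguration idea at the heart of the SRFCS is essentially a control-allocation problem: when one effector is lost, the remaining surfaces are re-tasked to produce the commanded moments. The sketch below illustrates that idea only in generic terms; the effectiveness matrix, surface names, and least-squares reallocation are illustrative assumptions, not the actual SRFCS algorithms.

```python
import numpy as np

# Illustrative control-effectiveness matrix: rows are roll/pitch/yaw moments,
# columns are surfaces (left aileron, right aileron, elevator, rudder).
# The numbers are invented for demonstration only.
B = np.array([
    [ 1.0, -1.0,  0.0,  0.1],   # roll
    [ 0.2,  0.2,  1.5,  0.0],   # pitch
    [ 0.0,  0.0,  0.0,  1.2],   # yaw
])
surfaces = ["left aileron", "right aileron", "elevator", "rudder"]

def allocate(moment_cmd, failed=()):
    """Distribute a commanded moment vector over the healthy surfaces.

    A failed surface's column is zeroed, and a least-squares solution
    re-spreads the demand over whatever effectiveness remains.
    """
    B_eff = B.copy()
    for i in failed:
        B_eff[:, i] = 0.0          # lost surface contributes nothing
    deflections, *_ = np.linalg.lstsq(B_eff, moment_cmd, rcond=None)
    return deflections

cmd = np.array([0.3, 1.0, 0.0])    # desired roll, pitch, yaw moments

print("nominal     :", allocate(cmd).round(3))
print("elevator out:", allocate(cmd, failed=[2]).round(3))
```

With the elevator column zeroed, the pitch demand is redistributed to the surfaces that retain some pitch effectiveness, which is the basic behavior the SRFCS demonstrated on the HIDEC F-15.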

The SRFCS was flown 25 times on the HIDEC F-15 at NASA Dryden between December 1989 and March 1990, with somewhat mixed results. The maintenance diagnostics aspect of the system was a general success, but the failure-detection function proved less reliable. Simulated control system failures were induced, and the SRFCS correctly identified every failure that it detected; however, it sensed the induced failures only 61 percent of the time. The overall conclusion was that the SRFCS concept was promising but needed more development before it could be successfully implemented in production aircraft.

NASA test pilot Jim Smolka flew the first SRFCS flight, on December 12, 1989, with test engineer Gerard Schkolnik in the rear cockpit; other SRFCS test pilots were Bill Dana and Tom McMurtry.[1268]

Damage-Tolerant Fan Casing

While most eyes were on the big picture of making major engine advancements through the years, some very specific problems were addressed with programs that are just as interesting to consider as the larger research endeavors. The casings that surround the jet engine's turbomachinery are a case in point.

With the 1989 crash of United Airlines Flight 232 at Sioux City, IA, aviation safety officials became more interested in finding new materials capable of containing the resulting shrapnel created when a jet engine’s blade or other component breaks free. In the case of the DC-10 involved in this particular crash, the fan disk of the No. 2 engine—the one located in the tail—separated from the engine and caused the powerplant to explode, creating a rain of shrapnel that could not be contained within the engine casing. The sharp metal fragments pierced the body of the aircraft and cut lines in all three of the aircraft’s hydraulic systems. As previously mentioned in this case study, the pilots on the DC-10 were able to steer their aircraft to a nearly controlled landing. The incident inspired NASA pilots to refine the idea of using only jet thrust to maneuver an airplane and undertake the Propulsion Controlled Aircraft program, which took full advantage of the earlier Digital Electronic Engine Control research. The Iowa accident also sent structures and materials experts off on a hunt to find a way to prevent accidents like this in the future.


The United Flight 232 example notwithstanding, the challenge for structures engineers is to design an engine casing that will contain a failed fan blade within the engine so that it has no chance to pierce the passenger compartment wall and threaten the safety of passengers or cause a catastrophic tear in the aircraft wall. Moreover, not only does the casing have to be strong enough to withstand any blade or shrapnel impacts, it must not lose its structural integrity during an emergency engine shutdown in flight. A damaged engine can take some 15 seconds to shut down, during which time cracks from the initial blade impacts can propagate in the fan case. Should the fan case totally fail, the resulting breakup of the already compromised turbomachinery could be catastrophic to the aircraft and all aboard.[1360]

As engineers considered the use of composite materials, two methods for containing blade damage within the engine casing were now available: the new softwall and the traditional hardwall. In the softwall concept, the casing was made of a sandwich-type aluminum structure overwound with dry aramid fibers. (Aramid fibers were introduced commercially by DuPont during the early 1960s and were known by the trade name Nomex.) The design allows broken blades and other shrapnel to pass through the "soft" aluminum and be stopped and contained within the aramid fiber wrap. In the hardwall approach, the casing is made of aluminum only and is built as a rigid wall to reflect blade bits and other collateral damage back into the casing interior. Of course that vastly increases the risk that the shrapnel will be ingested through the engine and cause even greater damage, perhaps catastrophic. While that risk exists with the softwall design, it is not as substantial. A benefit of the hardwall, however, is that it maintains its structural soundness, or ductility, during a breakup of an engine. A softwall also features some amount of ductility, but the energy-absorbing properties of the aramid fibers are its major draw.[1361]

In 1994, NASA engineers at the Lewis Research Center began looking into better understanding engine fan case structures and conducted impact tests as part of the Enabling Propulsion Materials program. Various metallic materials and new ideas for lightweight fan containment structures were studied. By 1998, the research expanded to include investigations into use of polymer composites for engine fan casings. As additional composite materials were made available, NASA researchers sought to understand their properties and the appropriateness of those materials in terms of containment capability, damage tolerance, commercial viability, and understanding any potential risk not yet identified for their use on jet engines.[1362]

In 2001, NASA awarded a Small Business Innovation Research (SBIR) grant to A&P Technology, Inc., of Cincinnati to develop a damage-tolerant fan casing for a jet engine. Long before composites came along, the company's expertise was in braiding materials together, such as clotheslines and candlewicks. A&P—working together with the FAA, Ohio State University, and the University of Akron—was able to rapidly develop a prototype composite fan case that could be compared to the metal fan case. Computer simulations were key to the effort and serendipitously provided an opportunity to grow the industry's understanding and ability to use those very same simulation capabilities. First, well-understood metallic casings undergoing a blade-out scenario were modeled, and the resulting codes were checked against the already-known results. Then came the trick of introducing code that would represent A&P's composite casing and its reaction to a blade-out situation. The process was repeated for a composite material wrapped with a braided fiber material, and results were very promising.[1363]

The composite casing proposed by A&P used a triaxial carbon braid, which is tougher than aluminum yet lighter, helping to reduce fuel consumption. In tests of debris impact, the braided laminate performed better than the metal casing: in some cases, the composite structure absorbed the energy of the impact as the debris bounced off the wall, and in other cases where the shrapnel penetrated the material, the damage to the wall was isolated to the impact point and did not spread. In a metal casing that was pierced, the resulting hole would instigate several cracks that would continue to propagate along the casing wall, appearing much like the spiderweb of cracks that appears on an automobile windshield when it is hit with a small stone on the freeway.

NASA continues to study the use of composite casings to better understand the potential effects of aging and/or degradation following the constant temperature, vibration, and pressure cycles a jet engine experiences during each flight. There also is interest in studying the effects of higher operating temperatures on the casing structure for possible use on future supersonic jets. (The effect of composite fan blades on casing containment also has been studied.)[1364]

A General Electric GEnx engine with a composite damage-tolerant fan casing is checked out before eventual installation on the new Boeing 787. General Electric.

While composites have found many uses in commercial and military aviation, the first all-composite engine casing, provided by A&P, is set to be used on GE's GEnx turbofan designed for the Boeing 787. The braided casing weighs 350 pounds less per engine, and, when other engine installation hardware to handle the lighter powerplants is considered, the 787 should weigh 800 pounds less than a similarly equipped airliner using aluminum casings. The weight reduction also should provide a savings in fuel cost, increased payload, and/or a greater range for the aircraft.[1365]

NASA-Industry Wind Energy Program Large Horizontal-Axis Wind Turbines

The primary objective of the Federal Wind Energy Program and the specific objectives of NASA’s portion of the program were outlined in a followup technical paper presented in 1975 by Thomas, Savino, and Richard L. Puthoff. The paper noted that the overall objective of the
program was "to develop the technology for practical cost-competitive wind-generator conversion systems that can be used for supplying significant amounts of energy to help meet the nation's energy needs."[1499] The specific objectives of NASA Lewis's portion of the program were to: (1) identify cost-effective configurations and sizes of wind-conversion systems; (2) develop the technology needed to produce cost-effective, reliable systems; (3) design wind turbine generators that are compatible with user applications, especially with electric utility networks; (4) build up industry capability in the design and fabrication of wind turbine generators; and (5) transfer the technology from the program to industry for commercial application. To satisfy these objectives, NASA Lewis divided the development function into the three following areas: (1) design, fabrication, and testing of a 100-kilowatt experimental wind turbine generator; (2) optimizing the wind turbines for selected user operation; and (3) supporting research and technology for the systems.

The planned workload was divided further by assignment of different tasks to different NASA Research Centers and industry participants. NASA Lewis would provide project management and support in aerodynamics, instrumentation, structural dynamics, data reduction, machine design, facilities, and test operations. Other NASA Research Centers would provide consulting services within their areas of expertise. For example, Langley worked on aeroelasticity matters, Ames consulted on rotor dynamics, and Marshall provided meteorology support. Initial industry participants included Westinghouse, Lockheed Corporation, General Electric, Boeing, and Kaman Aerospace.

In order to undertake its project management role, NASA Lewis established the Center's Wind Power Office, which consisted initially of three operational units—one covering the development of an experimental 100-kilowatt wind turbine, one handling the industry-built, utility-operated wind turbines, and one providing supporting research and technology. The engineers in these offices basically worked together in a less formal structure, crossing over between various operational areas. Also, the internal organization apparently underwent several changes during the program's existence. For example, in 1976, the program was directed by the Wind Power Office as part of the Solar Energy Branch. The first two office managers were Ronald Thomas and William Robbins. By 1982, the organization consisted of a Wind Energy Project Office, which was once again under the supervision of Thomas and was part of the Wind and Stationary Power Division. The office consisted of a project development and support section under the supervision of James P. Couch (who managed the Mod-2 project), a research and technology section headed by Patrick M. Finnegan, and a wind turbine analysis section under the direction of David A. Spera. By 1984, the program organization had changed again with the Wind Energy Project Office, which was under the supervision of Darrell H. Baldwin, becoming part of the Energy Technology Division. The office consisted of a technology section under Richard L. Puthoff and an analysis section headed by David A. Spera. The last NASA Lewis wind energy program manager was Arthur Birchenough.

Dick Whitcomb and the Transonic-Supersonic Breakthrough

Whitcomb joined the research community at Langley in 1943 as a member of Stack's Transonic Aerodynamics Branch working in the 8-foot High-Speed Tunnel (HST). Initially, NACA managers placed him in the Flight Instrument Research Division, but Whitcomb's force of personality ensured that he would be working directly on problems related to aircraft design. As many of his colleagues and historians would attest, Whitcomb quickly became known for an analytical ability rooted in mathematics, instinct, and aesthetics.[145]

In 1945, Langley increased the power of the 8-foot HST to generate Mach 0.95 speeds, and Whitcomb was becoming increasingly familiar with transonic aerodynamics, which helped him in his developing investigation into the design of supersonic aircraft. The onset of drag created by shock waves at transonic speeds was the primary challenge. John Stack, Ezra Kotcher, and Lawrence D. Bell proved that breaking the sound barrier was possible when Chuck Yeager flew the Bell X-1 to Mach 1.06 (700 mph) on October 14, 1947. Designed in the style of a .50-caliber bullet with straight wings, the Bell X-1 was a successful supersonic airplane, but it was a rocket-powered research airplane designed specifically for and limited to that purpose. The X-1 would not offer designers the shape of future supersonic airplanes. Operational turbojet-powered aircraft designed for military missions were much heavier and would use up much of their fuel gradually accelerating toward Mach 1 to lessen transonic drag.[146] The key was to get operational aircraft through the transonic regime, which ranged from Mach 0.9 to Mach 1.1.

A very small body of transonic research existed when Whitcomb undertook his investigation. British researchers W. T. Lord of the Royal Aircraft Establishment and G. N. Ward of the University of Manchester and American Wallace D. Hayes attempted to solve the problem of transonic drag through mathematical analyses shortly after World War II in 1946. These studies produced mathematical treatments that did not lend themselves to the design and shaping of transonic and supersonic aircraft.[147]

Whitcomb’s analysis of available data generated by the NACA in ground and free-flight tests led him to submit a proposal for testing swept wing and fuselage combinations in the 8-foot HST in July 1948. There had been some success in delaying transonic drag by addressing the relationship between wing sweep and fuselage shape. Whitcomb believed that careful attention to arrangement and shape of the wing and fuselage would result in their counteracting each other. His goal was to reach a milestone in supersonic aircraft design. The tests, conducted from late 1949 to early 1950, revealed no significant decrease in drag at high subsonic (Mach 0.95) and low supersonic (Mach 1.2) speeds. The wing-fuselage combinations actually generated higher drag than their individual values combined. Whitcomb was at an impasse and realized he needed to refocus on learning more about the fundamental nature of transonic airflow.[148]

Just before Whitcomb had submitted his proposal for his wind tunnel tests, John Stack ordered the conversion of the 8-foot HST in the spring of 1948 to a slotted throat to enable research in the transonic regime. In theory, slots in the tunnel's test section, or throat, would enable smooth operation at very high subsonic speeds and at low supersonic speeds. The initial conversion was not satisfactory because of uneven flow. Whitcomb and his colleagues, physicist Ray Wright and engineer Virgil S. Ritchie, hand-shaped the slots based on their visualization of smooth transonic flow. They also worked directly with Langley woodworkers to design and fabricate a channel at the downstream end of the test section that reintroduced air that traveled through the slots. Their painstaking work led to the inauguration of transonic operations within the 8-foot HST 7 months later, on October 6, 1950.[149] Whitcomb, as a young engineer, was helping to refine a tunnel configuration that was going to allow him to realize his potential as a visionary experimental aeronautical engineer.

The slotted-throat test section of the 8-foot High-Speed Tunnel. NASA.

The NACA issued a confidential report on the new tunnel during the fall of 1948, which was distributed to the military services and select manufacturers. By the following spring, rumors about the new tunnel had been circulating throughout the industry. The initial call for secrecy evolved into outright public acknowledgement of the NACA's new transonic tunnels (including the 16-foot HST) with the awarding of the 1951 Collier Trophy to John Stack and 19 of his associates at Langley for the slotted wall. The Collier Trophy specifically recognized the importance of a research tool, a first in the 40-year history of the award. The NACA claimed that its slotted-throat transonic tunnels gave the United States a 2-year lead in the design of supersonic military aircraft.[150]

With the availability of the 8-foot HST and its slotted throat, the combined use of previously available wind tunnel tools—the tunnel balance, pressure orifices, tuft surveys, and schlieren photographs—resulted in a new theoretical understanding of transonic drag. The schlieren photographs revealed three shock waves at transonic speeds. One was the familiar shock wave that formed at the nose of an aircraft as it pushed forward through the air. The other two were, according to Whitcomb, "fascinating new types" of shock waves never before observed: one where the fuselage and wings met, and another at the trailing edge of the wing. These shocks contributed to a new understanding that transonic drag was much larger in proportion to the size of the fuselage and wing than previously believed. Whitcomb speculated that these new shock waves were the cause of transonic drag.[151]

From SCAT Research to SST Development

The recently established FAA became the major advocate within the U. S. Government for a supersonic transport, with key personnel at three of the NACA’s former laboratories eager to help with this challenging new program. The Langley Research Center in Hampton, VA, (the NACA’s oldest and largest lab) and the Ames Research Center at Moffett Field in Sunnyvale, CA, both had airframe design expertise and facilities, while the Lewis Research Center in Cleveland, OH, specialized in the kind of advanced propulsion technologies needed for supersonic cruise.

The strategy for developing the SCAT depended heavily on leveraging technologies being developed for another Air Force bomber—one much larger, faster, and more advanced than the B-58. This would be the revolutionary B-70, designed to cruise several thousand miles at speeds of Mach 3. NACA experts had been helping the Air Force plan this giant intercontinental bomber since the mid-1950s (with aerodynamicist Alfred Eggers of the Ames Laboratory conceiving the innovative design for it to ride partially on compression lift created by its own supersonic shock waves). North American Aviation won the B-70 contract in 1958, but the projected expense of the program and advances in missile technology led President Dwight Eisenhower to cancel all but one prototype in 1959. The administration of President John Kennedy eventually approved production of two XB-70As. Their main purpose would be to serve as Mach 3 testbeds for what had become known simply as the Supersonic Transport (SST). NASA continued to refer to design concepts for the SST using the older acronym for Supersonic Commercial Air Transport. By 1962, these concepts had been narrowed down to three Langley designs (SCAT-4, SCAT-15, and SCAT-16) and one from Ames (SCAT-17). These became the baselines for industry studies and SST proposals.[345]

Even though Department of Defense resources (especially the Air Force's) would be important in supporting the SST program, the aerospace industry made it clear that direct federal funding and assistance would be essential. Thus research and development (R&D) of the SST became a split responsibility between the Federal Aviation Agency and the National Aeronautics and Space Administration—with NASA conducting and sponsoring the supersonic research and the FAA in charge of the SST's overall development. The first two leaders of the FAA, retired Lt. Gen. Elwood R. "Pete" Quesada (1958-1961) and Najeeb E. Halaby (1961-1965), were both staunch proponents of producing an SST, as to a slightly lesser degree was retired Gen. William F. "Bozo" McKee (1965-1968). As heads of an independent agency that reported directly to the president, they were at the same level as NASA Administrators T. Keith Glennan (1958-1961) and James E. Webb (1961-1968). The FAA and NASA administrators, together with Secretary of Defense Robert McNamara (somewhat of a skeptic on the SST program), provided interagency oversight and comprised the Presidential Advisory Committee (PAC) for the SST, established in April 1964. This arrangement lasted until 1967, when the Federal Aviation Agency became the Federal Aviation Administration under the new Department of Transportation, whose secretary became responsible for the program.[346]

Much of NASA's SST-related research involved advancing the state-of-the-art in such technologies as propulsion, fuels, materials, and aerodynamics. The latter included designing airframe configurations for sustained supersonic cruise at high altitudes, suitable subsonic maneuvering in civilian air traffic patterns at lower altitudes, safe takeoffs and landings at commercial airports, and acceptable noise levels—to include the still-puzzling matter of sonic booms.

Dealing with the sonic boom entailed a multifaceted approach: (1) performing flight tests to better quantify the fluid dynamics and atmospheric physics involved in generating and propagating shock waves, as well as their effects on structures and people; (2) conducting community surveys to gather public opinion data on sample populations exposed to booms; (3) building and using acoustic simulators to further evaluate human and structural responses in controlled settings; (4) performing field studies of possible effects on animals; (5) evaluating various aerodynamic configurations in wind tunnel experiments; and (6) analyzing flight test and wind tunnel data to refine theoretical constructs and mathematical models for lower-boom aircraft designs. Within NASA, the Langley Research Center was a focal point for sonic boom studies, with the Flight Research Center (FRC) at Edwards AFB conducting many of the supersonic tests.[347]

Although the NACA, especially at Langley and Ames, had been doing research on supersonic flight since World War II, none of its technical reports (and only one conference paper) published through 1957 dealt directly with sonic booms.[348] That situation began to change when Langley's long-time manager and advocate of supersonic programs, John P. Stack, formalized the SCAT venture in 1958. During the next year, three Langley employees whose names would become well known in the field of sonic boom research began publishing NASA's first scientific papers on the subject. These were Harry W. Carlson, a versatile supersonic aerodynamicist, Harvey H. Hubbard, chief of the Acoustics and Noise Control Division, and Domenic J. Maglieri, a young engineer who became Hubbard's top sonic boom specialist. Carlson would tend to focus on wind tunnel experiments and sonic boom theory, while the other two men specialized in planning and monitoring field tests, then analyzing the data collected.[349] These research activities began to expand under the new pro-SST Kennedy Administration in 1961. After the president formally approved development of the supersonic transport in June 1963, sonic boom research took off. Langley's experts, augmented by NASA contractors and grantees, published 26 papers on sonic booms within just 3 years.[350]

Transatmospherics after NASP

Two developments have paced work in hypersonics since NASP died in 1995. Continuing advances in computers, aided markedly by advancements in wind tunnels, have brought forth computational fluid dynamics (CFD). Today, CFD simulates the aerodynamics of flight vehicles with increasing (though not perfect) fidelity. In addition, NASA and the Air Force have pursued a sequence of projects that now aim clearly at developing operational scramjet-powered military systems.

Early in the NASP effort, in 1984, Robert Whitehead of the Office of Naval Research spoke on CFD to its people. Robert Williams recalls that Whitehead presented the equations of fluid dynamics "so the computer could solve them, then showed that the computer technology was also there. We realized that we could compute our way to Mach 25 with high confidence."[658] Unfortunately, in reality, DARPA could not do that. In 1987, the trade journal Aerospace America reported: "almost nothing is known about the effects of heat transfer, pressure gradient, three-dimensionality, chemical reactions, shock waves, and other influences on hypersonic transition."[659] (This transition causes a flow to change from laminar to turbulent, a matter of fundamental importance.)

Code development did mature enough to adequately support the next hypersonic system, NASA's X-43A program. In supporting the X-43A effort, NASA's most important code was GASP. NASP had used version 2.0; the X-43A used 3.0.[660] Like any flow code, it could not calculate the turbulence directly but had to model it. GASP 3.0 used the Baldwin-Lomax algebraic model, which Princeton's Antony Jameson, a leading writer of flow codes, describes as "the most popular model in the industry, primarily because it's easy to program."[661] GASP 3.0 also used "eddy-viscosity" models, which Stanford's Peter Bradshaw rejects out of hand: "Eddy viscosity does not even deserve to be described as a 'theory' of turbulence!" More broadly, he adds, "Even the most sophisticated turbulence models are based on brutal simplifications" of the pertinent nonlinear partial differential equations.[662]
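For readers unfamiliar with the term, the eddy-viscosity idea Bradshaw objects to can be stated compactly. The following is a minimal sketch of the Boussinesq eddy-viscosity hypothesis in its simplest thin-shear-layer form; the notation is standard textbook usage rather than anything specific to GASP or Baldwin-Lomax.

```latex
% Boussinesq eddy-viscosity hypothesis (thin-shear-layer form):
% the unknown turbulent shear stress is modeled as if it were an extra viscosity.
\[
\tau_{\text{turb}} \;=\; \mu_t \,\frac{\partial \bar{u}}{\partial y},
\qquad
\tau_{\text{total}} \;=\; (\mu + \mu_t)\,\frac{\partial \bar{u}}{\partial y}.
\]
% Algebraic models such as Baldwin--Lomax supply the eddy viscosity \mu_t
% directly from local mean-flow quantities rather than from extra transport equations.
```

The "brutal simplification" is precisely the assumption that all of the turbulence physics can be folded into the single scalar quantity, the eddy viscosity.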

Can increasing computer power make up for this? Calculations of the NASP era had been rated in gigaflops, billions of floating point operations per second (FLOPS).[663] An IBM computer has recently cracked the petaflop mark—a quadrillion operations per second—and even greater performance is being contemplated.[664] At Stanford University's Center for Turbulence Research, analyst Krishnan Mahesh studied flow within a commercial turbojet and computed a mean pressure drop that differed from the observed value by only 2 percent. An earlier computation had given an error of 26 percent, an order of magnitude higher.[665] He used Large Eddy Simulation (LES), which calculates the larger turbulent eddies and models the small ones that have a more universal character. But John Anderson, a historian of fluid dynamics, notes that LES "is not viewed as an industry standard." He sees no prospect other than direct numerical simulation (DNS), which directly calculates all scales of turbulence. "It's clear-cut," he adds. "The best way to calculate turbulence is to use DNS. Put in a fine enough grid and calculate the entire flow field, including the turbulence. You don't need any kind of model and the turbulence comes out in the wash as part of the solution." But in seeking to apply DNS, even petaflops aren't enough. Use of DNS for practical problems in industry is "many decades down the road. Nobody to my knowledge has used DNS to deal with flow through a scramjet. That type of application is decades away."[666] With the limitations as well as benefits of CFD more readily apparent, it thus is significant that more traditional hypersonic test facilities are also improving. As just one example, NASA Langley's largest hypersonic facility, the 8-foot High Temperature Tunnel (HTT), has been refitted to burn methane and use its combustion products, with oxygen replenishment, as the test gas. This heats the gas. As reviewed by the Journal of Spacecraft and Rockets: "the oxygen content of the freestream gas is representative of flight conditions as is the Mach number, total enthalpy, dynamic pressure, and Reynolds number."[667]
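Anderson's point about DNS being decades away can be made concrete with a back-of-the-envelope estimate. The sketch below uses the standard textbook scaling that the number of grid points needed to resolve all turbulent scales grows roughly as Re^(9/4); the Reynolds numbers, operations per point per step, and step count are illustrative guesses, not figures from the source.

```python
# Rough order-of-magnitude estimate of DNS cost versus available computing power.
# Scaling rule: grid points ~ Re**(9/4) (Kolmogorov-scale resolution argument).

def dns_cost(reynolds, flops_per_point_step=1e3, n_steps=1e5):
    """Return (grid points, total floating point operations) for a notional DNS."""
    n_points = reynolds ** 2.25          # ~Re^(9/4) resolution requirement
    total_flops = n_points * flops_per_point_step * n_steps
    return n_points, total_flops

petaflop_machine = 1e15                  # operations per second

for rey in (1e5, 1e7, 1e8):              # lab rig -> flight-scale engine (illustrative)
    pts, flops = dns_cost(rey)
    seconds = flops / petaflop_machine
    years = seconds / 3.15e7
    print(f"Re = {rey:.0e}: ~{pts:.1e} grid points, "
          f"~{years:.1e} years on a 1-petaflop machine")
```

Even with generous assumptions, the flight-scale cases land at thousands of years of petaflop-class computing, which is why modeled approaches such as LES and eddy-viscosity closures remain the practical tools.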

One fruitful area with NASP had been its aggressive research on scramjets, which benefited substantially because of NASA’s increasing investment in high-temperature hypersonic test facilities.[668]

Table 3 enumerates the range of hypersonic test facilities for scramjet and aerothermodynamic research available to researchers at the NASA Langley Research Center. Between 1987 and the end of 1994, Langley researchers ran over 1,500 tests on 10 NASP engine modules, over 1,200 of them in a single 3-year period from the end of 1987 to 1990. After NASP wound down, Agency researchers ran nearly 700 tests on four other configurations between 1994 and 1996. These tests, ranging from Mach 4 to Mach 8, so encouraged scramjet proponents that they went ahead with plans for a much-scaled-back effort, the Hyper-X (later designated X-43A), which compared in some respects with the ASSET program undertaken after cancellation of the X-20 Dyna-Soar three decades earlier.[669]

TABLE 3
NASA LRC SCRAMJET PROPULSION AND AEROTHERMODYNAMIC TEST FACILITIES

FACILITY NAME                                      | MACH    | REYNOLDS NUMBER     | SIZE
8-foot High Temperature Tunnel                     | 4, 5, 7 | 0.3-5.1 x 10^6/ft   | 8-ft. dia.
Arc-Heated Scramjet Test Facility                  | 4.7-8.0 | 0.04-2.2 x 10^6/ft  | 4-ft. dia.
Combustion-Heated Scramjet Test Facility           | 3.5-6.0 | 1.0-6.8 x 10^6/ft   | 42" x 30"
Direct Connect Supersonic Combustion Test Facility | 4.0-7.5 | 1.8-31.0 x 10^6/ft  | [Note (a)]
HYPULSE Shock Tunnel [Note (b)]                    | 5.0-25  | 0.5-2.5 x 10^6/ft   | 7-ft. dia.
15-inch Mach 6 High Temperature Tunnel             | 6       | 0.5-8.0 x 10^6/ft   | 15" dia.
20-inch Mach 6 CF4 Tunnel                          | 6       | 0.05-0.7 x 10^6/ft  | 20" dia.
20-inch Mach 6 Tunnel                              | 6       | 0.5-8.0 x 10^6/ft   | 20" x 20"
31-inch Mach 10 Tunnel                             | 10      | 0.2-2.2 x 10^6/ft   | 31" x 31"

Source: Data from NASA LRC facility brochures.
(a) The DCSCTF test section varies: 1.52" x 3.46" with a M = 2 nozzle and 1.50" x 6.69" with a M = 2.7 nozzle.
(b) LRC's HYPULSE shock tunnel is at the GASL Division of Allied Aerospace Industries, Ronkonkoma, NY.

The X-43, managed at Langley Research Center by Vincent Rausch, a veteran of the earlier TAV and NASP efforts, began in 1995 as Hyper-X, coincident with the winddown of NASP. It combined a GASL scramjet engine with a 100-inch-long by 60-inch-span slender lifting body and an Orbital Sciences Pegasus booster, this combination being carried to a launch altitude of 40,000 feet by NASA Dryden's NB-52B Stratofortress. After launch, the Pegasus took the X-43 to approximately 100,000 feet, where it would separate, demonstrating scramjet ignition (using silane and then adding gaseous hydrogen) and operation at velocities as high as Mach 10.

Schematic layout of the Hyper-X (subsequently X-43A) scramjet test vehicle and its Orbital Sciences Pegasus winged booster, itself a hypersonic vehicle. NASA.

The X-43 program cost $230 million and consumed not quite a decade of development time. Built by Microcraft, Inc., of Tullahoma, TN, the X-43 used the shape of a Boeing study for a Mach 10 global reconnaissance and space access vehicle, conceived by a team under the leadership of George Orton. Langley Research Center furnished vital support, executing nearly 900 test runs of 4 engine configurations between 1996 and 2003.[670]

Microcraft completed three X-43A flight-test vehicles for testing by NASA Dryden Flight Research Center. Unfortunately, the first flight attempt failed in 2001, when the Pegasus booster shed a control fin after launch. A 3-year reexamination and review of the program led to a successful flight on March 27, 2004, the first successful hypersonic flight of a scramjet-powered airplane. The Pegasus boosted the X-43A to Mach 6.8. After separation, the X-43A burned silane, which ignites on contact with the air, for 3 seconds. Then it ramped down the silane and began injecting gaseous hydrogen, burning this gas for 8 seconds. This was the world's first flight test of such a scramjet.[671]

That November, NASA did it again with its third X-43A. On November 16, it separated from its booster at 110,000 feet and Mach 9.7, and its engine burned for 10 to 12 seconds with silane off. On its face, this looked like the fastest air-breathing flight in history, but this speed (approximately 6,500 mph) resulted from its use of Pegasus, a rocket. The key point was that the scramjet worked, however briefly. During the flight, the X-43A experienced airframe temperatures as high as 3,600 °F.[672]

Meanwhile, the Air Force was preparing to take the next step with its HyTech program. Within it, Pratt & Whitney, now merged with Rocketdyne, has been a major participant. In January 2001, it demonstrated the Performance Test Engine (PTE), an airframe-integrated scramjet that operated at hypersonic speeds using the hydrocarbon JP-7. Like the X-43A engine, though, the PTE was heavy. Its successor, the Ground Demonstrator Engine (GDE), was flight-weight. It also used fuel to cool the engine structure. One GDE went to Langley for testing in the HTT in 2005. It made the important demonstration that the cooling could be achieved using no more fuel than was to be employed for propulsion.

Next on the transatmospheric agenda is a new X-test vehicle, the X-51A, built by Boeing, with a scramjet by Pratt & Whitney Rocketdyne. These firms are also participants in a consortium that includes support from NASA, DARPA, and the Air Force. The X-51A scramjet is fuel-cooled, with the cooling allowing it to be built of Inconel 625 nickel alloy rather than an exotic superalloy. Lofted to Mach 4.7 by a modified Army Tactical Missile System (ATACMS) artillery rocket booster, the X-51A is intended to fly at Mach 7 for minutes at a time, burning JP-7, a hydrocarbon fuel used previously on the Lockheed SR-71. The X-51A uses ethylene to start the combustion; then the flight continues on JP-7. Following checkout trials beginning in late 2009, the X-51 made its first powered flight on May 26, 2010. After being air-launched from a B-52, it demonstrated successful hydrocarbon scramjet ignition and acceleration. Further tests will hopefully advance the era of practical scramjet-powered flight, likely beginning with long-range missiles. As this review indicates, the story of transatmospherics illustrates the complexity of hypersonics; the tenacity and dedication of NASA's aerodynamics, structures, and propulsion community; and the Agency's commitment to take on challenges, no matter how difficult, if the end promises to be the advancement of flight and humanity's ability to utilize the air and space medium.[673]

The first Boeing X-51 WaveRider undergoing final preparations for flight, Edwards AFB, California, 2010. USAF.

Updating Simulator Prediction with Flight-Test Experience

Test pilots who "flew" the early simulators were skeptical of the results that they observed, because there was usually some aspect of the simulation that did not match the real airplane. Stick forces and control surface hinge moments were often not properly matched on the simulator, and thus the apparent effectiveness of the ailerons or elevators was often higher or lower than experienced with the airplane. For procedural trainers (used for checking out pilots in new airplanes), mathematical models were often changed erroneously based strictly on pilot comments, such as "the airplane rolls faster than the simulator." Since these early simulators were based strictly on wind tunnel or theoretical aerodynamic predictions and calculated moments of inertia, the flight-test community began to explore methods for measuring and validating the mathematical models to improve the acceptance of simulators as valid tools for analysis and training. Ground procedures and support equipment were devised by NASA to measure the moments of inertia of small aircraft and were used for many of the research airplanes flown at DFRC.[725]

A large inertia table was constructed in the Air Force Flight Test Center Weight and Balance facility at Edwards AFB for the purpose of measuring the inertia of large airplanes. Unfortunately, the system was never able to provide accurate results, as fluctuations in temperature and humidity adversely affected the performance of the table’s sensitive bearings, so the concept was discarded.

During the X-15 flight-test program, NASA researchers at Edwards developed several methods for extracting the aerodynamic stability derivatives from specific flight-test maneuvers. Researchers then compared these results with wind tunnel or theoretical predictions and, where necessary, revised the simulator mathematical models to reflect the flight-test-derived information. For the X-15, the predictions were quite good, and only minor simulator corrections were needed to allow flight maneuvers to be replicated quite accurately on the simulator. The most useful of these methods was an automatic computer analysis of pulse-type maneuvers, originally referred to as Newton-Raphson Parameter Identification.[53, 54] This system evolved into a very useful tool subsequently used as an industry standard for identifying the real-world stability and control derivatives during early testing of new aircraft.[726] The resulting updates are usually also transplanted into the final training simulators to provide the pilots with the best possible duplication of the airplanes' handling qualities. Bookkeeping methods for determining moments of inertia of a new aircraft (i.e., tracking the weight and location of each individual component or structural member during aircraft manufacture) have also been given more attention.
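To give a flavor of what "extracting stability derivatives from flight data" means, the sketch below fits the coefficients of a simple linear pitch model to recorded (here, synthesized) time histories using ordinary least squares. It is a toy equation-error version of the idea; the actual NASA Newton-Raphson (output-error) method iterates on simulated responses, and the model structure, noise level, and coefficient values shown are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" short-period-like pitch dynamics (invented numbers):
#   q_dot = Mq * q + Ma * alpha + Mde * delta_e
Mq, Ma, Mde = -1.2, -4.0, -6.5

dt = 0.02
t = np.arange(0.0, 10.0, dt)
delta_e = np.where((t > 1.0) & (t < 1.5), -2.0, 0.0)   # pulse-type elevator input

# Integrate a crude 2-state model (alpha, q) to synthesize "flight data".
alpha, q = np.zeros_like(t), np.zeros_like(t)
for k in range(len(t) - 1):
    alpha_dot = q[k] - 0.8 * alpha[k]                  # simplified alpha equation
    q_dot = Mq * q[k] + Ma * alpha[k] + Mde * delta_e[k]
    alpha[k + 1] = alpha[k] + alpha_dot * dt
    q[k + 1] = q[k] + q_dot * dt

q_meas = q + rng.normal(0.0, 0.02, size=q.shape)       # add measurement noise

# Equation-error fit: regress measured q_dot on [q, alpha, delta_e].
q_dot_meas = np.gradient(q_meas, dt)
X = np.column_stack([q_meas, alpha, delta_e])
coeffs, *_ = np.linalg.lstsq(X, q_dot_meas, rcond=None)

print("identified Mq, Ma, Mde:", np.round(coeffs, 2))
print("true       Mq, Ma, Mde:", (Mq, Ma, Mde))
```

The recovered coefficients are then compared with the wind tunnel or theoretical predictions, and the simulator's mathematical model is revised where the flight-derived values differ.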

Characteristically, the predicted aerodynamics for a new airplane are often in error for at least a few of the derivatives. These errors are usually a result of either a discrepancy between the wind tunnel model that was tested and the actual airplane that was manufactured, or a result of a misinterpretation or poor interpolation of the wind tunnel data. In some cases, these discrepancies have been significant and have led to major incidents (such as the HL-10 first flight described earlier). Another source of prediction errors for simulation is the prediction of the aeroelastic effects from applied air loads to the structure. These aeroelastic effects are quite complex and difficult to predict for a limber airplane. They usually require flight-test maneuvers to identify or validate the actual handling quality effects of structural deformation. Several small business aircraft have been built, developed, and sold commercially for which calculated predictions of the aerodynamics were the primary data source, with very little if any wind tunnel testing ever accomplished. Accurate simulators for pilot training have been created by conducting a brief flight test of each airplane, performing required test maneuvers, then applying the flight-test parameter estimation methods developed by NASA. With a little bit of attention during the flight-test program, a highly accurate mathematical model of a new airplane can be assembled and used to produce excellent simulators, even without wind tunnel data.[727]

The Evolution of Fluid Dynamics from da Vinci to Navier-Stokes

Fluid flow has fascinated humans since antiquity. The Phoenicians and Greeks built ships that glided over the water, creating bow waves and leaving turbulent wakes behind. Leonardo da Vinci made detailed sketches of the complex flow fields over objects in a flowing stream, showing even the smallest vortexes created in the flow. He observed that the force exerted by the water flow over the bodies was proportional to the cross-sectional area of the bodies. But nobody at that time had a clue about the physical laws that governed such flows. This prompted some substantive experimental fluid dynamics in the 17th and 18th centuries. In the early 1600s, Galileo observed from the falling of bodies through the air that the resistance force (drag) on the body was proportional to the air density. In 1673, the French scientist Edme Mariotte published the first experiments that proved the important fact that the aerodynamic force on an object in a flow varied as the square of the flow velocity, not directly with the velocity itself as believed by da Vinci and Galileo before him.[758] Seventeen years later, Dutch scientist Christiaan Huygens published the same result from his experiments. Clearly, by this time, fluid dynamics was of intense interest, yet the only way to learn about it was by experiment, that is, empiricism.[759]
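In modern notation, the empirical findings of Galileo, Mariotte, and Huygens are folded into the standard expression for aerodynamic force; the factor of one-half and the dimensionless coefficient are much later refinements, shown here only to connect the historical observations to today's convention.

```latex
% Modern form of the aerodynamic force (e.g., drag) on a body of reference area S:
\[
D \;=\; \tfrac{1}{2}\,\rho\,V^{2}\,S\,C_{D}
\]
% rho: fluid density (Galileo's observation), V^2: square of the flow velocity
% (Mariotte and Huygens), S: cross-sectional/reference area (da Vinci).
```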

This situation began to change with the onset of the scientific revolution in the 17th century, spearheaded by the theoretical work of British polymath Isaac Newton. Newton was interested in the flow of fluids, devoting the whole Book II of his Principia to the subject of fluid dynamics. He conjured up a theoretical picture of fluid flow as a stream of particles in straight-line rectilinear motion that, upon impact with an object, instantly changed their motion to follow the surface of the object. This picture of fluid flow proved totally wrong, as Newton himself suspected, and it led to Newton's "sine-squared law" for the force on an object immersed in a flow, which famously misled many early aeronautical pioneers. But if quantitatively incorrect, it was nevertheless the first theoretical attempt to explain why the aerodynamic force varied directly with the square of the flow velocity.[760]

Newton, through his second law, contributed indirectly to the breakthroughs in theoretical fluid dynamics that occurred in the 18th century. Newton's second law states that the force exerted on a moving object is directly proportional to the time rate of change of momentum of that object. (It is more commonly known as "force equals mass times acceleration," but this form is not found in the Principia.) Applying Newton's second law to an infinitesimally small fluid element moving as part of a fluid flow that is actually a continuum material, Leonhard Euler constructed an equation for the motion of the fluid as dictated by Newton's second law. Euler, arguably the greatest scientist and mathematician of the 18th century, modeled a fluid as a continuous collection of infinitesimally small fluid elements moving with the flow, where each fluid element can continually change its size and shape as it moves with the flow, but, at the same time, all the fluid elements taken as a whole constitute an overall picture of the flow as a continuum. That was somewhat in contrast to the individual and distinct particles in Newton's impact theory model mentioned previously. To his infinitesimally small fluid element, Euler applied Newton's second law in a form that used differential calculus, leading to a differential equation relating the variation of velocity and pressure throughout the flow. This equation, labeled the "momentum equation," came to be known simply as Euler's equation. In the 18th century, it constituted a bombshell in launching the field of theoretical fluid dynamics, and it was to become a pivotal equation in CFD in the 20th century, a testament to Euler's insight and its application.

There is a second fundamental principle that underlies all of fluid dynamics, namely that mass is conserved. Euler applied this principle also to his model of an infinitesimally small moving fluid element, constructing another differential equation labeled the "continuity equation." These two equations, the continuity equation and the momentum equation, were published in 1753 and are considered among his finest works. Moreover, these two equations, 200 years later, were to become the physical foundations of the early work in computational fluid dynamics.[761]
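For reference, the two governing equations described above are usually written today in the following compact vector form; this is the standard modern notation for an inviscid compressible flow, not Euler's original 18th-century formulation.

```latex
% Continuity (mass conservation) and Euler momentum equations for an inviscid flow:
\[
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{V}) = 0,
\qquad
\rho \frac{D\mathbf{V}}{Dt} = -\nabla p,
\]
% where D/Dt = \partial/\partial t + \mathbf{V}\cdot\nabla is the substantial
% derivative following a moving fluid element; friction is entirely absent.
```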

After Euler's publication, for the next century all serious attempts to theoretically calculate the details of a fluid flow centered on solving these Euler equations. There were two problems, however. The first was mathematical: Euler's equations are nonlinear partial differential equations. In general, nonlinear partial differential equations are not easy to solve. (Indeed, to this day there exists no general analytical solution to the Euler equations.) When faced with the need to solve a practical problem, such as the airflow over an airplane wing, in most cases an exact solution of the Euler equations is unachievable. Only by simplifying the fluid dynamic problem and allowing certain terms in the equations to be either dropped or modified in such a fashion as to make the equations linear rather than nonlinear can these equations be solved in a useful manner. But a penalty usually must be paid for this simplification, because in the process at least some of the physical or geometrical accuracy of the flow is lost.

The second problem is physical: when applying Newton's second law to his moving fluid element, Euler did not account for the effects of friction in the flow, that is, the force due to the frictional shear stresses rubbing on the surfaces of the fluid element as it moves in the flow. Some fluid dynamic problems are reasonably characterized by ignoring the effects of friction, but the 18th and 19th century theoretical fluid dynamicists were not sure, and they always worried about what role friction plays in a flow. However, a myriad of other problems are dominated by the effect of friction in the flow, and such problems could not even be addressed by applying the Euler equations. This physical problem was exacerbated by controversy as to what happens to the flow moving along a solid surface. We know today that the effect of friction between a fluid flow and a solid surface (such as the surface of an airplane wing) is to cause the flow velocity right at the surface to be zero (relative to the surface). This is called the no-slip condition in modern terminology, and in aerodynamic theory, it represents a "boundary condition" that must be accounted for in conjunction with the solution of the governing flow equations. The no-slip condition is fully understood in modern fluid dynamics, but it was by no means clear to 19th century scientists. The debate over whether there was a finite relative velocity between a solid surface and the flow immediately adjacent to the surface continued into the 2nd decade of the 20th century.[762] In short, the world of theoretical fluid dynamics in the 18th and 19th centuries was hopelessly cast adrift from many desired practical applications.

The second problem, that of properly accounting for the effects of friction in the flow, was dealt with by two mathematicians in the middle of the 19th century, France's Claude-Louis-Marie-Henri Navier and Britain's Sir George Gabriel Stokes. Navier, an instructor at the famed École nationale des ponts et chaussées, changed the pedagogical style of teaching civil engineering from one based mainly on cut-and-try empiricism to a program emphasizing physics and mathematical analysis. In 1822, he gave a paper to the Academy of Sciences that contained the first accurate representation of the effects of friction in the general partial differential momentum equation for fluid flow.[763] Although Navier's equations were in the correct form, his theoretical reasoning was greatly flawed, and it was almost a fluke that he arrived at the correct terms. Moreover, he did not fully understand the physical significance of what he had derived. Later, quite independently from Navier, Stokes, a professor who occupied the Lucasian Chair at Cambridge University (the same chair Newton had occupied a century and a half earlier), took up the derivation of the momentum equation including the effects of friction. He began with the concept of internal shear stress caused by friction in the fluid and derived the governing momentum equation much as it would be derived today in a fluid dynamics class, publishing it in 1845.[764] Working independently, then, Navier and Stokes derived the basic equations that describe fluid flows and contain terms to account for friction. They remain today the fundamental equations that fluid dynamicists employ for analyzing frictional flows.
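The momentum equation Navier and Stokes arrived at is most often quoted in its incompressible, constant-viscosity form, sketched below; the full compressible equations displayed at the National Air and Space Museum carry additional viscous terms, so treat this as the simplest representative member of the family.

```latex
% Incompressible Navier-Stokes equations (constant density and viscosity):
\[
\nabla \cdot \mathbf{V} = 0,
\qquad
\rho \left( \frac{\partial \mathbf{V}}{\partial t}
          + \mathbf{V} \cdot \nabla \mathbf{V} \right)
  = -\nabla p + \mu \nabla^{2} \mathbf{V}.
\]
% The \mu \nabla^2 V term is the frictional (viscous) contribution missing from
% Euler's equation; setting \mu = 0 recovers the inviscid form given earlier.
```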

Finally, in addition to the continuity and momentum equations, a third fundamental physical principle is required for any flow that involves high speeds and in which the density of the flow changes from one point to another. This is the principle of conservation of energy, which holds that energy cannot be created or destroyed; it can only change its form. The origin of this principle in the form of the first law of thermodynamics is found in the history of the development of thermodynamics in the late 19th century. When applied to a moving fluid element in Euler's model, and including frictional dissipation and heat transfer by thermal conduction, this principle leads to the energy equation for fluid flow.
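One common textbook statement of that energy equation, written for the internal energy e of a moving fluid element, is sketched below; groupings and sign conventions vary from author to author, so this is representative rather than canonical.

```latex
% Energy equation for a moving fluid element (internal-energy form):
\[
\rho \frac{De}{Dt}
  = -\,p\,(\nabla \cdot \mathbf{V})
    + \nabla \cdot (k\,\nabla T)
    + \Phi,
\]
% -p (div V): work of compression, k grad T: heat conduction into the element,
% \Phi: viscous dissipation of kinetic energy into heat (friction).
```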

So there it is, the origin of the three Navier-Stokes equations exhibited so prominently at the National Air and Space Museum. They are horribly nonlinear partial differential equations. They are also fully coupled together, because the variables of pressure, density, and velocity that appear in these equations are all dependent on each other. Obtaining a general analytical solution of the Navier-Stokes equations is much more daunting than the problem of obtaining a general analytical solution of the Euler equations, for they are far more complex. There is today no general analytical solution of the Navier-Stokes equations (as is likewise true in the case of the Euler equations). Yet almost all of modern computational fluid dynamics is based on the Navier-Stokes equations, and all of the modern solutions of the Navier-Stokes equations are based on computational fluid dynamics.

Origins of NASTRAN

In the early 1960s, structures researchers from the various NASA Centers were gathering annually at Headquarters in Washington, DC, to exchange ideas and coordinate their efforts. They began to realize that many organizations—NASA Centers and industry—were independently developing computer programs to solve similar types of structural problems. There were several drawbacks to this situation. Effort was being duplicated needlessly. There was no compatibility of input and output formats, or consistency of naming conventions. The programs were only as versatile as the developers cared to make them; the inherent versatility of the finite element method was not being exploited. More benefit might be achieved by pooling resources and developing a truly general-purpose program. Thomas G. Butler of the Goddard Space Flight Center (GSFC), who led the team that developed NASTRAN between 1965 and 1970, recalled in 1982:

NASA's Office of Advanced Research and Technology (OART) under Dr. Raymond Bisplinghoff sponsored a considerable amount of research in the area of flight structures through its operating centers. Representatives from the centers who managed research in structures convened annually to exchange ideas. I was one of the representatives from Goddard Space Flight Center at the meeting in January 1964. . . . Center after center described research programs to improve analysis of structures. Shells of different kinds were logical for NASA to analyze at the time because rockets are shell-like. Each research concentrated on a different aspect of shells. Some were closed with discontinuous boundaries. Other shells had cutouts. Others were noncircular. Others were partial spans of less than 360°. This all seemed quite worthwhile if the products of the research resulted in exact closed-form solutions. However, all of them were geared toward making some simplifying assumption that made it possible to write a computer program to give numerical solutions for their behavior. . . .

Each of these computer programs required data organization different from every other. . . . Each was intended for exploring localized conditions rather than complete shell-like structures, such as a whole rocket. My reaction to these programs was that. . . technology was currently available to give engineering solutions to not just localized shells but to whole, highly varied structures. The method was finite elements.[806]
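The finite element idea Butler is pointing to—build a whole structure from many simple, standardized pieces and solve the assembled system—can be shown with a deliberately tiny example. The sketch below assembles 1-D bar (rod) elements into a global stiffness matrix and solves a static load case; the element type, dimensions, and loads are invented for illustration and have nothing to do with NASTRAN's actual element library or solution sequences.

```python
import numpy as np

# Toy finite element model: a straight bar of 4 axial (rod) elements,
# fixed at the left end, with a point load pulling on the right end.
n_elems = 4
n_nodes = n_elems + 1
E, A, L = 70e9, 1e-4, 1.0          # Young's modulus (Pa), area (m^2), total length (m)
le = L / n_elems                   # element length

# Element stiffness matrix for a 2-node axial rod: (EA/le) * [[1, -1], [-1, 1]]
k_e = (E * A / le) * np.array([[1.0, -1.0],
                               [-1.0, 1.0]])

# Assemble the global stiffness matrix element by element.
K = np.zeros((n_nodes, n_nodes))
for e in range(n_elems):
    dofs = [e, e + 1]
    K[np.ix_(dofs, dofs)] += k_e

# Load vector: 1 kN axial pull at the free (right) end.
F = np.zeros(n_nodes)
F[-1] = 1000.0

# Apply the boundary condition u[0] = 0 by solving only for the free DOFs.
free = np.arange(1, n_nodes)
u = np.zeros(n_nodes)
u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])

print("nodal displacements (m):", u)
print("exact tip deflection FL/EA =", 1000.0 * L / (E * A))
```

NASTRAN performs the same kind of assembly and solution, but generalized to thousands of degrees of freedom and many element types, so the same structural model can feed static, dynamic, and thermal analyses.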

Doug Michel led the meetings at NASA Headquarters. Butler, Harry Runyan of Langley Research Center, and probably others proposed that NASA develop its own finite element program, if a suitable one could not be found already existing. "The group thought this was a good idea, and Doug followed up with forming the Ad Hoc Group for Structural Analysis, which was headed by Tom Butler of Goddard," recalled C. Thomas Modlin, Jr., who was one of the representatives from what is now Johnson Space Center.[807] The committee included representatives from all of the NASA Centers that had any significant activity in structural analysis methods at the time, plus an adjunct member from the U. S. Air Force at Wright-Patterson Air Force Base, as listed in the accompanying table.[808]

CENTER                           | REPRESENTATIVE(S)
Ames                             | Richard M. Beam and Perry P. Polentz
Flight Research (now Dryden)     | Richard J. Rosecrans
Goddard                          | Thomas G. Butler (Chair) and Peter A. Smidinger
Jet Propulsion Laboratory        | Marshall E. Alper and Robert M. Bamford
Langley                          | Herbert J. Cunningham
Lewis                            | William C. Scott and James D. McAleese
Manned Spacecraft (now Johnson)  | C. Thomas Modlin, Jr., and William W. Renegar
Marshall                         | Robert L. McComas
Wright-Patterson AFB             | James Johnson (adjunct member)

After visiting several aerospace companies, all of whom were "extremely cooperative and candid," and reviewing the existing methods, the committee recommended to Headquarters that NASA sponsor the development of its own finite element program "to update the analytical capability of the whole aerospace community. The program should incorporate the best of the state of the arts, which were currently splintered."[809]

The effort was launched, under the management of Butler at the Goddard Space Flight Center, to define and implement the General Purpose Structural Analysis program. Requirements were collected from the information brought from the various Centers, from the industry visits, and from a conference on "Matrix Methods in Structural Mechanics" held at Wright-Patterson Air Force Base.[810] Key requirements included the following:[811]

• General-purpose. The system must allow different analysis types—static, transient, thermal, etc.—to be performed on the same structural model without alteration.

• Problem size. At least 2,000 degrees of freedom for static and dynamic analyses alike. (Prior state of the art was approximately 100 d.o.f. for dynamic mode analysis and 100 to 600 d.o.f. for static analysis.)

• Modular. Parts of the program could be changed without disrupting other parts.

• Open-ended. New types of elements, new analysis modules, and new formats could be added.

• Maintainable and capable of being updated.

• Machine-independent. Capable of operating on the IBM 360, CDC 6000 Series, and UNIVAC 1108 (the only three commercially available computers capable of performing such analysis at the time), as well as future generations of computers.

After an initial design phase, the implementation contract was awarded to a team led by Computer Sciences Corporation (CSC), with MacNeal Schwendler Corporation and Martin Baltimore as subcontractors. Coding began in July 1966. Dr. Paul R. Peabody was the principal architect of the overall system design. Dr. Richard H. MacNeal (MacNeal Schwendler) designed the solution structure, taking each type of solution from physics, to math, to programming, assisted by David Harting. Keith Redner was the implementation team lead and head programmer, assisted by Steven D. Wall and Richard S. Pyle. Frank J. Douglas coded the element routines and wrote the programmer’s manual. Caleb W. McCormick was the author of the user’s manual and supervised NASTRAN installation and training. Other members of the development team included Stanley Kaufman (Martin Baltimore), Thomas L. Clark, David B. Hall, Carl Hennrich, and Howard Dielmann. The project staff at Goddard included Richard D. McConnell, William R. Case, James B. Mason, William L. Cook, and Edward F. Puccinelli.[812]

NASTRAN embodied many technically advanced features that are beyond the scope of this paper (and, admittedly, beyond the scope of this author’s understanding), which provided the inherent capability to handle large problems accurately and efficiently. It was referred to as a "system” rather than just a program by its developers, and for good reason. It had its own internal control language, called Direct Matrix Abstraction Programming (DMAP), which gave flexibility in the use of its different modules. There were 151,000 FORTRAN statements, equating to more than 1 million machine language statements. Twelve prepackaged "rigid formats” permitted multiple types of analysis on the same structural model, including statics, steady-state frequency response, transient response, etc.[813]
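To give a concrete sense of what a static "rigid format” automates, the following is a minimal sketch, written here in Python with NumPy rather than as NASTRAN input, of the underlying matrix statics: element stiffness matrices are assembled into a global stiffness matrix, a support is applied, and the system is solved for nodal displacements. The rod-element model, dimensions, material values, and load are invented for illustration only and are not taken from any NASTRAN example.

    # Illustrative only: the matrix statics a static solution sequence automates,
    # reduced to three collinear axial rod elements (1 d.o.f. per node).
    import numpy as np

    E, A = 10.0e6, 2.0                            # assumed modulus (psi) and area (in^2)
    node_x = np.array([0.0, 10.0, 20.0, 30.0])    # node locations (in)
    elements = [(0, 1), (1, 2), (2, 3)]           # element connectivity

    n_dof = len(node_x)
    K = np.zeros((n_dof, n_dof))                  # global stiffness matrix
    for i, j in elements:
        length = node_x[j] - node_x[i]
        k = (E * A / length) * np.array([[1.0, -1.0],
                                         [-1.0, 1.0]])
        for a, p in enumerate((i, j)):            # assemble element stiffness terms
            for b, q in enumerate((i, j)):
                K[p, q] += k[a, b]

    f = np.zeros(n_dof)
    f[3] = 1000.0                                 # assumed 1,000-lb load at the free end

    free = [1, 2, 3]                              # node 0 is fixed
    u = np.zeros(n_dof)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
    print("nodal displacements (in):", u)

A production system must, of course, handle many element types, thousands of degrees of freedom, multiple subcases, and restart and diagnostic bookkeeping; the point of the sketch is only the structure of the computation.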

The initial development of NASTRAN was not without setbacks and delays, and at introduction it did not have all of the intended capabilities. But the team stayed focused on the essentials, choosing which features to defer until later and which characteristics absolutely had to be maintained to keep NASTRAN true to its intent.[814] According to Butler: "One thing that must be mentioned about the project, that is remarkable, pertains to the spirit that infused it everywhere. Every man thought that he was the key man on the whole project. As it turned out, every man was key because for the whole to mesh no effort was inconsequential. The marvelous thing was that every man felt it inside. There was a feeling of destiny on the project.”[815]

That the developers adhered to the original principles to make NASTRAN modular, open-ended, and general-purpose—with common formats and interfaces among its different routines—proved to be more important in the long term than how many elements and analysis capabilities were available at introduction. Preserving the intended architecture ensured that the details could be filled in later.

Computational Methods, Industrial Transfer, and the Way Ahead

Having surveyed the development of computational structural analysis within NASA, the contributions of various Centers, and key flight projects that tested and validated structural design and analysis methods in their ultimate application, we turn to the current state of affairs as of 2010 and future challenges.

Overall, even a cursory historical examination clearly indicates that the last four decades have witnessed revolutionary improvements in all of the following areas:

• Analysis capability:
    • Complexity of structures that can be analyzed.
    • Number of nodes.
    • Types of elements.
    • Complexity of processes simulated:
        • Nonlinearity:
            • Buckling.
            • Other geometric nonlinearity.
            • Material nonlinearity.
            • Time-dependent properties.
            • Yield or ultimate failure of some members.
        • Statistical/nondeterministic processes.
        • Thermal effects.
        • Control system interactions.
• Usability:
    • Execution time:
        • Hardware improvements.
        • Efficiency of algorithms.
        • Adequate but not excessive model complexity.
    • Robustness, diagnostics, and restart capability.
    • Computing environment.
    • Pre- and post-processing.

Before NASTRAN, capabilities generally available (i.e., not counting proprietary programs at the large aerospace companies) were limited to a few hundred nodes. In 1970, NASTRAN made it possible to analyze models with over 2,000 nodes. Currently, models with hundreds of thousands of nodes are routinely analyzed. The computing environment has changed just as dramatically, or more so: the computer used to be a shared resource among many users—sometimes an entire company, or it was located at a data center used by many companies—with punch cards for input and reams of paper for output. Now, there is a PC (or two) at every engineer’s desk. NASTRAN can run on a PC, although some users prefer to run it on UNIX machines or other platforms.

Technology has thus come full circle: NASA now makes extensive use of commercial structural analysis codes that have their roots in NASA technology. Commercial versions of NASTRAN have essentially superseded NASA’s COSMIC NASTRAN. That is appropriate, in this author’s opinion, because it is not NASA’s role to provide commercially competitive performance, user interfaces, etc. The existence and widespread use of these commercial codes indicate successful technology transition.

At the time of this writing, basic capability is relatively mature. Advances are still being made, but it is now possible to analyze the vast majority of macroscopic structural problems that are of practical interest in aeronautics and many other industries.

Improvements in the "usability” category are of greater interest to most engineers. Execution speed has improved by orders of magnitude, but this has been partially offset by corresponding orders-of-magnitude increases in model size. Engineers build models with hundreds of thousands of nodes, because they can.

Pre- and post-processing challenges remain. Building the model and interpreting the results typically take longer than actually running the analysis. It is by no means a trivial task to build a finite element model of a complex structure such as a complete airframe, or a major portion thereof. Some commercial software can generate finite element models automatically from CAD geometry. However, many practitioners in the aircraft industry prefer to have more involvement in the modeling process, because of the complexity of the analysis and the safety-critical nature of the task. The fundamental challenge is to make the modeling job easier, while providing the user with control when required and the ability to thoroughly check the resulting model.[972]
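As a hedged illustration of the bookkeeping a pre-processor performs, the short Python sketch below generates the nodes and quadrilateral-element connectivity for a simple flat panel. The panel dimensions and mesh density are arbitrary assumptions; real airframe models involve curved geometry, many element types, and careful judgment about mesh refinement, which is precisely why the task is not trivial.

    # Minimal pre-processing sketch: node grid and quad connectivity for a
    # flat rectangular panel (dimensions and mesh density are assumed).
    import numpy as np

    width, height = 2.0, 1.0          # panel size (m)
    nx, ny = 8, 4                     # elements along each edge

    xs = np.linspace(0.0, width, nx + 1)
    ys = np.linspace(0.0, height, ny + 1)
    nodes = np.array([(x, y) for y in ys for x in xs])   # row-by-row node numbering

    quads = []                        # each element lists its four corner node indices
    for j in range(ny):
        for i in range(nx):
            n0 = j * (nx + 1) + i
            quads.append((n0, n0 + 1, n0 + nx + 2, n0 + nx + 1))

    print(f"{len(nodes)} nodes, {len(quads)} quad elements")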

In 1982, Thomas Butler wrote, "I would compare the state of graphics pre- and post-processors today with the state that finite elements were in before NASTRAN came on the scene in 1964. Many good features exist. There is much to be desired in each available package.”[973] Industry practitioners interviewed today have expressed similar sentiments. There is no single pre- or post-processing product that meets every need. Some users deliberately switch between different pre- and post-processing programs, utilizing the strengths of each for different phases of the modeling task (such as creating components, manipulating them, and visualizing and interrogating the finished model). A reasonable number of distinct pre- and post-processing systems maintain commercial competition, which many users consider to be important.[974]

As basic analysis capability has become well established, researchers step back and look at the bigger picture. Integration, optimization, and uncertainty modeling are common themes at many of the NASA Centers. This includes integration of design and analysis, of analysis and testing, and of structural analysis with analysis in other disciplines. NASA Glenn Research Center is heavily involved in nondeterministic analysis methods, life prediction, modeling of failure mechanisms, and modeling of composite materials, including high-temperature material systems for propulsion applications. Research at Langley spans many fields, including multidisciplinary analysis and optimization of aircraft and spacecraft, analysis and test correlation, uncertainty modeling and "fuzzy structures,” and failure analysis.

In many projects, finite element analysis is being applied at the microscale to gain a better understanding of material behaviors. The ability to perform such analysis is a noteworthy benefit coming from advances in structural analysis methods at the macroscopic level. Very real benefits to industry could result. The weight savings predicted from composite materials have been slow in coming, partly because of limitations on allowable stresses. In the civil aviation industry especially, such limitations are not necessarily based on the inherent characteristics of the material but on the limited knowledge of those characteristics. Analysis that gives insight into material behaviors near failure, documented and backed up by test results, may help to achieve the full potential of composite materials in airframe structures.

Applications of true optimization—such as rigorously finding the mathematical minimum of a "cost function”—are still relatively limited in the aircraft industry. The necessary computational tools exist. However, the combination of practical difficulties in automating complex analyses and a certain amount of cultural resistance has somewhat limited the application of true optimization in the aircraft industry up to the present time. There is untapped potential in this area. The path to reaching it is not necessarily in the development of better computer programs, but rather, in the development and demonstration of processes for the effective and practical use of capabilities that exist already. The DAMVIBS program (discussed previously in the section on the NASA Langley Research Center) might provide a model for how this kind of technology transfer can happen. In that program, industry teams essentially demonstrated to themselves that existing finite element programs could be useful in predicting and improving the vibration characteristics of helicopters—when coupled with some necessary improvements in modeling technique. All of the participants subsequently embraced the use of such methods in the design processes of their respective organizations. A comparable program could, perhaps, be envisioned in the field of structural and/or multidisciplinary optimization in aircraft design.[975]
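For readers unfamiliar with the terminology, the following is a minimal sketch of optimization in the sense used above: a cost function (here, the weight of a single axially loaded bar) is minimized subject to a constraint (an allowable stress). The numbers, and the use of Python’s SciPy optimizer, are illustrative assumptions only and do not represent any NASA or industry sizing process.

    # Toy structural optimization: minimize bar weight subject to a stress limit.
    # All values are assumed for illustration.
    from scipy.optimize import minimize

    rho, length = 0.10, 30.0              # density (lb/in^3) and bar length (in)
    P, sigma_allow = 5000.0, 25000.0      # axial load (lb) and allowable stress (psi)

    def weight(x):                        # cost function: bar weight
        return rho * x[0] * length

    def stress_margin(x):                 # constraint: sigma_allow - P/A >= 0
        return sigma_allow - P / x[0]

    result = minimize(weight, x0=[1.0], method="SLSQP",
                      bounds=[(1e-3, None)],
                      constraints=[{"type": "ineq", "fun": stress_margin}])
    print("optimum area (in^2):", result.x[0])   # analytically, P / sigma_allow = 0.2

The difficulty in practice is not the optimizer itself but reliably and automatically wrapping a full, multidisciplinary analysis inside a loop of this kind.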

Considering structural analysis as a stand-alone discipline, however, it can be stated without question that computational methods have been adopted throughout the aircraft industry. Specific processes vary between companies. Some companies perform more upfront optimization than others; some still test exhaustively, while others test minimally. But the aircraft industry as a whole has embraced computational structural analysis and benefited greatly from it.

The benefits of computational structural analysis may not be adequately captured in one concise list, but they include the following:

• Improved productivity of analysis.

• Ability to analyze a more complete range of load cases.

• Ability to analyze a structure more thoroughly than was previously practical.

• Ability to correct and update analyses as designs and requirements mature.

• Improved quality and consistency of analysis.

• Improved performance of the end product. Designs can be improved through more cycles of design/analysis in the early stages of a project, and earlier identification of structural issues, than previously practical.

• Improved capabilities in related disciplines: thermal modeling and acoustic modeling, for example. Some aircraft companies utilize finite element models in the design stage of an aircraft to develop effective noise reduction strategies.

• Ability to analyze structures that could not be practically analyzed before. For example, composite and metallic airframes are different. Metal structures typically have more discrete load paths. Composite structures, such as honeycomb-core panels, have less distinct load paths and are less amenable to analysis by hand using classical methods. Therefore, finite element analysis enables airplanes to be built in ways that would not be possible (or, at least, not verifiable) otherwise.

• Reduced cost and increased utility of testing. Analysis does not replace all testing, but it can greatly enhance the amount of knowledge gained from a test. For example, modeling performed ahead of a test series can help identify the appropriate locations for strain gages, accelerometers, and other instrumentation (see the sketch after this list) and aid in the interpretation of the resulting test data. The most difficult or costly types of testing can certainly be reduced. In a greatly simplified sense, the old paradigm is that testing was the proof of the structure; now, testing validates the model, and the model proves the structure. Practically speaking, most aircraft companies practice something in between these two extremes.
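As one hedged illustration of such pre-test planning, the sketch below uses the mode shapes of a simple, invented three-mass model to suggest where an accelerometer would capture each mode most strongly. A real pre-test analysis would use the full finite element model and more sophisticated sensor-placement criteria; the stiffness and mass values here are assumptions for illustration only.

    # Toy pre-test planning: pick an accelerometer location per mode from the
    # mode shapes of an assumed three-mass, fixed-free spring chain.
    import numpy as np
    from scipy.linalg import eigh

    k, m = 1.0e5, 10.0                        # assumed stiffness (N/m) and mass (kg)
    K = k * np.array([[ 2.0, -1.0,  0.0],
                      [-1.0,  2.0, -1.0],
                      [ 0.0, -1.0,  1.0]])    # stiffness matrix of the chain
    M = m * np.eye(3)                         # lumped mass matrix

    eigvals, modes = eigh(K, M)               # generalized eigenproblem K*phi = w^2*M*phi
    freqs_hz = np.sqrt(eigvals) / (2.0 * np.pi)

    for i, f in enumerate(freqs_hz):
        dof = int(np.argmax(np.abs(modes[:, i])))   # d.o.f. with largest modal displacement
        print(f"mode {i + 1}: {f:.1f} Hz, instrument d.o.f. {dof}")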

NASA’s contributions have included not only the development of the tools but also the development and dissemination of techniques to apply the tools to practical problems and the provision of opportunities—through unique test facilities and, ultimately, flight research projects—to prove, validate, and improve the tools.

In other industries also, there is now widespread use of computerized structural analysis for almost every conceivable kind of part that must operate under conditions of high mechanical and/or thermal stress. NASTRAN is used to analyze buildings, bridges, towers, ships, wind tunnels and other specialized test facilities, nuclear power plants, steam turbines, wind turbines, chemical processing plants, microelectronics, robotic systems, tools, sports equipment, cars, trucks, buses, trains, engines, transmissions, and tires. It is used for geophysical and seismic analysis, and for medical applications.

In conclusion, finite element analysis would have developed with or without NASA’s involvement. However, by creating NASTRAN, NASA provided a centerpiece: a point of reference for all other development and an open-ended framework into which new capabilities could be inserted. This framework gradually collected the best or nearly best methods in every area. If NASTRAN had not been developed, different advances would have occurred only within proprietary codes used internally by different industrial companies or marketed by different software companies. There would have been little hope of consolidating all the important capabilities into one code or of making such capabilities available to the general user. NASTRAN brought high-powered finite element analysis within reach of many users much sooner than would have otherwise been the case. At the same time, the job of predicting every aspect of structural performance was by no means finished with the initial release of NASTRAN—nor is it finished yet. NASA has been and continues to be involved in the development of many new capabilities—developing programs and new ways to apply existing programs—and making the resulting tools and methods available to users in the aerospace industry and in many other sectors of the U. S. economy.

Appendix A: