
Strike Technology Testbed

In the summer of 1991, a flight-test effort oriented to close air support and battlefield air interdiction began. The focus was to demonstrate technologies to locate and destroy ground targets day or night, good weather or bad, while maneuvering at low altitudes. The AFTI/F-16 was modified with two forward-looking infrared sensors mounted in turrets on the upper fuselage ahead of the canopy. The pilot was equipped with a helmet-mounted sight that was integrated with the infrared sensors. As he moved his head, they followed his line of sight and transmitted their images to eyepieces mounted in his helmet. The nose-mounted canards used in earlier AFTI/F-16 testing were removed. Testing emphasized giving pilots the capability to fly their aircraft and attack targets in darkness or bad weather. To assist in this task, a digital terrain map was stored in the aircraft computer. Advanced terrain following was also evaluated. This used the AFTI/F-16's radar to scan terrain ahead of the aircraft and automatically fly over or around obstacles. The pilot could select a minimum altitude for his mission. The system would automatically detect when the aircraft was about to descend below this altitude and initiate a 5 g pullup maneuver. The advanced terrain following system was connected to the Automated Maneuvering Attack System, enabling the pilot to deliver weapons from altitudes as low as 500 feet in a 5 g turn. An automatic Pilot Activated Recovery System was integrated with the flight control system. If the pilot became disoriented at night or in bad weather, he could activate a switch on his side controller. This caused the flight control computer to automatically recover the aircraft, putting it into a wings-level climb. Many of these technologies have subsequently transitioned into upgrades to existing fighter/attack aircraft.[1187]
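The floor-protection logic described above lends itself to a compact illustration. The following is a minimal sketch of that idea only; all names, thresholds, and structure are hypothetical and are not the actual AFTI/F-16 flight software:

```python
# Illustrative sketch of a minimum-altitude guard of the kind described for
# the AFTI/F-16 advanced terrain-following system. Hypothetical names only.

from dataclasses import dataclass

G_LIMIT = 5.0  # automatic pullup load factor described in the text, in g


@dataclass
class TerrainState:
    altitude_agl_ft: float   # height above terrain from the radar scan
    sink_rate_fps: float     # positive when descending
    pilot_min_alt_ft: float  # pilot-selected minimum altitude for the mission


def predicted_floor_violation(state: TerrainState, lookahead_s: float = 2.0) -> bool:
    """Project the current sink rate ahead and ask whether the aircraft
    would pass below the pilot-selected minimum altitude."""
    projected_alt = state.altitude_agl_ft - state.sink_rate_fps * lookahead_s
    return projected_alt < state.pilot_min_alt_ft


def terrain_following_command(state: TerrainState) -> float:
    """Commanded load factor: fly on at 1 g, or initiate the automatic
    pullup when a floor violation is predicted."""
    return G_LIMIT if predicted_floor_violation(state) else 1.0


if __name__ == "__main__":
    state = TerrainState(altitude_agl_ft=600.0, sink_rate_fps=120.0,
                         pilot_min_alt_ft=500.0)
    print(terrain_following_command(state))  # -> 5.0 (pullup commanded)
```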

The final incarnation of this unique aircraft would be as the AFTI/F-16 power-by-wire flight technology demonstrator.

Performance Seeking Control

The Performance Seeking Control (PSC) effort followed the Adaptive Engine Control System project. Previous engine control modes used on the HIDEC aircraft relied on stored schedules of optimum engine pressure ratios based on an average engine on a standard day. Using digital flight control, inlet control, and engine control systems, PSC applied highly advanced computational techniques and control laws to identify the actual condition of the engine components and optimize the overall propulsion system for best efficiency under the actual engine and flight conditions the aircraft was encountering, ensuring the highest engine and maneuvering performance in all flight environments. PSC testing with the HIDEC aircraft began in 1990. Results of flight-testing with PSC included increased fuel efficiency, improved engine thrust during accelerations and climbs, and increased engine service life achieved by reductions in turbine inlet temperature. Flight-testing demonstrated turbine inlet temperature reductions of more than 160 °F. Such large operating temperature reductions can significantly extend the life of jet engines. Additionally, improvements in thrust of between 9 percent and 15 percent were observed in various flight conditions, including acceleration and climb.[1265] PSC also included the development of methodologies within the digital engine control system designed to detect engine wear and impending failure of certain engine components. Such information, coupled with normal preventative maintenance, could assist in implementing future fail-safe propulsion systems.[1266] The flight demonstration and evaluation of the PSC system at NASA Dryden directly contributed to the rapid transition of the technology into operational use. For example, PSC technology has been applied to the F100 engine used in the F-15 Eagle, the F119 engine in the F-22 Raptor, and the F135 engine for the F-35 Lightning II.
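The heart of PSC was trading a fixed schedule for an onboard search over the engine's actual operating condition. Here is a toy sketch of that idea only, built on an entirely hypothetical steady-state engine model; the real system used identified component models and formal control laws, not a grid search:

```python
# Toy sketch of the performance-seeking idea: search candidate trim settings
# of a hypothetical engine model for the one that meets the required thrust
# at the lowest fuel flow. Names, numbers, and structure are illustrative.

def engine_model(trim: float) -> tuple[float, float]:
    """Hypothetical steady-state model: returns (thrust_lbf, fuel_flow_pph)
    for a trim setting in [0, 1]."""
    thrust = 12000.0 + 8000.0 * trim
    fuel_flow = 4000.0 + 6000.0 * trim ** 1.8  # efficiency falls off at high trim
    return thrust, fuel_flow


def seek_best_trim(required_thrust: float, steps: int = 200) -> float:
    """Pick the trim giving the lowest fuel flow while still meeting thrust."""
    candidates = (i / steps for i in range(steps + 1))
    feasible = [t for t in candidates if engine_model(t)[0] >= required_thrust]
    return min(feasible, key=lambda t: engine_model(t)[1])


print(seek_best_trim(16000.0))  # lowest-fuel-flow trim meeting 16,000 lbf
```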

Numerical Propulsion System Simulation

NASA and its contractor colleagues soon found another use for computers to help improve engine performance. In fact, looking back at the history of NASA's involvement with improving propulsion technology, a trilogy of major categories of advances can be suggested, based on the development of the computer and the evolving role that electronic thinkers have played in our culture.

Part one of this story includes all the improvements NASA and its industry partners made to jet engines before the computer came along. Having arrived at a basic operational design for a turbojet engine—and its relations, the turboprop and turbofan—engineers sought to improve fuel efficiency, reduce noise, decrease wear, and otherwise reduce the cost of maintaining the engines. They did this through such efforts as the Quiet Clean Short Haul Experimental Engine and Aircraft Energy Efficiency programs, detailed earlier in this case study. By tinkering with the individual components and testing the engines on the ground and in the air for thousands of hours, incremental advances were made.[1338]

Part two of the story introduces the capabilities made available to engineers as computers became powerful enough and small enough to be incorporated into the engine design. Instead of requiring the pilot to manually make occasional adjustments to the engine operation in flight depending on what the instruments read, a small digital computer built into the engine sensed thousands of measurements per minute and caused an equal number of adjustments to be made to keep the powerplant performing at peak efficiency. With the Digital Electronic Engine Control, engines designed years before behaved as though they were fresh off the drawing boards, thanks to their increased capabilities.[1339]
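The sense-and-adjust cycle described here is ordinary closed-loop control executed digitally at high rate. A minimal sketch, assuming a hypothetical engine-pressure-ratio (EPR) loop; the names and gains are illustrative only, not actual Digital Electronic Engine Control code:

```python
# Minimal sketch of closed-loop engine regulation in the spirit of a digital
# engine control: repeatedly sense a parameter and nudge an actuator toward
# its schedule. Hypothetical names and gains, for illustration only.

def regulate(sensed_epr: float, target_epr: float, fuel_valve: float,
             gain: float = 0.05) -> float:
    """One control tick: adjust the fuel valve in proportion to the error,
    clamped to the valve's physical range [0, 1]."""
    error = target_epr - sensed_epr
    return min(1.0, max(0.0, fuel_valve + gain * error))


# Each sensed measurement produces a matching small adjustment, many times
# per second, holding the powerplant near its peak-efficiency schedule.
valve = 0.5
for sensed in (2.10, 2.14, 2.18, 2.21):
    valve = regulate(sensed, target_epr=2.25, fuel_valve=valve)
    print(round(valve, 4))
```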

Having taken engine designs about as far as was thought possible, engineers still faced demands for even more fuel-efficient, quieter, and more capable engines. Unfortunately, developing a new engine from scratch, building it, and testing it in flight can cost millions of dollars and take years to accomplish. What the aerospace industry needed was a way to take advantage of the powerful computers available at the dawn of the 21st century to make the engine development process less expensive and timelier. The result was part three of NASA's overarching story of engine development: the Numerical Propulsion System Simulation (NPSS) program.[1340]

Working with the aerospace industry and academia, NASA's Glenn Research Center led the collaborative effort to create the NPSS program, which was funded and operated as part of the High Performance Computing and Communications program. The idea was to use modern simulation techniques to create a virtual engine and test stand within a virtual wind tunnel, where new designs could be tried out, adjustments made, and the refinements exercised again without costly and time-consuming tests in the "real" world. As stated in a 1999 industry review of the program, the NPSS was built around three main elements: "Engineering models that enable multi-disciplinary analysis of large subsystems and systems at various levels of detail, a simulation environment that maximizes designer productivity and a cost-effective, high-performance computing platform."[1341]

In explaining the potential value of the program to industry during a 2006 American Society of Mechanical Engineers conference in Spain, a NASA briefer from Glenn suggested that if a standard turbojet development program for the military—such as the F100—took 10 years, $1.5 billion, construction of 14 ground-test engines and 9 flight-test engines, and more than 11,000 hours of engine tests, the NPSS program could realize a:

• 50-percent reduction in tooling cost.

• 33-percent reduction in the average development engine cost.

• 30-percent reduction in the cost of fabricating, assembling, and testing rig hardware.

• 36-percent reduction in the number of development engines.

• 60-percent reduction in total hardware cost.[1342]

A key—and groundbreaking—feature of NPSS was its ability to integrate simulated tests of different engine components and features and run them as a whole, fully modeling all aspects of a turbojet's operation. The program did this through the use of the Common Object Request Broker Architecture (CORBA), which essentially provided a shared language among the objects and disciplines (mechanics, thermodynamics, structures, gas flow, etc.) being tested so the resulting data could be analyzed in an "apples to apples" manner. Through the creation of an NPSS developer's kit, researchers had tools to customize the software for individual needs, share secure data, and distribute the simulations for use on multiple computer operating systems. The kit also provided for the use of CORBA to "zoom" in on the data to see specific information with higher fidelity.[1343]
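The architectural idea, a single shared contract that every discipline's component implements, with "zooming" to higher fidelity, can be sketched briefly. This Python mock-up mirrors the concept only; NPSS itself exposed CORBA object interfaces, and all names below are hypothetical:

```python
# Sketch of the "common language" idea behind NPSS component integration:
# every discipline implements one shared interface, so a whole-engine
# simulation can chain components and swap fidelity levels.

from abc import ABC, abstractmethod


class EngineComponent(ABC):
    """Shared contract: every component maps an inlet state to an exit state."""

    @abstractmethod
    def run(self, station_in: dict, fidelity: str = "0-D") -> dict:
        ...


class Compressor(EngineComponent):
    def run(self, station_in, fidelity="0-D"):
        if fidelity == "0-D":
            # zero-dimensional cycle relation: simple pressure-ratio bookkeeping
            return {**station_in, "p": station_in["p"] * 8.0}
        # "zooming" would dispatch to a higher-fidelity code here
        raise NotImplementedError(f"no {fidelity} model plugged in yet")


class Burner(EngineComponent):
    def run(self, station_in, fidelity="0-D"):
        return {**station_in, "T": station_in["T"] + 1200.0}


# Because all components speak the same interface, the engine is just a chain:
state = {"p": 14.7, "T": 519.0}  # psia and degrees Rankine at the inlet
for component in (Compressor(), Burner()):
    state = component.run(state)
print(state)  # apples-to-apples data flowing through every discipline
```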

Begun in 1997, the NPSS effort drew together propulsion experts and software engineers from GE, Pratt & Whitney, Boeing, Honeywell, Rolls-Royce, Williams International, Teledyne Ryan Aeronautical, Arnold Engineering Development Center, Wright-Patterson AFB, and NASA's Glenn Research Center. By the end of the 2000 fiscal year, the NPSS team had released Version 1.0.0 on schedule. According to a summary of the program produced that year:

(The new software) can be used as an aero-thermodynamic zero-dimensional cycle simulation tool. The capabilities include text-based input syntax, a sophisticated solver, steady-state and transient operation, report generation, a built-in object-oriented programming language for user-definable components and functions, support for distributed running of external codes via CORBA, test data reduction, interactive debug capability and customer deck generation.[1344]

Additional capabilities were added in 2001, including the ability to support development of space transportation technologies. At the same time, the initial NPSS software quickly found applications in aviation safety, ground-based power, and alternative energy devices, such as fuel cells. Moreover, project officials at the time suggested that with further development of the software, other applications could be found in the areas of nuclear power, water treatment, biomedicine, chemical processing, and marine propulsion. NPSS proved to be so capable and promising of future applications that NASA designated the program a cowinner of the NASA Software of the Year Award for 2001.[1345]

Work to improve the capabilities and expand the applications of the software continued, and in 2008 NASA transferred NPSS to a consortium of industry partners; through a Space Act Agreement, it is currently offered commercially by Wolverine Ventures, Inc., of Jupiter, FL. Now at Version 1.6.5, NPSS's features include the ability to model all types of complex systems, plug-and-play interfaces for fluid properties, a built-in plotting package, an interface to higher-fidelity legacy codes, multiple model views, a command language interpreter with a language-sensitive text editor, a comprehensive component solver, and variable setup controls. It also can operate on Linux, Windows, and UNIX platforms.[1346]

Originally begun as a virtual tool for designing new turbojet engines, NPSS has since found uses in testing rocket engines, fuel cells, analog controls, combined cycle engines, thermal management systems, preliminary design of airframe vehicles, and commercial and military engines.[1347]

Ultra Efficient Engine Technology Program

With the NPSS tool firmly in place and some four decades of experience incrementally improving the design, operation, and maintenance of the jet engine, it was time to go for broke and assemble an ultrabright team of engineers to come up with nothing short of the best jet engine possible.

Building on the success of technology development programs such as the Quiet Clean Short Haul Experimental Engine and Energy Efficient Engine projects—all of which led directly to improvements in, and production of, the turbojet engines propelling today's commercial airliners—NASA approached the start of the 21st century with plans to push jet engine design to even more impressive feats. In 1999, the Aeronautics Directorate of NASA began the Ultra Efficient Engine Technology (UEET) program—a 5-year, $300-million effort—with two primary goals. The first was to find ways to enable further improvements in engine efficiency, reducing fuel burn and, as a result, carbon dioxide emissions by yet another 15 percent. The second was to continue developing new materials and configuration schemes for the engine's combustor to reduce emissions of nitrogen oxides (NOx) during takeoff and landing by 70 percent relative to the standards detailed in 1996 by the International Civil Aviation Organization.[1348]

NASA’s Glenn Research Center led the program, with participation from three other NASA Centers: Ames, Langley, and the Goddard Space Flight Center in Greenbelt, MD. Also involved were GE, Pratt & Whitney, Honeywell, Allison/Rolls-Royce, Williams International, Boeing, and Lockheed Martin.[1349]

The program comprised seven major projects, each of which addressed particular technology needs and exploitation opportunities.[1350] The Propulsion Systems Integration and Assessment project examined overall component technology issues relevant to the UEET program to help furnish overall program guidance and identify technology shortfalls.[1351] The Emissions Reduction project sought to significantly reduce NOx and other emissions, using new combustor concepts and technologies such as lean-burning combustors with advanced controls and high-temperature ceramic matrix composite materials.[1352] The Highly Loaded Turbomachinery project sought to design lighter-weight, reduced-stage cores, low-pressure spools, and propulsors for more efficient and environmentally friendly engines, along with advanced fan concepts for quieter, lighter, and more efficient fans.[1353] The Materials and Structures for High Performance project sought to develop and demonstrate high-temperature material concepts such as ceramic matrix composite combustor liners and turbine vanes, advanced disk alloys, turbine airfoil material systems, high-temperature polymer matrix composites, and innovative lightweight materials and structures for static engine structures.[1354] The Propulsion-Airframe Integration project studied propulsion systems and engine locations that could furnish improved engine and environmental benefits without compromising the aerodynamic performance of the airplane, developing advanced technologies to yield lower-drag integration of the propulsion system with the airframe for a wide range of vehicle classes; lowering aircraft drag itself constituted a highly desirable means of reducing fuel burn, because decreasing drag improves air vehicle performance and efficiency, which reduces the fuel burned to accomplish a particular mission and thereby reduces CO2 emissions.[1355] The Intelligent Propulsion Controls project sought to capitalize upon breakthroughs in electronic control technology to improve propulsion system life and enhance flight safety by integrating information, propulsion, and integrated flight propulsion control technologies.[1356] Finally, the Integrated Component Technology Demonstrations project sought to evaluate the benefits of off-the-shelf propulsion systems integration on NASA, Department of Defense, and aeropropulsion industry partnership efforts, including both the UEET and the military's Integrated High Performance Turbine Engine Technology (IHPTET) programs.[1357]

By 2003, the 7 project areas had come up with 10 specific technology areas that UEET would investigate and incorporate into an engine that would meet the program's goals for reducing pollution and increasing fuel burn efficiency. The technology goals included:

1. Advanced low-NOx combustor design that would feature a lean burning concept.

2. A highly loaded compressor that would lower system weight, improve overall performance, and result in lower fuel burn and carbon dioxide emissions.

3. A highly loaded, high-pressure turbine that could allow a reduction in the number of high-pressure stages, parts count, and cooling requirements, all of which could improve fuel burn and lower carbon dioxide emissions.

4. A highly loaded, low-pressure turbine and aggressive transition duct, using flow control techniques to reduce the number of low-pressure stages within the engine.

5. Use of a ceramic matrix composite turbine vane that would allow high-pressure vanes to operate at a higher inlet temperature, which would reduce the amount of engine cooling necessary and result in lower carbon dioxide emissions.

6. The same ceramic matrix composite material would be used to line the combustor walls so it could operate at a higher temperature and reduce NOx emissions.

7. Coat the turbine airfoils with a ceramic thermal barrier material to allow the turbines to operate at a higher tem­perature and thus reduce carbon dioxide emissions.

8. Use advanced materials in the construction of the turbine airfoil and disk. Specifically, use a lightweight single crystal superalloy to allow the turbine blades and vanes to operate at a higher temperature and reduce carbon dioxide emissions, as well as a dual microstructure nickel-base superalloy to manufacture turbine disks tailored to meet the demands of the higher-temperature environment.

9. Determine advanced materials and structural concepts for an improved, lighter-weight, impact-damage-tolerant, and noise-reducing fan containment case.

10. Develop active tip clearance control technology for use in the fan, compressor, and turbine to improve each component's efficiency and reduce carbon dioxide emissions.[1358]

In 2003, the UEET program was integrated into NASA's Vehicle Systems program to enable the engine work to be coordinated with research into improving other areas of overall aircraft technology. But in the wake of policy changes associated with the 2004 decision to redirect NASA's space program to retire the Space Shuttle and return humans to the Moon, the Agency was forced to shift some of its funding to Exploration, forcing the Aeronautics Directorate to give up the $21.6 million budgeted for UEET in fiscal year 2005 and effectively canceling the biggest and most complicated jet engine research program ever attempted. At the same time, NASA was directed to realign its jet engine research to concentrate on further reducing noise.[1359]

Nevertheless, results from tests of UEET hardware showed promise that a large, subsonic aircraft equipped with some of the technologies detailed above would have a "very high probability" of achieving the program goals laid out for reducing emissions of carbon dioxide and other pollutants. The data remain for application to future aircraft and engine schemes.[72]

1973 RANN Symposium Sponsored by the National Science Foundation

In reviewing the current status and potential of wind energy, Ronald Thomas and Joseph M. Savino, both from NASA's Lewis Research Center, presented a paper in November 1973 at the Research Applied to National Needs Symposium in Washington, DC, sponsored by the National Science Foundation. The paper reviewed past experience with wind generators, problems to be overcome, the feasibility of wind power to help meet energy needs, and the planned Wind Energy Program. Thomas and Savino pointed out that the Dutch had used windmills for years to provide power for pumping water and grinding grain; that the Russians built a 100-kilowatt generator at Balaclava in 1931 that fed into a power network; that the Danes had used wind as a major source of power for many years, including building the 200-kilowatt Gedser mill system that operated from 1957 through 1968; that the British built several large wind generators in the early 1950s; that the Smith-Putnam wind turbine built in Vermont in 1941 supplied power into a hydroelectric power grid; and that the Germans did fine work in the 1950s and 1960s building and testing machines of 10 and 100 kilowatts. The two NASA engineers noted, however, that in 1973, no large wind turbines were in operation.

Thomas and Savino concluded that preliminary estimates indicated that wind could supply a significant amount of the Nation's electricity needs and that utilizing energy from the wind was technically feasible, as evidenced by the past development of wind generators. They added, however, that a sustained development effort was needed to obtain economical systems. They noted that the effects of wind variability could be reduced by storage systems, by connecting wind generators to fossil fuel or hydroelectric systems, or by dispersing the generated electricity throughout a large grid system. Thomas and Savino[1497] recommended a number of steps that the NASA and National Science Foundation program should take, including: (1) designing, building, and testing modern machines for actual applications in order to provide baseline information for assessing the potential of wind energy as an electric power source, (2) operating wind generators in selected applications to determine actual power costs, and (3) identifying subsystems and components whose costs might be further reduced.[1498]

The Making of an Engineer

Richard Travis Whitcomb was born on February 21, 1921, in Evanston, IL, and grew up in Worcester, MA. He was the eldest of four children in a family led by mathematician-engineer Kenneth F. Whitcomb.[136] Whitcomb was one of the many air-minded American children building and testing aircraft models throughout the 1920s and 1930s.[137] At the age of 12, he created an aeronautical laboratory in his family's basement. Whitcomb spent the majority of his time there building, flying, and innovating rubber-band-powered model airplanes, with the exception of reluctantly eating, sleeping, and going to school. He never had a desire to fly himself, but, in his words, he pursued aeronautics for the "fascination of making a model that would fly." One innovation Whitcomb developed was a propeller that folded back when it stopped spinning to reduce aerodynamic drag. He won several model airplane contests and was a prizewinner in the Fisher Body Company automobile model competition; both were formative events for young American men who would become the aeronautical engineers of the 1940s. Even as a young man, Whitcomb exhibited an enthusiastic drive that could not be diverted until the challenge was overcome.[138]

A major influence on Whitcomb during his early years was his paternal grandfather, who had left farming in Illinois to become a manufacturer of mechanical vending machines. Independent and driven, the grandfather was also an acquaintance of Thomas A. Edison. Whitcomb listened attentively to his grandfather's stories about Edison and soon came to idolize the inventor for his ideas as well as for his freethinking individuality.[139] The admiration for his grandfather and for Edison shaped Whitcomb's approach to aeronautical engineering.

Whitcomb received a scholarship to nearby Worcester Polytechnic Institute and entered the prestigious school's engineering program in 1939. He lived at home to save money and spent the majority of his time in the institute's wind tunnel. Interested in helping with the war effort, Whitcomb made his senior project the design of a guided bomb. He graduated with distinction with a bachelor of science degree in mechanical engineering. A 1943 Fortune magazine article on the NACA convinced Whitcomb to join the Government-civilian research facility at Hampton, VA.[140]

Airplanes ventured into a new aerodynamic regime, the so-called "transonic barrier," as Whitcomb entered his second year at Worcester. At speeds approaching Mach 1, aircraft experienced sudden changes in stability and control, extreme buffeting, and, most importantly, a dramatic increase in drag, which exposed three challenges to the aeronautical community, involving propulsion, research facilities, and aerodynamics. The first challenge involved the propeller and piston-engine propulsion system. The highly developed and reliable system was at a plateau and incapable of powering an airplane in the transonic regime. The turbojet revolution brought forth by the introduction of jet engines in Great Britain and Germany in the early 1940s provided the power needed for transonic flight. The latter two challenges directly involved the NACA, and to an extent Dick Whitcomb, during the course of the 1940s. Bridging the gap between subsonic and supersonic speeds was a major aerodynamic challenge.[141]

Little was known about the transonic regime, which falls between Mach 0.8 and 1.2. Aeronautical engineers faced a daunting challenge rooted in developing new tools and concepts. The aerodynamicist's primary tool, the wind tunnel, was unable to operate and generate data at transonic speeds. In lieu of an available wind tunnel, four approaches were used for transonic research in the 1940s. One way to generate data for speeds beyond 350 mph was through aircraft diving at terminal velocity, which was dangerous for test pilots and of limited value for aeronautical engineers. Moreover, a representative drag-weight ratio for a 1940-era airplane ensured that it was unable to exceed Mach 0.8. Another way was the use of a falling body, an instrumented missile dropped from the bomb bay of a Boeing B-29 Superfortress. A third method was the wing-flow model. NACA personnel mounted a small, instrumented airfoil on top of the wing of a North American P-51 Mustang fighter. The Mustang traveled at high subsonic speeds and provided a recoverable method in real-time conditions. Finally, the NACA launched small models mounted atop rockets from the Wallops Island facility on Virginia's Eastern Shore.[142] The disadvantages of these latter three methods were that they generated data only for short periods of time and that many variables regarding conditions could affect the tests.

Even if a wind tunnel existed that was capable of evaluating aircraft at transonic speeds, there was no concept that guaranteed a successful transonic aircraft design. A growing base of knowledge in supersonic aircraft design emerged in Europe beginning in the 1930s. Jakob Ackeret operated the first wind tunnel capable of generating Mach 2, in Zurich, Switzerland, and designed tunnels for other countries. The international high-speed aerodynamics community met at the Volta Conference held in Rome in 1935. A paper presented by German aerodynamicist Adolf Busemann argued that if aircraft designers swept the wing back from the fuselage, it would offset the increase in drag beyond speeds of Mach 1. Busemann offered a revolutionary answer to the problem of high-speed aerodynamics and the sound barrier. In retrospect, the Volta Conference proved to be a turning point in high-speed aerodynamics research, especially for Nazi Germany. In 1944, Dietrich Kuchemann discovered that a contoured fuselage resembling the now-iconic Coca-Cola soft drink bottle was ideal when combined with Busemann's swept wings. American researcher Robert T. Jones independently discovered the swept wing at NACA Langley almost a decade after the Volta Conference. Jones was a respected Langley aerodynamicist, and his five-page 1945 report provided a standard definition of the aerodynamics of a swept wing. The report appeared at the same time that high-speed aerodynamic information from Nazi Germany was reaching the United States.[143]

As the German and American high-speed traditions merged after World War II, the American aeronautical community realized that there were still many questions to be answered regarding high-speed flight. Three NACA programs in the late 1940s and early 1950s overcame the remaining aerodynamic and facility "barriers" in what John Becker characterized as "one of the most effective team efforts in the annals of aeronautics." The National Aeronautics Association recognized these NACA achievements three times through aviation's highest award, the Collier Trophy, for 1947, 1951, and 1954. The first award, for the achievement of supersonic flight by the X-1, was presented jointly to John Stack of the NACA, manufacturer Lawrence D. Bell, and Air Force test pilot Capt. Charles E. "Chuck" Yeager. The second award, presented in 1952, recognized the slotted transonic tunnel development pioneered by John Stack and his associates at NACA Langley.[144] The third award recognized a direct byproduct of that wind tunnel's development: the design concept, conceived by the visionary Dick Whitcomb, that would enable aircraft to efficiently transition from subsonic to supersonic speeds through the transonic regime.

A Painful Lesson: Sonic Booms and the Supersonic Transport

By the late 1950s, the rapid pace of aeronautical progress—with new turbojet-powered airliners flying twice as fast and high as the propeller-driven transports they were replacing—promised even higher speeds in coming years. At the same time, the perceived challenge to America's technological superiority implied by the Soviet Union's early space triumphs inspired a willingness to pursue ambitious new aerospace ventures. One of these was the Supersonic Commercial Air Transport (SCAT). This program was further motivated by competition from Britain and France to build an airliner that was expected to dominate the future of mid- and long-range commercial aviation.[344]

Figure 2. Cover of an Air Force pamphlet for sonic boom claim investigators. USAF.

Aerospaceplane to NASP: The Lure of Air-Breathing Hypersonics

The Space Shuttle represented a rocket-lofted approach to hypersonic space access. But rockets were not the only means of propulsion contemplated for hypersonic vehicles. One of the most important aspects of hypersonic evolution since the 1950s has been the development of the supersonic combustion ramjet, popularly known as a scramjet. The ramjet in its simplest form is a tube and nozzle, into which air is introduced, mixed with fuel, and ignited, the combustion products passing through a classic nozzle and propelling the engine forward. Unlike a conventional gas turbine, the ramjet does not have a compressor wheel or staged compressor blades, typically cannot function at speeds less than Mach 0.5, and does not come into its own until the inlet velocity is near or greater than the speed of sound. Then it functions remarkably well as an accelerator, to speeds well in excess of Mach 3.

Conventional subsonic-combustion ramjets, as employed by the Mach 4.31 X-7, held promise as hypersonic accelerators for a time, but they could not approach higher hypersonic speeds because their subsonic internal airflow heated excessively at high Mach. If a ramjet could be designed that had a supersonic internal flow, it would run much cooler and at the same time be able to accelerate a vehicle to double-digit hypersonic Mach numbers, perhaps reaching the magic Mach 25, signifying orbital velocity. Such an engine would be a scramjet. Such engines have only recently made their first flights, but they nevertheless are important in hypersonics and point the way toward future practical air-breathing hypersonics.
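The thermodynamic argument can be made concrete with the standard stagnation-temperature relation (a sketch assuming adiabatic flow of a calorically perfect gas with $\gamma = 1.4$; real engine flows depart from this idealization):

$$
\frac{T_{0}}{T_{\infty}} \;=\; 1 + \frac{\gamma - 1}{2}\,M_{\infty}^{2}
$$

At $M_{\infty} = 6$ the ratio is $1 + 0.2 \times 36 = 8.2$: air at a static temperature near 220 K would approach 1,800 K if decelerated to subsonic speed ahead of the combustor, leaving little margin for heat addition or for materials. Keeping the internal flow supersonic holds the static temperature far below the stagnation value, which is precisely the scramjet's advantage.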

An important concern explored at the NACA's Lewis Flight Propulsion Laboratory during the 1950s was whether it was possible to achieve supersonic combustion without producing attendant shock waves that slow internal flow and heat it. Investigators Irving Pinkel and John Serafini proposed experiments in supersonic combustion under a supersonic wing, postulating that this might afford a means of furnishing additional lift. Lewis researchers also studied supersonic combustion testing in wind tunnels. Supersonic tunnels produced very low air pressure, but it was known that aluminum borohydride could promote the ignition of pentane fuel even at pressures as low as 0.03 atmospheres. In 1955, Robert Dorsch and Edward Fletcher successfully demonstrated such tunnel combustion, and subsequent research indicated that combustion more than doubled lift at Mach 3.

Though encouraging, this work involved flow near a wing, not in a ramjet-like duct. Even so, NACA aerodynamicists Richard Weber and John MacKay posited that shock-free flow in a supersonic duct could be attained, publishing the first open-literature discussion of theoretical scramjet performance in 1958, which concluded: "the trends developed herein indicate that the [scramjet] will provide superior performance at higher hypersonic flight speeds."[644] The Weber-MacKay study came a year after Marquardt researchers had demonstrated supersonic combustion of a hydrogen and air mix. Other investigators working contemporaneously were the manager William Avery and the experimentalist Frederick Billig, who independently achieved supersonic combustion at the Johns Hopkins University Applied Physics Laboratory (APL), and J. Arthur Nicholls at the University of Michigan.[645]

The most influential of all scramjet advocates was the colorful Italian aerodynamicist, partisan leader, and wartime émigré Antonio Ferri. Before the war, as a young military engineer, he had directed supersonic wind tunnel studies at Guidonia, Benito Mussolini's showcase aeronautical research establishment outside Rome. In 1943, after the collapse of the Fascist regime and the Nazi assumption of power, he left Guidonia, leading a notably successful band of anti-Nazi, anti-Fascist partisans. Brought to America by Moe Berg, a baseball player turned intelligence agent, Ferri joined NACA Langley, becoming Director of its Gas Dynamics Branch. Turning to the academic world, he secured a professorship at Brooklyn Polytechnic Institute. He formed a close association with Alexander Kartveli, chief designer at Republic Aviation and designer of the P-47, F-84, XF-103, and F-105. Indeed, Kartveli's XF-103 (which, alas, never was completed or flown) employed a Ferri engine concept. In 1956, Ferri established General Applied Science Laboratories (GASL), with financial backing from the Rockefellers.[646]

Ferri emphasized that scramjets could offer sustained performance far higher than rockets could, and his strong reputation ensured that people listened to him. At a time when shock-free flow in a duct still loomed as a major problem, Ferri did not flinch from it but instead took it as a point of departure. He declared in September 1958 that he had achieved it, taking a position midway between the demonstrations at Marquardt and APL. Because he was well known, he thereby turned the scramjet from a wish into an invention that might be made practical.

He presented his thoughts publicly at a technical colloquium in Milan in 1960 ("Many of the older men present," John Becker wrote subsequently, "were politely skeptical") and went on to give a far more detailed discussion in May 1964 at the Royal Aeronautical Society in London. This was the first extensive public presentation on hypersonic propulsion, and the attendees responded with enthusiasm. One declared that whereas investigators "had been thinking of how high in flight speed they could stretch conventional subsonic burning engines, it was now clear that they should be thinking of how far down they could stretch supersonic burning engines," and another added that Ferri now was "assailing the field which until recently was regarded as the undisputed regime of the rocket."[647]

Scramjet advocates were offered their first opportunity to actually build such propulsion systems with the Air Force's abortive Aerospaceplane program of the late 1950s to mid-1960s. A contemporary of Dyna-Soar but far less practical, Aerospaceplane was a bold yet premature effort to produce a logistical transatmospheric vehicle and possible orbital strike system. Conceived in 1957 and initially known as the Recoverable Orbital Launch System (ROLS), Aerospaceplane attracted surprising interest from industry. Seventeen aerospace companies submitted contract proposals and related studies; Convair, Lockheed, and Republic submitted detailed designs. The Republic concept had the greatest degree of engine-airframe integration, a legacy of Ferri's partnership with Kartveli.

By the early 1960s, Aerospaceplane not surprisingly was beset with numerous developmental problems, along with a continued debate over whether it should be a single- or two-stage system and what proportion of its propulsion should be turbine, scramjet, and pure rocket. Though it briefly outlasted Dyna-Soar, it met the same harsh fate. In the fall of 1963, the Air Force Scientific Advisory Board damned the program in no uncertain terms, noting: "Aerospaceplane has had such an erratic history, has involved so many clearly infeasible factors, and has been subjected to so much ridicule that from now on this name should be dropped. It is recommended that the Air Force increase [its] vigilance [so] that no new program achieves such a difficult position."[648] The next year, Congress slashed its remaining funding, and Aerospaceplane was at last consigned to a merciful oblivion.

In the wake of Aerospaceplane’s cancellation, both the Air Force and NASA maintained an interest in advancing scramjet propulsion for transatmospheric aircraft. The Navy’s scramjet interest, though great, was primarily in smaller engines for missile applications. But Air Force and NASA partisans formed an Ad-Hoc Working Group on Hypersonic Scramjet Aircraft Technology.

Both agencies pursued development programs that sought to build and test small scramjet modules. The Air Force Aero-Propulsion Laboratory sponsored development of an Incremental Scramjet flight-test program at Marquardt. This proposed test vehicle underwent extensive analysis and study, though without actually flying as a functioning scramjet testbed. The first manifestation of Langley work was the so-called Hypersonic Research Engine (HRE), an axisymmetric scramjet of circular cross section with a simple Oswatitsch spike inlet, designed by Anthony duPont. Garrett AiResearch built this engine, planned for a derivative of the X-15. The HRE never actually flew as a "hot" functioning engine, though the X-15A-2 flew repeatedly with a boilerplate test article mounted on the stub ventral fin (during its record flight to Mach 6.70 on October 3, 1967, searing hypersonic shock interactions melted it off the plane). Subsequent tunnel tests revealed that the HRE was, unfortunately, the wrong design. A podded and axisymmetric design, like an airliner's fanjet, it could capture only a small fraction of the air that flowed past a vehicle, resulting in greatly reduced thrust. Integrating the scramjet with the airframe, so that it used the forebody to assist inlet performance and the afterbody as a nozzle enhancement, would more than double its thrust.[649]

Investigation of such concepts began at Langley in 1968, with pioneering studies by researchers John Henry, Shimer Pinckney, and others. Their work expanded upon a largely Ferri-inspired base, defining what emerged as common basic elements of subsequent Langley scramjet research. It included a strong emphasis upon airframe integration, use of fixed geometry, a swept inlet that could readily spill excess airflow, and the use of struts for fuel injection. Early observations, published in 1970, showed that struts were practical for a large supersonic combustor at Mach 8. The program went on to construct test scramjets and conducted almost 1,000 wind tunnel test runs of engines at Mach 4 and Mach 7. Inlets at Mach 4 proved sensitive to "unstarts," a condition in which the shock wave is displaced, disrupting airflow and essentially starving the engine of its oxygen. Flight at Mach 7 raised the question of whether fuel could mix and burn in the short available combustor length.[650]

Langley test engines, like engines at GASL, Marquardt, and other scramjet research organizations, encountered numerous difficulties. Large disparities existed between predicted performance and that actually achieved in the laboratory. Indeed, the scramjet, advanced so boldly in the mid-1950s, would not be ready for serious pursuit as a propulsive element until 1986. Then, on the eve of the National Aerospace Plane development program, Langley researchers Burton Northam and Griffin Anderson announced that NASA had succeeded at last in developing a practical scramjet. They proclaimed triumphantly: "At both Mach 4 and Mach 7 flight conditions, there is ample thrust both for acceleration and cruise."[651]

Out of such optimism sprang the National Aero-Space Plane program, which became a central feature of the presidency of Ronald Reagan. It was linked to other Reagan-era defense initiatives, particularly his Strategic Defense Initiative (SDI), a ballistic missile shield intended to reduce the threat of nuclear war, which critics caustically belittled as "Star Wars." SDI called for the large-scale deployment of defensive arms in space, and it became clear that the Space Shuttle would not be their carrier. Experience since the Shuttle's first launch in April 1981 had shown that it was costly and took a long time to prepare for relaunch. The Air Force was unwilling to place the national eggs in such a basket. In February 1984, Defense Secretary Caspar Weinberger approved a document stating that total reliance upon the Shuttle represented an unacceptable risk.

An Air Force initiative was under way at the time that looked toward an alternative. Gen. Lawrence A. Skantze, Chief of Air Force Systems Command (AFSC), had sponsored studies of Trans Atmospheric Vehicles (TAVs) by the Air Force Aeronautical Systems Division (ASD). These reflected concepts advanced by ASD's chief planner, Stanley A. Tremaine, as well as interest from Air Force Space Division (SD), the Defense Advanced Research Projects Agency (DARPA), and Boeing and other companies. TAVs were SSTO craft intended to use the Space Shuttle Main Engine (SSME) and possibly would be air-launched from derivatives of the Boeing 747 or Lockheed C-5. In August 1982, ASD had hosted a 3-day conference on TAVs, attended by representatives from AFSC's Space Division and DARPA. In December 1984, ASD went further. It established a TAV Program Office to "streamline activities related to long-term, preconceptual design studies."[652]

DARPA's participation was not surprising, for Robert Cooper, head of this research agency, had elected to put new money into ramjet research. His decision opened a timely opportunity for Anthony duPont, who had designed the HRE for NASA. DuPont held a strong interest in "combined-cycle engines" that might function as a turbine air breather, translate to ram/scram, and then perhaps use some sophisticated air collection and liquefaction process to enable them to boost as rockets into orbit. There are several types of these engines, and duPont had patented such a design as early as 1972. A decade later, he still believed in it, and he learned that Anthony Tether was the DARPA representative who had been attending TAV meetings.

Tether sent him to Cooper, who introduced him to DARPA aerodynamicist Robert Williams, who brought in Arthur Thomas, who had been studying scramjet-powered spaceplanes as early as Sputnik. Out of this climate of growing interest came a $5.5 million DARPA study program, Copper Canyon. Its results were so encouraging that DARPA took the notion of an air-breathing single-stage-to-orbit vehicle to Presidential science adviser George Keyworth and other senior officials, including Air Force Systems Command's Gen. Skantze. As Thomas recalled: "The people were amazed at the component efficiencies that had been assumed in the study. They got me aside and asked if I really believed it. Were these things achievable? Tony [duPont] was optimistic everywhere: on mass fraction, on drag of the vehicle, on inlet performance, on nozzle performance, on combustor performance. The whole thing, across the board. But what salved our consciences was that even if these things weren't all achieved, we still could have something worthwhile. Whatever we got would still be exciting."[653]

The National Aero-Space Plane concept in final form, showing its modified lifting body design approach. NASA.

Gen. Skantze realized that SDI needed something better than the Shuttle—and Copper Canyon could possibly be it. Briefings were encouraging, but he needed to see technical proof. That evidence came when he visited GASL and witnessed a subscale duPont engine in operation. Afterward, as DARPA's Bob Williams recalled subsequently, "the Air Force system began to move with the speed of a spaceplane."[654] Secretary of Defense Caspar Weinberger received a briefing and put his support behind the effort. In January 1986, the Air Force established a joint-service Air Force-Navy-NASA National Aero-Space Plane Joint Program Office at Aeronautical Systems Division, transferring into it all the personnel previously assigned to the TAV Program Office. (The program soon received an X-series designation, as the X-30.) Then came the clincher. President Ronald Reagan announced his support for what he now called the "Orient Express" in his State of the Union Address to the Nation on February 4, 1986. President Reagan's support was not the product of some casual whim: the previous spring, he had ordered a joint Department of Defense-NASA space launch study of future space needs and, additionally, established a national space commission. Both strongly endorsed "aerospace plane development," the space commission recommending it be given "the highest national priority."[655]

Though advocates of NASP attempted to sharply differentiate their effort from that of the discredited Aerospaceplane of the 1960s, the NASP effort shared some distressing commonality with its predecessor, particularly an exuberant and increasingly unwarranted optimism that afforded ample opportunity for the program to run into difficulties. In 1984, with optimism at its height, DARPA's Cooper declared that the X-30 could be ready in 3 years. DuPont, closer to the technology, estimated that the Government could build a 50,000-pound fighter-size vehicle in 5 years for $5 billion. Such predictions proved wildly off the mark. As early as 1986, the "Government baseline" estimate of the aircraft's weight rose to 80,000 pounds. Six years later, in 1992, its gross weight had risen eightfold, to 400,000 pounds. It also had a "velocity deficit" of 3,000 feet per second, meaning that it could not possibly attain orbit. By the next year, NASP "lay on life support."[656]

It had evolved from a small, seductively streamlined speedster to a fatter and far less appealing shape more akin to a wooden shoe, entering a death spiral along the way. It lacked performance, so it needed greater power and fuel, which made it bigger, which meant it lacked performance, so that it needed greater power and fuel, which made it bigger . . . and bigger . . . and bigger. The X-30 could never attain the "design closure" permitting it to reach orbit. NASP's support continuously softened, particularly as technical challenges rose, performance estimates fell, and other national issues grew in prominence. It finally withered in the mid-1990s, leaving unresolved what, if anything, scramjets might achieve.[657]

The Advent of Fixed-Base Simulation

Simulating flight has been an important part of aviation research since even before the Wright brothers. The wind tunnel, invented in the 1870s, represented one means of simulating flight conditions. The rudimentary Link trainer of the Second World War, although it did not attempt to represent any particular airplane, was used to train pilots in the proper navigation techniques to use while flying in clouds. Toward the end of the Second World War, researchers within Government, the military services, academia, and private industry began experimenting with analog computers to solve differential equations in real time. Electronic components, such as amplifiers, resistors, capacitors, and servos, were linked together to perform mathematical operations, such as arithmetic and integration. By patching many of these components together, it was possible to continuously solve the equations of motion for a moving object. Six differential equations can be used to describe the motion of an object: three rotational equations identify pitching, rolling, and yawing motions, and three translational equations identify linear motion in the fore-and-aft, sideways, and up-and-down directions. Each of these equations requires two independent integrations to solve for the vehicle velocities and positions. Prior to the advent of analog computers, integration was a very tedious, manual operation, not amenable to real-time solutions. Analog computers allowed the integration to be accomplished in real time, opening the door to pilot-in-the-loop simulation. The next step was the addition of controlling inputs from an operator (stick and rudder pedals) and output displays (dials and oscilloscopes) to permit continuous, real-time control of a simulated moving object. Early simulations solved only three of the equations of motion, usually pitch rotation and the horizontal and vertical translational equations, neglecting some of the minor coupling terms that link all six equations. As analog computers became more available and affordable, simulation capabilities expanded to include five and eventually all six of the equations of motion (commonly referred to as "six degrees of freedom," or 6DOF).
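The two-integrations-per-equation structure is easy to see in code. Below is a minimal sketch of one degree of freedom, using the simplest digital analogue (an Euler integrator) of what the analog machines did with electronic integrators; the names and step size are illustrative only:

```python
# Sketch of the double-integration idea behind real-time simulation: each
# degree of freedom integrates acceleration to velocity, then velocity to
# position, every frame.

DT = 0.01  # integration step, seconds


def step(accel: float, vel: float, pos: float) -> tuple[float, float]:
    """Two chained integrations for one degree of freedom."""
    vel += accel * DT   # first integration: acceleration -> velocity
    pos += vel * DT     # second integration: velocity -> position
    return vel, pos


# A full 6DOF simulation runs six such chains (three rotations, three
# translations), with coupling terms feeding each axis's acceleration.
vel, pos = 0.0, 0.0
for _ in range(100):                    # one simulated second
    vel, pos = step(9.81, vel, pos)     # constant acceleration, for example
print(round(vel, 2), round(pos, 2))     # ~9.81 and ~4.95
```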

By the mid-1950s, the Air Force, on NACA advice, had acquired a Goodyear Electronic Differential Analyzer (GEDA) to predict aircraft handling qualities based on the extrapolation of data acquired from previous test flights. One of the first practical applications of simulation was the analysis of the F-100A roll-coupling accident that killed North American Aviation (NAA) test pilot George "Wheaties" Welch on October 12, 1954, one of six similar accidents that triggered an emergency grounding of the Super Sabre. By programming the pilot's inputs into a set of equations of motion representing the F-100A, researchers duplicated the circumstances of the accident. The combination of simulation and flight-testing on another F-100A at the NACA High-Speed Flight Station (now the Dryden Center) forced a redesign of the aircraft. North American increased the size of the vertical fin by 10 percent and, when even this proved insufficient, increased it again by nearly 30 percent, modifying existing and new production Super Sabres with the larger tail. Thus modified, the F-100 went on to a very successful career as a mainstay Air Force fighter-bomber.[719]

Another early application of computerized simulation analysis occurred during the Air Force-NACA X-2 research airplane program in 1956. NACA engineer Richard E. Day established a simulation of the X-2 on the Air Force's GEDA analog computer. He used a B-17 bombardier's stick as an input control and a simple oscilloscope with a line representing the horizon as a display, along with some voltmeters for airspeed, angle of attack, and the like. Although the controls and display were crude, the simulation did accurately duplicate the motions of the airplane. Day learned that lateral control inputs near Mach 3 could result in a roll reversal and loss of control. He showed these characteristics to Capt. Iven Kincheloe on the simulator before his flight to 126,200 feet on September 7, 1956. When the rocket engine quit near Mach 3, the airplane was climbing steeply but was in a 45-degree bank. Kincheloe remembered the simulation results and did not attempt to right the airplane with lateral controls until well into the entry at a lower Mach number, thus avoiding the potentially disastrous coupled motion observed on the simulator.[720]

Kincheloe's successor as X-2 project pilot, Capt. Milburn Apt, also flew the simulator before his ill-fated high-speed flight in the X-2 on September 27, 1956. When the engine exhausted its propellants, Apt was at Mach 3.2 and over 65,000 feet, heading away from Edwards and apparently concerned that the speeding plane would be unable to turn and glide home to its planned landing on Rogers Dry Lake. When he used the lateral controls to begin a gradual turn back toward the base, the X-2 went out of control. Apt was badly battered in the violent motions that ensued, was unable to use his personal parachute, and was killed.[721]

The loss of the X-2 and Apt shocked the Edwards community. The accident could be duplicated on the simulator, solidifying the value of simulation in the field of aviation and particularly in flight-testing.[722] The X-2 experience convinced the NACA (later NASA) that simulation must play a significant role in the forthcoming X-15 hypersonic research aircraft program. The industry responded to the need with larger and more capable analog computer equipment.[723]

The X-15 simulator constituted a significant step in both simulator design and flight-test practice. It consisted of several analog computers connected to a fixed-base cockpit replicating that of the aircraft, and an "iron bird" duplication of all control system hardware (hydraulic actuators, cable runs, control surface mass balances, etc.). Computer output parameters were displayed on the normal cockpit instruments, though there were no visual displays outside the cockpit. This simulator was first used at the North American plant in Inglewood, CA, during the design and manufacture of the airplane. It was later transferred to NASA DFRC at Edwards AFB and became the primary tool used by the X-15 test team for mission planning, pilot training, and emergency procedure definition.

The high g environment and the high pilot workload during the 10-minute X-15 flights required that the pilot and the operational support team in the control room be intimately familiar with each flight plan. There was no time to communicate emergency procedures if an emergency occurred—they had to be already embedded in the memories of the pilot and team members. That necessity highlighted another issue underscored by the X-15's simulator experience: the importance of replicating with great fidelity the actual cockpit layout and instrumentation in the simulator. On at least two occasions, X-15 pilots nearly misread their instrumentation or reached for the wrong switch because of seemingly minor differences between the simulator and the instrumentation layout of the X-15 aircraft.[724]

Overall, test pilots and flight-test engineers uniformly agreed that the X-15 program could not have been accomplished safely or productively without the use of the simulator. Once the X-15 began flying, engineers updated the simulator using data extracted from actual flight experience, steadily refining and increasing its fidelity. An X-15 pilot "flew" the simulator an average of 15 hours for every flight, roughly 1 hour of simulation for every minute of flying time. The X-15 experience emphasized the profound value of simulation, and soon nearly all new airplanes and spacecraft were accompanied by fixed-base simulators for engineering analysis and pilot/astronaut training.

NASA and the Evolution of Computational Fluid Dynamics

John D. Anderson, Jr.


The expanding capabilities of the computer readily led to its increasing application to the aerospace sciences. NACA-NASA researchers were quick to realize how the computer could supplement traditional test methodologies, such as the wind tunnel and structural test rig. Out of this came a series of studies leading to the evolution of computer codes used to undertake computational fluid dynamics and structural predictive studies. Those codes, refined over the last quarter century and available to the public, are embodied in many current aircraft and spacecraft systems.

THE VISITOR TO THE SMITHSONIAN INSTITUTION'S National Air and Space Museum (NASM) in Washington, DC, who takes the east escalator to the second floor, turns left into the Beyond the Limits exhibit gallery, and then turns left again into the gallery's main bay is suddenly confronted by three long equations with a bunch of squiggly symbols neatly painted on the wall. These are the Navier-Stokes equations, and the NASM (to this author's knowledge) is the world's only museum displaying them so prominently. These are not some introductory equations drawn for a first course in algebra, with simple symbols like a + b = c. Rather, these are "partial derivatives" strung together from the depths of university-level differential calculus. What are the Navier-Stokes equations, why are they in a gallery devoted to the history of the computer as applied to flight vehicles, and what do they have to do with the National Aeronautics and Space Administration (which, by the way, dominates the artifacts and technical content exhibited in this gallery)?
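For reference, a common textbook statement of these equations (shown here in incompressible, constant-viscosity form as an illustrative sketch; the form painted on the museum wall, for compressible viscous flow, is longer) is:

$$
\rho\left(\frac{\partial \mathbf{V}}{\partial t} + (\mathbf{V}\cdot\nabla)\mathbf{V}\right) = -\nabla p + \mu\,\nabla^{2}\mathbf{V} + \rho\mathbf{f},
\qquad \nabla\cdot\mathbf{V} = 0,
$$

where $\mathbf{V}$ is the velocity field, $p$ the pressure, $\rho$ the density, $\mu$ the viscosity, and $\mathbf{f}$ a body force per unit mass. Writing the momentum equation out in its three scalar components yields the three long strings of partial derivatives a visitor sees on the wall.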

The answers to all these questions have to do with computational fluid dynamics (CFD) and the pivotal role played by the National Aeronautics and Space Administration (NASA) in the development of CFD over the past 50 years. The role played by CFD in the study and understanding of fluid dynamics in general and in aerospace engineering in particular has grown from a fledgling research activity in the 1960s to a powerful "third" dimension in the profession, an equal partner with pure experiment and pure theory. Today it is used to help design airplanes, study the aerodynamics of automobiles, enhance wind tunnel testing, develop global weather models, and predict the tracks of hurricanes, to name just a few applications. New jet engines are developed with extensive use of CFD to model flows and combustion processes, and even the flow field in the reciprocating engine of the average family automobile is laid bare for engineers to examine and study using the techniques of CFD.

The history of the development of computational fluid dynamics is an exciting and provocative story. In the whole spectrum of the history of technology, CFD is still very young, but its importance today and in the future is of the first magnitude. This essay offers a capsule history of the development of theoretical fluid dynamics, tracing how the Navier-Stokes equations came about, discussing just what they are and what they mean, and examining their importance and what they have to do with the evolution of computational fluid dynamics. It then discusses what CFD means to NASA—and what NASA means to CFD. Of course, many other players have been active in CFD, in universities, other Government laboratories, and in industry, and some of their work will be noted here. But NASA has been the major engine that powered the rise of CFD for the solution of what were otherwise unsolvable problems in the fields of fluid dynamics and aerodynamics.

NASA Spawns NASTRAN, Its Greatest Computational Success

The project to develop a general-purpose finite element structural analysis system was conceived in the midst of this rapid expansion of finite element research in the 1960s. The development, and subsequent management, enhancement, and distribution, of the NASA Structural Analysis System, or NASTRAN, unquestionably constitutes NASA's greatest single contribution to computerized structural analysis—and arguably the single most influential contribution to the field from any source. NASTRAN is the workhorse of structural analysis: there may be more advanced programs in use for certain applications or in certain proprietary or research environments, but NASTRAN is the most capable general-purpose, generally available program for structural analysis in existence today, even more than 40 years after it was introduced.