Category AERONAUTICS

1973 RANN Symposium Sponsored by the National Science Foundation

In November 1973, Ronald Thomas and Joseph M. Savino, both of NASA’s Lewis Research Center, presented a paper reviewing the current status and potential of wind energy at the Research Applied to National Needs Symposium in Washington, DC, sponsored by the National Science Foundation. The paper reviewed past experience with wind generators, problems to be overcome, the feasibility of wind power to help meet energy needs, and the planned Wind Energy Program. Thomas and Savino pointed out that the Dutch had used windmills for years to provide power for pumping water and grinding grain; that the Russians built a 100-kilowatt generator at Balaclava in 1931 that fed into a power network; that the Danes used wind as a major source of power for many years, including building the 200-kilowatt Gedser mill system that operated from 1957 through 1968; that the British built several large wind generators in the early 1950s; that the Smith-Putnam wind turbine built in Vermont in 1941 supplied power into a hydroelectric power grid; and that the Germans did fine work in the 1950s and 1960s building and testing machines of 10 and 100 kilowatts. The two NASA engineers noted, however, that in 1973, no large wind turbines were in operation.

Thomas and Savino concluded that preliminary estimates indicated that wind could supply a significant amount of the Nation’s electricity needs and that utilizing energy from the wind was technically feasible, as evidenced by the past development of wind generators. They added, however, that a sustained development effort was needed to obtain economical systems. They noted that the effects of wind variability could be reduced by storage systems, by connecting wind generators to fossil fuel or hydroelectric systems, or by dispersing the generated electricity throughout a large grid system. Thomas and Savino[1497] recommended a number of steps that the NASA and National Science Foundation program should take, including: (1) designing, building, and testing modern machines for actual applications in order to provide baseline information for assessing the potential of wind energy as an electric power source, (2) operating wind generators in selected applications for determining actual power costs, and (3) identifying subsystems and components whose costs might be further reduced.[1498]

The Making of an Engineer

Richard Travis Whitcomb was born on February 21, 1921, in Evanston, IL, and grew up in Worcester, MA. He was the eldest of four children in a family led by mathematician-engineer Kenneth F. Whitcomb.[136] Whitcomb was one of the many air-minded American children building and testing aircraft models throughout the 1920s and 1930s.[137] At the age of 12, he created an aeronautical laboratory in his family’s basement. Whitcomb spent the majority of his time there building, flying, and innovating rubber band-powered model airplanes, pausing only reluctantly to eat, sleep, and go to school. He never had a desire to fly himself, but, in his words, he pursued aeronautics for the "fascination of making a model that would fly.” One innovation Whitcomb developed was a propeller that folded back when it stopped spinning to reduce aerodynamic drag. He won several model airplane contests and was a prizewinner in the Fisher Body Company automobile model competition; both were formative events for young American men who would become the aeronautical engineers of the 1940s. Even as a young man, Whitcomb exhibited an enthusiastic drive that could not be diverted until the challenge was overcome.[138]

A major influence on Whitcomb during his early years was his paternal grandfather, who had left farming in Illinois to become a manufacturer of mechanical vending machines. Independent and driven, the grandfather was also an acquaintance of Thomas A. Edison. Whitcomb listened attentively to his grandfather’s stories about Edison and soon came to idolize the inventor for his ideas as well as for his freethinking individuality.[139] The admiration for his grandfather and for Edison shaped Whitcomb’s approach to aeronautical engineering.

Whitcomb received a scholarship to nearby Worcester Polytechnic Institute and entered the prestigious school’s engineering program in 1939. He lived at home to save money and spent the majority of his time in the institute’s wind tunnel. Interested in helping with the war effort, Whitcomb made his senior project the design of a guided bomb. He graduated with distinction with a bachelor of science degree in mechanical engineering. A 1943 Fortune magazine article on the NACA convinced Whitcomb to join the Government-civilian research facility at Hampton, VA.[140]

Airplanes ventured into a new aerodynamic regime, the so-called "transonic barrier,” as Whitcomb entered his second year at Worcester. At speeds approaching Mach 1, aircraft experienced sudden changes in stability and control, extreme buffeting, and, most importantly, a dramatic increase in drag, which exposed three challenges to the aeronautical community, involving propulsion, research facilities, and aerodynamics. The first challenge involved the propeller and piston-engine propulsion system. The highly developed and reliable system was at a plateau and incapable of powering the airplane in the transonic regime. The turbojet revolution brought forth by the introduction of jet engines in Great Britain and Germany in the early 1940s provided the power needed for transonic flight. The latter two challenges directly involved the NACA and, to an extent, Dick Whitcomb, during the course of the 1940s. Bridging the gap between subsonic and supersonic speeds was a major aerodynamic challenge.[141]

Little was known about the transonic regime, which falls between Mach 0.8 and 1.2. Aeronautical engineers faced a daunting challenge rooted in developing new tools and concepts. The aerodynamicist’s primary tool, the wind tunnel, was unable to operate and generate data at transonic speeds. Four approaches were used in lieu of an available wind tunnel in the 1940s for transonic research. One way to generate data for speeds beyond 350 mph was through aircraft diving at terminal velocity, which was dangerous for test pilots and of limited value for aeronautical engineers. Moreover, a representative drag-weight ratio for a 1940-era airplane ensured that it was unable to exceed Mach 0.8. Another way was the use of a falling body, an instrumented missile dropped from the bomb bay of a Boeing B-29 Superfortress. A third method was the wing-flow model. NACA personnel mounted a small, instrumented airfoil on top of the wing of a North American P-51 Mustang fighter. The Mustang traveled at high subsonic speeds and provided a recoverable method in real-time conditions. Finally, the NACA launched small models mounted atop rockets from the Wallops Island facility on Virginia’s Eastern Shore.[142] The disadvantages of these latter three methods were that they generated data only for short periods of time and that many uncontrolled variables could affect the tests.

Even if a wind tunnel existed that was capable of evaluating aircraft at transonic speeds, there was no concept that guaranteed a successful transonic aircraft design. A growing base of knowledge in supersonic aircraft design emerged in Europe beginning in the 1930s. Jakob Ackeret operated the first wind tunnel capable of generating Mach 2 in Zurich, Switzerland, and designed tunnels for other countries. The international high-speed aerodynamics community met at the Volta Conference held in Rome in 1935. A paper presented by German aerodynamicist Adolf Busemann argued that if aircraft designers swept the wing back from the fuselage, it would offset the increase in drag beyond speeds of Mach 1. Busemann offered a revolutionary answer to the problem of high-speed aerodynamics and the sound barrier. In retrospect, the Volta Conference proved to be a turning point in high-speed aerodynamics research, especially for Nazi Germany. In 1944, Dietrich Kuchemann discovered that a contoured fuselage resembling the now-iconic Coca-Cola soft drink bottle was ideal when combined with Busemann’s swept wings. American researcher Robert T. Jones independently discovered the swept wing at NACA Langley almost a decade after the Volta Conference. Jones was a respected Langley aerodynamicist, and his five-page 1945 report provided a standard definition of the aerodynamics of a swept wing. The report appeared at the same time that high-speed aerodynamic information from Nazi Germany was reaching the United States.[143]

As the German and American high-speed traditions merged after World War II, the American aeronautical community realized that there were still many questions to be answered regarding high-speed flight. Three NACA programs in the late 1940s and early 1950s overcame the remaining aerodynamic and facility "barriers” in what John Becker characterized as "one of the most effective team efforts in the annals of aeronautics.” The National Aeronautics Association recognized these NACA achievements three times through aviation’s highest award, the Collier Trophy, for 1947, 1951, and 1954. The first award, for the achievement of supersonic flight by the X-1, was presented jointly to John Stack of the NACA, manufacturer Lawrence D. Bell, and Air Force test pilot Capt. Charles E. "Chuck” Yeager. The second award, for 1951, recognized the slotted transonic tunnel development pioneered by John Stack and his associates at NACA Langley.[144] The third award recognized a direct byproduct of that wind tunnel: the design concept, developed by the visionary mind of Dick Whitcomb, that would enable aircraft to transition efficiently from subsonic to supersonic speeds through the transonic regime.

A Painful Lesson: Sonic Booms and the Supersonic Transport

By the late 1950s, the rapid pace of aeronautical progress—with new turbojet-powered airliners flying twice as fast and high as the propel­ler-driven transports they were replacing—promised even higher speeds in coming years. At the same time, the perceived challenge to America’s technological superiority implied by the Soviet Union’s early space triumphs inspired a willingness to pursue ambitious new aerospace ventures. One of these was the Supersonic Commercial Air Transport (SCAT). This program was further motivated by competition from

Figure 2. Cover of an Air Force pamphlet for sonic boom claim investigators. USAF.

Britain and France to build an airliner that was expected to dominate the future of mid – and long-range commercial aviation.[344]

Aerospaceplane to NASP: The Lure of Air-Breathing Hypersonics

The Space Shuttle represented a rocket-lofted approach to hypersonic space access. But rockets were not the only means of propulsion contemplated for hypersonic vehicles. One of the most important aspects of hypersonic evolution since the 1950s has been the development of the supersonic combustion ramjet, popularly known as a scramjet. The ramjet in its simplest form is a tube and nozzle, into which air is introduced, mixed with fuel, and ignited, the combustion products passing through a classic nozzle and propelling the engine forward. Unlike a conventional gas turbine, the ramjet does not have a compressor wheel or staged compressor blades, cannot typically function at speeds less than Mach 0.5, and does not come into its own until the inlet velocity is near or greater than the speed of sound. Then it functions remarkably well as an accelerator, to speeds well in excess of Mach 3.

Conventional subsonic-combustion ramjets, as employed by the Mach 4.31 X-7, held promise as hypersonic accelerators for a time, but they could not approach higher hypersonic speeds because their subsonic internal airflow heated excessively at high Mach numbers. If a ramjet could be designed that had a supersonic internal flow, it would run much cooler and at the same time be able to accelerate a vehicle to double-digit hypersonic Mach numbers, perhaps reaching the magic Mach 25, signifying orbital velocity. Such an engine would be a scramjet. Such engines have only recently made their first flights, but they nevertheless are important in hypersonics and point the way toward future practical air-breathing hypersonics.

An important concern explored at the NACA’s Lewis Flight Propulsion Laboratory during the 1950s was whether it was possible to achieve supersonic combustion without producing attendant shock waves that slow internal flow and heat it. Investigators Irving Pinkel and John Serafini proposed experiments in supersonic combustion under a supersonic wing, postulating that this might afford a means of furnishing additional lift. Lewis researchers also studied supersonic combustion testing in wind tunnels. Supersonic tunnels produced very low air pressure, but it was known that aluminum borohydride could promote the ignition of pentane fuel even at pressures as low as 0.03 atmospheres. In 1955, Robert Dorsch and Edward Fletcher successfully demonstrated such tunnel combustion, and subsequent research indicated that combustion more than doubled lift at Mach 3.

Though encouraging, this work involved flow near a wing, not in a ramjet-like duct. Even so, NACA aerodynamicists Richard Weber and John MacKay posited that shock-free flow in a supersonic duct could be attained, publishing the first open-literature discussion of theoretical scramjet performance in 1958, which concluded: "the trends developed herein indicate that the [scramjet] will provide superior performance at higher hypersonic flight speeds.”[644] The Weber-MacKay study came a year after Marquardt researchers had demonstrated supersonic combustion of a hydrogen and air mix. Other investigators working contemporaneously were the manager William Avery and the experimentalist Frederick Billig, who independently achieved supersonic combustion at the Johns Hopkins University Applied Physics Laboratory (APL), and J. Arthur Nicholls at the University of Michigan.[645]

The most influential of all scramjet advocates was the colorful Italian aerodynamicist, partisan leader, and wartime émigré Antonio Ferri. Before the war, as a young military engineer, he had directed supersonic wind tunnel studies at Guidonia, Benito Mussolini’s showcase aeronautical research establishment outside Rome. In 1943, after the collapse of the Fascist regime and the Nazi assumption of power, he left Guidonia, leading a notably successful band of anti-Nazi, anti-Fascist partisans. Brought to America by Moe Berg, a baseball player turned intelligence agent, Ferri joined NACA Langley, becoming Director of its Gas Dynamics Branch. Turning to the academic world, he secured a professorship at Brooklyn Polytechnic Institute. He formed a close association with Alexander Kartveli, chief designer at Republic Aviation and designer of the P-47, F-84, XF-103, and F-105. Indeed, Kartveli’s XF-103 (which, alas, never was completed or flown) employed a Ferri engine concept. In 1956, Ferri established General Applied Science Laboratories (GASL), with financial backing from the Rockefellers.[646]

Ferri emphasized that scramjets could offer sustained performance far higher than rockets could, and his strong reputation ensured that people listened to him. At a time when shock-free flow in a duct still loomed as a major problem, Ferri did not flinch from it but instead took it as a point of departure. He declared in September 1958 that he had achieved it, thus taking a position midway between the demonstrations at Marquardt and APL. Because he was well known, his claim turned the scramjet from a wish into an invention, one that might be made practical.

He presented his thoughts publicly at a technical colloquium in Milan in 1960 ("Many of the older men present,” John Becker wrote subsequently, "were politely skeptical”) and went on to give a far more detailed discussion in May 1964, at the Royal Aeronautical Society in London. This was the first extensive public presentation on hypersonic propulsion, and the attendees responded with enthusiasm. One declared that whereas investigators "had been thinking of how high in flight speed they could stretch conventional subsonic burning engines, it was now clear that they should be thinking of how far down they could stretch supersonic burning engines,” and another added that Ferri now was "assailing the field which until recently was regarded as the undisputed regime of the rocket.”[647]

Scramjet advocates were offered their first opportunity to actually build such propulsion systems with the Air Force’s abortive Aerospaceplane program of the late 1950s-mid-1960s. A contemporary to Dyna-Soar but far less practical, Aerospaceplane was a bold yet premature effort to produce a logistical transatmospheric vehicle and possible orbital strike system. Conceived in 1957 and initially known as the Recoverable Orbital Launch System (ROLS), Aerospaceplane attracted surprising interest from industry. Seventeen aerospace companies submitted contract proposals and related studies; Convair, Lockheed, and Republic submitted detailed designs. The Republic concept had the greatest degree of engine-airframe integration, a legacy of Ferri’s partnership with Kartveli.

By the early 1960s, Aerospaceplane not surprisingly was beset with numerous developmental problems, along with a continued debate over whether it should be a single- or two-stage system and what proportion of its propulsion should be turbine, scramjet, and pure rocket. Though it briefly outlasted Dyna-Soar, it met the same harsh fate. In the fall of 1963, the Air Force Scientific Advisory Board damned the program in no uncertain terms, noting: "Aerospaceplane has had such an erratic history, has involved so many clearly infeasible factors, and has been subjected to so much ridicule that from now on this name should be dropped. It is recommended that the Air Force increase [its] vigilance [so] that no new program achieves such a difficult position.”[648] The next year, Congress slashed its remaining funding, and Aerospaceplane was at last consigned to a merciful oblivion.

In the wake of Aerospaceplane’s cancellation, both the Air Force and NASA maintained an interest in advancing scramjet propulsion for transatmospheric aircraft. The Navy’s scramjet interest, though great, was primarily in smaller engines for missile applications. But Air Force and NASA partisans formed an Ad-Hoc Working Group on Hypersonic Scramjet Aircraft Technology.

Both agencies pursued development programs that sought to build and test small scramjet modules. The Air Force Aero-Propulsion Laboratory sponsored development of an Incremental Scramjet flight-test program at Marquardt. This proposed test vehicle underwent extensive analysis and study, though without actually flying as a functioning scramjet testbed. The first manifestation of Langley work was the so-called Hypersonic Research Engine (HRE), an axisymmetric scramjet of circular cross section with a simple Oswatitsch spike inlet, designed by Anthony duPont. Garrett AiResearch built this engine, planned for a derivative of the X-15. The HRE never actually flew as a "hot” functioning engine, though the X-15A-2 flew repeatedly with a boilerplate test article mounted on the stub ventral fin (during its record flight to Mach 6.70 on October 3, 1967, searing hypersonic shock interactions melted it off the plane). Subsequent tunnel tests revealed that the HRE was, unfortunately, the wrong design. A podded and axisymmetric design, like an airliner’s fanjet, it could only capture a small fraction of the air that flowed past a vehicle, resulting in greatly reduced thrust. Integrating the scramjet with the airframe, so that it used the forebody to assist inlet performance and the afterbody as a nozzle enhancement, would more than double its thrust.[649]

Investigation of such concepts began at Langley in 1968, with pioneering studies by researchers John Henry, Shimer Pinckney, and others. Their work expanded upon a largely Ferri-inspired base, defining what emerged as common basic elements of subsequent Langley scramjet research. It included a strong emphasis upon airframe integration, use of fixed geometry, a swept inlet that could readily spill excess airflow, and the use of struts for fuel injection. Early observations, published in 1970, showed that struts were practical for a large supersonic combustor at Mach 8. The program went on to construct test scramjets and conducted almost 1,000 wind tunnel test runs of engines at Mach 4 and Mach 7. Inlets at Mach 4 proved sensitive to "unstarts,” a condition where the shock wave is displaced, disrupting airflow and essentially starving the engine of its oxygen. Flight at Mach 7 raised the question of whether fuel could mix and burn in the short available combustor length.[650]

Langley test engines, like engines at GASL, Marquardt, and other scramjet research organizations, encountered numerous difficulties. Large disparities existed between predicted performance and that actually achieved in the laboratory. Indeed, the scramjet, advanced so boldly in the mid-1950s, would not be ready for serious pursuit as a propulsive element until 1986. Then, on the eve of the National Aerospace Plane development program, Langley researchers Burton Northam and Griffin Anderson announced that NASA had succeeded at last in developing a practical scramjet. They proclaimed triumphantly: "At both Mach 4 and Mach 7 flight conditions, there is ample thrust both for acceleration and cruise.”[651]

Out of such optimism sprang the National Aero-Space Plane program, which became a central feature of the presidency of Ronald Reagan. It was linked to other Reagan-era defense initiatives, particularly his Strategic Defense Initiative, a ballistic missile shield intended to reduce the threat of nuclear war, which critics caustically belittled as "Star Wars.” SDI called for the large-scale deployment of defensive arms in space, and it became clear that the Space Shuttle would not be their carrier. Experience since the Shuttle’s first launch in April 1981 had shown that it was costly and took a long time to prepare for relaunch. The Air Force was unwilling to place the national eggs in such a basket. In February 1984, Defense Secretary Caspar Weinberger approved a document stating that total reliance upon the Shuttle represented an unacceptable risk.

An Air Force initiative was under way at the time that looked toward an alternative. Gen. Lawrence A. Skantze, Chief of Air Force Systems Command (AFSC), had sponsored studies of Trans Atmospheric Vehicles (TAVs) by Air Force Aeronautical Systems Division (ASD). These reflected concepts advanced by ASD’s chief planner, Stanley A. Tremaine, as well as interest from Air Force Space Division (SD), the Defense Advanced Research Projects Agency (DARPA), and Boeing and other companies. TAVs were SSTO craft intended to use the Space Shuttle Main Engine (SSME) and possibly would be air-launched from derivatives of the Boeing 747 or Lockheed C-5. In August 1982, ASD had hosted a 3-day conference on TAVs, attended by representatives from AFSC’s Space Division and DARPA. In December 1984, ASD went further. It established a TAV Program Office to "streamline activities related to long-term, preconceptual design studies.”[652]

DARPA’s participation was not surprising, for Robert Cooper, head of this research agency, had elected to put new money into ramjet research. His decision opened a timely opportunity for Anthony duPont, who had designed the HRE for NASA. DuPont held a strong interest in "combined-cycle engines” that might function as a turbine air breather, translate to ram/scram, and then perhaps use some sophisticated air collection and liquefaction process to enable them to boost as rockets into orbit. There are several types of these engines, and duPont had patented such a design as early as 1972. A decade later, he still believed in it, and he learned that Anthony Tether was the DARPA representative who had been attending TAV meetings.

Tether sent him to Cooper, who introduced him to DARPA aerodynamicist Robert Williams, who brought in Arthur Thomas, who had been studying scramjet-powered spaceplanes as early as Sputnik. Out of this climate of growing interest came a $5.5 million DARPA study program, Copper Canyon. Its results were so encouraging that DARPA took the notion of an air-breathing single-stage-to-orbit vehicle to Presidential science adviser George Keyworth and other senior officials, including Air Force Systems Command’s Gen. Skantze. As Thomas recalled: "The people were amazed at the component efficiencies that had been

The National Aero-Space Plane concept in final form, showing its modified lifting body design approach. NASA.

assumed in the study. They got me aside and asked if I really believed it. Were these things achievable? Tony [duPont] was optimistic everywhere: on mass fraction, on drag of the vehicle, on inlet performance, on nozzle performance, on combustor performance. The whole thing, across the board. But what salved our consciences was that even if these things weren’t all achieved, we still could have something worthwhile. Whatever we got would still be exciting.”[653]

Gen. Skantze realized that SDI needed something better than the Shuttle—and Copper Canyon could possibly be it. Briefings were encouraging, but he needed to see technical proof. That evidence came when he visited GASL and witnessed a subscale duPont engine in operation. Afterward, as DARPA’s Bob Williams recalled subsequently: "the Air Force system began to move with the speed of a spaceplane.”[654] Secretary of Defense Caspar Weinberger received a briefing and put his support behind the effort. In January 1986, the Air Force established a joint-service Air Force-Navy-NASA National Aero-Space Plane Joint Program Office at Aeronautical Systems Division, transferring into it all the personnel

previously assigned to the TAV Program Office. (The program soon received an X-series designation, as the X-30.) Then came the clincher. President Ronald Reagan announced his support for what he now called the "Orient Express” in his State of the Union Address to the Nation on February 4, 1986. President Reagan’s support was not the product of some casual whim: the previous spring, he had ordered a joint Department of Defense-NASA space launch study of future space needs and, additionally, established a national space commission. Both strongly endorsed "aerospace plane development,” the space commission recommending it be given "the highest national priority.”[655]

Though advocates of NASP attempted to sharply differentiate their effort from that of the discredited Aerospaceplane of the 1960s, the NASP effort shared some distressing commonality with its predecessor, particularly an exuberant and increasingly unwarranted optimism that afforded ample opportunity for the program to run into difficulties. In 1984, with optimism at its height, DARPA’s Cooper declared that the X-30 could be ready in 3 years. DuPont, closer to the technology, estimated that the Government could build a 50,000-pound fighter-size vehicle in 5 years for $5 billion. Such predictions proved wildly off the mark. As early as 1986, the "Government baseline” estimate of the aircraft rose to 80,000 pounds. Six years later, in 1992, its gross weight had risen eightfold, to 400,000 pounds. It also had a "velocity deficit” of 3,000 feet per second, meaning that it could not possibly attain orbit. By the next year, NASP "lay on life support.”[656]

It had evolved from a small, seductively streamlined speedster to a fatter and far less appealing shape more akin to a wooden shoe, entering a death spiral along the way. It lacked performance, so it needed greater power and fuel, which made it bigger, which meant it lacked performance, so it needed greater power and fuel, which made it bigger . . . and bigger . . . and bigger. X-30 could never attain the "design closure” permitting it to reach orbit. NASP’s support continuously softened, particularly as technical challenges rose, performance estimates fell, and other national issues grew in prominence. It finally withered in the mid-1990s, leaving unresolved what, if anything, scramjets might achieve.[657]

The Advent of Fixed-Base Simulation

Simulating flight has been an important part of aviation research since even before the Wright brothers. The wind tunnel, invented in the 1870s, represented one means of simulating flight conditions. The rudimentary Link trainer of the Second World War, although it did not attempt to represent any particular airplane, was used to train pilots on the proper navigation techniques to use while flying in the clouds. Toward the end of the Second World War, researchers within Government, the military services, academia, and private industry began experimenting with analog computers to solve differential equations in real time. Electronic components, such as amplifiers, resistors, capacitors, and servos, were linked together to perform mathematical operations, such as arithmetic and integration. By patching many of these components together, it was possible to continuously solve the equations of motion for a moving object. There are six differential equations that can be used to describe the motion of an object. Three rotational equations identify pitching, rolling, and yawing motions, and three translational equations identify linear motion in fore-and-aft, sideways, and up-and-down directions. Each of these equations requires two independent integration processes to solve for the vehicle velocities and positions. Prior to the advent of analog computers, the integration process was a very tedious, manual operation and not amenable to real-time solutions. Analog computers allowed the integration to be accomplished in real time, opening the door to pilot-in-the-loop simulation. The next step was the addition of controlling inputs from an operator (stick and rudder pedals) and output displays (dials and oscilloscopes) to permit continuous, real-time control of a simulated moving object. Early simulations only solved three of the equations of motion, usually pitch rotation and the horizontal and vertical translational equations, neglecting some of the minor coupling terms that linked all six equations. As analog computers became more available and affordable, the simulation capabilities expanded to include five and eventually all six of the equations of motion (commonly referred to as "six degrees of freedom” or 6DOF).
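The double-integration loop described above is easy to sketch in modern code. The following Python fragment is purely illustrative: the force and moment expressions are invented placeholders, not data for any aircraft discussed here. It shows the three-equation longitudinal case the early simulations solved (pitch rotation plus horizontal and vertical translation), with each equation integrated twice per step, first from acceleration to velocity and then from velocity to position, just as the analog computers did continuously.

```python
# Minimal 3-DOF (pitch-plane) simulation loop, illustrative only.
# The dynamics constants below are invented placeholders, not real aircraft data.

def step(state, elevator, dt):
    """Advance the longitudinal equations of motion one time step.

    state = [x, z, theta, u, w, q]:
      x, z  : horizontal / vertical position (m)
      theta : pitch attitude (rad)
      u, w  : horizontal / vertical velocity (m/s)
      q     : pitch rate (rad/s)
    """
    x, z, theta, u, w, q = state

    # Placeholder force/moment model (stand-ins for measured aerodynamic data):
    g = 9.81
    ax = -0.02 * u                    # drag-like deceleration
    az = -g + 0.05 * u * theta        # gravity plus a crude lift term
    q_dot = 4.0 * elevator - 0.8 * q  # control power and pitch damping

    # First integration: accelerations -> velocities (one per equation)
    u += ax * dt
    w += az * dt
    q += q_dot * dt

    # Second integration: velocities -> positions/attitude
    x += u * dt
    z += w * dt
    theta += q * dt
    return [x, z, theta, u, w, q]

# A pilot-in-the-loop setup would read the stick each frame; here we just
# hold a small nose-up elevator input for one simulated second.
state = [0.0, 1000.0, 0.0, 100.0, 0.0, 0.0]
for _ in range(100):          # 100 steps of 10 ms
    state = step(state, elevator=0.01, dt=0.01)
print(state)
```

Running the same loop fast enough to keep pace with the wall clock, while sampling a stick and driving a display, is all that "real-time, pilot-in-the-loop" simulation means; the analog machines achieved it with amplifiers and servos rather than software.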

By the mid-1950s, the Air Force, on NACA advice, had acquired a Goodyear Electronic Differential Analyzer (GEDA) to predict aircraft handling qualities based on the extrapolation of data acquired from previous test flights. One of the first practical applications of simulation was the analysis of the F-100A roll-coupling accident that killed North American Aviation (NAA) test pilot George "Wheaties” Welch on October 12, 1954, one of six similar accidents that triggered an emergency grounding of the Super Sabre. By programming the pilot’s inputs into a set of equations of motion representing the F-100A, researchers duplicated the circumstances of the accident. The combination of simulation and flight-testing on another F-100A at the NACA High-Speed Flight Station (now the Dryden Center) forced redesign of the aircraft. North American increased the size of the vertical fin by 10 percent and, when even this proved insufficient, increased it again by nearly 30 percent, modifying existing and new production Super Sabres with the larger tail. Thus modified, the F-100 went on to a very successful career as a mainstay Air Force fighter-bomber.[719]

Another early application of computerized simulation analysis occurred during the Air Force-NACA X-2 research airplane program in 1956. NACA engineer Richard E. Day established a simulation of the X-2 on the Air Force’s GEDA analog computer. He used a B-17 bombardier’s stick as an input control and a simple oscilloscope with a line representing the horizon as a display, along with some voltmeters for airspeed, angle of attack, etc. Although the controls and display were crude, the simulation did accurately duplicate the motions of the airplane. Day learned that lateral control inputs near Mach 3 could result in a roll reversal and loss of control. He showed these characteristics to Capt. Iven Kincheloe on the simulator before his flight to 126,200 feet on September 7, 1956. When the rocket engine quit near Mach 3, the airplane was climbing steeply but was in a 45-degree bank. Kincheloe remembered the simulation results and did not attempt to right the airplane with lateral controls until well into the entry at a lower Mach number, thus avoiding the potentially disastrous coupled motion observed on the simulator.[720]

Kincheloe’s successor as X-2 project pilot, Capt. Milburn Apt, also flew the simulator before his ill-fated high-speed flight in the X-2 on September 27, 1956. When the engine exhausted its propellants, Apt was at Mach 3.2 and over 65,000 feet, heading away from Edwards and apparently concerned that the speeding plane would be unable to turn and glide home to its planned landing on Rodgers Dry Lake. When he used the lateral controls to begin a gradual turn back toward the base,

the X-2 went out of control. Apt was badly battered in the violent motions that ensued, was unable to use his personal parachute, and was killed.[721]

The loss of the X-2 and Apt shocked the Edwards community. The accident could be duplicated on the simulator, solidifying the value of simulation in the field of aviation and particularly flight-testing.[722] The X-2 experience convinced the NACA (later NASA) that simulation must play a significant role in the forthcoming X-15 hypersonic research aircraft program. The industry responded to the need with larger and more capable analog computer equipment.[723]

The X-15 simulator constituted a significant step in both simulator design and flight-test practice. It consisted of several analog computers connected to a fixed-base cockpit replicating that of the aircraft, and an “iron bird” duplication of all control system hardware (hydraulic actuators, cable runs, control surface mass balances, etc.). Computer output parameters were displayed on the normal cockpit instruments, though there were no visual displays outside the cockpit. This simulator was first used at the North American plant in Inglewood, CA, during the design and manufacture of the airplane. It was later transferred to NASA DFRC at Edwards AFB and became the primary tool used by the X-15 test team for mission planning, pilot training, and emergency procedure definition.

The high g environment and the high pilot workload during the 10-minute X-15 flights required that the pilot and the operational support team in the control room be intimately familiar with each flight plan. There was no time to communicate emergency procedures if an emergency occurred—they had to be already embedded in the memories of the pilot and team members. That necessity highlighted another issue underscored by the X-15’s simulator experience: the importance of replicating with great fidelity the actual cockpit layout and instrumentation in the simulator. On at least two occasions, X-15 pilots nearly misread their instrumentation or reached for the wrong switch because of seemingly minor differences between the simulator and the instrumentation layout of the X-15 aircraft.[724]

Overall, test pilots and flight-test engineers uniformly agreed that the X-15 program could not have been accomplished safely or productively without the use of the simulator. Once the X-15 began flying, engineers updated the simulator using data extracted from actual flight experience, steadily refining and increasing its fidelity. An X-15 pilot “flew” the simulator an average of 15 hours for every flight, roughly 1 hour of simulation for every minute of flying time. The X-15 experience emphasized the profound value of simulation, and soon, nearly all new airplanes and spacecraft were accompanied by fixed-base simulators for engineering analysis and pilot/astronaut training.

NASA and the Evolution of Computational Fluid Dynamics

John D. Anderson, Jr.


The expanding capabilities of the computer readily led to its increasing application to the aerospace sciences. NACA-NASA researchers were quick to realize how the computer could supplement traditional test methodologies, such as the wind tunnel and structural test rig. Out of this came a series of studies leading to the evolution of computer codes used to undertake computational fluid dynamics and structural predictive studies. Those codes, refined over the last quarter century and available to the public, are embodied in many current aircraft and spacecraft systems.

The visitor to the Smithsonian Institution’s National Air and Space Museum (NASM) in Washington, DC, who takes the east escalator to the second floor, turns left into the Beyond the Limits exhibit gallery, and then turns left again into the gallery’s main bay is suddenly confronted by three long equations with a bunch of squiggly symbols neatly painted on the wall. These are the Navier-Stokes equations, and the NASM (to this author’s knowledge) is the world’s only museum displaying them so prominently. These are not some introductory equations drawn for a first course in algebra, with simple symbols like a + b = c. Rather, these are “partial derivatives” strung together from the depths of university-level differential calculus. What are the Navier-Stokes equations, why are they in a gallery devoted to the history of the computer as applied to flight vehicles, and what do they have to do with the National Aeronautics and Space Administration (which, by the way, dominates the artifacts and technical content exhibited in this gallery)?
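For readers curious about their form: in the simplified case of a viscous incompressible flow with constant properties, the Navier-Stokes equations reduce to a continuity equation plus a vector momentum equation. This is the standard textbook rendering, not a transcription of the museum wall (the full compressible equations, with the energy equation included, run considerably longer):

```latex
\nabla \cdot \mathbf{V} = 0,
\qquad
\rho \left( \frac{\partial \mathbf{V}}{\partial t}
          + (\mathbf{V} \cdot \nabla)\,\mathbf{V} \right)
  = -\nabla p + \mu \nabla^{2} \mathbf{V}
```

Here \(\mathbf{V}\) is the velocity field, \(p\) the pressure, \(\rho\) the density, and \(\mu\) the viscosity; the “squiggly symbols” are exactly these partial derivatives and gradient operators.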

The answers to all these questions have to do with computational fluid dynamics (CFD) and the pivotal role played by the National Aeronautics and Space Administration (NASA) in the development of CFD over the past 50 years. The role played by CFD in the study and understanding of fluid dynamics in general and in aerospace engineering

in particular has grown from a fledgling research activity in the 1960s to a powerful “third” dimension in the profession, an equal partner with pure experiment and pure theory. Today it is used to help design airplanes, study the aerodynamics of automobiles, enhance wind tunnel testing, develop global weather models, and predict the tracks of hurricanes, to name just a few applications. New jet engines are developed with an extensive use of CFD to model flows and combustion processes, and even the flow field in the reciprocating engine of the average family automobile is laid bare for engineers to examine and study using the techniques of CFD.

The history of the development of computational fluid dynamics is an exciting and provocative story. In the whole spectrum of the history of technology, CFD is still very young, but its importance today and in the future is of the first magnitude. This essay offers a capsule history of the development of theoretical fluid dynamics, tracing how the Navier-Stokes equations came about, discussing just what they are and what they mean, and examining their importance and what they have to do with the evolution of computational fluid dynamics. It then discusses what CFD means to NASA—and what NASA means to CFD. Of course, many other players have been active in CFD, in universities, other Government laboratories, and in industry, and some of their work will be noted here. But NASA has been the major engine that powered the rise of CFD for the solution of what were otherwise unsolvable problems in the fields of fluid dynamics and aerodynamics.

NASA Spawns NASTRAN, Its Greatest Computational Success

The project to develop a general-purpose finite element structural analysis system was conceived in the midst of this rapid expansion of finite element research in the 1960s. The development, and subsequent management, enhancement, and distribution, of the NASA Structural Analysis System, or NASTRAN, unquestionably constitutes NASA’s greatest single contribution to computerized structural analysis—and arguably the single most influential contribution to the field from any source. NASTRAN is the workhorse of structural analysis: there may be more advanced programs in use for certain applications or in certain proprietary or research environments, but NASTRAN is the most capable general-purpose, generally available program for structural analysis in existence today, even more than 40 years after it was introduced.

AD-1 Oblique Wing Demonstrator

The AD-1 was a small and inexpensive demonstrator aircraft intended to investigate some of the issues of an oblique wing. It flew between 1979 and 1982. It had a maximum takeoff weight of 2,100 pounds and a maximum speed of 175 knots. It is an interesting case because (1) NASA had an unusually large role in its design and integration—it was essentially a NASA aircraft—and (2) it provides a neat illustration of the prosecution of a particular objective through design, analysis, wind tunnel test, flight test, and planned follow-on development.[953]

The oblique wing was conceived by German aerodynamicists in the midst of the Second World War. But it was only afterward, through the

AD-1 three view. NASA.

brilliance and determination of NASA aerodynamicist Robert T. Jones that it advanced to actual flight. Indeed, Jones, father of the American swept wing, became one of the most persistent proponents of the oblique wing concept.[954] The principal advantage of the oblique wing is that it spreads both the lift and volume distributions of the wing over a greater length than that of a simple symmetrically swept wing. This has the effect of reducing both the wave drag because of lift and the wave drag because of volume, two important components of supersonic drag. With this theoretical advantage come practical challenges. The challenges fall into two broad categories: the effects of asymmetry on the flight characteristics (stability and handling qualities) of the vehicle, and the aeroelastic stability of the forward-swept wing. The research objectives of the AD-1 were primarily oriented toward flying qualities. The AD-1 was not intended to explore structural dynamics or divergence in depth, other

than establishing safety of flight. Mike Rutkowski analyzed the wing for flutter and divergence using NASTRAN and other methods.[955]

However, the project did make a significant accomplishment in the use of static aeroelastic tailoring. The fiberglass wing design by Ron Smith was tailored to bend just enough, with increasing g, to cancel out an aerodynamically induced rolling moment. Pure bending of the oblique wing increases the incidence (and therefore the lift) of the forward-swept tip and decreases the incidence (and lift) of the aft-swept tip. In a pullup maneuver, increasing lift coefficient (CL) and load factor at a given flight condition, this would cause a roll away from the forward-swept tip. At the same time, induced aerodynamic effects (the downwash/upwash distribution) increase the lift at the tip of an aft-swept wing. On an aircraft with only one aft-swept tip, this would cause a roll toward the forward-swept side. The design intent for the AD-1 was to have these two effects cancel each other as nearly as possible, so that the net change in rolling moment because of increasing g at a given flight condition would be zero. The design condition was CL = 0.3 for 1-g flight at 170 knots, 12,500-foot altitude, and a weight of 1,850 pounds, with the wing at 60-degree sweep.[956]
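The cancellation logic can be sketched numerically. The toy model below is purely illustrative (its coefficients are invented, not AD-1 data): the bending term and the upwash term contribute rolling moments of opposite sign, and the tailoring goal is that their sum stay near zero as load factor varies.

```python
def net_rolling_moment(g, k_bend=0.012, k_upwash=0.012):
    """Toy model of the AD-1 tailoring goal; all coefficients invented.

    Wing bending lifts the forward-swept tip more, rolling the aircraft
    away from that tip (negative term); upwash raises lift on the
    aft-swept tip, rolling it toward the forward-swept side (positive
    term). Both grow with incremental load factor above 1-g trim.
    """
    delta_load = g - 1.0
    bending_term = -k_bend * delta_load
    upwash_term = k_upwash * delta_load
    return bending_term + upwash_term

# With matched coefficients (the tailored design), the net rolling moment
# stays zero at any load factor; a mismatch leaves a residual roll.
balanced = net_rolling_moment(3.0)
untailored = net_rolling_moment(3.0, k_bend=0.020)
```

In the real design problem, of course, both terms vary nonlinearly with flight condition, which is why the match was exact only at the stated design point.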

An aeroelastically scaled one-sixth model was tested at full-scale Reynolds number in the Ames 12-Foot Pressure Wind Tunnel. A stiff aluminum wing was used for preliminary tests, then two fiberglass wings. The two fiberglass wings had zero sweep at the 25- and 30-percent chord lines, respectively, bracketing the full-scale AD-1 wing, which had zero sweep at 27-percent chord. The wings were tested at the design scaled dynamic pressure and at two lower values to obtain independent variation of wing load because of angle of attack and dynamic pressure at a constant angle of attack. Forces and moments were measured, and deflection was determined from photographs of the wing at test conditions.[957]

Subsequently, “. . . the actual wing deflection in bending and twist was verified before flight through static ground loading tests.” Finally, in-flight measurements were made of total force and moment coefficients and of aeroelastic effects. Level-flight decelerations provided angle-of-attack sweeps at constant load, and windup turns provided angle-of-attack sweeps at constant “q” (dynamic pressure). Results were interpreted and compared with predictions. The simulator model, with aeroelastic effects included, realistically represented the dynamic responses of the flight vehicle.[958]

Provision had been made for a mechanical linkage between the pitch and roll controls, to compensate for any pitch-roll coupling observed in flight. However, the intent of the aeroelastic wing was achieved closely enough that the mechanical interconnect was never used.[959] Roll trim was not needed at the design condition (60-degree sweep) nor at zero sweep, where the aircraft was symmetric. At intermediate sweep angles, roll trim was required. The correction of this characteristic was not pursued because it was not a central objective of the project. Also, the airplane experienced fairly large changes in rolling moment with angle of attack beyond the linear range. Vortex lift, other local flow separations, and ultimately full stall of the aft-swept wing occurred in rapid succession as angle of attack was increased from 8 to approximately 12 degrees. Therefore, it would be a severe oversimplification to say that the AD-1 had normal handling qualities.[960]

The AD-1 flew at speeds of 170 knots or less. On a large, high-speed aircraft, divergence of the forward-swept wing would also be a consideration. This would be addressed by a combination of inherent stiffness, aeroelastic tailoring to introduce a favorable bend-twist coupling, and, potentially, active load alleviation. The AD-1 project provided initial correlation of measured versus predicted wing bending and its effects on the vehicle’s flight characteristics. NASA planned to take the next step with a supersonic oblique wing aircraft, using the same F-8 airframe that had been used for earlier supercritical wing tests. These studies delved deeper into the aeroelastic issues: “Preliminary studies have been performed to identify critical DOF [Degree of Freedom] for flutter model tests of oblique configurations. An ‘oblique’ mode has been identified with a 5 DOF model which still retains its characteristics with the three rotational DOF’s. An interdisciplinary analysis code (STARS), which is capable of performing flutter and aeroservoelastic analyses, has been developed. The structures module has a large library of elements and in conjunction with numerical analysis routines, is capable of efficiently performing statics, vibration, buckling, and dynamic response analysis of structures. . . . ” The STARS code also included supersonic (potential gradient method) and subsonic (doublet lattice) unsteady aerodynamics calculations. “ . . . Linear flutter models are developed and transformed to the body axis coordinate system and are subsequently augmented with the control law. Stability analysis is performed using hybrid techniques. The major research benefit of the OWRA [Oblique Wing Research Aircraft] program will be validation of design and analysis tools. As such, the structural model will be validated and updated based on ground vibration test (GVT) results. 
The unsteady aero codes will be correlated with experimentally measured unsteady pressures.”[961] While the OWRA program never reached flight (NASA was ready to begin wing fabrication in 1987, expecting first flight in 1991), these comments illustrate the typical interaction of flight programs with analytical methods development and the progressive validation process that takes place. Such methods development is often driven by unconventional problems (such as the oblique wing example here) and eventually finds its way into routine practice in more conventional applications. For example, in the design of large passenger aircraft today, the loads process is typically iterated to include the effects of static aeroelastic deflections on the aerodynamic load distribution.[962]

X-29

The Grumman X-29 aircraft was an extraordinarily ambitious and productive flight-test program run between 1984 and 1992. It demonstrated a large (approximately 35 percent) unstable static margin in the pitch axis, a digital active flight control system utilizing three-surface pitch control (all-moving canards, wing flaps, and aft-mounted strake flaps), and a thin supercritical forward-swept wing, aeroelastically tailored to prevent structural divergence. The X-29 was funded by the Defense Advanced Research Projects Agency (DARPA) through the USAF Aeronautical Systems Division (ASD). Grumman was responsible for aircraft design and fabrication, including the primary structural analyses, although there was extensive involvement of NASA and the USAF in addressing the entire realm of unique technical issues on the project. NASA Ames Research Center/Dryden Flight Research Facility was the responsible test organization.[963]

Careful treatment of aeroelastic stability was necessary for the thin forward-swept wing (FSW) to be used on a supersonic, highly maneuverable aircraft. According to Grumman, “Automated design and analysis procedures played a major role in the development of the X-29 demonstrator aircraft.” Grumman used one of its programs, called FASTOP, to optimize the X-29’s structure to avoid aeroelastic divergence while minimizing the weight impact.[964]

In contrast to the AD-1, which allowed the forward-swept wing to bend along its axis, thereby increasing the lift at the forward tip, the X-29’s forward-swept wings were designed to twist when bending, in a manner that relieved the load. This was accomplished by orienting the primary spanwise fibers in the composite skins at a forward “kick angle” relative to the nominal structural axis of the wing. The optimum angle was found in a 1977 Grumman feasibility study: “Both beam and coarse-grid, finite-element models were employed to study various materials and laminate configurations with regard to their effect on divergence and flutter characteristics and to identify the weight increments required to avoid divergence.”[965] While a pure strength design was optimum at zero kick angle, an angle of approximately 10 degrees was found to be best for optimum combined strength and divergence requirements.

When the program reached the flight-test phase, hardware-in-the-loop simulation was integral to the flight program. During the functional and envelope expansion phases, every mission was flown on the simulator before it was flown in the airplane.[966] In flight, the X-29 No. 1 aircraft (of two that were built) carried extensive and somewhat unique instrumentation to measure the loads and deflections of the airframe, and particularly of the wing. This consisted of pressure taps on the left wing and canard, an optical deflection measurement system on the right wing, strain gages for static structural load measurement, and accelerometers for structural dynamic and buffet measurement.

The most unusual element of this suite was the optical system, which had been developed and used previously on the HiMAT demonstrator (see preceding description). Optical deflection data were sampled at a


rate of 13 samples per channel per second. Data quality was reported to be very good, and initial results showed a good match to predictions. In addition, pressure data from the 156 wing and 17 canard pressure taps was collected at a rate of 25 samples per channel per second. One hundred six strain gages provided static loads measurement. Structural dynamic data from the 21 accelerometers was measured at 400 samples per channel per second. All data was transmitted to a ground station and, during the limited-envelope phase, to Grumman in Calverton, NY, for analysis.[967] “Careful analyses of the instrumentation requirements, flight test points, and maneuvers are conducted to ensure that data of sufficient quality and quantity are acquired to validate the design, fabrication, and test process.”[968] The detailed analysis and measurements provided extensive opportunities to validate predictive methods.

The X-29 was used as a test case for NASA’s STARS structural analysis computer program, which had been upgraded with aeroservoelastic analysis capability. In spite of the exhaustive analysis done ahead of time, there were, as is often the case, several “discoveries” made during flight test. Handling qualities at high alpha were considerably better than predicted, leading to an expanded high-alpha control and maneuverability investigation in the later phases of the project. The X-29

No. 1 was initially limited to 21-degree angle of attack, but, during subsequent Phase II envelope expansion testing, its test pilots concluded it had “excellent control response to 45 deg. angle of attack and still had limited controllability at 67 deg. angle of attack.”[969]

There were also at least two distinct types of aeroservoelastic phenomena encountered: buffet-induced modes, and a coupling between the canard position feedback and the aircraft’s longitudinal aerodynamic and structural modes.[970] The modes mentioned involved frequencies between 11 and 27 hertz (Hz). Any aircraft with an automatic control system may experience interactions between the aircraft’s structural and aerodynamic modes and the control system. Typically, the aeroelastic frequencies are much higher than the characteristic frequencies of the motion of the aircraft as a whole. However, the 35-percent negative static margin of the X-29A was much larger than any unstable margin designed into an aircraft before or since. As a consequence, its divergence timescale was much more rapid, making it particularly challenging to tune the flight control system to respond quickly enough to aircraft motions without being excited by structural dynamic modes. Simply stated, the X-29A provided ample opportunity for aeroservoelastic phenomena to occur, and such were indeed observed, a contribution of the aircraft that went far beyond simply demonstrating the aerodynamic and maneuver qualities of an unstable forward-swept canard planform.[971]
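The control challenge posed by a rapid divergence timescale can be made concrete with a first-order model. In such a model a pitch disturbance grows as exp(λt) for an unstable root λ, so the time for it to double in amplitude is ln 2 / λ. The sketch below uses illustrative roots only, not X-29 flight data; the faster the doubling, the less latency the flight control loop can tolerate before the airplane departs.

```python
import math

def time_to_double(lam):
    """Time (s) for a disturbance growing as exp(lam * t) to double,
    where lam is the unstable root in 1/s."""
    return math.log(2.0) / lam

# Illustrative unstable roots, not X-29 flight data: the larger the root,
# the faster the divergence and the less delay the control loop can afford.
for lam in (1.0, 3.0, 5.0):
    print(f"lambda = {lam:.0f} 1/s -> amplitude doubles in "
          f"{time_to_double(lam) * 1000:.0f} ms")
```

The competing requirement, filtering out the 11-27 Hz structural modes without adding phase lag the unstable airframe cannot afford, is exactly the tension the X-29 team had to resolve.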

In sum, each of these five advanced flight projects provides important lessons learned across many disciplines, particularly the validation of computer methods in structural design and/or analysis. The YF-12 project provided important correlation of analysis, ground-test data, and flight data for an aircraft under complex aerothermodynamic loading. The Rotor Aerodynamic Limits survey collected important data on helicopter rotors—a class of system often taken for granted yet one that represents an incredibly complex interaction of aerodynamic, aeroelastic, and inertial phenomena. The HiMAT, AD-1, and X-29 programs each advanced the state of the art in aeroelastic design as applied to nontraditional, indeed exotic, planforms featuring unstable design, composite structures, and advanced flight control concepts. Finally, the data required to validate structural analysis and design methods do not automatically come from the testing normally performed by aircraft developers and users. Special instrumentation and testing techniques are required. NASA has developed the facilities and the knowledge base needed for many kinds of special testing and is able to assign the required priority to such testing. As these cases show, NASA therefore plays a key role in this process of gaining knowledge about the behavior of aircraft in flight, evaluating predictive capabilities, and flowing that experience back to the people who design the aircraft.

The High-Speed Environment

During World War II the whole of aeronautics used aluminum. There was no hypersonics; the very word did not exist, for it took until 1946 for the investigator Hsue-shen Tsien to introduce it. Germany’s V-2 was flying at Mach 5, but its nose cone was of mild steel, and no one expected that this simple design problem demanded a separate term for its flight regime.[1018]

A decade later, aeronautics had expanded to include all flight speeds because of three new engines: the liquid-fuel rocket, the ramjet, and the variable-stator turbojet. The turbojet promised power beyond Mach 3, while the ramjet proved useful beyond Mach 4. The Mach 6 X-15 was under contract. Intermediate-range missiles were in development, with ranges of 1,200 to 1,700 miles, and people regarded intercontinental missiles as preludes to satellite launchers.

A common set of descriptions presents the flight environments within which designers must work. Well beyond Mach 3, engineers accommodate aerodynamic heating through materials substitutions. The aircraft themselves continue to accelerate and cruise much as they do at lower speeds. Beyond Mach 4, however, cruise becomes infeasible because of heating. A world airspeed record for air-breathing flight (one that lasted for nearly the next half century) was set in 1958 at Mach 4.31 (2,881 mph) by the Lockheed X-7, which was made of 4130 steel. The airplane had flown successfully at Mach 3.95, but it failed structurally in flight at Mach 4.31, and no airplane has approached such performance in the past half century.[1019]

No aircraft has ever cruised at Mach 5, and an important reason involves structures and materials. “If I cruise in the atmosphere for 2 hours,” said Paul Czysz of McDonnell-Douglas, “I have a thousand times the heat load into the vehicle that the Shuttle gets on its quick transit of the atmosphere.”[1020] Aircraft indeed make brief visits to such speed regimes, but they don’t stay there; the best approach is to pop out of the atmosphere and then return, the hallmark of a true transatmospheric vehicle.

At Mach 4, aerodynamic heating raises temperatures. At higher Mach, other effects are seen. A reentering intercontinental ballistic missile (ICBM) nose cone, at speeds above Mach 20, has enough kinetic energy to vaporize 5 times its weight in iron. Temperatures behind its bow shock reach 9,000 kelvins (K), hotter than the surface of the Sun. The research physicist Peter Rose has written that this velocity would be “large enough to dissociate all the oxygen molecules into atoms, dissociate about half of the nitrogen, and thermally ionize a considerable fraction of the air.”[1021]

Aircraft thus face a simple rule: they can cruise up to Mach 4 if built with suitable materials, but they cannot cruise at higher speeds. This rule applies not only to entry into Earth’s atmosphere but also to entry into the atmosphere of Jupiter, which is far more demanding but which an entry probe of the Galileo spacecraft investigated in 1995, at Mach 50.[1022]

Other speed limits become important in the field of wind tunnel simulation. The Government’s first successful hypersonic wind tunnel was John Becker’s 11-inch facility, which entered service in 1947. It approached Mach 7, with compressed air giving run times of 40 seconds.[1023] A current facility, which is much larger and located at the National Aeronautics and Space Administration (NASA) Langley Research Center, is the Eight-Foot High-Temperature Tunnel—which also uses compressed air and operates near Mach 7.

The reason for such restrictions involves a fundamental limitation of compressed air: it liquefies if allowed to expand too much in the quest for higher speeds. Higher speeds indeed are achievable, but only by creating shock waves within an instrument for periods measured in milliseconds. Hence, the field of aerodynamics introduces an experimental speed limit of Mach 7, which describes its wind tunnels, and an operational speed limit of Mach 4, which sets a restriction within which cruising flight remains feasible. Compared with these velocities, the usual definition of hypersonics, describing flight beyond Mach 5, is seen to describe nothing in particular.
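The liquefaction limit follows from the standard isentropic-flow relation: expanding a gas to Mach number M drops its static temperature by the factor 1 + ((γ-1)/2)M². The sketch below applies that relation to estimate the supply (stagnation) temperature a tunnel needs to keep its test-section air from condensing; the ~80 K condensation threshold is an assumed round number, not a figure from the source.

```python
GAMMA = 1.4          # ratio of specific heats for air
T_CONDENSE = 80.0    # K, rough threshold below which air begins to liquefy

def required_supply_temperature(mach):
    """Minimum stagnation (supply) temperature, in K, so that the static
    temperature after isentropic expansion to `mach` stays above the
    assumed condensation threshold."""
    ratio = 1.0 + 0.5 * (GAMMA - 1.0) * mach**2   # T0 / T_static
    return T_CONDENSE * ratio

for mach in (5, 7, 10):
    t0 = required_supply_temperature(mach)
    print(f"Mach {mach:>2}: supply air must exceed roughly {t0:.0f} K")
```

Near Mach 7 the required supply temperature already exceeds 850 K, which is why a facility such as the Eight-Foot High-Temperature Tunnel must heat its air, and why still higher Mach numbers push experimenters toward millisecond-duration shock devices instead.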

Project 680J: Survivable Flight Control System YF-4E

In mid-1969, modifications began to convert the prototype McDonnell-Douglas YF-4E (USAF serial No. 62-12200) for the SFCS program. A quadruple-redundant analog computer-based three-axis fly-by-wire flight control system with integrated hydraulic servo-actuator packages was incorporated, and side stick controllers were added to both the front and back cockpits. Roll control was pure fly-by-wire with no mechanical backup. For initial testing, the Phantom’s mechanical flight control system was retained in the pitch and yaw axes as a safety backup. On April 29, 1972, McDonnell-Douglas test pilot Charles P. “Pete” Garrison flew the SFCS YF-4E for the first time from the McDonnell-Douglas factory at Lambert Field in St. Louis, MO. The mechanical flight control system was used for takeoff, with the pilot switching to the fly-by-wire system during climb-out. The aircraft was then flown to Edwards AFB for a variety of additional tests, including low-altitude supersonic flights. After the first 27 flights, which included 23 hours in the full three-axis fly-by-wire configuration, the mechanical flight control system was disabled. First flight in the pure fly-by-wire configuration occurred on January 22, 1973. The SFCS YF-4E flew as a pure fly-by-wire aircraft for the remainder of its flight-test program, ultimately completing over 100 flights.[1138]

Whereas the earlier phases of the flight-test effort were primarily flown by McDonnell-Douglas test pilots, the next aspect of the SFCS

program was focused on an Air Force evaluation of the operational suitability of fly-by-wire and an assessment of the readiness of the technology for transition into new aircraft designs. During this phase, 15 flights were accomplished by two Air Force test pilots (Lt. Col. C. W. Powell and Maj. R. C. Ettinger), who concluded that fly-by-wire was indeed ready and suitable for use in new designs. They also noted that flying qualities were generally excellent, especially during takeoffs and landings, and that the pitch transient normally encountered in the F-4 during rapid deceleration from supersonic to subsonic flight was nearly eliminated. Another aspect of the flight-test effort involved so-called technology transition and demonstration flights in the SFCS aircraft. At this time, the Air Force had embarked on the Lightweight Fighter (LWF) program. One of the two companies developing flight demonstrator aircraft (General Dynamics) had elected to use fly-by-wire in its new LWF design (the YF-16). A block of 11 flights in the SFCS YF-4E was allocated to three pilots assigned to the LWF test force at Edwards AFB (Lt. Col. Jim Ryder, Maj. Walt Hersman, and Maj. Mike Clarke). Based on their experiences flying the SFCS YF-4E, the LWF test force pilots were able to provide valuable inputs into the design, development, and flight test of the YF-16, directly contributing to the dramatic success of that program. An additional 10 flights were allocated to another 10 pilots, who included NASA test pilot Gary E. Krier and USAF Maj. Robert Barlow.[1139] Earlier, Krier had piloted the first flight of a digital fly-by-wire (DFBW) flight control system in the NASA DFBW F-8C on May 25, 1972. That event marked the first time that a piloted aircraft had been flown purely using a fly-by-wire flight control system without any mechanical backup provisions. 
Barlow, as a colonel, would command the Air Force Flight Dynamics Laboratory during execution of several important fly-by-wire flight research efforts. The Air Force YF-16 and the NASA DFBW F-8 programs are discussed in following sections.