NASA Advanced Control Technology for Integrated Vehicles

In 1994, after the conclusion of Air Force S/MTD testing, the aircraft was transferred to NASA Dryden for the NASA Advanced Control Technology for Integrated Vehicles (ACTIVE) research project. ACTIVE sought to determine whether axisymmetric vectored thrust could contribute to drag reduction and increased fuel economy and range compared with conventional aerodynamic controls. The project was a collaborative effort between NASA, the Air Force Research Laboratory, Pratt & Whitney, and Boeing (formerly McDonnell-Douglas). An advanced digital fly-by-wire flight control system was integrated into the NF-15B, which was given NASA tail No. 837. Higher-thrust versions of the Pratt & Whitney F100 engine with newly developed axisymmetric thrust-vectoring exhaust nozzles were installed. The nozzles could deflect engine exhaust up to 20 degrees off centerline, allowing variable thrust control in pitch, yaw, or combinations of the two axes. An integrated propulsion and flight control system controlled both the aerodynamic flight control surfaces and the engines. New cockpit controls and electronics from an F-15E aircraft were also installed in the NF-15B. The first supersonic flight using yaw vectoring occurred in early 1996. Pitch and yaw thrust vectoring were demonstrated at speeds up to Mach 2.0, and yaw vectoring was used at angles of attack up to 30 degrees. An adaptive performance software program was developed and successfully tested in the NF-15B flight control computer; it automatically determined the optimal trim setting for the thrust-vectoring nozzles and the aerodynamic control surfaces to minimize aircraft drag. An improvement of Mach 0.1 in level-flight speed was achieved at Mach 1.3 at 30,000 feet with no increase in engine thrust. The ACTIVE NF-15B continued investigations of integrated flight and propulsion control with thrust vectoring during 1997 and 1998, including an experiment that combined thrust vectoring with aerodynamic controls during simulated ground attack missions. Following completion of the ACTIVE project, the NF-15B was used as a testbed for several other NASA Dryden research experiments, which included the efforts described below.[1275]
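The geometry of nozzle deflection is simple enough to sketch. As a hedged illustration (the thrust figure and deflection values are hypothetical round numbers, not ACTIVE test data), the control force an axisymmetric vectoring nozzle contributes in each axis follows from resolving the gross thrust through the nozzle's off-centerline angles:

```python
import math

def vectored_components(thrust_lbf, pitch_deg, yaw_deg):
    """Resolve engine gross thrust through an axisymmetric nozzle
    deflection into axial, pitch-plane (normal), and yaw-plane (side)
    force components. Angles are the nozzle's deflection off the
    engine centerline in each plane."""
    p, y = math.radians(pitch_deg), math.radians(yaw_deg)
    normal = thrust_lbf * math.sin(p)               # pitch control force
    side = thrust_lbf * math.cos(p) * math.sin(y)   # yaw control force
    axial = thrust_lbf * math.cos(p) * math.cos(y)  # thrust still pushing forward
    return axial, normal, side

# Hypothetical 25,000-lbf gross thrust at the NF-15B's 20-degree limit,
# applied entirely in pitch:
axial, normal, side = vectored_components(25_000.0, 20.0, 0.0)
```

Even this crude decomposition shows why 20 degrees of deflection matters: several thousand pounds of force become available for pitch or yaw control without deflecting an aerodynamic surface.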

Fuel Efficiency Takes Flight

Caitlin Harrington

Decades of NASA research have led to breakthroughs in understanding the physical processes of pollution and in determining how to secure unprecedented levels of propulsion and aerodynamic efficiency to reduce emissions. Goaded by recurring fuel supply crises, NASA has responded with a series of research plans that have dramatically improved the efficiency of gas turbine propulsion systems and the lift-to-drag ratio of new aircraft designs, while addressing myriad other challenges.

Although NASA's aeronautics budget has fallen dramatically in recent years,[1372] the Agency has nevertheless managed to spearhead some of America's biggest breakthroughs in fuel-efficient and environmentally friendly aircraft technology. The National Aeronautics and Space Administration (NASA) has engaged in major programs to increase aircraft fuel efficiency that have laid the groundwork for engines, airframes, and new energy sources—such as alternative fuel and fuel cells—that are still in use today. NASA's research on aircraft emissions in the 1970s also was groundbreaking, leading to a widely accepted view at the national—and later, global—level that pollution can damage the ozone layer and spawning a series of efforts inside and outside NASA to reduce aircraft emissions.[1373]

This case study will explore NASA's efforts to improve the fuel efficiency of aircraft and also reduce emissions, with a heavy emphasis on the 1970s, when the energy crisis and environmental concerns created a national demand for "lean and green" airplanes.[1374] The launch of Sputnik in 1957 and the resulting space race with the Soviet Union spurred the National Advisory Committee for Aeronautics (NACA)—subsequently restructured within the new National Aeronautics and Space Administration—to shift its research heavily toward rocketry—at the expense of aeronautics—until the mid-1960s.[1375] But as commercial air travel grew in the 1960s, NASA began to embark on a series of ambitious programs that connected aeronautics, energy, and the environment. This case study will discuss some of NASA's most important programs in this area.

Key propulsion initiatives to be discussed include the Energy Efficient Engine program—perhaps NASA's greatest contribution to fuel-efficient flight—as well as later efforts to increase propulsion efficiency, including the Advanced Subsonic Technology (AST) initiative and the Ultra Efficient Engine Technology (UEET) program. Another propulsion effort that paved the way for the development of fuel-efficient engine technology was the Advanced Turboprop, which led to current NASA and industry attempts to develop fuel-efficient "open rotor" concepts.

In addition to propulsion research, this case study will also explore several NASA programs aimed at improving aircraft structures to promote fuel efficiency, including initiatives to develop supercritical wings and winglets and efforts to employ laminar flow concepts. NASA has also sought to develop alternative fuels to improve performance, maximize efficiency, and minimize emissions; this case study will touch on liquid hydrogen research conducted by NASA's predecessor—the NACA—as well as subsequent attempts to develop synthetic fuels to replace hydrocarbon-based jet fuel.

Second-Generation DOE-NASA Wind Turbine Systems (Mod-2)

While the primary objectives of the Mod-0, Mod-0A, and Mod-1 programs were research and development, the primary goal of the second-generation Mod-2 project was direct and efficient commercial application. The Mod-2 program was designed to determine the potential cost-effectiveness of megawatt-sized wind turbines operated at remote sites in areas of moderate (14 mph) winds. Significant changes from the Mod-0 and Mod-1 included use of a soft-shell-type tower, an epicyclic gearbox, a quill shaft to attenuate torque and power oscillations, and a rotor designed primarily to commercial steel fabrication standards. Other significant changes were the switch from a fixed to a teetered (pivot connection) hub rotor, which reduced rotor fatigue, weight, and cost; use of tip control rather than full span control; and orienting the rotor upwind rather than downwind, which reduced rotor fatigue and resulted in a 2.5-percent increase in power produced by the system. Each of these changes resulted in a favorable decrease in the cost of electricity. One of the more important changes, as noted in a Boeing conference presentation, was the switch from the stiff truss-type tower to a soft shell tower that weighed less, was much cheaper to fabricate, and enabled the use of heavy but economical and reliable rotor designs.[1505]

[Figure: DOE-NASA Mod-2 megawatt wind turbine cluster, Goldendale, WA. NASA.]

Four primary Mod-2 wind turbine units were designed, built, and operated under the second-generation phase of the DOE-NASA program. The first three machines were built as a cluster at Goldendale, WA, where the Department of Energy selected the Bonneville Power Administration as the participating utility. The operation of several wind turbines at one site afforded NASA the opportunity to study the effects of single and multiple wind turbines operating together while feeding into a power network. The Goldendale project demonstrated the successful operation of a cluster of large NASA Mod-2 horizontal-axis wind turbines operating in an unattended mode within a power grid. For construction of these machines, DOE-NASA awarded a competitively bid contract in 1977 to Boeing. The first of the three wind turbines started operation in November 1980, and the two additional machines went into service between March and May 1981. As of January 1985, the three-turbine cluster had generated over 5,100 megawatt-hours of electricity while synchronized to the power grid for over 4,100 hours. The Mod-2 machines had a rated power of 2.5 megawatts, a rotor-blade diameter of 300 feet, and a hub height (distance of the center of blade rotation to the ground) of 300 feet. Boeing evaluated a number of design options and tradeoffs, including upwind or downwind rotors, two- or three-bladed rotors, teetered or rigid hubs, soft or rigid towers, and a number of different drive train and power generation configurations. A fourth 2.5-megawatt Mod-2 wind turbine was purchased by the Department of the Interior, Bureau of Reclamation, for installation near Medicine Bow, WY, and a fifth turbine unit was purchased by Pacific Gas and Electric for operation in Solano County, CA.[1506]

Inventing the Supercritical Wing

Whitcomb was hardly an individual content to rest on his laurels or bask in the glow of previous successes, and after his success with area ruling, he wasted no time in moving further into the transonic and supersonic research regime. In the late 1950s, the introduction of practical subsonic commercial jetliners led many in the aeronautical community to place a new emphasis on what would be considered the next logical step: a Supersonic Transport (SST). John Stack recognized the importance of the SST to the aeronautics program in NASA in 1958. As NASA placed its primary emphasis on space, he and his researchers would work on the next plateau in commercial aviation. Through the Supersonic Transport Research Committee, Stack and his successor, Laurence K. Loftin, Jr., oversaw work on the design of a Supersonic Commercial Air Transport (SCAT). The goal was to create an airliner capable of outperforming the cruise performance of the Mach 3 North American XB-70 Valkyrie bomber. Whitcomb developed SCAT 4, a six-engine, highly swept, arrow-wing SST configuration that possessed the best lift-to-drag (L/D) ratio among the Langley designs.[194]

Manufacturers' analyses indicated that Whitcomb's SCAT 4 exhibited the lowest range and highest weight among a group of designs that would generate high operating and fuel costs and were too heavy when compared with subsonic transports. Despite President John F. Kennedy's June 1963 commitment to the development of "a commercially successful supersonic transport superior to that being built in any other country in the world," Whitcomb saw the writing on the wall and quickly disassociated himself from the American supersonic transport program in 1963.[195] Always keeping in mind his priorities based on practicality and what he could do to improve the airplane, Whitcomb said: "I'm going back where I know I can make things pay off."[196] For Whitcomb, practicality outweighed the lure of speed equated with technological progress.

Whitcomb decided to turn his attention back toward improving subsonic aircraft, specifically through a totally new airfoil shape. Airfoils and wings had been evolving over the course of the 20th century. They reflected the ever-changing knowledge and requirements for increased aircraft performance and efficiency. They also represented the bright minds that developed them. The thin cambered airfoil of the Wright brothers, the thick airfoils of the Germans in World War I, the industry-standard Clark Y of the 1920s, and the NACA four- and five-digit series airfoils innovated by Eastman Jacobs exemplified advances in and general approaches toward airfoil design and theory.[197]

Despite these advances and others, subsonic aircraft flew at 85-percent efficiency.[198] The problem was that, as subsonic airplanes moved toward their maximum speed of 660 mph, increased drag and instability developed. Air moving over the upper surface of the wings reached supersonic speeds while the rest of the airplane traveled at a slower rate. As a result, the airplane had to fly at slower speeds, with decreased performance and efficiency.[199]

When Whitcomb returned to transonic research in 1964, he specifically wanted to develop an airfoil for commercial aircraft that delayed the onset of high transonic drag near Mach 1 by reducing air friction and turbulence across an aircraft's major aerodynamic surface, the wing. Whitcomb went intuitively against conventional airfoil design, in which the upper surface curved downward on the leading and trailing edges to create lift. He envisioned a smoother flow of air by turning a conventional airfoil upside down. Whitcomb's airfoil was flat on top with a downward-curved rear section.[200] The shape delayed the formation of shock waves and moved them further toward the rear of the wing to increase total wing efficiency. The rear lower surface formed a deeper, more concave curve to compensate for the lift lost along the flattened wing top. The blunt leading edge facilitated better takeoff, landing, and maneuvering performance. Overall, Whitcomb's airfoil slowed airflow, which lessened drag and buffeting, and improved stability.[201]

[Figure: Whitcomb inspecting a supercritical wing model in the 8-Foot TPT. NASA.]

With the wing captured in his mind's eye, Whitcomb turned it into mathematical calculations and transformed his findings into a wind tunnel model created by his own hands. He spent days at a time in the 8-foot Transonic Pressure Tunnel (TPT), sleeping on a nearby cot when needed, as he took advantage of the 24-hour schedule to confirm his findings.[202]

Just as if he were still in his boyhood laboratory, Whitcomb stated: "When I've got an idea, I'm up in the tunnel. The 8-foot runs on two shifts, so you have to stay with the job 16 hours a day. I didn't want to drive back and forth just to sleep, so I ended up bringing a cot out here."[203]

Whitcomb and researcher Larry L. Clark published their wind tunnel findings in "An Airfoil Shape for Efficient Flight at Supercritical Mach Numbers," which summarized much of the early work at Langley. Their investigation compared a supercritical airfoil with a conventional NACA airfoil and concluded that the latter developed a more abrupt drag rise than the former.[204] Whitcomb presented those initial findings at an aircraft aerodynamics conference held at Langley in May 1966.[205] He called his new innovation a "supercritical wing" by combining "super" (meaning "beyond") with "critical" Mach number, the speed at which supersonic flow first appears above the wing. Unlike a conventional wing, where a strong shock wave and boundary layer separation occurred in the transonic regime, a supercritical wing had both a weaker shock wave and less developed boundary layer separation. Whitcomb's tests revealed that a supercritical wing with 35-degree sweep produced 5 percent less drag, improved stability, and encountered less buffeting than a conventional wing at speeds up to Mach 0.90.[206]

Langley Director of Aeronautics Laurence K. Loftin believed that Whitcomb's new supercritical airfoil would reduce transonic drag and result in improved fuel economy. He also knew that wind tunnel data alone would not convince aircraft manufacturers to adopt the new airfoil. Loftin first endorsed independent analyses of Whitcomb's idea at the Courant Institute at New York University, which confirmed the viability of the concept. More importantly, NASA had to prove the value of the new technology to industry by actually building, installing, and flying the wing on an aircraft.[207]

The major players met in March 1967 to discuss turning Whitcomb's concept into a reality. The practicalities of manufacturing, flight characteristics, structural integrity, and safety required a flight research program. The group selected the Navy Chance Vought F-8A fighter as the flight platform. The F-8A possessed specific attributes that made it ideal for the program. While not an airliner, the F-8A had an easily removable modular wing readymade for replacement, fuselage-mounted landing gear that did not interfere with the wing, engine thrust capable of operation in the transonic regime, and lower operating costs than a multi-engine airliner. Langley contracted Vought to design a supercritical wing for the F-8 and collaborated with Whitcomb during wind tunnel testing beginning in the summer of 1967. Unfortunately for the program, NASA Headquarters suspended all ongoing contracts in January 1968, and Vought withdrew from the program.[213]

SST Reincarnated: Birth of the High-Speed Civil Transport

For much of the next decade, the most active sonic boom research took place as part of the Air Force's Noise and Sonic Boom Impact Technology (NSBIT) program. This was a comprehensive effort started in 1981 to study the noises resulting from military training and operations, especially those involving environmental impact statements and similar assessments. Although NASA was not intimately involved with NSBIT, Domenic Maglieri (just before his retirement from the Langley Center) and the recently retired Harvey Hubbard compiled a comprehensive annotated bibliography of sonic boom research, organized into 10 major areas, to help inform NSBIT participants of the most relevant sources of information.[460]

One of the noteworthy achievements of the NSBIT program was to continue building a detailed sonic boom database (known as Boomfile) on all U.S. supersonic aircraft by flying them over a large array of newly developed sensors at Edwards AFB in the summer of 1987. Called the Boom Event Analyzer Recorder (BEAR), these unmanned devices recorded the full sonic boom waveform in digital format.[461] Other contributions of NSBIT were long-term sonic boom monitoring of combat training areas, continued assessment of structures exposed to sonic booms, studies of the effects of sonic booms on livestock and wildlife, and intensified research on focused booms (long an issue with maneuvering fighter aircraft). The latter included a specialized computer program (derived from that originated by NASA's Thomas) called PCBoom to predict these events.[462] In a separate project, fighter pilots were successfully trained to lay down super booms at specified locations (an idea first broached in the early 1950s).[463]

By the mid-1980s, the growing economic importance of nations in Asia was drawing attention to the long flight times required to cross the Pacific Ocean or to reach most of Asia from Europe. The White House Office of Science and Technology (OST), reversing the administration's initial opposition to civilian aeronautical research, took various steps to gain support for such activities. In March 1985, the OST released a report, "National Aeronautical R&D Goals: Technology for America's Future," which included a long-range supersonic transport.[464] Then, in his State of the Union Address in January 1986, President Reagan ignited interest in the possibility of a hypersonic transport—the National Aero-Space Plane (NASP)—dubbed the "Orient Express." The Battelle Memorial Institute, which established the Center for High-Speed Commercial Flight in April 1986, became a focal point and influential advocate for these proposals.[465]

NASA had been working with the Defense Advanced Research Projects Agency (DARPA) on hypersonic technology for what became the NASP since the early 1980s. In February 1987, the OST issued an updated National Aeronautical R&D Goals, subtitled "Agenda for Achievement." It called for both aggressively pursuing the NASP and developing the "fundamental technology, design, and business foundation for a long-range supersonic transport."[466] In response, NASA accelerated its hypersonic research and began a new quest to develop commercially viable supersonic technology. This started with contracts to Boeing and Douglas aircraft companies in October 1986 for market and feasibility studies on what was now named the High-Speed Civil Transport (HSCT), accompanied by several internal NASA assessments. These studies soon ruled out hypersonic speeds (above Mach 5) as being impractical for passenger service. Eventually, NASA and its industry partners settled on a cruise speed of Mach 2.4.[467] Although only marginally faster than the Concorde, the HSCT was expected to double its range and carry three times as many passengers. Meanwhile, the NASP survived as a NASA-DOD program (the X-30) until 1994, with its sonic boom potential studied by current and former NASA specialists.[468]

The contractual studies on the HSCT emphasized the need to resolve environmental issues, including the restrictions on cruising over land because of sonic booms, before it could meet the goal of efficient long-distance supersonic flight. On January 19-20, 1988, the Langley Center hosted a workshop on the status of sonic boom methodology and understanding. Sixty representatives from Government, academia, and industry attended—including many of those involved in the SST and SCR efforts and several from the Air Force's NSBIT program. Working groups on sonic boom theory, minimization, atmospheric effects, and human response determined that the following areas most needed more research: boom carpets, focused booms, high-Mach predictions, atmospheric effects, acceptability metrics, signature prediction, and low-boom airframe designs.

The report from this workshop served as a baseline on the latest knowledge about sonic booms and some of the challenges that lay ahead. One of these was the disconnect between aerodynamic efficiency and lowering shock strength that had long plagued efforts at boom minimization. Simply stated, near-field shock waves from a streamlined airframe coalesce more readily into strong front and tail shocks, while the near-field shock waves from a higher-drag airframe are less likely to join together, thus allowing a more relaxed N-wave signature. This paradox (illustrated by Figure 6) would have to be solved before a low-boom supersonic transport would be both permissible and practical.[469]

Resolving the Challenge of Aerodynamic Damping

Researchers in the early supersonic era also faced the challenges posed by the lack of aerodynamic damping. Aerodynamic damping is the natural resistance of an airplane to rotational movement about its center of gravity while flying in the atmosphere. In its simplest form, it consists of forces created on aerodynamic surfaces that are some distance from the center of gravity (cg). For example, when an airplane rotates about the cg in the pitch axis, the horizontal tail, being some distance aft of the cg, will translate up or down. This translational motion produces a vertical lift force on the tail surface and a moment (force times distance) that tends to resist the rotational motion. This lift force opposes the rotation regardless of the direction of the motion. The resisting force will be proportional to the rate of rotation, or pitch rate. The faster the rotational rate, the larger will be the resisting force. The magnitude of the resisting tail lift force is dependent on the change in angle of attack created by the rotation. This change in angle of attack is the vector sum of the rotational velocity and the forward velocity of the airplane. For low forward velocities, the angle of attack change is quite large, and the natural damping is also large. The high aerodynamic damping associated with the low speeds of the Wright brothers' flights contributed a great deal to the brothers' ability to control the static longitudinal instability of their early vehicles.

At very high forward speed, the same pitch rate will produce a much smaller change in angle of attack and thus lower damping. For practical purposes, all aerodynamic damping can be considered to be inversely proportional to true velocity. The significance of this is that an airplane's natural resistance to oscillatory motion, in all axes, disappears as the true speed increases. At hypersonic speeds (above Mach 5), any rotational disturbance will create an oscillation that will essentially not damp out by itself.
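This inverse relationship can be seen with a few lines of arithmetic. The sketch below is illustrative only; the tail arm and speeds are assumed round numbers, not data for any particular aircraft. It computes the angle-of-attack change seen by the tail for the same pitch rate at two different forward speeds:

```python
import math

def tail_alpha_change_deg(pitch_rate_deg_s, tail_arm_ft, speed_ft_s):
    """Angle-of-attack change at the tail: the vector sum of the
    tail's rotational velocity (pitch rate x tail arm) and the
    airplane's forward velocity."""
    q = math.radians(pitch_rate_deg_s)  # pitch rate, rad/s
    return math.degrees(math.atan((q * tail_arm_ft) / speed_ft_s))

# Same 10 deg/s pitch rate, assumed 20-foot tail arm:
slow = tail_alpha_change_deg(10.0, 20.0, 150.0)   # low-speed flight
fast = tail_alpha_change_deg(10.0, 20.0, 3000.0)  # high-speed flight
# The damping angle (and hence the resisting tail force) shrinks
# roughly in proportion to 1/velocity.
```

Doubling the speed halves the induced angle of attack at the tail, which is exactly the inverse proportionality described above.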

As airplanes flew ever faster, this lightly damped, oscillatory tendency became more obvious and was a hindrance to accurate weapons delivery for military aircraft and to pilot and passenger comfort for commercial aircraft. Evaluating the seriousness of the damping challenge in an era when aircraft design was changing markedly (from the straight-wing propeller-driven airplane to the swept and delta wing jet and beyond) occupied a great amount of attention from NACA and early NASA researchers, who recognized that it would pose a continuing hindrance to the exploitation of the transonic and supersonic region, and the hypersonic beyond.[678]

In general, aerodynamic damping has a positive influence on handling qualities, because it tends to suppress the oscillatory tendencies of a naturally stable airplane. Unfortunately, it gradually disappears as speed increases, indicating the need for some artificial method of suppressing these oscillations during high-speed flight. In the preelectronic flight control era, the solution was the modification of flight control systems to incorporate electronic damper systems, often referred to as Stability Augmentation Systems (SAS). A damper system for one axis consisted of a rate gyro measuring rotational rate in that axis, a gain-changing circuit that adjusted the size of the needed control command, and a servo mechanism that added additional control surface commands to the commands from the pilot's stick. Control surface commands were generated that were proportional to the measured rotational rate (feedback) but opposite in sign, thus driving the rotational rate toward zero.
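As a minimal sketch (the gain and authority limit are hypothetical values chosen only to illustrate the sign convention, not settings from any fielded system), the feedback law of such a single-axis damper reduces to a few lines:

```python
def damper_command_deg(measured_rate_deg_s, gain, authority_limit_deg=2.0):
    """Rate-damper feedback: a surface command proportional to the
    measured rotational rate but opposite in sign, clamped to the
    small servo authority typical of early fail-safe systems."""
    cmd = -gain * measured_rate_deg_s  # opposes the rotation
    return max(-authority_limit_deg, min(authority_limit_deg, cmd))

# The servo adds its command to the pilot's stick command:
pilot_cmd = 5.0                                   # degrees of surface
total = pilot_cmd + damper_command_deg(4.0, 0.3)  # nose rate of 4 deg/s
```

The clamp is the "fail-safe" part: even a hard-over feedback signal can move the surface only a couple of degrees, a transient the pilot can easily override.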

Damper systems were installed in at least one axis of all of the Century-series fighters (F-100 through F-107), and all were successful in stabilizing the aircraft in high-speed flight.[679] Development of stability augmentation systems—and their refinement through contractor, Air Force-Navy, and NACA-NASA testing—was crucial to meeting the challenge of developing Cold War airpower forces, made yet more demanding because the United States and the larger NATO alliance chose a conscious strategy of using advanced technology to generate high-leverage aircraft systems that could offset larger numbers of less individually capable Soviet-bloc designs.[680]

Early, simple damper systems were so-called single-string systems and were designed to be "fail-safe." A single gyro, servo, and wiring system was installed for each axis. The feedback gains were quite low, tailored to the damping requirements at high speed, at which very little control surface travel was necessary. The servo travel was limited to a very small value, usually less than 2 degrees of control surface movement. A failure in the system could drive the servo to its maximum travel, but the transient motion was small and easily compensated by the pilot. Loss of a damper at high speed thus reduced the comfort level, or weapons delivery accuracy, but was tolerable; at the lower speeds associated with takeoff and landing, the natural aerodynamic damping was adequate.

One of the first airplanes to utilize electronic redundancy in the design of its flight control system was the X-15 rocket-powered research airplane, which, at the time of its design, faced numerous unknowns. Because of the extreme flight conditions (Mach 6 and 250,000-foot altitude), the servo travel needed for damping was quite large, and the pilot could not compensate if the servo received a hard-over signal.

The solution was the incorporation of an independent, but identical, feedback "monitoring” channel in addition to the "working” channel in each axis. The servo commands from the monitor and working channel were continuously compared, and when a disagreement was detected, the system was automatically disengaged and the servo centered. This provided the equivalent level of protection to the limited-authority fail­safe damper systems incorporated in the Century series fighters. Two of the three X-15s retained this fail-safe damper system throughout the 9-year NASA-Air Force-Navy test program, although a backup roll rate gyro was added to provide fail-operational, fail-safe capability in the roll axis.[681] Refining the X-15’s SAS system necessitated a great amount of analysis and simulator work before the pilots deemed it acceptable, particularly as the X-15’s stability deteriorated markedly at higher angles of attack above Mach 2. Indeed, one of the major aspects of the X-15’s research program was refining understanding of the complexities of hypersonic stability and control, particularly during reentry at high angles of attack.[682]
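The monitor/working comparison is easy to sketch. This is a hedged simplification: the X-15's actual monitoring was implemented in analog hardware, and the trip threshold below is a made-up value.

```python
def duplex_servo(working_cmd_deg, monitor_cmd_deg, trip_level_deg=0.5):
    """Working/monitor channel comparison: the servo follows the
    working channel only while the independent monitor channel
    agrees; on a disagreement the system disengages and the servo
    is centered."""
    if abs(working_cmd_deg - monitor_cmd_deg) > trip_level_deg:
        return 0.0, False  # disengaged, servo centered
    return working_cmd_deg, True

ok_cmd, ok = duplex_servo(1.8, 1.7)        # channels agree: servo drives
bad_cmd, engaged = duplex_servo(1.8, 6.0)  # hard-over on one channel
```

Note the fail-safe property: a hard-over on either channel produces a centered servo rather than a full-authority transient, at the cost of losing the damper entirely after a single failure.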

The electronic revolution dramatically reshaped design approaches to damping and stability. Once it was recognized that electronic assistance was beneficial to a pilot's ability to control an airplane, the concept evolved rapidly. By adding a third independent channel, and some electronic voting logic, a failed channel could be identified and its signal "voted out," while retaining the remaining two channels active. If a second failure occurred (that is, the two remaining channels did not agree), the system would be disconnected and the damper would become inoperable. Damper systems of this type were referred to as fail-operational, fail-safe (FOFS) systems. Further enhancement was provided by comparing the pilot's stick commands with the measured airplane response and using analog computer circuits to tailor servo commands so that the airplane response was nearly the same for all flight conditions. These systems were referred to as Command Augmentation Systems (CAS). The next step in the evolution was the incorporation of a mathematical model of the desired aircraft response into the analog computer circuitry. An error signal was generated by comparing the instantaneous measured aircraft response with the desired mathematical-model response, and the servo commands forced the airplane to fly per the mathematical model, regardless of the airplane's inherent aerodynamic tendencies. These systems were called "model-following."
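The two-out-of-three voting scheme described above can be sketched in a few lines. This is a purely illustrative model, not code from any actual flight control system; the channel values, trip threshold, and averaging are hypothetical:

```python
TRIP_LEVEL = 0.5  # max allowed channel disagreement (illustrative units)

def vote(signals):
    """Sketch of fail-operational, fail-safe (FOFS) channel voting.

    signals: dict mapping channel id -> servo command for that channel.
    With three channels, the one that disagrees with the other two is
    voted out; with two channels, a disagreement forces a fail-safe
    disconnect. Returns (servo command, surviving channel ids), or
    (None, []) when the system must disengage.
    """
    ids = sorted(signals)
    if len(ids) == 3:
        a, b, c = (signals[i] for i in ids)
        ab = abs(a - b) <= TRIP_LEVEL
        ac = abs(a - c) <= TRIP_LEVEL
        bc = abs(b - c) <= TRIP_LEVEL
        if ab and ac and bc:
            return sum(signals.values()) / 3, ids
        # Keep the first agreeing pair; the odd channel is voted out.
        if ab:
            return (a + b) / 2, [ids[0], ids[1]]
        if ac:
            return (a + c) / 2, [ids[0], ids[2]]
        if bc:
            return (b + c) / 2, [ids[1], ids[2]]
        return None, []  # no two channels agree: disconnect
    if len(ids) == 2:
        a, b = (signals[i] for i in ids)
        if abs(a - b) <= TRIP_LEVEL:
            return (a + b) / 2, ids
        return None, []  # second failure: fail-safe disconnect
    raise ValueError("expected two or three channels")
```

In a real system the comparison ran continuously on the servo commands; a single sample here is enough to show both the vote-out of a failed channel and the fail-safe disconnect after a second failure.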

Even higher levels of redundancy were necessary for safe operation of these advanced control concepts after multiple failures, and the failure logic became increasingly complex. Establishing the proper "trip" levels, at which an erroneous comparison would result in the exclusion of one channel, was an especially challenging task. If the trip levels were too tight, a small difference between the outputs of two perfectly good gyros would result in nuisance trips, while trip levels that were too loose could result in a failed gyro not being recognized in a timely manner. Trip levels were usually adjusted during flight test to provide the safest settings.
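One common way to soften that tradeoff is to require the disagreement to exceed the trip level for several consecutive samples before declaring a failure, so a momentary noise spike does not cause a nuisance trip. The sketch below is hypothetical; the threshold and persistence values are illustrative, not from any flight test report:

```python
def make_monitor(trip_level, persistence):
    """Return a monitor that declares a channel failed only after the
    disagreement exceeds trip_level for `persistence` consecutive samples.
    """
    count = 0

    def monitor(disagreement):
        nonlocal count
        if abs(disagreement) > trip_level:
            count += 1  # sustained exceedance accumulates
        else:
            count = 0   # any in-tolerance sample resets the counter
        return count >= persistence  # True -> channel declared failed

    return monitor

mon = make_monitor(trip_level=0.5, persistence=3)
# A single noise spike does not trip the monitor, but a sustained
# disagreement does.
```

Tightening `trip_level` and raising `persistence` pull in opposite directions, which is why the text notes that trip levels were tuned during flight test rather than fixed on paper.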

NASA’s Space Shuttle orbiter utilized five independent control system computers. Four had identical software, providing fail-operational, fail-operational, fail-safe (FOFOFS) capability. The fifth computer used a different software program with a "get-me-home" capability as a last resort (often referred to as the "freeze-dried" control system computer).

Low L/D Approach and Landing Trainers

In addition to the need to simulate the handling qualities of a new airplane, a need to accurately duplicate the approach and landing performance also evolved. The air-launched, rocket-powered research airplane concept, pioneered by the X-1, allowed quick access to high-speed flight for research purposes. It also brought with it unpowered, gliding landings after the rocket fuel was expended. For the X-1 series of airplanes, the landings were not particularly stressful because most landings were on the 7-mile dry lakebed at Edwards AFB and the approach glide angles were 8 degrees or less (lift-to-drag (L/D) ratios of about 8). As the rocket-powered airplanes reached toward higher speeds and altitudes, the landing approach angles increased rather dramatically. The approach glide angle for the X-15 was predicted to be between 15 and 20 degrees (lift-to-drag ratios between 2.8 and 4.25), primarily because of the large base area at the rear of the fuselage. The L/D was further reduced to about 2.5 after landing gear and flap deployment. These steep unpowered approaches prompted a reassessment of the piloting technique to be used. Higher-than-normal approach speeds were suggested, as well as delaying landing gear and flap deployment until after completion of the landing flare. These new landing methods also indicated a need for a training "simulator" that could duplicate the landing performance of the X-15 in order to explore different landing techniques and train test pilots.
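The glide angles and L/D ratios quoted above are tied together by the standard steady-glide relation tan γ = 1/(L/D). The check below is mine, a textbook approximation rather than a computation from the source:

```python
import math

def glide_angle_deg(l_over_d):
    """Steady-glide path angle below the horizon: tan(gamma) = 1/(L/D)."""
    return math.degrees(math.atan(1.0 / l_over_d))

# X-1 class, L/D about 8: glide angle a bit over 7 degrees.
# X-15 clean, L/D of 2.8: glide angle near 20 degrees.
# X-15 with gear and flaps, L/D of 2.5: steeper still, near 22 degrees.
for ld in (8.0, 2.8, 2.5):
    print(f"L/D {ld}: glide angle {glide_angle_deg(ld):.1f} deg")
```

The relation makes the narrative concrete: halving the lift-to-drag ratio roughly doubles the glide-path angle, which is why the X-15's large base drag forced such steep unpowered approaches.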

Out-of-the-cockpit simulated visual displays available at that time were of very poor quality and were not even considered for the X-15 fixed-base simulator. Simulated missions on the X-15 fixed-base simulator were flown to a high-key location over the lakebed using the cockpit instruments, but the simulation was not considered valid for the landing pattern or the actual landing, which was to be done using visual, out-of-the-window references.

North American added a small drag chute to one of its F-100s to allow its pilots to fly landing approaches simulating the X-15. Additionally, both the Air Force and NASA began to survey available jet aircraft that could match the expected X-15 landing maneuver so that the Government pilots could develop a consistent landing method and identify what external cues were necessary to perform accurate landings. The F-104 had just entered the inventory at the AFFTC and NASA. Flight-testing showed that it was an excellent candidate for duplicating the X-15 landing pattern.[735]

Various combinations of landing gear and flap settings, plus partial power on the engine, could be used to simulate the entire X-15 landing trajectory from high key to touchdown. F-104s were used throughout the program for chase, for training new X-15 pilots, for practicing approaches prior to each flight, and for practicing approaches into uprange emergency lakebeds. The combination of the X-15 fixed-base simulator and the F-104 in-flight landing simulation worked very well for pilot training and emergency planning over the entire X-15 test program, and the F-104 did yeoman work supporting the subsequent lifting body research effort as well, through the X-24B.

In the late 1960s, engineers at the Air Force Flight Dynamics Laboratory had evolved a family of reentry shapes (particularly the AFFDL 5, 7, and 8) that blended a lifting body approach with an extensible variable-sweep wing for terminal approach and landing. In support of these studies, in 1969, the Air Force Flight Test Center undertook a series of low L/D approach tests using a General Dynamics F-111A as a surrogate for a variable-sweep Space Shuttle-like craft returning from orbit. The supersonic variable-sweep F-111 could emulate the track of such a design from Mach 2 and 50,000 feet down to landing, and its sophisticated navigation system and two-crew-member layout enabled a flight-test engineer/navigator to undertake terminal area navigation. These tests demonstrated conclusively that a trained crew could fly unpowered instrument approaches from Mach 2 and 50,000 feet down to a precise runway landing, even at night, an important confidence-building milestone on the path to the development of practical lifting reentry logistical spacecraft such as the Shuttle.[736]

Notice that the landing-pattern simulators discussed above did not duplicate the handling qualities of the simulated airplane, only the performance and landing trajectory. Early in the Space Shuttle program, management decided to create a Shuttle Training Aircraft (STA). A Grumman Gulfstream II was selected as the host airplane. Modifications were made to this unique airplane not only to duplicate the orbiter's handling qualities (a variable-stability airplane), but also to duplicate the landing trajectory and the out-of-the-window visibility from the orbiter cockpit. This NASA training device represents the ultimate in a complete electronic and motion-based training simulator. The success of the gliding entries and landings of the Space Shuttle orbiter confirms the value of this trainer.

A Lockheed F-104 flying chase for an X-15 lakebed landing. NASA.

The Critical Tool: Emergent High-Speed Electronic Digital Computing

During the Second World War, J. Presper Eckert and John Mauchly at the University of Pennsylvania's Moore School of Electrical Engineering designed and built the ENIAC, an electronic calculator that inaugurated the era of digital computing in the United States. By 1951, they had turned this expensive and fragile instrument into a product that was manufactured and sold, a computer they called the UNIVAC, for Universal Automatic Computer. The National Advisory Committee for Aeronautics (NACA) was quick to realize the potential of a high-speed computer for the calculation of fluid dynamic problems. After all, the NACA was in the business of aerodynamics, and after 40 years of trying to solve the equations of motion by simplified analysis, it recognized the breakthrough supplied by the computer to solve these equations numerically on a potentially practical basis. In 1954, Remington Rand delivered an ERA 1103 digital computer intended for scientific and engineering calculations to the NACA Ames Aeronautical Laboratory at Sunnyvale, CA. This was a state-of-the-art computer, the first to employ magnetic core memory in place of vacuum tubes. The ERA 1103 used binary arithmetic, a 36-bit word length, and operated on all the bits of a word at a time. One year later, Ames acquired its first stored-program electronic computer, an IBM 650. In 1958, the 650 was replaced by an IBM 704, which in turn was replaced with an IBM 7090 mainframe in 1961.[770]

The IBM 7090 had enough storage and enough speed to allow the first generation of practical CFD solutions to be carried out. By 1963, four additional index registers were added to the 7090, making it the IBM 7094. This computer became the workhorse for the CFD of the 1960s and early 1970s, not just at Ames, but throughout the aerodynamics community; the author cut his teeth solving his dissertation problem on an IBM 7094 at the Ohio State University in 1966. The calculation speed of a digital computer is measured in its number of floating point operations per second (FLOPS). The IBM 7094 could do 100,000 FLOPS, making it about the fastest computer available in the 1960s. With this number of FLOPS, it was possible to carry out for the first time detailed flow-field calculations around a body moving at hypersonic speeds, one of the major activities within the newly formed NASA that drove both computer and algorithm development for CFD. The IBM 7094 was a "mainframe" computer, a large electronic machine that usually filled a room with equipment. The users would write their programs (usually in the FORTRAN language) as a series of logically constructed line statements that would be punched on cards, and the decks of punched cards (sometimes occupying many boxes for just one program) would be fed into a reader that would read the punches and tell the computer what calculations to make. The output from the calculations would be printed on large sheets and returned to the user. One program at a time was fed into the computer, the so-called "batch" operation. The user would submit his or her batch to the computer desk and then return hours or days later to pick up the printed output. As cumbersome as it may appear today, the batch operation worked. The field of CFD was launched with such batch operations on mainframe computers like the IBM 7094. And NASA Ames was a spearhead of such activities. Indeed, because of the synergism between CFD and the computers on which it worked, the demands on the central IBM installation at Ames grew at a compounded rate of over 100 percent per year in the 1960s.

With these computers, it became practical to set up CFD solutions of the Euler equations for two-dimensional flows. These solutions could be carried out with a relatively small number of grid points in the flow, typically 10,000 to 100,000 points, and still have computer run times on the order of hours. Users of CFD in the 1960s were happy to have this capability, and the three primary NASA Research Centers—Langley, Ames, and Lewis (now Glenn)—made major strides in the numerical analysis of many types of flows, especially in the transonic and hypersonic regimes. The practical calculation of inviscid (that is, frictionless) three-dimensional flows, and especially any type of high Reynolds number flow, was beyond the computer capabilities of the time.
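A rough back-of-the-envelope shows why run times of hours were the norm. The time-step count and operations per grid point below are my illustrative assumptions, not figures from the source:

```python
def runtime_hours(grid_points, time_steps, ops_per_point_per_step, flops):
    """Crude runtime estimate: total floating point operations
    divided by machine speed, converted from seconds to hours."""
    total_ops = grid_points * time_steps * ops_per_point_per_step
    return total_ops / flops / 3600.0

# A 2-D Euler solution of the era: 10,000 grid points, an assumed
# 1,000 time steps and ~100 operations per point per step, on a
# 100,000-FLOPS IBM 7094.
print(runtime_hours(1e4, 1e3, 1e2, 1e5))  # about 2.8 hours
```

Scaling the same estimate to a million grid points pushes the runtime from hours into weeks, which is one way to see why three-dimensional and high Reynolds number flows had to wait for the supercomputers described next.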

This situation changed markedly when the supercomputer came on the scene in the 1970s. NASA Ames acquired the Illiac IV advanced parallel-processing machine. Designed at the University of Illinois, this was an early and controversial supercomputer, one bridging both older and newer computer architectures and processor approaches. Ames quickly followed with the installation of an IBM 360 time-sharing computer. These machines provided the capability to make CFD calculations with over 1 million grid points in the flow field at a computational speed of more than 10⁶ FLOPS. NASA installed similar machines at the Langley and Lewis Research Centers. On these machines, NASA researchers made the first meaningful three-dimensional inviscid flow-field calculations and significant two-dimensional high Reynolds number calculations. Supercomputers became the engine that propelled CFD into the forefront of aerospace design as well as research. Bigger and better supercomputers, such as the pioneering Cray-1 and its successor, the Cray X-MP, allowed grids of tens of millions of points to be used in a flow-field calculation, with speeds beginning to approach the hallowed goal of gigaflops (10⁹ floating point operations per second). Such machines made it possible to carry out numerical solutions of the Navier-Stokes equations for three-dimensional, fairly high Reynolds number viscous flows. The first three-dimensional Navier-Stokes solutions of the complete flow field around a complete airplane at angle of
attack came on the scene in the 1980s, enabled by these supercomputers. Subsonic, transonic, supersonic, and hypersonic flow solutions covered the whole flight regime. Again, the major drivers for these solutions were the aerospace research and development problems tackled by NASA engineers and scientists. This headlong development of supercomputers has continued unabated. The holy grail of CFD researchers in the 1990s was the teraflop machine (10¹² FLOPS); today, it is the petaflop (10¹⁵ FLOPS) machine. Indeed, the U.S. Energy Department has recently contracted with IBM to build a 20-petaflop machine in 2012 for calculations involving the safety and reliability of the Nation's aging nuclear arsenal.[771] Such a machine will aid the CFD practitioner's quest for the ultimate flow-field calculation—direct numerical simulation (DNS) of turbulent flows, an area of particular interest to NASA researchers.

NASA Centers and Their Computational Structural Research

To gain a sense of the types of computational structures projects undertaken by NASA and the contributions of individual Centers to the Agency's efforts, it is necessary to examine briefly the computational structures analysis activities undertaken at each Center, reviewing representative projects, computer programs, and instances of technology transfer to industry—aircraft and otherwise. Projects included the development of new computer programs, the enhancement of existing programs, the integration of programs to provide new capabilities, and, in some cases, simply the development of methods to apply existing computer programs to new types of problems. The unique missions of the different Centers certainly influenced the research, but many valuable developments came from collaborative efforts between Centers and from applying tools developed at one Center to the problems being worked at another.[849]

FLEXSTAB (Ames, Dryden, and Langley Research Centers, 1970s)

FLEXSTAB was a method for calculating stability derivatives that included the effects of aeroelastic deformation. Originally developed in the early 1970s by Boeing under contract to NASA Ames, FLEXSTAB was also used and upgraded at Dryden. FLEXSTAB used panel-method aerodynamic calculations, which could be readily adjusted with empirical corrections. The structural effects were treated first as a steady deformation at the trim condition, then as "unsteady perturbations about the reference motion to determine dynamic stability by characteristic roots or by time histories following an initial perturbation or following penetration of a discrete gust flow field."[976] Comparisons between FLEXSTAB predictions and flight measurements were made at Dryden for the YF-12A, Shuttle, B-1, and other aircraft. Initially developed for symmetric flight conditions only, FLEXSTAB was extended in 1981 to include nonsymmetric flight conditions.[977] In 1984, a procedure was developed to couple a NASTRAN structural model to the FLEXSTAB elastic-aircraft stability analysis.[978] NASA Langley and the Air Force Flight Dynamics Laboratory also funded upgrades to FLEXSTAB, leading to the DYLOFLEX program, which added aeroservoelastic effects.[979]