Inventing the Supercritical Wing

Whitcomb was hardly an individual content to rest on his laurels or bask in the glow of previous successes, and after his success with area ruling, he wasted no time in moving further into the transonic and supersonic research regime. In the late 1950s, the introduction of practical subsonic commercial jetliners led many in the aeronautical community to place a new emphasis on what would be considered the next logical step: a Supersonic Transport (SST). John Stack recognized the importance of the SST to the aeronautics program in NASA in 1958. As NASA placed its primary emphasis on space, he and his researchers would work on the next plateau in commercial aviation. Through the Supersonic Transport Research Committee, Stack and his successor, Laurence K. Loftin, Jr., oversaw work on the design of a Supersonic Commercial Air Transport (SCAT). The goal was to create an airliner capable of outperforming the cruise performance of the Mach 3 North American XB-70 Valkyrie bomber. Whitcomb developed a six-engine, highly swept, arrowlike wing SST configuration, called SCAT 4, that possessed the best lift-to-drag (L/D) ratio among the Langley designs.[194]

Manufacturers’ analyses indicated that Whitcomb’s SCAT 4 exhibited the lowest range and highest weight among a group of designs that would generate high operating and fuel costs, and that it was too heavy when compared with subsonic transports. Despite President John F. Kennedy’s June 1963 commitment to the development of “a commercially successful supersonic transport superior to that being built in any other country in the world,” Whitcomb saw the writing on the wall and quickly disassociated himself from the American supersonic transport program in 1963.[195] Always keeping in mind his priorities based on practicality and what he could do to improve the airplane, Whitcomb said: “I’m going back where I know I can make things pay off.”[196] For Whitcomb, practicality outweighed the lure of speed equated with technological progress.

Whitcomb decided to turn his attention back toward improving subsonic aircraft, specifically a totally new airfoil shape. Airfoils and wings had been evolving over the course of the 20th century. They reflected the ever-changing knowledge and requirements for increased aircraft performance and efficiency. They also represented the bright minds that developed them. The thin cambered airfoil of the Wright brothers, the thick airfoils of the Germans in World War I, the industry-standard Clark Y of the 1920s, and the NACA four- and five-digit series airfoils innovated by Eastman Jacobs exemplified advances in and general approaches toward airfoil design and theory.[197]

Despite these and other advances, subsonic aircraft flew at only 85-percent efficiency.[198] The problem was that, as subsonic airplanes approached their maximum speed of 660 mph, increased drag and instability developed. Air moving over the upper surface of the wings reached supersonic speeds, while the rest of the airplane traveled at a slower rate. As a result, the airplane had to fly at slower speeds, with decreased performance and efficiency.[199]

When Whitcomb returned to transonic research in 1964, he specifically wanted to develop an airfoil for commercial aircraft that delayed the onset of high transonic drag near Mach 1 by reducing air friction and turbulence across an aircraft’s major aerodynamic surface, the wing.

Whitcomb inspecting a supercritical wing model in the 8-Foot TPT. NASA.

Whitcomb went intuitively against conventional airfoil design, in which the upper surface curved downward on the leading and trailing edges to create lift. He envisioned a smoother flow of air by, in effect, turning a conventional airfoil upside down. Whitcomb’s airfoil was flat on top with a downward-curved rear section.[200] The shape delayed the formation of shock waves and moved them farther toward the rear of the wing to increase total wing efficiency. The rear lower surface formed a deeper, more concave curve to compensate for the lift lost along the flattened wing top. The blunt leading edge facilitated better takeoff, landing, and maneuvering performance. Overall, Whitcomb’s airfoil slowed airflow, which lessened drag and buffeting and improved stability.[201]

With the wing captured in his mind’s eye, Whitcomb turned it into mathematical calculations and transformed his findings into a wind tunnel model created by his own hands. He spent days at a time in the 8-foot Transonic Pressure Tunnel (TPT), sleeping on a nearby cot when needed, as he took advantage of the 24-hour schedule to confirm his findings.[202]

Just as if he were still in his boyhood laboratory, Whitcomb stated: “When I’ve got an idea, I’m up in the tunnel. The 8-foot runs on two shifts, so you have to stay with the job 16 hours a day. I didn’t want to drive back and forth just to sleep, so I ended up bringing a cot out here.”[203]

Whitcomb and researcher Larry L. Clark published their wind tunnel findings in “An Airfoil Shape for Efficient Flight at Supercritical Mach Numbers,” which summarized much of the early work at Langley. Their investigation compared a supercritical airfoil with a conventional NACA airfoil. They concluded that the former developed a more abrupt drag rise than the latter.[204] Whitcomb presented those initial findings at an aircraft aerodynamics conference held at Langley in May 1966.[205] He called his new innovation a “supercritical wing,” combining “super” (meaning “beyond”) with “critical” Mach number, the speed at which supersonic flow first appears above the wing. Unlike a conventional wing, where a strong shock wave and boundary layer separation occurred in the transonic regime, a supercritical wing had both a weaker shock wave and less-developed boundary layer separation. Whitcomb’s tests revealed that a supercritical wing with 35-degree sweep produced 5 percent less drag, improved stability, and encountered less buffeting than a conventional wing at speeds up to Mach 0.90.[206]

Langley Director of Aeronautics Laurence K. Loftin believed that Whitcomb’s new supercritical airfoil would reduce transonic drag and result in improved fuel economy. He also knew that wind tunnel data alone would not convince aircraft manufacturers to adopt the new airfoil. Loftin first endorsed independent analyses of Whitcomb’s idea at the Courant Institute of New York University, which confirmed the viability of the concept. More important, NASA had to prove the value of the new technology to industry by actually building, installing, and flying the wing on an aircraft.[207]

The major players met in March 1967 to discuss turning Whitcomb’s concept into a reality. The practicalities of manufacturing, flight characteristics, structural integrity, and safety required a flight research program. The group selected the Navy Chance Vought F-8A fighter as the flight platform. The F-8A possessed specific attributes that made it ideal for the program. While not an airliner, the F-8A had an easily removable modular wing readymade for replacement, fuselage-mounted landing gear that did not interfere with the wing, engine thrust capable of operation in the transonic regime, and lower operating costs than a multi-engine airliner. Langley contracted Vought to design a supercritical wing for the F-8 and collaborated with Whitcomb during wind tunnel testing beginning in the summer of 1967. Unfortunately for the program, NASA Headquarters suspended all ongoing contracts in January 1968, and Vought withdrew from the program.[213]

SST Reincarnated: Birth of the High-Speed Civil Transport

For much of the next decade, the most active sonic boom research took place as part of the Air Force’s Noise and Sonic Boom Impact Technology (NSBIT) program. This was a comprehensive effort started in 1981 to study the noise resulting from military training and operations, especially those involving environmental impact statements and similar assessments. Although NASA was not intimately involved with NSBIT, Domenic Maglieri (just before his retirement from the Langley Center) and the recently retired Harvey Hubbard compiled a comprehensive annotated bibliography of sonic boom research, organized into 10 major areas, to help inform NSBIT participants of the most relevant sources of information.[460]

One of the noteworthy achievements of the NSBIT program was to continue building a detailed sonic boom database (known as Boomfile) on all U.S. supersonic aircraft by flying them over a large array of newly developed sensors at Edwards AFB in the summer of 1987. Called the Boom Event Analyzer Recorder (BEAR), these unmanned devices recorded the full sonic boom waveform in digital format.[461] Other contributions of NSBIT were long-term sonic boom monitoring of combat training areas, continued assessment of structures exposed to sonic booms, studies of the effects of sonic booms on livestock and wildlife, and intensified research on focused booms (long an issue with maneuvering fighter aircraft). The latter included a specialized computer program (derived from that originated by NASA’s Thomas) called PCBoom to predict these events.[462] In a separate project, fighter pilots were successfully trained to lay down super booms at specified locations (an idea first broached in the early 1950s).[463]

By the mid-1980s, the growing economic importance of nations in Asia was drawing attention to the long flight times required to cross the Pacific Ocean or to reach most of Asia from Europe. The White House Office of Science and Technology (OST), reversing the administration’s initial opposition to civilian aeronautical research, took various steps to gain support for such activities. In March 1985, the OST released a report, “National Aeronautical R&D Goals: Technology for America’s Future,” which included a long-range supersonic transport.[464] Then, in his State of the Union Address in January 1986, President Reagan ignited interest in the possibility of a hypersonic transport—the National Aero-Space Plane (NASP)—dubbed the “Orient Express.” The Battelle Memorial Institute, which established the Center for High-Speed Commercial Flight in April 1986, became a focal point and influential advocate for these proposals.[465]

NASA had been working with the Defense Advanced Research Projects Agency (DARPA) on hypersonic technology for what became the NASP since the early 1980s. In February 1987, the OST issued an updated National Aeronautical R&D Goals, subtitled "Agenda for Achievement.”

It called for both aggressively pursuing the NASP and developing the “fundamental technology, design, and business foundation for a long-range supersonic transport.”[466] In response, NASA accelerated its hypersonic research and began a new quest to develop commercially viable supersonic technology. This started with contracts to the Boeing and Douglas aircraft companies in October 1986 for market and feasibility studies on what was now named the High-Speed Civil Transport (HSCT), accompanied by several internal NASA assessments. These studies soon ruled out hypersonic speeds (above Mach 5) as being impractical for passenger service. Eventually, NASA and its industry partners settled on a cruise speed of Mach 2.4.[467] Although only marginally faster than the Concorde, the HSCT was expected to double its range and carry three times as many passengers. Meanwhile, the NASP survived as a NASA-DOD program (the X-30) until 1994, with its sonic boom potential studied by current and former NASA specialists.[468]

The contractual studies on the HSCT emphasized the need to resolve environmental issues, including the restrictions on cruising over land because of sonic booms, before it could meet the goal of efficient long-distance supersonic flight. On January 19-20, 1988, the Langley Center hosted a workshop on the status of sonic boom methodology and understanding. Sixty representatives from Government, academia, and industry attended—including many of those involved in the SST and SCR efforts and several from the Air Force’s NSBIT program. Working groups on sonic boom theory, minimization, atmospheric effects, and human response determined that the following areas most needed more research: boom carpets, focused booms, high-Mach predictions, atmospheric effects, acceptability metrics, signature prediction, and low-boom airframe designs.

The report from this workshop served as a baseline on the latest knowledge about sonic booms and some of the challenges that lay ahead. One of these was the disconnect between aerodynamic efficiency and lowering shock strength that had long plagued efforts at boom minimization. Simply stated, near-field shock waves from a streamlined airframe coalesce more readily into strong front and tail shocks, while the near-field shock waves from a higher-drag airframe are less likely to join together, thus allowing a more relaxed N-wave signature. This paradox (illustrated by Figure 6) would have to be solved before a low-boom supersonic transport would be both permissible and practical.[469]

Resolving the Challenge of Aerodynamic Damping

Researchers in the early supersonic era also faced the challenges posed by the lack of aerodynamic damping. Aerodynamic damping is the natural resistance of an airplane to rotational movement about its center of gravity while flying in the atmosphere. In its simplest form, it consists of forces created on aerodynamic surfaces that are some distance from the center of gravity (cg). For example, when an airplane rotates about the cg in the pitch axis, the horizontal tail, being some distance aft of the cg, will translate up or down. This translational motion produces a vertical lift force on the tail surface and a moment (force times distance) that tends to resist the rotational motion. This lift force opposes the rotation regardless of the direction of the motion. The resisting force will be proportional to the rate of rotation, or pitch rate: the faster the rotational rate, the larger the resisting force. The magnitude of the resisting tail lift force is dependent on the change in angle of attack created by the rotation. This change in angle of attack is determined by the vector sum of the rotational velocity and the forward velocity of the airplane. For low forward velocities, the angle of attack change is quite large, and the natural damping is also large. The high aerodynamic damping associated with the low speeds of the Wright brothers’ flights contributed a great deal to the brothers’ ability to control the static longitudinal instability of their early vehicles.

At very high forward speed, the same pitch rate will produce a much smaller change in angle of attack and thus lower damping. For practical purposes, all aerodynamic damping can be considered to be inversely proportional to true velocity. The significance of this is that an airplane’s natural resistance to oscillatory motion, in all axes, disappears as the true speed increases. At hypersonic speeds (above Mach 5), any rotational disturbance will create an oscillation that will essentially not damp out by itself.
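The geometry described above can be sketched in a few lines of Python. The pitch rate, tail arm, and speeds below are hypothetical illustrative values, not data for any particular aircraft; the point is only that the angle-of-attack change at the tail is the vector sum of the rotational velocity (pitch rate times tail arm) and the forward velocity, so the same pitch rate induces a much smaller angle at high speed:

```python
import math

def induced_tail_alpha_deg(pitch_rate_rad_s, tail_arm_m, true_speed_m_s):
    """Angle-of-attack change at the tail: the vector sum of the
    rotational velocity (pitch rate * tail arm) and forward velocity."""
    return math.degrees(math.atan2(pitch_rate_rad_s * tail_arm_m, true_speed_m_s))

# Same 0.1 rad/s pitch rate and 5 m tail arm, at 50 m/s and at 500 m/s:
low_speed_alpha = induced_tail_alpha_deg(0.1, 5.0, 50.0)    # ~0.57 degrees
high_speed_alpha = induced_tail_alpha_deg(0.1, 5.0, 500.0)  # ~0.06 degrees
```

For small angles the induced angle is roughly (pitch rate × tail arm) / velocity, which is why, to first order, the natural damping falls off inversely with true speed, as the text states.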

As airplanes flew ever faster, this lightly damped, oscillatory tendency became more obvious and was a hindrance to accurate weapons delivery for military aircraft and to pilot and passenger comfort for commercial aircraft. Evaluating the seriousness of the damping challenge in an era when aircraft design was changing markedly (from the straight-wing propeller-driven airplane to the swept and delta wing jet and beyond) occupied a great amount of attention from NACA and early NASA researchers, who recognized that it would pose a continuing hindrance to the exploitation of the transonic and supersonic region, and the hypersonic beyond.[678]

In general, aerodynamic damping has a positive influence on handling qualities, because it tends to suppress the oscillatory tendencies of a naturally stable airplane. Unfortunately, it gradually disappears as the speed increases, indicating the need for some artificial method of suppressing these oscillations during high-speed flight. In the preelectronic flight control era, the solution was the modification of flight control systems to incorporate electronic damper systems, often referred to as Stability Augmentation Systems (SAS). A damper system for one axis consisted of a rate gyro measuring rotational rate in that axis, a gain-changing circuit that adjusted the size of the needed control command, and a servo mechanism that added additional control surface commands to the commands from the pilot’s stick. Control surface commands were generated that were proportional to the measured rotational rate (feedback) but opposite in sign, thus driving the rotational rate toward zero.

Damper systems were installed in at least one axis of all of the Century-series fighters (F-100 through F-107), and all were successful in stabilizing the aircraft in high-speed flight.[679] Development of stability augmentation systems—and their refinement through contractor, Air Force-Navy, and NACA-NASA testing—was crucial to meeting the challenge of developing Cold War airpower forces, made yet more demanding because the United States and the larger NATO alliance chose a conscious strategy of using advanced technology to generate high-leverage aircraft systems that could offset larger numbers of less-individually capable Soviet-bloc designs.[680]

Early, simple damper systems were so-called single-string systems and were designed to be “fail-safe.” A single gyro, servo, and wiring system were installed for each axis. The feedback gains were quite low, tailored to the damping requirements at high speed, at which very little control surface travel was necessary. The servo travel was limited to a very small value, usually less than 2 degrees of control surface movement. A failure in the system could drive the servo to its maximum travel, but the transient motion was small and easily compensated by the pilot. Loss of a damper at high speed thus reduced the comfort level, or weapons delivery accuracy, but was tolerable, and, at lower speeds associated with takeoff and landing, the natural aerodynamic damping was adequate.
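The single-axis, limited-authority damper logic described above can be sketched as follows. The feedback gain is a hypothetical illustrative value; the 2-degree servo authority limit follows the text, and is what keeps a hard-over failure benign:

```python
def damper_surface_cmd(measured_rate_deg_s, gain=0.05, servo_limit_deg=2.0):
    """Rate feedback opposite in sign to the measured rotational rate,
    clipped to a small servo authority so a failure transient stays small."""
    cmd = -gain * measured_rate_deg_s
    return max(-servo_limit_deg, min(servo_limit_deg, cmd))

def total_surface_cmd(pilot_cmd_deg, measured_rate_deg_s):
    # The servo's damping command is simply added to the pilot's stick command.
    return pilot_cmd_deg + damper_surface_cmd(measured_rate_deg_s)
```

A modest rate produces a small opposing deflection, while even a runaway feedback signal can never command more than the limited servo travel.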

One of the first airplanes to utilize electronic redundancy in the design of its flight control system was the X-15 rocket-powered research airplane, which, at the time of its design, faced numerous unknowns. Because of the extreme flight conditions (Mach 6 and 250,000-foot altitude), the servo travel needed for damping was quite large, and the pilot could not compensate if the servo received a hard-over signal.

The solution was the incorporation of an independent, but identical, feedback “monitoring” channel in addition to the “working” channel in each axis. The servo commands from the monitor and working channels were continuously compared, and when a disagreement was detected, the system was automatically disengaged and the servo centered. This provided a level of protection equivalent to the limited-authority fail-safe damper systems incorporated in the Century-series fighters. Two of the three X-15s retained this fail-safe damper system throughout the 9-year NASA-Air Force-Navy test program, although a backup roll rate gyro was added to provide fail-operational, fail-safe capability in the roll axis.[681] Refining the X-15’s SAS necessitated a great amount of analysis and simulator work before the pilots deemed it acceptable, particularly as the X-15’s stability deteriorated markedly at higher angles of attack above Mach 2. Indeed, one of the major aspects of the X-15’s research program was refining understanding of the complexities of hypersonic stability and control, particularly during reentry at high angles of attack.[682]
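The working/monitor comparison scheme described above reduces to a simple check; in this sketch the disagreement threshold is a hypothetical illustrative value:

```python
def compare_channels(working_cmd, monitor_cmd, threshold=0.5):
    """Fail-safe comparison: if the working and monitoring channels
    disagree by more than the threshold, the damper is disengaged and
    the servo centered (signaled here by returning None)."""
    if abs(working_cmd - monitor_cmd) > threshold:
        return None  # disagreement detected: disengage, center the servo
    return working_cmd  # channels agree: pass the working command through
```

Note that this scheme cannot tell *which* channel failed; it can only detect a disagreement and fail safe, which is exactly the distinction the triplex voting systems discussed next were designed to overcome.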

The electronic revolution dramatically reshaped design approaches to damping and stability. Once it was recognized that electronic assistance was beneficial to a pilot’s ability to control an airplane, the concept evolved rapidly. By adding a third independent channel, and some electronic voting logic, a failed channel could be identified and its signal “voted out,” while retaining the remaining two channels active. If a second failure occurred (that is, the two remaining channels did not agree), the system would be disconnected and the damper would become inoperable. Damper systems of this type were referred to as fail-operational, fail-safe (FOFS) systems. Further enhancement was provided by comparing the pilot’s stick commands with the measured airplane response and using analog computer circuits to tailor servo commands so that the airplane response was nearly the same for all flight conditions. These systems were referred to as Command Augmentation Systems (CAS). The next step in the evolution was the incorporation of a mathematical model of the desired aircraft response into the analog computer circuitry. An error signal was generated by comparing the instantaneous measured aircraft response with the desired mathematical-model response, and the servo commands forced the airplane to fly per the mathematical model, regardless of the airplane’s inherent aerodynamic tendencies. These systems were called “model-following.”
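The model-following idea can be sketched with a simple first-order reference model. All gains, the time constant, and the time step here are hypothetical illustrative values; the historical systems implemented this logic in analog circuitry, not software:

```python
class ModelFollower:
    """Drives the airplane toward a first-order reference model of the
    desired rate response; the servo command is proportional to the
    error between the model response and the measured response."""

    def __init__(self, cmd_gain=2.0, time_constant=0.5, error_gain=1.5, dt=0.02):
        self.cmd_gain = cmd_gain   # desired steady-state rate per unit of stick
        self.tau = time_constant   # reference-model time constant, seconds
        self.k = error_gain        # feedback gain on the model-following error
        self.dt = dt               # update interval, seconds
        self.model_rate = 0.0      # reference-model state

    def step(self, stick_cmd, measured_rate):
        # Advance the reference model toward the commanded rate...
        desired_rate = self.cmd_gain * stick_cmd
        self.model_rate += (desired_rate - self.model_rate) * self.dt / self.tau
        # ...then command the servo with the error signal, so the airplane
        # is forced to fly per the model, whatever its bare-airframe tendencies.
        return self.k * (self.model_rate - measured_rate)
```

The essential property is that the pilot's stick shapes the *model*, and the feedback loop makes the real airplane chase it, giving nearly uniform response across flight conditions.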

Even higher levels of redundancy were necessary for safe operation of these advanced control concepts after multiple failures, and the failure logic became increasingly more complex. Establishing the proper “trip” levels, where an erroneous comparison would result in the exclusion of one channel, was an especially challenging task. If the trip levels were too tight, a small difference between the outputs of two perfectly good gyros would result in nuisance trips, while trip levels that were too loose could result in a failed gyro not being recognized in a timely manner. Trip levels were usually adjusted during flight test to provide the safest settings.
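A minimal sketch of the triplex voting and trip-level logic described above, assuming a mid-value-select arrangement (the trip level and signal values are hypothetical illustrative numbers, not settings from any actual system):

```python
def triplex_vote(ch_a, ch_b, ch_c, trip_level=0.5):
    """Vote out a channel that disagrees with both others by more than
    the trip level; if the two survivors then disagree, disengage (None).
    A tight trip level causes nuisance trips; a loose one masks failures."""
    channels = [ch_a, ch_b, ch_c]
    good = [c for c in channels
            if sum(abs(c - other) > trip_level for other in channels) < 2]
    if len(good) == 3:
        return sorted(good)[1]       # all healthy: take the middle value
    if len(good) == 2:
        if abs(good[0] - good[1]) > trip_level:
            return None              # second failure: disconnect the damper
        return sum(good) / 2.0       # one channel voted out: average survivors
    return None                      # no trustworthy pair remains
```

This is the fail-operational, fail-safe behavior: one bad gyro is voted out and the system keeps working; a second disagreement shuts the damper down.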

NASA’s Space Shuttle orbiter utilized five independent control system computers. Four had identical software. This provided fail-operational, fail-operational, fail-safe (FOFOFS) capability. The fifth computer used a different software program with a "get-me-home” capability as a last resort (often referred to as the "freeze-dried” control system computer).

Low L/D Approach and Landing Trainers

In addition to the need to simulate the handling qualities of a new airplane, a need to accurately duplicate the approach and landing performance also evolved. The air-launched, rocket-powered research airplane concept, pioneered by the X-1, allowed quick access to high-speed flight for research purposes. It also brought with it unpowered, gliding landings after the rocket fuel was expended. For the X-1 series of airplanes, the landings were not particularly stressful, because most landings were on the 7-mile dry lakebed at Edwards AFB and the approach glide angles were 8 degrees or less (lift-to-drag (L/D) ratios of about 8). As the rocket-powered airplanes reached toward higher speeds and altitudes, the landing approach angles increased rather dramatically. The approach glide angle for the X-15 was predicted to be between 15 and 20 degrees (lift-to-drag ratios between 2.8 and 4.25), primarily because of the larger base area at the rear of the fuselage. The L/D was further reduced to about 2.5 after landing gear and flap deployment. These steep unpowered approaches prompted a reassessment of the piloting technique to be used. Higher-than-normal approach speeds were suggested, as well as a delay of the landing gear and flap deployment until after completion of the landing flare. These new landing methods also indicated a need for a training “simulator” that could duplicate the landing performance of the X-15 in order to explore different landing techniques and train test pilots.
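The glide angles and L/D ratios quoted above are tied together by the equilibrium-glide relation: in a steady glide, the tangent of the glide angle equals drag over lift, so L/D is approximately the reciprocal of the tangent of the glide angle. A quick check of the numbers:

```python
import math

def glide_ld(glide_angle_deg):
    """Equilibrium glide: tan(glide angle) = D/L, so L/D = 1 / tan(angle)."""
    return 1.0 / math.tan(math.radians(glide_angle_deg))

glide_ld(8.0)   # ~7.1, consistent with the X-1's L/D of about 8
glide_ld(15.0)  # ~3.7, the shallow end of the predicted X-15 range
glide_ld(20.0)  # ~2.7, the steep end of the predicted X-15 range
```

The computed values line up reasonably with the figures in the text, which quote predicted ranges rather than exact equilibrium numbers.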

Out-of-the-cockpit simulated visual displays available at that time were of very poor quality and were not even considered for the X-15 fixed-base simulator. Simulated missions on the X-15 fixed-base simulator were flown to a high-key location over the lakebed using the cockpit instruments, but the simulation was not considered valid for the landing pattern or the actual landing, which was to be done using visual, out-of-the-window references.

North American added a small drag chute to one of its F-100s to allow its pilots to fly landing approaches simulating the X-15. Additionally, both the Air Force and NASA began to survey available jet aircraft that could match the expected X-15 landing maneuver so that the Government pilots could develop a consistent landing method and identify what external cues were necessary to perform accurate landings. The F-104 had just entered the inventory at the AFFTC and NASA. Flight-testing showed that it was an excellent candidate for duplicating the X-15 landing pattern.[735]

Various combinations of landing gear and flap settings, plus partial power on the engine, could be used to simulate the entire X-15 landing trajectory from high key to touchdown. F-104s were used throughout the program for chase, for training new X-15 pilots, for practicing approaches prior to each flight, and for practicing approaches into uprange emergency lakebeds. The combination of the X-15 fixed-base simulator and the F-104 in-flight landing simulation worked very well for pilot training and emergency planning over the entire X-15 test program, and the F-104 did yeoman work supporting the subsequent lifting body research effort as well, through the X-24B.

In the late 1960s, engineers at the Air Force Flight Dynamics Laboratory had evolved a family of reentry shapes (particularly the AFFDL 5, 7, and 8) that blended a lifting body approach with an extensible variable-sweep wing for terminal approach and landing. In support of these studies, in 1969, the Air Force Flight Test Center undertook a series of low L/D approach tests using a General Dynamics F-111A as a surrogate for a variable-sweep Space Shuttle-like craft returning from orbit. The supersonic variable-sweep F-111 could emulate the track of such a design from Mach 2 and 50,000 feet down to landing, and its sophisticated navigation system and two-crew-member layout enabled a flight-test engineer/navigator to undertake terminal area navigation. These tests demonstrated conclusively that a trained crew could fly unpowered instrument approaches from Mach 2 and 50,000 feet down to a precise runway landing, even at night, an important confidence-building milestone on the path to the development of practical lifting reentry logistical spacecraft such as the Shuttle.[736]

Notice that the landing-pattern simulators discussed above did not duplicate the handling qualities of the simulated airplane, only the performance and landing trajectory. Early in the Space Shuttle program, management decided to create a Shuttle Training Aircraft (STA). A Grumman G II was selected as the host airplane. Modifications were made to this unique airplane not only to duplicate the orbiter’s handling qualities (a variable-stability airplane), but also to duplicate the landing trajectory and the out-of-the-window visibility from the orbiter cockpit. This NASA training device represents the ultimate in a complete electronic and motion-based training simulator. The success of the gliding entries and landings of the Space Shuttle orbiter confirms the value of this trainer.

A Lockheed F-104 flying chase for an X-15 lakebed landing. NASA.

The Critical Tool: Emergent High-Speed Electronic Digital Computing

During the Second World War, J. Presper Eckert and John Mauchly at the University of Pennsylvania’s Moore School of Electrical Engineering designed and built the ENIAC, an electronic calculator that inaugurated the era of digital computing in the United States. By 1951, they had turned this expensive and fragile instrument into a product that was manufactured and sold, a computer they called the UNIVAC, for Universal Automatic Computer. The National Advisory Committee for Aeronautics (NACA) was quick to realize the potential of a high-speed computer for the calculation of fluid dynamic problems. After all, the NACA was in the business of aerodynamics, and after 40 years of trying to solve the equations of motion by simplified analysis, it recognized the breakthrough supplied by the computer: these equations could now be solved numerically on a potentially practical basis. In 1954, Remington Rand delivered an ERA 1103 digital computer intended for scientific and engineering calculations to the NACA Ames Aeronautical Laboratory at Sunnyvale, CA. This was a state-of-the-art computer, the first to employ a magnetic core in place of vacuum tubes for memory. The ERA 1103 used binary arithmetic and a 36-bit word length, and it operated on all the bits of a word at a time. One year later, Ames acquired its first stored-program electronic computer, an IBM 650. In 1958, the 650 was replaced by an IBM 704, which in turn was replaced with an IBM 7090 mainframe in 1961.[770]

The IBM 7090 had enough storage and enough speed to allow the first generation of practical CFD solutions to be carried out. By 1963, four additional index registers were added to the 7090, making it the IBM 7094. This computer became the workhorse for the CFD of the 1960s and early 1970s, not just at Ames, but throughout the aerodynamics community; the author cut his teeth solving his dissertation problem on an IBM 7094 at the Ohio State University in 1966. The calculation speed of a digital computer is measured in its number of floating point operations per second (FLOPS). The IBM 7094 could do 100,000 FLOPS, making it about the fastest computer available in the 1960s. With this number of FLOPS, it was possible to carry out for the first time detailed flow-field calculations around a body moving at hypersonic speeds, one of the major activities within the newly formed NASA that drove both computer and algorithm development for CFD. The IBM 7094 was a “mainframe” computer, a large electronic machine that usually filled a room with equipment. The users would write their programs (usually in the FORTRAN language) as a series of logically constructed line statements that would be punched on cards, and the decks of punched cards (sometimes occupying many boxes for just one program) would be fed into a reader that would read the punches and tell the computer what calculations to make. The output from the calculations would be printed on large sheets and returned to the user. One program at a time was fed into the computer, the so-called “batch” operation. The user would submit his or her batch to the computer desk and then return hours or days later to pick up the printed output. As cumbersome as it may appear today, the batch operation worked. The field of CFD was launched with such batch operations on mainframe computers like the IBM 7094. And NASA Ames was a spearhead of such activities. Indeed, because of the synergism between CFD and the computers on which it worked, the demands on the central IBM installation at Ames grew at a compounded rate of over 100 percent per year in the 1960s.

With these computers, it became practical to set up CFD solutions of the Euler equations for two-dimensional flows. These solutions could be carried out with a relatively small number of grid points in the flow, typically 10,000 to 100,000 points, and still have computer run times on the order of hours. Users of CFD in the 1960s were happy to have this capability, and the three primary NASA Research Centers—Langley, Ames, and Lewis (now Glenn)—made major strides in the numerical analysis of many types of flows, especially in the transonic and hypersonic regimes. The practical calculation of inviscid (that is, frictionless), three-dimensional flows and especially any type of high Reynolds number flows was beyond the computer capabilities at that time.
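The "run times on the order of hours" follow from simple arithmetic. The sketch below is a back-of-envelope estimate only: the grid size and machine speed are the figures given in the text, but the per-point operation count and the number of iterations are assumed, illustrative values.

```python
# Back-of-envelope estimate of a 1960s-era 2-D Euler run time.
# Only GRID_POINTS and IBM_7094_FLOPS come from the text; the other
# two figures are assumptions chosen for illustration.
GRID_POINTS = 10_000            # low end of the 2-D Euler grids cited
OPS_PER_POINT_PER_STEP = 100    # assumed cost of one explicit update
STEPS = 1_000                   # assumed iterations to convergence
IBM_7094_FLOPS = 100_000        # calculation speed quoted in the text

total_ops = GRID_POINTS * OPS_PER_POINT_PER_STEP * STEPS
run_time_hours = total_ops / IBM_7094_FLOPS / 3600
print(f"~{run_time_hours:.1f} hours")   # prints ~2.8 hours
```

With these assumptions a billion floating point operations divided by 100,000 FLOPS gives roughly 3 hours of machine time, consistent with the experience described above; doubling the grid or the iteration count pushes the run toward a full working day.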

This situation changed markedly when the supercomputer came on the scene in the 1970s. NASA Ames acquired the Illiac IV advanced parallel-processing machine. Designed at the University of Illinois, this was an early and controversial supercomputer, one bridging both older and newer computer architectures and processor approaches. Ames quickly followed with the installation of an IBM 360 time-sharing computer. These machines provided the capability to make CFD calculations with over 1 million grid points in the flow field with a computational speed of more than 10⁶ FLOPS. NASA installed similar machines at the Langley and Lewis Research Centers. On these machines, NASA researchers made the first meaningful three-dimensional inviscid flow-field calculations and significant two-dimensional high Reynolds number calculations. Supercomputers became the engine that propelled CFD into the forefront of aerospace design as well as research. Bigger and better supercomputers, such as the pioneering Cray-1 and its successor, the Cray X-MP, allowed grids of tens of millions of grid points to be used in a flow-field calculation with speeds beginning to approach the hallowed goal of gigaflops (10⁹ floating point operations per second). Such machines made it possible to carry out numerical solutions of the Navier-Stokes equations for three-dimensional fairly high Reynolds number viscous flows. The first three-dimensional Navier-Stokes solutions of the complete flow field around a complete airplane at angle of attack came on the scene in the 1980s, enabled by these supercomputers. Subsonic, transonic, supersonic, and hypersonic flow solutions covered the whole flight regime. Again, the major drivers for these solutions were the aerospace research and development problems tackled by NASA engineers and scientists. This headlong development of supercomputers has continued unabated. The holy grail of CFD researchers in the 1990s was the teraflop machine (10¹² FLOPS); today, it is the petaflop (10¹⁵ FLOPS) machine. Indeed, recently the U. S. Energy Department has contracted with IBM to build a 20-petaflop machine in 2012 for calculations involving the safety and reliability of the Nation’s aging nuclear arsenal.[771] Such a machine will aid the CFD practitioner’s quest for the ultimate flow-field calculations—direct numerical simulation (DNS) of turbulent flows, an area of particular interest to NASA researchers.
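For a sense of scale, the machine speeds named in this section span ten orders of magnitude. The few lines below simply restate those round quoted figures numerically; the comparison against the IBM 7094 is illustrative arithmetic, not a benchmark.

```python
# The FLOPS milestones named in the text, expressed numerically.
# Speeds are the round figures quoted; the ratios are illustrative.
scales = {
    "IBM 7094 (1960s)":        1e5,    # 100,000 FLOPS
    "gigaflop (Cray era)":     1e9,
    "teraflop (1990s goal)":   1e12,
    "petaflop (today's goal)": 1e15,
}
baseline = scales["IBM 7094 (1960s)"]
for name, flops in scales.items():
    print(f"{name}: {flops:.0e} FLOPS = {flops / baseline:.0e}x an IBM 7094")
```

A petaflop machine is thus some ten billion times faster than the mainframe on which the first practical CFD solutions were run, which is why flow-field calculations once out of reach, such as direct numerical simulation of turbulence, have come into view.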

NASA Centers and Their Computational Structural Research

To gain a sense of the types of computational structures projects undertaken by NASA and the contributions of individual Centers to the Agency’s efforts, it is necessary to examine briefly the computational structures analysis activities undertaken at each Center, reviewing representative projects, computer programs, and instances of technology transfer to industry—aircraft and otherwise. Projects included the development of new computer programs, the enhancement of existing programs, integration of programs to provide new capabilities, and, in some cases, just the development of methods to apply existing computer programs to new types of problems. The unique missions of the different Centers certainly influenced the research, but many valuable developments came from collaborative efforts between Centers and applying tools developed at one Center to the problems being worked at another.[849]

FLEXSTAB (Ames, Dryden, and Langley Research Centers, 1970s)

FLEXSTAB was a method for calculating stability derivatives that included the effects of aeroelastic deformation. Originally developed in the early 1970s by Boeing under contract to NASA Ames, FLEXSTAB was also used and upgraded at Dryden. FLEXSTAB used panel-method aerodynamic calculations, which could be readily adjusted with empirical corrections. The structural effects were treated first as a steady deformation at the trim condition, then as "unsteady perturbations about the reference motion to determine dynamic stability by characteristic roots or by time histories following an initial perturbation or following penetration of a discrete gust flow field.”[976] Comparisons between FLEXSTAB predictions and flight measurements were made at Dryden for the YF-12A, Shuttle, B-1, and other aircraft. Initially developed for symmetric flight conditions only, FLEXSTAB was extended in 1981 to include nonsymmetric flight conditions.[977] In 1984, a procedure was developed to couple a NASTRAN structural model to the FLEXSTAB elastic-aircraft stability analysis.[978] NASA Langley and the Air Force Flight Dynamics Laboratory also funded upgrades to FLEXSTAB, leading to the DYLOFLEX program, which added aeroservoelastic effects.[979]
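The "characteristic roots" criterion for dynamic stability mentioned above can be shown in miniature: for linearized dynamics x' = Ax, the motion is dynamically stable when every characteristic root (eigenvalue of A) has a negative real part. The 2×2 matrix below is an invented illustration of the test itself, not FLEXSTAB data or FLEXSTAB code.

```python
# Minimal sketch of a characteristic-roots stability check for x' = A x.
# The matrix A is an invented example, not FLEXSTAB output.
import cmath

A = [[-1.0, 2.0],
     [-2.0, -1.0]]

# Characteristic polynomial of a 2x2 matrix: lambda^2 - tr*lambda + det = 0
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = cmath.sqrt(tr * tr - 4.0 * det)
roots = [(tr + disc) / 2.0, (tr - disc) / 2.0]

stable = all(r.real < 0 for r in roots)
print(roots, "dynamically stable" if stable else "unstable")
```

Here the roots are -1 ± 2i: a damped oscillation, hence a dynamically stable mode. A positive real part in either root would instead indicate a divergent or growing oscillatory motion, which is what FLEXSTAB's characteristic-root analysis was designed to reveal for the aeroelastic aircraft.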

Hot Structures: ASSET

Dyna-Soar never flew, for Defense Secretary Robert S. McNamara canceled the program in December 1963. At that time, vehicles were well under construction but still were some 2½ years away from first flight.

Still, its technology remained available for further development, and thus it fell to a related program, the Aerothermodynamic/elastic Structural Systems Environmental Test (ASSET), to take up the hot structures cause and fly with them.[1055]

As early as August 1959, the Flight Dynamics Laboratory at Wright-Patterson Air Force Base launched an in-house study of a small recoverable boost-glide vehicle that was to test hot structures during reentry. From the outset there was strong interest in problems of aerodynamic flutter. This was reflected in the ASSET concept name.

ASSET won approval as a program in January 1961. In April of that year, the firm of McDonnell Aircraft, which was already building Mercury spacecraft, won a contract to develop the ASSET flight vehicles. The Thor, which had been deployed operationally in England, was about to come home because it was no longer needed as a weapon. It became available for use as a launch vehicle.

ASSET took shape as a flat-bottomed wing-body craft that used a low-wing configuration joined to a truncated cone-cylinder body. It had a length of 59 inches and a span of 55 inches. Its bill of materials resembled that of Dyna-Soar, for it used TZM molybdenum to withstand 3,000 °F on the forward lower heat shield, graphite for similar temperatures on leading edges, and zirconia rods for the nose cap, which was rated at 4,000 °F. But ASSET avoided the use of Rene 41, with cobalt and columbium alloys being employed instead.[1056]

ASSET was built in two varieties: the Aerothermodynamic Structural Vehicle (ASV) weighing 1,130 pounds and the Aerothermodynamic Elastic Vehicle (AEV) at 1,225 pounds. The AEVs were to study panel flutter along with the behavior of a trailing-edge flap, which represented an aerodynamic control surface in hypersonic flight. These vehicles did not demand the highest possible flight speeds and therefore flew with single-stage Thors as the booster. But the ASVs were built to study materials and structures in the reentry environment while taking data on temperatures, pressures, and heat fluxes. Such missions demanded higher speeds. These boost-glide craft therefore used the two-stage Thor-Delta launch vehicle, which resembled the Thor-Able that had conducted nose cone tests at intercontinental range in 1958.[1057]

The program eventually conducted six flights.[1058] Several of these craft were to be recovered. Following standard practice, their launches were scheduled for the early morning, to give downrange recovery crews the maximum hours of daylight. That did not help ASV-1, the first flight in the program, which sank into the sea. Still, it flew successfully and returned good data. In addition, this flight set a milestone, for it was the first time in aviation history that a lifting reentry spacecraft had traversed the demanding hypersonic reentry corridor from orbit down to the lower atmosphere.[1059]

ASV-2 followed, using the two-stage Thor-Delta, but it failed when the second stage did not ignite. The next launch carried ASV-3, with this mission scoring a double achievement. It not only made a good flight downrange, but it was also successfully recovered. It carried a liquid-cooled double-wall test panel from Bell Aircraft along with a molybdenum heat-shield panel from Boeing, home of Dyna-Soar. ASV-3 also had a new nose cap. The standard ASSET type used zirconia dowels, 1.5 inches long by 0.5 inches in diameter, which were bonded together with a zirconia cement. The new cap, from International Harvester, had a tungsten base covered with thorium oxide and was reinforced with tungsten.

A company advertisement stated that it withstood reentry so well that it "could have been used again,” and this was true for the craft as a whole. Historian Richard P. Hallion writes that "overall, it was in excellent condition. Water damage. . . caused some problems, but not so serious that McDonnell could not have refurbished and reflown the vehicle.” The Boeing and Bell panels came through reentry without damage, and the importance of physical recovery was emphasized when columbium aft leading edges showed significant deterioration. They were redesigned, with the new versions going into subsequent AEV and ASV spacecraft.[1060]

The next two flights were AEVs, each of which carried a flutter test panel and a test flap. AEV-1 returned only one high-Mach data point, at Mach 11.88, but this sufficed to indicate that its panel was probably too stiff to undergo flutter. Engineers made it thinner and flew a new one on AEV-2, where it returned good data until it failed at Mach 10. The flap experiment also showed value. It had an electric motor that deflected it into the airstream, with potentiometers measuring the force required to move it, and it enabled aerodynamicists to critique their theories. Thus one treatment gave pressures that were in good agreement with observations, whereas another did not.

ASV-4, the final flight, returned "the highest quality data of the ASSET program,” according to the flight-test report. The peak speed of 19,400 ft/sec, Mach 18.4, was the highest in the series and was well above the design speed of 18,000 ft/sec. The long hypersonic glide covered 2,300 nautical miles and prolonged the data return, which presented pressures at 29 locations on the vehicle and temperatures at 39. An onboard system transferred mercury ballast to trim the angle of attack, increasing the lift-to-drag ratio (L/D) from its average of 1.2 to 1.4, and extending the trajectory. The only important problem came when the recovery parachute failed to deploy properly and ripped away, dooming ASV-4 to follow ASV-1 into the depths of the Atlantic.[1061]

NASA concept for a hypersonic cruise wing structure formed of beaded, corrugated, and tubular structural panels, 1978 (NASA CR-1568). NASA.

On the whole, ASSET nevertheless scored a host of successes. It showed that insulated hot structures could be built and flown without producing unpleasant surprises, at speeds up to three-fourths of orbital velocity. It dealt with such practical issues of design as fabrication, fasteners, and coatings. In hypersonic aerodynamics, ASSET contributed to understanding of flutter and of the use of movable control surfaces. The program also developed and successfully used a reaction control system built for a lifting reentry vehicle. Only one flight vehicle was recovered in four attempts, but it complemented the returned data by permitting a close look at a hot structure that had survived its trial by fire.

Digital Fly-By-Wire: The Space Legacy

Both the Mercury and Gemini capsules controlled their reaction control thrusters via electrical commands carried by wire. They also used highly reliable computers specially developed for the U. S. manned space flight program. During reentry from space on his historic 1961 Mercury mission, the first American in space, Alan Shepard, took manual control of the spacecraft attitude, one axis at a time, from the automatic attitude control system. Using the Mercury direct side controller, he "hand-flew” the capsule to the retrofire attitude of 34 degrees pitch-down. Shepard reported that he found that the spacecraft response was about the same as that of the Mercury simulator at the NASA Langley Research Center.[1151] The success of fly-by-wire in the early manned space missions gave NASA confidence to use a similar fly-by-wire approach in the Lunar Landing Research Vehicle (LLRV), built in the early 1960s to practice lunar landing techniques on Earth in preparation for the Apollo missions to the Moon. Two LLRVs were built by Bell Aircraft and first flown at Dryden in 1964. These were followed by three Lunar Landing Training Vehicles (LLTVs) that were used to train the Apollo astronauts. The LLTVs used a triply redundant fly-by-wire flight control system based on the use of three analog computers. Pure fly-by-wire in their design (there was insufficient weight allowance for a mechanical backup capability), they proved invaluable in preparing the astronauts for actual landings on the surface of the Moon, flying until November 1972.[1152] A total of 591 flights were accomplished, during which one LLRV and two LLTVs crashed in
spectacular accidents but fortunately did so without loss of life.[1153] During this same period, digital computers were demonstrating great improvements in processing power and programmability. Both the Apollo Lunar Module and the Command and Service Module used full-authority digital fly-by-wire controls. Fully integrated into the fly-by-wire flight control systems used in the Apollo spacecraft, the Apollo digital computer provided the astronauts with the ability to precisely maneuver their vehicles during all aspects of the lunar landing missions. The success of the Apollo digital computer in these space vehicles led to the idea of using this computer in a piloted flight research aircraft.

By the end of 1969, many experts within NASA and especially at the NASA Flight Research Center at Edwards Air Force Base were convinced that digital-computer-based fly-by-wire flight control systems would ultimately open the way to dramatic improvements in aircraft design, flight safety, and mission effectiveness. A team headed by Melvin E. Burke—along with Dwain A. Deets, Calvin R. Jarvis, and Kenneth J. Szalai—proposed a flight-test program that would demonstrate exactly that. The digital fly-by-wire proposal was evaluated by the Office of Advanced Research and Technology (OART) at NASA Headquarters. A strong supporter of the proposal was Neil Armstrong, who was by then the Deputy Associate Administrator for Aeronautics. Armstrong had been the first person to step on the Moon’s surface, in July 1969 during the Apollo 11 mission, and he was very interested in fostering transfer of technology from the Apollo program into aeronautics applications. During discussion of the digital fly-by-wire proposal with Melvin Burke and Cal Jarvis, Armstrong strongly supported the concept and reportedly commented: "I just went to the Moon with one.” He urged that they contact the Massachusetts Institute of Technology (MIT) Draper Laboratory to evaluate the possibility of using modified Apollo hardware and software.[1154] The Flight Research Center was authorized to modify a fighter-type aircraft with a digital fly-by-wire system. The modification would be based on the Apollo computer and inertial sensing unit.


Digital Flight Control for Tactical Aircraft (DIGITAC) was a joint program between the Air Force Flight Dynamics Laboratory (AFFDL) at Wright-Patterson AFB, OH, and the USAF Test Pilot School (TPS) at Edwards AFB. Its purpose was to develop and demonstrate digital flight control technology for potential use in future tactical fighter and attack aircraft, including the feasibility of using digital flight control computer technology to optimize an airplane’s tracking and handling qualities for a full range of weapons delivery tasks. The second prototype LTV YA-7D (USAF serial No. 67-14583) was selected for modification as the DIGITAC testbed by replacing the analog computer of the YA-7D Automated Flight Control System (AFCS) with the DIGITAC digital multimode flight control system that was developed by the AFFDL. The mechanical flight control system in the YA-7D was unchanged and was retained as a backup capability.

The YA-7D’s flight control system was eventually upgraded to the DIGITAC II configuration. DIGITAC II used military standard data buses and transferred critical flight control data between individual computers and between computers and remote terminals. The data buses used were dual channel wire and dual channel fiber optic and were selectable in the cockpit, allowing the pilot to fly either by wire or by light. Alternately, for flight-test purposes, the pilot was able to implement one wire channel and one fiber optic channel. During early testing, the channel with the multifiber cables (consisting of 210 individual fibers) encountered numerous fiber breakage problems during normal ground maintenance. The multifiber cable design was replaced by single-fiber cables with tough protective shields, a move that improved data transmission qualities and nearly eliminated breakage issues. The DIGITAC fly-by-light system flew 290 flights during a 3-year period, performing flawlessly with virtually no maintenance. It was so reliable that it was used to fly the aircraft on all routine test missions. The system performance and reliability were considered outstanding, with the technical approach assessed as ready for consideration for use in production aircraft.[1208]

The DIGITAC YA-7D provided the TPS with a variable stability testbed aircraft for use in projects involving assessments of advanced aircraft flying qualities. Results obtained from these projects contributed to the flying qualities database in many areas, including degraded-mode flight control cross-coupling, control law design, pro versus adverse yaw studies, and roll-subsidence versus roll-time-delay studies. Under a TPS project known as Have Coupling, the YA-7D DIGITAC aircraft was used to investigate degradation to aircraft handling qualities that would occur in flight when a single pitch control surface (such as one side of the horizontal stabilizer) was damaged or impaired. An asymmetric flight control situation would result when a pure pitch motion was commanded by the pilot, with roll and yaw cross-coupling motions being produced. For the Have Coupling tests, various levels of cross-coupling were programmed into the DIGITAC aircraft. The resulting data provided a valuable contribution to the degraded flight control mode handling qualities body of knowledge. This included the interesting finding that with exactly the same amounts of cross-coupling present, pilot ratings of aircraft handling qualities in flight-testing were significantly different compared with those ratings obtained in the ground-based simulator.[1209]

The TPS operated the YA-7D DIGITAC aircraft for over 15 years, beginning in 1976. It made significant contributions to advances in flight control technology during investigations involving improved directional control, the effect of depressed roll axis on air-to-air tracking, and airborne verification of computer-simulated flying qualities. The DIGITAC aircraft was used to conduct the first Air Force flight tests of a digital flight control system, and it was also used to flight-test the first fiber-optic fly-by-light DFCS. Other flight-test firsts included the integration of a dynamic gun sight with the flight control system and demonstrations of task-tailored multimode flight control laws.[1210] The DIGITAC YA-7D is now on display at the Air Force Flight Test Center Museum at Edwards AFB.