Richard Whitcomb and the Quest for Aerodynamic Efficiency


Much of the history of aircraft design in the postwar era is encapsulated by the remarkable work of NACA-NASA engineer Richard T. Whitcomb. Whitcomb, a transonic and supersonic pioneer, gave to aeronautics the wasp-waisted area-ruled transonic airplane, the graceful and highly efficient supercritical wing, and the distinctive wingtip winglet. But he also contributed greatly to the development of advanced wind tunnel design and testing. His life offers insights into the process of aeronautical creativity and the role of the genius figure in advancing flight.


On December 21, 1954, Convair test pilot Richard L. "Dick" Johnson flew the YF-102A Delta Dagger prototype to Mach 1, an achievement that marked the meeting of a challenge that had been facing the American aeronautical community. The Delta Dagger's contoured fuselage, shaped by a new design concept, the area rule, enabled an efficient transition from subsonic to supersonic via the transonic regime. Seventeen years later, test pilot Thomas C. "Tom" McMurtry made the first flight in the F-8 Supercritical Wing flight research vehicle on March 9, 1971. The flying testbed featured a new wing designed to cruise at near-supersonic speeds for improved fuel economy. Another 17 years later, the Boeing Company announced the successful maiden flight of what would be the manufacturer's best-selling airliner, the 747-400, on April 29, 1988. Incorporated into the design of the jumbo jet were winglets: small vertical surfaces that reduced drag by smoothing turbulent airflow at the wingtips to increase fuel efficiency.[134] All three of these revolutionary innovations originated with one person, Richard T. "Dick" Whitcomb, an aeronautical engineer working for the National Advisory Committee for Aeronautics (NACA) and its successor, the National Aeronautics and Space Administration (NASA).

A major aeronautical revolution was shaping the direction and use of the airplane during the latter half of the 20th century. The invention of the turbojet engine in Europe and its incorporation into the airplane transformed aviation. The aeronautical community followed a basic premise—to make the airplane fly higher, faster, farther, and cheaper than ever before—as national, military, industrial, and economic factors shaped requirements. As a researcher at the Langley Memorial Aeronautical Laboratory in Hampton, VA, Dick Whitcomb was part of this movement, which was central to the missions of both the NACA and NASA.[135] His three fundamental contributions, the area rule fuselage, the supercritical wing, and the winglet, each offered, in its own aerodynamic way, an increase in speed and performance without an increase in power. Whitcomb was highly individualistic, visionary, creative, and practical, and his personality, engineering style, and the working environment nurtured at Langley facilitated his quest for aerodynamic efficiency.

From Curiosity to Controversy

In 1947, Muroc Army Airfield, CA, was a small collection of aircraft hangars and other austere buildings adjoining the vast Rogers Dry Lake in the high desert of the Antelope Valley, across the San Gabriel Mountains from the Los Angeles basin. Because of the airfield's remoteness and clear skies, a small team of Air Force, NACA, and contractor personnel was using Muroc for a secret project to explore the still unknown territory of supersonic flight. On October 14, more than 40,000 feet over the little desert town of Boron, visible only by its contrail, Capt. Chuck Yeager's 31-foot-long rocket-propelled Bell XS-1 successfully "broke" the fabled sound barrier.[323] The sonic boom from his little experimental airplane—the first to fly supersonic in level flight—probably did not reach the ground on that historic day.[324] Before long, however, the acoustical signature of the shock waves generated by XS-1s and other supersonic aircraft became a familiar sound at and around the isolated airbase.

In the previous century, an Austrian physicist-philosopher, Ernst Mach, had been the first to explain the phenomenon of supersonic shock waves, which he displayed visually in 1887 with a cleverly made photograph showing those formed by a high-velocity projectile, in this case a bullet. The speed of sound, he also determined, varied in relation to the density of the medium through which it passed, such as air molecules. (At sea level, the speed of sound is 760 mph.) In 1929, Jakob Ackeret, a Swiss fluid dynamicist, named this variable "Mach number" in his honor. This guaranteed that Mach would be remembered by future generations, especially after it became known that the 700 mph speed of Yeager's XS-1, flying at 43,000 feet, was measured as Mach 1.06.[325]
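The relationship Ackeret named can be sketched numerically. The speed of sound in air depends on temperature (which falls with altitude), so the same true airspeed yields different Mach numbers at different heights. A minimal sketch, using the standard-atmosphere temperatures at sea level and at 43,000 feet; the function name and constants are illustrative choices, not from the source:

```python
import math

def speed_of_sound_mps(temp_kelvin: float) -> float:
    """Speed of sound in air, a = sqrt(gamma * R * T)."""
    GAMMA, R_AIR = 1.4, 287.053  # ratio of specific heats; gas constant, J/(kg K)
    return math.sqrt(GAMMA * R_AIR * temp_kelvin)

MPH_PER_MPS = 2.23694

# Sea level (15 deg C, 288.15 K): roughly the 760 mph quoted in the text.
a_sea_level = speed_of_sound_mps(288.15) * MPH_PER_MPS   # ~761 mph

# At 43,000 ft the standard atmosphere is isothermal at 216.65 K,
# so sound travels noticeably slower than at sea level.
a_43k = speed_of_sound_mps(216.65) * MPH_PER_MPS         # ~660 mph

# Yeager's 700 mph at 43,000 ft works out to about Mach 1.06,
# matching the figure recorded for the XS-1 flight.
mach_xs1 = 700.0 / a_43k
```

This is why the XS-1 could be supersonic at "only" 700 mph: at altitude the local speed of sound is some 100 mph lower than the sea-level value.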

Bell XS-1—the first aircraft to exceed Mach 1 in level flight, October 14, 1947. U.S. Air Force.

Humans have long been familiar with and often frightened by natural sonic booms in the form of thunder, i.e., sudden surges of air pressure caused when strokes of lightning instantaneously heat contiguous columns of air molecules. Perhaps the most awesome of sonic booms—heard only rarely—have been produced by large meteoroid fireballs speeding through the atmosphere. On a much smaller scale, the first acoustical shock waves produced by human invention were the modest cracking noises from the snapping of a whip. The high-power explosives perfected in the latter half of the 19th century were able—as Mach explained—to propel projectiles faster than the speed of sound. Their acoustical shock waves would be among the cacophony of fearsome sounds heard by millions of soldiers during the two World Wars.[326]

On a Friday evening, September 8, 1944, an explosion blew out a large crater in Staveley Road, west of London. The first German V-2 ballistic missile aimed at England had announced its arrival. "After the explosion came a double thunderclap caused by the sonic boom catching up with the fallen rocket."[327] For the next 7 months, millions of people would hear these sounds, which would become known as "sonic bangs" in Britain, from more than 3,000 V-2s launched at England as well as liberated portions of France, Belgium, and the Netherlands. Their sound waves would always arrive too late to warn any of those unfortunate enough to be near the missiles' points of impact.[328] After World War II, these strange noises faded into memory for several years—until the arrival of new jet fighter planes.

In November 1949, the NACA designated its growing detachment at Muroc as the High-Speed Flight Research Station (HSFRS), 1 month before the Air Force renamed the installation Edwards Air Force Base (AFB).[329] By the early 1950s, the desert and mountains around Edwards reverberated with the occasional sonic booms of experimental and prototype aircraft, as did other flight-test locations in the United States and United Kingdom. Scientists and engineers had been familiar with the "axisymmetric" ballistic shock waves of projectiles such as artillery shells (referred to scientifically as bodies of revolution).[330] This was one reason the fuselage of the XS-1 was shaped like a 50-caliber bullet. But these new acoustic phenomena—many of which featured a double-boom sound—hinted that they were more complex. In late 1952, the editors of the world's oldest aeronautical weekly stated with some hyperbole that "the 'supersonic bang' phenomenon, if only by reason of its sudden incidence and the enormous public interest it has aroused, is probably the most spectacular and puzzling occurrence in the history of aerodynamics."[331]

A young British graduate student, Gerald B. Whitham, was the first to analyze thoroughly the abrupt rise in air pressure upon arrival of a supersonic vehicle's "bow wave," followed by a more gradual but deeper fall in pressure for a fraction of a second, and then a recompression with the passing of the vehicle's tail wave. As shown in a simplified fashion by Figure 1, this can be illustrated graphically by an elongated capital "N" (the solid line) transecting a horizontal axis (the dashed line) representing ambient air pressure during a second or less of elapsed time. For Americans, the pressure change is usually expressed in pounds per square foot (psf, also abbreviated lb/ft²).[332] [333]

Because a jet fighter (or a V-2 missile) is much longer than an artillery shell, the human ear could detect a double boom if its tail shock wave arrived a tenth of a second or more after its bow shock wave. Whitham was first to systematically examine the more complex shock waves, which he called the F-function, generated by "nonaxisymmetrical" (i.e., asymmetrical) configurations, such as airplanes.[11]
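The timing argument above can be sketched with a deliberately crude model: treat the interval between bow and tail shocks at a ground observer as vehicle length divided by vehicle speed, and compare it with the roughly tenth-of-a-second separation the ear needs to hear two distinct booms. This ignores how the real N-wave stretches with altitude and Mach number, so the numbers are only illustrative:

```python
def n_wave_duration_s(length_ft: float, speed_mph: float) -> float:
    """Rough elapsed time between bow and tail shocks at an observer:
    signature length divided by vehicle speed (a deliberate idealization)."""
    speed_fps = speed_mph * 5280.0 / 3600.0   # mph -> ft/s
    return length_ft / speed_fps

# Roughly the ear's separation limit, per the text's tenth-of-a-second figure.
DOUBLE_BOOM_THRESHOLD_S = 0.1

# The 31-ft XS-1 at ~700 mph: shocks arrive ~0.03 s apart -- heard as one boom.
xs1_gap = n_wave_duration_s(31.0, 700.0)

# A hypothetical ~150-ft aircraft at the same speed: ~0.15 s -- long enough
# for a listener to resolve a distinct double boom.
long_aircraft_gap = n_wave_duration_s(150.0, 700.0)
```

The contrast shows why long vehicles such as the V-2 produced the characteristic "double thunderclap" while the short XS-1's boom likely merged into one report.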

The number of these double booms multiplied in the mid-1950s as the Air Force Flight Test Center (AFFTC) at Edwards (assisted by the HSFRS) began putting a new generation of Air Force jet fighters and interceptors, known as the Century Series, through their paces. The remarkably rapid advance in aviation technology (and the priorities of the Cold War "arms race") is evident in the sequence of their first flights at Edwards: YF-100 Super Sabre, May 1953; YF-102 Delta Dagger, October 1953; XF-104 Starfighter, February 1954; F-101 Voodoo, September 1954; YF-105 Thunderchief, October 1955; and F-106 Delta Dart, December 1956.[12]

Figure 1. Simplified N-shaped sonic boom signature. NASA.

With the sparse population living in California's Mojave Desert region during the 1950s, disturbances caused by the flight tests of new jet aircraft were not a serious issue. But even in the early 1950s, the United States Air Force (USAF) became concerned about their future impact. In November 1954, for example, its Aeronautical Research Laboratory at Wright-Patterson AFB, OH, submitted a study to the Air Force Board of top generals on early findings regarding the still somewhat mysterious nature of sonic booms. Although concluding that low-flying supersonic aircraft could cause considerable damage, the report optimistically predicted the possibility of supersonic flight without booms at altitudes over 35,000 feet.[334]

As the latest Air Force and Navy fighters went into full production and began flying from bases throughout the Nation, much of the American public was exposed to jet noise for the first time. This included the thunderclap-like thuds characteristic of sonic booms—often accompanied by rattling windowpanes. Under certain conditions, as the U.S. armed services and British Royal Air Force (RAF) had learned, even maneuvers below Mach 1 (e.g., accelerations, dives, and turns) could generate and focus transonic shock waves in such a manner as to cause strong sonic booms.[335] Indeed, residents of Southern California began hearing such booms in the late 1940s, when North American Aviation was flight-testing its new F-86 Sabre. The first civilian claim against the USAF for sonic boom damage was apparently filed at Eglin AFB, FL, in 1951, when only subsonic jet fighters were assigned there.[336] Additionally, as shown in 1958 by Frank Walkden, another English mathematician, the lift effect of airplane wings could magnify the strength of sonic booms more than previously estimated.[337]

Sonic boom claims against the Air Force first became statistically significant in 1957, reflecting its growing inventory of Century fighters and the type of maneuvers they sometimes performed, which could focus acoustical rays into what became called "super booms." (It was found that these powerful but localized booms had a U-shaped signature, with the tail shock wave as well as that from the nose of the airplane being above ambient air pressure.) Most claims involved broken windows or cracked plaster, but some were truly bizarre, such as the death of pets or the insanity of livestock. In addition to these formal claims, Air Force bases, local police switchboards, and other agencies received an uncounted number of phone calls about booms, ranging from merely inquisitive to seriously irate.[338] Complaints from constituents also became an issue for the U.S. Congress.[339] Between 1956 and 1968, some 38,831 claims were submitted to the Air Force, which approved 14,006 in whole or in part—65 percent for broken glass, 21 percent for cracked plaster (usually already weakened), 8 percent for fallen objects, and 6 percent for other reasons.[340]

The military's problem with sonic boom complaints seems to have peaked in the 1960s. One reason was the sheer number of fighter-type aircraft stationed around the Nation (over three times as many as today). Secondly, many of these aircraft's missions were air defense. This often meant flying at high speed over populated areas for training in defending cities and other key targets from aerial attack, sometimes in practice against Strategic Air Command (SAC) bombers. The North American Air Defense Command (NORAD) conducted two of the largest such exercises, Skyshield I and Skyshield II, in 1960 and 1961. The Federal Aviation Agency (FAA) shut down all civilian air traffic while NORAD's interceptors and SAC bombers (augmented by some from the RAF) battled overhead—accompanied by a sporadic drumbeat of sonic booms reaching the surface.[341]

Although most fighters and interceptors deployed in the 1960s could readily fly faster than sound, they could do so only for a short distance because of the rapid fuel consumption of jet engine afterburners. Thus, their sonic boom "carpets" were relatively short. However, one supersonic American warplane that became operational in 1960 was designed to fly faster than Mach 2 for more than 1,000 miles.

This innovative but troublesome aircraft was SAC's new Convair-built B-58 Hustler medium bomber. On March 5, 1962, the Air Force showed off the long-range speed of the B-58 by flying one from Los Angeles to New York in just over 2 hours at an average pace of 1,215 mph (despite having to slow down for an aerial refueling over Kansas). After another refueling over the Atlantic, the same Hustler "outraced the sun" (i.e., flew faster than Earth's rotation) back to Los Angeles with one more refueling, completing the record-breaking round trip at an average speed of 1,044 mph.[342]

Convair B-58 Hustler, the first airplane capable of sustained supersonic flight and a major contributor to early sonic boom research. USAF.

Capable of sustained Mach 2+ speeds, the four-engine delta-winged Hustler (weighing up to 163,000 pounds) helped demonstrate the feasibility of a supersonic transport. But the B-58's performance revealed at least one troubling omen. Almost wherever it flew supersonic over populated areas, the bomber left sonic boom complaints and claims in its wake. Indeed, on its record-shattering flight of March 1962, flown mostly at an altitude of 50,000 feet (except when coming down to 30,000 feet for refueling), "the jet dragged a sonic boom 20 to 40 miles wide back and forth across the country—frightening residents, breaking windows, cracking plaster, and setting dogs to barking."[343] As indicated by Figure 2, the B-58 became a symbol for sonic boom complaints (despite its small numbers).

Most Americans, especially during times of increased Cold War tensions, tolerated occasional disruptions justified by national defense. But how would they react to constantly repeated sonic booms generated by civilian jet airliners? Could a practical passenger-carrying supersonic airplane be designed to minimize its sonic signature enough to be acceptable to people below? NASA's attempts to resolve these two questions occupy the remainder of this history.

Shuttle Aerodynamics and Structures

The Shuttle was one of the last major aircraft to rely almost entirely on wind tunnels for studies of its aerodynamics. There was much interest in an alternative: the use of supercomputers to derive aerodynamic data through solution of the governing equations of airflow, known as the Navier-Stokes equations. Solution of the complete equations was out of the question, for they carried the complete physics of turbulence, with turbulent eddies that spanned a range of sizes covering several orders of magnitude. But during the 1970s, investigators made headway by dropping the terms within these equations that contained viscosity, thereby suppressing turbulence.[629]

People pursued numerical simulation because it offered hope of overcoming the limitations of wind tunnels. Such facilities usually tested small models that failed to capture important details of the aerodynamics of full-scale aircraft. Other errors arose from tunnel walls and model supports. Hypersonic flight brought its own restrictions. No installation had the power to accommodate a large model, realistic in size, at the velocity and temperatures of reentry.[630]

By piecing together results from specialized facilities, it was possible to gain insights into flows at near-orbital speeds. The Shuttle reentered at Mach 27. NASA Langley had a pair of wind tunnels that used helium, which expands to very high flow velocities. These attained Mach 20, Mach 26, and even Mach 50. But their test models were only a few inches in size, and their flows were very cold and could not duplicate the high temperatures of atmosphere entry. Shock tunnels, which heated and compressed air using shock waves, gave true temperatures up to Mach 17 while accommodating somewhat larger models. Yet their flow durations were measured in milliseconds.[631]

During the 1970s, the largest commercially available mainframe computers included the Control Data 7600 and the IBM 370-195.[632] These sufficed to treat complete aircraft—but only at the lowest level of approximation, which used linearized equations and treated the airflow over an airplane as a small disturbance within a uniform free stream. The full Navier-Stokes equations contained 60 partial derivatives; the linearized approximation retained only 3 of these terms. It nevertheless gave good accuracy in computing lift, successfully treating such complex configurations as a Shuttle orbiter mated to its 747. The next level of approximation restored the most important nonlinear terms and treated transonic and hypersonic flows, which were particularly difficult to simulate in wind tunnels. The inadequacies of wind tunnel work had brought such errors as faulty predictions of the location of shock waves along the wings of the C-141, an Air Force transport. In flight test, this plane tended to nose downward, and its design had to be modified at considerable expense.

Computers such as the 7600 could not treat complete aircraft in transonic flow, for the equations were more complex and the computation requirements more severe. HiMAT, a highly maneuverable NASA experimental aircraft, flew at Dryden and showed excessive drag at Mach 0.9. Redesign of its wing used a transonic-flow computational code and approached the design point. The same program, used to reshape the wing of the Grumman Gulfstream, gave considerable increases in range and fuel economy while reducing the takeoff distance and landing speed.[633]

During the 1970s, NASA's most powerful computer was the Illiac IV, at Ames Research Center. It used parallel processing and had 64 processing units, achieving speeds up to 25 million operations per second. Built by Burroughs Corporation with support from the Pentagon, this machine was one of a kind. It entered service at Ames in 1973 and soon showed that it could run flow-simulation codes an order of magnitude more rapidly than a 7600. Indeed, its performance foreshadowed the Cray-1, a true supercomputer that became commercially available only after 1976.

The Illiac IV was a research tool, not an instrument of mainstream Shuttle development. It extended the reach of flow codes, treating three-dimensional inviscid problems while supporting simulations of viscous flows that used approximate equations to model the turbulence.[634] In the realm of Space Shuttle studies, Ames's Walter Reinhardt used it to run a three-dimensional inviscid code that included equations of atmospheric chemistry. Near peak entry heating, the Shuttle would be surrounded by dissociated air that was chemically reacting and not in chemical equilibrium. Reinhardt's code treated the full-scale orbiter during entry and gave a fine example of the computational simulation of flows that were impossible to reproduce in ground facilities.[635]

Such exercises gave tantalizing hints of what would be done with computers of the next generation. Still, the Shuttle program was at least a decade too early to use computational simulations both routinely and effectively. NASA therefore used its wind tunnels. The wind tunnel program gave close attention to low-speed flight, which included approach and landing as well as separation from the 747 during the 1977 flight tests of Enterprise.

In 1975, Rockwell built a $1 million model of the orbiter at 0.36 scale, lemon yellow in color and marked with the blue NASA logo. It went into the 40- by 80-foot test section of Ames's largest tunnel, which was easily visible from the adjacent freeway. The model gave parameters for the astronauts' flight simulators, which previously had used data from models at 3-percent scale. The big one had grooves in its surface that simulated the gaps between thermal protection tiles, permitting assessment of the consequences of the resulting roughness of the skin. It calibrated and tested systems for making aerodynamic measurements during flight test and verified the design of the elevons and other flight control surfaces as well as of their actuators.[636]

Other wind tunnel work strongly influenced design changes that occurred early in development. The most important was the introduction of the lightweight delta wing late in 1972, which reduced the size of the solid boosters and chopped 1 million pounds from the overall weight. Additional results changed the front of the external tank from a cone to an ogive and moved the solid boosters rearward, placing their nozzles farther from the orbiter. The modifications reduced drag, minimized aerodynamic interference on the orbiter, and increased stability by moving the aerodynamic center aft.

The activity disclosed and addressed problems that initially had not been known to exist. Because both the liquid main engines and the solids had nozzles that gimbaled, it was clear that they had enough power to provide control during ascent. Aerodynamic control would not be necessary, and managers believed that the orbiter could set its elevons in a single position through the entire flight to orbit. But work in wind tunnels subsequently showed that aerodynamic forces during ascent would impose excessive loads on the wings. This required the elevons to move while in powered flight to relieve these loads. Uncertainties in the wind tunnel data then broadened this requirement to incorporate an active system that prevented overloading the elevon actuators. This system also helped the Shuttle to fly a variety of ascent trajectories, which imposed different elevon loads from one flight to the next.[637]

Much wind tunnel work involved issues of separation: Enterprise from its carrier aircraft, solid boosters from the external tank after burnout. At NASA Ames, a 14-foot transonic tunnel investigated problems of Enterprise and its 747. Using the same equipment, engineers addressed the separation of an orbiter from its external tank. This was supposed to occur in near-vacuum, but it posed aerodynamic problems during an abort.

The solid boosters brought their own special issues and nuances. They had to separate cleanly; under no circumstances could a heavy steel casing strike a wing. Small solid rocket motors, mounted fore and aft on each booster, were to push them away safely. It then was necessary to understand the behavior of their exhaust plumes, for these small motors were to blast into onrushing airflow that could blow their plumes against the orbiter's sensitive tiles or the delicate aluminum skin of the external tank. Wind tunnel tests helped to define appropriate angles of fire while also showing that a short, sharp burst from the motors was best.[638]

Prior to the first orbital flight in 1981, the program racked up 46,000 wind tunnel hours. This consisted of 24,900 hours for the orbiter, 17,200 for the mated launch configuration, and 3,900 for the carrier aircraft program. During the 9 years from contract award to first flight, this was equivalent to operating a facility 16 hours a day, 6 days a week. Specialized projects demanded unusual effort, such as an ongoing attempt to minimize model-to-model and tunnel-to-tunnel discrepancies. This work alone conducted 28 test series and used 14 wind tunnels.[639]

Structural tests complemented the work in aerodynamics. The mathematics of structural analysis was well developed, embodied in the computer program NASTRAN, which dealt with strength under load while addressing issues of vibration, bending, and flexing. The equations of NASTRAN were linear and algebraic, which meant that in principle they were easy to solve. The problem was that there were too many of them, for the most detailed mathematical model of the orbiter's structure had some 50,000 degrees of freedom. Analysts introduced abridged versions that cut this number to 1,000 and then relied on experimental tests for data that could be compared with the predictions of the computers.[640]
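The "linear and algebraic" equations mentioned above are stiffness equations of the form K u = f: a stiffness matrix times unknown displacements equals the applied loads. A minimal sketch with a toy two-spring system (the numbers are illustrative only, not Shuttle data); NASTRAN assembles and solves the same kind of system for tens of thousands of degrees of freedom:

```python
# Two springs in series: k1 ties node 1 to ground, k2 ties node 2 to node 1.
# A unit load pulls on the free end (node 2). Solve K u = f for displacements.
k1, k2 = 4.0, 2.0            # spring stiffnesses (illustrative units)
K = [[k1 + k2, -k2],
     [-k2,      k2]]         # assembled 2x2 stiffness matrix
f = [0.0, 1.0]               # loads: nothing at node 1, unit pull at node 2

# Solve the 2x2 system directly by Cramer's rule.
det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
u1 = (f[0] * K[1][1] - K[0][1] * f[1]) / det   # node 1 displacement
u2 = (K[0][0] * f[1] - f[0] * K[1][0]) / det   # node 2 displacement

# Hand check for springs in series: end deflection = f/k1 + f/k2 = 0.25 + 0.5.
```

The difficulty the text describes is purely one of scale: the same linear solve, but with a 50,000-by-50,000 matrix, which is why analysts cut the model down to about 1,000 degrees of freedom and leaned on experiment for validation.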

There were numerous modes of vibration, with frequencies that changed as the Shuttle burned its propellants. Knowledge of these frequencies was essential, particularly in dealing with "pogo." This involved a longitudinal oscillation like that of a pogo stick, with propellant flowing in periodic surges within its main feed line. Such surges arose when their frequency matched that of one of the structural modes, producing resonance. The consequent variations in propellant-flow rate then caused the engine thrust to oscillate at that same rate. This turned the engines into sledgehammers, striking the vehicle structure at its resonant frequency, and made the pogo stronger. It weakened only when consumption of propellant brought a further change in the structural frequency that broke the resonance, allowing the surges to die out.

Pogo was common; it had been present on earlier launch vehicles. It had brought vibrations with acceleration of 9 g’s in a Titan II, which was unacceptably severe. Engineering changes cut this to below 0.25 g, which enabled this rocket to launch the manned Gemini spacecraft. Pogo reappeared in Apollo during the flight of a test Saturn V in 1968. For the Shuttle, the cure was relatively simple, calling for installation of a gas-filled accumulator within the main oxygen line. This damped the pogo oscillations, though design of this accumulator called for close understanding of the pertinent frequencies.[641]
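The accumulator cure can be sketched with a lumped model: treat the propellant feed line as a spring-mass oscillator, f = (1/2π)·√(k/m). A gas-filled accumulator acts as a soft spring in the line, lowering its effective stiffness and pushing its natural frequency away from the structural mode. All numbers below are hypothetical, chosen only to illustrate the detuning; they are not Shuttle values:

```python
import math

def feedline_frequency_hz(stiffness: float, mass: float) -> float:
    """Natural frequency of a lumped propellant-line model, f = sqrt(k/m)/(2*pi).
    Stiffness and mass are in consistent (illustrative) units."""
    return math.sqrt(stiffness / mass) / (2.0 * math.pi)

structural_mode_hz = 20.0   # hypothetical structural mode of the vehicle

# Stiff line, no accumulator: ~20.5 Hz, nearly coincident with the
# structural mode -- the resonance condition that drives pogo.
line_no_accum = feedline_frequency_hz(2.0e6, 120.0)

# Gas-filled accumulator softens the line (lower effective stiffness):
# ~6.5 Hz, well clear of the structural mode, so surges cannot lock in.
line_with_accum = feedline_frequency_hz(2.0e5, 120.0)

near_resonance = abs(line_no_accum - structural_mode_hz) < 1.0
detuned = abs(line_with_accum - structural_mode_hz) > 5.0
```

This is why, as the text notes, the fix was "relatively simple" in hardware but demanded close knowledge of the pertinent frequencies: the accumulator must be sized so the detuned line frequency stays clear of every structural mode the vehicle passes through as propellant burns off.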

The most important structural tests used actual flight hardware, including the orbiter Enterprise and STA-099, a full-size test article that later became the Challenger. In 1978, Enterprise went to NASA Marshall, where the work now included studies on the external tank. For vibrational tests, engineers assembled a complete Shuttle by mating Enterprise to such a tank and to a pair of dummy solid boosters. One problem that these models addressed came at lift-off. The ignition of the three main engines imposes a sudden load of more than 1 million pounds of thrust. This force bends the solid boosters, placing considerable stress at their forward attachments to the tank. If the solid boosters were to ignite at that moment, their thrust would add to the stress.

To reduce the force on the attachment, analysts took advantage of the fact that the solid boosters would not only bend but would sway back and forth somewhat slowly, like an upright fishing rod. The strain on the attachment would increase and decrease with the sway, and it was possible to have the solid boosters ignite at an instant of minimum load. This called for delaying their ignition by 2.7 seconds, which cut the total load by 25 percent. The main engines fired during this interval, which consumed propellant, cutting the payload by 600 pounds. Still, this was acceptable.[642]
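The delayed-ignition trick can be sketched by idealizing the post-ignition sway as a sinusoidal load at the forward attachment: peak bending at main-engine ignition, minimum half a sway cycle later. The 2.7-second delay is from the text; treating it as exactly half of a 5.4-second sway period is an assumption made purely for illustration:

```python
import math

def attach_load(t_s: float, amplitude: float = 1.0, period_s: float = 5.4) -> float:
    """Idealized sway load at the forward attachment after main-engine
    ignition: a cosine that peaks at t = 0 and reaches its minimum half
    a period later. Amplitude and period are illustrative, not real data."""
    return amplitude * math.cos(2.0 * math.pi * t_s / period_s)

# At t = 0 (main-engine ignition) the bending load is at its peak.
load_at_ignition = attach_load(0.0)

# 2.7 s later -- half a sway cycle in this model -- the load is at its
# minimum, the moment chosen for solid-booster ignition in the text.
load_at_delay = attach_load(2.7)
```

Igniting the solids at the trough rather than the peak means their thrust adds to the smallest possible attachment load, which is the essence of the 25-percent reduction the text describes.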

While Enterprise underwent vibration tests, STA-099 demonstrated the orbiter's structural strength by standing up to applied forces. Like a newborn baby that lacks hair, this nascent form of Challenger had no thermal-protection tiles. Built of aluminum, it looked like a large fighter plane. For the structural tests, tiles were not only unnecessary; they were counterproductive. The tiles had no structural strength of their own that had to be taken into account, and they would have suffered severe damage from the hydraulic jacks that applied the loads and forces.

STA-099 and Columbia had both been designed to accommodate a set of loads defined by a database designated 5.1. In 1978, there was a new database, 5.4, and STA-099 had to withstand its loads without acquiring strains or deformations that would render it unfit for flight. Yet in an important respect, this vehicle was untestable; it was not possible to validate the strength of its structural design merely by applying loads with those jacks. The Shuttle structure had evolved under such strong emphasis on saving weight that it was necessary to take full account of thermal stresses that resulted from temperature differences across structural elements during reentry. No facility existed that could impose thermal stresses on so large an object as STA-099, for that would have required heating the entire vehicle.

STA-099 and Columbia had both been designed to withstand ultimate loads of 140 percent of those in the 5.1 database. The structural tests on STA-099 now had to validate this safety factor for the new 5.4 database. Unfortunately, a test to 140 percent of the 5.4 loads threatened to produce permanent deformations in the structure. This was unacceptable, for STA-099 was slated for refurbishment into Challenger. Moreover, because thermal stresses could not be reproduced over the entire vehicle, a test to 140 percent would sacrifice the prospect of building Challenger while still leaving questions as to whether an orbiter could meet the safety factor of 140 percent.

NASA managers shaped the tests accordingly. For the entire vehicle, they used the jacks to apply stresses only up to 120 percent of the 5.4 loads. When the observed strains proved to match closely the values predicted by stress analysis, the 140-percent safety factor was deemed to be validated. In addition, the forward fuselage underwent the most severe aerodynamic heating, yet it was relatively small. It was subjected to a combination of thermal and mechanical loads that simulated the complete reentry stress environment in at least this limited region. STA-099 then was given a detailed and well-documented posttest inspection. After these tests, STA-099 was readied as the flight vehicle Challenger, joining Columbia as part of NASA's growing Shuttle fleet.[643]

Elastic Aerostructural Effects

The distortion of the shape of an airplane structure because of applied loads also creates a static aerodynamic interaction. When air loads are applied to an aerodynamic surface, it will bend or twist in proportion to the applied load, just like a spring. Depending on the surface configuration, the distorted shape can produce different aerodynamic properties when compared with the rigid shape. A swept wing, for example, will bend upward at the tip and may also twist as it is loaded.

This new shape may exhibit higher dihedral effect and altered spanwise lift distribution when compared with a rigid shape, impacting the performance of the aircraft. Because virtually all fighter aircraft have short wings and can withstand 7 to 9 g, their aeroelastic deformation is relatively small. In contrast, bomber, cargo, or high-altitude reconnaissance airplanes are typically designed for lower g levels, and the resulting structure, particularly its long, high-aspect-ratio wings, is often quite limber.

Notice that this is not a dynamic, oscillatory event, but a static condition that alters the steady-state handling qualities of the airplane. The prediction of these aeroelastic effects is a complex and not altogether accurate process, though the trends are usually correct. Because the effect is a static condition, the boundaries for safe flight can usually be determined during the buildup flight-test program, and, if necessary, placards can be applied to avoid serious incidents once the aircraft enters operational service.

The six-engine Boeing B-47 Stratojet was the first airplane designed with a highly swept, relatively thin, high aspect ratio wing. At higher transonic Mach numbers, deflection of the ailerons would cause the wing to twist sufficiently to cancel, and eventually exceed, the rolling moment produced by the aileron, thus producing an aileron reversal. (In effect, the aileron was acting like a big trim tab, twisting the wing and causing the exact opposite of what the pilot intended.) Aerodynamic loads are proportional to dynamic pressure, so the aeroelastic effects are usually more pronounced at high airspeed and low altitude, and this combination caused several fatal accidents with the B-47 during its flight-testing and early deployment. After flight-testing determined the magnitude and region of reduced roll effectiveness, the airplane was placarded to 425 knots to avoid roll reversal. In sum, then, an aeroelastic problem forced the limiting of the maximum performance achievable by the airplane, rendering it more vulnerable to enemy defenses. The B-47’s successor,

the B-52, had a much thicker wing root and more robust structure to avoid the problems its predecessor had encountered.
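The mechanism described above can be sketched with the standard linear-aeroelasticity approximation, in which aileron rolling power falls off with dynamic pressure and vanishes at the reversal condition. All numbers below are illustrative assumptions, not actual B-47 data.

```python
def dynamic_pressure(rho, v):
    """Dynamic pressure q = 1/2 * rho * V^2 (SI units)."""
    return 0.5 * rho * v**2

def aileron_effectiveness(q, q_reversal):
    """Classic linear estimate: rolling effectiveness decays linearly with
    dynamic pressure and passes through zero at the reversal condition,
    beyond which the aileron works in reverse."""
    return 1.0 - q / q_reversal

# Illustrative values only: sea-level density and an assumed reversal
# speed of 500 knots true airspeed.
rho = 1.225          # kg/m^3, sea level
kt = 0.5144          # m/s per knot
q_rev = dynamic_pressure(rho, 500 * kt)

for v_kt in (300, 425, 500):
    q = dynamic_pressure(rho, v_kt * kt)
    print(f"{v_kt} kt: relative roll effectiveness = "
          f"{aileron_effectiveness(q, q_rev):.2f}")
```

Because q grows with the square of airspeed, effectiveness erodes rapidly as a placard speed is approached, which is why the operational limit was set well below the reversal speed itself.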

The Mach 3.2+ Lockheed SR-71 Blackbird, designed to cruise at supersonic speeds at very high altitude, was another aircraft that exhibited significant aeroelastic structural deformation.[716] The Blackbird’s structure was quite limber, and the aeroelastic predictions for its behavior at cruise conditions were in error for the pitch axis. The SR-71 was a blended wing-body design with chines running along the forward sides of the fuselage and the engine nacelles, then blending smoothly into the rounded delta wing. These chines added lift to the airplane, and because they were well forward of the center of gravity, added a significant amount of pitching moment (much like a canard surface on an airplane such as the Wright Flyer or the Saab AJ-37 Viggen). Flight-testing revealed that the airplane required more “nose-up” elevon deflection at cruise than predicted, adding a substantial amount of trim drag. This reduced the range the Blackbird could attain, degrading its operational performance. To correct the problem, a small shim was added to the production fuselage break just forward of the cockpit. The shim tilted the forebody nose cone and its attached chine surfaces slightly upward, producing a nose-up pitching moment. This allowed the elevons to be returned to their trim faired position at cruise flight conditions, thus regaining the lost range capability.

Sadly, the missed prediction of the aeroelastic effects also contributed to the loss of one of the early SR-71s. While the nose cone forebody shim was being designed and manufactured, the contractor desired to demonstrate that the airplane could attain its desired range if the elevons were faired. To achieve this, Lockheed technicians added trim-altering ballast to the third production SR-71, then being used for systems and sensor testing. The ballast shifted the center of gravity about 2 percent aft from its normal position, to the aft design limit for the airplane. The engineers calculated that this would permit the elevons to be set in their faired position at cruise conditions for this one flight so that the SR-71 could meet its desired range performance. Instead, the aft cg, combined with the nonlinear aerodynamics

and aeroelastic bending of the fuselage, resulted in the airplane going out of control at the start of a turn at a cruise Mach number. The airplane broke in half, catapulting the pilot, who survived, from the cockpit. Unfortunately, his flight-test engineer/navigator perished.[717] Shim installation, together with other minor changes to the control system and engine inlets, subsequently enabled the SR-71 to meet its performance goals, and it became a mainstay of America’s national reconnaissance fleet until its retirement in early 1990.

Lockheed, the Air Force, and NASA continued to study Blackbird aeroelastic dynamics. In 1970, Lockheed proposed installation of a Loads Alleviation and Mode Suppression (LAMS) system on the YF-12A, installing very small canards called “exciter-” or “shaker-vanes” on the forebody to induce in-flight motions and subsequent suppression techniques that could be compared with analytical models, particularly NASA’s NASTRAN and Boeing’s FLEXSTAB computerized load prediction and response tools. The LAMS testing complemented Air Force–NASA research on other canard-configured aircraft such as the Mach 3+ North American XB-70A Valkyrie, a surrogate for large transport-sized supersonic cruise aircraft. The fruits of this research could be found on the trim canards used on the Rockwell B-1A and B-1B strategic bombers, which entered service in the late 1980s and notably improved their high-speed “on the deck” ride qualities, compared with their three low-altitude predecessors, the Boeing B-52 Stratofortress, Convair B-58 Hustler, and General Dynamics FB-111.[718]

Active Cooling Approaches

There are other proposed methods for protecting vehicles from high temperature while flying at high speed or during reentry. Several active cooling concepts have been proposed where liquid is circulated through a hot area, then through a radiator to dissipate the heat. These concepts are quite complex and the risk is very high: failure of an active cooling system could result in loss of a hypersonic vehicle within a few seconds. None has been demonstrated in flight. Although work is continuing on active cooling concepts, their application will probably not be realized for many years.

As we look ahead to the future of aviation, it is easy to merely assess the current fleet of successful aircraft or spacecraft and decide on what improvements we can provide, without considering the history and evolution that produced these vehicles. The danger is that some of the past problems will reappear unless the design and test communities are aware of their history. This paper has attempted to summarize some of the problems that have been encountered, and resolved, during the technology explosion in aviation that has occurred over the last 60 years. The manner in which the problems were discovered, the methods used to determine causes, and the final resolution or correction that was implemented have been presented. Hopefully, these brief summaries of historical events will stimulate further research by our younger engineers and historians into the various subjects covered, and to that end, the following works are particularly relevant.

Digital Computation Triggers Automated Structural Analysis

In 1946, the ENIAC, “commonly accepted as the first successful high-speed electronic digital computer,” became operational at the University

Simple example of discretized structure and single element (“Single Element Reactions”). NASA.

of Pennsylvania.[800] It took up as much floor space as a medium-sized house and had to be “programmed” by physically rearranging its control connections. Many advances followed rapidly: storing instructions in memory, conditional control transfer, random access memory, magnetic core memory, and the transistor-circuit element. With these and other advances, digital computers progressed from large and ungainly experimental devices to programmable, useful, commercially available (albeit expensive) machines by the mid-1950s.[801]

The FORTRAN programming language was also developed in the mid-1950s and rapidly gained acceptance in technical communities. This was a “high level language,” which allowed programming instructions to be written in terms that an engineer or analyst could understand; a compiler handled the translation into “machine language” that the computer could understand. International Business Machines (IBM) developed the original FORTRAN language and also some of the early practical digital computers. Other early digital computers were produced by Control Data Corporation (CDC) and UNIVAC. These developments made it possible to take the new methods of structural analysis that were emerging and implement them in an automated, repeatable manner.

The essence of these new methods was to treat a structure as a finite number of discrete elastic elements, rather than as a continuum. Reactions (forces and moments) and deflections are only calculated at specific points, called “nodes.” Elements connect the nodes. The stress and strain fields in the regions between the nodes do not need to be solved in the global analysis. They only need to be solved when developing the element-level solution, and once this is done for a particular type of element, that element is available as a prepackaged building block. Complex shapes and structures can then be built up from the simple elements. A simple example—using straight beam elements to model a curved beam structure—is illustrated here.

To find, for example, the relationship between the displacements of the nodes and the corresponding reactions, one could do the following (called the unit displacement method). First, a hypothetical unit displacement of one node in one degree of freedom (d.o.f.) only is assumed. This displacement is transposed into the local element coordinate systems of all affected elements. (In the corresponding figure, this would entail the relatively simple transformation between global horizontal and vertical displacements, and element axial and transverse displacements. The angular displacements would require no transformation, except in some cases a sign change.) The predetermined element stiffness matrices are used to find the element-level reactions. The element reactions are then translated back into global coordinates and summed to give the total structure reactions—to the single hypothetical displacement. This set of global reactions, plus zeroes for all forces unaffected by the assumed displacement, constitutes one column in the “stiffness matrix.” By repeating the exercise for every degree of freedom of every node, the stiffness matrix can be built. Then the reactions to any set of nodal displacements may be found by multiplying the stiffness matrix by the displacement vector, i.e., the ordered list of displacements. This entails difficult bookkeeping but simple math.
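The column-by-column assembly just described can be sketched for the simplest possible case: two collinear axial (spring) elements, whose element coordinates coincide with the global ones so no transformation is needed. The stiffness values are invented for illustration.

```python
import numpy as np

def element_k(k):
    """Stiffness matrix of a 1-D axial element: reactions at its two
    nodes due to unit displacements of those nodes."""
    return k * np.array([[1.0, -1.0], [-1.0, 1.0]])

# Two collinear elements joining three nodes (0-1-2); stiffnesses invented.
elements = [(0, 1, element_k(2.0)), (1, 2, element_k(3.0))]
ndof = 3

# Build the global stiffness matrix one column at a time, exactly as the
# unit-displacement method describes: impose a unit displacement at one
# d.o.f., collect every affected element's reactions, and sum them.
K = np.zeros((ndof, ndof))
for j in range(ndof):
    u = np.zeros(ndof)
    u[j] = 1.0                      # hypothetical unit displacement
    for (a, b, ke) in elements:
        r = ke @ u[[a, b]]          # element-level reactions
        K[[a, b], j] += r           # mapped back to global d.o.f. and summed

print(K)
# Column j holds the global reactions to a unit displacement of d.o.f. j.
```

Multiplying K by any displacement vector then yields the corresponding reactions, which is the "difficult bookkeeping but simple math" the text refers to.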

It is more common in engineering, however, to have to find unknown displacements and stresses from known applied forces. This answer is not possible to obtain so directly. (That is, if the process just described seems direct to you. If it does, you are probably an engineer. If it seems too trivial to have even mentioned, then you are probably a mathematician.)

Instead, after the stiffness matrix is found, it must be inverted to obtain the flexibility matrix. The inversion of large matrices is a science in itself. But it can be done, using a computer, if one has time to wait. Most of the science lies in improving the efficiency of the process. Another important output is the stress distribution throughout the structure. But this problem has already been solved at the element level for a hypothetical set of element nodal displacements. Scaling the generic stress distribution by the actual displacements, for all elements, yields the stress state throughout the structure.
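A minimal numerical sketch of this step, using an invented three-node stiffness matrix of the kind assembled above: fix one node as a support, apply a force, and recover the unknown displacements. (In practice one solves the linear system rather than inverting the reduced stiffness outright, which is part of the "efficiency" science the text mentions.)

```python
import numpy as np

# Global stiffness of a small two-element bar (node 0 - node 1 - node 2);
# values invented, with element stiffnesses 2.0 and 3.0.
K = np.array([[ 2.0, -2.0,  0.0],
              [-2.0,  5.0, -3.0],
              [ 0.0, -3.0,  3.0]])

# Fix node 0 (support) and apply a unit force at node 2.
free = [1, 2]
f = np.array([0.0, 1.0])

# The flexibility matrix is the inverse of the reduced stiffness; solving
# the system directly is equivalent and numerically preferable.
u_free = np.linalg.solve(K[np.ix_(free, free)], f)

u = np.zeros(3)
u[free] = u_free
print("nodal displacements:", u)

# Element forces recovered by scaling the generic element solution by the
# actual end displacements (force = k * elongation).
print("element forces:", 2.0 * (u[1] - u[0]), 3.0 * (u[2] - u[1]))
```

Both element forces come out equal to the applied force, as they must for elements in series, which is a quick check that the solution is self-consistent.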

There are, of course, many variations on this theme and many complexities that cannot be addressed here. The important point is that we have gone from an insoluble differential equation to a soluble matrix arithmetic problem. This, in turn, has enabled a change from individual analyses by hand of local portions of a structure to a modeling effort followed by an automated calculation of the stresses and deflections of the entire structure.

Pioneering papers on discretization of structures were published by Alexander Hrennikoff in 1941 at the Massachusetts Institute of Technology and by Richard Courant in 1943 at the mathematics institute he founded at New York University that would later bear his name. These papers did not lead to immediate application, in part perhaps because they were ahead of the necessary computational technology and in part because they were still somewhat theoretical and had not yet developed a well-formed practical implementation. The first example of what we now call the finite element method (FEM) is commonly considered to be a paper by M. J. Turner (Boeing), R. W. Clough (University of California at Berkeley, Civil Engineering Department), H. C. Martin (University of Washington, Aeronautical Engineering Department), and L. J. Topp in 1956.[802] This paper presented a method for plane stress problems, using triangular elements. John Argyris at the University of Stuttgart, Germany, also made important early contributions. The term “finite element method” was actually coined by Clough in 1960. The Civil Engineering Department at Berkeley became a major center of early finite element methods development.[803]

By the mid-1960s, aircraft companies, computing companies, universities, and Government research centers were beginning to explore the possibilities—although the method allegedly suffered some initial lack of interest in the academic world, because it bypassed elegant mathematical solutions in favor of numerical brute force.[804] However, the practical value could not long be ignored. The following insightful comment, made by a research team at the University of Denver in 1966 (working under NASA sponsorship), sums up the expectation of the period: “It is certain that this concept is going to become one of the most important tools of engineering in the future as structures become more complex and computers more versatile and available.”[805]

Modern Rotor Aerodynamic Limits Survey

The Modern Rotor Aerodynamic Limits Survey was a 10-year program launched in 1984, which encompassed flight efforts in 1987 and 1993–1994. In 1987, a Sikorsky UH-60A Black Hawk was tested with conventional structural instrumentation installed on the rotor blades. Then:

. . . Sikorsky Aircraft was [subsequently] contracted to build a set of highly instrumented blades for the Black Hawk test aircraft: a pressure blade with 242 absolute pressure transducers and a strain-gauge blade with an extensive suite of strain gauges and accelerometers. . . approximately 30 gigabytes of data were obtained in 1993–94 and installed in an electronic database that was immediately accessible to the domestic rotorcraft industry.[948]

NASTRAN model and NASTRAN to static test comparison. NASA.

The two types of measurement systems are complementary. Strain gauges give an indication of the total load in a member, but little insight to the details of where and how the load is generated. The pressure taps show the distribution of the applied aerodynamic load, but only at given stations, so the total load estimate depends on how one computes the data through the unknown regions between the pressure transducers. The combination of both types of data is most useful to researchers trying to correlate computational loads predictions with the test data.
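The dependence of the total-load estimate on the assumed variation between taps can be illustrated with a simple trapezoidal sum over the instrumented stations. The span stations and sectional loads below are invented for illustration, not UH-60A data.

```python
# Sectional loads are known only at discrete span stations, so the total
# blade load depends on how the gaps between transducers are filled in.
stations = [0.2, 0.4, 0.6, 0.8, 0.95]            # fraction of rotor radius
loading = [150.0, 220.0, 310.0, 420.0, 180.0]    # load per unit span (invented)

def trapezoid(x, y):
    """Total load assuming straight-line variation between stations."""
    return sum(0.5 * (y[i] + y[i + 1]) * (x[i + 1] - x[i])
               for i in range(len(x) - 1))

total = trapezoid(stations, loading)
print(f"estimated total load: {total:.1f}")
```

A spline fit through the same points, or a different assumption about the unmeasured root and tip regions, would give a different total, which is exactly why the combined pressure and strain-gauge data sets were so valuable for anchoring the estimates.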


HiMAT was a small, unpiloted aircraft (23.5 feet long, 15.6-foot wingspan, weight just over 3,000 pounds) somewhat representative of a fighter-type configuration, flown between 1979 and 1983, and developed to evaluate the following set of technologies and features:

• Close-coupled canard.

• Winglets.

• Digital fly-by-wire flight control.

• Composite structure.

• Aeroelastic tailoring.

• Supercritical airfoil.

It was intended that the benefits of these collected advances be shown together rather than separately and on an unpiloted platform, so that


HiMAT Electro-Optical Flight Deflection Measurement System. NASA.

the vehicle could be tested more aggressively without danger to a pilot.[949]

“Aeroelastic tailoring” refers to the design of a structure to achieve aerodynamically favorable deformation under load, rather than the more traditional approach of simply minimizing deformation. The goal of aeroelastic tailoring on the HiMAT “. . . was to achieve an aerodynamically favorable spanwise twist distribution for maneuvering flight conditions” in the canard and the outboard wing. “The NASTRAN program was used to compute structural deflections at each model grid point. Verification of these deflections was accomplished by performing a loads test prior to delivery of the vehicle to NASA.” The ground-test loads were based on a sustained 8-g turn at Mach 0.9, which was one of the key performance design points of the aircraft. The NASTRAN model and a comparison between predicted and measured deflections are shown in the accompanying figure. Canard and wing twist were less than predicted. The difference was attributed to insufficient understanding of the matrix-dominated laminate material properties.[950]

The vehicle was also equipped with a system to measure deflections of the wing surface in flight. Light emitting diodes (LEDs)—referred to as targets—on the wing upper surface were detected by a photodiode
array mounted on the fuselage, at a location overlooking the wing. Three inboard targets were used to determine a reference plane, from which the deflection of the remaining targets could be measured. To measure wing twist, targets were positioned primarily in pairs along the front and rear wing spars.[951] The HiMAT wing had a relatively small number of targets—only two pairs besides the inboard reference set—so the in-flight measurements were not a detailed survey of the wing by any means. Rather, they provided measurement at a few key points, which could then be compared with the NASTRAN data and the ground loads test data. Target and receiver locations are illustrated here, together with a sample of the deflection data at the 8-g maneuver condition. In-flight deflection data showed similar twist to the ground-test data, indicating that the aerodynamic loads were well predicted.[952]
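The geometry behind the spar-pair arrangement is simple: local twist follows from the difference in vertical deflection between a front-spar and a rear-spar target divided by the distance between them. The numbers below are hypothetical, not HiMAT flight data.

```python
import math

# Hypothetical vertical deflections (inches) at one front-spar / rear-spar
# target pair, and the chordwise distance between the two targets.
z_front, z_rear = 4.10, 3.55    # upward deflection at each target
spacing = 30.0                  # chordwise distance between the pair

# Local twist from the deflection difference across the chord. With the
# front-spar target nearer the leading edge, a positive angle here means
# the section twists leading-edge up as it bends.
twist_deg = math.degrees(math.atan2(z_front - z_rear, spacing))
print(f"local twist: {twist_deg:.2f} deg")
```

With only two such pairs outboard of the reference set, the HiMAT system gave twist at a few stations rather than a full spanwise distribution, which is why it served as a spot check against the NASTRAN predictions rather than a survey.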

The HiMAT was an early step in the development of aeroelastic tailoring capability, providing a set of NASTRAN data, static load test data, and flight-test data for surface deflection at a given loading condition. The project also proved out the electro-optical system for in-flight deflection measurements, which would later be used in the X-29 project.

High-Temperature Structures and Materials

T. A. Heppenheimer

Taking fullest advantage of the high-speed potential of rocket and air-breathing propulsion systems required higher-temperature structures. Researchers recognized that aerothermodynamics involved linking aerodynamic and thermodynamic understanding with the mechanics of thermal loading and deformation of structures. This drove use of new structural materials. NASA and other engineers would experiment with active and passive thermal protection systems, metals, and materials.

IN AEROSPACE ENGINEERING, high-temperature structures and materials solve two problems. They are used in flight above Mach 2 to overcome the elevated temperatures that occur naturally at such speeds. They also are extensively used at subsonic velocities, in building high-quality turbofan engines, and for the protection of structures exposed to heating.

Aluminum loses strength when exposed to temperatures above 210 degrees Fahrenheit (°F). This is why the Concorde airliner, which was built of this material, cruised at Mach 2.1 but did not go faster.[1013] Materials requirements come to the forefront at higher speeds and escalate sharply as airplanes’ speeds increase. The standard solutions have been to use titanium and nickel, and a review of history shows what this has meant.

Many people wrote about titanium during the 1950s, but to reduce it to practice was another matter. Alexander “Sasha” Kartveli, chief designer at Republic Aviation, proposed a titanium F-103 fighter, but his vision outreached his technology, and although started, it never flew. North American Aviation’s contemporaneous Navaho missile program introduced chemical milling (etching out unwanted material) for aluminum as well as for titanium and steel, and was the first to use titanium skin in an aircraft. However, the version of Navaho that

The Lockheed Blackbird experienced a wide range of upper surface temperatures, up to 600 °F. NASA.

was to use these processes never flew, as the program was canceled in 1957.[1014]

The Lockheed A-12 Blackbird, progenitor of a family of exotic Mach 3.2 cruisers that included the SR-71, encountered temperatures as high as 1,050 °F, which required that 93 percent of its structural weight be titanium. The version selected was B-120 (Ti-13V-11Cr-3Al), which has the tensile strength of stainless steel but weighs only half as much. But titanium is not compatible with chlorine, cadmium, or fluorine, which led to difficulties. A line drawn on a sheet of titanium with a pen would eat a hole into it in a few hours. Boltheads tended to fall away from assemblies; this proved to result from tiny cadmium deposits made by tools. This brought removal of all cadmium-plated tools from toolboxes. Spot-welded panels produced during the summer tended to fail because the local water supply was heavily chlorinated to kill algae. The managers took to washing the parts in distilled water, and the problem went away.[1015]

The SR-71 was a success. Its shop-floor practice with titanium at first was classified but now has entered the aerospace mainstream. Today’s commercial airliners—notably the Boeing 787 and the Airbus A-380, together with their engines—use titanium as a matter of routine. That is because this metal saves weight.

Beyond Mach 4, titanium falters and designers must turn instead to alternatives. The X-15 was built to top Mach 6 and to reach 1,200 °F. In competing for the contract, Douglas Aircraft proposed a design that was to use magnesium, whose properties were so favorable that the aircraft would only reach 600 °F. But this concept missed the point, for managers wanted a vehicle that would cope successfully with temperatures of 1,200 °F. Hence it was built of Inconel X, a nickel alloy.[1016]

High-speed flight represents one application of advanced metals. Another involves turbofans for subsonic flight. This application lacks the drama of Mach-breaking speeds but is far more common. Such engines use turbine blades, with the blade itself being fabricated from a single-crystal superalloy and insulated with ceramics. Small holes in the blade promote a circulation of cooler gas that is ducted downstream from high-pressure stages of the compressor. The arrangement can readily allow turbines to run at temperatures 750 °F above the melting point of the superalloy itself.[1017]
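The scale of that 750 °F margin can be framed with the standard overall cooling-effectiveness ratio, which expresses how much of the gas-to-coolant temperature difference the cooling scheme removes from the metal. All temperatures below are invented round numbers, not data for any particular engine.

```python
def cooling_effectiveness(t_gas, t_metal, t_coolant):
    """Standard definition: fraction of the gas-to-coolant temperature
    difference by which the cooling holds the metal below the gas."""
    return (t_gas - t_metal) / (t_gas - t_coolant)

# Invented example: gas 750 F hotter than the allowable metal temperature,
# with compressor-bleed coolant at 1,200 F (all temperatures in F).
phi = cooling_effectiveness(t_gas=2950.0, t_metal=2200.0, t_coolant=1200.0)
print(f"required overall cooling effectiveness: {phi:.2f}")
```

Even a modest effectiveness on this measure, combined with the ceramic thermal barrier, is what lets the turbine inlet run well above the point at which the bare superalloy would melt.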

The Air Force JB-47E Fly-By-Wire Project

The USAF Flight Dynamics Laboratory at Wright-Patterson Air Force Base (AFB), OH, sponsored a number of technology efforts and flight-test programs intended to increase the survivability of aircraft flight control system components such as fly-by-wire hydraulic actuators. Beginning in 1966, a Boeing B-47E bomber was progressively modified (being redesignated JB-47E) to incorporate analog computer-controlled fly-by-wire actuators for both pitch and roll control, with pilot inputs being provided via a side stick controller. The program spanned three phases. For Phase I testing, the JB-47E only included fly-by-wire in its pitch axis. This axis was chosen because the flight control system in the standard B-47E was known to have a slow response in pitch because of the long control cables to the elevator stretching under load. Control signals to the pitch axis FBW actuator were generated by a transducer attached to the pilot’s control column. The pilot had a simple switch in the cockpit that allowed him to switch between the standard hydromechanical flight control system (which was retained as a backup) and the computer-controlled FBW system. Modified thus, the JB-47E flew for the first time in December 1967. Test pilots reported that the modified B-47 had better handling qualities than were attainable with the standard B-47E elevator control system, especially in high-speed, low-level flight.[1134]

Phase II of the JB-47E program added fly-by-wire roll control and a side stick controller that used potentiometers to measure pilot input. By the end of the flight-test program, over 40 pilots had flown the FBW JB-47E. The Air Force chief test pilot during Phase II, Col. Frank Geisler, reported: “In ease of control there is no comparison between the standard system and the fly-by-wire. The fly-by-wire is superior in every aspect concerning ease of control. . . . It is positive, it is rapid—it responds well—and best of all the feel is good.”[1135] Before the JB-47E Phase III flight-test program ended in early 1969, a highly reliable four-channel redundant
electrohydraulic actuator had been installed in the pitch axis and successfully evaluated.[1136] By this time, the Air Force had already initiated Project 680J, the Survivable Flight Control System (SFCS), which resulted in the prototype McDonnell-Douglas YF-4E Phantom aircraft being modified into a testbed to evaluate the potential benefits of fly-by-wire in a high-performance, fighter-type aircraft.[1137] The SFCS YF-4E was intended to validate the concept that dispersed, redundant fly-by-wire flight control elements would be less vulnerable to battle damage, as well as to improve the performance of the flight control system and increase overall mission effectiveness.

Strike Technology Testbed

In the summer of 1991, a flight-test effort oriented to close air support and battlefield air interdiction began. The focus was to demonstrate technologies to locate and destroy ground targets day or night, good weather or bad, while maneuvering at low altitudes. The AFTI/F-16 was modified with two forward-looking infrared sensors mounted in turrets on the upper fuselage ahead of the canopy. The pilot was equipped with a helmet-mounted sight that was integrated with the infrared sensors. As he moved his head, they followed his line of sight and transmitted their images to eyepieces mounted in his helmet. The nose-mounted canards used in earlier AFTI/F-16 testing were removed. Testing emphasized giving pilots the capability to fly their aircraft and attack targets in darkness or bad weather. To assist in this task, a digital terrain map was stored
in the aircraft computer. Advanced terrain following was also evaluated. This used the AFTI/F-16’s radar to scan terrain ahead of the aircraft and automatically fly over or around obstacles. The pilot could select minimum altitudes for his mission. The system would automatically determine when the aircraft was about to descend below this altitude and initiate a 5 g pullup maneuver. The advanced terrain following system was connected to the Automated Maneuvering Attack System, enabling the pilot to deliver weapons from altitudes as low as 500 feet in a 5 g turn. An automatic Pilot Activated Recovery System was integrated with the flight control system. If the pilot became disoriented at night or in bad weather, he could activate a switch on his side controller. This caused the flight control computer to automatically recover the aircraft, putting it into a wings-level climb. Many of these technologies have subsequently transitioned into upgrades to existing fighter/attack aircraft.[1187]
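The minimum-altitude guard described above amounts to a simple predictive trigger. The sketch below is invented logic under stated assumptions (a fixed look-ahead time on current altitude and sink rate), not the actual AFTI/F-16 implementation, which used radar terrain data and a full flight-control integration.

```python
def needs_pullup(altitude_ft, sink_rate_fps, floor_ft, lookahead_s=2.0):
    """Project altitude a short time ahead from the current sink rate and
    command the recovery maneuver if the projection crosses the
    pilot-selected minimum altitude. All parameter names are illustrative."""
    predicted_ft = altitude_ft - sink_rate_fps * lookahead_s
    return predicted_ft < floor_ft

# Descending fast toward a 500-ft floor: trigger the 5 g pullup.
print(needs_pullup(900.0, 250.0, 500.0))
# Gentle descent with plenty of margin: no action.
print(needs_pullup(1500.0, 100.0, 500.0))
```

The real system's value lay in tying such a trigger directly into the flight control computer, so the recovery happened whether or not the pilot was able to react.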

The final incarnation of this unique aircraft would be as the AFTI/F-16 power-by-wire flight technology demonstrator.