
High Stability Engine Control

NASA Lewis (now Glenn) Research Center evaluated an automated computerized engine control system that sensed and responded to high levels of engine inlet airflow turbulence to prevent sudden in-flight engine compressor stalls and potential engine failures. Known as High Stability Engine Control (HISTEC), the system used a high-speed digital processor to evaluate airflow data from engine sensors. The technology involved in the HISTEC approach was intended to control distortion at the engine face. The HISTEC system included two major functional subelements: a Distortion Estimation System (DES) and a Stability Management Control
(SMC). The DES is an aircraft-mounted, high-speed computer processor. It uses state-of-the-art algorithms to estimate the amount and type of distortion present at the engine face based on measurements from pressure sensors in the engine inlet near the fan. Maneuver information from the digital flight control system and predictive angle-of-attack and angle-of-yaw algorithms are used to provide estimates of the type and extent of airflow distortion likely to be encountered by the engine. From these inputs, the DES calculates the effects of the engine face distortion on the overall propulsion system and determines appropriate fan and compressor pressure ratio commands. These are then passed to the SMC as inputs. The SMC performs an engine stability assessment using embedded stall margin control laws. It then issues actuator commands to the engine to best accommodate the estimated distortion.[1276]
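The following is a conceptual sketch in Python of the DES-to-SMC data flow described above; it is emphatically not the actual HISTEC flight software, and every name, signal, and gain in it is a hypothetical stand-in for the real algorithms:

```python
# Conceptual sketch only (not the actual HISTEC flight code): the DES estimates
# inlet distortion from pressure-sensor and maneuver data, then hands pressure-
# ratio commands to the SMC, which issues accommodating engine trim commands.
from dataclasses import dataclass

@dataclass
class DistortionEstimate:
    magnitude: float    # estimated distortion intensity at the engine face
    fan_pr_cmd: float   # requested fan pressure-ratio adjustment (hypothetical)
    comp_pr_cmd: float  # requested compressor pressure-ratio adjustment

def distortion_estimation_system(inlet_pressures, alpha_deg, beta_deg):
    """DES step: estimate distortion from inlet pressures plus maneuver data."""
    mean_p = sum(inlet_pressures) / len(inlet_pressures)
    # Simple spatial-nonuniformity metric standing in for the real algorithms
    spatial = max(abs(p - mean_p) for p in inlet_pressures) / mean_p
    maneuver = 0.02 * (abs(alpha_deg) + abs(beta_deg))  # illustrative weighting
    mag = spatial + maneuver
    return DistortionEstimate(mag, fan_pr_cmd=-0.5 * mag, comp_pr_cmd=-0.3 * mag)

def stability_management_control(est: DistortionEstimate) -> dict:
    """SMC step: assess stall margin and issue accommodating actuator trims."""
    return {"fan_trim": est.fan_pr_cmd, "compressor_trim": est.comp_pr_cmd}

# Example: moderately distorted inlet flow during a maneuver
est = distortion_estimation_system([98.0, 101.0, 103.0, 99.0], alpha_deg=12.0, beta_deg=3.0)
print(stability_management_control(est))
```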

A dozen flights were flown on the ACTIVE F-15 aircraft at Dryden from July 15 to August 26, 1997, to validate the HISTEC concept, during which the system successfully directed the engine control computer to automatically command engine trim changes to adjust for changes in inlet turbulence level. The result was improved engine stability when inlet airflow was turbulent and increased engine performance when the airflow was stable.[1277]

System Verification Units

In addition to the DOE-NASA units, NASA Lewis participated with the Bureau of Reclamation in experiments with two other turbines near Medicine Bow, WY. Both of these machines were designated as system verification units (SVU) because their purpose was to verify the concept of integrating wind turbine generators with hydroelectric power networks. This was viewed as an important step in the Bureau of Reclamation’s long-range program of supplementing hydroelectric power generation with wind turbine power generation. One of the two turbines was a new design developed by the Hamilton Standard Division of United Technologies Corp.: a 4-megawatt WTS-4 system installed in the Medicine Bow area. A Swedish company, Karlskronavarvet (KKRV), was selected as a major subcontractor responsible for the design and fabrication of the nacelle hardware. The WTS-4 had a two-blade fiberglass downwind rotor that was 256.4 feet in diameter. For over 20 years, this 4-megawatt machine remained the highest power-rated wind turbine generator ever built. In a reverse role, an additional 3-megawatt version of the same machine was built for the Swedish government, with KKRV as the prime contractor and Hamilton Standard as the subcontractor.[1507]

The other SVU turbine was a Mod-2 design. While NASA engineers determined that the initial Mod-2 wind turbine generator performance was acceptable, they noted areas where improvement was needed. The problems encountered were primarily hardware-oriented and were attributed to fabrication or design deficiencies. Identification of these problems led to a number of modifications, including changes in the hydraulic, electric, and control systems; rework of the rotor hub flange; addition of a forced-lubrication system; and design of a new low-speed shaft.


Third-Generation Advanced Multimegawatt Wind Turbines—The Mod-5 Program (1980-1988)

The third-generation (Mod-5) program, which started in 1980, was intended to incorporate the experiences from the earlier DOE-NASA wind turbines, especially the Mod-2 experiences, into a final proof-of-concept system for commercial use by an electric utility company. Two construction contracts were awarded to build the Mod-5 turbines—one unit to General Electric, which was designated the Mod-5A, and one unit to Boeing, which was designated the Mod-5B. As intermediate steps between the Mod-2 and Mod-5, two conceptual studies were undertaken for fabrication of both an advanced large wind turbine designated the Mod-3 and a medium turbine designated the Mod-4. Likewise, both a large-scale Mod-5 and medium-scale Mod-6 were planned as the final Wind Energy Program turbines. The Mod-3 and Mod-4 studies, however, were not carried through to construction of the turbines, and the Mod-6 program was canceled because of budget constraints and changing priorities resulting from a decline in oil prices following the end of the oil
crisis of the 1970s. Also, General Electric chose not to proceed beyond the design phase with its Mod-5A. As a result, only the Boeing Mod-5B was constructed and placed into power utility service.[1508]

Although the Mod-5A was never built, General Electric did complete the detailed design work and all of the significant development tests and documented the entire Mod-5A program. The planned Mod-5A system contained many interesting features that NASA Lewis chose to preserve for future reference. The Mod-5A wind turbine was expected to generate electricity at a cost competitive with conventional forms of power generation once the turbines were in volume production. The program was divided into three phases: conceptual design, which was completed in March 1981; preliminary design, which was completed in May 1982; and final design, which was started in June 1982. The Mod-5A was planned to have a 7.3-megawatt generator, a 400-foot-diameter two-bladed teetered rotor, and hydraulically actuated ailerons over the outboard 40 percent of the blade span to regulate the power and control shutdown. The blades were to be made of epoxy-bonded wood laminates. The yaw drive was to include a hydraulically actuated disk brake system, and the tower was to be a soft-designed welded steel plate cylindrical shell with a conical base. The Mod-5A was designed to operate in wind speeds of between 12 and 60 mph at hub height. The system was designed for automatic unattended operation and for a design life of 30 years.[1509]

The Mod-5B, which was the only Mod-5 unit built, was physically the world’s largest wind turbine generator. The Mod-5B represented very advanced technology, including an upwind teetered rotor, compact planetary gearbox, pitchable tip blade control, soft-shell-type tower, and a variable-speed electrical induction generator/control system. Variable speed control enabled the turbine speed to vary with the wind speed, resulting in an increase in energy capture and a decrease in fatigue loads on the drive train. The system underwent a number of design changes before the final fabricated version was built. For example, the turbine originally was planned to have a blade swept diameter of 304 feet. This was increased to 420 feet and finally reduced to 320 feet because of the use of steel blade tips and control improvements. Also, the turbine generator was planned initially to be rated at 4.4 megawatts. This
was increased to 7.2 megawatts and then decreased to 3.2 megawatts in the final version because of the development of better tip control and load management. The rotor weighed 319,000 pounds and was mounted on a 200-foot tower. Extensive testing of the Mod-5B system was conducted, including 580 hours of operational testing and 660 hours of performance and structural testing. Performance testing alone generated over 72 reports reviewing test results and problems resolved.[1510]

The Mod-5B was the first large-scale wind turbine to operate successfully at variable rotational speeds, which varied from 13 to 17.3 revolutions per minute depending on the wind speed. In addition, the Mod-5B was the first large wind turbine with an apparent possibility of lasting 30 years. The turbine, with a total system weight of 1.3 million pounds, was installed at Kahuku on the north shore of Oahu, HI, in 1987 and was operated first by Hawaiian Electric Incorporated and later by the Makani Uwila Power Corporation. The turbine began rated-power operation on July 1, 1987. In January 1988, the Mod-5B was sold to the power utility, which continued to operate the unit as part of its power generation network until the small power utility ceased operations in 1996. In 1991, the Mod-5B produced a single wind turbine record of 1,256 megawatt-hours of electricity. The Mod-5B was operated in conjunction with 15 Westinghouse 600-kilowatt wind turbines. While the Westinghouse turbines were not part of the NASA program, the design of the turbines combined successful technology from NASA’s Mod-0A and Mod-2 programs.[1511]

The Mod-5B, which represented a significant decrease over the Mod-2 turbines in the cost of production of electricity, was designed for the sole purpose of providing electrical power for a major utility network. To achieve this goal, a number of changes were made over the Mod-2 systems, including changes in concepts, size, and design refinements. These changes were reflected in more than 20 engineering studies, which addressed issues such as variable pitch versus fixed pitch, optimum machine size, steel shell versus truss tower, blade aerodynamics, material selection, rotor control, tower height, cluster optimization, and
gearbox configuration. For example, the studies indicated that the loads problem was the decisive factor with regard to the use of a partial span variable pitch system rather than a fixed pitch rotor system; dynamic simulation led to selection of the variable speed generator; analysis of operational data enabled a significant reduction in the weight and size of the gearbox; and the development of weight and cost trend data for use in size optimization studies resulted in the formulation of machine sizing programs.[1512]

A number of design elements resulted in significant contributions to the success of the Mod-5B wind turbine. Aerodynamic improvement over the Mod-2, including improvements in vortex generators, trailing edge tabs, and better shape control, resulted in an 18-percent energy capture increase. Improved variable speed design resulted in an energy capture increase of greater than 7 percent (up to as high as 11 percent) over an equivalent synchronous generator system. Both cycloconverter efficiency and control optimization of rotor speed versus wind speed proved to be better than anticipated. Use of the variable speed generator system to control power output directly, as opposed to the pitch power control on the Mod-2, substantially reduced blade activity, especially at below rated power levels. The variable speed design also resulted in a substantial reduction in structural loads. Adequate structural integrity was demonstrated for all stress measurement locations. Lessons learned during the earlier operation of the Mod-2 systems resulted in improved yaw and pitch systems. Extensive laboratory simulation of control hardware and software likewise reduced control problems compared with Mod-2 systems.[1513] In summary, the Mod-5B machine represented a reliable proof-of-concept large horizontal-axis wind turbine conversion system capable of long-life production of electricity into a power grid system, thus fulfilling the DOE-NASA program objectives.

The Mod-5B was the last DOE-NASA wind turbine generator built under the Federal Wind Energy Program. In his paper on the Mod-5B wind turbine system, Boeing engineer R. R. Douglass noted the following size versus cost problem relating to the purchase of large wind turbines faced by power utility companies:

. . . large scale commercialization of large wind turbines suffers from the chicken and egg syndrome. That is, costs of units are so high when produced one or two at a time on prototype tooling that the utilities can scarcely afford to buy them. On the other hand, industry cannot possibly afford to invest the huge capital required for an automated high rate production capability without an established order base. To break this log jam will require a great deal of cooperation between government, industry, and the utilities.[1514]

Boeing noted, however, in its final Mod-5B report that: “In summary the Mod-5B demonstrated the potential to generate at least 11 percent more revenue at a given site than the original design goal. It also demonstrated that multi-megawatt class wind turbines can be developed with high dependability which ultimately should show up in reduced operation and maintenance costs.”[1515]

Inventing the Supercritical Wing

Whitcomb was hardly an individual content to rest on his laurels or bask in the glow of previous successes, and after his success with area ruling, he wasted no time in moving further into the transonic and supersonic research regime. In the late 1950s, the introduction of practical subsonic commercial jetliners led many in the aeronautical community to place a new emphasis on what would be considered the next logical step: a Supersonic Transport (SST). John Stack recognized the importance of the SST to the aeronautics program in NASA in 1958. As NASA placed its primary emphasis on space, he and his researchers would work on the next plateau in commercial aviation. Through the Supersonic Transport Research Committee, Stack and his successor, Laurence K. Loftin, Jr., oversaw work on the design of a Supersonic Commercial Air Transport (SCAT). The goal was to create an airliner capable of outperforming the cruise performance of the Mach 3 North American XB-70 Valkyrie bomber. Whitcomb developed a six-engine, arrowlike, highly swept wing SST configuration, called SCAT 4, that stood out as possessing the best lift-to-drag (L/D) ratio among the Langley designs.[194]

Manufacturers’ analyses indicated that Whitcomb’s SCAT 4 exhibited the lowest range and highest weight among a group of designs that would generate high operating and fuel costs, and that it was too heavy when compared with subsonic transports. Despite President John F. Kennedy’s June 1963 commitment to the development of “a commercially successful supersonic transport superior to that being built in any other country in the world,” Whitcomb saw the writing on the wall and quickly disassociated himself from the American supersonic transport program in 1963.[195] Always keeping in mind his priorities based on practicality and what he could do to improve the airplane, Whitcomb said: “I’m going back where I know I can make things pay off.”[196] For Whitcomb, practicality outweighed the lure of speed equated with technological progress.

Whitcomb decided to turn his attention back toward improving subsonic aircraft, specifically a totally new airfoil shape. Airfoils and wings had been evolving over the course of the 20th century. They reflected the ever-changing knowledge and requirements for increased aircraft performance and efficiency. They also represented the bright minds that developed them. The thin cambered airfoil of the Wright brothers, the thick airfoils of the Germans in World War I, the industry-standard Clark Y of the 1920s, and the NACA four- and five-digit series airfoils innovated by Eastman Jacobs exemplified advances in and general approaches toward airfoil design and theory.[197]

Despite these advances and others, subsonic aircraft flew at 85-percent efficiency.[198] The problem was that, as subsonic airplanes moved toward their maximum speed of 660 mph, increased drag and instability developed. Air moving over the upper surface of the wings reached supersonic speeds, while the rest of the airplane traveled at a slower rate. The plane had to fly at slower speeds, with decreased performance and efficiency.[199]

When Whitcomb returned to transonic research in 1964, he specifically wanted to develop an airfoil for commercial aircraft that delayed the onset of high transonic drag near Mach 1 by reducing air friction and turbulence across an aircraft’s major aerodynamic surface, the wing.


Whitcomb inspecting a supercritical wing model in the 8-Foot TPT. NASA.

Whitcomb went intuitively against conventional airfoil design, in which the upper surface curved downward on the leading and trailing edges to create lift. He envisioned a smoother flow of air by turning a conventional airfoil upside down. Whitcomb’s airfoil was flat on top with a downward curved rear section.[200] The shape delayed the formation of shock waves and moved them further toward the rear of the wing to increase total wing efficiency. The rear lower surface formed a deeper, more concave curve to compensate for the lift lost along the flattened wing top. The blunt leading edge facilitated better takeoff, landing, and maneuvering performance. Overall, Whitcomb’s airfoil slowed airflow, which lessened drag and buffeting, and improved stability.[201]

With the wing captured in his mind’s eye, Whitcomb turned it into mathematical calculations and transformed his findings into a wind tunnel model created by his own hands. He spent days at a time in the 8-Foot Transonic Pressure Tunnel (TPT), sleeping on a nearby cot when needed, as he took advantage of the 24-hour schedule to confirm his findings.[202]

Just as if he were still in his boyhood laboratory, Whitcomb stated that: "When I’ve got an idea, I’m up in the tunnel. The 8-foot runs on two shifts, so you have to stay with the job 16 hours a day. I didn’t want to drive back and forth just to sleep, so I ended up bringing a cot out here.”[203]

Whitcomb and researcher Larry L. Clark published their wind tunnel findings in “An Airfoil Shape for Efficient Flight at Supercritical Mach Numbers,” which summarized much of the early work at Langley. Their investigation compared a supercritical airfoil with a NACA airfoil. They concluded that the former developed a more abrupt drag rise than the latter.[204] Whitcomb presented those initial findings at an aircraft aerodynamics conference held at Langley in May 1966.[205] He called his new innovation a “supercritical wing” by combining “super” (meaning “beyond”) with “critical” Mach number, which is the speed at which supersonic flow revealed itself above the wing. Unlike a conventional wing, where a strong shock wave and boundary layer separation occurred in the transonic regime, a supercritical wing had both a weaker shock wave and less developed boundary layer separation. Whitcomb’s tests revealed that a supercritical wing with 35-degree sweep produced 5 percent less drag, improved stability, and encountered less buffeting than a conventional wing at speeds up to Mach 0.90.[206]

Langley Director of Aeronautics Laurence K. Loftin believed that Whitcomb’s new supercritical airfoil would reduce transonic drag and result in improved fuel economy. He also knew that wind tunnel data alone would not convince aircraft manufacturers to adopt the new airfoil. Loftin first endorsed the independent analyses of Whitcomb’s idea at the Courant Institute at New York University, which proved the viability of the concept. More importantly, NASA had to prove the value of the new technology to industry by actually building, installing, and flying the wing on an aircraft.[207]




The major players met in March 1967 to discuss turning Whitcomb’s concept into a reality. The practicalities of manufacturing, flight characteristics, structural integrity, and safety required a flight research program. The group selected the Navy Chance Vought F-8A fighter as the flight platform. The F-8A possessed specific attributes that made it ideal for the program. While not an airliner, the F-8A had an easily removable modular wing readymade for replacement, fuselage-mounted landing gear that did not interfere with the wing, engine thrust capable of operation in the transonic regime, and lower operating costs than a multi-engine airliner. Langley contracted Vought to design a supercritical wing for the F-8 and collaborated with Whitcomb during wind tunnel testing beginning in the summer of 1967. Unfortunately for the program, NASA Headquarters suspended all ongoing contracts in January 1968, and Vought withdrew from the program.[213]

Resolving the Challenge of Aerodynamic Damping

Researchers in the early supersonic era also faced the challenges posed by the lack of aerodynamic damping. Aerodynamic damping is the natural resistance of an airplane to rotational movement about its center of gravity while flying in the atmosphere. In its simplest form, it consists of forces created on aerodynamic surfaces that are some distance from the center of gravity (cg). For example, when an airplane rotates about the cg in the pitch axis, the horizontal tail, being some distance aft of the cg, will translate up or down. This translational motion produces a vertical lift force on the tail surface and a moment (force times distance) that tends to resist the rotational motion. This lift force opposes the rotation regardless of the direction of the motion. The resisting force will be proportional to the rate of rotation, or pitch rate. The faster the rotational rate, the larger will be the resisting force. The magnitude of

the resisting tail lift force is dependent on the change in angle of attack created by the rotation. This change in angle of attack is the vector sum of the rotational velocity and the forward velocity of the airplane. For low forward velocities, the angle of attack change is quite large and the natural damping is also large. The high aerodynamic damping associated with the low speeds of the Wright brothers flights contributed a great deal to the brothers’ ability to control the static longitudinal instability of their early vehicles.

At very high forward speed, the same pitch rate will produce a much smaller change in angle of attack and thus lower damping. For practical purposes, all aerodynamic damping can be considered to be inversely proportional to true velocity. The significance of this is that an airplane’s natural resistance to oscillatory motion, in all axes, disappears as the true speed increases. At hypersonic speeds (above Mach 5), any rotational disturbance will create an oscillation that will essentially not damp out by itself.
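In standard flight-dynamics notation, the argument can be sketched as follows (a minimal small-angle derivation; the tail arm $\ell_t$, tail area $S_t$, and tail lift-curve slope $a_t$ are illustrative symbols not used elsewhere in this text):

```latex
% Tail angle-of-attack change due to pitch rate q at forward speed V:
\Delta\alpha_t \approx \frac{q\,\ell_t}{V}
% Resulting tail lift and resisting (damping) moment:
\Delta L_t = \tfrac{1}{2}\rho V^2 S_t a_t \,\Delta\alpha_t
           = \tfrac{1}{2}\rho V S_t a_t \ell_t\, q,
\qquad
M_{\mathrm{damp}} = -\ell_t\,\Delta L_t
```

For a given pitch rate $q$, the induced angle of attack $\Delta\alpha_t$ falls off as $1/V$, which is the sense in which the effective aerodynamic damping is inversely proportional to true velocity.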

As airplanes flew ever faster, this lightly damped, oscillatory tendency became more obvious and was a hindrance to accurate weapons delivery for military aircraft and to pilot and passenger comfort for commercial aircraft. Evaluating the seriousness of the damping challenge in an era when aircraft design was changing markedly (from the straight-wing propeller-driven airplane to the swept and delta wing jet and beyond) occupied a great amount of attention from the NACA and early NASA researchers, who recognized that it would pose a continuing hindrance to the exploitation of the transonic and supersonic region, and the hypersonic beyond.[678]

In general, aerodynamic damping has a positive influence on handling qualities, because it tends to suppress the oscillatory tendencies of a naturally stable airplane. Unfortunately, it gradually disappears as the speed increases, indicating the need for some artificial method of suppressing these oscillations during high-speed flight. In the preelectronic flight control era, the solution was the modification of flight control systems to incorporate electronic damper systems, often referred to as Stability Augmentation Systems (SAS). A damper system for one axis consisted of a rate gyro measuring rotational rate in that axis, a gain-changing circuit that adjusted the size of the needed control command, and a

servo mechanism that added additional control surface commands to the commands from the pilot’s stick. Control surface commands were generated that were proportional to the measured rotational rate (feedback) but opposite in sign, thus driving the rotational rate toward zero.

Damper systems were installed in at least one axis of all of the Century-series fighters (F-100 through F-107), and all were successful in stabilizing the aircraft in high-speed flight.[679] Development of stability augmentation systems—and their refinement through contractor, Air Force-Navy, and NACA-NASA testing—was crucial to meeting the challenge of developing Cold War airpower forces, made yet more demanding because the United States and the larger NATO alliance chose a conscious strategy of using advanced technology to generate high-leverage aircraft systems that could offset larger numbers of less-individually capable Soviet-bloc designs.[680]

Early, simple damper systems were so-called single-string systems and were designed to be “fail-safe.” A single gyro, servo, and wiring system were installed for each axis. The feedback gains were quite low, tailored to the damping requirements at high speed, at which very little control surface travel was necessary. The servo travel was limited to a very small value, usually less than 2 degrees of control surface movement. A failure in the system could drive the servo to its maximum travel, but the transient motion was small and easily compensated by the pilot. Loss of a damper at high speed thus reduced the comfort level, or weapons delivery accuracy, but was tolerable, and, at lower speeds associated with takeoff and landing, the natural aerodynamic damping was adequate.
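A minimal sketch in Python of one axis of such a limited-authority damper, as described in the two paragraphs above; the feedback gain is hypothetical, while the 2-degree authority limit follows the figure cited in the text:

```python
# Single-axis (pitch) rate-damper sketch; gain value is illustrative only.
SERVO_LIMIT_DEG = 2.0  # limited servo authority, per the ~2-degree figure above
K_Q = 0.15             # deg of surface per deg/s of pitch rate (hypothetical)

def damper_command(pitch_rate_dps: float) -> float:
    """Surface command proportional to measured pitch rate, opposite in sign."""
    cmd = -K_Q * pitch_rate_dps
    # Limited authority keeps any failure transient small and pilot-recoverable.
    return max(-SERVO_LIMIT_DEG, min(SERVO_LIMIT_DEG, cmd))

def elevator_command(pilot_stick_deg: float, pitch_rate_dps: float) -> float:
    """Damper servo output sums with the pilot's stick command."""
    return pilot_stick_deg + damper_command(pitch_rate_dps)
```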

One of the first airplanes to utilize electronic redundancy in the design of its flight control system was the X-15 rocket-powered research airplane, which, at the time of its design, faced numerous unknowns. Because of the extreme flight conditions (Mach 6 and 250,000-foot altitude), the servo travel needed for damping was quite large, and the pilot could not compensate if the servo received a hard-over signal.

The solution was the incorporation of an independent, but identical, feedback “monitoring” channel in addition to the “working” channel in each axis. The servo commands from the monitor and working channel were continuously compared, and when a disagreement was detected, the system was automatically disengaged and the servo centered. This provided the equivalent level of protection to the limited-authority fail-safe damper systems incorporated in the Century-series fighters. Two of the three X-15s retained this fail-safe damper system throughout the 9-year NASA-Air Force-Navy test program, although a backup roll rate gyro was added to provide fail-operational, fail-safe capability in the roll axis.[681] Refining the X-15’s SAS system necessitated a great amount of analysis and simulator work before the pilots deemed it acceptable, particularly as the X-15’s stability deteriorated markedly at higher angles of attack above Mach 2. Indeed, one of the major aspects of the X-15’s research program was refining understanding of the complexities of hypersonic stability and control, particularly during reentry at high angles of attack.[682]

The electronic revolution dramatically reshaped design approaches to damping and stability. Once it was recognized that electronic assistance was beneficial to a pilot’s ability to control an airplane, the concept evolved rapidly. By adding a third independent channel, and some electronic voting logic, a failed channel could be identified and its signal “voted out,” while retaining the remaining two channels active. If a second failure occurred (that is, the two remaining channels did not agree), the system would be disconnected and the damper would become inoperable. Damper systems of this type were referred to as fail-operational, fail-safe (FOFS) systems. Further enhancement was provided by comparing the pilot’s stick commands with the measured airplane response and using analog computer circuits to tailor servo commands so that the airplane response was nearly the same for all flight conditions. These systems were referred to as Command Augmentation Systems (CAS). The next step in the evolution was the incorporation of a mathematical model of the desired aircraft response into the analog computer circuitry. An error signal was generated by comparing

the instantaneous measured aircraft response with the desired mathematical-model response, and the servo commands forced the airplane to fly per the mathematical model, regardless of the airplane’s inherent aerodynamic tendencies. These systems were called “model-following.”
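The model-following idea can be sketched as follows (hypothetical reference model and gain; the real systems of this era implemented such logic in analog circuitry rather than software):

```python
# Model-following sketch: drive the servo with the error between the desired
# reference-model response and the measured response.
K_ERR = 0.8  # error gain (hypothetical)

def reference_model(stick_cmd: float) -> float:
    """Desired pitch rate for a given stick input (simple linear model)."""
    return 2.5 * stick_cmd  # deg/s per unit stick deflection (illustrative)

def model_following_command(stick_cmd: float, measured_rate_dps: float) -> float:
    """Servo command proportional to the model-following error, forcing the
    airplane to track the model regardless of its inherent aerodynamics."""
    return K_ERR * (reference_model(stick_cmd) - measured_rate_dps)
```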

Even higher levels of redundancy were necessary for safe operation of these advanced control concepts after multiple failures, and the failure logic became increasingly more complex. Establishing the proper “trip” levels, where an erroneous comparison would result in the exclusion of one channel, was an especially challenging task. If the trip levels were too tight, a small difference between the outputs of two perfectly good gyros would result in nuisance trips, while trip levels that were too loose could result in a failed gyro not being recognized in a timely manner. Trip levels were usually adjusted during flight test to provide the safest settings.
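A sketch of the triplex voting and trip-level logic described in the last two paragraphs (the trip-level handling and mid-value selection are illustrative assumptions, not a documented implementation):

```python
# Triplex voting sketch: vote out a channel that disagrees with both others
# beyond the trip level; a second failure disengages the system (fail-safe).
def triplex_select(rates, trip_level):
    """rates: three gyro measurements for one axis.
    Returns (selected_rate, voted_out_index); (None, None) means disengage."""
    a, b, c = rates
    disagree = [
        abs(a - b) > trip_level and abs(a - c) > trip_level,  # channel 0
        abs(b - a) > trip_level and abs(b - c) > trip_level,  # channel 1
        abs(c - a) > trip_level and abs(c - b) > trip_level,  # channel 2
    ]
    if not any(disagree):
        return sorted(rates)[1], None        # all healthy: mid-value select
    if disagree.count(True) == 1:
        bad = disagree.index(True)           # one channel voted out
        good = [r for i, r in enumerate(rates) if i != bad]
        if abs(good[0] - good[1]) <= trip_level:
            return sum(good) / 2.0, bad      # fail-operational on two channels
    return None, None                        # second failure: disconnect damper
```

Choosing `trip_level` embodies the tradeoff noted above: too tight and healthy gyros trip the comparator; too loose and a failed gyro goes undetected.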

NASA’s Space Shuttle orbiter utilized five independent control system computers. Four had identical software. This provided fail-operational, fail-operational, fail-safe (FOFOFS) capability. The fifth computer used a different software program with a "get-me-home” capability as a last resort (often referred to as the "freeze-dried” control system computer).

The Critical Tool: Emergent High-Speed Electronic Digital Computing

During the Second World War, J. Presper Eckert and John Mauchly at the University of Pennsylvania’s Moore School of Electrical Engineering designed and built the ENIAC, an electronic calculator that inaugurated the era of digital computing in the United States. By 1951, they had turned this expensive and fragile instrument into a product that was manufactured and sold, a computer they called the UNIVAC, which stands for Universal Automatic Computer. The National Advisory Committee for Aeronautics (NACA) was quick to realize the potential of a high-speed computer for the calculation of fluid dynamic problems. After all, the NACA was in the business of aerodynamics and after 40 years of trying to solve the equations of motion by simplified analysis, it recognized

the breakthrough supplied by the computer to solve these equations numerically on a potentially practical basis. In 1954, Remington Rand delivered an ERA 1103 digital computer intended for scientific and engineering calculations to the NACA Ames Aeronautical Laboratory at Sunnyvale, CA. This was a state-of-the-art computer that was the first to employ a magnetic core in place of vacuum tubes for memory. The ERA 1103 used binary arithmetic, a 36-bit word length, and operated on all the bits of a word at a time. One year later, Ames acquired its first stored-program electronic computer, an IBM 650. In 1958, the 650 was replaced by an IBM 704, which in turn was replaced with an IBM 7090 mainframe in 1961.[770]

The IBM 7090 had enough storage and enough speed to allow the first generation of practical CFD solutions to be carried out. By 1963, four additional index registers were added to the 7090, making it the IBM 7094. This computer became the workhorse for the CFD of the 1960s and early 1970s, not just at Ames, but throughout the aerodynamics community; the author cut his teeth solving his dissertation problem on an IBM 7094 at the Ohio State University in 1966. The calculation speed of a digital computer is measured in its number of floating point operations per second (FLOPS). The IBM 7094 could do 100,000 FLOPS, making it about the fastest computer available in the 1960s. With this number of FLOPS, it was possible to carry out for the first time detailed flow-field calculations around a body moving at hypersonic speeds, one of the major activities within the newly formed NASA that drove both computer and algorithm development for CFD. The IBM 7094 was a “mainframe” computer, a large electronic machine that usually filled a room with equipment. The users would write their programs (usually in the FORTRAN language) as a series of logically constructed line statements that would be punched on cards, and the decks of punched cards (sometimes occupying many boxes for just one program) would be fed into a reader that would read the punches and tell the computer what calculations to make. The output from the calculations would be printed on large sheets and returned to the user. One program at a time was fed into the computer, the so-called “batch” operation. The user would submit his or her batch to the computer desk and then return hours or days later to pick up the printed output. As cumbersome as it

may appear today, the batch operation worked. The field of CFD was launched with such batch operations on mainframe computers like the IBM 7094. And NASA Ames was a spearhead of such activities. Indeed, because of the synergism between CFD and the computers on which it worked, the demands on the central IBM installation at Ames grew at a compounded rate of over 100 percent per year in the 1960s.

With these computers, it became practical to set up CFD solutions of the Euler equations for two-dimensional flows. These solutions could be carried out with a relatively small number of grid points in the flow, typically 10,000 to 100,000 points, and still have computer run times on the order of hours. Users of CFD in the 1960s were happy to have this capability, and the three primary NASA Research Centers—Langley, Ames, and Lewis (now Glenn)—made major strides in the numerical analysis of many types of flows, especially in the transonic and hypersonic regimes. The practical calculation of inviscid (that is, frictionless), three-dimensional flows and especially any type of high Reynolds number flows was beyond the computer capabilities at that time.

This situation changed markedly when the supercomputer came on the scene in the 1970s. NASA Ames acquired the Illiac IV advanced parallel-processing machine. Designed at the University of Illinois, this was an early and controversial supercomputer, one bridging both older and newer computer architectures and processor approaches. Ames quickly followed with the installation of an IBM 360 time-sharing computer. These machines provided the capability to make CFD calculations with over 1 million grid points in the flow field with a computational speed of more than 10⁶ FLOPS. NASA installed similar machines at the Langley and Lewis Research Centers. On these machines, NASA researchers made the first meaningful three-dimensional inviscid flow-field calculations and significant two-dimensional high Reynolds number calculations. Supercomputers became the engine that propelled CFD into the forefront of aerospace design as well as research. Bigger and better supercomputers, such as the pioneering Cray-1 and its successor, the Cray X-MP, allowed grids of tens of millions of grid points to be used in a flow-field calculation with speeds beginning to approach the hallowed goal of gigaflops (10⁹ floating point operations per second). Such machines made it possible to carry out numerical solutions of the Navier-Stokes equations for three-dimensional fairly high Reynolds number viscous flows. The first three-dimensional Navier-Stokes solutions of the complete flow field around a complete airplane at angle of
attack came on the scene in the 1980s, enabled by these supercomputers. Subsonic, transonic, supersonic, and hypersonic flow solutions covered the whole flight regime. Again, the major drivers for these solutions were the aerospace research and development problems tackled by NASA engineers and scientists. This headlong development of supercomputers has continued unabated. The holy grail of CFD researchers in the 1990s was the teraflop machine (10¹² FLOPS); today, it is the petaflop (10¹⁵ FLOPS) machine. Indeed, recently the U.S. Energy Department has contracted with IBM to build a 20-petaflop machine in 2012 for calculations involving the safety and reliability of the Nation’s aging nuclear arsenal.[771] Such a machine will aid the CFD practitioner’s quest for the ultimate flow-field calculations—direct numerical simulation (DNS) of turbulent flows, an area of particular interest to NASA researchers.

FLEXSTAB (Ames, Dryden, and Langley Research Centers, 1970s)

FLEXSTAB was a method for calculating stability derivatives that included the effects of aeroelastic deformation. Originally developed in the early 1970s by Boeing under contract to NASA Ames, FLEXSTAB was also used and upgraded at Dryden. FLEXSTAB used panel-method aerodynamic calculations, which could be readily adjusted with empirical corrections. The structural effects were treated first as a steady deformation at the trim condition, then as “unsteady perturbations about the reference motion to determine dynamic stability by characteristic roots or by time histories following an initial perturbation or following penetration of a discrete gust flow field.”[976] Comparisons between FLEXSTAB predictions and flight measurements were made at Dryden for the YF-12A, Shuttle, B-1, and other aircraft. Initially developed for symmetric flight conditions only, FLEXSTAB was extended in 1981 to include nonsymmetric flight conditions.[977] In 1984, a procedure was developed to couple a NASTRAN structural model to the FLEXSTAB elastic-aircraft stability analysis.[978] NASA Langley and the Air Force Flight Dynamics Laboratory also funded upgrades to FLEXSTAB,

leading to the DYLOFLEX program, which added aeroservoelastic effects.[979]

Digital Fly-By-Wire: The Space Legacy

Both the Mercury and Gemini capsules controlled their reaction control thrusters via electrical commands carried by wire. They also used highly reliable computers specially developed for the U.S. manned space flight program. During reentry from space on his historic 1961 Mercury mission, the first American in space, Alan Shepard, took manual control of the spacecraft attitude, one axis at a time, from the automatic attitude control system. Using the Mercury direct side controller, he “hand-flew” the capsule to the retrofire attitude of 34 degrees pitch-down. Shepard reported that he found that the spacecraft response was about the same as that of the Mercury simulator at the NASA Langley Research Center.[1151] The success of fly-by-wire in the early manned space missions gave NASA confidence to use a similar fly-by-wire approach in the Lunar Landing Research Vehicle (LLRV), built in the early 1960s to practice lunar landing techniques on Earth in preparation for the Apollo missions to the Moon. Two LLRVs were built by Bell Aircraft and first flown at Dryden in 1964. These were followed by three Lunar Landing Training Vehicles (LLTVs) that were used to train the Apollo astronauts. The LLTVs used a triply redundant fly-by-wire flight control system based on the use of three analog computers. Pure fly-by-wire in their design (there was insufficient weight allowance for a mechanical backup capability), they proved invaluable in preparing the astronauts for actual landings on the surface of the Moon, flying until November 1972.[1152] A total of 591 flights were accomplished, during which one LLRV and two LLTVs crashed in
spectacular accidents but fortunately did so without loss of life.[1153] During this same period, digital computers were demonstrating great improvements in processing power and programmability. Both the Apollo Lunar Module and the Command and Service Module used full-authority digital fly-by-wire controls. Fully integrated into the fly-by-wire flight control systems used in the Apollo spacecraft, the Apollo digital computer provided the astronauts with the ability to precisely maneuver their vehicles during all aspects of the lunar landing missions. The success of the Apollo digital computer in these space vehicles led to the idea of using this computer in a piloted flight research aircraft.

By the end of 1969, many experts within NASA and especially at the NASA Flight Research Center at Edwards Air Force Base were convinced that digital-computer-based fly-by-wire flight control systems would ultimately open the way to dramatic improvements in aircraft design, flight safety, and mission effectiveness. A team headed by Melvin E. Burke—along with Dwain A. Deets, Calvin R. Jarvis, and Kenneth J. Szalai—proposed a flight-test program that would demonstrate exactly that. The digital fly-by-wire proposal was evaluated by the Office of Advanced Research and Technology (OART) at NASA Headquarters. A strong supporter of the proposal was Neil Armstrong, who was by then the Deputy Associate Administrator for Aeronautics. Armstrong had been the first person to step on the Moon’s surface, in July 1969 during the Apollo 11 mission, and he was very interested in fostering transfer of technology from the Apollo program into aeronautics applications. During discussion of the digital fly-by-wire proposal with Melvin Burke and Cal Jarvis, Armstrong strongly supported the concept and reportedly commented: “I just went to the Moon with one.” He urged that they contact the Massachusetts Institute of Technology (MIT) Draper Laboratory to evaluate the possibility of using modified Apollo hardware and software.[1154] The Flight Research Center was authorized to modify a fighter-type aircraft with a digital fly-by-wire system. The modification would be based on the Apollo computer and inertial sensing unit.

Numerical Propulsion System Simulation

NASA and its contractor colleagues soon found another use for computers to help improve engine performance. In fact, looking back at the history


of NASA’s involvement with improving propulsion technology, a trilogy of major categories of advances can be suggested, based on the development of the computer and the evolving role these electronic thinkers have played in our culture.

Part one of this story includes all the improvements NASA and its industry partners have made with jet engines before the computer came along. Having arrived at a basic operational design for a turbojet engine—and its relations, the turboprop and turbofan—engineers sought to improve fuel efficiency, reduce noise, decrease wear, and otherwise reduce the cost of maintaining the engines. They did this through such efforts as the Quiet Clean Short Haul Experimental Engine and Aircraft Energy Efficiency program, detailed earlier in this case study. By tinkering with the individual components and testing the engines on the ground and in the air for thousands of hours, incremental advances were made.[1338]

Part two of the story introduces the capabilities made available to engineers as computers became powerful enough and small enough to be incorporated into the engine design. Instead of requiring the pilot to manually make occasional adjustments to the engine operation in
flight depending on what the instruments read, a small digital computer built into the engine sensed thousands of measurements per minute and caused an equal number of adjustments to be made to keep the powerplant performing at peak efficiency. With the Digital Electronic Engine Control, engines designed years before behaved as though they were fresh off the drawing boards, thanks to their increased capabilities.[1339]

Having taken engine designs about as far as it was thought possible, the need for even more fuel-efficient, quieter, and capable engines continued. Unfortunately, developing a new engine from scratch, building it, and testing it in flight can cost millions of dollars and take years to accomplish. What the aerospace industry needed was a way to take advantage of the powerful computers available at the dawn of the 21st century to make the engine development process less expensive and timelier. The result was part three of NASA’s overarching story of engine development: the Numerical Propulsion System Simulation (NPSS) program.[1340]

Working with the aerospace industry and academia, NASA’s Glenn Research Center led the collaborative effort to create the NPSS program, which was funded and operated as part of the High Performance Computing and Communications program. The idea was to use modern simulation techniques and create a virtual engine and test stand within a virtual wind tunnel, where new designs could be tried out, adjustments made, and the refinements exercised again without costly and time-consuming tests in the “real” world. As stated in a 1999 industry review of the program, the NPSS was built around inclusion of three main elements: “Engineering models that enable multi-disciplinary analysis of large subsystems and systems at various levels of detail, a simulation environment that maximizes designer productivity and a cost-effective, high-performance computing platform.”[1341]

In explaining to the industry the potential value of the program during a 2006 American Society of Mechanical Engineers conference in

Spain, a NASA briefer from Glenn suggested that if a standard turbojet development program for the military—such as the F100—took 10 years, $1.5 billion, construction of 14 ground-test engines, 9 flight-test engines, and more than 11,000 hours of engine tests, the NPSS program could realize a:

• 50-percent reduction in tooling cost.

• 33-percent reduction in the average development engine cost.

• 30-percent reduction in the cost of fabricating, assembling, and testing rig hardware.

• 36-percent reduction in the number of development engines.

• 60-percent reduction in total hardware cost.[1342]

A key—and groundbreaking—feature of NPSS was its ability to integrate simulated tests of different engine components and features, and run them as a whole, fully modeling all aspects of a turbojet’s operation. The program did this through the use of the Common Object Request Broker Architecture (CORBA), which essentially provided a shared language among the objects and disciplines (mechanical, thermodynamics, structures, gas flow, etc.) being tested so the resulting data could be analyzed in an “apples to apples” manner. Through the creation of an NPSS developer’s kit, researchers had tools to customize the software for individual needs, share secure data, and distribute the simulations for use on multiple computer operating systems. The kit also provided for the use of CORBA to “zoom” in on the data to see specific information with higher fidelity.[1343]
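The flavor of this component-and-interface approach can be suggested with a short Python sketch; this is not NPSS’s actual API (NPSS used its own object-oriented input language and CORBA), and all names and values here are invented for illustration:

```python
# Conceptual sketch of a zero-dimensional cycle simulation built from
# components sharing a common interface, so models of different fidelity
# can be chained or swapped -- loosely analogous to the role CORBA played.
from dataclasses import dataclass

@dataclass
class FlowState:
    mass_flow: float   # kg/s
    total_temp: float  # K
    total_pres: float  # Pa

class Compressor:
    def __init__(self, pressure_ratio: float, efficiency: float):
        self.pr, self.eta = pressure_ratio, efficiency

    def run(self, f: FlowState) -> FlowState:
        gamma = 1.4
        # Ideal (isentropic) temperature ratio, corrected by efficiency
        t_ratio_ideal = self.pr ** ((gamma - 1.0) / gamma)
        t_out = f.total_temp * (1.0 + (t_ratio_ideal - 1.0) / self.eta)
        return FlowState(f.mass_flow, t_out, f.total_pres * self.pr)

# Any component exposing run(FlowState) -> FlowState can join the cycle,
# or be "zoomed" to a higher-fidelity model without disturbing its neighbors.
cycle = [Compressor(pressure_ratio=12.0, efficiency=0.85)]
state = FlowState(mass_flow=50.0, total_temp=288.15, total_pres=101325.0)
for component in cycle:
    state = component.run(state)
print(state)
```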

Begun in 1997, the NPSS team consisted of propulsion experts and software engineers from GE, Pratt & Whitney, Boeing, Honeywell, Rolls-Royce, Williams International, Teledyne Ryan Aeronautical, Arnold Engineering Development Center, Wright-Patterson AFB, and NASA’s

Glenn Research Center. By the end of the 2000 fiscal year, the NPSS team had released Version 1.0.0 on schedule. According to a summary of the program produced that year:

(The new software) can be used as an aero-thermodynamic zero-dimensional cycle simulation tool. The capabilities include text-based input syntax, a sophisticated solver, steady-state and transient operation, report generation, a built-in object-oriented programming language for user-definable components and functions, support for distributed running of external codes via CORBA, test data reduction, interactive debug capability and customer deck generation.[1344]

Additional capabilities were added in 2001, including the ability to support development of space transportation technologies. At the same time, the initial NPSS software quickly found applications in aviation safety, ground-based power, and alternative energy devices, such as fuel cells. Moreover, project officials at the time suggested that with the further development of the software, other applications could be found for the program in the areas of nuclear power, water treatment, biomedicine, chemical processing, and marine propulsion. NPSS proved to be so capable and promising of future applications that NASA designated the program a cowinner of the NASA Software of the Year Award for 2001.[1345]

Work to improve the capabilities and expand the applications of the software continued. In 2008, NASA transferred NPSS to a consortium of industry partners, and, through a Space Act Agreement, it is currently offered commercially by Wolverine Ventures, Inc., of Jupiter, FL. Now at Version 1.6.5, NPSS’s features include the ability to model all types of complex systems, plug-and-play interfaces for fluid properties, a built-in plotting package, an interface to higher-fidelity legacy codes, multiple model views, a command language interpreter with language-sensitive text editor, a comprehensive component solver, and variable setup controls. It also can operate on Linux, Windows, and UNIX platforms.[1346]

Originally begun as a virtual tool for designing new turbojet engines, NPSS has since found uses in testing rocket engines, fuel cells, analog controls, combined cycle engines, thermal management systems, preliminary design of airframe vehicles, and commercial and military engines.[1347]

Ultra Efficient Engine Technology Program

With the NPSS tool firmly in place and some four decades of experience incrementally improving the design, operation, and maintenance of the jet engine, it was time to go for broke and assemble an ultrabright team of engineers to come up with nothing short of the best jet engine possible.

Building on the success of technology development programs such as the Quiet Clean Short Haul Experimental Engine and Energy Efficient Engine project—all of which led directly to the improvements and production of turbojet engines now propelling today’s commercial airliners—NASA approached the start of the 21st century with plans to take jet engine design to even more impressive feats. In 1999, the Aeronautics Directorate of NASA began the Ultra Efficient Engine Technology (UEET) program—a 5-year, $300-million effort—with two primary goals. The first was to find ways that would enable further improvements in engine efficiency to reduce fuel burn and, as a result, carbon dioxide emissions by yet another 15 percent. The second was to continue developing new materials and configuration schemes in the engine’s combustor to reduce emissions of nitrogen oxides (NOx) during takeoff and landings by 70 percent relative to the standards detailed in 1996 by the International Civil Aviation Organization.[1348]

NASA’s Glenn Research Center led the program, with participation from three other NASA Centers: Ames, Langley, and the Goddard Space Flight Center in Greenbelt, MD. Also involved were GE, Pratt & Whitney, Honeywell, Allison/Rolls-Royce, Williams International, Boeing, and Lockheed Martin.[1349]

The program comprised seven major projects, each of which addressed particular technology needs and exploitation opportunities.[1350] The Propulsion Systems Integration and Assessment project examined overall component technology issues relevant to the UEET program to help furnish overall program guidance and identify technology shortfalls.[1351] The Emissions Reduction project sought to significantly reduce NOx and other emissions, using new combustor concepts and technologies such as lean burning combustors with advanced controls and high-temperature ceramic matrix composite materials.[1352] The Highly Loaded Turbomachinery project sought to design lighter-weight, reduced-stage cores, low-pressure spools and propulsors for more efficient and environmentally friendly engines, and advanced fan concepts for quieter, lighter, and more efficient fans.[1353] The Materials and Structures for High Performance project sought to develop and demonstrate high-temperature material concepts such as ceramic matrix composite combustor liners and turbine vanes, advanced disk alloys, turbine airfoil material systems, high-temperature polymer matrix composites, and innovative lightweight materials and structures for static engine structures.[1354] The Propulsion-Airframe Integration project studied propulsion systems and engine locations that could furnish improved engine and environmental benefits without compromising the aerodynamic performance of the airplane; lowering aircraft drag itself constituted a highly desirable means of reducing fuel burn and, hence, CO2 emissions, so the project developed advanced technologies to yield lower-drag propulsion system integration with the airframe for a wide range of vehicle classes. Decreasing drag improves air vehicle performance and efficiency, which reduces fuel burn to accomplish a particular mission, thereby reducing the CO2 emissions.[1355] The Intelligent Propulsion Controls project sought to capitalize upon breakthroughs in electronic control technology to improve propulsion system life and enhance flight safety via integrating information, propulsion, and integrated flight propulsion control technologies.[1356] Finally, the Integrated Component Technology Demonstrations project sought to evaluate the benefits of off-the-shelf propulsion systems integration on NASA, Department of Defense, and aeropropulsion industry partnership efforts, including both the UEET and the military’s Integrated High Performance Turbine Engine Technology (IHPTET) programs.[1357]

By 2003, the 7 project areas had come up with 10 specific technology areas that UEET would investigate and incorporate into an engine that would meet the program’s goals for reducing pollution and increasing fuel-burn efficiency. The technology goals included:

1. Advanced low-NOx combustor design that would feature a lean burning concept.

2. A highly loaded compressor that would lower system weight, improve overall performance, and result in lower fuel burn and carbon dioxide emissions.

3. A highly loaded, high-pressure turbine that could allow a reduction in the number of high-pressure stages, parts count, and cooling requirements, all of which could improve fuel burn and lower carbon dioxide emissions.

4. A highly loaded, low-pressure turbine and aggressive transition duct that would use flow control techniques to reduce the number of low-pressure stages within the engine.

5. Use of a ceramic matrix composite turbine vane that would allow high-pressure vanes to operate at a higher inlet temperature, which would reduce the amount of engine cooling necessary and result in lower carbon dioxide emissions.

6. The same ceramic matrix composite material would be used to line the combustor walls so it could operate at a higher temperature and reduce NOx emissions.

7. Coat the turbine airfoils with a ceramic thermal barrier material to allow the turbines to operate at a higher temperature and thus reduce carbon dioxide emissions.

8. Use advanced materials in the construction of the turbine airfoil and disk. Specifically, use a lightweight single crystal superalloy to allow the turbine blades and vanes to operate at a higher temperature and reduce carbon dioxide emissions, as well as a dual microstructure nickel-base superalloy to manufacture turbine disks tailored to meet the demands of the higher-temperature environment.

9. Determine advanced materials and structural concepts for an improved, lighter-weight, impact-damage-tolerant, and noise-reducing fan containment case.

10. Develop active tip clearance control technology for use in the fan, compressor, and turbine to improve each component’s efficiency and reduce carbon dioxide emissions.[1358]

In 2003, the UEET program was integrated into NASA’s Vehicle Systems program so that the engine work could be coordinated with research into improving other areas of overall aircraft technology. But in the wake of policy changes associated with the 2004 decision to redirect NASA’s space program to retire the Space Shuttle and return humans to the Moon, the Agency was forced to redirect some of its funding to Exploration, forcing the Aeronautics Directorate to give up the $21.6 million budgeted for UEET in fiscal year 2005 and effectively canceling the biggest and most complicated jet engine research program ever attempted. At the same time, NASA was directed to realign its jet engine research to concentrate on further reducing noise.[1359]

Nevertheless, results from tests of UEET hardware showed promise that a large, subsonic aircraft equipped with some of the technologies detailed above would have a “very high probability” of achieving the program goals laid out for reducing emissions of carbon dioxide and other pollutants. The data remain for application to future aircraft and engine schemes.[72]

From Curiosity to Controversy

In 1947, Muroc Army Airfield, CA, was a small collection of aircraft hangars and other austere buildings adjoining the vast Rogers Dry Lake in the high desert of the Antelope Valley, across the San Gabriel Mountains from the Los Angeles basin. Because of the airfield’s remoteness and clear skies, a small team of Air Force, NACA, and contractor personnel was using Muroc for a secret project to explore the still unknown territory of supersonic flight. On October 14, more than 40,000 feet over the little desert town of Boron, visible only by its contrail, Capt. Chuck Yeager’s 31-foot-long rocket-propelled Bell XS-1 successfully “broke” the fabled sound barrier.[323] The sonic boom from his little experimental airplane—the first to fly supersonic in level flight—probably did not reach the ground on that historic day.[324] Before long, however, the acoustical signature of the shock waves generated by XS-1s and other supersonic aircraft became a familiar sound at and around the isolated airbase.

In the previous century, an Austrian physicist-philosopher, Ernst Mach, was the first to explain the phenomenon of supersonic shock waves, which he displayed visually in 1887 with a cleverly made photograph showing those formed by a high-velocity projectile, in this case a bullet. The speed of sound, he also determined, varied in relation to the density of the medium through which it passed, such as air molecules. (At sea level, the speed of sound is 760 mph.) In 1929, Jakob Ackeret, a Swiss fluid dynamicist, named this variable “Mach number” in his honor. This guaranteed that Ernst would be remembered by future generations, especially after it became known that the 700 mph speed of Yeager’s XS-1, flying at 43,000 feet, was measured as Mach 1.06.[325]
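That Mach 1.06 figure can be checked with the standard perfect-gas relation for the speed of sound; the worked numbers below assume the standard-atmosphere stratospheric temperature of about 216.7 K at 43,000 feet, a value not stated in the text:

```latex
% Speed of sound in a perfect gas at the assumed stratospheric
% temperature T ~ 216.7 K (standard atmosphere at 43,000 ft):
a = \sqrt{\gamma R T}
  = \sqrt{1.4 \times 287\,\mathrm{J/(kg\,K)} \times 216.7\,\mathrm{K}}
  \approx 295\,\mathrm{m/s} \approx 660\,\mathrm{mph}
% Mach number of the XS-1 at 700 mph true airspeed:
M = \frac{V}{a} = \frac{700\,\mathrm{mph}}{660\,\mathrm{mph}} \approx 1.06
```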

Bell XS-1—the first aircraft to exceed Mach 1 in level flight, October 14, 1947. U.S. Air Force.

Humans have long been familiar with and often frightened by natural sonic booms in the form of thunder, i.e., sudden surges of air pressure caused when strokes of lightning instantaneously heat contiguous columns of air molecules. Perhaps the most awesome of sonic booms—heard only rarely—have been produced by large meteoroid fireballs speeding through the atmosphere. On a much smaller scale, the first acoustical shock waves produced by human invention were the modest cracking noises from the snapping of a whip. The high-power explosives perfected in the latter half of the 19th century were able—as Mach explained—to propel projectiles faster than the speed of sound. Their acoustical shock waves would be among the cacophony of fearsome sounds heard by millions of soldiers during the two World Wars.[326]

On a Friday evening, September 8, 1944, an explosion blew out a large crater in Staveley Road, west of London. The first German V-2 ballistic missile aimed at England had announced its arrival. “After the explosion came a double thunderclap caused by the sonic boom catching up with the fallen rocket.”[327] For the next 7 months, millions of people would hear these sounds, which would become known as “sonic bangs” in Britain, from more than 3,000 V-2s launched at England as well as liberated portions of France, Belgium, and the Netherlands. Their sound waves would always arrive too late to warn any of those unfortunate enough to be near the missiles’ points of impact.[328] After World War II, these strange noises faded into memory for several years—until the arrival of new jet fighter planes.

In November 1949, the NACA designated its growing detachment at Muroc as the High-Speed Flight Research Station (HSFRS), 1 month before the Air Force renamed the installation Edwards Air Force Base (AFB).[329] By the early 1950s, the desert and mountains around Edwards reverberated with the occasional sonic booms of experimental and prototype aircraft, as did other flight-test locations in the United States and United Kingdom. Scientists and engineers had been familiar with the “axisymmetric” ballistic shock waves of projectiles such as artillery shells (referred to scientifically as bodies of revolution).[330] This was one reason the fuselage of the XS-1 was shaped like a 50-caliber bullet. But these new acoustic phenomena—many of which featured a double-boom sound—hinted that they were more complex. In late 1952, the editors of the world’s oldest aeronautical weekly stated with some hyperbole that “the ‘supersonic bang’ phenomenon, if only by reason of its sudden incidence and the enormous public interest it has aroused, is probably the most spectacular and puzzling occurrence in the history of aerodynamics.”[331]

A young British graduate student, Gerald B. Whitham, was the first to analyze thoroughly the abrupt rise in air pressure upon arrival of a supersonic vehicle’s “bow wave,” followed by a more gradual but deeper fall in pressure for a fraction of a second, and then a recompression with the passing of the vehicle’s tail wave. As shown in a simplified fashion by Figure 1, this can be illustrated graphically by an elongated capital “N” (the solid line) transecting a horizontal axis (the dashed line) representing ambient air pressure during a second or less of elapsed time. For Americans, the pressure change is usually expressed in pounds per square foot (psf, also abbreviated as lb/ft²).[332] [333]

Because a jet fighter (or a V-2 missile) is much longer than an artillery shell, the human ear could detect a double boom if its tail shock wave arrived a tenth of a second or more after its bow shock wave. Whitham was the first to systematically examine the more complex shock waves, which he described with his “F-function,” generated by “nonaxisymmetrical” (i.e., asymmetrical) configurations such as airplanes.[11]
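These two quantities—the N-shaped pressure signature and the bow-to-tail shock separation—lend themselves to a brief numerical sketch. The following Python fragment is illustrative only; the airframe length and peak overpressure in it are assumed values, not figures from the text:

```python
# Minimal sketch of an idealized N-wave sonic boom signature and of the
# bow-to-tail shock separation a listener would perceive.
# All numeric values are assumptions chosen for illustration.

def n_wave(t, duration=0.1, peak_psf=2.0):
    """Overpressure in psf at time t (seconds) after bow-shock arrival.

    Pressure jumps to +peak_psf at the bow shock, falls linearly through
    ambient to -peak_psf, then recompresses at the tail shock -- tracing
    the elongated capital "N" of Figure 1.
    """
    if t < 0.0 or t > duration:
        return 0.0                  # ambient pressure outside the N-wave
    return peak_psf * (1.0 - 2.0 * t / duration)

# Bow-to-tail separation is roughly airframe length / flight speed.
length_ft = 96.0     # a B-58-sized airframe (assumed value)
speed_fps = 1027.0   # ~700 mph, i.e., Mach 1.06 at 43,000 ft, in ft/s
dt = length_ft / speed_fps
print(f"shock separation ~ {dt:.3f} s")  # ~0.09 s, just under the ~0.1 s
                                         # threshold for hearing two booms

for t in (0.0, 0.025, 0.05, 0.075, 0.1):
    print(f"t = {t:.3f} s  overpressure = {n_wave(t):+.2f} psf")
```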

The number of these double booms multiplied in the mid-1950s as the Air Force Flight Test Center (AFFTC) at Edwards (assisted by the HSFRS) began putting a new generation of Air Force jet fighters and interceptors, known as the Century Series, through their paces. The remarkably rapid advance in aviation technology (and priorities of the Cold War “arms race”) is evident in the sequence of their first flights at Edwards: YF-100 Super Sabre, May 1953; YF-102 Delta Dagger, October 1953; XF-104 Starfighter, February 1954; F-101 Voodoo, September 1954; YF-105 Thunderchief, October 1955; and F-106 Delta Dart, December 1956.[12]

Figure 1. Simplified N-shaped sonic boom signature. NASA.

With the sparse population living in California’s Mojave Desert region during the 1950s, disturbances caused by the flight tests of new jet aircraft were not a serious issue. But even in the early 1950s, the United States Air Force (USAF) became concerned about their future impact. In November 1954, for example, its Aeronautical Research Laboratory at Wright-Patterson AFB, OH, submitted a study to the Air Force Board of top generals on early findings regarding the still somewhat mysterious nature of sonic booms. Although concluding that aircraft flying at supersonic speeds at low altitude could cause considerable damage, the report optimistically predicted the possibility of supersonic flight without booms at altitudes over 35,000 feet.[334]

As the latest Air Force and Navy fighters went into full production and began flying from bases throughout the Nation, much of the American public was exposed to jet noise for the first time. This included the thunderclap-like thuds characteristic of sonic booms—often accompanied by rattling windowpanes. Under certain conditions, as the U.S. armed services and British Royal Air Force (RAF) had learned, even maneuvers below Mach 1 (e.g., accelerations, dives, and turns) could generate and focus transonic shock waves in such a manner as to cause strong sonic booms.[335] Indeed, residents of Southern California began hearing such booms in the late 1940s, when North American Aviation was flight-testing its new F-86 Sabre. The first civilian claim against the USAF for sonic boom damage was apparently filed at Eglin AFB, FL, in 1951, when only subsonic jet fighters were assigned there.[336] Additionally, as another English mathematician, Frank Walkden, showed in 1958, the lift effect of airplane wings could magnify the strength of sonic booms more than previously estimated.[337]

Sonic boom claims against the Air Force first became statistically significant in 1957, reflecting its growing inventory of Century fighters and the type of maneuvers they sometimes performed, which could focus acoustical rays into what became called “super booms.” (It was found that these powerful but localized booms had a U-shaped signature, with the tail shock wave as well as that from the nose of the airplane being above ambient air pressure.) Most claims involved broken windows or cracked plaster, but some were truly bizarre, such as the death of pets or the insanity of livestock. In addition to these formal claims, Air Force bases, local police switchboards, and other agencies received an uncounted number of phone calls about booms, ranging from merely inquisitive to seriously irate.[338] Complaints from constituents also became an issue for the U.S. Congress.[339] Between 1956 and 1968, some 38,831 claims were submitted to the Air Force, which approved 14,006 in whole or in part—65 percent for broken glass, 21 percent for cracked plaster (usually already weakened), 8 percent for fallen objects, and 6 percent for other reasons.[340]

The military’s problem with sonic boom complaints seems to have peaked in the 1960s. One reason was the sheer number of fighter-type aircraft stationed around the Nation (over three times as many as today). Secondly, many of these aircraft’s missions were air defense. This often meant flying at high speed over populated areas for training in defending cities and other key targets from aerial attack, sometimes in practice against Strategic Air Command (SAC) bombers. The North American Air Defense Command (NORAD) conducted two of the largest such exercises, Skyshield I and Skyshield II, in 1960 and 1961. The Federal Aviation Agency (FAA) shut down all civilian air traffic while NORAD’s interceptors and SAC bombers (augmented by some from the RAF) battled overhead—accompanied by a sporadic drumbeat of sonic booms reaching the surface.[341]

Although most fighters and interceptors deployed in the 1960s could readily fly faster than sound, they could only do so for a short distance because of the rapid fuel consumption of jet engine afterburners. Thus, their sonic boom “carpets” were relatively short. However, one supersonic American warplane that became operational in 1960 was designed to fly faster than Mach 2 for more than 1,000 miles.

This innovative but troublesome aircraft was SAC’s new Convair-built B-58 Hustler medium bomber. On March 5, 1962, the Air Force showed off the long-range speed of the B-58 by flying one from Los Angeles to New York in just over 2 hours at an average pace of 1,215 mph (despite having to slow down for an aerial refueling over Kansas). After another refueling over the Atlantic, the same Hustler “outraced the sun” (i.e., flew faster than Earth’s rotation) back to Los Angeles with one more refueling, completing the record-breaking round trip at an average speed of 1,044 mph.[342]

Capable of sustained Mach 2+ speeds, the four-engine delta-winged Hustler (weighing up to 163,000 pounds) helped demonstrate the feasibility of a supersonic transport. But the B-58’s performance revealed at least one troubling omen. Almost wherever it flew supersonic over populated areas, the bomber left sonic boom complaints and claims in its wake. Indeed, on its record-shattering flight of March 1962, flown mostly at an altitude of 50,000 feet (except when coming down to 30,000 feet for refueling), “the jet dragged a sonic boom 20 to 40 miles wide back and forth across the country—frightening residents, breaking windows, cracking plaster, and setting dogs to barking.”[343] As indicated by Figure 2, the B-58 became a symbol for sonic boom complaints (despite its small numbers).

Convair B-58 Hustler, the first airplane capable of sustained supersonic flight and a major contributor to early sonic boom research. USAF.

Most Americans, especially during times of increased Cold War tensions, tolerated occasional disruptions justified by national defense. But how would they react to constantly repeated sonic booms generated by civilian jet airliners? Could a practical passenger-carrying supersonic airplane be designed to minimize its sonic signature enough to be acceptable to people below? NASA’s attempts to resolve these two questions occupy the remainder of this history.

Elastic Aerostructural Effects

The distortion of the shape of an airplane structure because of applied loads also creates a static aerodynamic interaction. When air loads are applied to an aerodynamic surface, it will bend or twist in proportion to the applied load, just like a spring. Depending on the surface configuration, the distorted shape can produce different aerodynamic properties when compared with the rigid shape. A swept wing, for example, will bend upward at the tip and may also twist as it is loaded.

This new shape may exhibit higher dihedral effect and altered spanwise lift distribution when compared with a rigid shape, impacting the performance of the aircraft. Because virtually all fighter aircraft have short wings and can withstand 7 to 9 g, their aeroelastic deformation is relatively small. In contrast, bomber, cargo, or high-altitude reconnaissance airplanes are typically designed for lower g levels, and the resulting structure, particularly its long, high aspect ratio wings, is often quite limber.

Notice that this is not a dynamic, oscillatory event, but a static condition that alters the steady-state handling qualities of the airplane. The prediction of these aeroelastic effects is a complex and not altogether accurate process, though the trends are usually correct. Because the effect is a static condition, the boundaries for safe flight can usually be determined during the buildup flight-test program, and, if necessary, placards can be applied to avoid serious incidents once the aircraft enters operational service.

The six-engine Boeing B-47 Stratojet was the first airplane designed with a highly swept, relatively thin, high aspect ratio wing. At higher transonic Mach numbers, deflection of the ailerons would cause the wing to twist sufficiently to cancel, and eventually exceed, the rolling moment produced by the aileron, thus producing an aileron reversal. (In effect, the aileron was acting like a big trim tab, twisting the wing and causing the exact opposite of what the pilot intended.) Aerodynamic loads are proportional to dynamic pressure, so the aeroelastic effects are usually more pronounced at high airspeed and low altitude, and this combination caused several fatal accidents with the B-47 during its flight-testing and early deployment. After flight-testing determined the magnitude and region of reduced roll effectiveness, the airplane was placarded to 425 knots to avoid roll reversal. In sum, then, an aeroelastic problem forced the limiting of the maximum performance achievable by the airplane, rendering it more vulnerable to enemy defenses. The B-47’s successor, the B-52, had a much thicker wing root and more robust structure to avoid the problems its predecessor had encountered.
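The reversal mechanism can be made concrete with a simple strip-theory estimate: aileron deflection produces a nose-down twisting moment, the twist produces lift opposing the aileron’s own, and roll effectiveness falls linearly with dynamic pressure until it crosses zero. The sketch below is a minimal illustration of that textbook relation; every stiffness and coefficient value in it is invented, and none describes the B-47.

```python
# Illustrative strip-theory estimate of aileron-reversal dynamic pressure.
# Every number below is invented for demonstration; none describes the B-47.

K_theta  = 8.0e6   # wing torsional stiffness about the elastic axis, ft-lb/rad (assumed)
S        = 140.0   # reference area of the wing strip, ft^2 (assumed)
c        = 10.0    # local chord, ft (assumed)
CL_alpha = 5.0     # lift-curve slope, per radian (assumed)
CL_delta = 0.8     # lift per unit aileron deflection, per radian (assumed)
Cm_delta = -0.5    # pitching moment per unit aileron deflection (nose-down, assumed)

def roll_effectiveness(q):
    """Fraction of rigid-wing aileron power remaining at dynamic pressure q (psf).

    Deflecting the aileron twists the wing nose-down (Cm_delta < 0); the
    resulting twist produces lift that opposes the aileron's own lift, so
    effectiveness falls with q and crosses zero at reversal.
    """
    twist_per_delta = q * S * c * Cm_delta / K_theta   # rad of twist per rad of aileron
    return (CL_delta + CL_alpha * twist_per_delta) / CL_delta

# Reversal: the effective lift change per unit aileron deflection goes to zero.
q_reversal = -K_theta * CL_delta / (S * c * CL_alpha * Cm_delta)
print(f"reversal dynamic pressure ~ {q_reversal:.0f} lb/ft^2")

for q in (400.0, 900.0, q_reversal):
    print(f"q = {q:6.0f} psf -> roll effectiveness = {roll_effectiveness(q):+.2f}")
```

Because dynamic pressure grows with the square of airspeed at a given altitude, a placard speed such as the B-47’s 425 knots amounts to capping q safely below the reversal value.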

The Mach 3.2+ Lockheed SR-71 Blackbird, designed to cruise at supersonic speeds at very high altitude, was another aircraft that exhibited significant aeroelastic structural deformation.[716] The Blackbird’s structure was quite limber, and the aeroelastic predictions for its behavior at cruise conditions were in error for the pitch axis. The SR-71 was a blended wing-body design with chines running along the forward sides of the fuselage and the engine nacelles, then blending smoothly into the rounded delta wing. These chines added lift to the airplane, and because they were well forward of the center of gravity, added a significant amount of pitching moment (much like a canard surface on an airplane such as the Wright Flyer or the Saab AJ-37 Viggen). Flight-testing revealed that the airplane required more “nose-up” elevon deflection at cruise than predicted, adding a substantial amount of trim drag. This reduced the range the Blackbird could attain, degrading its operational performance. To correct the problem, a small shim was added to the production fuselage break just forward of the cockpit. The shim tilted the forebody nose cone and its attached chine surfaces slightly upward, producing a nose-up pitching moment. This allowed the elevons to be returned to their trim faired position at cruise flight conditions, thus regaining the lost range capability.

Sadly, the missed prediction of the aeroelastic effects also contributed to the loss of one of the early SR-71s. While the nose cone forebody shim was being designed and manufactured, the contractor desired to demonstrate that the airplane could attain its desired range if the elevons were faired. To achieve this, Lockheed technicians added trim-altering ballast to the third production SR-71, then being used for systems and sensor testing. The ballast shifted the center of gravity about 2 percent aft from its normal position, to the aft design limit for the airplane. The engineers calculated that this would permit the elevons to be set in their faired position at cruise conditions for this one flight so that the SR-71 could meet its desired range performance. Instead, the aft cg, combined with the nonlinear aerodynamics and aeroelastic bending of the fuselage, resulted in the airplane going out of control at the start of a turn at cruise Mach number. The airplane broke in half, catapulting the pilot, who survived, from the cockpit. Unfortunately, his flight-test engineer/navigator perished.[717] Shim installation, together with other minor changes to the control system and engine inlets, subsequently enabled the SR-71 to meet its performance goals, and it became a mainstay of America’s national reconnaissance fleet until its retirement in early 1990.

Lockheed, the Air Force, and NASA continued to study Blackbird aeroelastic dynamics. In 1970, Lockheed proposed installation of a Loads Alleviation and Mode Suppression (LAMS) system on the YF-12A, installing very small canards called “exciter-” or “shaker-vanes” on the forebody to induce in-flight motions and subsequent suppression techniques that could be compared with analytical models, particularly NASA’s NASTRAN and Boeing’s FLEXSTAB computerized load prediction and response tools. The LAMS testing complemented Air Force-NASA research on other canard-configured aircraft such as the Mach 3+ North American XB-70A Valkyrie, a surrogate for large transport-sized supersonic cruise aircraft. The fruits of this research could be found on the trim canards used on the Rockwell B-1A and B-1B strategic bombers, which entered service in the late 1980s and notably improved their high-speed “on the deck” ride qualities compared with their three low-altitude predecessors, the Boeing B-52 Stratofortress, Convair B-58 Hustler, and General Dynamics FB-111.[718]