
Advancing Propulsive Technology

James Banke

Ensuring proper aircraft propulsion has been a powerful stimulus. In the interwar years, the NACA researched propellers, fuels, engine cooling, supercharging, and nacelle and cowling design. In the postwar years, the Agency refined gas turbine propulsion technology. NASA now leads research in advancing environmentally friendly and fuel-conserving propulsion, thanks to the Agency's strengths in aerodynamic and thermodynamic analysis, composite structures, and other areas.

Each day, our skies fill with general aviation aircraft, business jets, and commercial airliners. Every 24 hours, some 2 million passengers worldwide are moved from one airport to the next, almost all of them propelled by relatively quiet, fuel-efficient, and safe jet engines.[1291]

And no matter if the driving force moving these vehicles through the air comes from piston-driven propellers, turboprops, turbojets, turbofans, even rocket engines or scramjets, the National Aeronautics and Space Administration (NASA) during the past 50 years has played a significant role in advancing that propulsion technology the public counts on every day.

Many of the advances seen in today's aircraft powerplants can trace their origins to NASA programs that began during the 1960s, when the Agency responded to public demand that the Government apply major resources to tackling the problems of noise pollution near major airports. This case study highlights some of the more noteworthy research programs to reduce noise and other pollution, prolong engine life, and increase fuel efficiency.

But efforts to improve engine efficiency and curb unwanted noise actually predate NASA's origins in 1958, when its predecessor, the National Advisory Committee for Aeronautics (NACA), served as the Nation's preeminent laboratory for aviation research. It was during the 1920s that the NACA invented a cowling to surround the front of an airplane and its radial engine, smoothing the aerodynamic flow around the aircraft while also helping to keep the engine cool. In 1929, the NACA won its first Collier Trophy for the breakthrough in engine and aerodynamic technology.[1292]

During World War II, the NACA produced new ways to fix problems discovered in higher-powered piston engines being mass-produced for wartime bombers. NACA research into centrifugal superchargers was particularly useful, especially on the R-1820 Cyclone engines intended for use on the Boeing B-17 Flying Fortress, and later with the Wright R-3350 Duplex Cyclone engines that powered the B-29.

Basic research on aircraft engine noise was conducted by NACA engineers, who reported their findings in a paper presented in 1956 to the 51st Meeting of the Acoustical Society of America in Cambridge, MA. Their measurements backed up the prediction that the noise level of a spinning propeller depended on several variables, including the propeller diameter, how fast it is turning, and how far away the recording device is from the engine.[1293]

As the jet engine made its way from Europe to the United States and designs for the basic turboprop, turbojet, and turbofan were refined, the NACA during the early 1950s began one of the earliest noise-reduction programs, installing multitube nozzles of increasing complexity at the back of the engines to, in effect, act as mufflers. These engines were tested in a wind tunnel at Langley Research Center in Hampton, VA. But the effort was not effective enough to prevent a growing public sentiment that commercial jet airliners should be seen and not heard.

In fact, a 1952 Presidential commission chaired by the legendary pilot James H. Doolittle predicted that aircraft noise would soon turn into a problem for airport managers and planners. The NACA's response was to form a Special Subcommittee on Aircraft Noise and pursue a three-part program to understand better what makes a jet noisy, how to quiet it, and what, if any, impact the noise might have on the aircraft's structure.[1294]

When the NACA became the National Aeronautics and Space Administration overnight, on October 1, 1958, the new space agency soon found itself with more work to do than just beating the Soviet Union to the Moon.

Advanced Subsonic Technology Program and UEET

NASA started a project in the mid-1990s known as the Advanced Subsonic Technology (AST) program. Like HSR before it, the AST focused heavily on reducing emissions through new combustor technology. The overall objective of the AST was to spur technology innovation to ensure U.S. leadership in developing civil transport aircraft. That meant lowering NOx emissions, which not only raised concern in local airport communities but also by this time had become a global concern because of potential damage to the ozone layer. The AST sought to spur the development of new low-emissions combustors that could achieve at least a 50-percent reduction in NOx from 1996 International Civil Aviation Organization standards. The AST program also sought to develop techniques that would better measure how NOx impacts the environment.[1417]

GE, P&W, Allison Engines, and AlliedSignal all participated in the project.[1418] Once again, the challenge for these companies was to control combustion in such a way that it would minimize emissions. This required carefully managing the way fuel and air mix inside the combustor to avoid the extremely hot temperatures at which NOx is created, or at least reducing the length of time that the gases are at their hottest point.

Ultimately the AST emissions reduction project achieved its goal of reducing NOx emissions by more than 50 percent over the ICAO standard, a feat that was accomplished not with actual engine demonstrators but with a "piloted airblast fuel preparation chamber."[1419]

Despite their relative success, however, NASA's efforts to improve engine efficiency and reduce emissions began to face budget cuts in 2000. Funding for NASA's Atmospheric Effects of Aviation project, which was the only Government program to assess the effects of aircraft emissions at cruise altitudes on climate change, was canceled in 2000.[1420] Investments in the AST and the HSR also came to an end. However, NASA did manage to salvage parts of the AST aimed at reducing emissions by rolling those projects into the new Ultra Efficient Engine Technology (UEET) program in 2000.[1421]

UEET was a 6-year, nearly $300 million program managed by NASA Glenn that began in October 1999 and included participation from NASA Centers Ames, Goddard, and Langley; engine companies GE Aircraft Engines, Pratt & Whitney, Honeywell, Allison/Rolls Royce, and Williams International; and airplane manufacturers Boeing and Lockheed Martin.[1422]

UEET sought to develop new engine technologies that would dramatically increase turbine performance and efficiency. It sought to reduce NOx emissions by 70 percent within 10 years and 80 percent within 25 years, using the 1996 International Civil Aviation Organization guidelines as a baseline.[1423] The UEET project also sought to reduce carbon dioxide emissions by 20 percent and 50 percent in the same timeframes, using 1997 subsonic aircraft technology as a baseline.[1424] The dual goals posed a major challenge because current aircraft engine technologies typically require a tradeoff between NOx and carbon emissions; when engines are designed to minimize carbon dioxide emissions, they tend to generate more NOx.

In the case of the UEET project, improving fuel efficiency was expected to lead to a reduction in carbon dioxide emissions by at least 8 percent: the less fuel burned, the less carbon dioxide released.[1425] The UEET program was expected to maximize fuel efficiency, requiring engine operations at pressure ratios as high as 55 to 1 and turbine inlet temperatures of 3,100 degrees Fahrenheit (°F).[1426] However, highly efficient engines tend to run at very hot temperatures, which lead to the generation of more NOx. Therefore, in order to reduce NOx, the UEET program also sought to develop new fuel/air mixing processes and separate engine component technologies that would reduce NOx emissions 70 percent from 1996 ICAO standards for takeoff and landing conditions and also minimize NOx impact during cruise to avoid harming Earth's ozone layer.
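The logic behind those pressure ratio targets can be illustrated with the ideal Brayton-cycle relation, a textbook idealization rather than anything drawn from the UEET analyses themselves: thermal efficiency rises with overall pressure ratio r as 1 - r^(-(γ-1)/γ).

```python
# Illustrative only: the ideal Brayton-cycle relation linking overall
# pressure ratio (OPR) to thermal efficiency. This is a textbook
# idealization, not a UEET design calculation; real cycle analysis
# must account for component efficiencies, cooling bleed, and the
# turbine inlet temperature limit described in the text.

def brayton_efficiency(pressure_ratio: float, gamma: float = 1.4) -> float:
    """Ideal thermal efficiency: 1 - r**(-(gamma - 1)/gamma)."""
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

# Efficiency climbs steadily with OPR, which is why UEET pushed toward
# ratios as high as 55 to 1 despite the hotter, NOx-prone core.
for opr in (30, 40, 55):
    print(f"OPR {opr}: ideal efficiency ~ {brayton_efficiency(opr):.1%}")
```

At an OPR of 55 the ideal relation gives roughly two-thirds thermal efficiency, against about 62 percent at an OPR of 30, showing why the program accepted the hotter core and its NOx penalty.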

Under UEET, NASA worked on ceramic matrix composite (CMC) combustor liners and other engine parts that can withstand the high temperatures required to maximize energy efficiency and reduce carbon emissions while also lowering NOx emissions. These engine parts, particularly combustor liners, would need to endure the high temperatures at which engines operate most efficiently without the benefit of cooling air. Cooling air, which is normally used to cool the hottest parts of an engine, is unacceptable in an engine designed to minimize NOx, because it would create stoichiometric fuel-air mixtures—mixtures in which the proportions of fuel and air molecules allow complete combustion, so the gases reach their hottest point—thereby producing high levels of NOx in regions close to the combustor liner.[1427]

NASA's sponsorship of the AST and the UEET also fed into the development of two game-changing combustor concepts that can lead to a significant reduction in NOx emissions. These are the Lean Pre-mixed, Pre-vaporized (LPP) and Rich, Quick Mix, Lean (RQL) combustor concepts. P&W and GE have since adopted these concepts to develop combustors for their own engine product lines. Both concepts focus on improving the way fuel and air mix inside the engine to ensure that core temperatures do not get so high that they produce NOx emissions.

GE has drawn from the LPP combustor concept to develop its Twin Annular Pre-mixing Swirler (TAPS) combustor. Under the LPP concept, air from the high-pressure compressor comes into the combustor through two swirlers adjacent to the fuel nozzles. The swirlers premix the fuel and combustion air upstream from the combustion zone, creating a lean (more air than fuel) homogeneous mixture that can combust inside the engine without reaching the hottest temperatures, at which NOx is created.[1428]

NASA support also helped lay the groundwork for P&W's Technology for Advanced Low Nitrogen Oxide (TALON) low-emissions combustor, which reduces NOx emissions through the RQL process. The front end of the combustor burns very rich (more fuel than air), a process that suppresses the formation of NOx. The combustor then transitions in milliseconds to burning lean. The air must mix very rapidly with the combustion products from the rich first stage to prevent NOx formation as the rich gases are diluted.[1429] The goal is to spend almost no time at extremely hot temperatures, at which fuel and air are in stoichiometric proportion, because this produces NOx.[1430]

Today, NASA continues to study the difficult problem of increasing fuel efficiency and reducing NOx, carbon dioxide, and other emissions. At NASA Glenn, researchers are using an Advanced Subsonic Combustion Rig (ASCR), which simulates gas turbine combustion, to engage in ongoing emissions testing. P&W, GE, Rolls Royce, and United Technologies Corporation are continuing contracts with NASA to work on low-emissions combustor concepts.

"The [ICAO] regulations for NOx keep getting more stringent," said Dan Bulzan, NASA's associate principal investigator for the subsonic fixed wing and supersonic aeronautics project. "You can't just sit there with your old combustor and expect to meet the NOx emissions regulations. The Europeans are quite aggressive and active in this area as well. There is a competition on who can produce the lowest emissions combustor."[1431]

Solar Propulsion for High-Altitude Long-Endurance Unmanned Aerial Vehicles

Another area of NASA involvement in the development and use of alternative energy was work on solar propulsion for High-Altitude Long-Endurance (HALE) unmanned aerial vehicles (remotely piloted vehicles). Work in this area evolved out of the Agency's Environmental Research Aircraft and Sensor Technology (ERAST) program that started in 1994. This program, which was a joint NASA/industry effort through a Joint Sponsored Research Agreement (JSRA), was under the direction of NASA's Dryden Flight Research Center. The primary objectives of the ERAST program were to develop and transfer advanced technology to an emerging American unmanned aerial vehicle (UAV) industry, and to conduct flight demonstrations of the new technologies in controlled environments to validate the capability of UAVs to undertake operational science missions. A related and important aspect of this mission was the development, miniaturization, and integration of special-purpose sensors and imaging equipment for the solar-powered aircraft. These goals were in line with both the revolutionary vehicles development aspect of NASA's Office of Aerospace Technology aeronautics blueprint and with NASA's Earth Science Enterprise efforts to expand scientific knowledge of the Earth system using NASA's unique capabilities from the standpoint of space, aircraft, and onsite platforms.[1519]

Specific program objectives were to develop UAV capabilities for flying at extremely high altitudes and for long periods of time; demonstrate payload capabilities and sensors for atmospheric research; address and resolve UAV certification and operational issues; demonstrate the UAV's usefulness to scientific, Government, and civil customers; and foster the emergence of a robust UAV industry in the United States.[1520]

The ERAST program envisioned missions that included remote sensing for Earth science studies, hyperspectral imaging for agriculture monitoring, tracking of severe storms, and serving as telecommunications relay platforms. Related missions called for the development and testing of lightweight microminiaturized sensors, lightweight materials, avionics, aerodynamics, and other forms of propulsion suitable for extreme altitudes and flight duration.[1521]

The ERAST program involved the development and testing of four generations of solar-powered UAVs: the Pathfinder, the Pathfinder Plus, the Centurion, and the Helios Prototype. Because of budget limitations, the Helios Prototype was reconfigured in what could be considered a fifth-generation test vehicle for long-endurance flying (see below). Earlier UAVs, such as the Perseus, Theseus, and Proteus, relied on gasoline-powered engines. The first solar-powered UAV was the RAPTOR/Pathfinder, also known as the High-Altitude Solar (HALSOL) aircraft, which was originally developed by the U.S. Ballistic Missile Defense Organization (BMDO—now the Missile Defense Agency) as part of a classified Government project and subsequently turned over to NASA for the ERAST program. In addition to BMDO's interest in having NASA take over solar vehicle development, a workshop held in Truckee, CA, in 1989 played an important role in the origin of the ERAST program.

Whitcomb and History

Aircraft manufacturers tried repeatedly to lure Whitcomb away from NASA Langley with the promise of a substantial salary. At the height of his success during the supercritical wing program, Whitcomb remarked: "What you have here is what most researchers like—independence. In private industry, there is very little chance to think ahead. You have to worry about getting that contract in 5 or 6 months."[256] Whitcomb's independent streak was key to his and the Agency's success. His relationship with his immediate boss, Laurence K. Loftin, the Chief of Aerodynamic Research at Langley, facilitated that autonomy until the late 1970s. When ordered to test a laminar flow concept that he felt was impractical in the 8-foot TPT, which was widely known as "Whitcomb's tunnel," he retired as head of the Transonic Aerodynamics Branch in February 1980. He had worked in that organization since coming to Hampton from Worcester 37 years earlier, in 1943.[257]

Whitcomb's resignation was partly due to the outside threat to his independence, but it was also an expression of his practical belief that his work in aeronautics was finished. He was an individual in touch with major national challenges who had the willingness and ability to devise solutions to help address them. When he famously said, "We've done all the easy things—let's do the hard [emphasis Whitcomb's] ones," he made the simple statement that his purpose was to make a difference.[258] In the early days of his career, the challenge was national security, when an innovation such as the area rule was a crucial element of the Cold War tensions between the United States and the Soviet Union. The supercritical wing and winglets were Whitcomb's expression of making commercial aviation—and, by extension, NASA—viable in an environment shaped by world fuel shortages and a new search for economy in aviation. He was a lifelong workaholic bachelor almost singularly dedicated to subsonic aerodynamics. While Whitcomb exhibited a reserved personality outside the laboratory, in the wind tunnel he was unrestrained in his pursuit of solutions, which resulted from his highly intuitive and individualistic research methods.

With his major work accomplished, Whitcomb remained at Langley as a part-time and unpaid distinguished research associate until 1991. With over 30 published technical papers, numerous formal presentations, and his teaching position in the Langley graduate program, he was a valuable resource for consultation and discussion at Langley's numerous technical symposiums. In his personal life, Whitcomb continued his involvement in community arts in Hampton and pursued a new quest: an alternative source of energy to displace fossil fuels.[259]

Whitcomb's legacy is found in the airliners, transports, business jets, and military aircraft flying today that rely upon the area rule fuselage, supercritical wings, and winglets for improved efficiency. The fastest, highest-flying, and most lethal example is the U.S. Air Force's Lockheed Martin F-22 Raptor multirole air superiority fighter. Known widely as the 21st Century Fighter, the F-22 is capable of Mach 2 and features an area rule fuselage for sustained supersonic cruise, or supercruise, performance and a supercritical wing. The Raptor was an outgrowth of the Advanced Tactical Fighter (ATF) program that ran from 1986 to 1991. Lockheed designers benefited greatly from NASA work in fly-by-wire control, composite materials, and stealth design to meet the mission of the new aircraft. The Raptor made its first flight in 1997, and production aircraft reached Air Force units beginning in 2005.[260]

Whitcomb's ideal transonic transport also included an area rule fuselage, but because most transports are truly subsonic, today's aircraft have no need for that design feature.[261] The Air Force's C-17 Globemaster III transport is the most illustrative example. In the early 1990s, McDonnell-Douglas used the knowledge generated with the YC-15 to develop a system of new innovations—supercritical airfoils, winglets, advanced structures and materials, and four monstrous high-bypass turbofan engines—that resulted in the award of the 1994 Collier Trophy. Since becoming operational in 1995, the C-17 has been a crucial element in the Air Force's global operations as a heavy-lift, air-refuelable cargo transport.[262] After the C-17 program, McDonnell-Douglas, which was absorbed into the Boeing Company in 1997, combined NASA-derived advanced blended wing body configurations with advanced supercritical airfoils and winglets with rudder control surfaces in the 1990s.[263]

Unfortunately, Whitcomb's tools are in danger of disappearing. Both the 8-foot HST and the 8-foot TPT are located beside each other on Langley's East Side, situated between Langley Air Force Base and the Back River. The National Register of Historic Places designated the Collier-winning 8-foot HST a national historic landmark in October 1985.[264] Shortly after Whitcomb's discovery of the area rule, the NACA suspended active operations at the tunnel in 1956. As of 2006, the Historic Landmarks program designated it as "threatened," and its future disposition was unclear.[265] The 8-foot TPT opened in 1953. Whitcomb validated the area rule concept and conducted his supercritical wing and winglet research through the 1950s, 1960s, and 1970s in this tunnel, which was located right beside the old 8-foot HST. The tunnel ceased operations in 1996 and has been classified as "abandoned" by NASA.[266] In the early 21st century, the need for space has overridden the historical importance of the tunnel, and it is slated for demolition.

Overall, Whitcomb and Langley shared the quest for aerodynamic efficiency, which became a legacy for both. Whitcomb flourished working in his tunnel, limited only by the wide boundaries of his intellect and enthusiasm. One observer considered him to be "flight theory personified."[267] More importantly, Whitcomb was the ultimate personification of the importance of the NACA and NASA to American aeronautics during the second aeronautical revolution. The NACA and NASA hired great people, pure and simple, in the quest to serve American aeronautics. These bright minds made up a dynamic community that created innovations and ideas that were greater than the sum of their parts. Whitcomb, as one of those parts, fostered innovations that proved to be of longstanding value to aviation.

A 3-percent scale model of the Boeing Blended Wing Body 450-passenger subsonic transport in the Langley 14 x 22 Subsonic Tunnel. NASA.

Breaking Up Shock Waves with "Quiet Spike"

In June 2003, the FAA—citing a finding by the National Research Council that there were no insurmountable obstacles to building a quiet supersonic aircraft—began seeking comments on its noise standards in advance of a technical workshop on the issue. In response, the Aerospace Industries Association, the General Aviation Manufacturers Association, and most aircraft companies felt that the FAA's sonic boom restriction was still the most serious impediment to creating the market for a supersonic business jet (SSBJ), which would be severely handicapped if unable to fly faster than sound over land.[511]

By the time the FAA workshop was held in mid-November, Peter Coen of the Langley Center and a Gulfstream vice president were able to report on the success of the SSBD. Coen also outlined future initiatives in NASA's Supersonic Vehicles Technology program. In addition to leveraging the results of DARPA's QSP research, NASA hoped to engage industry partners for follow-on projects on the sonic boom and was also working with Eagle Aeronautics on new three-dimensional CFD boom propagation models. For additional psychoacoustical studies, Langley had reconditioned its boom simulator booth. And as a possible followup to the SSBD, NASA was considering a shaped low-boom demonstrator that could fly over populated areas, allowing definitive surveys on public acceptance of minimized boom signatures.[512]

The Concorde made its final transatlantic flights just a week after the FAA's workshop. Its demise marked the first time in modern history that a mode of transportation had retreated to slower speeds. This did, however, leave the future supersonic market entirely open to business jets. Although the success of the SSBD hinted at the feasibility of such an aircraft, designing one—as explained in a new study by Langley's Robert Mack—would still not be at all easy.[513]

During the next several years, a few individual investors and a number of American and European aircraft companies—including Gulfstream, Boeing, Lockheed, Cessna, Raytheon, Dassault, Sukhoi, and the privately held Aerion Corporation—pursued assorted SSBJ concepts with varying degrees of cooperation, competition, and commitment. Some of these and other aviation-related companies also worked together on supersonic strategies through three consortiums: Supersonic Aerospace International (SAI), which had support from Lockheed-Martin; the 10-member Supersonic Cruise Industry Alliance (SCIA); and Europe's High-Speed Aircraft Industrial Project (HISAC), comprising more than 30 companies, universities, and other members. Meanwhile, the FAA began the lengthy process for considering a new metric on acceptable sonic booms and, in the interest of global consistency, prompted the International Civil Aviation Organization (ICAO) to also put the issue on its agenda. It was in this environment of both renewed enthusiasm and ongoing uncertainty about commercial supersonic flight that NASA continued to study and experiment on ways to make the sonic boom more acceptable to the public.[514]

Richard Wlezien (back from DARPA as NASA's vehicle systems manager) hoped to follow up on the SSBD with a truly low-boom supersonic demonstrator, possibly by 2010. In July 2005, NASA announced the Sonic Boom Mitigation Project, which began with concept explorations by major aerospace companies on the feasibility of either modifying another existing aircraft or designing a new demonstrator.[515] As explained by Peter Coen, "these studies will determine whether a low sonic boom demonstrator can be built at an affordable cost in a reasonable amount of time."[516] Although numerous options for using existing aircraft were under investigation, most of the studies were leaning toward the need to build a new experimental airplane as the most effective solution. On August 30, 2005, however, NASA Headquarters announced the end of the short-lived Sonic Boom Mitigation Project because of changing priorities.[517]

Despite this setback, there was still one significant boom-lowering experiment in the making. Gulfstream Aerospace Corporation, which had been teamed with Northrop Grumman in one of the canceled studies, had already patented a new sonic boom mitigation technique.[518] Testing this invention—a retractable lance-shaped device to extend the length of an aircraft—would become the next major sonic boom flight experiment.

In the meantime, NASA continued some relatively modest sonic boom testing at the Dryden Center, mainly to help improve simulation capabilities. In a joint project with the FAA and Transport Canada in the summer of 2005, researchers from Pennsylvania State University strung an array of advanced microphones at Edwards AFB to record sonic booms created by Dryden F-18s passing overhead. Eighteen volunteers, who sat on lawn chairs alongside the row of microphones during the flyovers to experience the real thing, later gauged the fidelity of the played-back recordings. These were then used to help improve the accuracy of the booms replicated in simulators.[519]

"Quiet Spike" was the name that Gulfstream gave to its nose boom concept. Based on CFD models and results from Langley's 4 by 4 supersonic wind tunnel, Gulfstream was convinced that the Quiet Spike device could greatly mitigate a sonic boom by breaking up the typical nose shock into three less-powerful waves that would propagate in parallel to the ground.[520] However, the company needed to test the structural and aerodynamic suitability of the device and also obtain supersonic in-flight data on its shock-scattering ability. NASA's Dryden Flight Research Center had the capabilities needed to accomplish these tasks. Under this latest public-private partnership, Gulfstream fabricated a telescoping 30-foot-long nose boom (made of molded graphite epoxy over an aluminum frame) to attach to the radar bulkhead of Dryden's frequently modified F-15B No. 836. A motorized cable and pulley system could extend the spike up to 24 feet and retract it back to 14 feet. After extensive static testing at its Savannah, GA, facility, Gulfstream and NASA technicians at Dryden attached the specially instrumented spike to the F-15's radar bulkhead in April 2006 and began conducting further ground tests, such as for vibration.[521]

Close-up view of the SSBD F-5E, showing its enlarged "pelican" nose and lower fuselage designed to shape the shock waves from the front of the airframe. NASA.

After various safety checks, aerodynamic assessments, and checkout flights, Dryden conducted Quiet Spike flight tests from August 10, 2006, until February 14, 2007. Key engineers on the project included Dryden's Leslie Molzahn and Thomas Grindle, and Gulfstream's Robbie Cowart. Veteran NASA test pilot Jim Smolka gradually expanded the F-15B's flight envelope up to Mach 1.8 and performed sonic boom experiments with the telescoping nose boom at speeds up to Mach 1.4 at 40,000 feet. Aerial refueling by AFFTC's KC-135 allowed extended missions with multiple test points. Because it was known that the weak shock waves from the spike would rather quickly coalesce with the more powerful shock waves generated by the rest of the F-15's unmodified high-boom airframe, data were collected from distances of no more than 1,000 feet. These measurements, made by a chase plane using probing techniques similar to those of the SR-71 and SSBD tests, confirmed CFD models of the spike's ability to generate a sawtooth wave pattern that, if reaching the surface, would cause only a muffled sonic boom. Analysis of the data appeared to confirm that shocks of equal strength would not coalesce into a single strong shock. In February 2007, with all major test objectives having been accomplished, the Quiet Spike F-15B was flown to Savannah for Gulfstream to restore to its normal configuration.[522]

For this successful test of an innovative design concept for a future SSBJ, James Smolka, Leslie Molzahn, and three Gulfstream employees subsequently received Aviation Week and Space Technology’s Laureate Award in Aeronautics and Propulsion. One month later, however, both the Gulfstream Corporation and the Dryden Center were saddened by the death in an airshow accident of Gerard Schkolnik, Gulfstream’s Director of Supersonic Technology Programs, who had been a Dryden employee for 15 years.[523]

Self-Adaptive Flight Control Systems

One of the more sophisticated electronic control system concepts was funded by the Air Force Flight Dynamics Laboratory and created by Minneapolis Honeywell in the late 1950s for use in the Air Force-NASA-Boeing X-20 Dyna-Soar reentry glider. The extreme environment associated with a reentry from space (across a large range of dynamic pressures and Mach numbers) caused engineers to seek a better way of adjusting the feedback gains than stored programs and direct measurements of the atmospheric variables. The concept was based on increasing the electrical gain until a small limit-cycle was measured at the control surface, then alternately lowering and raising the electrical gain to maintain a small continuous, but controlled, limit-cycle throughout the flight. This allowed the total loop gains to remain at their highest safe value but avoided the need to accurately predict (or measure) the aerodynamic gains (control surface effectiveness).
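The gain-changer logic just described can be sketched in a few lines. This is an illustrative reconstruction, not the MH-96's actual mechanization: the amplitude target, step size, and gain limits are invented placeholder values.

```python
# Hypothetical sketch of limit-cycle-based adaptive gain logic:
# raise the forward-loop gain until a small oscillation (limit cycle)
# is sensed at the control surface, then alternately lower and raise
# the gain to hold that oscillation at a small, controlled amplitude.
# All numeric values below are illustrative, not X-15/MH-96 data.

def adapt_gain(gain, limit_cycle_amplitude,
               target_amplitude=0.5, step=0.05,
               gain_min=0.1, gain_max=10.0):
    """One update cycle of the hypothetical gain changer.

    limit_cycle_amplitude is the measured oscillation amplitude at
    the control surface; below target, the gain is pushed up, and
    above target it is backed off, then clamped to its limits.
    """
    if limit_cycle_amplitude < target_amplitude:
        gain += step   # no limit cycle sensed: push gain higher
    else:
        gain -= step   # limit cycle too large: back the gain off
    return min(max(gain, gain_min), gain_max)

# Failure mode seen in the X-15 No. 3 accident: with the surfaces
# hard against their stops, no limit cycle is sensed, so the logic
# holds the gain pinned at its maximum.
g = 10.0
for _ in range(5):
    g = adapt_gain(g, 0.0)   # amplitude stays zero on the stops
print(g)   # -> 10.0 (remains at gain_max)
```

The sketch makes the accident mechanism concrete: the scheme infers the safe gain only from the presence of the small oscillation, so any condition that suppresses that oscillation leaves the gain saturated.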

This system, the MH-96 Adaptive Flight Control System (AFCS), was installed in a McDonnell F-101 Voodoo testbed and flown successfully by Minneapolis Honeywell in 1959-1960. It proved to be fairly robust in flight, and further system development occurred after the cancellation of the X-20 Dyna-Soar program in 1963. After a ground-test explosion during an engine run with the third X-15 in June 1960, NASA and the Air Force decided to install the MH-96 in the hypersonic research aircraft when it was rebuilt. The system was expanded to include several autopilot features, as well as a blending of the aerodynamic and reaction controls for the entry environment. The system was triply redundant, thus providing fail-operational, fail-safe capability. This was an improvement over the other two X-15s, which had only fail-safe features. Because of the added features of the MH-96, and the additional redundancy it provided, NASA and the Air Force used the third X-15 for all planned high-altitude flights (above 250,000 feet) after an initial envelope expansion program to validate the aircraft's basic performance.[689]

Unfortunately, on November 15, 1967, the third X-15 crashed, killing its pilot, Major Michael J. Adams. The loss of X-15 No. 3 was related to the MH-96 Adaptive Flight Control System design, along with several other factors. The aircraft began to drift off its heading and then entered a spin at high altitude, where dynamic pressure ("q" in engineering shorthand) is very low. The flight control system gain was at its maximum when the spin started. The control surfaces were all deflected to their respective stops in an attempt to counter the spin, so no limit-cycle motion (4 hertz [Hz] for this airplane) was being detected by the gain changer. The gain therefore remained at maximum, even though the dynamic pressure (and hence the structural loading) was increasing rapidly during entry. When the spin finally broke and the airplane returned to a normal angle of attack, the gain was well above normal, and the system commanded maximum pitch-rate response from the all-moving elevon surface actuators. With the surface actuators operating at their maximum rate, there was still no 4-Hz limit-cycle being sensed by the gain changer, and the gain remained at its maximum value, driving the airplane to structural failure at approximately 60,000 feet and a velocity of Mach 3.93.[690]

As the accident to the third X-15 indicated, the self-adaptive control system concept, although used successfully for several years, had some subtle yet profound difficulties that resulted in its use in only one subsequent production aircraft, the General Dynamics F-111 multipurpose strike aircraft. One characteristic common to most of the model-following systems was a disturbing tendency to mask deteriorating handling qualities. The system was capable of providing good handling qualities to the pilot right up until the system became saturated, resulting in an instantaneous loss of control without the typical warning a pilot would receive from the traditional signs of impending loss of control, such as lightening of control forces and the beginning of control reversal.[691] A second serious drawback that affected the F-111 was the relative ease with which the self-adaptive system's gain changer could be "fooled," as in the accident to the third X-15. During early testing of the self-adaptive flight control system on the F-111, testers discovered that, while the airplane was flying in very still air, the gain changer could drive the gain to quite high values before any limit-cycle was observed. A divergent limit-cycle would then occur for several seconds while the gain changer stepped the gain back down to the proper levels. The solution was to install a "thumper" in the system that periodically introduced a small bump in the control system to start an oscillation the gain changer could recognize. These oscillations were small and not detectable by the pilot; by inducing a little "acceptable" perturbation, the danger of encountering an unexpected larger one was avoided.

For most current airplane applications, flight control systems use stored gain schedules as a function of measured flight conditions (altitude, airspeed, etc.). The air data measurement systems are already installed on the airplane for pilot displays and navigational purposes, so the additional complication of a self-adaptive feature is considered unnecessary. As the third X-15's accident indicated, even a well-designed adaptive flight control system can be fooled, with tragic consequences.[692] The "lesson learned," of course (or, more properly, the "lesson relearned") is that the more complex the system, the harder it is to identify the potential hazards. It is a lesson that engineers and designers might profitably take to heart, no matter what their specialty.
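A stored gain schedule of the kind described above is, in essence, a lookup table keyed to measured air data. A minimal sketch (the breakpoints and gain values here are hypothetical, chosen only to show the shape of such a table):

```python
import bisect

# Hypothetical pitch-axis gain schedule keyed to calibrated airspeed (knots).
# Higher dynamic pressure makes the surfaces more effective, so less gain
# is needed at high speed.
SCHEDULE = [(150, 4.0), (250, 2.5), (350, 1.6), (450, 1.0)]

def scheduled_gain(airspeed_kt):
    """Linearly interpolate the gain schedule; clamp at the table ends."""
    speeds = [s for s, _ in SCHEDULE]
    if airspeed_kt <= speeds[0]:
        return SCHEDULE[0][1]
    if airspeed_kt >= speeds[-1]:
        return SCHEDULE[-1][1]
    i = bisect.bisect_right(speeds, airspeed_kt)
    (s0, g0), (s1, g1) = SCHEDULE[i - 1], SCHEDULE[i]
    frac = (airspeed_kt - s0) / (s1 - s0)
    return g0 + frac * (g1 - g0)
```

Because the gain is read from a precomputed table rather than inferred in flight, the scheme has nothing to be "fooled" by; its accuracy instead depends on the air data sensors and on how well the schedule was established during flight test.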

Flight Control Coupling

Flight control coupling is a slow loss of control of an airplane caused by a unique combination of static stability and control effectiveness. Day described control coupling, the second mode of dynamic coupling, as "a coupling of static yaw and roll stability and control moments which can produce untrimmability, control reversal, or pilot-induced oscillation (PIO)."[742] So-called "adverse yaw" is a common phenomenon associated with control of an aircraft equipped with ailerons. The down-going aileron creates an increase in lift and drag for one wing, while the up-going aileron creates a decrease in lift and drag for the opposite wing. The change in lift causes the airplane to roll toward the up-going aileron. The change in drag, however, swings the nose of the airplane away from the direction of the roll (adverse yaw). If the airplane exhibits strong dihedral effect (roll produced by sideslip, a quality more pronounced in a swept wing design), the sideslip produced by the aileron deflections will tend to detract from the commanded roll. In the extreme case, with high dihedral effect and strong adverse yaw, the roll can actually reverse, and the airplane will roll in the direction opposite to that commanded by the pilot (as sometimes happened with the Boeing B-47, though there through aeroelastic twisting of the wing under air loads). If the pilot responds by adding more aileron deflection, the roll reversal and sideslip will increase, and the airplane can go out of control.

As discussed previously, the most dramatic incident of control coupling occurred during the last flight of the X-2 rocket-powered research airplane in September 1956. The dihedral effect for the X-2 was quite strong because of the influence of wing sweep rather than any actual wing dihedral. Dihedral effect due to wing sweep is nonexistent at zero lift but increases proportionally as the angle of attack of the wing increases. After the rocket burned out, at the end of a ballistic, zero-lift trajectory, the pilot started a gradual turn by applying aileron. He also increased the angle of attack slightly to facilitate the turn, and the airplane entered a region of roll reversal. The sideslip increased until the airplane went out of control, tumbling violently. The data from this accident were fully recovered, and the maneuver was analyzed extensively by the NACA, resulting in a better understanding of the control-coupling phenomenon. The concept of a control parameter was subsequently created by the NACA and introduced to industry: a simple equation that predicted the boundary conditions for aileron reversal based on four stability derivatives. When the yawing moment due to sideslip divided by the yawing moment due to aileron equals the rolling moment due to sideslip divided by the rolling moment due to aileron, the airplane remains in balance, and aileron deflection will not cause the airplane to roll in either direction.[743]
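In modern stability-derivative notation (which the NACA report may not have used in exactly this form), the balance condition described above can be written as:

```latex
\[
\frac{C_{n_\beta}}{C_{n_{\delta_a}}} \;=\; \frac{C_{l_\beta}}{C_{l_{\delta_a}}}
\]
```

where \(C_{n_\beta}\) and \(C_{l_\beta}\) are the yawing- and rolling-moment derivatives with respect to sideslip, and \(C_{n_{\delta_a}}\) and \(C_{l_{\delta_a}}\) the corresponding derivatives with respect to aileron deflection. At this equality the roll commanded by the ailerons is exactly canceled by the roll induced through sideslip and dihedral effect; on one side of the boundary the airplane rolls as commanded, on the other side the roll reverses.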

CFD and Transonic Airfoils

The analysis of transonic flows suffers from the same problems as those for the supersonic blunt body discussed above. Even considering the flow to be inviscid, the governing Euler equations are highly nonlinear for both transonic and hypersonic flows. From the numerical point of view, both flow fields are mixed regions of locally subsonic and supersonic flow. Thus, the numerical solution of transonic flows originally encountered the same problem as the supersonic blunt body problem: whatever worked in the subsonic region did not work in the supersonic region, and vice versa. Ultimately, this problem was solved from two points of view. Historically, the first truly successful CFD solution for the inviscid transonic flow over an airfoil was carried out in 1971 by Earll Murman and Julian Cole of Boeing Scientific Research Laboratories, whose collaborative research began at the urging of Arnold "Bud" Goldburg, then Chief Scientist of Boeing.[776] They treated a simplified version of the Euler equations called the small-perturbation velocity potential equation, which limited their solutions to flows over thin airfoils at small angles of attack. Nevertheless, Murman and Cole introduced the concept of writing the finite differences in the equations such that they reached in both the upstream and downstream directions in the subsonic region but only in the upwind direction in the supersonic regions. This is motivated by the underlying physics: in subsonic flow, disturbances propagate in all directions, but in supersonic flow, disturbances propagate only in the downstream direction. It is therefore proper to form the finite differences in the supersonic region so that they take information only from the upstream side of the grid point.
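The idea of switching the difference stencil with the local flow type can be illustrated with a one-dimensional sketch (a deliberate simplification: the actual Murman-Cole scheme operates on the two-dimensional small-perturbation potential equation, and the function below merely shows the stencil choice):

```python
def dphi_dx(phi, i, dx, is_supersonic):
    """First derivative of phi at grid point i with a Murman-Cole-style
    choice of stencil: central differencing where the flow is locally
    subsonic (information arrives from both sides of the point), one-sided
    backward (upwind) differencing where it is locally supersonic
    (information arrives only from upstream, i.e., smaller x)."""
    if is_supersonic:
        return (phi[i] - phi[i - 1]) / dx          # backward / upwind
    return (phi[i + 1] - phi[i - 1]) / (2.0 * dx)  # central
```

At each grid point a transonic solver of this type first tests the local Mach number and then applies the matching stencil, so a single sweep can handle the embedded supersonic pocket and the surrounding subsonic flow consistently.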

Today, this approach is called "upwinding" and is part of many modern CFD algorithms in use for all kinds of flows. In 1971, the idea was groundbreaking, and it allowed Murman and Cole to obtain the first successful numerical solutions of transonic flow over a body. In addition to the restriction to thin airfoils at small angles of attack, however, their use of the small-perturbation velocity potential equation limited their solutions to isentropic flows. This meant that, although their solution captured the semblance of a shock wave in the flow, the location of, and flow changes across, a shock wave were not accurate. Because many transonic flows involve shock waves embedded in the flow, this was a serious limitation. The solution involved the numerical treatment of the Euler equations, which, as discussed earlier in this article, accurately pertain to any inviscid flow, not just one with small perturbations and free of shocks.

The most capable of these CFD solutions were developed by Antony Jameson, then a professor at Princeton University (and now at Stanford), whose work was heavily sponsored by the NASA Langley Research Center. Using the concept of time marching in combination with a Runge-Kutta time integration of the unsteady equations, Jameson constructed a series of outstanding transonic airfoil codes under the general name of the FLO codes. These codes entered standard use in many aircraft companies and laboratories. Once again, NASA had been responsible for a major advancement in CFD, helping to develop transonic flow codes that advanced the design of many airfoil shapes used today on modern commercial jet transports.[777]
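The time-marching strategy can be illustrated with a classical fourth-order Runge-Kutta step on a model equation (a generic sketch, not Jameson's FLO formulation): the unsteady equations are integrated forward in time until the transient dies out and the solution settles to the desired steady state.

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def march_to_steady(f, y0, h=0.1, steps=400):
    """March dy/dt = f(t, y) forward in time. For a dissipative system the
    transient decays, and y approaches its steady-state value; a flow solver
    does the same with the unsteady flow equations at every grid point."""
    t, y = 0.0, y0
    for _ in range(steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y
```

The appeal of the approach is that the unsteady equations remain of one mathematical type everywhere, so the same scheme can be marched through subsonic and supersonic regions alike, sidestepping the mixed elliptic-hyperbolic character of the steady transonic problem.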

Goddard Space Flight Center

Goddard Space Flight Center was established in 1959, absorbing the U.S. Navy Vanguard satellite project and, with it, the mission of developing, launching, and tracking unpiloted satellites. Since that time, its roles and responsibilities have expanded to encompass space science, Earth observation from space, and unpiloted satellite systems more broadly.

Structural analysis problems studied at Goddard included definition of operating environments and loads applicable to vehicles, subsystems, and payloads; modeling and analysis of complete launch vehicle/payload systems (generic and for specific planned missions); thermally induced loads and deformation; and problems associated with lightweight, deployable structures such as antennas. Control-structural interactions and multibody dynamics are other related areas of interest.

Goddard's greatest contribution to computer structural analysis was, of course, the NASTRAN program. With the public release of NASTRAN, management responsibility shifted to Langley. However, Goddard remained extremely active in the early application of NASTRAN to practical problems, in its evaluation, and in the ongoing improvement and addition of new capabilities: thermal analysis (part of a larger Structural-Thermal-Optical [STOP] program, which is discussed below), hydroelastic analysis, automated cyclic symmetry, and substructuring techniques, to name a few.[885]

Structural-Thermal-Optical analysis predicts the impact on the performance of a (typically satellite-based) sensor system due to the deformation of the sensors and their supporting structure(s) under thermal and mechanical loads. After NASTRAN was developed, a major effort began at GSFC to achieve better integration of the thermal and optical analysis components with NASTRAN as the structural analysis component. The first major product of this effort was the NASTRAN Thermal Analyzer. The program was based on NASTRAN and thereby inherited a great deal of modeling capability and flexibility. Most importantly, the resulting inputs and outputs were fully compatible with NASTRAN: "Prior to the existence of the NASTRAN Thermal Analyzer, available general purpose thermal analysis computer programs were designed on the basis of the lumped-node thermal balance method. . . . They were not only limited in capacity but seriously handicapped by incompatibilities arising from the model representations [lumped-node versus finite-element]. The intermodal transfer of temperature data was found to necessitate extensive interpolation and extrapolation. This extra work proved not only a tedious and time-consuming process but also resulted in compromised solution accuracy. To minimize such an interface obstacle, the STOP project undertook the development of a general purpose finite-element heat transfer computer program."[886] The capability was developed by the MacNeal-Schwendler Corporation under subcontract from Bell Aerospace. "It must be stressed, however, that a cooperative financial and technical effort between [Goddard and Langley] made possible the emergence of this capability."[887]

Another element of the STOP effort was the computation of "view factors" for radiation between elements: "In an in-house STOP project effort, GSFC has developed an IBM-360 program named 'VIEW' which computes the view factors and the required exchange coefficients between radiating boundary elements."[888] VIEW was based on an earlier view factor program, RAVFAC, but was modified principally for compatibility with NASTRAN and eventual incorporation as a subroutine in NASTRAN.[889] STOP is still an important part of the analysis of many of the satellite packages that Goddard manages, and work continues toward better performance with complex models, multidisciplinary design, and optimization capability, as well as analysis.
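View factors obey two standard identities of radiation exchange that codes of this kind can exploit as consistency checks (these are general relations from heat-transfer theory, not details specific to the GSFC program):

```latex
\[
A_i \, F_{i \to j} \;=\; A_j \, F_{j \to i},
\qquad
\sum_{j=1}^{N} F_{i \to j} \;=\; 1
\]
```

The first (reciprocity) relates the fraction of radiation leaving surface \(i\) that reaches surface \(j\) to the reverse fraction through the surface areas \(A_i\) and \(A_j\); the second (the summation rule) states that, in a closed enclosure of \(N\) surfaces, all radiation leaving a surface must land somewhere.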

COmposite Blade STRuctural ANalyzer (COBSTRAN, Glenn, 1989)

COBSTRAN was a preprocessor for NASTRAN, designed to generate finite element models of composite blades. While developed specifically for advanced turboprop blades under the Advanced Turboprop (ATP) project, it was subsequently applied to compressor and turbine blades. It could be used with both COSMIC NASTRAN and MSC/NASTRAN and was later extended to work as a preprocessor for the MARC nonlinear finite element code.[984]

BLAde SIMulation (BLASIM, 1992)

BLASIM calculated the dynamic characteristics of engine blades before and after an ice impact event. It could accept input geometry in the form of airfoil coordinates or as a NASTRAN-format finite element model, and it could use the ICAN program (discussed separately) to generate ply properties of composite blades.[985] "The ice impacts the leading edge of the blade causing severe local damage. The local structural response of the blade due to the ice impact is predicted via a transient response analysis by modeling only a local patch around the impact region. After ice impact, the global geometry of the blade is updated using deformations of the local patch and a free vibration analysis is performed. The effects of ice impact location, ice size and ice velocity on the blade mode shapes and natural frequencies are investigated."[986]
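The global effect BLASIM looked for, a shift in natural frequencies after local damage, can be illustrated with a single-degree-of-freedom sketch (the stiffness and mass values are hypothetical; the real analysis is a finite element free-vibration solution over the whole blade):

```python
import math

def natural_frequency_hz(k, m):
    """Undamped natural frequency of a single-DOF oscillator:
    f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k / m) / (2.0 * math.pi)

# Hypothetical blade-like effective stiffness and modal mass before impact...
k_intact, m = 4.0e6, 1.2            # N/m, kg
# ...and after impact, with local damage reducing effective stiffness by 10%.
k_damaged = 0.9 * k_intact

f_before = natural_frequency_hz(k_intact, m)
f_after = natural_frequency_hz(k_damaged, m)
# Frequency scales with sqrt(k), so a 10% stiffness loss gives roughly a
# 5% drop in natural frequency.
```

In the full blade problem each mode responds differently depending on where the damage falls relative to the mode shape, which is why BLASIM examined the sensitivity of mode shapes and frequencies to impact location, ice size, and ice velocity.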