
High-Speed Research

When NASA decided to start a High-Speed Research (HSR) program in 1990, it quickly moved to draw in the E Cubed combustor research to address previous concerns about emissions. The goal of HSR was to develop a second generation of High-Speed Civil Transport (HSCT) aircraft with better performance than the Supersonic Transport (SST) project of the 1970s in several areas, including emissions. The project sought to lay the research foundation for industry to pursue a supersonic civil transport aircraft that could fly 300 passengers at more than 1,500 miles per hour, or Mach 2, crossing the Atlantic or Pacific Ocean in half the time of subsonic jets. The program had an aggressive NOx goal because there were still concerns, held over from the days of the SST in the 1970s, that a super-fast, high-flying jet could damage the ozone layer.[1414]

NASA’s Atmospheric Effects of Stratospheric Aircraft project was used to guide the development of environmental standards for the new HSCT exhaust emissions. The study yielded optimistic findings: there would be negligible environmental impact from a fleet of 500 HSCT aircraft using advanced technology engine components.[1415] The HSR program set a NOx emission index goal of 5 grams per kilogram of fuel burned, or 90 percent better than conventional technology at the time.[1416]

NASA sought to meet the NOx goal primarily through major advancements in combustion technologies. The HSR effort was canceled in 1999 because of budget constraints, but HSR laid the groundwork for future development of clean combustion technologies under the AST and UEET programs discussed below.

Benefits of NASA’s "Good Stewardship" Regarding the Agency’s Participation in the Federal Wind Energy Program

NASA Lewis’s involvement in the Federal Wind Energy Program from 1974 through 1988 brought a high degree of engineering experience and expertise to the project that had a lasting impact on the development and use of wind energy, both in the United States and internationally. During this program, NASA developed the world’s first multimegawatt horizontal-axis wind turbines, the dominant wind turbine design in use throughout the world today.

NASA Lewis was able to make a quick start and contribution to the program because of the Research Center’s longstanding experience and expertise in aerodynamics, power systems, materials, and structures. The first task that NASA Lewis accomplished was to bring forward and document past efforts in wind turbine development, including work undertaken by Palmer Putnam (Smith-Putnam wind turbine), Ulrich Hutter (Hutter-Allgaier wind turbine), and the Danish Gedser mill. This information and database served both to get NASA Lewis involved in the Wind Energy Program and to form an initial data and experience foundation to build upon. Throughout the program, NASA Lewis continued to develop new concepts and testing and modeling techniques that gained wide use within the wind energy field. It documented the research and development efforts and made this information available to industry and others working on wind turbine development.

Lasting accomplishments from NASA’s program involvement included development of the soft shell tubular tower, variable speed asynchronous generators, structural dynamics, engineering modeling, design methods, and composite materials technology. NASA Lewis’s experience with aircraft propellers and helicopter rotors had quickly enabled the Research Center to develop and experiment with different blade designs, control systems, and materials. A significant blade development program advanced the use of steel, aluminum, wood epoxy composites, and later fiberglass composite blades that generally became the standard blade material. Finally, as presented in detail above, NASA was involved in the development, building, and testing of 13 large horizontal-axis wind turbines, with both the Mod-2 and Mod-5B turbines demonstrating the feasibility of operating large wind turbines in a power network environment. With the end of the energy crisis of the 1970s and the resulting end of most U. S. Government funding, the electric power market was unable to support the investment in the new large wind turbine technology. Development interest moved toward the construction and operation of smaller wind turbine generators for niche markets that could be supported where energy costs remained high.

NASA Lewis’s involvement in the wind energy program started winding down in the early 1980s, and, by 1988, the program was basically turned over to the Department of Energy. With the decline in energy prices, U. S. turbine builders generally left the business, leaving Denmark and other European nations to develop the commercial wind turbine market.

While NASA Lewis had developed a 4-megawatt wind turbine in 1982, Denmark’s systems at that time reached only 10 percent of that power level. However, with steady public policy and product development, Denmark had captured much of the $15 billion world market by 2004.

TABLE 1
COMPARATIVE WIND TURBINE TECHNOLOGICAL DEVELOPMENT, 1981-2007

TURBINE TYPE          Nibe A             NASA WTS-4         Vestas
YEAR                  1981               1982               2007
COUNTRY OF ORIGIN     Denmark            United States      Denmark
POWER (IN KW)         630                4,000              1,800
TIP HEIGHT (FEET)     230                425                355
POWER REGULATION      Partial pitch      Full pitch         Full pitch
BLADE NUMBER          3                  2                  3
BLADE MATERIAL        Steel/fiberglass   Fiberglass         Fiberglass
TOWER STRUCTURE       Concrete           Steel tubular      Steel tubular

Source: Larry A. Viterna, NASA.

Most of the technology developed by NASA, however, continued to represent a significant contribution to wind power generation applicable both to large and small wind turbine systems. In recent years, interest has been renewed in building larger-size wind turbines, and General Electric, which was involved in the DOE-NASA wind energy program, has now become the largest U. S. manufacturer of wind power generators and, in 2007, was among the world’s top three manufacturers of wind turbine systems. The Danish company Vestas remained the largest company in the wind turbine field. GE products currently include 1.5-, 2.5-, and, for offshore use, 3.6-megawatt systems. New companies, such as Clipper Wind Power, with its manufacturing plant in Cedar Rapids, IA, and Nordic Windpower also have entered the large turbine fabrication business in the United States. Clipper, which is a U. S.-U. K. company, installed its first system at Medicine Bow, WY, which was the location of a DOE-NASA Mod-2 unit. In the first quarter of 2007, the company installed eight commercial 2.5-megawatt Clipper Liberty machines. Nordic Windpower, which represents a merger of Swedish, U. S., and U. K. teams, markets its 1-megawatt unit that uses a two-bladed teetered rotor that evolved from the WTS-4 wind turbine under the NASA Lewis program.

In summary, NASA developed and made available to industry significant technology and turbine hardware designs through its “good stewardship” of wind energy development from 1974 through 1988. NASA thus played a leading role in the international development and utilization of wind power to help address the Nation’s energy needs today. In doing so, NASA Lewis fulfilled its primary wind program goal of developing and transferring to industry the technology for safe, reliable, and environmentally acceptable large wind turbine systems capable of generating significant amounts of electricity at cost-competitive prices. In 2008, the United States achieved the No. 1 world ranking for total installed capacity of wind turbine systems for the generation of electricity.

Whitcomb and History

Aircraft manufacturers tried repeatedly to lure Whitcomb away from NASA Langley with the promise of a substantial salary. At the height of his success during the supercritical wing program, Whitcomb remarked: “What you have here is what most researchers like—independence. In private industry, there is very little chance to think ahead. You have to worry about getting that contract in 5 or 6 months.”[256] Whitcomb’s independent streak was key to his and the Agency’s success. His relationship with his immediate boss, Laurence K. Loftin, the Chief of Aerodynamic Research at Langley, facilitated that autonomy until the late 1970s. When ordered to test in the 8-foot TPT, widely known as “Whitcomb’s tunnel,” a laminar flow concept that he felt was impractical, he retired as head of the Transonic Aerodynamics Branch in February 1980. He had worked in that organization since coming to Hampton from Worcester 37 years earlier, in 1943.[257]

Whitcomb’s resignation was partly due to the outside threat to his independence, but it was also an expression of his practical belief that his work in aeronautics was finished. He was an individual in touch with major national challenges, with the willingness and ability to devise solutions to help address them. When he made his famous remark “We’ve done all the easy things—let’s do the hard [emphasis Whitcomb’s] ones,” he was stating simply that his purpose was to make a difference.[258] In the early days of his career, the challenge was national security, when an innovation such as the area rule was a crucial element of the Cold War tensions between the United States and the Soviet Union. The supercritical wing and winglets were Whitcomb’s expression of making commercial aviation and, by extension, NASA, viable in an environment shaped by world fuel shortages and a new search for economy in aviation. He was a lifelong workaholic bachelor almost singularly dedicated to subsonic aerodynamics. While Whitcomb exhibited a reserved personality outside the laboratory, it was in the wind tunnel that he was unrestrained in his pursuit of solutions that resulted from his highly intuitive and individualistic research methods.

With his major work accomplished, Whitcomb remained at Langley as a part-time and unpaid distinguished research associate until 1991. With over 30 published technical papers, numerous formal presentations, and his teaching position in the Langley graduate program, he was a valuable resource for consultation and discussion at Langley’s numerous technical symposiums. In his personal life, Whitcomb continued his involvement in community arts in Hampton and pursued a new quest: an alternative source of energy to displace fossil fuels.[259]

Whitcomb’s legacy is found in the airliners, transports, business jets, and military aircraft flying today that rely upon the area rule fuselage, supercritical wings, and winglets for improved efficiency. The fastest, highest-flying, and most lethal example is the U. S. Air Force’s Lockheed Martin F-22 Raptor multirole air superiority fighter. Known widely as the 21st Century Fighter, the F-22 is capable of Mach 2 and features an area rule fuselage for sustained supersonic cruise, or supercruise, performance and a supercritical wing. The Raptor was an outgrowth of the Advanced Tactical Fighter (ATF) program that ran from 1986 to 1991. Lockheed designers benefited greatly from NASA work in fly-by-wire control, composite materials, and stealth design to meet the mission of the new aircraft. The Raptor made its first flight in 1997, and production aircraft reached Air Force units beginning in 2005.[260]

Whitcomb’s ideal transonic transport also included an area rule fuselage, but because most transports are truly subsonic, there is no need for that design feature on today’s aircraft.[261] The Air Force’s C-17 Globemaster III transport is the most illustrative example. In the early 1990s, McDonnell-Douglas used the knowledge generated with the YC-15 to develop a system of new innovations—supercritical airfoils, winglets, advanced structures and materials, and four monstrous high-bypass turbofan engines—that resulted in the award of the 1994 Collier Trophy. Operational since 1995, the C-17 is a crucial element in the Air Force’s global operations as a heavy-lift, air-refuelable cargo transport.[262] After the C-17 program, McDonnell-Douglas, which was absorbed into the Boeing Company in 1997, combined NASA-derived advanced blended wing body configurations with advanced supercritical airfoils and winglets with rudder control surfaces in the 1990s.[263]

Unfortunately, Whitcomb’s tools are in danger of disappearing. Both the 8-foot HST and the 8-foot TPT are located beside each other on Langley’s East Side, situated between Langley Air Force Base and the Back River. The National Register of Historic Places designated the Collier-winning 8-foot HST a national historic landmark in October 1985.[264] Shortly after Whitcomb’s discovery of the area rule, the NACA suspended active operations at the tunnel in 1956. As of 2006, the Historic Landmarks program designated it as “threatened,” and its future disposition was unclear.[265] The 8-foot TPT opened in 1953. Whitcomb validated the area rule concept and conducted his supercritical wing and winglet research through the 1950s, 1960s, and 1970s in this tunnel, which was located right beside the old 8-foot HST. The tunnel ceased operations in 1996 and has been classified as “abandoned” by NASA.[266] In the early 21st century, the need for space has overridden the historical importance of the tunnel, and it is slated for demolition.

Overall, Whitcomb and Langley shared the quest for aerodynamic efficiency, which became a legacy for both. Whitcomb flourished working in his tunnel, limited only by the wide boundaries of his intellect and enthusiasm. One observer considered him to be “flight theory personified.”[267] More importantly, Whitcomb was the ultimate personification of the importance of the NACA and NASA to American aeronautics during the second aeronautical revolution. The NACA and NASA hired great people, pure and simple, in the quest to serve American aeronautics. These bright minds made up a dynamic community that created innovations and ideas that were greater than the sum of their parts. Whitcomb, as one of those parts, fostered innovations that proved to be of longstanding value to aviation.

A 3-percent scale model of the Boeing Blended Wing Body 450 passenger subsonic transport in the Langley 14 x 22 Subsonic Tunnel. NASA.

Breaking Up Shock Waves with “Quiet Spike”

In June 2003, the FAA—citing a finding by the National Research Council that there were no insurmountable obstacles to building a quiet supersonic aircraft—began seeking comments on its noise standards in advance of a technical workshop on the issue. In response, the Aerospace Industries Association, the General Aviation Manufacturers Association, and most aircraft companies felt that the FAA’s sonic boom restriction was still the most serious impediment to creating the market for a supersonic business jet (SSBJ), which would be severely handicapped if unable to fly faster than sound over land.[511]

By the time the FAA workshop was held in mid-November, Peter Coen of the Langley Center and a Gulfstream vice president were able to report on the success of the SSBD. Coen also outlined future initiatives in NASA’s Supersonic Vehicles Technology program. In addition to leveraging the results of DARPA’s QSP research, NASA hoped to engage industry partners for follow-on projects on the sonic boom and was also working with Eagle Aeronautics on new three-dimensional CFD boom propagation models. For additional psychoacoustical studies, Langley had reconditioned its boom simulator booth. And as a possible followup to the SSBD, NASA was considering a shaped low-boom demonstrator that could fly over populated areas, allowing definitive surveys on public acceptance of minimized boom signatures.[512]

The Concorde made its final transatlantic flights just a week after the FAA’s workshop. Its demise marked the first time in modern history that a mode of transportation had retreated to slower speeds. This did, however, leave the future supersonic market entirely open to business jets. Although the success of the SSBD hinted at the feasibility of such an aircraft, designing one—as explained in a new study by Langley’s Robert Mack—would still not be at all easy.[513]

During the next several years, a few individual investors and a number of American and European aircraft companies—including Gulfstream, Boeing, Lockheed, Cessna, Raytheon, Dassault, Sukhoi, and the privately held Aerion Corporation—pursued assorted SSBJ concepts with varying degrees of cooperation, competition, and commitment. Some of these and other aviation-related companies also worked together on supersonic strategies through three consortiums: Supersonic Aerospace International (SAI), which had support from Lockheed-Martin; the 10-member Supersonic Cruise Industry Alliance (SCIA); and Europe’s High-Speed Aircraft Industrial Project (HISAC), comprising more than 30 companies, universities, and other members. Meanwhile, the FAA began the lengthy process for considering a new metric on acceptable sonic booms and, in the interest of global consistency, prompted the International Civil Aviation Organization (ICAO) to also put the issue on its agenda. It was in this environment of both renewed enthusiasm and ongoing uncertainty about commercial supersonic flight that NASA continued to study and experiment on ways to make the sonic boom more acceptable to the public.[514]

Richard Wlezien (back from DARPA as NASA’s vehicle systems manager) hoped to follow up on the SSBD with a truly low-boom supersonic demonstrator, possibly by 2010. In July 2005, NASA announced the Sonic Boom Mitigation Project, which began with concept explorations by major aerospace companies on the feasibility of either modifying another existing aircraft or designing a new demonstrator.[515] As explained by Peter Coen, “these studies will determine whether a low sonic boom demonstrator can be built at an affordable cost in a reasonable amount of time.”[516] Although numerous options for using existing aircraft were under investigation, most of the studies were leaning toward the need to build a new experimental airplane as the most effective solution. On August 30, 2005, however, NASA Headquarters announced the end of the short-lived Sonic Boom Mitigation Project because of changing priorities.[517]

Despite this setback, there was still one significant boom lowering experiment in the making. Gulfstream Aerospace Corporation, which had been teamed with Northrop Grumman in one of the canceled studies, had already patented a new sonic boom mitigation technique.[518] Testing this invention—a retractable lance-shaped device to extend the length of an aircraft—would become the next major sonic boom flight experiment.

In the meantime, NASA continued some relatively modest sonic boom testing at the Dryden Center, mainly to help improve simulation capabilities. In a joint project with the FAA and Transport Canada in the summer of 2005, researchers from Pennsylvania State University strung an array of advanced microphones at Edwards AFB to record sonic booms created by Dryden F-18s passing overhead. Eighteen volunteers, who sat on lawn chairs alongside the row of microphones during the flyovers to experience the real thing, later gauged the fidelity of the played-back recordings. These were then used to help improve the accuracy of the booms replicated in simulators.[519]

“Quiet Spike” was the name that Gulfstream gave to its nose boom concept. Based on CFD models and results from Langley’s 4 by 4 supersonic wind tunnel, Gulfstream was convinced that the Quiet Spike device could greatly mitigate a sonic boom by breaking up the typical nose shock into three less-powerful waves that would propagate in parallel to the ground.[520] However, the company needed to test the structural and aerodynamic suitability of the device and also obtain supersonic in-flight data on its shock scattering ability. NASA’s Dryden Flight Research Center had the capabilities needed to accomplish these tasks. Under this latest public-private partnership, Gulfstream fabricated a telescoping 30-foot-long nose boom (made of molded graphite epoxy over an aluminum frame) to attach to the radar bulkhead of Dryden’s frequently modified F-15B No. 836. A motorized cable and pulley system could extend the spike up to 24 feet and retract it back to 14 feet. After extensive static testing at its Savannah, GA, facility, Gulfstream and NASA technicians at Dryden attached the specially instrumented spike to the F-15’s radar bulkhead in April 2006 and began conducting further ground tests, such as for vibration.[521]

Close-up view of the SSBD F-5E, showing its enlarged “pelican” nose and lower fuselage designed to shape the shock waves from the front of the airframe. NASA.

After various safety checks, aerodynamic assessments, and checkout flights, Dryden conducted Quiet Spike flight tests from August 10, 2006, until February 14, 2007. Key engineers on the project included Dryden’s Leslie Molzahn and Thomas Grindle, and Gulfstream’s Robbie Cowart. Veteran NASA test pilot Jim Smolka gradually expanded the F-15B’s flight envelope up to Mach 1.8 and performed sonic boom experiments with the telescoping nose boom at speeds up to Mach 1.4 at 40,000 feet. Aerial refueling by AFFTC’s KC-135 allowed extended missions with multiple test points. Because it was known that the weak shock waves from the spike would rather quickly coalesce with the more powerful shock waves generated by the rest of the F-15’s unmodified high-boom airframe, data were collected from distances of no more than 1,000 feet. These measurements, made by a chase plane using probing techniques similar to those of the SR-71 and SSBD tests, confirmed CFD models of the spike’s ability to generate a sawtooth wave pattern that, if reaching the surface, would cause only a muffled sonic boom. Analysis of the data appeared to confirm that shocks of equal strength would not coalesce into a single strong shock. In February 2007, with all major test objectives having been accomplished, the Quiet Spike F-15B was flown to Savannah for Gulfstream to restore to its normal configuration.[522]

For this successful test of an innovative design concept for a future SSBJ, James Smolka, Leslie Molzahn, and three Gulfstream employees subsequently received Aviation Week and Space Technology’s Laureate Award in Aeronautics and Propulsion. One month later, however, both the Gulfstream Corporation and the Dryden Center were saddened when Gerard Schkolnik, Gulfstream’s Director of Supersonic Technology Programs, who had previously been a Dryden employee for 15 years, died in an airshow accident.[523]

Self-Adaptive Flight Control Systems

One of the more sophisticated electronic control system concepts was funded by the Air Force Flight Dynamics Laboratory and created by Minneapolis Honeywell in the late 1950s for use in the Air Force-NASA-Boeing X-20 Dyna-Soar reentry glider. The extreme environment associated with a reentry from space (across a large range of dynamic pressures and Mach numbers) caused engineers to seek a better way of adjusting the feedback gains than stored programs and direct measurements of the atmospheric variables. The concept was based on increasing the electrical gain until a small limit-cycle was measured at the control surface, then alternately lowering and raising the electrical gain to maintain a small continuous, but controlled, limit-cycle throughout the flight. This allowed the total loop gains to remain at their highest safe value but avoided the need to accurately predict (or measure) the aerodynamic gains (control surface effectiveness).
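The gain-seeking logic lends itself to a compact illustration. The Python sketch below is a minimal rendering of the rule just described; the function name, target amplitude, step size, and limits are invented for illustration and are not values from the actual MH-96 design.

```python
def update_gain(gain, limit_cycle_amplitude,
                target_amplitude=0.01,  # desired small limit-cycle amplitude (assumed units)
                step=0.05,              # fractional gain change per update (assumed)
                gain_min=0.1, gain_max=10.0):
    """Raise the loop gain until a small control-surface limit-cycle appears,
    then alternately lower and raise it to hold that amplitude, keeping the
    total loop gain near its highest safe value without needing to know the
    aerodynamic gains."""
    if limit_cycle_amplitude < target_amplitude:
        gain *= 1.0 + step   # no limit-cycle sensed: push the gain upward
    else:
        gain *= 1.0 - step   # limit-cycle sensed: back the gain off
    return min(max(gain, gain_min), gain_max)
```

Run at a fixed rate against a measured limit-cycle amplitude, this rule hunts continuously around the gain that just sustains the small oscillation, which is the essence of the self-adaptive concept.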

This system, the MH-96 Adaptive Flight Control System (AFCS), was installed in a McDonnell F-101 Voodoo testbed and flown successfully by Minneapolis Honeywell in 1959-1960. It proved to be fairly robust in flight, and further system development occurred after the cancellation of the X-20 Dyna-Soar program in 1963. After a ground-test explosion during an engine run with the third X-15 in June 1960, NASA and the Air Force decided to install the MH-96 in the hypersonic research aircraft when it was rebuilt. The system was expanded to include several autopilot features, as well as a blending of the aerodynamic and reaction controls for the entry environment. The system was triply redundant, thus providing fail-operational, fail-safe capability. This was an improvement over the other two X-15s, which had only fail-safe features. Because of the added features of the MH-96, and the additional redundancy it provided, NASA and the Air Force used the third X-15 for all planned high-altitude flights (above 250,000 feet) after an initial envelope expansion program to validate the aircraft’s basic performance.[689]

Unfortunately, on November 15, 1967, the third X-15 crashed, killing its pilot, Major Michael J. Adams. The loss of X-15 No. 3 was related to the MH-96 Adaptive Flight Control System design, along with several other factors. The aircraft began a drift off its heading and then entered a spin at high altitude (where dynamic pressure—“q” in engineering shorthand—is very low). The flight control system gain was at its maximum when the spin started. The control surfaces were all deflected to their respective stops attempting to counter the spin, so no limit-cycle motion—4 hertz (Hz) for this airplane—was being detected by the gain changer. Thus, it remained at maximum gain, even though the dynamic pressure (and hence the structural loading) was increasing rapidly during entry. When the spin finally broke and the airplane returned to a normal angle of attack, the gain was well above normal, and the system commanded maximum pitch rate response from the all-moving elevon surface actuators. With the surface actuators operating at their maximum rate, there was still no 4-Hz limit-cycle being sensed by the gain changer, and the gain remained at the maximum value, driving the airplane into structural failure at approximately 60,000 feet and at a velocity of Mach 3.93.[690]

As the accident to the third X-15 indicated, the self-adaptive control system concept, although used successfully for several years, had some subtle yet profound difficulties that resulted in its being used in only one subsequent production aircraft, the General Dynamics F-111 multipurpose strike aircraft. One characteristic common to most of the model-following systems was a disturbing tendency to mask deteriorating handling qualities. The system was capable of providing good handling qualities to the pilot right up until the system became saturated, resulting in an instantaneous loss of control without the typical warning a pilot would receive from any of the traditional signs of impending loss of control, such as lightening of control forces and the beginning of control reversal.[691] A second serious drawback that affected the F-111 was the relative ease with which the self-adaptive system’s gain changer could be “fooled,” as with the accident to the third X-15. During early testing of the self-adaptive flight control system on the F-111, testers discovered that, while the plane was flying in very still air, the gain changer in the flight control system could drive the gain to quite high values before the limit-cycle was observed. Then a divergent limit-cycle would occur for several seconds while the gain changer stepped the gain back to the proper levels. The solution was to install a “thumper” in the system that periodically introduced a small bump in the control system to start an oscillation that the gain changer could recognize. These oscillations were small and not detectable by the pilot, and thus, by inducing a little “acceptable” perturbation, the danger of encountering an unexpected larger one was avoided.

For most current airplane applications, flight control systems use stored gain schedules as a function of measured flight conditions (altitude, airspeed, etc.). The air data measurement systems are already installed on the airplane for pilot displays and navigational purposes, so the additional complication of a self-adaptive feature is considered unnecessary. As the third X-15’s accident indicated, even a well-designed adaptive flight control system can be fooled, resulting in tragic consequences.[692] The “lesson learned,” of course (or, more properly, the “lesson relearned”), is that the more complex the system, the harder it is to identify the potential hazards. It is a lesson that engineers and designers might profitably take to heart, no matter what their specialty.
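By contrast, a stored gain schedule is simply a lookup table interpolated at the measured flight condition. The sketch below assumes a single pitch-axis gain keyed on dynamic pressure; the breakpoints and gain values are invented for illustration.

```python
import bisect

# Hypothetical schedule: higher gain where dynamic pressure (and thus
# control-surface effectiveness) is low, lower gain where it is high.
QBAR_BREAKPOINTS = [50.0, 150.0, 300.0, 600.0, 1200.0]  # dynamic pressure, psf
PITCH_GAINS = [8.0, 5.0, 3.0, 1.8, 1.0]

def scheduled_gain(qbar):
    """Linearly interpolate the stored gain table at the measured condition."""
    if qbar <= QBAR_BREAKPOINTS[0]:
        return PITCH_GAINS[0]
    if qbar >= QBAR_BREAKPOINTS[-1]:
        return PITCH_GAINS[-1]
    i = bisect.bisect_right(QBAR_BREAKPOINTS, qbar)
    x0, x1 = QBAR_BREAKPOINTS[i - 1], QBAR_BREAKPOINTS[i]
    g0, g1 = PITCH_GAINS[i - 1], PITCH_GAINS[i]
    return g0 + (g1 - g0) * (qbar - x0) / (x1 - x0)
```

Because the schedule depends only on measured air data, it cannot be fooled by unusually smooth air or saturated control surfaces the way a limit-cycle-seeking gain changer can.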

Flight Control Coupling

Flight control coupling is a slow loss of control of an airplane because of a unique combination of static stability and control effectiveness. Day described control coupling—the second mode of dynamic coupling—as “a coupling of static yaw and roll stability and control moments which can produce untrimmability, control reversal, or pilot-induced oscillation (PIO).”[742] So-called “adverse yaw” is a common phenomenon associated with control of an aircraft equipped with ailerons. The down-going aileron creates an increase in lift and drag for one wing, while the up-going aileron creates a decrease in lift and drag for the opposite wing. The change in lift causes the airplane to roll toward the up-going aileron. The change in drag, however, results in the nose of the airplane swinging away from the direction of the roll (adverse yaw). If the airplane exhibits strong dihedral effect (roll produced by sideslip, a quality more pronounced in a swept wing design), the sideslip produced by the aileron deflections will tend to detract from the commanded roll. In the extreme case, with high dihedral effect and strong adverse yaw, the roll can actually reverse, and the airplane will roll in the opposite direction to that commanded by the pilot—as sometimes happened with the Boeing B-47, though by aeroelastic twisting of a wing because of air loads. If the pilot responds by adding more aileron deflection, the roll reversal and sideslip will increase, and the airplane could go out of control.

As discussed previously, the most dramatic incident of control coupling occurred during the last flight of the X-2 rocket-powered research airplane in September 1956. The dihedral effect for the X-2 was quite strong because of the influence of wing sweep rather than the existence of actual wing dihedral. Dihedral effect due to wing sweep is nonexistent at zero lift but increases proportionally as the angle of attack of the wing increases. After the rocket burned out, which occurred at the end of a ballistic, zero-lift trajectory, the pilot started a gradual turn by applying aileron. He also increased the angle of attack slightly to facilitate the turn, and the airplane entered a region of roll reversal. The sideslip increased until the airplane went out of control, tumbling violently. The data from this accident were fully recovered, and the maneuver was analyzed extensively by the NACA, resulting in a better understanding of the control-coupling phenomenon. The concept of a control parameter was subsequently created by the NACA and introduced to the industry. This was a simple equation that predicted the boundary conditions for aileron reversal based on four stability derivatives. When the yawing moment due to sideslip divided by the yawing moment due to aileron is equal to the rolling moment due to sideslip divided by the rolling moment due to aileron, the airplane remains in balance, and aileron deflection will not cause the airplane to roll in either direction.[743]
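In conventional stability-derivative notation (an assumption here; the original NACA report's symbols may differ), that balance condition can be written as

\[
\frac{C_{n_\beta}}{C_{n_{\delta_a}}} \;=\; \frac{C_{l_\beta}}{C_{l_{\delta_a}}},
\]

where \(C_{n_\beta}\) and \(C_{l_\beta}\) are the yawing and rolling moments due to sideslip, and \(C_{n_{\delta_a}}\) and \(C_{l_{\delta_a}}\) are the yawing and rolling moments due to aileron deflection. On one side of this boundary the airplane rolls as commanded; on the other, aileron deflection produces enough adverse yaw, acting through the dihedral effect, to reverse the roll.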

CFD and Transonic Airfoils

The analysis of transonic flows suffers from the same problems as those for the supersonic blunt body discussed above. Just considering the flow to be inviscid, the governing Euler equations are highly nonlinear for both transonic and hypersonic flows. From the numerical point of view, both flow fields are mixed regions of locally subsonic and supersonic flows. Thus, the numerical solution of transonic flows originally encountered the same problem as that for the supersonic blunt body problem: whatever worked in the subsonic region did not work in the supersonic region, and vice versa. Ultimately, this problem was solved from two points of view. Historically, the first truly successful CFD solution for the inviscid transonic flow over an airfoil was carried out in 1971 by Earll Murman and Julian Cole of Boeing Scientific Research Laboratories, whose collaborative research began at the urging of Arnold “Bud” Goldburg, then Chief Scientist of Boeing.[776] They treated a simplified version of the Euler equations called the small-perturbation velocity potential equation. This limited their solutions to the flows over thin airfoils at small angles of attack. Nevertheless, Murman and Cole introduced the concept of writing the finite differences in the equations such that they reached in both the upstream and downstream directions when in the subsonic region, but reached only in the upwind direction in the supersonic regions. This is motivated by the physical fact that in subsonic flow, disturbances propagate in all directions, but in a supersonic flow, disturbances propagate only in the downstream direction. Thus it is proper to form the finite differences in the supersonic region such that they take only information from the upstream side of the grid point.

Today, this approach in modern CFD is called “upwinding” and is part of many modern algorithms in use for all kinds of flows. In 1971, this idea was groundbreaking, and it allowed Murman and Cole to obtain the first successful numerical solutions of the transonic flow over a body. In addition to the restriction of thin airfoils at small angles of attack, however, their use of the small-perturbation velocity potential equation also limited their solutions to isentropic flows. This meant that, although their solution captured the semblance of a shock wave in the flow, the location and flow changes across a shock wave were not accurate. Because many transonic flows involve shock waves embedded in the flow, this was definitely a bit of a problem. The solution to this problem involved the numerical treatment of the Euler equations, which, as discussed earlier in this article, accurately pertain to any inviscid flow, not just one with small perturbations and free of shocks.
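The type-dependent switch is easy to see in schematic form. The Python fragment below is an illustrative sketch, not a reconstruction of Murman and Cole's actual code: it forms the streamwise second difference of the disturbance potential centrally at subsonic points and one-sidedly at supersonic points, with the local character judged from the sign of the small-disturbance coefficient. Here K is the transonic similarity parameter; the grid, boundary conditions, and relaxation driver are omitted.

```python
def phi_xx(phi, i, j, dx, K, gamma=1.4):
    """Streamwise second difference of the disturbance potential phi[i, j],
    switched between central and upwind stencils (the Murman-Cole idea)."""
    phi_x = (phi[i + 1, j] - phi[i - 1, j]) / (2.0 * dx)
    if K - (gamma + 1.0) * phi_x > 0.0:
        # Locally subsonic (elliptic): disturbances travel in all directions,
        # so difference symmetrically about the point.
        return (phi[i + 1, j] - 2.0 * phi[i, j] + phi[i - 1, j]) / dx**2
    # Locally supersonic (hyperbolic): take information only from upstream.
    return (phi[i, j] - 2.0 * phi[i - 1, j] + phi[i - 2, j]) / dx**2
```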

The finest such CFD solutions were developed by Antony Jameson, then a professor at Princeton University (and now at Stanford), whose work was heavily sponsored by NASA’s Langley Research Center. Using the concept of time marching in combination with a Runge-Kutta time integration of the unsteady equations, Jameson constructed a series of outstanding transonic airfoil codes under the general code name of the FLO codes. These codes entered standard use in many aircraft companies and laboratories. Once again, NASA had been responsible for a major advancement in CFD, helping to develop transonic flow codes that advanced the design of many airfoil shapes used today on modern commercial jet transports.[777]

Glenn (Formerly Lewis) Research Center

Glenn is the primary Center for research on all aspects of aircraft and spacecraft propulsion, including engine-related structures. The structures area has typically consisted of approximately 50 researchers (not counting materials).[866] Structures research topics include: structures subjected to thermal loading, dynamic loading, and cyclic loading; spinning structures; coupled thermo-fluid-structural problems; structures with local plasticity and time-varying properties; probabilistic methods and reliability; analysis of practically every part of a turbine engine; Space Shuttle Main Engine (SSME) components; propeller and propfan flutter; failed blade containment analysis; and bird impact analysis. Some of the impact analysis research has been collaborative with Marshall Space Flight Center, which was interested in meteor and space debris impact effects on spacecraft.[867] Glenn has also collaborated extensively with Langley. In 1987, there was a joint Lewis-Langley Workshop on Computational Structural Mechanics (CSM) “to encourage a cooperative Langley-Lewis CSM program in which Lewis concentrates on engine structures applications, Langley concentrates on airframe and space structures applications, and all participants share technology of mutual interest.”[868]

Glenn has been involved in NASTRAN improvements since NASTRAN was introduced in 1970 and hosted the sixth NASTRAN Users’ Colloquium. Many of the projects at Glenn built supplemental capability for NASTRAN to handle the unique problems of propulsion system structural analysis: “The NASA Lewis Research Center has sponsored the development of a number of related analytical/computational capabilities for the finite element analysis program, NASTRAN. This development is based on a unified approach to representing and integrating the structural, aerodynamic, and aeroelastic aspects of the static and dynamic stability and response problems of turbomachines.”[869]

The aircraft and spacecraft engine industries are naturally the primary customers of Glenn technology. However, no attempt is made here to document this technology transfer in detail. Other essays in this volume address advances in propulsion technology and high-temperature materials. Instead, attention is given here to those projects at Glenn that have advanced the general state of the art in computational structures methods and that have found other applications in addition to aerospace propulsion. These include SPAR, NESSUS, SCARE/CARES (and derivatives), ICAN, and MAC.

SPAR was a finite-element structural analysis system developed initially at NASA Lewis in the early 1970s and upgraded extensively through the 1980s. SPAR was less powerful than NASTRAN but relatively interactive and easy to use for tasks involving iterative design and analysis. Chrysler Corporation used SPAR for designing body panels, starting in the 1980s.[870] NASA Langley has made improvements to SPAR and has used it for many projects, including structural optimization, in conjunction with the Ames CONMIN program.[871] SPAR evolved into the EAL program, which was used for the structural portion of structural-optical analyses at Marshall.[872] Dryden Flight Research Center has used SPAR for Space Shuttle reentry thermal modeling.

Numerical Evaluation of Stochastic Structures under Stress (NESSUS) was the product of a Probabilistic Structural Analysis Methods (PSAM) project initiated in 1984 for probabilistic structural analysis of Shuttle and future spacecraft propulsion system components. The prime contractor was Southwest Research Institute (SwRI). NESSUS was designed for solving problems in which the loads, boundary conditions, and/or the material properties involved are best described by statistical distributions of values, rather than by deterministic (known, single) values. PSAM was officially completed in 1995 with the delivery of NESSUS Version 6.2. SwRI was awarded another contract in 2002 for enhancements to NESSUS, leading to the release of Version 8.2 to NASA in December 2004 and commercially in 2005. Los Alamos National Laboratory has used NESSUS for weapon-reliability analysis under its Stockpile Stewardship program. Other applications included automotive collision analysis and prediction of the probability of spinal injuries during aircraft ejections, carrier landings, or emergency water landings. NESSUS is used in teaching and research at the University of Texas at San Antonio.[873] In some applications, NESSUS is coupled with commercially available deterministic codes offering greater structural analysis capability, with NESSUS providing the statistically derived inputs.[874]
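The flavor of such an analysis can be conveyed with a brute-force Monte Carlo sketch. This illustrates the kind of question NESSUS answers, not its actual methods, which rely on much faster probability integration techniques; all distributions and numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
N = 100_000

# Load, cross-sectional area, and material strength described as
# distributions rather than single deterministic values (all hypothetical).
load = rng.normal(50_000.0, 5_000.0, N)      # lbf
area = rng.normal(2.0, 0.05, N)              # in^2
strength = rng.normal(30_000.0, 2_000.0, N)  # psi

stress = load / area
p_fail = np.mean(stress > strength)          # fraction of samples where stress exceeds strength
print(f"Estimated probability of failure: {p_fail:.4f}")
```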

Ceramics Analysis and Reliability Evaluation of Structures (SCARE/CARES) was introduced as SCARE in 1985 and later renamed CARES. This program performed fast-fracture reliability and failure probability analysis of ceramic components. SCARE was built as a postprocessor to MSC/NASTRAN. Using MSC/NASTRAN output of the stress state in a component, SCARE performed the crack growth and structural reliability analysis of the component.[875] Upgrades and a very comprehensive program description and user’s guide were introduced in 1990.[876] In 1993, an extension, CARES/LIFE, was developed to calculate the time dependence of the reliability of a component as it is subjected to testing or use. This was accomplished by including the effects of subcritical crack growth over time.[877] Another 1993 upgrade, CCARES (for CMC CARES), added the capability to analyze components made from ceramic matrix composite (CMC) materials, rather than just macroscopically isotropic materials.[878] CARES/PC, introduced in 1994 and made publicly available through COSMIC, ran on a personal computer but offered a more limited capability (it did not include fast-fracture calculations).[879]

R&D Magazine gave an R&D 100 Award jointly to NASA Lewis and to Philips Display Components for application of CARES/Life to the development of an improved television picture tube in 1995. “CARES/Life has been in high demand world-wide, although present technology transfer efforts are entirely focused on U. S.-based organizations. Success stories can be cited in numerous industrial sectors, including aerospace, automotive, biomedical, electronic, glass, nuclear, and conventional power-generation industries.”[880]

Integrated Composite Analyzer (ICAN) was developed in the early 1980s to perform design and analysis of multilayered fiber composites. ICAN considered hygrothermal (humidity-temperature) conditions as well as mechanical loads and provided results for stresses, stress concentrations, and locations of probable delamination.[881] ICAN was used extensively for design and analysis of composite space antennas and for analysis of engine components. Upgrades were developed, including new capabilities and a version that ran on a PC in the early 1990s.[882] ICAN was adapted (as ICAN/PART) to analyze building materials under a cost-sharing agreement with Master Builders, Inc., in 1995.[883]

Goodyear began working with Glenn in 1995 to apply Glenn’s Micromechanics Analysis Code (MAC) to tire design. The relationship was formed, in part, as a result of Glenn’s involvement with the Great Lakes Industrial Technology Center (GLITeC) and the Consortium for the Design and Analysis of Composite Materials. NASA worked with Goodyear to tailor the code to Goodyear’s needs and provided onsite training. MAC was used to assess the effects of cord spacing, ply and belt configurations, and other tire design parameters. By 2002, Goodyear had several tires in production that had benefited from the MAC design analysis capabilities. Dr. Steven Arnold was the Glenn point of contact in this effort.[884]

TRansfer ANalysis Code to Interface Thermal and Structural (3D TRANCITS, Glenn, 1985)

Transfer of data between different analysis codes has always been one of the challenges of multidisciplinary design, analysis, and optimization. Even if input and output formats can be standardized, different types of analysis often require different types of information or different mesh densities, globally or locally. TRANCITS was developed to translate between heat transfer and structural analysis codes: “TRANCITS has the capability to couple finite difference and finite element heat transfer analysis codes to linear and nonlinear finite element structural analysis codes. TRANCITS currently supports the output of SINDA and MARC heat transfer codes directly. It will also format the thermal data output directly so that it is compatible with the input requirements of the NASTRAN and MARC structural analysis codes. . . . The transfer module can handle different elemental mesh densities for the heat transfer analysis and the structural analysis.”[982] MARC is a commercial, general-purpose, nonlinear finite element code introduced by MARC Analysis and Research Corp. in the late 1970s. Because of its nonlinear analysis capabilities, MARC was used extensively at Glenn for engine component analyses and for other applications, such as the analysis of a space station strongback for launch loads in 1992.[983] Other commercial finite element codes used at Glenn included MSC/NASTRAN, which was used along with NASA’s COSMIC version of NASTRAN.
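The core task TRANCITS performs, carrying nodal temperatures from a heat transfer mesh to a structural mesh with a different density, can be sketched with generic scattered-data interpolation. This is an illustration of the concept only; TRANCITS's actual algorithms and file formats are not reproduced here, and all meshes and fields below are invented.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(seed=2)

# Coarse thermal mesh: node coordinates and a made-up temperature field.
thermal_nodes = rng.random((200, 2)) * 10.0          # (x, y) node positions
thermal_temps = 300.0 + 50.0 * thermal_nodes[:, 0]   # temperature at each node

# Finer structural mesh with a different node layout.
structural_nodes = rng.random((800, 2)) * 10.0

# Interpolate the thermal solution onto the structural nodes; nodes falling
# outside the thermal mesh's convex hull come back NaN and would need
# extrapolation handling in a real transfer module.
structural_temps = griddata(thermal_nodes, thermal_temps,
                            structural_nodes, method="linear")
```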

Turbine Blades

Turbine blades operate at speeds well below hypersonic, but this topic shares the same exotic metals that are used for flight structures at the highest speeds. It is necessary to consider how such blades use coatings to stay cool, which represents another form of cooling, and to consider directionally solidified and single-crystal castings for blades.

The British firm Rolls-Royce has traditionally possessed a strong standing in this field, and The Economist has noted its activity:

The best place to start is the surprisingly small, almost underwhelming, turbine blades that make up the heart of the giant engines slung beneath the wings of the world’s biggest planes. These are not the huge fan blades you see when boarding, but are buried deep in the engines. Each turbine blade can fit in the hand like an oversized steak knife. At first glance it may not seem much more difficult to make. Yet they cost about $10,000 each. Rolls-Royce’s executives like to point out that their big engines, of almost six tonnes, are worth their weight in silver—and that the average car is worth its weight in hamburger.[1084]

Turbine blades are difficult to make because they have to survive high temperatures and huge stresses. The air inside big jet engines reaches about 2,900 °F in places, 750 degrees hotter than the melting point of the metal from which the turbine blades are made. Each blade is grown from a single crystal of alloy for strength and then coated with tough ceramics. A network of tiny air holes then creates a thin blanket of cool air that stops it from melting.

The study of turbine blades brings in the topic of thermal barrier coatings (TBCs). By attaching an adherent layer of a material of low thermal conductivity to the surface of an internally cooled turbine blade, a temperature drop is induced across the thickness of the layer. This results in a drop in the temperature of the metal blade. Using this approach, temperature reductions of up to 340 °F at the metal surface have been estimated for 150-micron-thick yttria-stabilized zirconia coatings. The rest of the temperature decrease is obtained by cooling the blade using air from the compressor that is ducted downstream to the turbine.
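The scale of that temperature drop follows from one-dimensional steady conduction across the coating. Taking a thermal conductivity of roughly 1 W/(m·K) for yttria-stabilized zirconia and a representative heat flux of about 1.25 MW/m² (both assumed, illustrative values):

\[
\Delta T \;=\; \frac{q\,t}{k} \;\approx\; \frac{(1.25\times 10^{6}\ \mathrm{W/m^{2}})\,(150\times 10^{-6}\ \mathrm{m})}{1\ \mathrm{W/(m\cdot K)}} \;\approx\; 190\ \mathrm{K},
\]

or roughly 340 °F, consistent with the estimate cited above.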

The cited temperature reductions reduce the oxidation rate of the bond coat applied to the blades and so delay failure by oxidation. They also retard the onset of thermal fatigue. One should note that such coatings are currently used only to extend the life of components. They are not used to increase the operating temperature of the engine.

Modern TBCs are required not only to limit heat transfer through the coating but also to protect engine components from oxidation and hot corrosion. No single coating composition appears able to satisfy these requirements. As a result, a “coating system” has evolved. Research in the last 20 years has led to a preferred coating system consisting of four separate layers to achieve long-term effectiveness in the high-temperature, oxidative, and corrosive environment in which the blades must function. At the bottom is the substrate, a nickel- or cobalt-based superalloy that is cooled from the inside using compressor air. Overlaying it is the bond coat, an oxidation-resistant layer with thickness of 75-150 microns that is typically of a NiCrAlY or NiCoCrAlY alloy. It essentially dictates the spallation failure of the blade. Though it resists oxidation, it does not avoid it; oxidation of this coating forms a third layer, the thermally grown oxide, with a thickness of 1 to 10 microns. It forms as Al2O3. The topmost layer, the ceramic topcoat, provides thermal insulation. It is typically of yttria-stabilized ZrO2. Its thickness is characteristically about 300 microns when deposited by air plasma spray and 125 microns when deposited by electron beam physical vapor deposition (EB-PVD).[1085]

Yttria-stabilized zirconia has become the preferred TBC layer material for use in jet engines because of its low thermal conductivity and its relatively high thermal expansion coefficient, compared with many other ceramics. This reduces the thermal expansion mismatch with the metals of high thermal expansion coefficient to which it is applied. It also has good erosion resistance, which is important because of the entrainment of high-velocity particles in the engine gases. Robert Miller, a leading specialist, notes that NASA and the NACA, its predecessor, have played a leading role in TBC development since 1942. Flame-sprayed Rokide coatings, which extended the life of the X-15 main engine combustion chamber, represented an early success. Magnesia-stabilized zirconia later found use aboard the SR-71, allowing continuous use of the afterburner and sustained flight above Mach 3. By 1970, plasma-sprayed TBCs were in use in commercial combustors.[1086]

These applications involved components that had no moving parts. For turbines, the mid-1970s brought the first "modern” thermal spray coating. It used yttria as a zirconia stabilizer and a bond coat that contained MCrAlY, and demonstrated that blade TBCs were feasible.

C. W. Goward of Pratt & Whitney (P&W), writing of TBC experience with the firm’s J75 engine, noted: “Although the engine was run at relatively low pressures, the gas turbine engine community was sufficiently impressed to prompt an explosive increase in development funds and programs to attempt to achieve practical utilization of the coatings on turbine airfoils.”[1087]

But tests in 1977 on the more advanced JT9D, also conducted at P&W, brought more mixed results. The early TBC remained intact on lower-temperature regions of the blade but spalled at high temperatures. This meant that further development was required. Stefan Stecura reported an optimum concentration of Y2O3 in ZrO2 of 6-8 percent. This is still the state of the art. H. G. Scott reported that the optimum phase of zirconia was t’-ZrO2. In 1987, Stecura showed that ytterbia-stabilized zirconia on a ytterbium-containing bond coat doubled the blade life and took it from 300 1-hour cycles to 600 cycles. Also at that time, P&W used a zirconia-yttria TBC to address a problem with endurance of vane platforms. A metallic platform, with no thermal barrier, showed burn-through and cracking from thermal-mechanical fatigue after 1,500 test cycles. Use of a TBC extended the service life to 18,000 hours, or 2,778 test cycles, and left platforms that were clean, uncracked, and unburned. P&W shared these results with NASA, which led to the TBC task in the Hot Section Technology (HOST) program. NASA collaborated with P&W and four other firms as it set out to predict TBC lifetimes. A preliminary NASA model showed good agreement between experiment and calculation. P&W identified major degradation modes and gave data that also showed good correlation between measured and modeled lives. Other important contributions came from Garrett Turbine Co. and General Electric. The late 1980s brought physical vapor deposition (PVD) blades that showed failure when they were nearly out of the manufacturer’s box. EB-PVD blades resolved this issue and first entered service in 1989 on South African Airways 747s. They flew from Johannesburg, a high-altitude airport with high mean temperatures, where an airliner needed a heavy fuel load to reach London. EB-PVD TBCs remain the coating of choice for first-row blades, which see the hottest combustion gases. TBC research continues to this day, both at NASA and its contractors. Fundamental studies in aeronautics are important, with emphasis on erosion of turbine components. This work has been oriented toward rotorcraft and has brought the first EB-PVD coating for their blades. There also has been an emphasis on damping of vibration amplitudes. A new effort has dealt with environmental barrier coatings (EBCs), which Miller describes as “ceramic coatings, such as SiC, on top of ceramics.”[1088]

Important collaborations have included work on coatings for diesels, where thick TBCs permit higher operating temperatures that yield increased fuel economy and cleaner exhaust. This work has proceeded with Caterpillar Tractor Co. and the Army Research Laboratory.[1089]

Studies of supersonic engines have involved cooperation with P&W and GE, an industrial interaction that Miller described as “a useful reality check.”[1090] NASA has also pursued the Ultra Efficient Engine Technology program. Miller stated that it has not yet introduced engines for routine service but has led to experimental versions. This work has involved EBCs, as well as a search for low thermal conductivity. The latter can increase engine-operating temperatures and reduce cooling requirements, thereby achieving higher engine efficiency and lower emissions. At NASA Glenn, Miller and Dong-ming Zhu have built a test facility that uses a 3-kilowatt CO2 laser with a wavelength of 10.6 microns. They also have complemented conventional ZrO2-Y2O3 coatings with other rare-earth oxides, including Nd2O3-Yb2O3 and Gd2O3-Yb2O3.[1091]

Can this be reduced further? A promising approach involves development of new deposition techniques that give better control of TBC pore morphology. Air plasma spray deposition creates many intersplat pores between initially molten droplets, in what Miller described as “a messy stack of pancakes.” By contrast, TBC layers produced by EB-PVD have a columnar microstructure with elongated intercolumnar pores that align perpendicular to the plane of the coating. Alternate deposition methods include sputtering, chemical vapor deposition (CVD), and sol-gel approaches. But these approaches involve low deposition rates that are unsuitable for economic production of coated blades. CVD and sol-gel techniques also require the use of dangerous and costly precursor materials. In addition, none of these approaches permits the precise control and manipulation of pore morphology. Thus, improved deposition methods that control this morphology do not now exist.