
Dutch Roll Coupling

Dutch roll coupling is another case of a dynamic loss of control of an airplane because of an unusual combination of lateral-directional static stability characteristics. Dutch roll coupling is a more subtle but nevertheless potentially violent motion, one that (again quoting Day) is a "dynamic lateral-directional stability of the stability axis. This coupling of body axis yaw and roll moments with sideslip can produce lateral-directional instability or PIO."[744] A typical airplane design includes "static directional stability" produced by a vertical fin and a small amount of "dihedral effect" (roll produced by sideslip). Dihedral effect is created by designing the wing with actual dihedral (wingtips higher than the wing root), wing sweep (wingtips aft of the wing root), or some combination of the two. Generally, static directional stability and normal dihedral effect are both stabilizing, and both contribute to a stable Dutch roll mode (first named for the lateral-directional motions of smooth-bottom Dutch coastal craft, which tend to roll and yaw in disturbed seas). When the interactive effects of other surfaces of an airplane are introduced, there can be regions of the flight envelope where these two contributions to Dutch roll stability are not stabilizing (i.e., regions of negative static directional stability or negative dihedral effect). In these regions, if the negative effect is smaller than the positive influence of the other, the airplane will exhibit an oscillatory roll-yaw motion. (If both effects are negative, the airplane will show a static divergence in both the roll and yaw axes.) All aircraft that are statically stable exhibit some amount of Dutch roll motion. Most are well damped, and the Dutch roll only becomes apparent in turbulent conditions.
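The qualitative rule above (both contributions stabilizing gives a stable Dutch roll; one destabilizing contribution that is outweighed gives an oscillatory mode; two destabilizing contributions give a static divergence) can be sketched as a simple classifier. This is an illustrative sketch in the text's own qualitative terms, with positive numbers meaning a stabilizing contribution; it is not a standard flight-mechanics sign convention, and the function name is invented for illustration.

```python
def classify_dutch_roll(directional_stability, dihedral_effect):
    """Classify lateral-directional behavior from the two contributions.

    Both arguments are signed magnitudes in the text's qualitative sense:
    positive = stabilizing contribution, negative = destabilizing.
    """
    if directional_stability > 0 and dihedral_effect > 0:
        return "stable Dutch roll"                   # both contributions stabilizing
    if directional_stability <= 0 and dihedral_effect <= 0:
        return "static divergence in roll and yaw"   # both destabilizing
    # One contribution is destabilizing; whether it is outweighed decides the outcome.
    if directional_stability + dihedral_effect > 0:
        return "oscillatory roll-yaw motion"         # negative effect is outweighed
    return "divergent"                               # negative effect dominates


print(classify_dutch_roll(1.0, 0.5))    # both stabilizing
print(classify_dutch_roll(1.0, -0.3))   # small negative dihedral effect
print(classify_dutch_roll(-0.2, -0.1))  # both negative
```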

The Douglas DC-3 airliner (equivalent to the military C-47 airlifter) had a persistent Dutch roll that could be discerned by passengers watching the wingtips as they described a slow horizontal "figure eight" with respect to the horizon.

Even the dart-like X-15 manifested Dutch roll characteristics. The very large vertical tail configuration of the X-15 was established by the need to control the airplane near engine burnout if the rocket engine was misaligned, a "lesson learned" from tests of earlier rocket-powered aircraft such as the X-1, X-2, and D-558-2. This led to a large symmetrical vertical tail with large rudder surfaces both above and below the airplane centerline. (The rocket engine mechanics and engineers at Edwards later devised a method for accurately aligning the engine, so that the large rudder control surfaces were no longer needed.) The X-15 simulator accurately predicted a strange Dutch roll characteristic in the Mach 3-4 region at angles of attack above 8 degrees with the roll and yaw dampers off. This Dutch roll mode was oscillatory and stable without pilot inputs but would rapidly diverge into an uncontrollable pilot-induced oscillation when pilot control inputs were introduced.

During wind tunnel tests after the airplane was constructed, it was discovered that the lower segment of the vertical tail, which was operating in a high compression flow field at hypersonic speeds, was highly effective at reentry angles of attack. The resulting rolling motions produced by the lower fin and rudder were contributing a large negative dihedral effect. Fortunately, this destabilizing influence was not enough to overpower the high directional stability produced by the very large vertical tail area, so the Dutch roll mode remained oscillatory and stable. The airplane motions associated with this stable oscillation were completely foreign to the test pilots, however. Whereas a normal Dutch roll is described as "like a marble rolling inside a barrel," NASA test pilot Joe Walker described the X-15 Dutch roll as "like a marble rolling on the outside of the barrel" because the phase relationship between rolling and yawing was reversed. Normal pilot aileron inputs to maintain the wings level were out of phase and actually drove the oscillation to larger magnitudes rather quickly. The roll damper, operating at high gain, was fairly effective at damping the oscillation, thus minimizing the pilot's need to actively control the motion when the roll damper was on.[745]

Because the X-15 roll damper was a single string system (fail-safe), a roll damper failure above about 200,000 feet altitude would have caused the entry to be uncontrollable by the pilot. The X-15 envelope expansion to altitudes above 200,000 feet was delayed until this problem could be resolved. The flight control team proposed installing a backup roll damper, while members of the aerodynamic team proposed removing the lower ventral rudder. Removing the rudder was expected to reduce the directional stability but also would cause the dihedral effect to be stable, thus the overall Dutch roll stability would be more like that of a normal airplane. The Air Force-NASA team pursued both options. Installation of the backup roll damper allowed the altitude envelope to be expanded to the design value of 250,000 feet. The removal of the lower rudder, however, solved the PIO problem completely, and all subsequent flights, after the initial ventral-off demonstration flights, were conducted without the lower rudder.[746]

The incident described above was unique to the X-15 configuration, but the analysis and resolution of the problem are instructive in that they offer a prudent caution to designers and engineers to avoid designs that exhibit negative dihedral effect.[747]

Navier-Stokes CFD Solutions

As described earlier in this article, the Navier-Stokes equations are the full equations that govern a viscous flow. Solutions of the Navier-Stokes equations are the ultimate in fluid dynamics. To date, no general analytical solutions of these highly nonlinear equations have been obtained. Yet they are the equations that reflect the real world of fluid dynamics. The only way to obtain useful solutions of the Navier-Stokes equations is by means of CFD. And even here such solutions have been slow in coming. The problem has been the very fine grids that are necessary to resolve certain regions of a viscous flow (boundary layers, shear layers, separated flows, etc.), thus demanding huge numbers of grid points in the flow field. Practical solutions of the Navier-Stokes equations had to wait for supercomputers such as the Cray X-MP and Cyber 205 to come on the scene. NASA became a recognized and emulated leader in CFD solutions of the Navier-Stokes equations, its professionalism evident in its having established the Institute for Computer Applications in Science and Engineering (ICASE) at Langley Research Center, though other Centers as well, particularly Ames, shared this interest in burgeoning CFD.[778] In particular, NASA researcher Robert MacCormack was responsible for the development of a Navier-Stokes CFD code that became, by far, the most popular and most widely used Navier-Stokes CFD algorithm of the last quarter of the 20th century. MacCormack, an applied mathematician at NASA Ames (and now a professor at Stanford), conceived a straightforward algorithm for the solution of the Navier-Stokes equations, identified everywhere simply as "MacCormack's method."

To understand the significance of MacCormack's method, one must understand the concept of numerical accuracy. Whenever the derivatives in a partial differential equation are replaced by algebraic difference quotients, there is always a truncation error that introduces a degree of inaccuracy in the numerical calculations. The simplest finite differences, usually involving only two distinct grid points in their formulation, are identified as "first-order" accurate (the least accurate formulation). The next step up, using a more sophisticated finite difference reaching to three grid points, is identified as second-order accurate. For the numerical solution of most fluid flow problems, first-order accuracy is not sufficient; not only is the accuracy compromised, but such algorithms frequently blow up on the computer. (The author's experience, however, has shown that second-order accuracy is usually sufficient for many types of flows.) On the other hand, some of the early second-order algorithms required a large computational effort to obtain this second-order accuracy, requiring many pages of paper to write the algorithm and many computations to execute the solution. MacCormack developed a predictor-corrector two-step scheme that was second-order accurate but required much less effort to program and many fewer calculations to execute. He introduced this scheme in an imaginative paper on hypervelocity impact cratering published in 1969.[779]
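As an illustration of the predictor-corrector idea, the sketch below applies a MacCormack-style two-step scheme to the one-dimensional linear advection equation u_t + a u_x = 0 on a periodic domain (a case for which the method reduces to a second-order Lax-Wendroff-type update). This is a minimal teaching sketch, not MacCormack's original Navier-Stokes code; the grid size and Courant number are arbitrary choices for the example.

```python
import numpy as np

def maccormack_advection(u, c, steps):
    """Advance u_t + a u_x = 0 with MacCormack's predictor-corrector scheme.

    u     : periodic solution array at time level n
    c     : Courant number a*dt/dx (stability requires |c| <= 1)
    steps : number of time steps to take
    """
    for _ in range(steps):
        # Predictor: forward difference in space.
        u_star = u - c * (np.roll(u, -1) - u)
        # Corrector: backward difference on the predicted values,
        # averaged with the old solution (second-order accurate overall).
        u = 0.5 * (u + u_star - c * (u_star - np.roll(u_star, 1)))
    return u

# Advect a sine wave exactly once around a periodic domain.
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.sin(2 * np.pi * x)
c = 0.5
u = maccormack_advection(u0.copy(), c, steps=int(n / c))  # one full period
print(np.max(np.abs(u - u0)))  # small error for this smooth solution
```

After one full traversal the wave should return to its starting shape; the residual difference is the scheme's dispersion and dissipation error, which shrinks rapidly as the grid is refined because the method is second-order accurate.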

MacCormack's method broke open the field of Navier-Stokes solutions, allowing calculation of myriad viscous flow problems, beginning in the 1970s and continuing to the present time, and it was also (in this author's opinion) the most "graduate-student friendly" CFD scheme in existence. Many graduate students have cut their CFD teeth on this method and have been able to solve many viscous flow problems that otherwise could not have been attempted. Today, MacCormack's method has been supplanted by several very sophisticated modern CFD algorithms, but even so, MacCormack's method goes down in history as one of NASA's finest contributions to the aeronautical sciences.

Goddard Space Flight Center

Goddard Space Flight Center was established in 1959, absorbing the U.S. Navy Vanguard satellite project and, with it, the mission of developing, launching, and tracking unpiloted satellites. Since that time, its roles and responsibilities have expanded to encompass space science, Earth observation from space, and unpiloted satellite systems more broadly.

Structural analysis problems studied at Goddard included definition of operating environments and loads applicable to vehicles, subsystems, and payloads; modeling and analysis of complete launch vehicle/payload systems (generic and for specific planned missions); thermally induced loads and deformation; and problems associated with lightweight, deployable structures such as antennas. Control-structural interactions and multibody dynamics are other related areas of interest.

Goddard's greatest contribution to computer structural analysis was, of course, the NASTRAN program. With public release of NASTRAN, management responsibility shifted to Langley. However, Goddard remained extremely active in the early application of NASTRAN to practical problems, in the evaluation of NASTRAN, and in the ongoing improvement and addition of new capabilities to NASTRAN: thermal analysis (part of a larger Structural-Thermal-Optical [STOP] program, which is discussed below), hydroelastic analysis, automated cyclic symmetry, and substructuring techniques, to name a few.[885]

Structural-Thermal-Optical analysis predicts the impact on the performance of a (typically satellite-based) sensor system due to the deformation of the sensors and their supporting structure(s) under thermal and mechanical loads. After NASTRAN was developed, a major effort began at GSFC to achieve better integration of the thermal and optical analysis components, with NASTRAN as the structural analysis component. The first major product of this effort was the NASTRAN Thermal Analyzer. The program was based on NASTRAN and thereby inherited a great deal of modeling capability and flexibility. But, most importantly, the resulting inputs and outputs were fully compatible with NASTRAN: "Prior to the existence of the NASTRAN Thermal Analyzer, available general purpose thermal analysis computer programs were designed on the basis of the lumped-node thermal balance method. . . . They were not only limited in capacity but seriously handicapped by incompatibilities arising from the model representations [lumped-node versus finite-element]. The intermodel transfer of temperature data was found to necessitate extensive interpolation and extrapolation. This extra work proved not only a tedious and time-consuming process but also resulted in compromised solution accuracy. To minimize such an interface obstacle, the STOP project undertook the development of a general purpose finite-element heat transfer computer program."[886] The capability was developed by the MacNeal-Schwendler Corporation under subcontract from Bell Aerospace. "It must be stressed, however, that a cooperative financial and technical effort between [Goddard and Langley] made possible the emergence of this capability."[887]

Another element of the STOP effort was the computation of "view factors" for radiation between elements: "In an in-house STOP project effort, GSFC has developed an IBM-360 program named 'VIEW' which computes the view factors and the required exchange coefficients between radiating boundary elements."[888] VIEW was based on an earlier view factor program, RAVFAC, but was modified principally for compatibility with NASTRAN and eventual incorporation as a subroutine in NASTRAN.[889] STOP is still an important part of the analysis of many of the satellite packages that Goddard manages, and work continues toward better performance with complex models, multidisciplinary design, and optimization capability, as well as analysis.
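The "view factor" such a program computes expresses the fraction of diffuse radiation leaving one surface element that arrives at another. For two small (effectively differential) patches, the standard relation is F ≈ cos(θ1) cos(θ2) A2 / (π r²). The sketch below evaluates that formula for flat patches given their centers, unit normals, and areas; it is an illustrative calculation of the underlying geometry, not the VIEW or RAVFAC algorithm itself, and the function name is invented for illustration.

```python
import math

def patch_view_factor(p1, n1, p2, n2, area2):
    """Approximate view factor from a small patch at p1 (unit normal n1)
    to a small patch at p2 (unit normal n2, area area2).

    Uses F ~ cos(theta1) * cos(theta2) * area2 / (pi * r**2),
    valid when the patches are small compared with their separation.
    """
    r = [b - a for a, b in zip(p1, p2)]          # vector from patch 1 to patch 2
    dist = math.sqrt(sum(c * c for c in r))
    rhat = [c / dist for c in r]
    cos1 = sum(a * b for a, b in zip(n1, rhat))  # angle at patch 1
    cos2 = -sum(a * b for a, b in zip(n2, rhat)) # angle at patch 2
    if cos1 <= 0 or cos2 <= 0:
        return 0.0  # the patches do not face each other
    return cos1 * cos2 * area2 / (math.pi * dist ** 2)

# Two parallel, facing patches 1 m apart: F = A2 / (pi * d**2).
f = patch_view_factor((0, 0, 0), (0, 0, 1), (0, 0, 1), (0, 0, -1), area2=0.01)
print(f)  # ~ 0.01 / pi
```

A production code like VIEW must additionally integrate this kernel over finite element faces and handle shadowing between elements, which is where most of the real complexity lies.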

COmposite Blade STRuctural ANalyzer (COBSTRAN, Glenn, 1989)

COBSTRAN was a preprocessor for NASTRAN, designed to generate finite element models of composite blades. While developed specifically for advanced turboprop blades under the Advanced Turboprop (ATP) project, it was subsequently applied to compressor blades and turbine blades. It could be used with both COSMIC NASTRAN and MSC/NASTRAN, and was subsequently extended to work as a preprocessor for the MARC nonlinear finite element code.[984]

BLAde SIMulation (BLASIM, 1992)

BLASIM calculates dynamic characteristics of engine blades before and after an ice impact event. BLASIM could accept input geometry in the form of airfoil coordinates or as a NASTRAN-format finite element model. BLASIM could also utilize the ICAN program (discussed separately) to generate ply properties of composite blades.[985] "The ice impacts the leading edge of the blade causing severe local damage. The local structural response of the blade due to the ice impact is predicted via a transient response analysis by modeling only a local patch around the impact region. After ice impact, the global geometry of the blade is updated using deformations of the local patch and a free vibration analysis is performed. The effects of ice impact location, ice size and ice velocity on the blade mode shapes and natural frequencies are investigated."[986]

Blade Fabrication

The fabrication of turbine blades represents a related topic. No blade has indefinite life, for blades are highly stressed and must resist creep while operating under continuous high temperatures. Table 3, taken from the journal Metallurgia, summarizes the stress to cause rupture in both wrought and investment-cast nickel-base superalloys.[1092]

TABLE 3: STRESS TO CAUSE FAILURE OF VARIOUS ALLOYS

Type of alloy              Stress to cause failure after:
                           100 hours at 1400 °F (MPa)    50 hours at 1750 °F (MPa)
Wrought alloys:
  Nimonic 80               340                            48
  Nimonic 105              494                           127
  Nimonic 115              571                           201
Investment-cast alloys:
  IN 100                   648                           278
  B1914                    756                           262
  Mar-M246                 756                           309

An important development involved the introduction of directionally solidified (d.s.) castings. Their advent, into military engines in 1969 and commercial engines in 1974, brought significant increases in allowable metal temperatures and rotor speeds. D.s. blades and vanes were fabricated by pouring molten superalloy into a ceramic mold seated on a water-cooled copper chill plate. Grains nucleate on the chill surface and grow in a columnar manner parallel to a temperature gradient. These columnar grains fill the mold and solidify to form the casting.[1093]

A further development involved single-crystal blades. More was required here than development of a solidification technique; it was necessary to consider the entire superalloy as well. The alloy was to achieve a high melting temperature by containing no grain boundary-strengthening elements such as boron, carbon, hafnium, and zirconium. It would achieve high creep strength with a high gamma-prime temperature. A high temperature for solution heat treatment would also provide improved properties.

The specialized Alloy 454 had the best properties. It showed a complete absence of all grain boundary-strengthening elements and made significant use of tantalum, which suppressed a serious casting defect known as "freckling." Chromium and aluminum were included to protect against oxidation and hot corrosion. It had a composition of 12Ta+4W+10Cr+5Al+1.5Ti+5Co, balance Ni.

Single-crystal blades were fabricated using a variant of the cited d.s. arrangement. Instead of having the ceramic mold rest directly on the chill plate, it was separated from this plate by a helical single-crystal selector. A number of grains nucleated at the bottom of the selector, but most of them had their growth cut off by its walls, and only one grain emerged at the top. This grain was then allowed to grow and fill the entire mold cavity.

[Figure: A hypersonic scramjet configuration developed by Langley experts in the 1970s. The sharply swept double-delta layout set the stage for the National Aero-Space Plane program. NASA.]

Creep-rupture tests showed that Alloy 454 had a temperature advantage of 75 to 125 °F over d.s. MAR-M200 + Hf, the strongest production-blade alloy. A 75 °F improvement in metal temperature capability corresponds to a threefold improvement in life. Single-crystal Alloy 454 thus was chosen as the material for the first-stage turbine blades of the JT9D-7R4 series of engines that were to power the Boeing 767 and the Airbus A310 aircraft, with engine certification and initial production shipments occurring in July 1980.[1094]

DFBW F-8: An Appreciation

The NASA DFBW F-8 had conclusively proven that a highly redundant digital flight control system could be successfully implemented and all aspects of its design validated.[1165] During the course of the program, the DFBW F-8 demonstrated the ability to be upgraded to take advantage of emerging state-of-the-art technologies or to meet evolving operational requirements. It proved that digital fly-by-wire flight control systems could be adapted to the new design and employment concepts that were evolving in both the military and in industry at the time. Perhaps the best testimony to the unique accomplishments of the F-8 DFBW aircraft and its NASA flight-test team is encapsulated in the following observations of former NASA Dryden director Ken Szalai:

DFBW systems are 'old hat' today, but in 1972, only Apollo astronauts had put their life and missions into the hands of software engineers. We considered the F-8 DFBW a very high risk in 1972. That fact was driven home to us in the control room when we asked the EAFB [Edwards Air Force Base] tower to close the airfield, as was preplanned with the USAF, for first flight. It was the first time this 30-year-old FCS [Flight Control System] engineer had heard that particular radio call. . . . The project was both a pioneering effort for the technology and a key enabler for extraordinary leaps in aircraft performance, survivability, and superiority. The basic architecture has been used in numerous production systems, and many of the F-8 fault detection and fault handling/recovery technology elements have become 'standard equipment.' . . . In the total flight program, no software error/fault ever occurred in the operational software, synchronization was never lost in hundreds of millions of sync cycles, it was never required to transfer to the analog FBW backup system, there were zero nuisance channel failures in all the years of flying, and many NASA and visiting guest pilots easily flew the aircraft, including Phil Oestricher before the first YF-16 flight.[1166]

In retrospect, the NASA DFBW F-8C is of exceptional interest in the history of aeronautics. It was the first aircraft to fly with a digital fly-by-wire flight control system, and it was also the first aircraft to fly without any mechanical backup flight controls. Flown by Ed Schneider, the DFBW F-8 made its last flight December 16, 1985, completing 211 flights. The aircraft is now on display at the NASA Dryden Flight Research Center. Its sustained record of success over a 13-year period provided a high degree of confidence in the use of digital computers in fly-by-wire flight control systems. The DFBW F-8C also paved the way for a number of other significant NASA, Air Force, and foreign research programs that would further explore and expand the application of digital computers to modern flight control systems, providing greatly improved aircraft performance and enhanced flight safety.

French Mirage NG

Although not intended purely as a research aircraft, the French Dassault Mirage 3NG (Nouvelle Generation) was a greatly modified Mirage IIIE single-engine jet fighter that was used to demonstrate the improved air combat performance advantages made possible using relaxed static stability and fly-by-wire. One prototype was built by Dassault; modifications included destabilizing canards, extended wing root leading edges, an analog-computer-controlled fly-by-wire flight control system (based on that used in the production Mirage 2000 fighter), and the improved Atar 9K-50 engine. The Mirage 3NG first flew in December 1982, demonstrating significant performance improvements over the standard operational Mirage IIIE. These were claimed to include a 20-25-percent reduction in takeoff distance, a 40-percent improvement in time to reach combat altitude, a nearly 10,000-foot increase in supersonic ceiling, and similarly impressive gains in acceleration, instantaneous turn rate, and combat air patrol time.

Noise Pollution Forces Engine Improvements

Fast-forward a few years, to a time when Americans embraced the promise that technology would solve the world's problems, raced the Soviet Union to the Moon, and looked forward to owning personal family hovercraft, just like they saw on the TV show The Jetsons. And during that same decade of the 1960s, the American public became more and more comfortable flying aboard commercial airliners equipped with the modern marvel of turbojet engines. Boeing 707s and McDonnell-Douglas DC-8s, each with four engines bolted to their wings, were not only a common sight in the skies over major cities, but their presence could also easily be heard by anyone living next to or near where the planes took off and landed. Boeing 727s and 737s soon followed. At the same time that commercial aviation exploded, people moved away from the metropolis to embrace the suburban lifestyle. Neighborhoods began to spring up immediately adjacent to airports that originally were built far from the city, and the new neighbors didn't like the sound of what they were hearing.[1295]

By 1966, the problem of aircraft noise pollution had grown to the point of attracting the attention of President Lyndon Johnson, who then directed the U.S. Office of Science and Technology to set a new national policy that said:

The FAA and/or NASA, using qualified contractors as necessary, (should) establish and fund. . . an urgent program for conducting the physical, psycho-acoustical, sociological, and other research results needed to provide the basis for quantitative noise evaluation techniques which can be used. . . for hardware and operational specifications.[1296]

As a result, NASA began dedicating resources to aggressively address aircraft noise and sought to contract much of the work to industry, with the goals of advancing technology and conducting research to provide lawmakers with the information they needed to make informed regulatory decisions.[1297]

During 1968, the Federal Aviation Administration (FAA) was given authority to implement aircraft noise standards for the airline industry. Within a year, the new standards were adopted and called for all new designs of subsonic jet aircraft to meet certain criteria. Aircraft that met these standards were called Stage 2 aircraft, while the older planes that did not meet the standards were called Stage 1 aircraft. Stage 1 aircraft over 75,000 pounds were banned from flying to or from U.S. airports as of January 1, 1985. The cycle repeated itself with the establishment of Stage 3 aircraft in 1977, with Stage 2 aircraft needing to be phased out by the end of 1999. (Some of the Stage 2 aircraft engines were modified to meet Stage 3 aircraft standards.) In 2005, the FAA adopted an even stricter noise standard, which is Stage 4. All new aircraft designs submitted to the FAA on or after July 5, 2005, must meet Stage 4 requirements. As of this writing, there is no timetable for the mandatory phaseout of Stage 3 aircraft.[1298]

With every new set of regulations, the airline industry required upgrades to its jet engines, if not wholesale new designs. So having already helped establish reliable working versions of each of the major types of jet engines—i.e., turboprop, turbojet, and turbofan—NASA and its industry partners began what has turned out to be a continuing 50-year-long challenge to constantly improve the design of jet engines to prolong their life, make them more fuel efficient, and reduce their environmental impact in terms of air and noise pollution. With this new direction, NASA set in motion three initial programs.

NASA's first major new program was the Acoustically Treated Nacelle program, managed by the Langley Research Center. Engines flying on Douglas DC-8 and Boeing 707 aircraft were outfitted with experimental mufflers, which reduced noise during approach and landing but had negligible effect on noise pollution during takeoff, according to program results reported during a 1969 conference at Langley.

The second was the Quiet Engine program, which was managed by the Lewis Research Center in Cleveland (Lewis became the Glenn Research Center on March 1, 1999). Attention here focused on the interior design of turbojet and turbofan engines to make them quieter by as much as 20 decibels. General Electric (GE) was the key industry partner in this program, which showed that noise reduction was possible by several methods, including changing the rotational speed of the fan, increasing the fan bypass ratio, and adjusting the spacing of rotating and stationary parts.
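A 20-decibel reduction is a larger change than it may sound: decibels are logarithmic, so 20 dB corresponds to a hundredfold reduction in acoustic power (and, by a common rule of thumb, very roughly a factor-of-four drop in perceived loudness). The quick conversion, as a sketch:

```python
def db_to_power_ratio(db):
    """Convert a decibel difference to the corresponding ratio of acoustic powers."""
    return 10 ** (db / 10)

print(db_to_power_ratio(20))  # 100.0 : a 20 dB quieter engine emits 1/100 the acoustic power
print(db_to_power_ratio(3))   # roughly 2 : 3 dB is approximately a doubling or halving
```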

The third was the Steep Approach program, which was jointly managed by Langley and by the Ames Research Center and its Dryden Flight Research Facility in California. This program did not result in new engine technology but instead focused on minimizing noise on the ground by developing techniques for pilots to use in flying steeper and faster approaches to airports.[1299] [1300]

Advanced Turboprop Project

Another significant program to emerge from NASA’s ACEE program was the Advanced Turboprop project, which lasted from 1976 to 1987.

Like E Cubed, the ATP was largely focused on improving fuel efficiency. The project sought to move away from the turbofan and improve on the open-rotor (propeller) technology of the 1950s. Open rotors have high bypass ratios and therefore hold great potential to dramatically increase fuel efficiency. NASA believed an advanced turboprop could lead to a reduction in fuel consumption of 20 to 30 percent over existing turbofan engines with comparable performance and cabin comfort (acceptable noise and vibration) at Mach 0.8 and an altitude of 30,000 feet.[1432]

There were two major obstacles to returning to an open-rotor system, however. The most fundamental problem was that propellers typically lose efficiency as they turn more quickly at higher flight speeds. The challenge of the ATP was to find a way to ensure that propellers could operate efficiently at the same flight speeds as turbojet engines. This would require a design that allowed the fan to operate at slow speeds to maximize efficiency while the turbine operates fast to achieve adequate thrust. Another major obstacle facing NASA's ATP was the fact that turboprop engines tend to be very noisy, making them less than ideal for commercial airline use. NASA's ATP sought to overcome the noise problem and increase fuel efficiency by adopting the concept of swept propeller blades.

The ATP generated considerable interest from the aeronautics research community, growing from a NASA contract with the Nation’s last major propeller manufacturer, Hamilton Standard, to a project that involved 40 industrial contracts, 15 university grants, and work at 4 NASA research Centers—Lewis, Langley, Dryden, and Ames. NASA engineers, along with a large industry team, won the Collier Trophy for developing a new fuel-efficient turboprop in 1987.[1433]

NASA initially contracted with Allison, P&W, and Hamilton Standard to develop a propeller for the ATP that rotated in one direction. This was called a "single rotation tractor system" and included a gearbox, which enabled the propeller and turbines to operate at different speeds. The NASA/industry team first conducted preliminary ground-testing. It combined the Hamilton Standard SR-7A propfan with the Allison turboshaft engine and a gearbox and performed 50 hours of successful stationary tests in May and June 1986.[1434] Next, the engine parts were shipped to Savannah, GA, and reassembled on a modified Gulfstream II with a single-rotation turboprop on its left wing. Flight-testing took place in 1987, validating NASA's predictions of a 20 to 30 percent fuel savings.[1435]

[Figure: Schematic drawing of the NASA propfan testbed, showing modifications and features proposed for the basic Grumman Gulfstream airframe. NASA.]

Meanwhile, P&W's main rival, GE, was quietly developing its own approach to the ATP known as the "unducted fan." GE released the design to NASA in 1983, and NASA Headquarters instructed NASA Lewis to cooperate with GE on development and testing.[1436] Citing concerns about weight and durability, GE decided not to use a gearbox to allow the propellers and the turbines to turn at different speeds.[1437] Instead, the company developed a counter-rotating pusher system. It mounted two counter-rotating propellers on the rear of the plane, which pushed it into flight. It also put counter-rotating blade rows in the turbine. The counter-rotating turbine blades were turning relatively slowly to accommodate the fan, but because they were turning in opposite directions, their relative speed was high and therefore highly efficient.[1438]

GE performed ground tests of the unducted fan in 1985 that showed a 20 percent fuel-conservation rate.[1439] Then, in 1986, a year before the NASA/industry team flight test, GE mounted the unducted fan—the propellers and the fan mounted behind an F404 engine—on a Boeing 727 airplane and conducted a successful flight test.[1440]

Mark Bowles and Virginia Dawson have noted in their analysis of the ATP that the competition between the two ATP concepts and industry's willingness to invest in the open-rotor technology fostered public acceptance of the turboprop concept.[1441] But despite the growing momentum and the technical success of the ATP project, the open rotor was never adopted for widespread use on commercial aircraft. P&W's Crow said that the main reason was that it was just too noisy.[1442] "This was clearly more fuel-efficient technology, but it was not customer friendly at all," said Crow. Another problem was that the rising fuel prices that had spurred NASA to work on energy-efficient technology were now going back down. There was no longer a favorable ratio of cost to develop turboprop technology versus savings in fuel burn.[1443] "In one sense of the word it was a failure," said Crow. "Neither GE nor Pratt nor Boeing nor anyone else wanted us to commercialize those things."

Nevertheless, the ATP yielded important technological breakthroughs that fed into later engine technology developments at both GE and P&W. Crow said the ATP set the stage for the development of P&W’s latest engine, the geared turbofan.[1444] That engine is not an open-rotor system, but it does use a gearbox to allow the fan to turn more slowly than the turbines. The fan moves a large amount of air past the engine core without changing the velocity of the air very much. This enables a high bypass ratio, thereby increasing fuel efficiency; the bypass ratio is 8 to 1 in the 14,000- to 17,000-pound thrust class and 12 to 1 in the 17,000- to 23,000-pound thrust class.[1445]
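The gearbox and bypass-ratio relationships described above reduce to simple arithmetic. The sketch below illustrates them; the mass flows, spool speed, and gear ratio are hypothetical figures chosen only for illustration, not P&W data.

```python
# Illustrative sketch (hypothetical numbers, not manufacturer data).
# Bypass ratio: mass of air ducted around the engine core divided by
# the mass of air passing through the core.
def bypass_ratio(bypass_air_kg_s: float, core_air_kg_s: float) -> float:
    return bypass_air_kg_s / core_air_kg_s

# A reduction gearbox lets the fan turn more slowly than the turbine spool.
def fan_rpm(turbine_rpm: float, gear_reduction: float) -> float:
    return turbine_rpm / gear_reduction

# A hypothetical 8-to-1 engine: 240 kg/s around the core, 30 kg/s through it.
print(bypass_ratio(240.0, 30.0))   # 8.0
# A hypothetical 3:1 gearbox slows a 9,000 rpm spool to a 3,000 rpm fan.
print(fan_rpm(9000.0, 3.0))        # 3000.0
```

Because the fan accelerates a large mass of air by a small amount, a higher bypass ratio extracts the same thrust with less fuel, which is the efficiency gain the paragraph describes.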

GE renewed its ATP research to compete with P&W’s geared turbofan, announcing in 2008 that it would consider both open-rotor and encased engine concepts for its new engine core development program, known as E Core. The company announced an agreement with NASA in the fall of 2008 to conduct a joint study on the feasibility of an open-rotor engine design. In 2009, GE plans to revisit its original open-rotor fan designs to serve as a baseline. GE and NASA will then conduct wind tunnel tests using the same rig that was used for the ATP.[1446] Snecma, GE’s 50/50 partner in CFM International—an engine manufacturing partnership—will participate in fan blade design testing. GE says the new E Core design—whether it adopts an open rotor or not—aims to increase fuel efficiency 16 percent above the baseline (a conventional turbofan configuration) in narrow-body and regional aircraft.[1447]

Another major breakthrough resulting from the ATP was the development of computational fluid dynamics (CFD) tools, which allowed engineers to predict the efficiency of new propulsion systems more accurately. "What computational fluid dynamics allowed us to do was to design a new airfoil based on what the flow field needed rather than prescribing a fixed airfoil before you even get started with a design process,” said Dennis Huff, NASA’s Deputy Chief of the Aeropropulsion Division. "It was the difference between two- and three-dimensional analysis; you could take into account how the fan interacted with the nacelle and certain aerodynamic losses that would occur. You could model numerically, whereas the correlations before were more empirically based.”[1448] Initially, companies were reluctant to embrace NASA’s new approach because they distrusted computational codes and wanted to rely on existing design methods, according to Huff. However, NASA continued to verify and validate the design methods until the companies began to accept them as standard practice. "I would say by the time we came out of the Advanced Turboprop project, we had a lot of these aerodynamic CFD tools in place that were proven on the turboprop, and we saw the companies developing codes for the turbo engine,” Huff said.[1449]

The Truckee Workshop and Conference Report

In July 1989, NASA Ames sponsored a workshop on requirements for the development and use of very high-altitude aircraft for atmospheric research. The primary objectives of the workshop were to assess the scientific justification for development of new aircraft that would support stratospheric research beyond the altitudes attainable by NASA’s ER-2 aircraft and to determine the aircraft characteristics (ceiling, altitude, payload capabilities, range, flight duration, and operational capabilities) required to perform the stratospheric research missions. Approximately 35 stratospheric scientists and aircraft design and operations experts attended the conference, either as participants or as observers. Nineteen of these attendees were from NASA (1 from NASA Langley, 16 from NASA Ames, and 2 representing both NASA Dryden and Ames); 4 were from universities and institutes, including Harvard University and Pennsylvania State University; and 6 represented aviation companies, including Boeing Aerospace, Aurora Flight Sciences, and Lockheed. Crofton Farmer, representing the Jet Propulsion Laboratory, served as workshop chair, and Philip Russell, from NASA Ames, was the workshop organizer and report editor. The attendees represented a broad range of expertise, including 9 aircraft design and development experts, 3 aircraft operations representatives, 2 aeronautical science experts, 2 Earth science specialists, 1 instrument management expert (Steven Wegener from NASA Ames, who later directed the science and payload projects for the solar UAV program), 1 general management observer, and 17 stratospheric scientists.[1522]

The workshop considered pressing scientific questions that required advanced aircraft capabilities to accomplish a number of proposed science-related missions, including: (1) answering important polar vortex questions, including determining what causes ozone loss above the dehydration region in Antarctica and to what extent the losses are transmitted to the middle latitudes; (2) determining high-altitude photochemistry in tropical and middle latitudes; (3) determining the impact and degree of airborne transport of certain chemicals; and (4) studying volcanic, stratospheric cloud/aerosol, greenhouse, and radiation-balance phenomena. The workshop concluded that carrying out the above missions would require flights at a cruise altitude of 100,000 feet, the ability to make a round trip of between 5,000 and 6,000 nautical miles, the capability to fly into the polar night and over water more than 200 nautical miles from land, and the ability to carry a payload equal to or greater than that of the ER-2. The workshop report noted that experience with satellites pointed out the need for increased emphasis on correlative measurements for current and future remote sensing systems. Previously, balloons had provided most of this information, but balloons presented a number of problems, including a low frequency of successful launches, the small number of available launch sites worldwide, the inability to follow selected paths, and the difficulty in recovering payloads. The workshop concluded with the following finding:

We recommend development of an aircraft with the capacity to carry integrated payloads similar to the ER-2 to significantly higher altitude, preferably with greater range. It is important that the aircraft be able to operate over the ocean and in the polar night. This may dictate development of an autonomous or remotely piloted plane. There is a complementary need to explore strategies that would allow payloads of reduced weight to reach even higher altitude, enhancing the current capability of balloons.[1523]

High-altitude, long-duration vehicle development and the development of reduced-weight instrumentation both became goals of the ERAST program.