
Noise Pollution Forces Engine Improvements

Fast-forward a few years, to a time when Americans embraced the promise that technology would solve the world's problems, raced the Soviet Union to the Moon, and looked forward to owning personal family hovercraft, just like they saw on the TV show The Jetsons. And during that same decade of the 1960s, the American public became more and more comfortable flying aboard commercial airliners equipped with the modern marvel of turbojet engines. Boeing 707s and McDonnell Douglas DC-8s, each with four engines bolted to their wings, were not only a common sight in the skies over major cities, but their presence could also easily be heard by anyone living next to or near where the planes took off and landed. Boeing 727s and 737s soon followed. At the same time that commercial aviation exploded, people moved away from the metropolis to embrace the suburban lifestyle. Neighborhoods began to spring up immediately adjacent to airports that originally were built far from the city, and the new neighbors didn't like the sound of what they were hearing.[1295]

By 1966, the problem of aircraft noise pollution had grown to the point of attracting the attention of President Lyndon Johnson, who then directed the U. S. Office of Science and Technology to set a new national policy that said:

The FAA and/or NASA, using qualified contractors as necessary, (should) establish and fund. . . an urgent program for conducting the physical, psycho-acoustical, sociological, and other research results needed to provide the basis for quantitative noise evaluation techniques which can be used. . . for hardware and operational specifications.[1296]

As a result, NASA began dedicating resources to aggressively address aircraft noise and sought to contract much of the work to industry, with the goals of advancing technology and conducting research to provide lawmakers with the information they needed to make informed regulatory decisions.[1297]

During 1968, the Federal Aviation Administration (FAA) was given authority to implement aircraft noise standards for the airline industry. Within a year, the new standards were adopted and called for all new designs of subsonic jet aircraft to meet certain criteria. Aircraft that met these standards were called Stage 2 aircraft, while the older planes that did not meet the standards were called Stage 1 aircraft. Stage 1 aircraft over 75,000 pounds were banned from flying to or from U. S. airports as of January 1, 1985. The cycle repeated itself with the establishment of Stage 3 aircraft in 1977, with Stage 2 aircraft needing to be phased out by the end of 1999. (Some of the Stage 2 aircraft engines were modified to meet Stage 3 aircraft standards.) In 2005, the FAA adopted an even stricter noise standard, which is Stage 4. All new aircraft designs submitted to the FAA on or after July 5, 2005, must meet Stage 4 requirements. As of this writing, there is no timetable for the mandatory phaseout of Stage 3 aircraft.[1298]

With every new set of regulations, the airline industry required upgrades to its jet engines, if not wholesale new designs. So having already helped establish reliable working versions of each of the major types of jet engines—i.e., turboprop, turbojet, and turbofan—NASA and its industry partners began what has turned out to be a continuing 50-year-long challenge to constantly improve the design of jet engines to prolong their life, make them more fuel efficient, and reduce their environmental impact in terms of air and noise pollution. With this new direction, NASA set in motion three initial programs.9

NASA's first major new program was the Acoustically Treated Nacelle program, managed by the Langley Research Center. Engines flying on Douglas DC-8 and Boeing 707 aircraft were outfitted with experimental mufflers, which reduced noise during approach and landing but had negligible effect on noise pollution during takeoff, according to program results reported during a 1969 conference at Langley.10

The second was the Quiet Engine program, which was managed by the Lewis Research Center in Cleveland (Lewis became the Glenn Research Center on March 1, 1999). Attention here focused on the interior design of turbojet and turbofan engines to make them quieter by as much as 20 decibels. General Electric (GE) was the key industry partner in this program, which showed that noise reduction was possible by several methods, including changing the rotational speed of the fan, increasing the fan bypass ratio, and adjusting the spacing of rotating and stationary parts.11

The third was the Steep Approach program, which was jointly managed by Langley and the Ames Research Center/Dryden Flight Research Facility, both in California. This program did not result in new engine technology but instead focused on minimizing noise on the ground by developing techniques for pilots to use in flying steeper and faster approaches to airports.12 [1299] [1300]

Advanced Turboprop Project

Another significant program to emerge from NASA’s ACEE program was the Advanced Turboprop project, which lasted from 1976 to 1987.

Like E Cubed, the ATP was largely focused on improving fuel efficiency. The project sought to move away from the turbofan and improve on the open-rotor (propeller) technology of the 1950s. Open rotors have high bypass ratios and therefore hold great potential to dramatically increase fuel efficiency. NASA believed an advanced turboprop could lead to a reduction in fuel consumption of 20 to 30 percent over existing turbofan engines with comparable performance and cabin comfort (acceptable noise and vibration) at Mach 0.8 and an altitude of 30,000 feet.[1432]

There were two major obstacles to returning to an open-rotor system, however. The most fundamental problem was that propellers typically lose efficiency as they turn more quickly at higher flight speeds. The challenge of the ATP was to find a way to ensure that propellers could operate efficiently at the same flight speeds as turbojet engines. This would require a design that allowed the fan to operate at slow speeds to maximize efficiency while the turbine operated at high speed to achieve adequate thrust. Another major obstacle facing NASA's ATP was the fact that turboprop engines tend to be very noisy, making them less than ideal for commercial airline use. NASA's ATP sought to overcome the noise problem and increase fuel efficiency by adopting the concept of swept propeller blades.

The ATP generated considerable interest from the aeronautics research community, growing from a NASA contract with the Nation’s last major propeller manufacturer, Hamilton Standard, to a project that involved 40 industrial contracts, 15 university grants, and work at 4 NASA research Centers—Lewis, Langley, Dryden, and Ames. NASA engineers, along with a large industry team, won the Collier Trophy for developing a new fuel-efficient turboprop in 1987.[1433]

NASA initially contracted with Allison, P&W, and Hamilton Standard to develop a propeller for the ATP that rotated in one direction. This was called a "single rotation tractor system" and included a gearbox, which enabled the propeller and turbines to operate at different speeds. The NASA/industry team first conducted preliminary ground-testing. It combined the Hamilton Standard SR-7A propfan with the Allison turboshaft engine and a gearbox and performed 50 hours of successful stationary tests in May and June 1986.[1434] Next, the engine parts were shipped to Savannah, GA, and reassembled on a modified Gulfstream II with a single-blade turboprop on its left wing. Flight-testing took place in 1987, validating NASA's predictions of a 20 to 30 percent fuel savings.[1435]

Schematic drawing of the NASA propfan testbed, showing modifications and features proposed for the basic Grumman Gulfstream airframe. NASA.

Meanwhile, P&W's main rival, GE, was quietly developing its own approach to the ATP known as the "unducted fan." GE released the design to NASA in 1983, and NASA Headquarters instructed NASA Lewis to cooperate with GE on development and testing.[1436] Citing concerns about weight and durability, GE decided not to use a gearbox to allow the propellers and the turbines to turn at different speeds.[1437] Instead, the company developed a counter-rotating pusher system. They mounted two counter-rotating propellers on the rear of the plane, which pushed it into flight. They also put counter-rotating blade rows in the turbine. The counter-rotating turbine blades were turning relatively
slowly to accommodate the fan, but because they were turning in opposite directions, their relative speed was high, making the arrangement highly efficient.[1438]

GE performed ground tests of the unducted fan in 1985 that showed a 20 percent fuel-conservation rate.[1439] Then, in 1986, a year before the NASA/industry team flight test, GE mounted the unducted fan—the propellers and the fan mounted behind an F404 engine—on a Boeing 727 airplane and conducted a successful flight test.[1440]

Mark Bowles and Virginia Dawson have noted in their analysis of the ATP that the competition between the two ATP concepts and industry's willingness to invest in the open-rotor technology fostered public acceptance of the turboprop concept.[1441] But despite the growing momentum and the technical success of the ATP project, the open rotor was never adopted for widespread use on commercial aircraft. P&W's Crow said that the main reason was that it was just too noisy.[1442] "This was clearly more fuel-efficient technology, but it was not customer friendly at all," said Crow. Another problem was that the rising fuel prices that had spurred NASA to work on energy-efficient technology were now going back down. There was no longer a favorable ratio of cost to develop turboprop technology versus savings in fuel burn.[1443] "In one sense of the word it was a failure," said Crow. "Neither GE nor Pratt nor Boeing nor anyone else wanted us to commercialize those things."

Nevertheless, the ATP yielded important technological breakthroughs that fed into later engine technology developments at both GE and P&W. Crow said the ATP set the stage for the development of P&W's latest engine, the geared turbofan.[1444] That engine is not an open-rotor system, but it does use a gearbox to allow the fan to turn more slowly than the turbines. The fan moves a large amount of air past the engine core without changing the velocity of the air very much. This enables a high bypass ratio, thereby increasing fuel efficiency; the bypass ratio is 8 to 1 in the 14,000–17,000-pound thrust class and 12 to 1 in the 17,000–23,000-pound thrust class.[1445]
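The arithmetic behind that claim can be sketched briefly. The Python fragment below is purely illustrative (the mass flows, velocities, and the textbook Froude efficiency relation are not P&W or NASA data): it computes a bypass ratio and shows why a fan that moves more air more slowly, as a geared fan can, wastes less energy in its exhaust.

```python
# Illustrative only: textbook relations, not P&W or NASA engine data.

def bypass_ratio(bypass_mass_flow, core_mass_flow):
    """Bypass ratio = air ducted around the core / air passing through it."""
    return bypass_mass_flow / core_mass_flow

def propulsive_efficiency(v_jet, v_flight):
    """Froude propulsive efficiency for a single exhaust stream:
    eta = 2 / (1 + V_jet / V_flight)."""
    return 2.0 / (1.0 + v_jet / v_flight)

if __name__ == "__main__":
    v_flight = 240.0  # m/s, roughly Mach 0.8 cruise at altitude (illustrative)

    # A low-bypass engine must throw its exhaust out fast...
    print("low-bypass efficiency :", round(propulsive_efficiency(600.0, v_flight), 2))
    # ...while a geared, high-bypass fan moves far more air, far more slowly.
    print("high-bypass efficiency:", round(propulsive_efficiency(320.0, v_flight), 2))

    # 480 kg/s around the core and 40 kg/s through it gives a 12-to-1 bypass ratio.
    print("bypass ratio:", bypass_ratio(480.0, 40.0))
```

Run as written, the high-bypass case shows a markedly higher propulsive efficiency at the same flight speed, which is the basic payoff of the gearbox-plus-large-fan arrangement.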

GE renewed its ATP research to compete with P&W's geared turbofan, announcing in 2008 that it would consider both open rotor and encased engine concepts for its new engine core development program, known as E Core. The company announced an agreement with NASA in the fall of 2008 to conduct a joint study on the feasibility of an open-rotor engine design. In 2009, GE plans to revisit its original open-rotor fan designs to serve as a baseline. GE and NASA will then conduct wind tunnel tests using the same rig that was used for the ATP.[1446] Snecma, GE's 50/50 partner in CFM International—an engine manufacturing partnership—will participate in fan blade design testing. GE says the new E Core design—whether it adopts an open rotor or not—aims to increase fuel efficiency 16 percent above the baseline (a conventional turbofan configuration) in narrow-body and regional aircraft.[1447]

Another major breakthrough resulting from the ATP was the development of computational fluid dynamics (CFD), which allowed engineers to predict the efficiency of new propulsion systems more accurately. "What computational fluid dynamics allowed us to do was to design a new air foil based on what the flow field needed rather than proscribing a fixed air foil before you even get started with a design process," said Dennis Huff, NASA's Deputy Chief of the Aeropropulsion Division. "It was the difference between two- and three-dimensional analysis; you could take into account how the fan interacted with nacelle and certain aerodynamic losses that would occur. You could model numerically, whereas the correlations before were more empirically based."[1448] Initially, companies were reluctant to embrace NASA's new approach because they distrusted computational codes and wanted to rely on existing design methods, according to Huff. However, NASA continued to verify and validate the design methods until the companies began to accept them as standard practice. "I would say by the time we came out of the Advanced Turboprop project, we had a lot of these aerodynamic CFD tools in place that were proven on the turboprop, and we saw the companies developing codes for the turbo engine," Huff said.[1449]

The Truckee Workshop and Conference Report

In July 1989, NASA Ames sponsored a workshop on requirements for the development and use of very high-altitude aircraft for atmospheric research. The primary objectives of the workshop were to assess the scientific justification for development of new aircraft that would support stratospheric research beyond the altitudes attainable by NASA's ER-2 aircraft and to determine the aircraft characteristics (ceiling, altitude, payload capabilities, range, flight duration, and operational capabilities) required to perform the stratospheric research missions. Approximately 35 stratospheric scientists and aircraft design and operations experts attended the conference, either as participants or as observers. Nineteen of these attendees were from NASA (1 from NASA Langley, 16 from NASA Ames, and 2 representing both NASA Dryden and Ames); 4 were from universities and institutes, including Harvard University and Pennsylvania State University; and 6 represented aviation companies, including Boeing Aerospace, Aurora Flight Sciences, and Lockheed. Crofton Farmer, representing the Jet Propulsion Laboratory, served as workshop chair, and Philip Russell, from NASA Ames, was the workshop organizer and report editor. The attendees represented a broad range of expertise, including 9 aircraft design and development experts, 3 aircraft operations representatives, 2 aeronautical science experts, 2 Earth science specialists, 1 instrument management expert (Steven Wegener from NASA Ames, who later directed the science and payload projects for the solar UAV program), 1 general management observer, and 17 stratospheric scientists.[1522]

The workshop considered pressing scientific questions that required advanced aircraft capabilities in order to accomplish a number of proposed science-related missions, including: (1) answering important polar vortex questions, including determining what causes ozone loss above the dehydration region in Antarctica and to what extent the losses are transmitted to the middle latitudes; (2) determining high-altitude photochemistry in tropical and middle latitudes; (3) determining the impact and degree of airborne transport of certain chemicals; and (4) studying volcanic, stratospheric cloud/aerosol, greenhouse, and radiation balance. The workshop concluded that carrying out the above missions would require flights at a cruise altitude of 100,000 feet, the ability to make a round trip of between 5,000 and 6,000 nautical miles, the capability to fly into the polar night and over water more than 200 nautical miles from land, and the ability to carry a payload equal to or greater than that of the ER-2. The workshop report noted that experience with satellites pointed out the need for increased emphasis on correlative measurements for current and future remote sensing systems. Previously, balloons had provided most of this information, but balloons presented a number of problems, including a low frequency of successful launches, the small number of available launch sites worldwide, the inability to follow selected paths, and the difficulty in recovering payloads. The workshop concluded with the following finding:

We recommend development of an aircraft with the capacity to carry integrated payloads similar to the ER-2 to significantly higher altitude preferably with greater range. It is important
that the aircraft be able to operate over the ocean and in the polar night. This may dictate development of an autonomous or remotely piloted plane. There is a complementary need to explore strategies that would allow payloads of reduced weight to reach even higher altitude, enhancing the current capability of balloons.[1523]

High-altitude, long-duration vehicle development and the development of reduced-weight instrumentation both became goals of the ERAST program.

The Research Culture

As part of the broad scope of aeronautics research, the rotary wing efforts spanned the full range of research activity, including theoretical study, wind tunnel testing, ground-based simulation, and flight test. NACA rotary wing research began in the early 1920s with exploratory wind tunnel tests of simple rotor models as the precursor to the basic research undertaken in the 1930s. The Langley Memorial Aeronautical Laboratory, established at Hampton, VA, in 1917, purchased a Pitcairn PCA-2 autogiro in 1931 for research use.[269] The National Advisory Committee for Aeronautics had been formed in 1915 to "supervise and direct the scientific study of the problems of flight, with a view to their practical solution." Rotary wing research at Langley proceeded under the direction of the Committee, with annual inspection meetings by the full Committee to review aeronautical research progress. In the early 1940s, the Ames Aeronautical Laboratory, now known as the Ames Research Center, opened for research at Moffett Field in Sunnyvale, CA. Soon after, the Aircraft Engine Research Laboratory, known for many years as the Lewis Research Center and now known as the Glenn Research Center, opened in Cleveland, OH. Each NACA Center had unique facilities that accommodated rotary wing research needs. Langley Research Center played a major role in NACA-NASA rotary wing research until 1976, when Ames Research Center was assigned the lead role.

The rotary wing research is carried out by a staff of research engineers, scientists, technical support specialists, senior management, and administrative personnel. The rotary wing research staff draws on the expertise of the technical discipline organizations in areas such as aerodynamics, structures and materials, propulsion, dynamics, acoustics, and human factors. Key support functions include such activities as test apparatus design and fabrication, instrumentation research and development (R&D), and research computation support. The constant instrumentation challenge is to adapt the latest technology available to acquiring reliable research data. Over the years, the related challenge for computation tasks has been to perform data reduction and analysis for the increasing sophistication and scope of theoretical investigations and test projects. In the NACA environment, the word "computers" actually referred to a large cadre of female mathematicians. They managed the test measurement recordings, extracted the raw data, analyzed the data using desktop electromechanical calculators, and hand-plotted the results. The NASA era transformed this work from a tedious enterprise into managing the application of the ever-increasing power of modern electronic data recording and computing systems.

The dissemination of the rotary wing research results, which form the basis of NACA-NASA contributions over the years, takes a number of forms. The effectiveness of the contributions depends on making the research results and staff expertise readily available to the Nation's Government and industry users. The primary method has traditionally been the formal publication of technical reports, studies, and compilations that are available for exploitation and use by practitioners. Another method that fosters immediate dialogue with research peers and potential users is the presentation of technical papers at conferences and technical meetings. These papers are published in the conference proceedings and are frequently selected for broader publication as papers or journal articles by technical societies such as the Society of Automotive Engineers (SAE)-Aerospace and the American Institute of Aeronautics and Astronautics (AIAA). Since 1945, NACA-NASA rotary wing research results have been regularly published in the Proceedings of the American Helicopter Society Annual Forum and the Journal of the AHS. During this time, 30 honorary awards have been presented to NACA and NASA researchers at the Annual Forum Honors Night ceremonies. These awards were given to individual researchers and to technical teams for significant contributions to the advancement of rotary wing technology.

Over the years, the technical expertise of the personnel conducting the ongoing rotary wing research at NACA-NASA has represented a valuable national resource at the disposal of other Government organizations and industry. Until the Second World War, small groups of rotary wing specialists were the prime source of long-term, fundamental research. In the late 1940s, the United States helicopter industry emerged and established technical teams focused on more near-term research in support of their design departments. In turn, the military recognized the need to build an in-house research and development capability to guide their major investments in new rotary wing fleets. The Korean War marked the beginning of the U. S. Army's long-term commitment to the utilization of rotary wing aircraft. In 1962, Gen. Hamilton H. Howze, the first Director of Army Aviation, convened the U. S. Army Tactical Mobility Requirements Board (Howze Board).[270] This milestone launched the emergence of the Air Mobile Airborne Division concept and thereby the steady growth in U. S. military helicopter R&D and production. The working relationship among Government agencies and industry R&D organizations has been close. In particular, the availability of unique facilities and the existence of a pool of experienced rotary wing researchers at NASA led to the United States Army's establishing a "special relationship" with NASA and an initial research presence at the Ames Research Center in 1965. This was followed by the creation of co-located and integrated research organizations at the Ames, Langley, and Glenn Research Centers in the early 1970s. The Army organizations were staffed by specialists in key disciplines such as unsteady aerodynamics, aeroelasticity, acoustics, flight mechanics, and advanced design. In addition, Army civilian and military engineering and support personnel were assigned to work full time in appropriate NASA research facilities and theoretical analysis groups. These assignments included placing active duty military test pilots in the NASA flight research organizations. Over the long term, this teaming arrangement facilitated significant research activity. In addition to Research and Technology Base projects, it made it possible to perform major jointly funded and managed rotary wing Systems Technology and Experimental Aircraft programs. The United States Army partnership was augmented by other research teaming agreements with the United States Navy, FAA, the Defense Advanced Research Projects Agency (DARPA), academia, and industry.

Perspectives on the Past, Prospects for the Future

Unfortunately for the immediate future of civilian supersonic flight, the successful LaNCETS project coincided almost exactly with the spread of the global financial crisis and the start of a severe recession. These negative economic developments hit almost all major industries, not the least being air carriers and aircraft manufacturers. The impact on those recently thriving companies making business jets was aggravated even more by populist and political backlash at executives of troubled corporations, some now being subsidized by the Federal Government, for continuing to fly in corporate jets. Lamenting this unsought negative publicity, Aviation Week and Space Technology examined the plight of the small-jet manufacturers in a story with the following subheading: "As if the economy were not enough, business aviation becomes a scapegoat for executive excess."[541] Nevertheless, NASA was continuing to invest in supersonic technologies and sonic boom research, and the aircraft industry was not ready to abandon the ultimate goal of supersonic civilian flight. For example, Boeing—under a Supersonics Project contract—was studying low-boom modifications for one of NASA's F-16XL aircraft as one way to seek the holy grail for practical supersonic commercial flight: acceptance by the public. This relatively low-cost idea for a shaped sonic boom demonstrator had been one of the options being considered during NASA's short-lived Sonic Boom Mitigation Project in 2005. Since then, findings from the Quiet Spike and LaNCETS experiments, along with continued progress in computational fluid dynamics, were helping to confirm and refine the aerodynamic and propulsion attributes needed to mitigate the strength of sonic booms.

In the case of the F-16XL, the modifications proposed by Boeing included an extended nose glove (reminiscent of the SSBD), lateral chines that blend into the wings (as with the SR-71), a sharpened V-shaped front canopy (like those of the F-106 and SR-71), an expanded nozzle for its jet engine (similar to those of F-15B No. 837), and a dorsal extension (called a "stinger") to lengthen the rear of the airplane. Although such add-ons would preclude the low-drag characteristics also desired in a demonstrator, Boeing felt that its "initial design studies have been encouraging with respect to shock mitigation of the forebody, canopy, inlet, wing leading edge, and aft lift/volume distribution features." Positive results from more detailed designs and successful wind tunnel testing would be the next requirements for continuing consideration of the proposed modifications.[542]

It was clear that NASA's discoveries about sonic booms and how to control them were beginning to pay dividends. Whatever the fate of Boeing's idea or any other proposals yet to come, NASA was committed to finding the best way to demonstrate fully shaped sonic booms. As another encouraging sign, the FAA was working with NASA on a roadmap for studying community reactions to sonic booms, one that would soon be presented to the ICAO.[543]

As shown in this study, past expectations for a quiet civilian supersonic transport had repeatedly run up against scientific, technical, economic, and political hurdles too high to overcome. That is why such an airplane has yet to fly. Yet the knowledge gained and lessons learned from each attempt attest to the value of persistence in pursuing both basic and applied research. Recent progress in shaping sonic booms builds upon the work of dedicated NASA civil servants over more than half a century, the data and documentation preserved through NASA's scientific and technical information program, the special facilities and test resources maintained and operated by NASA's research Centers, and NASA's support of and partnership with contractors and universities.

Since the dawn of civilization, conquering the twin tyrannies of time and distance has been a powerful human aspiration, one that has served as a catalyst for many technological innovations. It seems reasonable to assume that this need for speed will eventually break down the barriers in the way of practical supersonic transportation, to include solving the problem of the sonic boom. When that time finally does come, it will have been made possible by NASA's many years of meticulous research, careful testing, and inventive experimentation on ways to soften the sonic footprint.

Fly-By-Wire: Fulfilling Promise and Navigating Around Nuance

As designers and flightcrews became more comfortable with electronic flight control systems and the systems became more reliable, the idea of removing the extra weight of the pilot's mechanical control system began to emerge. Pilots resisted the idea because electrical systems do fail, and the pilots (especially military pilots) wanted a "get-me-home" capability. One flight-test program received little attention but contributed a great deal to the acceptance of fly-by-wire technology. The Air Force initiated a program to demonstrate that a properly designed fly-by-wire control system could be more reliable and survivable than a mechanical system. The F-4 Survivable Flight Control System (SFCS) program was initiated in the early 1970s. Many of the then-current accepted practices for flight control installations were revised to improve survivability. Four independent analog computer systems provided fail-op, fail-op (FOFO) redundancy. A self-adaptive gain changer was also included in the control logic (similar to the MH-96 in the X-15). Redundant computers, gyros, and accelerometers were eventually mounted in separate locations in the airplane, as were power supplies. Flight control system wire bundles for redundant channels were separated and routed through different parts of the airplane. Individual surface actuators (one aileron for example) could be operated to continue to maintain control when the opposite control surface was inoperative. The result was a flight control system that was lighter yet more robust than a mechanical system (which could be disabled by a single failure of a pushrod or cable). After development flight-testing of the SFCS airplane was completed, the standard F-4 mechanical backup system was removed, and the airplane was flown in a completely fly-by-wire configuration.[700]
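A notional sketch may help readers picture how fail-op, fail-op redundancy works in software terms. The Python fragment below is not the SFCS logic (the SFCS used analog computers, and the threshold and channel values here are invented); it simply shows one common pattern, excluding channels that disagree with the group one at a time so that a quad-redundant system can absorb two failures and still produce a usable surface command.

```python
from statistics import median

# Hypothetical tolerance: how far a channel may stray from the mid-value
# before it is declared failed (degrees of surface command, invented).
FAIL_THRESHOLD = 2.0

def vote(channels):
    """Select a surface command from redundant channels, excluding outliers
    one at a time. With four channels this tolerates two failures (FOFO)."""
    active = dict(enumerate(channels))
    failed = []
    while len(active) > 2:
        mid = median(list(active.values()))
        worst = max(active, key=lambda i: abs(active[i] - mid))
        if abs(active[worst] - mid) <= FAIL_THRESHOLD:
            break                      # remaining channels agree
        failed.append(worst)           # declare the outlier failed
        del active[worst]
    return median(list(active.values())), failed

if __name__ == "__main__":
    # Channels 0 and 1 agree; channel 2 has drifted; channel 3 is hard-over.
    command, failed = vote([4.9, 5.1, 11.0, 45.0])
    print(f"selected command: {command:.1f} deg, failed channels: {failed}")
```

Real flight control systems add persistence counters, in-line monitors, and hardware comparators on top of this kind of selection logic, but the principle of voting out disagreeing channels is the same.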

The first production fly-by-wire airplane was the YF-16. It used four redundant analog computers with FOFO capability. The airplane was not only the first production aircraft to use FBW control, it was also the first airplane intentionally designed to be unstable in the pitch axis while flying at subsonic speeds ("relaxed static stability"). The YF-16 prototype test program allowed the Air Force and General Dynamics to iron out the quirks of the FBW control system as well as the airplane aerodynamics before entering the full scale development of the F-16A/B. The high gains required for flying the unstable airplane resulted in some structural resonance and limit-cycle problems. The addition of external stores (tanks, bombs, and rockets) altered the structural mode frequencies and required fine-tuning of the control laws. Researchers and designers learned that flight control system design and aircraft interactions in the emergent FBW era were clearly far more complex and nuanced than control system design in the era of direct mechanical feedback and the augmented hydromechanical era that had followed.[701]

Configuration Influence upon Stall and Departure Behavior

Another maneuver that can lead to loss of control is a stall. An aircraft "stalls" when the wing's angle of attack exceeds a critical angle beyond which the wing can no longer generate the lift necessary to support the airplane. A typical stall consists of some pre-stall warning buffet as the flow over the wing begins to break down, followed by stall onset, usually accompanied by an uncommanded nose-down pitching rotation of the aircraft, as gravity takes over and the airplane naturally tries to regain lost airspeed. The loss of control for a normal stall is quite brief and can usually be overcome, or prevented, by proper control application at the time of pre-stall warning. There are design features of some aircraft that result in quite different stall characteristics. Stalls may be a straightforward wings-level gentle drop (typically leading to a swift and smooth recovery), or sharply abrupt, or an unsymmetrical wing drop leading to a spin entry. The latter can be quite hazardous.

High-performance T-tail aircraft are particularly vulnerable to abnormal stall effects. Lockheed's sleek F-104 Starfighter incorporated a T-tail operating behind a short, stubby, and extremely thin wing. As the wing approached the critical stall angle, the wingtip vortices impinged on the horizontal tail, creating an abrupt nose-up pitching moment, commonly referred to as a "pitch-up." The pitch-up placed the airplane in an uncontrollable flight environment: either a highly oscillatory spin or a deep stall (a stable condition where the airplane remains locked in a high angle of attack vertical descent). To prevent inadvertent pitch-ups, the aircraft was equipped with a "stick shaker" and a "stick kicker." The stick shaker created an artificial vibration of the stick, simulating stall buffet, as the airplane approached a stall. The stick kicker applied a sharp nose-down command to the horizontal tail when the airplane reached the critical condition for an impending pitch-up. A similar situation developed for the McDonnell F-101 Voodoo (also a T-tail behind a short, stubby wing). Stick shakers and kickers were quite successful in allowing these airplanes to operate safely throughout their operational lifespan. Overall, however, the T-tail layout was largely discredited for high-performance fighter and attack aircraft, the most successful postwar fighters being those with low-placed horizontal tails. Such a configuration, typified by the F-100, F-101, F-105, F-5, F-14, F-15, F-16, F/A-18, F-22, F-35, and a host of foreign aircraft, is now a design standard for tailed transonic and supersonic military aircraft. It was a direct outgrowth of the extensive testing the NACA did in the late 1940s and early 1950s on such aircraft as the D-558-2, the North American F-86, and the Bell X-5, all of which, to greater or lesser extents, suffered from pitch-up.

The advent of the swept wing induced its own challenges. In 1935, German aerodynamicist Adolf Busemann discovered that aircraft could operate at higher speeds, and closer to the speed of sound (Mach 1), by using swept wings. By the end of the Second World War, American NACA researcher Robert T. Jones of Langley Memorial Aeronautical Laboratory had independently discovered its benefits as well. The swept wing subsequently transformed postwar military and civil aircraft design, but it was not without its own quite serious problems. The airflow over a swept wing tends to move aft and outboard, toward the tip. This results in the wingtip stalling before the rest of the wing. Because the wingtip is aft of the wing root, the loss of lift at the tip causes an uncommanded nose-rise as the airplane approaches a stall. This nose-rise is similar to a pitch-up but not nearly as abrupt. It can be controlled by the pilot, and for most swept wing airplanes there are no control system features specifically to correct nose-rise problems. Understanding the manifestations of swept wing stall and swept wing pitch-up commanded a great deal of NACA and Air Force interest in the early years of the jet age, for reasons of both safety and combat effectiveness. Much of the NACA's research program on its three swept wing Douglas D-558-2 Skyrockets involved examination of these problems. Research included analysis of a variety of technological "fixes," such as sawtooth leading edge extensions, wing fences, and fixed and retracting slots. Afterward, various combinations of flaps, flow direction fences, wing twist, and other design features have been used to overcome the tip-stall characteristic in modern swept wing airplanes, which, of course, include most commercial airliners.[748]

Three-Dimensional Flows and Hypersonic Vehicles

Three-dimensional flow-field calculation was, for decades, a frustrating impossibility. I recall colleagues in the 1960s who would have sold their children (at least they said) to be able to calculate three-dimensional flow fields. The number of grid points required for such calculations simply exceeded the capability of any computer at that time. With the advent of supercomputers, however, the practical calculation of three-dimensional flow fields became realizable. Once again, NASA researchers led the way. The first truly three-dimensional flow calculation of real importance was carried out by K. J. Weilmuenster in 1983 at the NASA Langley Research Center. He calculated the inviscid flow over a Shuttle-like body at angle of attack, including the shape and location of the three-dimensional bow shock wave. This was no small feat at the time, and it proved to the CFD community that the time had come for such three-dimensional calculations.[780]

X-24C computed surface streamlines. From author's collection.

This was followed by an even more spectacular success. In 1986, using the predictor-corrector method conceived by NASA Ames Research Center's MacCormack, Joseph S. Shang and S. J. Scherr of the Air Force Flight Dynamics Laboratory (AFFDL) published the first Navier-Stokes calculation of the flow field around a complete airplane. The airplane was the "X-24C," a proposed (though never completed) rocket-powered Mach 6+ hypersonic test vehicle conceived by the AFFDL, and the calculation was made for flow conditions at Mach 5.95. The mesh system consisted of 475,200 grid points throughout the flow field, and the explicit time-marching procedure took days of computational time on a Cray computer. But it was the first such calculation and a genuine watershed in the advancement of computational fluid dynamics.[781]
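For readers unfamiliar with explicit time marching, the sketch below shows the shape of a MacCormack predictor-corrector step on a deliberately simple model problem, the one-dimensional inviscid Burgers equation, rather than the three-dimensional Navier-Stokes equations Shang and Scherr solved; the grid size, time step, and initial profile are illustrative only.

```python
import numpy as np

def flux(u):
    # Flux for the inviscid Burgers equation, f(u) = u^2 / 2
    return 0.5 * u * u

def maccormack_step(u, dt, dx):
    """One MacCormack step on a periodic grid: forward-difference predictor,
    backward-difference corrector, then an average of the two levels."""
    f = flux(u)
    u_star = u - dt / dx * (np.roll(f, -1) - f)                           # predictor
    f_star = flux(u_star)
    return 0.5 * (u + u_star - dt / dx * (f_star - np.roll(f_star, 1)))  # corrector

if __name__ == "__main__":
    nx = 200
    dx = 1.0 / nx
    x = np.arange(nx) * dx
    u = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)      # smooth initial velocity profile
    dt = 0.4 * dx / np.max(np.abs(u))            # CFL-limited explicit time step
    for _ in range(100):
        u = maccormack_step(u, dt, dx)
    print("velocity range after 100 steps:", u.min(), u.max())
```

Each step applies a forward-differenced predictor and a backward-differenced corrector, then averages the two levels; the 1986 X-24C calculation marched a three-dimensional analogue of this kind of update over 475,200 grid points for days of Cray time.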

Note that both of these pioneering three-dimensional calculations were carried out for hypersonic vehicles, once again underscoring the importance of hypersonic aerodynamics as a major driving force behind the development of computational fluid dynamics and of the leading role played by NASA in driving the whole field of hypersonics.[782]

Jet Propulsion Laboratory

Jet Propulsion Laboratory (JPL) began as an informal group of students and staff from the California Institute of Technology (Caltech) who experimented with rockets before and during World War II; evolved afterward into the Nation's center for unpiloted exploration of the solar system and deep space, operating related tracking and data acquisition systems; and was managed for NASA by Caltech.[890] Dr. Theodore von Karman, then head of Caltech's Guggenheim Aeronautical Laboratory, shepherded this group into becoming a center of rocket research for the Army. Upon NASA's formation in 1958, JPL came under NASA's responsibility.[891]

Consistent with its origins and Caltech's continuing role in its management, JPL's orientation has always emphasized advanced experimental and analytical research in various disciplines, including structures. JPL developed efficiency improvements for NASTRAN as early as 1971.[892] Other JPL research included basic finite element techniques, high-velocity impact effects, effect of spin on structural dynamics, geometrically nonlinear structures (i.e., structures that deflect sufficiently to significantly alter the structural properties), rocket engine structural dynamics, flexible manipulators, system identification, random processes, and optimization. The most notable products of this research are VISCEL, TEXLESP-S, and the PID codes (AU-FREDI and MODE-ID).[893]

VISCEL (for Visco-Elastic and Hyperelastic Structures) and TEXLESP-S treat special classes of materials that general-purpose finite element codes typically cannot handle. VISCEL treats visco-elastic problems, in which materials exhibit viscosity (normally a fluid characteristic) as well as elasticity. VISCEL was introduced in 1971 and was adapted by industry over the next decade.[894] In 1982, the Shell Oil Company used VISCEL to validate a proprietary code that was in development for the design of plastic products.[895] In 1984, AiResearch was using VISCEL to analyze seals and similar components in aircraft auxiliary power units (APUs).[896]

JPL has been leading research in the structural dynamics of solid rockets almost since the laboratory was first established. TEXLESP-S was specifically developed for analysis of solid rocket fuels, which may be polymeric materials exhibiting hyperelastic behavior. TEXLESP-S is a finite element code developed for large-strain (hyperelastic) problems, in which materials may be purely elastic but exhibit such large strain deformations that the geometric configuration of the structure is significantly altered. (This is distinct from the small-strain, large-deflection situations that can occur, for example, with long flexible booms on spacecraft.)[897]

System Identification/Parameter Identification (PID, including AU-FREDI and MODE-ID) is the use of empirical data to build or tune a mathematical model of a system. PID is used in many disciplines, including automatic control, flight-testing, and structural analysis.[898] Ideally, excitation of the system is performed by systematically exciting specific modes. However, such controlled excitation is not always practical, and even under the best of circumstances, there is some uncertainty in the interpretation of the data. The MODE-ID program was developed in 1988 to estimate not only the modal parameters of a structure, but also the level of uncertainty with which those parameters have been estimated:

Such a methodology is presented which allows the precision of the estimates of the model parameters to be computed.

It also leads to a guiding principle in applications. Namely, when selecting a single model from a given class of models, one should take the most probable model in the class based on the experimental data. Practical applications of this principle are given which are based on the utilization of measured seismic motions in large civil structures. Examples include the application of a computer program MODE-ID to identify modal properties directly from seismic excitation and response time histories from a nine-story steel-frame building at JPL and from a freeway overpass bridge.[899]

Another system identification program, Autonomous Frequency Domain Identification (AU-FREDI), was developed for the identification of structural dynamic parameters and the development of control laws for large and/or flexible space structures. It was furthermore intended to be used for online design and tuning of robust controllers, i.e., to develop control laws in real time, although it could be modified for offline use as well. AU-FREDI was developed in 1989, validated in the Caltech/Jet Propulsion Laboratory's Large Spacecraft Control Laboratory, and made publicly available.[900] This is just a small sample of the research that JPL has conducted and sponsored in system identification, control of flexible structures, integrated control/structural design, and related fields. While intended primarily for space structures, this research also has relevance for medicine, manufacturing technology, and the design and construction of large, ground-based structures.
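As a rough illustration of the system-identification idea behind MODE-ID and AU-FREDI, the toy Python script below recovers one mode's natural frequency and damping ratio from a simulated free-decay record. Everything in it is hypothetical (the sample rate, the mode properties, and the simple FFT-peak and log-decrement estimates); the actual JPL codes identify multiple modes from measured excitation and response histories and quantify the uncertainty of the estimates.

```python
import numpy as np

# --- "Measured" data: free decay of a single structural mode (simulated) ----
fs = 200.0                              # sample rate, Hz (invented)
t = np.arange(0.0, 10.0, 1.0 / fs)
f_true, zeta_true = 2.5, 0.02           # 2.5 Hz mode with 2% damping (invented)
wn = 2.0 * np.pi * f_true
wd = wn * np.sqrt(1.0 - zeta_true**2)
y = np.exp(-zeta_true * wn * t) * np.cos(wd * t)   # noise omitted for clarity

# --- Identify the natural frequency from the FFT peak -----------------------
spectrum = np.abs(np.fft.rfft(y * np.hanning(y.size)))
freqs = np.fft.rfftfreq(y.size, d=1.0 / fs)
f_est = freqs[np.argmax(spectrum)]

# --- Identify damping from the logarithmic decrement of successive peaks ----
peaks = [i for i in range(1, y.size - 1)
         if y[i] > y[i - 1] and y[i] > y[i + 1] and y[i] > 0.05]
delta = np.log(y[peaks[0]] / y[peaks[10]]) / 10.0       # decay over 10 cycles
zeta_est = delta / np.sqrt(4.0 * np.pi**2 + delta**2)

print(f"identified frequency: {f_est:.2f} Hz (true {f_true} Hz)")
print(f"identified damping ratio: {zeta_est:.3f} (true {zeta_true})")
```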

NPLOT (Goddard, 1982)

NPLOT was a product of research into the visualization of finite element models, which had been ongoing at Goddard since the introduction of NASTRAN. A fast hidden-line algorithm was developed in 1982 and became the basis for the NPLOT plotting program for NASTRAN, publicly released initially in 1985 and in improved versions into the 1990s.[987]

Integrated Modeling of Optical Systems (IMOS) (Goddard and JPL, 1990s)

A combined multidisciplinary code, IMOS was developed during the 1990s by Goddard and JPL: "Integrated Modeling of Optical Systems (IMOS) is a finite element-based code combining structural, thermal, and optical ray-tracing capabilities in a single environment for analysis of space-based optical systems."[988] IMOS represents a recent step in the continuing evolution of Structural-Thermal-Optical analysis capability, which has been an important activity at the Space Flight Centers since the early 1970s.