Category AERONAUTICS

The Truckee Workshop and Conference Report

In July 1989, NASA Ames sponsored a workshop on requirements for the development and use of very high-altitude aircraft for atmospheric research. The primary objectives of the workshop were to assess the scientific justification for developing new aircraft that would support stratospheric research beyond the altitudes attainable by NASA's ER-2 aircraft and to determine the aircraft characteristics (ceiling altitude, payload capabilities, range, flight duration, and operational capabilities) required to perform the stratospheric research missions. Approximately 35 stratospheric scientists and aircraft design and operations experts attended the conference, either as participants or as observers. Nineteen of these attendees were from NASA (1 from NASA Langley, 16 from NASA Ames, and 2 representing both NASA Dryden and Ames); 4 were from universities and institutes, including Harvard University and Pennsylvania State University; and 6 represented aviation companies, including Boeing Aerospace, Aurora Flight Sciences, and Lockheed. Crofton Farmer, representing the Jet Propulsion Laboratory, served as workshop chair, and Philip Russell, from NASA Ames, was the workshop organizer and report editor. The attendees represented a broad range of expertise, including 9 aircraft design and development experts, 3 aircraft operations representatives, 2 aeronautical science experts, 2 Earth science specialists, 1 instrument management expert (Steven Wegener from NASA Ames, who later directed the science and payload projects for the solar UAV program), 1 general management observer, and 17 stratospheric scientists.[1522]

The workshop considered pressing scientific questions that required advanced aircraft capabilities to accomplish a number of proposed science-related missions, including: (1) answering important polar vortex questions, including determining what causes ozone loss above the dehydration region in Antarctica and to what extent the losses are transmitted to the middle latitudes; (2) determining high-altitude photochemistry in tropical and middle latitudes; (3) determining the impact and degree of airborne transport of certain chemicals; and (4) studying volcanic, stratospheric cloud/aerosol, greenhouse, and radiation balance effects. The workshop concluded that carrying out these missions would require flights at a cruise altitude of 100,000 feet, the ability to make a round trip of between 5,000 and 6,000 nautical miles, the capability to fly into the polar night and over water more than 200 nautical miles from land, and the ability to carry a payload equal to or greater than that of the ER-2. The workshop report noted that experience with satellites pointed out the need for increased emphasis on correlative measurements for current and future remote sensing systems. Previously, balloons had provided most of this information, but balloons presented a number of problems, including a low frequency of successful launches, the small number of available launch sites worldwide, the inability to follow selected paths, and the difficulty of recovering payloads. The workshop concluded with the following finding:

We recommend development of an aircraft with the capacity to carry integrated payloads similar to the ER-2 to significantly higher altitude preferably with greater range. It is important that the aircraft be able to operate over the ocean and in the polar night. This may dictate development of an autonomous or remotely piloted plane. There is a complementary need to explore strategies that would allow payloads of reduced weight to reach even higher altitude, enhancing the current capability of balloons.[1523]

High-altitude, long-duration vehicle development and the development of reduced-weight instrumentation both became goals of the ERAST program.

The Research Culture

As part of the broad scope of aeronautics research, the rotary wing efforts spanned the full range of research activity, including theoretical study, wind tunnel testing, ground-based simulation, and flight test. NACA rotary wing research began in the early 1920s with exploratory wind tunnel tests of simple rotor models as the precursor to the basic research undertaken in the 1930s. The Langley Memorial Aeronautical Laboratory, established at Hampton, VA, in 1917, purchased a Pitcairn PCA-2 autogiro in 1931 for research use.[269] The National Advisory Committee for Aeronautics had been formed in 1915 to "supervise and direct scientific study of the problems of flight, with a view to their practical solution." Rotary wing research at Langley proceeded under the direction of the Committee, with annual inspection meetings by the full Committee to review aeronautical research progress. In the early 1940s, the Ames Aeronautical Laboratory, now known as the Ames Research Center, opened for research at Moffett Field in Sunnyvale, CA. Soon after, the Aircraft Engine Research Laboratory, known for many years as the Lewis Research Center and now known as the Glenn Research Center, opened in Cleveland, OH. Each NACA Center had unique facilities that accommodated rotary wing research needs. Langley Research Center played a major role in NACA-NASA rotary wing research until 1976, when Ames Research Center was assigned the lead role.

The rotary wing research is carried out by a staff of research engineers, scientists, technical support specialists, senior management, and administrative personnel. The rotary wing research staff draws on the expertise of the technical discipline organizations in areas such as aerodynamics, structures and materials, propulsion, dynamics, acoustics, and human factors. Key support functions include such activities as test apparatus design and fabrication, instrumentation research and development (R&D), and research computation support. The constant instrumentation challenge has been to adapt the latest available technology to acquiring reliable research data. Over the years, the related challenge for computation tasks has been to perform data reduction and analysis for the increasing sophistication and scope of theoretical investigations and test projects. In the NACA environment, the word "computers" actually referred to a large cadre of female mathematicians. They managed the test measurement recordings, extracted the raw data, analyzed the data using desktop electromechanical calculators, and hand-plotted the results. The NASA era transformed this work from a tedious enterprise into managing the application of the ever-increasing power of modern electronic data recording and computing systems.

The dissemination of the rotary wing research results, which form the basis of NACA-NASA contributions over the years, takes a number of forms. The effectiveness of the contributions depends on making the research results and staff expertise readily available to the Nation's Government and industry users. The primary method has traditionally been the formal publication of technical reports, studies, and compilations that are available for exploitation and use by practitioners. Another method that fosters immediate dialogue with research peers and potential users is the presentation of technical papers at conferences and technical meetings. These papers are published in the conference proceedings and are frequently selected for broader publication as papers or journal articles by technical societies such as the Society of Automotive Engineers (SAE)-Aerospace and the American Institute of Aeronautics and Astronautics (AIAA). Since 1945, NACA-NASA rotary wing research results have been regularly published in the Proceedings of the American Helicopter Society Annual Forum and the Journal of the AHS. During this time, 30 honorary awards have been presented to NACA and NASA researchers at the Annual Forum Honors Night ceremonies. These awards were given to individual researchers and to technical teams for significant contributions to the advancement of rotary wing technology.

Over the years, the technical expertise of the personnel conducting the ongoing rotary wing research at NACA-NASA has represented a valuable national resource at the disposal of other Government organizations and industry. Until the Second World War, small groups of rotary wing specialists were the prime source of long-term, fundamental research. In the late 1940s, the United States helicopter industry emerged and established technical teams focused on more near-term research in support of their design departments. In turn, the military recognized the need to build an in-house research and development capability to guide their major investments in new rotary wing fleets. The Korean War marked the beginning of the U.S. Army's long-term commitment to the utilization of rotary wing aircraft. In 1962, Gen. Hamilton H. Howze, the first Director of Army Aviation, convened the U.S. Army Tactical Mobility Requirements Board (Howze Board).[270] This milestone launched the emergence of the Air Mobile Airborne Division concept and thereby the steady growth in U.S. military helicopter R&D and production. The working relationship among Government agencies and industry R&D organizations has been close. In particular, the availability of unique facilities and the existence of a pool of experienced rotary wing researchers at NASA led to the United States Army's establishing a "special relationship" with NASA and an initial research presence at the Ames Research Center in 1965. This was followed by the creation of co-located and integrated research organizations at the Ames, Langley, and Glenn Research Centers in the early 1970s. The Army organizations were staffed by specialists in key disciplines such as unsteady aerodynamics, aeroelasticity, acoustics, flight mechanics, and advanced design. In addition, Army civilian and military engineering and support personnel were assigned to work full time in appropriate NASA research facilities and theoretical analysis groups. These assignments included placing active duty military test pilots in the NASA flight research organizations. Over the long term, this teaming arrangement facilitated significant research activity. In addition to Research and Technology Base projects, it made it possible to perform major jointly funded and managed rotary wing Systems Technology and Experimental Aircraft programs. The United States Army partnership was augmented by other research teaming agreements with the United States Navy, the FAA, the Defense Advanced Research Projects Agency (DARPA), academia, and industry.

Perspectives on the Past, Prospects for the Future

Unfortunately for the immediate future of civilian supersonic flight, the successful LaNCETS project coincided almost exactly with the spread of the global financial crisis and the start of a severe recession. These negative economic developments hit almost all major industries, not least air carriers and aircraft manufacturers. The impact on those recently thriving companies making business jets was aggravated even more by populist and political backlash at executives of troubled corporations, some now being subsidized by the Federal Government, for continuing to fly in corporate jets. Lamenting this unsought negative publicity, Aviation Week and Space Technology examined the plight of the small-jet manufacturers in a story with the following subheading: "As if the economy were not enough, business aviation becomes a scapegoat for executive excess."[541] Nevertheless, NASA was continuing to invest in supersonic technologies and sonic boom research, and the aircraft industry was not ready to abandon the ultimate goal of supersonic civilian flight. For example, Boeing—under a Supersonics Project contract—was studying low-boom modifications for one of NASA's F-16XL aircraft as one way to seek the holy grail of practical supersonic commercial flight: acceptance by the public. This relatively low-cost idea for a shaped sonic boom demonstrator had been one of the options considered during NASA's short-lived Sonic Boom Mitigation Project in 2005. Since then, findings from the Quiet Spike and LaNCETS experiments, along with continued progress in computational fluid dynamics, were helping to confirm and refine the aerodynamic and propulsion attributes needed to mitigate the strength of sonic booms.

In the case of the F-16XL, the modifications proposed by Boeing included an extended nose glove (reminiscent of the SSBD), lateral chines that blend into the wings (as with the SR-71), a sharpened V-shaped front canopy (like those of the F-106 and SR-71), an expanded nozzle for its jet engine (similar to those of F-15B No. 837), and a dorsal extension (called a "stinger") to lengthen the rear of the airplane. Although such add-ons would preclude the low-drag characteristics also desired in a demonstrator, Boeing felt that its "initial design studies have been encouraging with respect to shock mitigation of the forebody, canopy, inlet, wing leading edge, and aft lift/volume distribution features." Positive results from more detailed designs and successful wind tunnel testing would be the next requirements for continuing consideration of the proposed modifications.[542]

It was clear that NASA's discoveries about sonic booms and how to control them were beginning to pay dividends. Whatever the fate of Boeing's idea or any other proposals yet to come, NASA was committed to finding the best way to demonstrate fully shaped sonic booms. As another encouraging sign, the FAA was working with NASA on a roadmap for studying community reactions to sonic booms, one that would soon be presented to the ICAO.[543]

As shown in this study, past expectations for a quiet civilian supersonic transport had repeatedly run up against scientific, technical, economic, and political hurdles too high to overcome. That is why such an airplane has yet to fly. Yet the knowledge gained and lessons learned from each attempt attest to the value of persistence in pursuing both basic and applied research. Recent progress in shaping sonic booms builds upon the work of dedicated NASA civil servants over more than half a century, the data and documentation preserved through NASA's scientific and technical information program, the special facilities and test resources maintained and operated by NASA's research Centers, and NASA's support of and partnership with contractors and universities.

Since the dawn of civilization, conquering the twin tyrannies of time and distance has been a powerful human aspiration, one that has served as a catalyst for many technological innovations. It seems reasonable to assume that this need for speed will eventually break down the barriers in the way of practical supersonic transportation, including solving the problem of the sonic boom. When that time finally does come, it will have been made possible by NASA's many years of meticulous research, careful testing, and inventive experimentation on ways to soften the sonic footprint.

Fly-By-Wire: Fulfilling Promise and Navigating Around Nuance

As designers and flightcrews became more comfortable with electronic flight control systems and the systems became more reliable, the idea of removing the extra weight of the pilot's mechanical control system began to emerge. Pilots resisted the idea because electrical systems do fail, and the pilots (especially military pilots) wanted a "get-me-home" capability. One flight-test program received little attention but contributed a great deal to the acceptance of fly-by-wire technology. The Air Force initiated a program to demonstrate that a properly designed fly-by-wire control system could be more reliable and survivable than a mechanical system. The F-4 Survivable Flight Control System (SFCS) program began in the early 1970s. Many of the then-current accepted practices for flight control installations were revised to improve survivability. Four independent analog computer systems provided fail-op, fail-op (FOFO) redundancy. A self-adaptive gain changer was also included in the control logic (similar to the MH-96 in the X-15). Redundant computers, gyros, and accelerometers were eventually mounted in separate locations in the airplane, as were power supplies. Flight control system wire bundles for redundant channels were separated and routed through different parts of the airplane. Individual surface actuators (one aileron, for example) could be operated to continue to maintain control when the opposite control surface was inoperative. The result was a flight control system that was lighter yet more robust than a mechanical system (which could be disabled by a single failure of a pushrod or cable). After development flight-testing of the SFCS airplane was completed, the standard F-4 mechanical backup system was removed, and the airplane was flown in a completely fly-by-wire configuration.[700]
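The redundancy management that makes such a system trustworthy can be sketched in a few lines. The mid-value-select voter below is a generic illustration of how outputs from redundant flight control channels are commonly combined so that no single failed computer can steer the control surface; it is not drawn from the actual SFCS design, and the function name and interface are hypothetical.

```python
def select_command(channels):
    """Mid-value select across valid redundant channels.

    channels: list of (value, valid) pairs, one per independent computer.
    Returns the median of the valid values, so one erroneous channel
    cannot drag the surface command away from the healthy consensus
    (fail-operational behavior).
    """
    good = sorted(v for v, ok in channels if ok)
    if not good:
        raise RuntimeError("total flight control channel failure")
    n = len(good)
    mid = n // 2
    # median: exact middle for odd counts, mean of the two middle
    # values for even counts
    return good[mid] if n % 2 else 0.5 * (good[mid - 1] + good[mid])

# Four healthy channels: one wildly biased channel barely moves the output.
print(select_command([(1.0, True), (1.02, True), (0.98, True), (5.0, True)]))
# After a first failure is detected and that channel flagged invalid,
# the remaining three still vote out the biased one.
print(select_command([(1.0, True), (1.02, False), (0.98, True), (5.0, True)]))
```

The same idea extends to the second failure ("fail-op, fail-op"): with two valid channels, the voter averages them, and disagreement monitoring decides which to discard.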

The first production fly-by-wire airplane was the YF-16. It used four redundant analog computers with FOFO capability. The airplane was not only the first production aircraft to use FBW control, it was also the first airplane intentionally designed to be unstable in the pitch axis while flying at subsonic speeds ("relaxed static stability"). The YF-16 prototype test program allowed the Air Force and General Dynamics to iron out the quirks of the FBW control system as well as the airplane aerodynamics before entering the full-scale development of the F-16A/B. The high gains required for flying the unstable airplane resulted in some structural resonance and limit-cycle problems. The addition of external stores (tanks, bombs, and rockets) altered the structural mode frequencies and required fine-tuning of the control laws. Researchers and designers learned that flight control system design and aircraft interactions in the emergent FBW era were clearly far more complex and nuanced than control system design in the era of direct mechanical feedback and the augmented hydromechanical era that had followed.[701]

Configuration Influence upon Stall and Departure Behavior

Another maneuver that can lead to loss of control is a stall. An aircraft "stalls" when the wing's angle of attack exceeds a critical angle beyond which the wing can no longer generate the lift necessary to support the airplane. A typical stall consists of some pre-stall warning buffet as the flow over the wing begins to break down, followed by stall onset, usually accompanied by an uncommanded nose-down pitching rotation of the aircraft, as gravity takes over and the airplane naturally tries to regain lost airspeed. The loss of control in a normal stall is quite brief and can usually be overcome, or prevented, by proper control application at the time of the pre-stall warning. Design features of some aircraft result in quite different stall characteristics. A stall may be a straightforward, gentle wings-level drop (typically leading to a swift and smooth recovery), a sharply abrupt break, or an unsymmetrical wing drop leading to a spin entry. The latter can be quite hazardous.

High-performance T-tail aircraft are particularly vulnerable to abnormal stall effects. Lockheed's sleek F-104 Starfighter incorporated a T-tail operating behind a short, stubby, and extremely thin wing. As the wing approached the critical stall angle, the wingtip vortices impinged on the horizontal tail, creating an abrupt nose-up pitching moment, commonly referred to as a "pitch-up." The pitch-up placed the airplane in an uncontrollable flight environment: either a highly oscillatory spin or a deep stall (a stable condition in which the airplane remains locked in a high-angle-of-attack vertical descent). To prevent inadvertent pitch-ups, the aircraft was equipped with a "stick shaker" and a "stick kicker." The stick shaker created an artificial vibration of the stick, simulating stall buffet, as the airplane approached a stall. The stick kicker applied a sharp nose-down command to the horizontal tail when the airplane reached the critical condition for an impending pitch-up. A similar situation developed for the McDonnell F-101 Voodoo (also a T-tail behind a short, stubby wing). Stick shakers and kickers were quite successful in allowing these airplanes to operate safely throughout their operational lifespans. Overall, however, the T-tail layout was largely discredited for high-performance fighter and attack aircraft, the most successful postwar fighters being those with low-placed horizontal tails. Such a configuration, typified by the F-100, F-105, F-5, F-14, F-15, F-16, F/A-18, F-22, F-35, and a host of foreign aircraft, is now a design standard for tailed transonic and supersonic military aircraft. It was a direct outgrowth of the extensive testing the NACA did in the late 1940s and early 1950s on such aircraft as the D-558-2, the North American F-86, and the Bell X-5, all of which, to greater or lesser extents, suffered from pitch-up.
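The shaker/kicker concept can be expressed as simple threshold logic on angle of attack. The sketch below is purely illustrative: the thresholds and the rate-based lead term are hypothetical values chosen for the example, not F-104 system parameters.

```python
def stall_protection(alpha_deg, alpha_rate_dps,
                     shaker_limit=14.0, kicker_limit=18.0, lead_s=1.0):
    """Illustrative stall-protection logic in the spirit of the stick
    shaker and stick kicker: warn the pilot at one angle-of-attack
    threshold and command nose-down at a higher one.

    alpha_deg:      current angle of attack, degrees
    alpha_rate_dps: angle-of-attack rate, degrees per second
    Returns 'kicker', 'shaker', or 'none'.
    """
    # Anticipate the condition by projecting angle of attack ahead,
    # so a rapid pull-up trips the warning earlier than a slow one.
    predicted = alpha_deg + alpha_rate_dps * lead_s
    if predicted >= kicker_limit:
        return 'kicker'   # automatic sharp nose-down tail command
    if predicted >= shaker_limit:
        return 'shaker'   # artificial stick vibration simulating buffet
    return 'none'

print(stall_protection(10.0, 0.0))   # well below both thresholds
print(stall_protection(12.0, 3.0))   # rapid pull-up trips the shaker early
```

The rate term is the key design point: it trades nuisance warnings in slow maneuvers against adequate lead time in abrupt ones.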

The advent of the swept wing brought its own challenges. In 1935, German aerodynamicist Adolf Busemann discovered that aircraft could operate at higher speeds, and closer to the speed of sound (Mach 1), by using swept wings. By the end of the Second World War, American NACA researcher Robert T. Jones of the Langley Memorial Aeronautical Laboratory had independently discovered the swept wing's benefits as well. The swept wing subsequently transformed postwar military and civil aircraft design, but it was not without its own quite serious problems. The airflow over a swept wing tends to move aft and outboard, toward the tip. This results in the wingtip stalling before the rest of the wing. Because the wingtip is aft of the wing root, the loss of lift at the tip causes an uncommanded nose-rise as the airplane approaches a stall. This nose-rise is similar to a pitch-up but not nearly as abrupt. It can be controlled by the pilot, and most swept wing airplanes have no control system features specifically to correct nose-rise problems. Understanding the manifestations of swept wing stall and swept wing pitch-up commanded a great deal of NACA and Air Force interest in the early years of the jet age, for reasons of both safety and combat effectiveness. Much of the NACA's research program on its three swept wing Douglas D-558-2 Skyrockets involved examination of these problems. That research included analysis of a variety of technological "fixes," such as sawtooth leading edge extensions, wing fences, and fixed and retracting slots. Since then, various combinations of flaps, flow direction fences, wing twist, and other design features have been used to overcome the tip-stall characteristic in modern swept wing airplanes, which, of course, include most commercial airliners.[748]

Three-Dimensional Flows and Hypersonic Vehicles

Three-dimensional flow-field calculation was, for decades, a frustrating impossibility. I recall colleagues in the 1960s who would have sold their children (or at least they said so) to be able to calculate three-dimensional flow fields. The number of grid points required for such calculations simply exceeded the capability of any computer at that time. With the advent of supercomputers, however, the practical calculation of three-dimensional flow fields became realizable. Once again, NASA researchers led the way. The first truly three-dimensional flow calculation of real importance was carried out by K. J. Weilmuenster in 1983 at the NASA Langley Research Center. He calculated the inviscid flow over a Shuttle-like body at angle of attack, including the shape and location of the three-dimensional bow shock wave. This was no small feat at the time, and it proved to the CFD community that the time had come for such three-dimensional calculations.[780]

This was followed by an even more spectacular success. In 1986, using the predictor-corrector method conceived by NASA Ames Research Center's MacCormack, Joseph S. Shang and S. J. Scherr of the Air Force Flight Dynamics Laboratory (AFFDL) published the first Navier-Stokes calculation of the flow field around a complete airplane.

[Figure: X-24C computed surface streamlines. From author's collection.]

The airplane was the "X-24C," a proposed (though never completed) rocket-powered Mach 6+ hypersonic test vehicle conceived by the AFFDL, and the calculation was made for flow conditions at Mach 5.95. The mesh system consisted of 475,200 grid points throughout the flow field, and the explicit time-marching procedure took days of computational time on a Cray computer. But it was the first such calculation and a genuine watershed in the advancement of computational fluid dynamics.[781]

Note that both of these pioneering three-dimensional calculations were carried out for hypersonic vehicles, once again underscoring the importance of hypersonic aerodynamics as a major driving force behind the development of computational fluid dynamics and the leading role played by NASA in driving the whole field of hypersonics.[782]
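For readers curious about the numerics, the sketch below applies MacCormack's explicit predictor-corrector idea to the one-dimensional inviscid Burgers equation, a standard toy problem. This is a minimal illustration of the time-marching technique only, not the Shang-Scherr code; the grid size, time step, and boundary treatment are arbitrary choices for the example.

```python
import numpy as np

def maccormack_burgers(u, dx, dt, steps):
    """Advance the inviscid Burgers equation u_t + (u^2/2)_x = 0 with
    MacCormack's explicit predictor-corrector scheme."""
    for _ in range(steps):
        f = 0.5 * u**2                       # flux F(u) = u^2 / 2
        up = u.copy()
        # predictor: forward difference of the flux
        up[:-1] = u[:-1] - dt / dx * (f[1:] - f[:-1])
        fp = 0.5 * up**2
        un = u.copy()
        # corrector: backward difference of the predicted flux,
        # averaged with the original solution
        un[1:-1] = 0.5 * (u[1:-1] + up[1:-1]
                          - dt / dx * (fp[1:-1] - fp[:-2]))
        u = un                               # endpoints held fixed
    return u

x = np.linspace(0.0, 1.0, 101)
u0 = 1.0 + 0.1 * np.sin(2 * np.pi * x)       # smooth right-running wave
u = maccormack_burgers(u0.copy(), dx=0.01, dt=0.004, steps=50)
```

The scheme's appeal, and the reason it suited 1980s supercomputers, is that each step is a pair of simple one-sided differences: the alternation of forward and backward differencing yields second-order accuracy without any matrix inversion.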

Jet Propulsion Laboratory

The Jet Propulsion Laboratory (JPL) began as an informal group of students and staff from the California Institute of Technology (Caltech) who experimented with rockets before and during World War II. It evolved afterward into the Nation's center for unpiloted exploration of the solar system and deep space, operating related tracking and data acquisition systems, and has been managed for NASA by Caltech.[890] Dr. Theodore von Karman, then head of Caltech's Guggenheim Aeronautical Laboratory, shepherded this group into becoming a center of rocket research for the Army. Upon NASA's formation in 1958, JPL came under NASA's responsibility.[891]

Consistent with its origins and Caltech's continuing role in its management, JPL's orientation has always emphasized advanced experimental and analytical research in various disciplines, including structures. JPL developed efficiency improvements for NASTRAN as early as 1971.[892] Other JPL research included basic finite element techniques, high-velocity impact effects, the effect of spin on structural dynamics, geometrically nonlinear structures (i.e., structures that deflect sufficiently to significantly alter the structural properties), rocket engine structural dynamics, flexible manipulators, system identification, random processes, and optimization. The most notable products of this work are VISCEL, TEXLESP-S, and PID (AU-FREDI and MODE-ID).[893]

VISCEL (for Visco-Elastic and Hyperelastic Structures) and TEXLESP-S treat special classes of materials that general-purpose finite element codes typically cannot handle. VISCEL treats visco-elastic problems, in which materials exhibit viscosity (normally a fluid characteristic) as well as elasticity. VISCEL was introduced in 1971 and was adopted by industry over the next decade.[894] In 1982, the Shell Oil Company used VISCEL to validate a proprietary code that was in development for the design of plastic products.[895] In 1984, AiResearch was using VISCEL to analyze seals and similar components in aircraft auxiliary power units (APUs).[896]

JPL has been leading research in the structural dynamics of solid rockets almost since the laboratory was first established. TEXLESP-S was specifically developed for the analysis of solid rocket fuels, polymeric materials that can exhibit hyperelastic behavior. TEXLESP-S is a finite element code developed for large-strain (hyperelastic) problems, in which materials may be purely elastic but exhibit such large strain deformations that the geometric configuration of the structure is significantly altered. (This is distinct from the small-strain, large-deflection situations that can occur, for example, with long flexible booms on spacecraft.)[897]
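To make the large-strain distinction concrete, the snippet below evaluates the textbook incompressible neo-Hookean law for uniaxial tension and compares it with the linear small-strain prediction; near a stretch of 1 the two agree, while at a stretch of 2 they diverge markedly. The neo-Hookean model is a standard example of a hyperelastic material law used here for illustration, not TEXLESP-S's actual formulation.

```python
def neo_hookean_uniaxial(stretch, mu):
    """Cauchy stress for an incompressible neo-Hookean material in
    uniaxial tension: sigma = mu * (lambda^2 - 1/lambda),
    where lambda is the stretch ratio and mu the shear modulus."""
    return mu * (stretch**2 - 1.0 / stretch)

mu = 1.0  # shear modulus, arbitrary units
for lam in (1.01, 1.5, 2.0):
    # small-strain linear elasticity predicts sigma = E * strain,
    # with E = 3*mu for an incompressible material
    linear = 3.0 * mu * (lam - 1.0)
    print(lam, neo_hookean_uniaxial(lam, mu), linear)
```

At a stretch of 1.01 both give a stress of about 0.03; at a stretch of 2 the hyperelastic law gives 3.5 against the linear extrapolation's 3.0, which is exactly the regime where a large-strain code earns its keep.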

System Identification/Parameter Identification (PID, including AU-FREDI and MODE-ID) is the use of empirical data to build or tune a mathematical model of a system. PID is used in many disciplines, including automatic control, flight-testing, and structural analysis.[898] Ideally, excitation of the system is performed by systematically exciting specific modes. However, such controlled excitation is not always practical, and even under the best of circumstances, there is some uncertainty in the interpretation of the data. The MODE-ID program was developed in 1988 to estimate not only the modal parameters of a structure, but also the level of uncertainty with which those parameters have been estimated:

Such a methodology is presented which allows the precision of the estimates of the model parameters to be computed. It also leads to a guiding principle in applications. Namely, when selecting a single model from a given class of models, one should take the most probable model in the class based on the experimental data. Practical applications of this principle are given which are based on the utilization of measured seismic motions in large civil structures. Examples include the application of a computer program MODE-ID to identify modal properties directly from seismic excitation and response time histories from a nine-story steel-frame building at JPL and from a freeway overpass bridge.[899]

Another system identification program, Autonomous Frequency Domain Identification (AU-FREDI), was developed for the identification of structural dynamic parameters and the development of control laws for large and/or flexible space structures. It was furthermore intended for online design and tuning of robust controllers, i.e., to develop control laws in real time, although it could be modified for offline use as well. AU-FREDI was developed in 1989, validated in the Caltech/Jet Propulsion Laboratory's Large Spacecraft Control Laboratory, and made publicly available.[900] This is just a small sample of the research that JPL has conducted and sponsored in system identification, control of flexible structures, integrated control/structural design, and related fields. While intended primarily for space structures, this research also has relevance for medicine, manufacturing technology, and the design and construction of large, ground-based structures.
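As a much simplified illustration of what identifying modal properties from response time histories involves, the sketch below recovers the frequency and damping ratio of a single decaying mode from synthetic free-response data using the classical logarithmic-decrement method. This textbook technique is offered only to convey the flavor of the problem; it is not the algorithm used by MODE-ID or AU-FREDI, and the function name is hypothetical.

```python
import numpy as np

def identify_mode(t, y):
    """Estimate natural frequency (Hz) and damping ratio of a single
    decaying mode from a free-response time history, using successive
    positive peaks (logarithmic-decrement method)."""
    # indices of local maxima on the positive side of the response
    peaks = [i for i in range(1, len(y) - 1)
             if y[i] > y[i - 1] and y[i] > y[i + 1] and y[i] > 0]
    freq = 1.0 / np.diff(t[peaks]).mean()          # peaks are one period apart
    # average logarithmic decrement between successive peak amplitudes
    delta = np.mean(np.log(y[peaks][:-1] / y[peaks][1:]))
    zeta = delta / np.sqrt(4 * np.pi**2 + delta**2)
    return freq, zeta

# synthetic free response of a 2 Hz mode with 2 percent damping
t = np.linspace(0.0, 5.0, 5000)
wn = 2 * np.pi * 2.0
zeta_true = 0.02
wd = wn * np.sqrt(1 - zeta_true**2)                # damped frequency
y = np.exp(-zeta_true * wn * t) * np.cos(wd * t)
f_est, z_est = identify_mode(t, y)
```

Real structures respond in many modes at once with noisy, partially known excitation, which is precisely why programs like MODE-ID had to treat identification as a statistical model-selection problem rather than a peak-picking exercise.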

NPLOT (Goddard, 1982)

NPLOT was a product of research into the visualization of finite element models, which had been ongoing at Goddard since the introduction of NASTRAN. A fast hidden-line algorithm was developed in 1982 and became the basis for the NPLOT plotting program for NASTRAN, publicly released initially in 1985 and in improved versions into the 1990s.[987]

Integrated Modeling of Optical Systems (IMOS) (Goddard and JPL, 1990s)

A combined multidisciplinary code, IMOS was developed during the 1990s by Goddard and JPL: "Integrated Modeling of Optical Systems (IMOS) is a finite element-based code combining structural, thermal, and optical ray-tracing capabilities in a single environment for analysis of space-based optical systems."[988] IMOS represents a recent step in the continuing evolution of Structural-Thermal-Optical analysis capability, which has been an important activity at the Space Flight Centers since the early 1970s.

Materials Research and Development: The NASP Legacy

The National Aero-Space Plane (NASP) program had much to contribute to metallurgy, with titanium being a particular focus. Titanium has lately come to the fore in aircraft construction because of its high strength-to-density ratio, high corrosion resistance, and ability to withstand moderately high temperatures without creeping. Careful redesign must be accomplished to include it, and it appears only in limited quantities in aircraft that are out of production. But newer aircraft have made increasing use of it, including those of the two largest manufacturers of medium- and long-range commercial jetliners, Boeing and Airbus, whose aircraft and their titanium weights are shown in Table 4.

TABLE 4:

BOEING AND AIRBUS AIRCRAFT MAKING SIGNIFICANT USE OF TITANIUM

AIRCRAFT (INCLUDING THEIR ENGINES)    WEIGHT OF TI, IN METRIC TONS
Boeing 787                            134
Boeing 777                            59
Boeing 747                            45
Boeing 737                            18
Airbus A380                           145
Airbus A350                           74
Airbus A340                           32
Airbus A330                           18
Airbus A320                           12

These numbers offer ample evidence of the increasing prevalence of titanium as a mainstream (and hence no longer “exotic”) aviation material, mirroring its use in other aspects of the commercial sector. For example, the Parker Pen Company used titanium in its T-1 line of ballpoint pens and rollerballs, which it introduced in 1971. Production stopped in 1972 because of the high cost of the metal. But hammerheads fabricated of titanium entered service in 1999. Their light weight allows a longer handle, which increases the speed of the head and delivers more energy to the nail while decreasing arm fatigue. Titanium also substantially diminishes the shock transferred to the user because it generates much less recoil than a steel hammerhead.

In advancing titanium’s use, techniques of powder metallurgy have been at the forefront. These methods give direct control of the microstructure of metals by forming them from powder, with the grains of powder sintering or welding together by being pressed in a mold at high temperature. A manufacturer can control the grain size independently of any heat-treating process. Powder metallurgy also overcomes restrictions on alloying by mixing in the desired additives as powdered ingredients.

Several techniques exist to produce the powders. Grinding a metal slab to sawdust is the simplest, though it yields relatively coarse grains. “Splat-cooling” gives better control. It extrudes molten metal onto the chilled rim of a rotating wheel that cools it instantly into a thin ribbon. This quenching process produces a fine-grained microstructure in the metal. The ribbon then is chemically treated with hydrogen, which makes it brittle so that it can be ground into a fine powder. Heating the powder then drives off the hydrogen.

The Plasma Rotating Electrode Process, developed by the firm of Nuclear Metals, has shown particular promise. The parent metal is shaped into a cylinder that rotates at up to 30,000 revolutions per minute (rpm) and serves as an electrode. An electric arc melts the spinning metal, which throws off droplets within an atmosphere of cool inert helium. The droplets plummet in temperature by thousands of degrees within milliseconds, and their microstructures are so fine as to approach an amorphous state. Their molecules do not form crystals, even tiny ones, but arrange themselves in formless patterns. This process, called “rapid solidification,” has brought particular gains in high-temperature strength.

Standard titanium alloys lose strength at temperatures above 700 to 900 °F. By using rapid solidification, McDonnell-Douglas raised this limit to 1,100 °F prior to 1986, when NASP got underway. Philip Parrish, the manager of powder metallurgy at the Defense Advanced Research Projects Agency (DARPA), notes that his agency spent some $30 million on rapid-solidification technology in the decade after 1975. In 1986, he described it as “an established technology. This technology now can stand along such traditional methods as ingot casting or drop forging.”[1095]

Eleven hundred degrees nevertheless was not enough. But after 1990, the advent of new baseline configurations for the X-30 led to an appreciation that the pertinent areas of the vehicle would face temperatures no higher than 1,500 °F. At that temperature, advanced titanium alloys could serve in metal matrix composites (MMCs), with thin-gauge metals being reinforced with fibers.

A particular composition came from the firm of Titanium Metals and was designated Beta-21S. That company developed it specifically for the X-30 and patented it in 1989. It consisted of Ti along with 15Mo+2.8Cb+3Al+0.2Si. Resistance to oxidation proved to be its strong suit, with this alloy showing resistance two orders of magnitude greater than that of conventional aircraft titanium. Tests showed that it could also be exposed repeatedly to leaks of gaseous hydrogen without being subject to embrittlement. Moreover, it lent itself readily to being rolled to foil-gauge thicknesses of 4 to 5 mils in the fabrication of MMCs.[1096]

There also was interest in using carbon-carbon for primary structure. Here the property that counted was not its heat resistance but its light weight. In an important experiment, the firm of LTV fabricated half of an entire wing box of this material. An airplane’s wing box is a major element of aircraft structure that joins the wings and provides a solid base for attachment of the fuselage fore and aft. Indeed, one could compare it with the keel of a ship. It extends to left and right of the aircraft centerline, with LTV’s box constituting the portion to the left of this line. Built at full scale, it represented a hot-structure wing proposed by General Dynamics. It measured 5 by 8 feet, with a maximum thickness of 16 inches. Three spars ran along its length, five ribs were mounted transversely, and the complete assembly weighed 802 pounds.

The test plan called for it to be pulled upward at the tip to reproduce the bending loads of a wing in flight. Torsion, or twisting, was to be applied by pulling more strongly on the front or rear spar. The maximum load corresponded to having the X-30 execute a pullup maneuver at Mach 2.2 with the wing box at room temperature. With the ascent continuing and the vehicle undergoing aerodynamic heating, the next key event brought the maximum difference in the temperatures of the top and bottom of the wing box, with the former at 994 °F and the latter at 1,671 °F. At that moment, the load on the wing box corresponded to 34 percent of the Mach 2.2 maximum. Farther into the flight, the wing box was to reach peak temperature, 1,925 °F, on the lower surface. These three points were to be reproduced through mechanical forces applied at the ends of the spars and through the use of graphite heaters.

But several important parts delaminated during their fabrication, which seriously compromised the ability of the wing box to bear its specified loads. Plans to impose the peak or Mach 2.2 load were abandoned, with the maximum planned load being reduced to the 34 percent associated with the maximum temperature difference. For the same reason, the application of torsion was deleted from the test program. Amid these reductions in the scope of the structural tests, two exercises went forward during December 1991. The first took place at room temperature and successfully reached the mark of 34 percent without causing further damage to the wing box.

The second test, a week later, reproduced the condition of peak temperature difference while briefly applying the calculated load of 34 percent. The plan then called for further heating to the peak temperature of 1,925 °F. As the wing box approached this value, a difficulty arose because of the use of metal fasteners in its assembly. Some were made from coated columbium and were rated for 2,300 °F, but most were a nickel alloy with a permissible temperature of 2,000 °F. However, an instrumented nickel-alloy fastener overheated and reached 2,147 °F. The wing box showed a maximum temperature of 1,917 °F at that moment, and the test was terminated because the strength of the fasteners now was in question. This test nevertheless counted as a success because it had come within 8 degrees of the specified temperature.[1097]

Both tests thus were marked as having achieved their goals, but their merits were largely in the mind of the beholder. The entire project would have been far more impressive if it had avoided delamination, had successfully achieved the Mach 2.2 peak load, and had subjected the wing box to repeated cycles of bending, torsion, and heating. This effort stood as a bold leap toward a future in which carbon-carbon might take its place as a mainstream material, but it was clear that this future would not arrive during the NASP program. However, the all-carbon-composite airplane, as distinct from one of carbon-carbon, has now become a reality. Carbon-carbon has high temperature resistance, whereas carbon composite burns or melts readily. The airplane that showcases carbon composites is the White Knight 2, built by Burt Rutan’s Scaled Composites firm as part of the Virgin Galactic venture that is to achieve commercial space flight. As of this writing, White Knight 2 is the world’s largest all-carbon-composite aircraft in service; even its control wires are carbon composite. Its 140-foot-span wing is the longest single carbon composite aviation component ever fabricated.[1098]

Far below this rarefied world of transatmospheric tourism, carbon composites are becoming the standard material for commercial aviation. Aluminum has held this role at speeds up to Mach 2 since 1930, but after 80 years, Boeing is challenging this practice with its 787 airliner. By weight, the 787 is 50 percent composite, 20 percent aluminum, 15 percent titanium, 10 percent steel, and 5 percent other. The 787 is 80 percent composite by volume. Each aircraft contains 35 tons of composite reinforced with 23 tons of carbon fiber. Composites are used on fuselage, wings, tail, doors, and interior. Aluminum appears at wing and tail leading edges, with titanium used mainly on engines. The extensive application of composites promotes light weight and long range. The 787 can fly nonstop from New York City to Beijing. The makeup of the 787 contrasts notably with that of the Boeing 777. Itself considered revolutionary when it entered service in 1995, the 777 nevertheless had a structure that was 50 percent aluminum and 12 percent composite. The course to all-composite construction is clear, and if the path is not yet fully trodden, the goal is clearly in sight. As 1930 marked the predominance of all-metal structures in American commercial aviation, so 2010 marks the same point for the evolution of commercial composite aircraft.

The NASP program also dealt with beryllium. This metal had only two-thirds the density of aluminum and possessed good strength, but its temperature range was restricted. The conventional metal had a limit of some 850 °F, while an alloy from Lockheed called Lockalloy, which contained 38 percent aluminum, was rated only for 600 °F. Beryllium had never become a mainstream material like titanium, but, for the X-30, it offered the advantage of high thermal conductivity. Work with titanium had greatly increased its temperatures of use, and there was hope of achieving similar results with beryllium.

Initial efforts used rapid-solidification techniques and sought temperature limits as high as 1,500 °F. These attempts bore no fruit, and from 1988 onward the temperature goal fell lower and lower. In May 1990, a program review shifted the emphasis away from high-temperature formulations toward the development of beryllium as a metal suitable for use at cryogenic temperatures. Standard forms of this metal became unacceptably brittle when only slightly colder than -100 °F, but cryoberyllium proved to be out of reach as well. By 1992, investigators were working with ductile alloys of beryllium, sacrificing all prospect of use at temperatures beyond a few hundred degrees yet winning only modest improvements in low-temperature capability. Terence Ronald, the NASP materials director, wrote in 1995 of rapid-solidification versions with temperature limits as low as 500 °F, which was not what the X-30 needed to reach orbit.[1099]

In sum, the NASP materials effort scored a major advance with Beta-21S, but the genuinely radical possibilities failed to emerge. These included carbon-carbon as primary structure along with alloys of beryllium rated for temperatures well above 1,000 °F. The latter, if available, might have led to a primary structure with the strength and temperature resistance of Beta-21S but with less than half the weight. Indeed, such weight savings would have ramified throughout the entire design, leading to a configuration that would have been smaller and lighter overall.

Generally, work with materials fell well short of its goals. In dealing with structures and materials, the contractors and the National Program Office established 19 program milestones that were to be accomplished by September 1993. A General Accounting Office program review, issued in December 1992, noted that only six of them would indeed be completed.[1100] This slow progress encouraged conservatism in drawing up the bill of materials, but this conservatism carried a penalty.

When the scramjets faltered in their calculated performance and the X-30 gained weight while falling short of orbit, designers lacked recourse to new and very light materials, such as beryllium and carbon-carbon, that might have saved the situation. With this, NASP spiraled to its end. The future belonged to other less ambitious but more attainable programs, such as the X-43 and X-51. They, too, would press the frontier of aerothermodynamic structural design as they pioneered the hypersonic frontier.

The Lightweight Fighter Program and the YF-16

In addition to the NASA F-8 DFBW program, several other highly noteworthy efforts involving the use of computer-controlled fly-by-wire flight control technology occurred during the 1970s. The Air Force had initiated the Lightweight Fighter program in early 1972. Its purpose was "to determine the feasibility of developing a small, light-weight, low-cost fighter, to establish what such an aircraft can do, and to evaluate its possible operational feasibility.”[1167] The LWF effort was focused on demonstrating technologies that provided a direct contribution to performance, were of moderate risk (but sufficiently advanced to require prototyping to reduce risk), and helped hold both procurement and operating costs down. Two companies, General Dynamics (GD) and Northrop, were selected, and each was given a contract to build two flight-test prototypes. These would be known as the YF-16 and the YF-17. In its YF-16 design, GD chose to use an analog-computer-based quadruplex fly-by-wire flight control system with no mechanical backup. The aircraft had been designed with a negative longitudinal static stability margin of between 7 percent and 10 percent in subsonic flight—this indicated that its center of gravity was aft of the aerodynamic center by a distance of 7 to 10 percent of the mean aerodynamic chord of the wing. A high-speed, computer-controlled fly-by-wire flight control system was essential to provide the artificial stability that made the YF-16 flyable. The aircraft also incorporated electronically activated and electrically actuated leading edge maneuvering flaps that were automatically configured by the flight control system to optimize lift-to-drag ratio based on angle of attack, Mach number, and aircraft pitch rate. A side stick controller was used in place of a conventional control column.[1168]
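The static-margin figure quoted above can be illustrated with a short calculation. The convention is simple: static margin is the distance from the center of gravity to the aerodynamic center, expressed as a fraction of the mean aerodynamic chord, with a negative value meaning the center of gravity lies aft of the aerodynamic center. The positions and chord length in this sketch are hypothetical, chosen only to reproduce a 10-percent negative margin; they are not YF-16 design data.

```python
def static_margin(x_ac: float, x_cg: float, mac: float) -> float:
    """Static margin as a fraction of mean aerodynamic chord (MAC).

    Positive => aerodynamic center aft of the CG (statically stable).
    Negative => CG aft of the aerodynamic center, as on the YF-16
    in subsonic flight.
    """
    return (x_ac - x_cg) / mac

# Hypothetical stations (feet from the nose), chosen so the CG sits
# 10 percent of MAC aft of the aerodynamic center.
mac = 11.0            # assumed mean aerodynamic chord, ft
x_ac = 20.0           # assumed aerodynamic center station, ft
x_cg = 21.1           # CG 1.1 ft aft of the AC = 10% of an 11 ft MAC

sm = static_margin(x_ac, x_cg, mac)
print(f"static margin = {sm:+.1%}")   # -> static margin = -10.0%
```

With a margin this far negative, any pitch disturbance grows rather than damps out, which is why the quadruplex fly-by-wire system had to apply continuous corrective commands faster than a pilot could.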

Following an exceptionally rapid development effort, the first of the two highly maneuverable YF-16 technology demonstrator aircraft (USAF serial No. 72-1567) officially first flew in February 1974, piloted by General Dynamics test pilot Phil Oestricher. However, an unintentional first flight had actually occurred several weeks earlier, an event discussed in a following section as it relates to developmental issues with the YF-16 fly-by-wire flight control system. During its development, NASA had provided major assistance to GD and the Air Force on the YF-16 in many technical areas. Fly-by-wire technology and the side stick controller concept originally developed by NASA were incorporated in the YF-16 design. The NASA Dryden DFBW F-8 was used as a flight testbed to validate the YF-16 side stick controller design. NASA Langley also helped solve numerous developmental challenges involving aerodynamics and control laws for the fly-by-wire flight control system. The aerodynamic configuration had been in development by GD since 1968. Initially, a sharp-edged strake fuselage forebody had been eliminated from consideration because it led to flow separation; however, rounded forward fuselage cross sections caused significant directional instability at high angles of attack. NASA aerodynamicists conducted wind tunnel tests at NASA Langley that showed the vortexes generated by sharp forebody strakes produced a more stable flow pattern with increased lift and improved directional stability. This and NASA research into leading- and trailing-edge flaps were used by GD in the development of the final YF-16 configuration, which was intensively tested in the Langley Full-Scale Wind Tunnel at high angle-of-attack conditions.[1169]

During NASA wind tunnel tests, deficiencies in stability and control, deep stall, and spin recovery were identified even though GD had predicted the configuration to be controllable at angles of attack up to 36 degrees. NASA wind tunnel testing revealed serious loss of directional stability at angles of attack higher than 25 degrees. As a result, an automatic angle of attack limiter was incorporated into the YF-16 flight control system along with other changes designed to address deep stall and spin issues. Ensuring adequate controllability at higher angles of attack also required further research on the ability of the YF-16’s fly-by-wire flight control system to automatically limit certain other flight parameters during energetic air combat maneuvering. The YF-16’s all-moving horizontal tails provided pitch control and also were designed to operate differentially to assist the wing flaperons in rolling the aircraft. The ability of the horizontal tails and longitudinal control system to limit the aircraft’s angle of attack during maneuvers with high roll rates at low airspeeds was critically important. Rapid rolling maneuvers at low airspeeds and high angles of attack were found to create large nose-up trim changes because of inertial effects at the same time that the aerodynamic effectiveness of the horizontal tails was reduced.[1170]

An important aspect of NASA’s support to the YF-16 flight control system development involved piloted simulator studies in the NASA Langley Differential Maneuvering Simulator (DMS). The DMS provided a realistic means of simulating two aircraft or spacecraft operating with (or against) each other (for example, spacecraft conducting docking maneuvers or fighters engaged in aerial combat against each other). The DMS consisted of two identical fixed-base cockpits and projection systems, each housed inside a 40-foot-diameter spherical projection screen. Each projection system consisted of a sky-Earth projector to provide a horizon reference and a system for target-image generation and projection. The projectors and image generators were gimbaled to allow visual simulation with completely unrestricted freedom of motion. The cockpits contained typical fighter cockpit instruments, a programmable buffet mechanism, and programmable control forces, plus a g-suit that activated automatically during maneuvering.[1171] Extensive evaluations of the YF-16 flight control system were conducted in the DMS using pilots from NASA, GD, and the Air Force, including those who would later fly the aircraft. These studies verified the effectiveness of the YF-16 fly-by-wire flight control system and helped to identify critical flight control system components, timing schedules, and feedback gains necessary to stabilize the aircraft during high angle-of-attack maneuvering. As a result, gains in the flight control system were modified, and new control elements—such as a yaw rate limiter, a rudder command fadeout, and a roll rate limiter—were developed and evaluated.[1172]

Despite the use of the DMS and the somewhat similar GD Fort Worth domed simulator to develop and refine the YF-16 flight control system, nearly all flight control functions, including roll stick force gradient, were initially too sensitive. This contributed to the unintentional YF-16 first flight by Phil Oestricher at Edwards AFB on January 20, 1974. The intent of the scheduled test mission on that day was to evaluate the aircraft’s pretakeoff handling characteristics. Oestricher rotated the YF-16 to a nose-up attitude of about 10 degrees when he reached 130 knots, with the airplane still accelerating slightly. He made small lateral stick inputs to get a feel for the roll response but initially got no response, presumably because the main gear were still on the ground. At that point, he slightly increased angle of attack, and the YF-16 lifted off the ground. The left wing then dropped rather rapidly. After a right roll command was applied, the aircraft went into a high-frequency pilot-induced oscillation. Before the roll oscillation could be stopped, the aft fin of the inert AIM-9 missile on the left wingtip lightly touched the runway, the right horizontal tail struck the ground, and the aircraft bounced on its landing gear several times, sending the YF-16 toward the edge of the runway. Oestricher decided to take off, believing it impossible to stay on the runway. He touched down 6 minutes later and reported: "The roll control was too sensitive, too much roll rate as a function of stick force. Every time I tried to correct the oscillation, I got full-rate roll.” The roll control sensitivity problem was corrected with adjustments to the control gain logic. Stick force gradients and control gains continued to be refined during the flight-test program, with the YF-16 subsequently demonstrating generally excellent control characteristics.
Oestricher later said that the YF-16 control problem would have been discovered before the first flight if better visual displays had been available for flight simulators in the early 1970s.[1173] Lessons from the YF-16 and DFBW F-8 simulation experiences helped NASA, the Air Force, and industry refine the way that preflight simulation was structured to support new fly-by-wire flight control systems development. Another flight control issue that arose during the YF-16 flight-test program involved an instability caused by interaction of the active fly-by-wire flight control system with the aeroelastic properties of the airframe. Flutter analysis had not accounted for the effects of active flight control. Closed-loop control systems testing on the ground had used simulated aircraft dynamics based on a rigid airframe modeling assumption. In flight, the roll sensors detected aeroelastic vibrations in the wings, and the active flight control system attempted to apply corrective roll commands. However, at times these actually amplified the airframe vibrations. This problem was corrected by reducing the gain in the roll control loop and adding a filter in the feedback path that suppressed the high-frequency signals from structural vibrations. The fact that this problem was also rapidly corrected added confidence in the ability of the fly-by-wire flight control system to be reconfigured. Another change made as a result of flight test was to fit a modified side stick controller that provided the pilot with some small degree of motion (although the control inputs to the flight control system were still determined by the amount of force being exerted on the side stick, not by its position).[1174]
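The fix described above — a lower roll-loop gain plus a feedback filter that rejects structural-vibration frequencies — can be sketched in simplified form. Everything here (the sampling rate, the 40 Hz wing-bending frequency, the gain value, and the first-order low-pass filter itself) is a hypothetical stand-in; the source does not specify the YF-16's actual filter design.

```python
import math

def low_pass(samples, dt, cutoff_hz):
    """Discrete first-order low-pass filter (exponential smoothing).

    Passes slow rigid-body motion while attenuating content well above
    the cutoff frequency.
    """
    alpha = dt / (dt + 1.0 / (2.0 * math.pi * cutoff_hz))
    out, y = [], samples[0]
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

dt = 0.001                      # 1 kHz sensor sampling (assumed)
t = [i * dt for i in range(2000)]

# Measured roll rate: a slow 0.5 Hz rigid-body motion plus a 40 Hz
# wing-bending vibration picked up by the roll sensors (hypothetical).
p = [10.0 * math.sin(2 * math.pi * 0.5 * ti)
     + 3.0 * math.sin(2 * math.pi * 40.0 * ti) for ti in t]

# Filter the feedback signal, then apply a reduced loop gain so the
# corrective command no longer chases (and amplifies) the vibration.
filtered = low_pass(p, dt, cutoff_hz=4.0)
gain = 0.5                      # reduced roll-loop gain (assumed)
command = [-gain * y for y in filtered]
```

A first-order filter with a 4 Hz cutoff attenuates the 40 Hz component by roughly a factor of ten while passing the 0.5 Hz rigid-body motion nearly unchanged, which is the essence of why the filtered loop stops exciting the structural mode.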

Three days after its first official flight on February 2, 1974, the YF-16 demonstrated supersonic windup turns at Mach 1.2. By March 11, it had flown 20 times and achieved Mach 2.0 in an outstanding demonstration of the high systems reliability and excellent performance that could be achieved with a fly-by-wire flight control system. By the time the 12-month flight-test program ended January 31, 1975, the two YF-16s had flown a total of 439 flight hours in 347 flights, with the YF-16 Joint Test Force averaging over 30 sorties per month. Open communications between NASA, the Air Force, and GD had been critical to the success of the YF-16 development program. In highlighting this success, Harry J. Hillaker, GD Vice President and Deputy Program Director for the F-16, noted the vital importance of the "free exchange of experience from the U.S. Air Force Laboratories and McDonnell-Douglas 680J projects on the F-4 and from NASA’s F-8 fly-by-wire research program.”[1175] The YF-16 would serve as the basis for the extremely successful family of F-16 multinational fighters; over 4,400 were delivered from assembly lines in five countries by 2009, and production is expected to continue to 2015. While initial versions of the production F-16 (the A and B models) used analog computers, later versions (starting with the F-16C) incorporated digital computers in their flight control systems.[1176] Fly-by-wire and relaxed static stability gave the F-16 a major advantage in air combat capability over conventional fighters when it was introduced, and this technology still makes it a viable competitor today, 35 years after its first flight.

The F-16’s main international competition for sales at the time was another statically unstable full fly-by-wire fighter, the French Dassault Mirage 2000, which first flew in 1978. Despite the F-16 being selected for European coproduction, over 600 Mirage 2000s would also eventually be built and operated by a number of foreign air forces. The other technology demonstrator developed under the LWF program was the Northrop YF-17. It featured a conventional mechanical/hydraulic flight control system and was statically stable. When the Navy decided to build the McDonnell-Douglas F/A-18, the YF-17 was scaled up to meet fleet requirements. Positive longitudinal static stability was retained, and a primary fly-by-wire flight control system was incorporated into the F/A-18’s design. The flight control system also had an electric backup that enabled the pilot to transmit control inputs directly to the control surfaces, bypassing the flight control computer but using electrical rather than mechanical transmission of signals. A second backup provided a mechanical linkage to the horizontal tails only. These backup systems were possible because the F/A-18, like the YF-17, was statically stable about all axes.[1177]