
Emergent Hypersonic Technology and the Onset of the Missile Era

The ballistic missile and atomic bomb became realities within a year of each other. At a stroke, the expectation arose that one might increase the range of the former to intercontinental distance and, by installing an atomic tip, generate a weapon—and a threat—of almost incomprehensible destructive power. But such visions ran afoul of perplexing technical issues involving rocket propulsion, guidance, and reentry. Engineers knew they could do something about propulsion, but guidance posed a formidable challenge. MIT's Charles Stark Draper was pursuing inertial guidance, but he could not come close to meeting the Air Force requirement, which set an allowed miss distance of only 1,500 feet at a range of 5,000 miles for a ballistic missile warhead.[554]

Reentry posed an even more daunting prospect. A reentering 5,000-mile-range missile would reach 9,000 kelvins, hotter than the solar surface, while its kinetic energy would vaporize five times its weight in iron.[555] Rand Corporation studies encouraged Air Force and industry missile work. Convair engineers, working under Karel J. "Charlie" Bossart, began development of the Atlas ICBM in 1951. Even with this seemingly rapid implementation of the ballistic missile idea, time scales remained long. As late as October 1953, the Air Force declared that it would not complete research and development until "sometime after 1964."[556]
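A back-of-the-envelope comparison, sketched below in Python, conveys the scale of the problem. The reentry speed and the thermodynamic constants for iron are representative values assumed here for illustration only; the exact multiple depends on which terms are charged against the heat budget, but the kinetic energy comes out at several times the energy needed to vaporize the same mass of iron.

```python
# Back-of-the-envelope comparison: kinetic energy per kilogram of a reentering
# warhead versus the heat needed to vaporize a kilogram of iron.
# All constants are representative values assumed for illustration.

v = 7000.0                      # reentry speed, m/s (order of magnitude for a 5,000-mile missile)
kinetic_energy = 0.5 * v**2     # J per kg of vehicle mass

# Approximate heat budget to take 1 kg of iron from room temperature to vapor:
sensible_heat = 800.0 * (3100.0 - 300.0)   # mean specific heat (J/kg*K) times temperature rise (K)
latent_fusion = 2.5e5                      # J/kg
latent_vaporization = 6.1e6                # J/kg
heat_to_vaporize = sensible_heat + latent_fusion + latent_vaporization

print(f"kinetic energy per kg : {kinetic_energy/1e6:5.1f} MJ")
print(f"heat to vaporize iron : {heat_to_vaporize/1e6:5.1f} MJ per kg")
print(f"ratio                 : ~{kinetic_energy/heat_to_vaporize:.1f}x")
```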

Matters changed dramatically after the Castle Bravo nuclear test of March 1, 1954, which demonstrated a weaponizable 15-megaton H-bomb, fully 1,000 times more powerful than the atomic bomb that had devastated Hiroshima less than a decade previously. The "Teapot Committee," chaired by the Hungarian émigré mathematician John von Neumann, had anticipated success with Bravo and with similar tests. Echoing Bruno Augenstein of the Rand Corporation, the Teapot group recommended that the Atlas miss distance be relaxed "from the present 1,500 feet to at least two, and probably three, nautical miles."[557] This was feasible because the new H-bomb had such destructive power that such a "miss" distance seemed irrelevant. The Air Force leadership concurred, and only weeks after the Castle Bravo shot, in May 1954, Vice Chief of Staff Gen. Thomas D. White granted Atlas the service's highest developmental priority.

It is well known that for any truly blunt body, the bow shock wave is detached and there exists a stagnation point at the nose. Consider conditions at this point and assume that the local radius of curvature of the body is a (see sketch).

The bow shock wave is normal to the stagnation streamline and converts the supersonic flow ahead of the shock to a low subsonic speed flow at high static temperature downstream of the shock. Thus, it is suggested that conditions near the stagnation point may be investigated by treating the nose section as if it were a segment of a sphere in a subsonic flow field.

Extract of text from NACA Report 1381 (1953), in which H. Julian Allen and Alfred J. Eggers postulated using a blunt-body reentry shape to reduce surface heating of a reentry body. NASA.

But there remained the thorny problem of reentry. Only recently, most people had expected an ICBM nose cone to possess the needle-nose sharpness of futurist and science fiction imagination. The realities of aerothermodynamic heating at near-orbital speeds dictated otherwise. In 1953, NACA Ames aerodynamicists H. Julian Allen and Alfred Eggers concluded that an ideal reentry shape should be bluntly rounded, not sharply streamlined. A sharp nose produced a very strong attached shock wave, resulting in high surface heating. In contrast, a blunt nose generated a detached shock standing much further off the nose surface, allowing the airflow to carry away most of the heat. What heating remained could be alleviated via radiative cooling or by using hot structures and high-temperature coatings.[558]
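The quantitative heart of the blunt-body argument can be illustrated with a stagnation-point convective-heating correlation of the Sutton-Graves form, in which heat flux falls as the nose radius grows. The constant, density, and velocity below are illustrative assumptions for a generic reentry condition, not values from the Allen-Eggers report.

```python
import math

# Stagnation-point convective heating sketch (Sutton-Graves form):
#   q_s = k * sqrt(rho / R_n) * V**3
# The key point: heat flux falls as nose radius R_n grows, which is why a
# blunt shape runs cooler than a sharp one. Inputs are illustrative only.

K_EARTH = 1.7415e-4          # commonly quoted Earth-entry constant, SI units
rho = 3.0e-4                 # freestream density near 60 km altitude, kg/m^3 (assumed)
velocity = 6500.0            # reentry speed, m/s (assumed)

def stagnation_heat_flux(nose_radius_m: float) -> float:
    """Convective stagnation-point heat flux in W/m^2."""
    return K_EARTH * math.sqrt(rho / nose_radius_m) * velocity**3

for r in (0.01, 0.1, 0.5, 1.0):   # sharp spike vs. increasingly blunt noses
    q = stagnation_heat_flux(r)
    print(f"nose radius {r:4.2f} m -> q_s ~ {q/1e4:7.1f} W/cm^2")
```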

There was need for experimental verification of blunt body theory, but the hypersonic wind tunnel, previously so useful, was suddenly inadequate, much as the conventional wind tunnel a decade earlier had been inadequate for obtaining the fullest understanding of transonic flows. As the slotted-throat tunnel had remedied that earlier shortcoming, so now a new research tool, the shock tube, emerged for hypersonic studies. Conceived by Arthur Kantrowitz, a Langley veteran working at Cornell, the shock tube enabled far closer simulation of hypersonic pressures and temperatures. From the outset, Kantrowitz aimed at orbital velocity, writing in 1952 that "it is possible to obtain shock Mach numbers in the neighborhood of 25 with reasonable pressures and shock tube sizes."[559]
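A quick perfect-gas calculation suggests why a shock Mach number near 25 mattered: the static-temperature jump across such a shock is enormous, far beyond what a conventional tunnel could sustain. The sketch below assumes the ideal-gas normal-shock relation and a 300 K driven gas; real air dissociates and ionizes at these conditions, so actual temperatures are much lower than the perfect-gas value.

```python
# Ideal-gas normal-shock relations, used here only to show why a shock Mach
# number near 25 is interesting: the static temperature jump is enormous.
# Real air dissociates and ionizes, so these perfect-gas numbers overstate
# the actual temperature; they are illustrative only.

gamma = 1.4      # ratio of specific heats for cold air
T1 = 300.0       # driven-gas temperature ahead of the shock, K (assumed)

def temperature_ratio(M: float, g: float = gamma) -> float:
    """Static temperature ratio T2/T1 across a normal shock of Mach M."""
    return ((2 * g * M**2 - (g - 1)) * ((g - 1) * M**2 + 2)) / ((g + 1)**2 * M**2)

for M in (5, 10, 25):
    ratio = temperature_ratio(M)
    print(f"Ms = {M:2d}: T2/T1 = {ratio:7.1f}, T2 ~ {ratio * T1:8.0f} K (perfect gas)")
```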

Despite the advantages of blunt body design, the hypersonic environment remained so extreme that it was still necessary to furnish thermal protection to the nose cone. The answer was ablation: covering the nose with a lightweight coating that melts and flakes off to carry away the heat. Wernher von Braun's U.S. Army team invented ablation while working on the Jupiter intermediate-range ballistic missile (IRBM), though General Electric scientist George Sutton, working on Air Force programs, made particularly notable contributions. The Air Force built and successfully protected a succession of ICBMs: Atlas, Titan, and Minuteman.[560]


A Jupiter IRBM launches from Cape Canaveral on May 18, 1958, on an ablation reentry test. U.S. Army.

Flight tests were critical for successful nose cone development, and they began in 1956 with launches of the multistage Lockheed X-17. It rose high into the atmosphere before firing its final test stage back at Earth, ensuring a high heat load, as the test nose cone would typically attain velocities of at least Mach 12 at only 40,000 feet. This was half the speed of a satellite, at an altitude typically traversed by today's subsonic airliners. In the pre-ablation era, the warheads typically burned up in the atmosphere, making the X-17 effectively a flying shock tube whose nose cones only lived long enough to return data by telemetry. Yet out of such limited beginnings (analogous to the rudimentary test methodologies of the early transonic and supersonic era just a decade previously) came a technical base that swiftly resolved the reentry challenge.[561]

Tests followed with various Army and Air Force ballistic missiles. In August 1957, a Jupiter-C (an uprated Redstone) returned a nose cone after a flight of 1,343 miles. President Dwight D. Eisenhower subsequently showed it to the public during a TV appearance that sought to bolster American morale a month after Sputnik had shocked the world. Two Thor-Able flights went to 5,500 miles in July 1958, though both of their nose cones were lost at sea. But the agenda also included Atlas, which first reached its full range of 6,300 miles in November 1958. Two nose cones built by GE, the RVX-1 and -2, flew subsequently as payloads. An RVX-2 flew 5,000 miles in July 1959 and was recovered, thereby becoming the largest object yet brought back. Attention now turned to a weaponized nose cone shape, GE's Mark 3. Flight tests began in October, with this nose cone entering operational service the following April.[562]

Success in reentry was now a reality, yet much more remained for the future. The early nose cones were symmetric, which gave good ballistic characteristics but made no provision for significant aerodynamic maneuver and cross-range. The military sought both as a means of achieving greater operational flexibility. An Air Force experimental uncrewed lifting body design, the Martin SV-5D (X-23) PRIME, made three flights between December 1966 and April 1967, lofted over the Pacific Test Range by modified Atlas boosters. The first flew 4,300 miles, maneuvering in pitch (but not in cross-range), and missed its target aim point by only 900 feet. The third mission demonstrated a turning cross-range of 800 miles, the SV-5D impacting within 4 miles of its aim point and subsequently being recovered.[563]

Other challenges remained. These included piloted return from the Moon, reusable thermal protection for the Shuttle, and planetary entry into the Jovian atmosphere, which was the most demanding of all. Even so, by the time of PRIME in 1967, the reentry problem had been resolved, manifested by the success of both ballistic missile nose cone development and the crewed spacecraft effort. The latter was arguably the most significant expression of hypersonic competency until the return to Earth from orbit by the Space Shuttle Columbia in 1981.

Load Feedback for Flight Controls: Imitating the Birds

Among their many distinctive attributes, birds possess a capability not shared by humans: they are continuously aware of the loads their wings and control feathers bear, and they can adjust wing shape to alleviate or redistribute these loads in real time. This allows a bird to optimize its wing shape across its entire range of flight; for example, a different wing shape for low-speed soaring than for high-speed cruising. Humans are not so fortunate. In the earliest days of flight, most aircraft designers consciously emulated the design of birds for both the planform and airfoil cross section of wings. Indeed, the frail fabric and wood structure of thin wings used by pioneers such as the Wright brothers, Louis Bleriot, the Morane brothers, and Anthony Fokker permitted use of aeroelastic wing-warping (twisting) of a wing to bank an airplane, until superseded by the invention of the pivoted aileron. Naturally, when thicker wings appeared, the option of wing-warping became a thing of the past, not revived until the much later jet age and the era of thin composite structures.

For human-created flight, structural loads can be measured via strain gages, and, indeed, the YF-16 utilized strain gages on the main wing spar to adjust the g limiter in the control laws for various fuel loadings and external store configurations. Though the system worked and showed great promise, General Dynamics and the Air Force abandoned this approach for the production F-16 out of concern over the relatively low reliability of the strain gages. The technology has still not evolved to the point where designers are willing to feed strain gage outputs directly into the flight control system in a load-feedback manner.[711] This technology will certainly continue to mature, and wing shaping based on measured loads will evolve.
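The concept can be sketched as a simple limiter that scales back the commanded load factor as measured wing strain approaches a structural allowable. The function below is a notional illustration of that idea, with invented gains and thresholds; it is not the YF-16 mechanization.

```python
# Notional sketch of load-feedback limiting: measured wing-root strain is
# compared against a structural allowable, and the commanded load factor is
# scaled back as the measured load approaches the limit. Gains, thresholds,
# and the example numbers are invented for illustration only.

def limited_g_command(pilot_g_command: float,
                      measured_strain: float,
                      allowable_strain: float,
                      baseline_g_limit: float = 9.0) -> float:
    """Return a load-factor command reduced as measured strain nears the allowable."""
    utilization = min(measured_strain / allowable_strain, 1.0)
    # Shrink the available g limit linearly once utilization exceeds 80 percent.
    if utilization > 0.8:
        g_limit = baseline_g_limit * (1.0 - 0.5 * (utilization - 0.8) / 0.2)
    else:
        g_limit = baseline_g_limit
    return min(pilot_g_command, g_limit)

# Example: a heavy external-stores configuration driving strain toward the limit.
print(limited_g_command(pilot_g_command=9.0, measured_strain=0.95, allowable_strain=1.0))
```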

The NASA-Air Force Transonic Aircraft Technology (TACT) program, a joint cooperative effort from 1969 to 1988 between the Langley, Ames, and Dryden Centers and the Air Force Flight Dynamics Laboratory, led to the first significant test of a so-called mission-adaptive wing (MAW), one blending a Langley-designed flexible supercritical wing planform with complex hydraulic mechanisms that could vary its shape in flight. Installed on an F-111A testbed, the MAW could "recontour" itself from a thick supercritical low-speed airfoil section suitable for transonic performance to a thinner symmetrical section ideal for supersonic flight.[712] The MAW, a "first generation" approach to flexible skin and wing shaping, inspired follow-on work including tests by NASA Dryden on its Systems Research Aircraft, a McDonnell-Douglas (now Boeing) F/A-18B Hornet attack fighter, using wing deformation as a means of achieving transonic and supersonic roll control.[713]

NASA DFRC is continuing its research on adaptive wing shapes and airfoils to improve efficiency in various flight environments. Thus, over a century after the Wrights first flew a bird-imitative wing-warping airplane at Kitty Hawk, wing-warping has returned to aeronautics, in a "back to the future, back to nature" revival of the technique used by the Wright brothers (and birds) to bank and turn. This cutting-edge technology is not yet in use on any operational airplanes, but it is only a matter of time before such performance-enhancing features increase the efficiency of future military and civilian aircraft.

Ablation Cooling

Another potential method for dispersing heat during high-speed flight was the application of an "ablation" material to the outer surface of the structure. An ablator is a material, applied to the outside of a vehicle, that burns or chars when exposed to high temperature, thus carrying away much of the associated heat and hot gases. Ablators are quite efficient for short-duration, one-time entries such as an intercontinental ballistic missile (ICBM) nose cone. Ablators were also used on the early crewed orbiting capsules (Mercury, Gemini, and Apollo), which used ballistic or semiballistic entry trajectories with relatively short peak heating exposure times. They seemed to offer special promise for lifting bodies, with developers hoping to build classes of aluminum-structured spacecraft that could have a cheap, refurbishable ablative coating reapplied after each flight. Indeed, on April 19, 1967, the Air Force did fly and recover one such subscale experimental vehicle, the Mach 27 Martin SV-5D (X-23) Precision Recovery Including Maneuvering Entry (PRIME), lofted over the Pacific Test Range by a modified Atlas ballistic missile.[751]
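The sizing logic behind an ablator can be sketched with a one-line energy balance: incoming heat flux is absorbed by consuming material at a rate set by the ablator's effective heat of ablation. The numbers below are illustrative assumptions, not data for any specific material.

```python
# Simple ablation energy balance: the incoming heat flux is absorbed by
# consuming surface material, at a rate set by the ablator's effective heat
# of ablation Q* (which lumps charring, pyrolysis, and blowing effects).
# All numbers are illustrative assumptions, not data for a specific material.

q_dot = 500.0 * 1e4       # incident heat flux, W/m^2 (500 W/cm^2, illustrative)
Q_star = 25.0e6           # effective heat of ablation, J/kg (typical order for charring ablators)
rho_ablator = 550.0       # ablator density, kg/m^3 (illustrative)
pulse_duration = 60.0     # heating-pulse duration, s

mass_loss_rate = q_dot / Q_star                    # kg/(m^2 s)
recession_rate = mass_loss_rate / rho_ablator      # m/s
total_recession = recession_rate * pulse_duration  # m

print(f"mass loss rate : {mass_loss_rate*1000:.2f} g/(m^2 s)")
print(f"recession rate : {recession_rate*1000:.3f} mm/s")
print(f"total recession: {total_recession*1000:.1f} mm over {pulse_duration:.0f} s")
```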

But for all their merits, ablators are hardly a panacea. Subsonic and transonic testing of several rocket-powered aluminum lifting bodies at NASA’s Flight Research Center showed that this class of vehicle could be landed; however, later analysis indicated that the rough surface of an exposed ablator would probably have reduced the lift and increased the drag so that successful landings would have been questionable.[752]

Flight-test experience with the X-15 confirmed such conclusions. When the decision was made to rebuild the second X-15 after a crash landing, it seemed a perfect opportunity to demonstrate the potential of ablative coatings as a means of furnishing refurbishable thermal protection to hypersonic aircraft and spacecraft. The X-15A-2 was designed to reach Mach 7, absorbing the additional heat load it would experience via MA-25S, a thin Martin-developed silica ablative coating. Coating the aircraft with the MA-25S proved surprisingly time-consuming, as did the refurbishment between flights.

During a flight to Mach 6.7 by Maj. William J. "Pete" Knight, unanticipated heating severely damaged the aircraft, melting a scramjet boilerplate test module off the airplane and burning holes in the external skin. Though Knight landed safely—a great tribute to his piloting skills—the X-15A-2 was in no condition to fly without major repairs. Although the ablator did provide the added protection needed for most of the airplane, the tedious process of applying it and the operational problems associated with repairing and protecting the soft coating were time-consuming and impracticable for an operational military or civilian system.[753] The postentry ablated surface also increased the drag of the airplane by about the same percentage that was observed on the PRIME vehicle. Clearly the X-15A-2's record flight emphasized, as NASA engineer John V. Becker subsequently wrote, "the need for maximum attention to aerothermodynamic detail in design and preflight testing."[754] The "lifting body" concept evolved as a means of using ablative protection for entries of wingless, but landable, vehicles. As a result of the X-15 and lifting body testing by NASA, an ablative coating has not been seriously considered for any subsequent reusable lifting entry vehicle.

NASA and Computational Structural Analysis

David C. Aronstein

NASA research has been pivotal in its support of computational analytical methods for structural analysis and design, particularly through the NASTRAN program. NASA Centers have evolved structural analysis programs tailored to their own needs, such as assessing high-temperature aerothermodynamic structural loading for high-performance aircraft. NASA-developed structural tools have been adopted throughout the aerospace industry and are available on the Agency Web site.

THE FIELD OF COMPUTER METHODS in structural analysis, and the contributions of the National Aeronautics and Space Administration (NASA) to it, are wide-ranging. Nearly every NASA Center has a structural analysis group in some form. These groups conduct research and assist industry in grappling with a broad spectrum of problems. This paper is an attempt to show both aspects: the origins, evolution, and application of NASA Structural Analysis System (NASTRAN), and the variety and depth of other NASA activities and contributions to the field of computational structural methods.

In general terms, the goal of structural analysis is to establish that a product has the required strength and stiffness—structural integrity—to perform its function throughout its intended life. Its strength must exceed the loads to which the product is subjected, by some safety margin, the value of which depends on the application.
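In practice this requirement is tracked as a margin of safety for each structural member. The sketch below shows the conventional bookkeeping, with made-up loads and allowables; the 1.5 factor of safety is the customary airframe value, though the applicable factor depends on the regulation and application.

```python
# Margin-of-safety bookkeeping in the form commonly used in airframe stress work:
#   MS = allowable / (applied * factor_of_safety) - 1
# A non-negative margin means the member meets the requirement.
# The loads and allowables below are made-up illustrative numbers.

def margin_of_safety(allowable: float, applied: float, factor_of_safety: float = 1.5) -> float:
    return allowable / (applied * factor_of_safety) - 1.0

cases = {
    "wing root bending (limit load)": (480e3, 290e3),   # allowable, applied (N*m)
    "landing gear drag brace":        (150e3, 105e3),
}

for name, (allowable, applied) in cases.items():
    ms = margin_of_safety(allowable, applied)
    status = "OK" if ms >= 0.0 else "REDESIGN"
    print(f"{name:32s} MS = {ms:+.2f}  [{status}]")
```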

With aircraft, loads derive from level flight, maneuvering flight, gusts, landings, engine thrust and torque, vibration, temperature and pressure differences, and other sources. Load cases may be specified by regulatory agency, by the customer, and/or by the company practice and experience. Many of the loads depend on the weight of the aircraft, and the weight in turn depends on the design of the structure. This makes the structural design process iterative. Because of this, and also because a large fraction of an aircraft's weight is not actually accounted for by primary structure, initial weight estimates are usually based on experience rather than on a detailed buildup of structural material. A sizing process must be performed to reconcile the predicted empty weight and its relationship to the assumed maximum gross weight, with the required payload, fuel, and mission performance.[786]

After the sizing process has converged, the initial design is documented in the form of a three-view drawing with supporting data. From there, the process is approximately as follows:

• The weights group generates an initial estimate of the weights of the major airframe components.

• The loads group analyzes the vehicle at the defined condition(s) to determine forces, bending moments, etc., in the major components and interfaces.

• The structures group defines the primary load paths and sizes the primary structural members to provide the required strength.

• Secondary load paths, etc., are defined to the required level of detail.

Process details vary between different organizations, but at some point, the structural definition reaches a level of maturity that enables a check of the initial weight estimate. Then the whole design may be iterated, if required. Iteration may also be driven by maturing requirements or by evolution in other aspects of the design, e.g., aerodynamics, propulsion, etc.
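The iteration described above can be reduced to a simple fixed-point loop: empty weight is estimated as a function of gross weight, and gross weight is recomputed until the two are consistent. The empty-weight relation and mission numbers below are invented for illustration only.

```python
# Minimal sketch of the iterative sizing loop described above: structural
# (and other empty) weight is estimated as a function of gross weight, and
# gross weight is recomputed until the two are consistent. The empty-weight
# relation and mission weights are illustrative assumptions only.

payload = 2500.0        # kg
mission_fuel_fraction = 0.30

def empty_weight(gross_weight: float) -> float:
    """Empirical-style empty-weight estimate (illustrative regression)."""
    return 1.2 * gross_weight ** 0.93

gross = 10000.0         # initial guess, kg
for iteration in range(200):
    fuel = mission_fuel_fraction * gross
    required_gross = payload + fuel + empty_weight(gross)
    if abs(required_gross - gross) < 1.0:   # converged to within 1 kg
        break
    gross = required_gross

print(f"converged after {iteration} iterations: gross ~ {gross:,.0f} kg, "
      f"empty ~ {empty_weight(gross):,.0f} kg")
```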

Marshall Space Flight Center

Consistent with its mission to develop spacecraft technologies and with its heritage as the site where Wernher von Braun and his team had worked since 1950, Marshall Space Flight Center has always had a strong technical/analytical organization, engaged in science and engineering research as well as advanced design studies. Research areas have included basic finite element methods, shells, fluid-structure systems, and nonlinear structures, as well as quick-turnaround non-FEM methods for early design and feasibility studies.[930]

Applications have usually involved the structural and structural-dynamic problems of launch vehicles. As an example, computational techniques were used to help resolve "pogo" oscillations in both the first and second stages of the Saturn V launch vehicle. As the name implies, the pogo mode is a longitudinal tensile/compressive oscillation. Flight data from the unpiloted flight of the second Saturn V in 1968 showed severe vibrations from 125 to 135 seconds into the first-stage burn. The pogo mode is not always harmful, but in this case, there were concerns that it could upset the guidance system or damage the payload. The structural frequency was dependent on fuel load, and at a certain point in the flight, it would coincide with a natural frequency of the engine/fuel/oxygen system, causing resonance. Using the models to evaluate the effects of various design changes, the working group assigned to the task determined that accumulators in the liquid oxygen (LOX) lines would alter the engine frequency sufficiently to resolve the issue. Subsequently, engineers examining flight data from the Apollo 8, 9, and 13 missions noticed a similar occurrence in the second stage. This was studied and resolved using similar techniques.[931]
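The underlying reasoning can be sketched as a frequency-coincidence check: resonance threatens where the propellant-dependent structural frequency crosses the propulsion-system frequency, and adding accumulators shifts the latter away from the crossing. All frequencies in the sketch are invented placeholders, not Saturn V data.

```python
# Schematic of the pogo-suppression reasoning: resonance threatens when the
# vehicle's longitudinal structural frequency (which shifts as propellant is
# consumed) crosses the propulsion-system frequency. Adding accumulators in
# the LOX lines shifts the propulsion frequency away from the crossing.
# All frequencies below are invented placeholders, not Saturn V data.

def structural_frequency(t: float) -> float:
    """Longitudinal mode frequency in Hz, rising as propellant burns off (illustrative)."""
    return 4.5 + 0.012 * t          # t in seconds after liftoff

def propulsion_frequency(with_accumulator: bool) -> float:
    """Engine/feedline oscillation frequency in Hz (illustrative)."""
    return 4.2 if with_accumulator else 6.1

for with_acc in (False, True):
    f_prop = propulsion_frequency(with_acc)
    crossings = [t for t in range(0, 160) if abs(structural_frequency(t) - f_prop) < 0.05]
    label = "with accumulators" if with_acc else "baseline"
    if crossings:
        print(f"{label:18s}: frequencies coincide near t = {crossings[0]}-{crossings[-1]} s")
    else:
        print(f"{label:18s}: no coincidence during first-stage burn")
```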

The first-stage pogo issue occurred at a point in the Apollo program when time was of the essence in identifying, analyzing, and resolving the problem. The computer models were most likely no more complex than they had to be to solve the problem at hand. Marshall Space Flight Center has continued to develop and use fairly simple codes for early conceptual studies. Simple, quick-turnaround tools developed at Marshall include Cylindrical Optimization of Rings, Skin and Stringers (CORSS, 1994) and the VLOADS launch loads and dynamics program (1997). VLOADS was developed as a Visual BASIC macro in Microsoft Excel. When released in COSMIC in 1997, it was also available in PC format. It was distributed on a single 3.5-inch diskette.[932] This was a remarkable development from the days when the problem of launch vehicle dynamics occupied a sizable fraction of this Nation's computing power!

Like researchers at Langley, Marshall's personnel moved swiftly from single or limited application tools to finding ways to integrate them with other tools and processes and thereby achieve enhanced or previously unattainable capabilities. The Coupled Eulerian Lagrangian Finite Element (CELFE) code, developed collaboratively with NASA Lewis Research Center in 1978, included specialized nonlinear methods to calculate local effects of an impact. It was coupled to NASTRAN for calculation of the far-field response of the structure. Applications included space debris, micrometeor, and foreign object impact studies for aircraft engines.[933] Marshall developed an interface between the PATRAN finite element preprocessor (normally used with NASTRAN) and the NASA Langley STAGS shell analysis code in 1990.[934] Marshall sponsored Southwest Research Institute to develop an interface between Lewis-developed NESSUS probabilistic analysis and NASTRAN in 1996.[935] Both STAGS and NESSUS have been widely used outside NASA. This review of NASA Centers and their work on computational structural analysis has offered only a glimpse of the variety of structural problems that exist and the corresponding variety of methods developed and used at the various NASA Centers and then shared with industry.

Structural Analysis of General Shells (STAGS) (Marshall and Langley, 1960s-present)

Structural Analysis of General Shells (STAGS) evolved from early shell analysis codes developed by Lockheed Palo Alto Research Laboratory and sponsored by the NASA Marshall Space Flight Center between 1963 and 1968, with subsequent development funded primarily by Langley. B. O. "Bo" Almroth of Lockheed was the principal developer. The name STAGS seems to have first appeared around 1970.[998] Thus, initial STAGS development was nearly concurrent with that of NASTRAN. While NASTRAN development aimed to stem the proliferation of analysis codes, and of shell analysis codes in particular, NASTRAN did not initially provide the full capability needed to replace such codes. In particular, STAGS from the beginning included nonlinear capability that was found necessary in the accurate modeling of shells with cutouts. In the mid- to late 1970s, STAGS was released publicly, with user manuals. "Under contract with NASA, STAGS has been converted from being more or less a pure research tool into a code that is suitable for use by the public for practical engineering analysis. Suggestions from NASA-Langley have resulted in considerable enhancement of the code and are to some degree the cause of its increasing popularity. . . . User reaction consistently seems to indicate that the run time with STAGS is surprisingly low in comparison to comparable codes. A STAGS input deck is usually compact and time for its preparation is short."[999] STAGS continued to be enhanced through the 1980s (as STAGS-C1, actually a family of versions), offering unique capabilities for modeling total collapse of structures and problems that bifurcate into multiple possible solutions.[1000] It was apparently popular and widely used. For example, in 1990, Engineering Dynamics, Inc., of Kenner, LA, used STAGS-C1 to model and verify a repair design for a damaged offshore oil platform.[1001]

STAGS Version 5.0 was released in 2006, and STAGS is still used for failure analysis, analysis of damaged structures, and similar problems.[1002]

4) Nonlinear Structures: PANES (1975) and AGGIE-I (1980) (Marshall)

Program for Analysis of Nonlinear Equilibrium and Stability (PANES) was developed for structural problems involving geometric and/or material nonlinear characteristics. AGGIE-I was a more comprehensive code capable of solving larger and more general problems, also involving geometric or material nonlinearities.[1003]

5) Finite Element Modeling of Piping Systems (Stennis)

While Stennis is not active in structural methods research, there have been some activities applying finite element and structural health monitoring techniques to the complex fuel distribution systems at the facility. One such effort was presented at the 27th Joint Propulsion Conference in Sacramento, CA, in 1991: "A set of PC-based computational Dynamic Fluid Flow Simulation models is presented for modeling facility gas and cryogenic systems. . . . A set of COSMIC NASTRAN-based finite element models is also presented to evaluate the loads and stresses on test facility piping systems from fluid and gaseous effects, thermal chill down, and occasional wind loads. The models are based on Apple Macintosh software which makes it possible to change numerous parameters."[1004] NASA was, in this case, its own spinoff technology customer.

Appendix C:

Fly-By-Wire: The Beginnings

The Second World War witnessed the first applications of computer-controlled fly-by-wire flight control systems. With fly-by-wire, primary control surface movements were directed via electrical signals transmitted by wires rather than by the use of mechanical linkages. The German Army's A-4 rocket (the famous V-2 that postwar was the basis for both U.S. and Soviet efforts to move into space) used an electronic analog computer that modeled the differential equations governing the missile's flight control laws. The computer-generated electronic signals were transmitted by wire to direct movement of the actuators that drove graphite vanes located in the rocket motor exhaust. The thrust of the rocket engine was thus vectored as required to stabilize the V-2 missile at lower airspeeds until the aerodynamic control surfaces on the fins became effective.[1107] Postwar, a similar analog computer-controlled fly-by-wire thrust vectoring approach was used in the U.S. Army Redstone missile, perhaps not surprisingly, because Redstone was predominantly designed by a team of German engineers headed by Wernher von Braun of V-2 fame. The Redstone would be used to launch the Mercury space capsule that carried Alan Shepard (the first American into space) in 1961.
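The kind of computation such an analog computer mechanized can be suggested with a simple attitude control law that blends attitude error and rate into a vane deflection command. The gains and limits below are invented for illustration; this is not the actual A-4 control law.

```python
# Notional sketch of the kind of control law an analog flight computer
# mechanizes: a pitch-attitude error and its rate are blended into a
# commanded jet-vane deflection, which vectors thrust to stabilize the
# vehicle at low airspeed. Gains and limits are invented for illustration;
# this is not the actual A-4/V-2 control law.

def vane_command_deg(pitch_error_deg: float,
                     pitch_rate_deg_s: float,
                     k_p: float = 2.0,
                     k_d: float = 0.8,
                     max_deflection_deg: float = 15.0) -> float:
    """Proportional-plus-rate command to the exhaust vanes, with travel limits."""
    command = k_p * pitch_error_deg + k_d * pitch_rate_deg_s
    return max(-max_deflection_deg, min(max_deflection_deg, command))

# Example: vehicle pitched 3 degrees off the programmed attitude and diverging.
print(vane_command_deg(pitch_error_deg=3.0, pitch_rate_deg_s=4.0))  # -> 9.2 degrees
```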

The German Mistel (Mistletoe) composite aircraft of late World War II was probably the first example of the use of fly-by-wire for flight control in a manned aircraft application. Mistel consisted of a fighter (usually a Focke-Wulf FW 190) mounted on a support structure atop a Junkers Ju 88 bomber.[1108] The Ju 88 was equipped with a 3,500-pound warhead and was intended to be flown to the vicinity of its target by the FW 190 pilot, at which time he would separate from the bomber and evade enemy defenses while the Ju 88 flew into its target. Potentiometers at the base of the FW 190 pilot's control stick generated electrical commands that were transmitted via wire through the support structure to the bomber. These electrical commands activated electric motors that moved the system of pushrods leading to the Ju 88 control surfaces.[1109]

Another electronic flight control system innovation related to the fly-by-wire concept had its origins in electronic feedback flight control research that began in Germany in the late 1930s and was published by Ernst Heinkel and Eduard Fischel in 1940. Their research was used in the 1944 development of a directional stability augmentation system for the Luftwaffe's heavily armed and armored Henschel Hs 129 ground attack aircraft to compensate for an inherent Dutch roll[1110] instability that affected strafing accuracy with its large-caliber, low-rate-of-fire antitank cannon.[1111] This consisted of modifying the rudder portion of the flight control system for dual-mode operation. The rudder was split into two sections, with the lower portion directly linked to the pilot's flight controls. The upper section was electromechanically linked to a gyroscopic yaw rate sensor that automatically provided rudder corrections as yawing motions were detected.[1112] This was the first practical aircraft yaw damper. Northrop incorporated electronic stability augmentation devices into its YB-49 flying wing bomber that first flew in late 1947 in an attempt to compensate for serious directional stability problems. After the war, the NACA Ames Aeronautical Laboratory conducted extensive flight research into artificial stability. An NACA-operated Grumman F6F-3 Hellcat was modified to incorporate roll and yaw rate servos that provided stability augmentation, with flight tests beginning in 1948. In the following years, a number of other aircraft were modified by the NACA at Ames for variable stability research, including several variants of the North American F-86.[1113] By the 1950s, most high-performance swept wing jet-powered aircraft were designed with electronic stability augmentation devices.
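The yaw-damper idea is easy to sketch: yaw rate from a gyro is high-pass filtered (a "washout," so steady turns are not opposed) and fed back as a rudder increment. The gains, time constant, and test signal below are illustrative assumptions, not the Hs 129 or any production implementation.

```python
import math

# Minimal yaw-damper sketch: gyro yaw rate is passed through a washout
# (high-pass) filter so steady turns are not opposed, then fed back as a
# rudder increment; this is the same idea as the split-rudder Hs 129 system
# described above. Gains, time constant, and the test signal are assumptions.

class YawDamper:
    def __init__(self, dt: float = 0.02, tau: float = 2.0, gain: float = 0.6):
        self.dt, self.tau, self.gain = dt, tau, gain
        self._low_pass = 0.0    # slowly tracks steady yaw rate, so it gets washed out

    def step(self, yaw_rate_deg_s: float) -> float:
        """Return the damper's rudder increment (deg) for one sample of yaw rate."""
        washed_out = yaw_rate_deg_s - self._low_pass
        self._low_pass += (self.dt / self.tau) * washed_out
        return -self.gain * washed_out

damper = YawDamper()
for i in range(200):                       # 4 seconds of a decaying Dutch-roll-like motion
    t = i * damper.dt
    yaw_rate = 5.0 * math.exp(-0.2 * t) * math.sin(2.0 * math.pi * 0.5 * t)
    rudder = damper.step(yaw_rate)
    if i % 50 == 0:
        print(f"t={t:4.1f} s  yaw rate={yaw_rate:+5.2f} deg/s  rudder={rudder:+5.2f} deg")
```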

Advanced Fighter Technology Integration F-16 Program

The USAF Flight Dynamics Laboratory began the Advanced Fighter Technology Integration program in the late 1970s. Overall objectives of this joint Air Force and NASA research program were to develop and demonstrate technologies and assess alternative approaches for use in future aircraft design. In December 1978, the F-16 was selected for modification as the AFTI/F-16. General Dynamics began conversion of the sixth preproduction F-16A (USAF serial No. 75-0750) at its Fort Worth, TX, factory in March 1980. The aircraft had originally been built in 1978 for the F-16 full-scale development effort. GD built on earlier experience with its F-16 CCV program. The twin canted movable canard ventral fins from the F-16 CCV were installed under the inlet of the AFTI/F-16. In addition, a dorsal fairing was fitted to the top of the fuselage to accommodate extra avionics equipment. A triply redundant, asynchronous, multimode, digital flight control system with an analog backup was installed in the aircraft. The DFCS was integrated with improved avionics and had different control modes optimized for air-to-air combat and air-to-ground attack. The Stores Management System (SMS) was responsible for signaling requests for mode change to the DFCS. Other modifications included provision for a six-degree-of-freedom Automated Maneuvering Attack System (AMAS), a 256-word-capacity Voice-Controlled Interactive Device (VCID) to control the avionics suite, and a helmet-mounted target designation sight that could automatically slave the forward-looking infrared (FLIR) device and the radar to the pilot's head movements.[1182] First flight of the modified aircraft in the AFTI/F-16 configuration occurred on July 10, 1982, from Carswell AFB, TX, with GD test pilot Alex V. Wolfe at the controls. Following contractor testing, the aircraft was flown to Edwards AFB for the AFTI/F-16 test effort. This was organized into two phases; Phase I was a 2-year effort focused on evaluating the DFCS, with a follow-on Phase II oriented to assessing the AMAS and other technologies.

During Phase I, five test pilots from NASA, the Air Force, and the Navy flew the AFTI/F-16 at NASA Dryden in California. NASA.

Digital Electronic Engine Control

NASA pioneered in the development and validation of advanced computer-controlled electronic systems to optimize engine performance across the full flight envelope while also improving reliability. One such system was the Digital Electronic Engine Control (DEEC), whose genesis can be traced back to NASA Dryden work on the integrated flight and engine control system developed and evaluated in a joint NASA-Air Force program that used two Mach 3+ Lockheed YF-12C aircraft. The YF-12C was a cousin of the SR-71 strategic reconnaissance aircraft, and both aircraft used twin Pratt & Whitney J58 afterburning engines. As the SR-71 neared Mach 3, a significant portion of the engine thrust was produced from the supersonic shock wave that was captured within each engine inlet and exited through the engine nozzle. A serious issue with the operational SR-71 fleet was so-called engine inlet unstarts. These occurred when the airflow into the inlet was not properly matched to that of the engine. This caused the standing shock wave normally located in the inlet to be expelled out the front of the SR-71's inlet, causing insufficient pressure and airflow for normal engine operations. The result was a sudden loss of thrust on the affected engine. The resulting imbalance in thrust between the two SR-71 engines caused violent yawing, along with pitching and rolling motions. Studies showed that strong vortexes produced by each of the forward fuselage chines passed directly into the inlets during the yawing motion produced by an unstart. NASA efforts supported development of a computerized automatic inlet sensing and cone control system and helped to optimize the ratio of air passing through the engine to that leaving the inlet through the forward bypass doors. Dryden successfully integrated the engine inlet control, auto-throttle, air data, and navigation functions to improve overall performance, with aircraft range being increased 7 percent. Handling qualities were also improved, and the frequency of engine inlet unstarts was greatly reduced. Pratt & Whitney and the Air Force incorporated the improvements demonstrated by Dryden into the entire SR-71 fleet in 1983.[1257] The Dryden YF-12C made its last NASA flight on October 31, 1979. On November 7, 1979, it was ferried to the Air Force Museum at Wright-Patterson AFB, OH, where it is now on display.[1258]
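The inlet-matching idea lends itself to a very small control-loop sketch: a duct pressure ratio serves as a proxy for shock position, and the forward bypass doors are trimmed toward a scheduled value. The set point, gain, limits, and sign convention below are invented for illustration and are not the YF-12/SR-71 control law.

```python
# Notional sketch of the inlet-matching idea described above: a measured duct
# pressure ratio stands in for shock position, and the forward bypass doors
# are trimmed to hold it near a scheduled value so the terminal shock stays
# swallowed. Set point, gain, limits, and sign convention are illustrative.

def bypass_door_command(duct_pressure_ratio: float,
                        scheduled_ratio: float,
                        current_door_position: float,
                        gain: float = 0.4) -> float:
    """Return an updated bypass-door position (0 = closed, 1 = fully open)."""
    error = duct_pressure_ratio - scheduled_ratio
    # A higher-than-scheduled ratio is treated here as the shock creeping
    # forward, so the doors open to spill more air (sign convention assumed).
    new_position = current_door_position + gain * error
    return max(0.0, min(1.0, new_position))

print(bypass_door_command(duct_pressure_ratio=0.92, scheduled_ratio=0.85,
                          current_door_position=0.20))
```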

The broad objective of the DEEC program, conducted by NASA Dryden between 1981 and 1983, was to demonstrate and evaluate the system on a turbofan engine in a high-performance fighter across its full flight envelope. The program was a joint effort between Dryden, Pratt & Whitney, the Air Force, and NASA Lewis Research Center (now the NASA Glenn Research Center). The DEEC had been commercially developed by Pratt & Whitney based on its experience with the J58 engine during the NASA YF-12 flight research program. It integrated a variety of engine functions to improve performance and extend engine life. The DEEC system was tested on an F100 engine mounted in the left engine bay of a NASA Dryden McDonnell-Douglas F-15 fighter. Engine-mounted and fuel-cooled, the DEEC was a single-channel digital controller. Engine inputs to the DEEC included compressor face static pressure and temperature, fan and core rotation speed, burner pressure, turbine inlet temperature, turbine discharge pressure, throttle position, afterburner fuel flow, and fan and compressor speeds. Using these inputs, the DEEC computer set the variable vanes, positioned the compressor air bleed, controlled gas-generator and augmentor fuel flows, adjusted the augmentor segment-sequence valve, and controlled the exhaust nozzle position. Thirty test missions that accumulated 35.5 flight hours were flown during the 2-year test program, which covered the operational envelope of the F-15 at speeds up to Mach 2.36 and altitudes up to 60,000 feet. The DEEC evaluation included nearly 1,300 throttle and afterburner transients, more than 150 air starts, maximum accelerations and climbs, and the full spectrum of flight maneuvers. An engine nozzle instability that caused stalls and blowouts was encountered when operating in afterburner at high altitudes. This instability had not been predicted in previous computer simulations or during ground-testing in NASA high-altitude test facilities. The instability problem was eventually resolved, and stall-free engine operation was demonstrated across the entire F-15 flight envelope. Faster throttle response, improved engine air-start capability, and an increase of more than 10,000 feet in the altitude that could be attained in afterburner without pilot restrictions on throttle use were achieved.[1259]
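The data flow described above can be summarized in a short sketch: the sensed quantities go in, actuator commands come out. The class names, fields (a subset of those listed), and the placeholder schedules below are illustrative only; they are not the DEEC software or its control laws.

```python
from dataclasses import dataclass

# Data-flow sketch of a single-channel digital engine controller: sensed
# quantities in, actuator commands out. Names, fields, and the trivial
# schedules are illustrative assumptions, not the actual DEEC logic.

@dataclass
class EngineSensors:
    compressor_face_static_pressure: float
    compressor_face_temperature: float
    fan_speed: float                  # normalized 0..1
    core_speed: float                 # normalized 0..1
    burner_pressure: float
    turbine_discharge_pressure: float
    throttle_position: float          # 0.0 idle .. 1.0 maximum afterburner

@dataclass
class EngineCommands:
    variable_vane_angle: float
    compressor_bleed_position: float
    gas_generator_fuel_flow: float
    augmentor_fuel_flow: float
    nozzle_area: float

def control_step(sensors: EngineSensors) -> EngineCommands:
    """One pass of a notional single-channel controller (placeholder schedules)."""
    throttle = sensors.throttle_position
    return EngineCommands(
        variable_vane_angle=20.0 * (1.0 - throttle),
        compressor_bleed_position=1.0 if sensors.fan_speed < 0.3 else 0.0,
        gas_generator_fuel_flow=0.2 + 0.8 * min(throttle, 0.8) / 0.8,
        augmentor_fuel_flow=max(0.0, (throttle - 0.8) / 0.2),
        nozzle_area=0.6 + 0.4 * max(0.0, (throttle - 0.8) / 0.2),
    )

print(control_step(EngineSensors(0.5, 288.0, 0.9, 0.95, 0.8, 0.4, 1.0)))
```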

DEEC-equipped engines were then installed on several operational USAF F-15s for service testing, during which they showed major improvements in reliability and maintainability. Mean time between failures was doubled, and unscheduled engine removals were reduced by a factor of nine. As a result, DEEC-equipped F100 engines were installed in all USAF F-15 and F-16 aircraft. The DEEC was a major event in the history of jet engine propulsion control and represented a significant transition from hydromechanical to digital-computer-based engine control. Following the successful NASA test program, the DEEC went into standard use on F100 engines in the Boeing F-15 and the Lockheed F-16. Pratt & Whitney also incorporated digital engine control technology in turbofan engines used on some Boeing commercial jetliners. The lineage of similar digital engine control units used on other engines can be traced to the results of NASA's DEEC test and evaluation program.[1260]

Energy Efficient Engine Project

Taking everything learned to date by NASA and the industry about making turbomachinery more fuel efficient, the Energy Efficient Engine (E Cubed) project sought to further reduce the airlines' fuel usage and its effect on direct operating costs, while also meeting future FAA regulations and Environmental Protection Agency exhaust emission standards for turbofan engines. Research contracts were awarded to GE and Pratt & Whitney, which initially focused on the CF6-50C and JT9D-7A engines, respectively. The program ran from 1975 to 1983 and cost NASA about $200 million.[1311]

Similar to the goals for the Engine Component Improvement project, the E Cubed goals included a 12-percent reduction in specific fuel consumption (SFC), the rate at which the engine burns fuel for each unit of thrust it produces, much like a miles-per-gallon measurement for automobiles. Other goals of the E Cubed effort included a 5-percent reduction in direct operating costs and a 50-percent reduction in the rate at which the SFC worsens over time as the engine ages. In addition to making these immediate improvements, it was hoped that a new generation of fuel-conservative turbofan engines could be developed from this work.[1312]
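A quick arithmetic sketch shows what a 12-percent SFC improvement means in service; the cruise thrust, baseline SFC, and mission length below are round illustrative numbers, not data for any particular engine.

```python
# Rough illustration of what a 12-percent improvement in specific fuel
# consumption means in service. Cruise thrust, baseline SFC, and mission
# length are made-up round numbers, not data for any particular engine.

baseline_tsfc = 0.65      # lb of fuel per hour, per lb of thrust (illustrative 1970s-era order)
improved_tsfc = baseline_tsfc * (1.0 - 0.12)

cruise_thrust_lb = 8000.0  # per engine at cruise (illustrative)
flight_hours = 5.0

def mission_fuel_lb(tsfc: float) -> float:
    return tsfc * cruise_thrust_lb * flight_hours

saved = mission_fuel_lb(baseline_tsfc) - mission_fuel_lb(improved_tsfc)
print(f"baseline fuel burn : {mission_fuel_lb(baseline_tsfc):8.0f} lb")
print(f"improved fuel burn : {mission_fuel_lb(improved_tsfc):8.0f} lb")
print(f"saved per engine   : {saved:8.0f} lb over a {flight_hours:.0f}-hour cruise")
```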

Highlighting that program was development of a new type of compressor core and an advanced combustor made up of a doughnut-shaped ring with two zones—or domes—of combustion. During times when low power is needed or the engine is idling, only one of the two zones is lit up. For higher thrust levels, including full power, both domes are ignited. By creating a dual combustion option, the amount of fuel being burned can be more carefully controlled, reducing emissions of smoke, carbon monoxide, and hydrocarbons by 50 percent, and nitrogen oxides by 35 percent.[1313]

As part of the development of the new compressor in particular, and the E Cubed and Engine Component Improvement programs in general, the Lewis Research Center developed first-generation computer programs for use in creating the new engine. The software helped engineers with conceptualizing the aerodynamic design and visualizing the flow of gases through the engine. The computer programs were credited with making it possible to design more fuel-efficient compressors with less tip and end-wall pressure losses, higher operating pressure ratios, and the ability to use fewer blades. The compressors also helped to reduce performance deterioration, surface erosion, and damage from bird strikes.[1314]

History has judged the E Cubed program as being highly successful, in that the technology developed from the effort was so promising—and proved to meet the objectives for reducing emissions and increasing fuel efficiency—that both major U.S. jet engine manufacturers, GE and Pratt & Whitney, moved quickly to incorporate the technology into their products. The ultimate legacy of the E Cubed program is found today in the GE90 engine, which powers the Boeing 777. The E Cubed technology is directly responsible for the engine's economical fuel burn, reduced emissions, and low maintenance cost.[1315]