Category AERONAUTICS

Structural Analysis of General Shells (STAGS) (Marshall and Langley, 1960s-present)

Structural Analysis of General Shells (STAGS) evolved from early shell analysis codes developed by Lockheed Palo Alto Research Laboratory and sponsored by the NASA Marshall Space Flight Center between 1963 and 1968, with subsequent development funded primarily by Langley.

B. O. "Bo" Almroth of Lockheed was the principal developer. The name STAGS seems to have first appeared around 1970.[998] Thus, the initial development of STAGS was nearly concurrent with that of NASTRAN. While NASTRAN development aimed to stem the proliferation of analysis codes, and of shell analysis codes in particular, NASTRAN did not initially provide the full capability needed to replace such codes. In particular, STAGS from the beginning included nonlinear capability that was found necessary in the accurate modeling of shells with cutouts. In the mid- to late 1970s, STAGS was released publicly, with user manuals. "Under contract with NASA, STAGS has been converted from being more or less a pure research tool into a code that is suitable for use by the public for practical engineering analysis. Suggestions from NASA Langley have resulted in considerable enhancement of the code and are to some degree the cause of its increasing popularity. . . . User reaction consistently seems to indicate that the run time with STAGS is surprisingly low in comparison to comparable codes. A STAGS input deck is usually compact and time for its preparation is short."[999] STAGS continued to be enhanced through the 1980s (as STAGS-C1, actually a family of versions), offering unique capabilities for modeling total collapse of structures and problems that bifurcate into multiple possible solutions.[1000] It was apparently popular and widely used. For example, in 1990, Engineering Dynamics, Inc., of Kenner, LA, used STAGS-C1 to model and verify a repair design for a damaged offshore oil platform.[1001]

STAGS Version 5.0 was released in 2006, and STAGS is still used for failure analysis, analysis of damaged structures, and similar problems.[1002]

4) Nonlinear Structures: PANES (1975) and AGGIE-I (1980) (Marshall)

Program for Analysis of Nonlinear Equilibrium and Stability (PANES) was developed for structural problems involving geometric and/or material nonlinear characteristics. AGGIE-I was a more comprehensive code capable of solving larger and more general problems, also involving geometric or material nonlinearities.[1003]

5) Finite Element Modeling of Piping Systems (Stennis)

While Stennis is not active in structural methods research, there have been some activities applying finite element and structural health monitoring techniques to the complex fuel distribution systems at the facility. One such effort was presented at the 27th Joint Propulsion Conference in Sacramento, CA, in 1991: "A set of PC-based computational Dynamic Fluid Flow Simulation models is presented for modeling facility gas and cryogenic systems. . . . A set of COSMIC NASTRAN-based finite element models is also presented to evaluate the loads and stresses on test facility piping systems from fluid and gaseous effects, thermal chill down, and occasional wind loads. The models are based on Apple Macintosh software which makes it possible to change numerous parameters."[1004] NASA was, in this case, its own spinoff technology customer.

Appendix C:

Fly-By-Wire: The Beginnings

The Second World War witnessed the first applications of computer-controlled fly-by-wire flight control systems. With fly-by-wire, primary control surface movements were directed via electrical signals transmitted by wires rather than by the use of mechanical linkages. The German Army's A-4 rocket (the famous V-2 that postwar was the basis for both U.S. and Soviet efforts to move into space) used an electronic analog computer that modeled the differential equations governing the missile's flight control laws. The computer-generated electronic signals were transmitted by wire to direct movement of the actuators that drove graphite vanes located in the rocket motor exhaust. The thrust of the rocket engine was thus vectored as required to stabilize the V-2 missile at lower airspeeds until the aerodynamic control surfaces on the fins became effective.[1107] Postwar, a similar analog computer-controlled fly-by-wire thrust vectoring approach was used in the U.S. Army Redstone missile, perhaps not surprisingly, because Redstone was predominantly designed by a team of German engineers headed by Wernher von Braun of V-2 fame. The Redstone would be used to launch the Mercury space capsule that carried Alan Shepard (the first American into space) in 1961.

The German Mistel (Mistletoe) composite aircraft of late World War II was probably the first example of the use of fly-by-wire for flight control in a manned aircraft application. Mistel consisted of a fighter (usually a Focke-Wulf FW 190) mounted on a support structure on a Junkers Ju 88 bomber.[1108] The Ju 88 was equipped with a 3,500-pound warhead and was intended to be flown to the vicinity of its target by the FW 190 pilot, at which time he would separate from the bomber and evade enemy defenses while the Ju 88 flew into its target. Potentiometers at the base of the FW 190 pilot's control stick generated electrical commands that were transmitted via wire through the support structure to the bomber. These electrical commands activated electric motors that moved the system of pushrods leading to the Ju 88 control surfaces.[1109]

Another electronic flight control system innovation related to the fly-by-wire concept had its origins in electronic feedback flight control research that began in Germany in the late 1930s and was published by Ernst Heinkel and Eduard Fischel in 1940. Their research was used in the 1944 development of a directional stability augmentation system for the Luftwaffe’s heavily armed and armored Henschel Hs 129 ground
attack aircraft to compensate for an inherent Dutch roll[1110] instability that affected strafing accuracy with its large-caliber, low-rate-of-fire antitank cannon.[1111] This consisted of modifying the rudder portion of the flight control system for dual mode operation. The rudder was split into two sections, with the lower portion directly linked to the pilot's flight controls. The upper section was electromechanically linked to a gyroscopic yaw rate sensor that automatically provided rudder corrections as yawing motions were detected.[1112] This was the first practical aircraft yaw damper. Northrop incorporated electronic stability augmentation devices into its YB-49 flying wing bomber that first flew in late 1947 in an attempt to compensate for serious directional stability problems. After the war, the NACA Ames Aeronautical Laboratory conducted extensive flight research into artificial stability. An NACA-operated Grumman F6F-3 Hellcat was modified to incorporate roll and yaw rate servos that provided stability augmentation, with flight tests beginning in 1948. In the following years, a number of other aircraft were modified by the NACA at Ames for variable stability research, including several variants of the North American F-86.[1113] By the 1950s, most high-performance swept wing jet-powered aircraft were designed with electronic stability augmentation devices.
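
In modern control terms, the corrective action of such a yaw damper can be sketched (as an illustrative simplification, not a description of the actual Hs 129 mechanism) as an upper-rudder deflection commanded in proportion to, and opposing, the sensed yaw rate:

\[ \delta_{r,\text{upper}} = -K_r \, r \]

where \(r\) is the yaw rate measured by the gyroscope and \(K_r\) is a fixed gain; later yaw dampers added a washout filter so that steady, pilot-commanded turns would not be resisted.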

Advanced Fighter Technology Integration F-16 Program

The USAF Flight Dynamics Laboratory began the Advanced Fighter Technology Integration program in the late 1970s. Overall objectives of this joint Air Force and NASA research program were to develop and demonstrate technologies and assess alternative approaches for use in future aircraft design. In December 1978, the F-16 was selected for modification as the AFTI/F-16. General Dynamics began conversion of the sixth preproduction F-16A (USAF serial No. 75-0750) at its Fort Worth, TX, factory in March 1980. The aircraft had originally been built in 1978 for the F-16 full-scale development effort. GD built on earlier experience with its F-16 CCV program. The twin canted movable canard ventral fins from the F-16 CCV were installed under the inlet of the AFTI/F-16. In addition, a dorsal fairing was fitted to the top of the fuselage to accommodate extra avionics equipment. A triply redundant, asynchronous, multimode, digital flight control system with an analog backup was installed in the aircraft. The DFCS was integrated with improved avionics and had different control modes optimized for air-to-air combat and air-to-ground attack. The Stores Management System (SMS) was responsible for signaling requests for mode change to the DFCS. Other modifications included provision for a six-degree-of-freedom Automated Maneuvering Attack System (AMAS), a 256-word-capacity Voice-Controlled Interactive Device (VCID) to control the avionics suite, and a helmet-mounted target designation sight that could automatically slave the forward-looking infrared (FLIR) device and the radar to the pilot's head movements.[1182] First flight of the modified aircraft in the AFTI/F-16 configuration occurred on July 10, 1982, from Carswell AFB, TX, with GD test pilot Alex V. Wolfe at the controls. Following contractor testing, the aircraft was flown to Edwards AFB for the AFTI/F-16 test effort. This was organized into two phases; Phase I was a 2-year effort focused on evaluating the DFCS, with a follow-on Phase II oriented to assessing the AMAS and other technologies.

During Phase I, five test pilots from NASA, the Air Force, and the Navy flew the AFTI/F-16 at NASA Dryden in California. NASA.

Digital Electronic Engine Control

NASA pioneered in the development and validation of advanced computer-controlled electronic systems to optimize engine performance across the full flight envelope while also improving reliability. One such system was the Digital Electronic Engine Control (DEEC), whose genesis can be traced back to NASA Dryden work on the integrated flight and engine control system developed and evaluated in a joint NASA-Air Force program that used two Mach 3+ Lockheed YF-12C aircraft. The YF-12C was a cousin of the SR-71 strategic reconnaissance aircraft, and both aircraft used twin Pratt & Whitney J58 afterburning engines. As the SR-71 neared Mach 3, a significant portion of the engine thrust was produced from the supersonic shock wave that was captured within each engine inlet and exited through the engine nozzle. A serious issue with the operational SR-71 fleet was so-called engine inlet unstarts. These occurred when the airflow into the inlet was not properly matched to that of the engine. This caused the standing shock wave normally located in the inlet to be expelled out the front of the SR-71's inlet, causing insufficient pressure and airflow for normal engine operations. The result was a sudden loss of thrust on the affected engine. The resulting imbalance in thrust between the two SR-71 engines caused violent yawing, along with pitching and rolling motions. Studies showed that strong vortexes produced by each of the forward fuselage chines passed directly into the inlets during the yawing motion produced by an unstart. NASA efforts supported development of a computerized automatic inlet sensing and cone control system and helped to optimize the ratio of air passing through the engine to that leaving the inlet through the forward bypass doors. Dryden successfully integrated the engine inlet control, auto-throttle, air data, and navigation functions to improve overall performance, with aircraft range being increased 7 percent. Handling qualities were also improved, and the frequency of engine inlet unstarts was greatly reduced. Pratt & Whitney and the Air Force incorporated the improvements demonstrated by Dryden into the entire SR-71 fleet in 1983.[1257] The Dryden YF-12C made its last NASA flight on October 31, 1979. On November 7, 1979, it was ferried to the Air Force Museum at Wright-Patterson AFB, OH, where it is now on display.[1258]

The broad objective of the DEEC program, conducted by NASA Dryden between 1981 and 1983, was to demonstrate and evaluate the system on a turbofan engine in a high-performance fighter across its full flight envelope. The program was a joint effort between Dryden, Pratt & Whitney, the Air Force, and NASA Lewis Research Center (now the NASA Glenn Research Center). The DEEC had been commercially developed by Pratt & Whitney based on its experience with the J58 engine during the NASA YF-12 flight research program. It integrated a variety of engine functions to improve performance and extend engine life. The DEEC system was tested on an F100 engine mounted in the left engine bay of a NASA Dryden McDonnell-Douglas F-15 fighter. Engine-mounted and fuel-cooled, the DEEC was a single-channel digital controller. Engine inputs to the DEEC included compressor face static pressure and temperature, fan and core rotation speed, burner pressure, turbine inlet temperature, turbine discharge pressure, throttle position, afterburner fuel flow, and fan and compressor speeds. Using these inputs, the DEEC computer set the variable vanes, positioned the compressor air bleed, controlled gas-generator and augmentor fuel flows, adjusted the augmentor segment-sequence valve, and controlled the exhaust nozzle position. Thirty test missions that accumulated 35.5 flight hours were flown during the 2-year test program, which covered the operational envelope of the F-15 at speeds up to Mach 2.36 and altitudes up to 60,000 feet. The DEEC evaluation included nearly 1,300 throttle and afterburner transients, more than 150 air starts, maximum accelerations and climbs, and the full spectrum of flight maneuvers. An engine nozzle instability that caused stalls and blowouts was encountered when operating in afterburner at high altitudes. This instability had not been predicted in previous computer simulations or during ground-testing in NASA high-altitude test facilities. The instability problem was eventually resolved, and stall-free engine operation was demonstrated across the entire F-15 flight envelope. Faster throttle response, improved engine air-start capability, and an increase of more than 10,000 feet in the altitude that could be attained in afterburner without pilot restrictions on throttle use were achieved.[1259]
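
The sensed-inputs-to-actuator-commands structure described above is the essence of a digital engine controller. The following Python sketch illustrates the general idea of one pass of such a single-channel control law; the sensor names, gains, limits, and schedules are purely illustrative assumptions and do not represent the actual DEEC control laws.

```python
from dataclasses import dataclass

@dataclass
class EngineSensors:
    """Sensed quantities loosely analogous to the DEEC inputs listed above (names illustrative)."""
    throttle_pct: float          # pilot throttle position, 0-100
    fan_speed_rpm: float         # fan rotation speed
    core_speed_rpm: float        # core rotation speed
    burner_pressure_psi: float   # burner (combustor) pressure
    turbine_inlet_temp_k: float  # turbine inlet temperature

@dataclass
class EngineCommands:
    """Commanded actuator settings (illustrative)."""
    fuel_flow_pph: float         # gas-generator fuel flow, lb/hr
    nozzle_area_sq_in: float     # exhaust nozzle area
    compressor_bleed_open: bool  # compressor air bleed position

def control_step(s: EngineSensors) -> EngineCommands:
    """One pass of a simplified digital engine control law (placeholder gains and limits)."""
    # Map throttle position to a fan-speed target (placeholder schedule).
    target_fan_rpm = 3000.0 + 80.0 * s.throttle_pct

    # Proportional fuel command on fan-speed error, clamped to plausible bounds.
    k_p = 2.5  # lb/hr of fuel per rpm of speed error (illustrative)
    fuel = 1000.0 + k_p * (target_fan_rpm - s.fan_speed_rpm)
    fuel = max(500.0, min(fuel, 12000.0))

    # Acceleration limiting: cap the fuel-flow-to-burner-pressure ratio so a
    # rapid throttle advance cannot drive the compressor into stall.
    max_wf_over_pb = 30.0  # (lb/hr)/psi, illustrative
    fuel = min(fuel, max_wf_over_pb * s.burner_pressure_psi)

    # Over-temperature protection: cut fuel back if turbine inlet temperature is high.
    if s.turbine_inlet_temp_k > 1650.0:
        fuel *= 0.9

    # Open the compressor bleed at low core speeds to improve stall margin.
    bleed_open = s.core_speed_rpm < 9000.0

    # Schedule the nozzle area open with increasing throttle (afterburner range).
    nozzle = 400.0 + 3.0 * s.throttle_pct

    return EngineCommands(fuel, nozzle, bleed_open)

if __name__ == "__main__":
    sensors = EngineSensors(throttle_pct=75.0, fan_speed_rpm=8200.0,
                            core_speed_rpm=10500.0, burner_pressure_psi=310.0,
                            turbine_inlet_temp_k=1500.0)
    print(control_step(sensors))
```

In practice such a loop would run at a fixed rate on engine-mounted hardware, with fault detection and limit logic far more elaborate than shown here.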

DEEC-equipped engines were then installed on several operational USAF F-15s for service testing, during which they showed major improvements in reliability and maintainability. Mean time between failures was doubled, and unscheduled engine removals were reduced by a factor of nine. As a result, DEEC-equipped F100 engines were installed in all USAF F-15 and F-16 aircraft. The DEEC was a major event in the history of jet engine propulsion control and represented a significant transition from hydromechanical to digital-computer-based engine control. Performance improvements made possible by the DEEC included faster throttle responses, improved air-start capability, and an altitude increase of over 10,000 feet in afterburner without pilot restrictions on throttle use. Following the successful NASA test program, the DEEC went into standard use on F100 engines in the Boeing F-15 and the Lockheed F-16. Pratt & Whitney also incorporated digital engine control technology in turbofan engines used on some Boeing commercial jetliners. The lineage of similar digital engine control units used on other engines can be traced to the results of NASA's DEEC test and evaluation program.[1260]

Energy Efficient Engine Project

Taking everything learned to date by NASA and the industry about making turbomachinery more fuel efficient, the Energy Efficient Engine (E Cubed) project sought to further reduce the airlines' fuel usage and its effect on direct operating costs, while also meeting future FAA regulations and Environmental Protection Agency exhaust emission standards for turbofan engines. Research contracts were awarded to GE and Pratt & Whitney, which initially focused on the CF6-50C and JT9D-7A engines, respectively. The program ran from 1975 to 1983 and cost NASA about $200 million.[1311]

Similar to the goals for the Engine Component Improvement project, the E Cubed goals included a 12-percent reduction in specific fuel consumption (SFC), a measure of the ratio of the mass of fuel used to the output power of the jet engine—much like a miles per gallon measurement for automobiles. Other goals of the E Cubed effort included a 5-percent reduction in direct operating costs and a 50-percent reduction in the rate at which the SFC worsens over time as the engine ages. In addition to making these immediate improvements, it was hoped that a new generation of fuel-conservative turbofan engines could be developed from this work.[1312]
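
For turbofan engines the metric is usually quoted as thrust-specific fuel consumption, the fuel flow per unit of thrust produced; as an illustrative reading of the 12-percent goal (numbers assumed purely for the example):

\[ \mathrm{SFC} = \frac{\dot{m}_{\text{fuel}}}{F}, \qquad \mathrm{SFC}_{\mathrm{E}^3} \le 0.88 \times \mathrm{SFC}_{\text{baseline}} \]

so a baseline engine burning about 0.65 pounds of fuel per hour per pound of thrust at cruise would need to reach roughly 0.57 to meet the target.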

Highlighting that program was development of a new type of compressor core and an advanced combustor made up of a doughnut-shaped ring with two zones—or domes—of combustion. During times when low power is needed or the engine is idling, only one of the two zones is lit up. For higher thrust levels, including full power, both domes are ignited. By creating a dual combustion option, the amount of fuel being burned can be more carefully controlled, reducing emissions of smoke, carbon monoxide, and hydrocarbons by 50 percent, and nitrogen oxides by 35 percent.[1313]

As part of the development of the new compressor in particular, and the E Cubed and Engine Component Improvement programs in general, the Lewis Research Center developed first-generation computer programs for use in creating the new engine. The software helped engineers with conceptualizing the aerodynamic design and visualizing the flow of gases through the engine. The computer programs were credited with making it possible to design more fuel-efficient compressors with lower tip and end-wall pressure losses, higher operating pressure ratios, and the ability to use fewer blades. The compressors also helped to reduce performance deterioration, surface erosion, and damage from bird strikes.[1314]

History has judged the E Cubed program as being highly successful, in that the technology developed from the effort was so promising—and proved to meet the objectives for reducing emissions and increasing fuel efficiency—that both major U.S. jet engine manufacturers, GE and Pratt & Whitney, moved quickly to incorporate the technology into their products. The ultimate legacy of the E Cubed program is found today in the GE90 engine, which powers the Boeing 777. The E Cubed technology is directly responsible for the engine's economical fuel burn, reduced emissions, and low maintenance cost.[1315]

The 1970s and the Rise of Synthetic Fuels

NASA's interest in alternative fuels did not end with liquid hydrogen; synthetic fuel research, joined with research on new, more aerodynamically efficient aircraft configurations, took off in the 1970s and 1980s, as rising oil prices and a growing concern about mankind's (and aviation's) impact on the environment pushed researchers to seek alternatives to oil-based fuel.[1482]

In 1979, NASA Langley released an aircraft fuel study that compared liquid hydrogen, liquid methane, and synthetic aviation kerosene derived from coal or oil shale.[1483] The study took into account factors including cost, capital requirements, and energy resources required to make the fuel. These factors were considered in light of the practicality of using the fuel in terms of the fuel production processes, transportation, storage, and its suitability for use on aircraft. Environmental emissions and safety aspects of the fuel also were considered. The study concluded that all three fuels met the criteria, but that synthetic aviation kerosene was the most attractive because it was the least expensive.[1484]

Despite the promising findings of NASA's study, however, synthetic fuel never made it into mainstream production. The fuel's capital costs are still relatively high when compared with oil-based jet fuel, because new synthetic fuel production plants have to be built to produce the fuel.[1485] Private industry has been hesitant to get into this business, fearing it would not make a return on its investment. If oil prices were to drop—as they did in the mid-1980s—companies that invested in synthetic aircraft fuel production would find it difficult to compete with cheap oil-based jet fuel.

Regardless of industry's hesitation, Government efforts to develop and test alternative fuels are springing to life again as a result of a return to high oil prices and a growing concern about the impact of emissions on air quality and climate change. The U.S. Air Force has engaged in a systematic process to certify all of its aircraft to fly on a 50/50 blend of oil-based jet fuel and synthetic fuel. Air Force officials hoped that testing and flying their own aircraft on synthetic fuels would encourage commercial airlines to do the same, believing that if the service and airline industry could create a buyer's market for synthetic fuel, then the energy industry might be more amenable to investing the money required to build synthetic fuel plants for mass production.[1486]

NASA has also begun testing the performance and emissions of two synthetic fuels derived from coal and natural gas. While the Air Force's interest in alternative fuels is largely related to concerns about oil price volatility and the national security risks of relying on foreign oil suppliers, NASA has embarked on alternative fuels research largely to study the potential for reducing emissions. NASA's research effort, which is being conducted at NASA Dryden, seeks to closely measure particulate levels. "Even though there are no current regulations for particulates, we see particulates as being very important," said Bulzan, who is leading the alternative fuels effort. "They are very important to local air quality when the aircraft is taking off and landing at the airport, and they can also generate cloud formation that can affect global warming."[1487]

Both the USAF and NASA are using synthetic fuel derived from a process developed in Germany and used extensively during World War II, known as Fischer-Tropsch. In this process, a mixture of carbon monoxide and hydrogen is used to create liquid hydrocarbons for fuel. NASA Dryden's latest alternative fuels testing, which took place in early 2009, involved fueling a grounded DC-8 with both 100-percent synthetic fuel and a 50/50 blend. The test results are being compared with baseline hydrocarbon fuel emissions tests performed in the DC-8 in 2004. Air Force researchers were on hand to help measure the emissions.[1488]
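
The idealized overall Fischer-Tropsch reaction for turning this carbon monoxide and hydrogen mixture (synthesis gas) into paraffinic hydrocarbons can be written as:

\[ (2n+1)\,\mathrm{H_2} + n\,\mathrm{CO} \;\longrightarrow\; \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O} \]

with the chain length \(n\) set by the catalyst and operating conditions; jet-fuel-range paraffins fall roughly between \(n \approx 9\) and \(n \approx 15\).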

NASA and the Air Force are also working with Boeing to explore the possibility of using biofuel, which may prove to be cleaner than fuel derived from the Fischer-Tropsch process. The main obstacle to biofuel use at this time is the fact that it is difficult to procure in large quantities. For example, algae are an attractive feedstock for biofuel, but the problem lies in being able to grow enough. NASA has begun to take on the feedstock problem by setting up a Greenlab Research Facility at NASA Glenn, where NASA researchers are seeking to optimize the growing conditions for algae and halophytes, which are plants tolerant of salt water.[1489]

In conclusion, the oil crisis and growing environmental awareness of the 1970s presented a critical opportunity for NASA to reclaim its mantle at the forefront of aeronautics research. NASA-led programs in fuel-efficient engines, aircraft structures, and composites—as well as the Agency's contribution to computational fluid dynamics—planted the seeds that gave private industry the confidence and technological know-how to pursue bold aircraft fuel-efficiency initiatives on its own. Without NASA's E Cubed program, U.S. engine companies may not have had the financial resources to develop their fuel-saving, emissions-reducing TAPS and TALON combustors. E Cubed also spawned the open-rotor engine concept, which is still informing engine fuel-efficiency efforts today. The turbulent 1970s also created the opening for NASA Langley's Richard Whitcomb to proceed full throttle with efforts to develop supercritical wings and winglets that have revolutionized fuel-efficient airframe design. And NASA's research on alternative fuels during the 1970s, if stillborn, nevertheless set the stage for the Agency to play a significant role in the Government's revitalized alternative fuels research that came with the dawning of the 21st century.

Addressing the Nation's scientific leadership in 2009, President Barack Obama compared the energy challenge facing America to the shock of Sputnik in 1957, declaring it the Nation's new "great project."[1490] Reflecting the increasing emphasis and rising priorities of Federal environmental research, NASA had received funding to support global climate studies, while NASA's aeronautics research received additional funding to "improve aircraft performance while reducing noise, emissions, and fuel consumption."[1491] Clearly, NASA's experience in energy and aeronautics positioned the Agency well to continue playing a major role in these areas.

As the Agency enters the second decade of the 21st century, much remains to be done to increase aircraft fuel efficiency, but much, likewise, has already been accomplished. To NASA's aeronautics researchers, inheritors of a legacy of accomplishment in flight, the energy and environmental challenges of the new century constitute an exciting stimulus, one as profoundly intriguing as any of the other challenges—supersonic flight and landing on the Moon among them—that the NACA and NASA have faced before. Those challenges, too, had appeared daunting. But just as creative NACA-NASA research overcame them, those in the Agency charged with responsibility for pursuing the energy and environmental challenges of the new century were confident that they, and the Agency, would once again see their efforts crowned with success.

ERAST Pathfinder Sensor Technology Development

As noted in the last two letters of the ERAST acronym, the development of sensor technology for the program constituted a major program goal. Science activities of the Pathfinder missions were coordinated by NASA's Ames Research Center. Ames developed and tested a number of scientific instruments, including two imaging sensors—a high spectral resolution Digital Array Scanned Interferometer (DASI) and an Airborne Real-Time Imaging System (ARTIS). Steven Wegener of NASA Ames served as project manager for the science and sensor program. Dougal Maclise was payload project manager, Steven Dunagan was the team leader of the DASI project, and Stan Ault was team leader for the ARTIS project. DASI, which weighed less than 25 pounds and was mounted under Pathfinder's wing, was a remote sensing instrument that measured reflected spectral intensities from the Earth. The ARTIS payload was built around a color infrared six-megapixel digital camera. Both sensors were designed to be small, lightweight, and interactive in accordance with ERAST program goals of miniaturizing flight payloads. Both sensor systems also were designed to complement high-altitude studies of atmospheric ozone, land-cover changes, and natural hazards conducted by NASA's Earth Resources Survey ER-2 aircraft. The Pathfinder's imaging systems featured remote interactive operation and near-real-time transmission of images to ground stations and the Internet. This capability improved the speed, quality, and efficiency of data collection, analysis, and interpretation. The NASA team noted that the rapid availability of information from these systems could aid in fast decision making during natural disasters.[1543]

The science and sensor aspects of the ERAST program promoted new solar UAV payloads and missions, including disaster management with the Global Disaster Information Network, over-the-horizon and real-time technologies, support of Earth science enterprises, high-resolution mapping, and promotion of the Commercial Remote Sensing program partnership.[1544] The advantages of UAVs over satellites and piloted aircraft include: (1) long-range capability, including the ability to fly to remote locations and cover large areas; (2) long-endurance capability, including the ability to fly longer and revisit areas on a frequent basis; (3) high-altitude capability, including the ability to fly above weather or danger; (4) slow-speed flight, including the capability to stay near one location; and (5) elimination of pilot exposure, thus enabling long duration or dangerous flights.[1545]

Swing Wing: The Path to Variable Geometry

The notion of variable wing-sweeping dates to the earliest days of aviation and, in many respects, represents an expression of the "bird imitative" philosophy of flight that gave the ornithopter and other flexible wing concepts to aviation. Varying the sweep of a wing was first conceptualized as a means of adjusting longitudinal trim. Subsequently, variable-geometry advocates postulated possible use of asymmetric sweeping as a means of roll control. Lippisch, pioneer of tailless and delta design, likewise filed a patent in 1942 for a scheme of wing sweeping, but it was another German, Waldemar Voigt (the chief of advanced design for the Messerschmitt firm), who triggered the path to modern variable wing-sweeping. Ironically, at the time he did so, he had no plan to make use of such a scheme himself. Rather, he designed a graceful midwing turbojet swept wing fighter, the P 1101. The German air ministry rejected its development based upon assessments of its likely utility. Voigt decided to continue its development, planning to use the airplane as an in-house swept wing research aircraft, fitted with wings of varying sweep and ballasted to accommodate changes in center of lift.[110]

A time-lapse photograph of the Bell X-5, showing the range of its wing sweep. Note how the wing roots translated fore and aft to accommodate changes in center of lift with varying sweep angles. NASA.

By war's end, when the Oberammergau plant was overrun by American forces, the P 1101 was over 80-percent complete. A technical team led by Robert J. Woods, a member of the NACA Aerodynamics Committee, moved in to assess the plant and its projects. Woods immediately recognized the value of the P 1101 program, but with a twist: he proposed to Voigt that the plane be finished with a wing that could be variably swept in flight, rather than with multiple wings that could be installed and removed on the ground. Woods's advocacy, and the results of NACA variable-sweep tests by Charles Donlan of a modified XS-1 model in the Langley 7-foot by 10-foot wind tunnel, convinced the NACA to support development of such an aircraft. In May 1949, the Air Force Air Materiel Command issued a contract covering development of two Bell variable sweep airplanes, to be designated X-5. They were effectively American-built versions of the P 1101, but with American, not German, propulsion, larger cockpit canopies for greater pilot visibility, and, of course, variable sweep wings that could range from 20 to 60 degrees.[111]

The first X-5 flew in June 1951 and within 5 weeks had demonstrated variable in-flight wing sweep to its maximum 60-degree aft position. Slightly over a year later, Grumman flew a prototype variable wing-sweep naval fighter, the XF10F-1 Jaguar. Neither aircraft represented a mature application of variable sweep design. The mechanism in each was heavy and complex and shifted the wing roots back and forth down the centerline of the aircraft to accommodate center of lift changes as the wing was swept and unswept. Each of the two had poor flying qualities unrelated to the variable-sweep concept, reflecting badly on their design. The XF10F-1 was merely unpleasant (its test pilot, the colorful Corwin "Corky" Meyer, tellingly recollected later "I had never attended a test pilots' school, but, for me, the F10F provided the complete curriculum"), but the X-5 was lethal.[112] It had a vicious pitch-up at higher sweep angles, and its aerodynamic design ensured that it would have very great difficulty recovering when it departed into a spin. The combination of the two led to the death of Air Force test pilot Raymond Popson in the crash of the second X-5 in 1953. More fortunately, NACA pilots completed 133 research flights in the first X-5 before retiring it in 1955.

The X-5 experience demonstrated that variable geometry worked, and the potential of combining good low-speed performance with high-speed supersonic dash intrigued military authorities looking at future interceptor and long-range strike aircraft concepts. Coincidentally, in the late 1950s, Langley developed increasingly close ties with the British aeronautical community, largely a result of the personal influence of John Stack of Langley Research Center, who, in characteristic fashion, used his forceful personality to secure a strong transatlantic partnership. This partnership, best known for its influence upon Anglo-American V/STOL research leading to the Harrier strike fighter, influenced as well the course of variable-geometry research. Barnes Wallis of Vickers had conceptualized a sharply swept variable-geometry tailless design, the Swallow, but was not satisfied with the degree of support he was receiving for the idea within British aeronautical and governmental circles. Accordingly, he turned to the United States. Over November 13-18, 1958, Stack sponsored an Anglo-American meeting at Langley to craft a joint research program, in which Wallis and his senior staff briefed the Swallow design.[113] As revealed by subsequent Langley tunnel tests over the next 6 months, Wallis's Swallow had many stability and control deficiencies but one significant attribute: its outboard wing-pivot design. Unlike the X-5 and Jaguar and other early symmetrical-sweep v-g concepts, the wing did not adjust for changing center of lift position by translating fore and aft along the fuselage centerline using a track-type approach and a single pivot point. Rather, slightly outboard of the fuselage centerline, each wing panel had its own independent pivot point. This permitted elimination of the complex track and allowed use of a sharply swept forebody to address at least some of the changes in center-of-lift location as the wings moved aft and forward. The remainder could be accommodated by control surface deflection and shifting fuel. Studies in Langley's 7-foot by 10-foot tunnel led to refinement of the outboard pivot concept and, eventually, a patent to William J. Alford and E. C. Polhamus for the concept, awarded in September 1962. Wallis's inspiration, joined with insightful research by Alford and Polhamus and followed by adaptation of a conventional "tailed" configuration (a critical necessity in the pre-fly-by-wire computer-controlled era), made variable wing sweep a practical reality.[114] (Understandably, after returning to Britain, Wallis had mixed feelings about the NASA involvement. On one hand, he had sought it after what he perceived as a "go slow" approach to his idea in Britain. On the other, following enunciation of outboard wing sweep, he believed—as his biographer subsequently wrote—"The Americans stole his ideas.")[115]

Thus, by the early 1960s, multiple developments—swept wings, high-performance afterburning turbofans, area ruling, the outboard wing pivot, low horizontal tail, advanced stability augmentation systems, to select just a few—made possible the design of variable-geometry combat aircraft. The first of these was the General Dynamics Tactical Fighter Experimental (TFX), which became the F-111. It was a troubled program, though, as with most of the Century series that had preceded it (the F-102 in particular), its troubles had essentially nothing to do with the adaptation of a variably swept wing. Instead, a poorly written specification emphasizing joint service over practical, attainable military utility resulted in development of a compromised design. The result was a decade of lost fighter time for the U.S. Navy, which never did receive the aircraft it sought, and a constrained Air Force program that resulted in the eventual development of a satisfactory strike aircraft—the F-111F—but years late and at tremendous cost. Throughout the evolution of the F-111, NASA research proved of crucial importance to saving the program. NASA Langley, Ames, and Lewis researchers invested over 30,000 hours of wind tunnel test time in the F-111 (over 22,000 at Langley alone), addressing various shortcomings in its design, including excessive drag, lack of transonic and supersonic maneuverability, deficient directional stability, and inlet distortion that plagued its engine performance. As a result, the Air Force F-111 became a reliable weapon system, evidenced by its performance in Desert Storm, where it flew long-range strike missions, performed electronic jamming, and proved the war's single most successful "tank plinker," on occasion destroying upward of 150 tanks per night and 1,500 over the length of the 43-day conflict.[116]

From the experience gained with the F-111 program sprang the Grumman F-14 Tomcat naval fighter and the Rockwell B-1 bomber, both of which experienced fewer development problems, benefitting greatly from NASA tunnel and other analytical research.[117] Emulating American variable-geometry development, Britain, France, and the Soviet Union undertook their own development efforts, spawning the experimental Dassault Mirage G (test-flown, though never placed in service), the multipartner NATO Tornado interceptor and strike fighter program, and a range of Soviet fighter and bomber aircraft, including the MiG-23/27 Flogger, the Sukhoi Su-17/22 Fitter, the Su-24 Fencer, the Tupolev Tu-22M Backfire, and the Tu-160 Blackjack.[118]

Variable geometry has had a mixed history since; in the heyday of the space program, many proposals existed for tailored lifting body shapes deploying "switchblade" wings, and the variable-sweep wing was a prominent feature of the Boeing SST concept before its subsequent rejection. The tailored aerodynamics and power available with modern aircraft have rendered variable-geometry approaches less attractive than they once were, particularly because, no matter how well thought out, they invariably involve greater cost, weight, and structural complexity. In 1945-1946, John Campbell and Hubert Drake undertook tests in the Langley Free Flight Tunnel of a simple model with a single pivot, so that its wing could be skewed over a range of sweep angles. This concept, which German aerodynamicists had earlier proposed in the Second World War, demonstrated "that an airplane wing can be skewed as a unit to angles as great as 40° without encountering serious stability and control difficulties."[119] This concept, the simplest of all variable-geometry schemes, returned to the fore in the late 1970s, thanks to the work of Robert T. Jones, who adopted and expanded upon it to generate the so-called "oblique wing" design concept. Jones conceptualized the oblique wing as a means of producing a transonic transport that would have minimal drag and a minimal sonic boom; he even foresaw possible twin fuselage transports with a skewed wing shifting their relative position back and forth. Tests with a subscale turbojet demonstrator, the AD-1 (for Ames-Dryden), at the Dryden Flight Research Center confirmed what Campbell and Drake had discovered nearly four decades previously, namely that at moderate sweep angles the oblique wing possessed few vices. But at higher sweep angles near 60 degrees, its deficits became more pronounced, calling into question whether its promise could ever actually be achieved.[120] On the whole, the variable-geometry wing has not enjoyed the kind of widespread success that its adherents hoped. While it may be expected that, from time to time, variable sweep aircraft will be designed and flown for particular purposes, overall the fixed conventional planform, outfitted with all manner of flaps and slats and blowing, sucking, and perhaps even warping technology, continues to prevail.

The Grumman F-14A Tomcat naval fighter marked the maturation of the variable wing-sweep concept. This one was assigned to Dryden for high angle of attack and departure flight-testing. NASA.

NASA 1990-2007: Coping with Institutional and Resource Challenges

Over the next decade and a half, the NASA rotary wing program's available organizational and financial resources were significantly impacted by NASA and supporting Agency organizational, mission, and budget management decisions. These decisions were driven by changes in program priorities in the face of severe budget pressures and reorganization mandates seeking to improve operational efficiency. NASA leaders were being tasked with more ambitious space missions and with recovering from two Shuttle losses. In the face of these challenges, the rotary wing program, among others, was adjusted in an effort to continue making notable research contributions. Examples of the array of real impacts on the rotary wing program over this period were: (1) termination of the NASA-DARPA RSRA-X-Wing program; (2) stopping the NASA-Army flight operations of the only XV-15 TRRA aircraft and the two RSRA vehicles; (3) transfer of all active NASA research aircraft to Dryden Flight Research Center, which essentially closed NASA rotary wing flight operations; (4) elimination of vehicle program offices at NASA Headquarters; (5) closing the National Full-Scale Aerodynamics Complex wind tunnel at Ames in 2003 (reopened under a lease to the United States Air Force in 2007); (6) converting to full-cost accounting, which represented a new burden on vehicle research funding allocations; and (7) the imposition of a steady and severe decline in aeronautics budget requests, starting in the late 1990s. Overshadowing this retrenching activity in the 1990s was the total reorientation, and hence complete transformation, of the Ames Research Center from an Aeronautics Research Mission Center to a Science Mission Center with the new lead in information technology (IT).[313] Responsibility for Ames's aerodynamics and wind tunnel management was assigned to Langley Research Center. The persistent turbulence in the NASA rotary wing research community presented a growing challenge to the ability to generate research contributions. Here is where the established partnership with the United States Army and co-located laboratories at Ames, Langley, and Glenn Research Centers made it possible to maximize effectiveness by strengthening the combined efforts. In the case of Ames, this was done by creating a new combined Army-NASA Rotorcraft Division. The center of gravity of NASA rotary wing research thus gradually shifted to the Army.

The decision to ground and place in storage the only remaining XV-15 TRRA in 1994 was fortunately turned from a real setback to an unplanned contribution. Bell Helicopter, having lost the other XV-15, N702NA, in an accident in 1992, requested bailment of the Ames aircraft, N703NA, in 1994 to continue its own tilt rotor research, demonstrations, and applications evaluations in support of the ongoing (and troubled) V-22 Osprey program. The NASA and Army management agreed. As part of the extended use, on April 21, 1995, the XV-15 became the first tilt rotor to land at the world's first operational civil vertiport at the Dallas Convention Center Heliport/Vertiport. After its long and successful operation and its retirement in 2003, this aircraft is on permanent display at the Smithsonian Institution's Udvar-Hazy Center at Washington Dulles International Airport, Chantilly, VA.

With the military application of proven tilt rotor technology well underway with the procurement of the V-22 Osprey by the Marine Corps and Air Force, the potential for parallel application of tilt rotor technology to civil transportation was also addressed by NASA. Early studies, funded by the FAA and NASA, indicated that the concept had potential for worldwide application and could be economically viable.[314] In late 1992, Congress directed the Secretary of Transportation to establish a Civil Tilt Rotor Development Advisory Committee (CTRDAC) to examine the technical, operational, and economic issues associated with integrating the civil tilt rotor (CTR) into the Nation's transportation system. The Committee was also charged with determining the required additional research and development, the regulatory changes required, and the estimated cost of the aircraft and related infrastructure development. In 1995, the Committee issued its findings. The CTR was determined to be technically feasible and could be developed by U.S. industry. It appeared that the CTR could be economically viable in heavily traveled corridors. Additional research and development and infrastructure planning were needed before industry could make a production decision. In response to these findings, elements of work suggested by the CTRDAC were included in the NASA rotorcraft program plans.

Significant advances in several technological areas would be required to enable the tilt rotor concept to be introduced into the transportation system. In 1994, researchers at Ames, Langley, and Glenn Research Centers launched the Advanced Tiltrotor Transport Technology (ATTT) program to develop the new technologies. Because of existing funding limitations, initial research activity was focused on the primary concerns of noise and safety. The noise research activity included the development of refined acoustic analyses, the acquisition of wind tunnel prop-rotor noise data to validate the analytical method, and flight tests to determine the effect of different landing approach profiles on terminal area and community noise. The safety effort was related to the need to execute approaches and departures at confined urban vertiports. For these situations, the capability to operate safely with one engine inoperative (OEI) in adverse weather conditions was required. This area was addressed by conducting engine design studies to enable generating high levels of emergency power in OEI situations without adversely impacting weight, reliability, maintenance, or normal fuel economy. Additional operational safety investigations were carried out on the Ames Vertical Motion Simulator to assess crew station issues, control law variations, and advanced configurations such as the variable diameter tilt rotor. The principal American rotary wing airframe and engine manufacturers participated in the noise and safety investigations, which assured that proper attention was given to the practical application of the new technology.[315] An initial step in civil tilt rotor aircraft development was taken by Bell Helicopter in September 1998, by teaming with Agusta Helicopter Company of Italy to design, manufacture, and certify a commercial version of the XV-15 aircraft design, designated the BA 609.

Despite the institutional and resource turbulence overshadowing rotary wing activity, the NASA and Army researchers persisted in conducting base research. They continued to make contributions to advance the state of rotary wing technology applicable to civil and military needs, a typical example being the analysis of the vortex ring state (VRS) encountered in rapid, steep descents, brought to the forefront by initial operating problems experienced by the V-22 Osprey.[316] The current NASA Technical Report Server (NTRS) Web site has posted over 2,200 NASA rotary wing technical reports. Of these, approximately 800 entries have been posted since 1991—the peak year, with 143 entries. These postings facilitate public access to the formal documentation of NASA contributions to rotary wing technology. The annual postings gradually declined after 1991. In what may be a mirror image of the state of NASA's realigned rotary wing program, since 2001 the annual totals of posted rotary wing reports are in the 20-40 range, with an increasing percentage reflecting contributions by Army coauthors.

As the Army and NASA rotary wing research was increasingly linked in mutually supporting roles at the co-located centers, outsourcing, cooperation, and partnerships with industry and academia also grew. In 1995, the Army and NASA agreed to form the National Rotorcraft Technology Center (NRTC) occupying a dedicated facility at Ames Research Center. This jointly funded and managed organization was created to provide central coordination of rotary wing research activities of the Government, academia, and industry. Government participation included Army, NASA, Navy, and the FAA. The academic laboratories' participation was accomplished by NRTC having acquired the responsibility to manage the Rotorcraft Centers of Excellence (RCOE) program that had been in existence since 1982 under the Army Research Office. In 1996, the periodic national competition resulted in establishing Georgia Institute of Technology, the University of Maryland at College Park, and Pennsylvania State University as the three RCOE sites.

The Rotorcraft Industry Technology Association (RITA), Inc., was also established in 1996. Principal members of RITA included the United States helicopter manufacturers Bell Helicopter Textron, the Boeing Company, Sikorsky Aircraft Corporation, and Kaman Aerospace Corporation. Supporting members included rotorcraft subsystem manufacturers and other industry entities. Associate members included a growing number of American universities and nonprofit organizations. RITA was governed by a Board of Directors supported by a Technical Advisory Committee that guided and coordinated the performance of the research projects. This industry-led organization and NRTC signed a unique agreement to be partners in rotary wing research. The Government would share the cost of annual research projects proposed by RITA and approved by NRTC evaluation teams. NASA and the Army each contributed funds for 25 percent of the cost of each project—together they matched the industry-member share of 50 percent. Over the first 5 years of the Government-industry agreement, the total annual investment averaged $20 million. The RITA projects favored mid- and near-term research efforts that complemented mid- and long-term research missions of the Army and NASA. Originally, there was concern that the research staff of industry competitors would be reluctant to share project proposal information and pool results under the RITA banner. This concern quickly turned out to be unfounded as the research teams embarked on work addressing common technical problems faced by all participants.

NRTC was not immune to the challenges posed by limited NASA budgets, which eventually caused some cutbacks in NRTC support of RITA and the RCOE program. In 2005, the name of the RITA enterprise was changed to the Center for Rotorcraft Innovation (CRI), and the principal office was relocated from Connecticut to the Philadelphia area.[317] Accomplishments posted by RITA-CRI include cost-effective integrated helicopter design tools and improved design and manufacturing practices for increased damage tolerance. Accomplishments in the area of rotorcraft operations included incorporating developments in synthetic vision and cognitive decision-making systems to enhance the routine performance of critical piloting tasks and enabling changes in the air traffic management system that will help rotorcraft become a more significant participant in the civil transportation system. The American Helicopter Society International recognized RITA for one of its principal areas of research effort by awarding the Health and Usage Monitoring Project Team the AHS 1998 Grover E. Bell Award for "fostering and encouraging research and experimentation in the important field of helicopters."

As previously noted, in the mid-1990s, NASA Ames's entire aircraft fleet was transferred some 300 miles south to Dryden Flight Research Center at Edwards Air Force Base, CA. This inventory included a number of NASA rotary wing research aircraft that had been actively engaged since the 1970s.[318] However, the U.S. Army Aeroflightdynamics Directorate, co-located at Ames since 1970, chose to retain its research aircraft. In 1997, after several years of negotiation, NASA Headquarters signed a directive that Ames would continue to support the Army's rotorcraft airworthiness research using three military helicopters outfitted for special flight research investigations. The AH-1 Cobra had been configured as the Flying Laboratory for Integrated Test and Evaluation (FLITE). One UH-60 Blackhawk was configured as the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) and remained as the focus for advanced controls and was utilized by the NASA-Army Rotorcraft Division to develop programmable, fly-by-wire controls for nap-of-the-Earth maneuvering studies. This aircraft was also used for investigating noise-abatement segmented approaches using local differential Global Positioning System (GPS) guidance. The third aircraft, another UH-60 Blackhawk, had been extensively instrumented for the conduct of the UH-60 Airloads Program. The principal focus of the program was the acquisition of detailed rotor-blade pressure distributions in a wide array of flight conditions to improve and validate advanced analytical methodology. The last NACA-NASA rotor air-loads flight program of this nature had been conducted over three decades earlier, before the advent of the modern digital data acquisition and processing revolution.[319] Again, the persistent NASA-Army researchers met the institutional and resource challenges and pressed on with fundamental research to advance rotary wing technology.

On December 20, 2006, the White House issued Executive Order 13419 establishing the first National Aeronautics Research and Development Policy. The Executive order was accompanied by the policy statement prepared by the National Science and Technology Council's Committee on Technology. This 13-page document included recommendations to clarify, focus, and coordinate Federal Government aeronautics R&D activities. Of particular note for NASA's rotary wing community was Section V of the policy statement: "Stable and Long-Term Foundational Research Guidelines." The roles and responsibilities of the executive departments and agencies were addressed, noting that several executive organizations should take responsibility for specific parts of the national foundational (i.e., fundamental) aeronautical research program. Specifically, "NASA should maintain a broad foundational research effort aimed at preserving the intellectual stewardship and mastery of aeronautics core competencies." In addition, "NASA should conduct research in key areas related to the development of advanced aircraft technologies and systems that support DOD, FAA, the Joint Planning and Development Office (JPDO) and other executive departments and agencies.[320] NASA may also conduct such research to benefit the broad aeronautics community in its pursuit of advanced aircraft technologies and systems. . . . " In supporting research benefiting the broad aeronautics community, care is to be taken "to ensure that the government is not stepping beyond its legitimate purpose by competing with or unfairly subsidizing commercial ventures." There is a strong implication that the new policy may return NASA's aeronautics role to the more modest, but successful, ways of NASA's predecessor, the National Advisory Committee for Aeronautics, with a primary focus on fundamental research, with the participation of academia, and with cooperative research support for systems technology and experimental aircraft program investments by the DOD, the FAA, and industry. In the case of rotary wing research, since the 1990s, NASA management decisions had moved the residual effort in this direction under the pressure of limited resources.

As charged, 1 year after the Executive order and policy statement were issued, the National Science and Technology Council released the "National Plan for Aeronautics Research and Development and Related Infrastructure." Rotary wing R&D is specifically identified as being among the aviation elements vital to national security and homeland defense, with a goal of "Developing improved lift, range, and mission capability for rotorcraft." Future NASA rotary wing foundational research may also support other goals and objectives of the plan. For example, under Energy Efficiency and Environmental Protection are Goal 2, advance development of technologies and operations to enable significant increases in energy efficiency of the aviation system, and Goal 3, advance development of technologies and operational procedures to decrease the significant environmental impacts of the aviation system.

Perhaps the most important long-term challenge for the rotary wing segment of aviation is the need for focused attention on improved safety. In this regard, Goal 2 under the plan section titled "Aviation Safety is Paramount" appears to embrace the rotary wing need in calling for developing technologies to reduce accidents and incidents through enhanced aerospace vehicle operations on the ground and in the air. The opportunity for making significant contributions in this arena may exist through enhanced teaming of NASA and the rotary wing community under the International Helicopter Safety Team (IHST).[321] The goal of the ambitious IHST is to reduce helicopter accident rates by 80 percent within 10 years. The participating members of the organization include technical societies, helicopter and engine manufacturers, commercial operator and public service organizations, the FAA, and NASA. Past performance suggests that the timely application of NASA rotary wing fundamental research expertise and unique facilities to this international endeavor would spawn significant contributions and accomplishments.

Transitioning from the Supersonic to the Hypersonic: X-7 to X-15

During the 1950s and early 1960s, aviation advanced from flight at high altitude and Mach 1 to flight in orbit at Mach 25. Within the atmosphere, a number of these advances stemmed from the use of the ramjet, at a time when turbojets could barely pass Mach 1 but ramjets could aim at Mach 3 and above. Ramjets needed an auxiliary rocket stage as a booster, which brought their general demise after high-performance afterburning turbojets succeeded in catching up. But in the heady days of the 1950s, the ramjet stood on the threshold of becoming a mainstream engine. Many plans and proposals existed to take advantage of their power for a variety of aircraft and missile applications.

The burgeoning ramjet industry included Marquardt and Wright Aeronautical, though other firms such as Bendix developed them as well. There were also numerous hardware projects. One was the Air Force-Lockheed X-7, an air-launched high-speed propulsion, aerodynamic, and structures testbed. Two were surface-to-air ramjet-powered missiles: the Navy's ship-based Mach 2.5+ Talos and the Air Force's Mach 3+ Bomarc. Both went on to years of service, with the Talos flying "in anger" as a MiG-killer and antiradiation SAM-killer in Vietnam. The Air Force also was developing a 6,300-mile-range Mach 3+ cruise missile—the North American SM-64 Navaho—and a Mach 3+ interceptor fighter—the Republic XF-103. Neither entered the operational inventory. The Air Force canceled the troublesome Navaho in July 1957, weeks after the first flight of its rival, Atlas, but some flight hardware remained, and Navaho flew in test as far as 1,237 miles, though this was a rare success. The XF-103 was to fly at Mach 3.7 using a combined turbojet-ramjet engine. It was to be built largely of titanium, at a time when this metal was little understood; it thus lived for 6 years without approaching flight test. Still, its engine was built and underwent test in December 1956.[564]

The steel-structured X-7 proved surprisingly and consistently productive. The initial concept of the X-7 dated to December 1946 and constituted a three-stage vehicle. A B-29 (later a B-50) served as a "first stage" launch aircraft; a solid rocket booster functioned as a "second stage," accelerating the vehicle to Mach 2, at which point the ramjet would take over. First flying in April 1951, the X-7 family completed 100 missions between 1955 and program termination in 1960. After achieving its Mach 3 design goal, the program kept going. In August 1957, an X-7 reached Mach 3.95 with a 28-inch diameter Marquardt ramjet. The following April, the X-7 attained Mach 4.31—2,881 mph—with a more-powerful 36-inch Marquardt ramjet. This established an air-breathing propulsion record that remains unsurpassed for a conventional subsonic-combustion ramjet.[565]

At the same time that the X-7 was edging toward the hypersonic frontier, the NACA, Air Force, Navy, and North American Aviation had a far more ambitious project underway: the hypersonic X-15. This was Round Two, following the earlier Round One research airplanes that had taken flight faster than sound. The concept of the X-15 was first proposed by Robert Woods, a cofounder and chief engineer of Bell Aircraft (manufacturer of the X-1 and X-2), at three successive meetings of the NACA's influential Committee on Aerodynamics between October 1951 and June 1952. It was a time when speed was king, when ambitious technology-pushing projects were flying off the drawing board. These included the Navaho, X-2, and XF-103, and the first supersonic operational fighters—the Century series of the F-100, F-101, F-102, F-104, and F-105.[566]

Some contemplated even faster speeds. Walter Dornberger, former commander of the Nazi research center at Peenemunde turned senior Bell Aircraft Corporation executive, was advocating BoMi, a proposed skip-gliding "Bomber-Missile" intended for Mach 12. Dornberger supported Woods in his recommendations, which were adopted by the NACA's Executive Committee in July 1952. This gave them the status of policy, while the Air Force added its own support. This was significant because its budget was 300 times larger than that of the NACA.[567] The NACA alone lacked funds to build the X-15, but the Air Force could do this easily. It also covered the program's massive cost overruns. These took the airframe from $38.7 million to $74.5 million and the large engine from $10 million to $68.4 million, which was nearly as much as the airframe.[568]

The Air Force had its own test equipment at its Arnold Engineering Development Center (AEDC) at Tullahoma, TN, an outgrowth of the Theodore von Karman technical intelligence mission that Army Air Forces Gen. Henry H. "Hap" Arnold had sent into Germany at the end of the Second World War. The AEDC, with brand-new ground test and research facilities, took care to complement, not duplicate, the NACA's research facilities. It specialized in air-breathing and rocket-engine testing. Its largest installation accommodated full-size engines and provided continuous flow at Mach 4.75. But the X-15 was to fly well above this, to over Mach 6, highlighting the national facilities shortfall in hypersonic test capabilities existing at the time of its creation.[569]

While the Air Force had the deep pockets, the NACA—specifically Langley—conducted the research that furnished the basis for a design. This took the form of a 1954 feasibility study conducted by John Becker, assisted by structures expert Norris Dow, rocket expert Maxime Faget, configuration and controls specialist Thomas Toll, and test pilot James Whitten. Becker began by considering that during reentry the vehicle should point its nose in the direction of flight. This proved impossible, as the heating was too high. He then considered that the vehicle might alleviate the problem by using lift, obtained by raising the nose, and found that the thermal environment became far more manageable. He concluded that the craft should enter with its nose high, presenting its flat undersurface to the atmosphere. The Allen-Eggers paper was in print, and he later wrote: "it was obvious to us that what we were seeing here was a new manifestation of H. J. Allen's 'blunt-body' principle."[570]

To address the rigors of the daunting aerothermodynamic environment, Norris Dow selected Inconel X (a nickel alloy from International Nickel) as the temperature-resistant superalloy that was to serve for the aircraft structure. Dow began by ignoring heating and calculated the skin gauges needed only from considerations of strength and stiffness. Then he determined the thicknesses needed to serve as a heat sink. He found that the thicknesses that would suffice for the latter were nearly the same as those that would serve merely for structural strength. This meant that he could design his airplane and include heat sink as a bonus, with little or no additional weight. Inconel X was a wise choice; with a density of 0.30 pounds per cubic inch, a tensile strength of over 200,000 pounds per square inch (psi), and a yield strength of 160,000 psi, it was robust, and its melting temperature of over 2,500 °F ensured that the anticipated 1,200 °F surface temperatures would not weaken it.[571]
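The heat-sink sizing logic just described can be summarized in a single relation. The expression below is only a simplified sketch: it neglects radiation and conduction losses, and the specific heat is treated as an assumed representative value rather than a figure taken from Dow's study.

$$ t \;\approx\; \frac{Q}{\rho \, c_p \, \Delta T} $$

Here $t$ is the skin thickness required purely to absorb heat, $Q$ is the heat absorbed per unit of surface area over the flight, $\rho$ is the material density (0.30 pounds per cubic inch for Inconel X), $c_p$ is the specific heat, and $\Delta T$ is the allowable temperature rise, bounded by the roughly 1,200 °F surface-temperature limit. Dow's finding was that the thickness this relation demanded came out close to the gauge already dictated by strength and stiffness, so the heat-sink capacity came essentially free.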

Work at Langley also addressed the important issue of stability. Just then, in 1954, this topic was in the forefront because it had nearly cost the life of the test pilot Chuck Yeager. On the previous December 12, he had flown the X-1A to Mach 2.44 (approximately 1,650 mph). This exceeded the plane’s stability limits; it went out of control and plunged out of the sky. Only Yeager’s skill as a pilot had saved him and his airplane. The problem of stability would be far more severe at higher speeds.[572]

Analysis, confirmed by experiments in the 11-inch wind tunnel, had shown that most of the stability imparted by an aircraft's tail surfaces was produced by its wedge-shaped forward portion. The aft portion contributed little to the effectiveness because it experienced lower air pressure. Charles McLellan, another Langley aerodynamicist, now proposed to address the problem of hypersonic stability by using tail surfaces that would be wedge-shaped along their entire length. Subsequent tests in the 11-inch tunnel, as mentioned previously, confirmed that this solution worked. As a consequence, the size of the tail surfaces shrank from being almost as large as the wings to a more nearly conventional appearance.[573]

A schematic drawing of the X-15's internal layout. NASA.

This study made it possible to proceed toward program approval and the award of contracts both for the X-15 airframe and its powerplant, a 57,000-pound-thrust rocket engine burning a mix of liquid oxygen and anhydrous ammonia. But while the X-15 promised to advance the research airplane concept to over Mach 6, it demanded something more than the conventional aluminum and stainless steel structures of earlier craft such as the X-1 and X-2. Titanium was only beginning to enter use, primarily for reducing heating effects around jet engine exhausts and afterburners. Magnesium, which Douglas favored for its own high-speed designs, was flammable and lost strength at temperatures higher than 600 °F. Inconel X was heat-resistant, reasonably well known, and relatively easily worked. Accordingly, it was swiftly selected as the structural material of choice when Becker's Langley team assessed the possibility of designing and fabricating a rocket-boosted, air-launched hypersonic research airplane. The Becker study, completed in April 1954, chose Mach 6 as the goal and proposed to fly to altitudes as great as 350,000 feet. Both marks proved remarkably prescient: the X-15 eventually flew to 354,200 feet in 1963 and Mach 6.70 in 1967. That altitude was above 100 kilometers and well above the sensible atmosphere. Hence, at that early date, more than 3 years before Sputnik, Becker and his colleagues already were contemplating piloted flight into space.[574]

The X-15: Pioneering Piloted Hypersonics

North American Aviation won the contract to build the X-15. It first flew under power in September 1959, by which time an Atlas had hurled an RVX-2 nose cone to its fullest range. However, as a hypersonic experiment, the X-15 was a complete airplane. It thus was far more complex than a simple reentry body, and it took several years of cautious flight-testing before it reached its peak speed of above Mach 6, and its peak altitude as well.

The North American X-15 at NASA's Flight Research Center (now the Dryden Flight Research Center) in 1961. NASA.

Testing began with two so-called "Little Engines," a pair of vintage Reaction Motors XLR11s that had earlier served in the X-1 series and the Douglas D-558-2 Skyrocket. Using these, the X-15 topped the records of the earlier X-2, reaching Mach 3.50 and 136,500 feet. Starting in 1961, using the "Big Engine"—the Thiokol XLR99 with its 57,000 pounds of thrust—the X-15 flew to its Mach 6 design speed and 50+ mile design altitude, with test pilot Maj. Robert White reaching Mach 6.04 and NASA pilot Joseph Walker an altitude of 354,200 feet. After a landing accident, the second X-15 was modified with external tanks and an ablative coating, with Air Force Maj. William "Pete" Knight subsequently flying this variant to Mach 6.70 (4,520 mph) in 1967. However, it sustained severe thermal damage, partly as a result of inadequate understanding of the interactions of impinging hypersonic shock-on-shock flows. It never flew again.[575]

The X-15's cautious buildup proved a wise approach, for this gave leeway when problems arose. Unexpected thermal expansion leading to localized buckling and deformation showed up during early high-Mach flights. The skin behind the wing leading edge exhibited localized buckling after the first flight to Mach 5.3, but modifications to the wings eliminated hot spots and prevented subsequent problems, enabling the airplane to reach beyond Mach 6. In addition, a flight to Mach 6.04 caused a windshield to crack because of thermal expansion. This forced redesign of its frame to incorporate titanium, which has a much lower coefficient of expansion. The problem—a rare case in which Inconel caused rather than resolved a heating problem—was fixed by this simple substitution.[576]

Altitude flights brought their own problems, involving potentially dangerous auxiliary power unit (APU) failures. These issues arose in 1962 as flights began to reach well above 100,000 feet; the APUs began to experience gear failure after lubricating oil foamed and lost its lubricating properties. A different oil had much less tendency to foam; it now became standard. Designers also enclosed the APU gearbox within a pressurized enclosure. The gear failures ceased.[577]

The X-15 substantially expanded the use of flight simulators. These had been in use since the famed Link Trainer of the Second World War and by now incorporated analog computers, but they also took on a new role as they supported the development of control systems and flight equipment. Analog computers had been used in flight simulation since 1949. Still, in 1955, when the X-15 program began, it was not at all customary to use flight simulators to support aircraft design and development. But program managers turned to such simulators because they offered effective means to study new issues in cockpit displays, control systems, and aircraft handling qualities. A 1956 paper stated that simulation had "heretofore been considered somewhat of a luxury for high-speed aircraft," but now "has been demonstrated as almost a necessity," in all three axes, "to insure [sic] consistent and successful entries into the atmosphere." Indeed, pilots spent much more time practicing in simulators than they did in actual flight, as much as an hour in the simulator for every minute of actual flying time.[578]

The most important flight simulator was built by North American. Originally located in Los Angeles, it was moved to NASA's Flight Research Center in 1961 by Paul Bikle, the Center's Director. It replicated the X-15 cockpit and included actual hydraulic and control-system hardware. Three analog computers implemented equations of motion that governed translation and rotation of the X-15 about all three axes, transforming pilot inputs into instrument displays.[579]

The North American simulator became critical in training X-15 pilots as they prepared to execute specific planned flights. A particular mission might take little more than 10 minutes, from ignition of the main engine to touchdown on the lakebed, but a test pilot could easily spend 10 hours making practice runs in this facility. Training began with repeated trials of the normal flight profile with the pilot in the simulator cockpit and a ground controller close at hand. The pilot was welcome to recommend changes, which often went into the flight plan. Next came rehearsals of off-design missions: too much thrust from the main engine, too high a pitch angle when leaving the stratosphere.

Much time was spent practicing for emergencies. The X-15 had an inertial reference unit that used analog circuitry to display attitude, altitude, velocity, and rate of climb. Pilots dealt with simulated failures in this unit as they worked to complete the normal mission or, at least, to execute a safe return. Similar exercises addressed failures in the stability augmentation system. When the flight plan raised issues of possible flight instability, tests in the simulator used highly pessimistic assumptions concerning stability of the vehicle. Other simulations introduced in-flight failures of the radio or Q-ball multifunction sensor. Premature engine shutdown imposed a requirement for safe landing on an alternate lakebed that was available for emergency use.[580]

The simulations indeed had realistic cockpit displays, but they left out an essential feature: the g-loads, produced both by rocket thrust and by deceleration during reentry. In addition, a failure of the stability augmentation system during reentry could allow the airplane to oscillate in pitch and yaw. This changed the drag characteristics and imposed a substantial cyclical force.

To address such issues, investigators installed a flight simulator within the gondola of an existing centrifuge at the Naval Air Development Center in Johnsville, PA. The gondola could rotate on two axes while the centrifuge as a whole was turning. It not only produced g-forces; its g-forces increased during the simulated rocket burn. The centrifuge imposed such forces anew during reentry while adding a cyclical component to give the effect of an oscillation in yaw or pitch.[581]

There also were advances in pressure suits, which had been under development since the 1930s. An early pressure suit had already saved the life of Maj. Frank K. Everest during a high-altitude flight in the X-1, when the aircraft suffered cabin decompression from a cracked canopy. Marine test pilot Lt. Col. Marion Carl had worn another during a flight to 83,235 feet in the D-558-2 Skyrocket in 1953, as had Capt. Iven Kincheloe during his record flight to 126,200 feet in the Bell X-2 in 1956. But these early suits, while effective in protecting pilots, were almost rigid when inflated, nearly immobilizing them. In contrast, the David G. Clark Company, a girdle manufacturer, introduced a fabric that contracted in circumference while it stretched in length. An exchange between these effects created a balance that maintained a constant volume, preserving a pilot's freedom of movement. The result was the Clark MC-2 suit, which, in addition to serving the X-15, formed the basis for American spacesuit development from Project Mercury forward. Refined as the A/P22S-2, the X-15's suit became the standard high-altitude pressure suit for NASA and the Air Force. It formed the basis for the Gemini suit and, after 1972, was adopted by the U.S. Navy as well, subsequently being employed by pilots and aircrew in the SR-71, U-2, and Space Shuttle.[582]

The X-15 also accelerated development of specialized instrumentation, including a unique gimbaled nose sensor developed by Northrop. It furnished precise speed and positioning data by evaluation of dynamic pressure ("q" in aero engineering shorthand), and thus was known as the Q-ball. The Q-ball took the form of a movable sphere set in the nose of the craft, giving it the appearance of the enlarged tip of a ballpoint pen. "The Q-ball is a go-no go item," NASA test pilot Joseph Walker told Time magazine reporters in 1961, adding: "Only if she checks okay do we go."[583] The X-15 also incorporated "cold jet" hydrogen peroxide reaction controls for maintaining vehicle attitude in the tenuous upper atmosphere, when dynamic air pressure alone would be insufficient to permit adequate flight control functionality. When Iven Kincheloe reached 126,200 feet, his X-2 was essentially a free ballistic object, uncontrollable in pitch, roll, and yaw as it reached peak altitude and then began its descent. This situation made reaction controls imperative for the new research airplane, and the NACA (later NASA) had evaluated them on a so-called "Iron Cross" simulator on the ground and then in flight on the Bell X-1B and on a modified Lockheed F-104 Starfighter. They then proved their worth on the X-15 and, as with the Clark pressure suit, were incorporated on Mercury and subsequent American spacecraft.

The X-15 introduced a side stick flight controller that the pilot would utilize during acceleration (when under loads of approximately 3 g's), relying on a conventional fighter-type control column for approach and landing. The third X-15 had a very different flight control system from the other two, one that departed greatly from the now-standard stability-augmented hydromechanical systems carried by operational military and civilian aircraft. The third aircraft introduced a so-called "adaptive" flight control system, the MH-96. Built by Minneapolis Honeywell, the MH-96 relied on rate gyros, which sensed rates of motion in pitch, roll, and yaw. It also incorporated "gain," defined as the proportion between sensed rates of angular motion and a deflection of the ailerons or other controls. This variable gain, which changed automatically in response to flight conditions, functioned to maintain desired handling qualities across the spectrum of X-15 performance. This arrangement made it possible to blend reaction and aerodynamic controls on the same stick, with the blending occurring automatically in response to the values determined for gain as the X-15 flew out of the atmosphere and back again. Experience, alas, would reveal the MH-96 as an immature, troublesome system, one that, for all its ambition, posed significant headaches. It played an ultimately fatal role in the loss of X-15 pilot Maj. Michael Adams in 1967.[584]
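The variable-gain blending just described can be illustrated with a brief sketch. The Python fragment below is only a toy model of the idea, not the actual MH-96 logic: the adaptation rule, the threshold at which the reaction jets engage, and all names and numbers are assumptions made for illustration.

```python
# Toy model of variable-gain control blending, loosely inspired by the
# MH-96 description above. NOT the actual MH-96 algorithm; the adaptation
# rule, limits, and threshold below are illustrative assumptions.

def adapt_gain(gain, commanded_rate, sensed_rate, dt,
               step=0.5, gain_min=0.2, gain_max=10.0):
    """Raise the forward-loop gain when the sensed rate lags the commanded
    rate (low control-surface effectiveness) and lower it when the response
    is brisk, so handling qualities stay roughly constant."""
    error = abs(commanded_rate) - abs(sensed_rate)
    gain += step * error * dt                     # crude adaptation law (assumed)
    return min(max(gain, gain_min), gain_max)     # clamp to assumed limits

def blend_controls(gain, stick_input, gain_max=10.0, rcs_fraction=0.9):
    """Split one stick command between aerodynamic surfaces and reaction
    jets: when the adapted gain nears its ceiling, the surfaces are doing
    little, so the jets fire as well."""
    surface_cmd = gain * stick_input              # surface deflection command
    use_rcs = gain >= rcs_fraction * gain_max     # near-vacuum condition
    rcs_cmd = stick_input if use_rcs else 0.0     # on/off jet command
    return surface_cmd, rcs_cmd

# Example: outside the atmosphere the vehicle barely responds, the gain
# climbs toward its ceiling, and the reaction jets carry the pilot's command.
g = 1.0
for _ in range(200):
    g = adapt_gain(g, commanded_rate=5.0, sensed_rate=0.1, dt=0.05)
print(blend_controls(g, stick_input=0.5))
```

The essential point the sketch captures is that as aerodynamic effectiveness falls off in thin air, the adapted gain climbs, and the same stick input is increasingly carried out by the reaction controls.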

The three X-15s accumulated a total of 199 flights from 1959 through 1968. As airborne instruments of hypersonic research, they accumulated nearly 9 hours above Mach 3, close to 6 hours above Mach 4, and 87 minutes above Mach 5. Many concepts existed for X-15 derivatives and spinoffs, including using it as a second stage to launch small satellite-lofting boosters, modifying it with a delta wing and scramjet, and even making it the basis for some sort of orbital spacecraft; for a variety of reasons, NASA did not proceed with any of these. More significant, however, was the strong influence the X-15 exerted upon subsequent hypersonic projects, particularly the National Hypersonic Flight Research Facility (NHFRF, pronounced "nerf"), intended to reach Mach 8.

A derivative of the Air Force Flight Dynamics Laboratory’s X-24C study effort, NHFRF was also to cruise at Mach 6 for 40 seconds. A joint Air Force-NASA committee approved a proposal in July 1976 with an estimated program cost of $200 million, and NHFRF had strong support from NASA’s hypersonic partisans in the Langley and Dryden Centers. Unfortunately, its rising costs, at a time when the Shuttle demanded an ever-increasing proportion of the Agency’s budget and effort, doomed it, and it was canceled in September 1977. Overall, the X-15 set speed and altitude records that were not surpassed until the advent of the Space Shuttle.[585]

The X-20 Dyna-Soar

During the 1950s, as the X-15 was taking shape, a parallel set of initiatives sought to define a follow-on hypersonic program that could actually achieve orbit. They were inspired in large measure by the 1938-1944 Silbervogel ("Silver Bird") proposal of Austrian space flight advocate Eugen Sanger and his wife, mathematician Irene Sanger-Bredt, which greatly influenced postwar Soviet, American, and European thinking about hypersonics and long-range "antipodal" flight. Influenced by Sanger's work and urged onward by the advocacy of Walter Dornberger, Bell Aircraft Corporation in 1952 proposed the BoMi, intended to fly 3,500 miles. Bell officials gained funding from the Air Force's Wright Air Development Center (WADC) to study longer-range 4,000-mile and 6,000-mile systems under the aegis of Air Force project MX-2276.

Support took a giant step forward in February 1956, when Gen. Thomas Power, Chief of the Air Research and Development Command (ARDC, predecessor of Air Force Systems Command) and a future commander of Strategic Air Command, stated that the service should stop merely considering such radical craft and instead start building them. With this level of interest, events naturally moved rapidly. A month later, Bell received a study contract for Brass Bell, a follow-on Mach 15 rocket-lofted boost-glider for strategic reconnaissance. Power preferred another orbital glider concept, RoBo (for Rocket Bomber), which was to serve as a global strike system. To accelerate transition of hypersonics from the research to the operational community, the ARDC proposed its own concept, the Hypersonic Weapons Research and Development Supporting System (HYWARDS). With so many cooks in the kitchen, the Air Force needed a coordinated plan. An initial step came in December 1956, as Bell raised the velocity of Brass Bell to Mach 18. A month later, a group headed by John Becker, at Langley, recommended the same design goal for HYWARDS. RoBo still remained separate, but it emerged as a long-term project that could be operational by the mid-1970s.[586]

NACA researchers split along Center lines over the issue of what kind of wing design to employ for HYWARDS. At NACA Ames, Alfred Eggers and Clarence Syvertson emphasized achieving maximum lift. They proposed a high-wing configuration with a flat top, calculating its hypersonic lift-to-drag ratio (L/D) as 6.85 and measuring a value of 6.65 during hypersonic tunnel tests. Langley researchers John Becker and Peter Korycinski argued that Ames had the configuration "upside down." Emphasizing lighter weight, they showed that a flat-bottom Mach 18 shape gave a weight of 21,400 pounds, which rose only modestly at higher speeds. By contrast, the Ames "flat-top" weight was 27,600 pounds and rising steeply. NASA officials diplomatically described the Ames and Langley HYWARDS concepts respectively as "high L/D" and "low heating," but while the imbroglio persisted, there still was no acceptable design. It fell to Becker and Korycinski to break the impasse in August 1957, and they did so by considering heating. It was generally expected that such craft required active cooling, but Becker and his Langley colleagues found that a glider of global range achieved peak uncooled skin temperatures of 2,000 °F, which was survivable by using improved materials. Accordingly, the flat-bottom design needed no coolant, dramatically reducing both its weight and complexity.[587]

This was a seminal conclusion that reshaped hypersonic thinking and influenced all future development down to the Space Shuttle. In October 1957, coincident with the Soviet success with Sputnik, the ARDC issued a coordinated plan that anticipated building HYWARDS for research at 18,000 feet per second, following it with Brass Bell for reconnaissance at the same speed and then RoBo, which was to carry nuclear bombs into orbit. HYWARDS now took on the new name of Dyna-Soar, for "Dynamic Soaring,” an allusion to the Sanger-legacy skip-gliding hypersonic reentry. (It was later designated X-20.) To the NACA, it constituted a Round Three following the Round One X-1, X-2, and Skyrocket, and the Round Two X-15.

The flat-bottom configuration quickly showed that it was robust enough to accommodate flight at much higher speeds. In 1959, Herbert York, the Defense Director of Research and Engineering, stated that Dyna-Soar was to fly at 15,000 mph, lofted by the Martin Company's Titan I missile, though this was significantly below orbital speed. But during subsequent years the booster changed to the more capable Titan II and then to the powerful Titan III-C. With two solid-fuel boosters augmenting its liquid hypergolic main stage, it could easily boost Dyna-Soar to the 18,000 mph necessary for it to achieve orbit. A new plan of December 1961 dropped suborbital missions and called for "the early attainment of orbital flight."[588]

This 1957 Langley trade study shows the weight advantage of flat-bottom reentry vehicles at higher Mach numbers, a finding that led to abandonment of high-wing designs in favor of flat-bottom ones such as the X-20 Dyna-Soar and the Space Shuttle. NASA.

By then, though, Dyna-Soar was in deep political trouble. It had been conceived initially as a prelude to the boost-glider Brass Bell for reconnaissance and to the orbital RoBo for bombardment. But Brass Bell gave way to a purpose-built concept for a small piloted station, the Manned Orbiting Laboratory (MOL), which could carry more sophisticated reconnaissance equipment. (Ironically, though a team of MOL astronauts was selected, MOL itself likewise was eventually canceled.) RoBo, a strategic weapon, fell out of the picture completely, for the success of the solid-propellant Minuteman ICBM established the silo-launched ICBM as the Nation's prime strategic force, augmented by the Navy's fleet of Polaris-launching ballistic missile submarines.[589]

This full-size mockup of the X-20 gives an indication of its small, compact design. USAF.

In mid-1961, Secretary of Defense Robert S. McNamara directed the Air Force to justify Dyna-Soar on military grounds. Service advocates responded by proposing a host of applications, including orbital reconnaissance, rescue, inspection of Soviet spacecraft, orbital bombardment, and use of the craft as a ferry vehicle. McNamara found these rationalizations unconvincing but was willing to allow the program to proceed as a research effort, at least for the time being. In an October 1961 memo to President John F. Kennedy, he proposed to "re-orient the program to solve the difficult technical problems involved in boosting a body of high lift into orbit, sustaining man in it and recovering the vehicle at a designated place."[590] This reorientation gave the project 2 more years of life.

Then in 1963, he asked what the Air Force intended to do with it after using it to demonstrate maneuvering entry. He insisted he could not justify continuing the program if it was a dead-end effort with no ultimate purpose. But it had little potential utility, for it was not a cargo rocket, nor could it carry substantial payloads, nor could it conduct long-duration missions. And so, in December 1963, McNamara canceled it, after 6 years of development time, a Government contract investment of $410 million, the expenditure of 16 million man-hours by nearly 8,000 contractor personnel, 14,000 hours of wind tunnel testing, 9,000 hours of simulator runs, and the preparation of 3,035 detailed technical reports.[591]

Ironically, by the time of its cancellation, the X-20 was so far advanced that the Air Force had already set aside a block of serial numbers for the 10 production aircraft. Its construction was well underway, Boeing having completed an estimated 42 percent of design and fabrication tasks.[592] Though the X-20 never flew, portions of its principal purposes were fulfilled by other programs. Even before cancellation, the Air Force launched the first of several McDonnell Aerothermodynamic/elastic Structural Systems Environmental Test (ASSET) vehicles, hot-structure, radiatively cooled, flat-bottom cone-cylinder shapes sharing important configuration similarities with the Dyna-Soar vehicle. Slightly later, its Project PRIME demonstrated cross-range maneuvering after atmospheric entry. This used the Martin SV-5D lifting body, a vehicle differing significantly from the X-20 but one that complemented it nonetheless. In this fashion, the Air Force succeeded at least partially in obtaining lifting reentry data from winged vehicles and lifting bodies that widened the future prospects for reentry.

Hot Structures and Return from Space: X-20’s Legacy and ASSET

Dyna-Soar never flew, but it sharply extended both the technology and the temperature limits of hot structures and associated aircraft elements, at a time when the American space program was in its infancy.[593] The United States successfully returned a satellite from orbit in April 1959, while ICBM nose cones were still under test, when the Discoverer II test vehicle supporting development of the National Reconnaissance Office's secret Corona spy satellite returned from orbit. Unfortunately, it came down in Russian-occupied territory far removed from its intended recovery area near Hawaii. Still, it offered proof that practical hypersonic reentry and recovery were at hand.

An ICBM nose cone quickly transited the atmosphere, whereas recoverable satellite reentry took place over a number of minutes. Hence a satellite encountered milder aerothermodynamic conditions that imposed strong heat but brought little or no ablation. For a satellite, the heat of ablation, measured in British thermal units (BTU) per pound of protective material, was usually irrelevant. Instead, insulative properties were more significant: Teflon, for example, had poor ablative properties but was an excellent insulator.[594]

Production Dyna-Soar vehicles would have had a four-flight service life before retirement or scrapping, relying upon a hot structure composed of various materials, each with different but complementary properties. A hot structure typically used a strong material capable of withstanding intermediate temperatures to bear flight loads. Set off from it were outer panels of a temperature-resistant material that did not have to support loads but that could withstand greatly elevated temperatures, as high as 3,000 °F. In between was a lightweight insulator (in Dyna-Soar's case, Q-felt, a silica fiber from the firm of Johns Manville). It had a tendency to shrink, thus risking dangerous gaps where high heat could bypass it. But it exhibited little shrinkage above 2,000 °F and could withstand 3,000 °F. Preshrinking the material therefore qualified it for operational use.[595]

For its primary structure, Dyna-Soar used Rene 41, a nickel alloy that included chromium, cobalt, and molybdenum. Its use was pioneered by General Electric for hot-section applications in its jet engines. The alloy had room temperature yield strength of 130,000 psi, declining slightly at 1,200 °F, and was still strong at 1,800 °F. Some of the X-20’s panels were molybdenum alloy, which offered clear advantages for such hot areas as the wing leading edges. D-36 columbium alloy covered most other areas of the vehicle, including the flat underside of the wings.

These panels had to resist flutter, which brought both a risk of fatigue cracking and the danger of admitting superheated hypersonic flow that could destroy the internal structure within seconds. Because of the risks that hasty and ill-considered flutter testing posed to wind tunnels (a test model, for example, can disintegrate and damage the interior of the tunnel), X-20 flutter testing consumed 18 months of Boeing's time. Its people started testing at modest stress levels and reached levels that exceeded the vehicle's anticipated design requirements.[596]

The X-20's nose cap had to function in a thermal and dynamic pressure environment even more extreme than that experienced by the X-15's Q-ball. It was a critical item that faced temperatures of 3,680 °F, accompanied by a daunting peak heat flux of 143 BTU per square foot per second. Both Boeing and its subcontractor Chance Vought pursued independent approaches to development, resulting in two different designs. Vought built its cap of siliconized graphite with an insulating layer of temperature-resistant zirconium oxide ceramic tiles. Their melting point was above 4,500 °F, and they covered the cap's forward area, held in place by thick zirconium oxide pins. The Boeing design was simpler, using a solid zirconium oxide nose cap reinforced against cracking with two screens of platinum-rhodium wire. Like the airframe, the nose caps were rated for four orbital flights and reentries.[597]

Generally, the design of the X-20 reflected the thinking of Langley's John Becker and Peter Korycinski. It relied on insulation and radiation of the accumulated thermal load for primary thermal protection. But portions of the vehicle demanded other approaches, with specialized areas and equipment requiring specialized solutions. Ball bearings, facing a 1,600 °F thermal environment, were fabricated as small spheres of Rene 41 nickel alloy covered with gold. Antifriction bearings used titanium carbide with nickel as a binder. Antenna windows had to survive hot hypersonic flows yet be transparent to radio waves. A mix of oxides of cobalt, aluminum, and nickel gave a coating that showed a suitable emittance while furnishing the requisite temperature protection.

The pilot looked through five clear panes: three that faced forward and two on the sides. The three forward panes were protected by a jettisonable protective shield and could only be used below Mach 5 after reentry, but the side ones faced a less severe aerothermodynamic environment and were left unshielded. But could the X-20 be landed if the protective shield failed to jettison after reentry? NASA test pilot Neil Armstrong, later the first human to set foot upon the Moon, flew approaches using a modified Douglas F5D Skylancer. He showed it was possible to land the Dyna-Soar using only visual cues obtained through the side windows.

The cockpit, equipment bay, and a power bay were thermally isolated and cooled via a "water wall" using lightweight panels filled with a jelled water mix. The hydraulic system was cooled as well. To avoid overheating and bursting problems with conventional inflated rubber tires, Boeing designed the X-20 to incorporate tricycle landing skids with wire brush landing pads.[598] Dyna-Soar, then, despite never having flown, significantly advanced the technology of hypersonic aerospace vehicle design. Its contributions were many and can be illustrated by examining the confidence with which engineers could approach the design of critical technical elements of a hypersonic craft in 1958 (the year North American began fabricating the X-15) and 1963 (the year Boeing began fabricating the X-20):[599]

TABLE 1
INDUSTRY HYPERSONIC "DESIGN CONFIDENCE" AS MEASURED BY ACHIEVABLE DESIGN TEMPERATURE CRITERIA, °F

ELEMENT              X-15     X-20
Nose cap             3,200    4,300
Surface panels       1,200    2,750
Primary structure    1,200    1,800
Leading edges        1,200    3,000
Control surfaces     1,200    1,800
Bearings             1,200    1,800

In short, within the 5 years that took the X-20 from a paper study to a project well underway, the "art of the possible" in hypersonics witnessed a one-third increase in possible nose cap temperatures, a more than doubling of the acceptable temperatures of surface panels and leading edges, and a 50-percent increase in the acceptable temperatures of primary structures, control surfaces, and bearings.

The winddown and cancellation of Dyna-Soar coincided with the first flight tests of the much smaller but nevertheless still very technically ambitious McDonnell ASSET hypersonic lifting reentry test vehicles. Lofted down the Atlantic Test Range on modified Thor and Thor-Delta boosters, they demonstrated reentry at over Mach 18. ASSET dated to 1959, when Air Force hypersonic advocates advanced it as a means of assessing the accuracy of existing hypersonic theory and predictive techniques. In 1961, McDonnell Aircraft, a manufacturer of fighter aircraft and also of the Project Mercury spacecraft, began design and fabrication of ASSET's small, sharply swept, flat-bottom delta-wing boost-gliders. They had a length of 69 inches and a span of 55 inches.

Though in many respects they resembled the soon-to-be-canceled X-20, unlike that larger, crewed transatmospheric vehicle, the ASSET gliders were more akin to lifting nose cone shapes. Instead of the X-20's primary reliance upon Rene 41, the ASSET gliders largely used columbium alloys, with molybdenum alloy on their forward lower heat shield, graphite wing leading edges, various insulative materials, and columbium, molybdenum, and graphite coatings as needed. There were also three nose caps: one fabricated from zirconium oxide rods, another from tungsten coated with thorium, and a third of siliconized graphite coated with zirconium oxide. Though all six ASSETs looked alike, they were built in two differing variants: four Aerothermodynamic Structural Vehicles (ASV) and two Aerothermodynamic Elastic Vehicles (AEV). The former reentered from higher velocities (between 16,000 and 19,500 feet per second) and altitudes (from 202,000 to 212,000 feet), necessitating use of two-stage Thor-Delta boosters. The latter (only one of which flew successfully) used a single-stage Thor booster and reentered at 13,000 feet per second from an altitude of 173,000 feet. It was a hypersonic flutter research vehicle, analyzing as well the behavior of a trailing-edge flap representing a hypersonic control surface. Both the ASV and AEV flew with a variety of experimental panels installed at various locations and fabricated by Boeing, Bell, and Martin.[600] The ASSET program conducted six flights between September 1963 and February 1965, all successful save for the March 1964 launch. Though intended for recovery from the Atlantic, only one survived the rigors of parachute deployment, descent, and plunging into the ocean. But that survivor, the ASV-3, proved to be in excellent condition, with the builder, International Harvester, rightly concluding it "could have been used again."[601] ASV-4, the best flight flown, was also the last one, with the final flight-test report declaring that it returned "the highest quality data of the ASSET program." It flew at a peak speed of Mach 18.4, including a hypersonic glide that covered 2,300 nautical miles.[602]

Overall, the ASSET program scored a host of successes. It was all the more impressive for the modest investment made in its development: just $21.2 million. It furnished the first proof of the magnitude and seriousness of upper-surface leeside heating and the dangers of hypersonic flow impingement into interior structures. It dealt with practical issues of fabrication, including fasteners and coatings. It contributed to understanding of hypersonic flutter and of the use of movable control surfaces. It also demonstrated successful use of an attitude-adjusting reaction control system, in near vacuum and at speeds much higher than those of the X-15. It complemented Dyna-Soar and left the aerospace industry believing that hot structure design technology would be the normative technical approach taken on future launch vehicles and orbital spacecraft.[603]

TABLE 2
MCDONNELL ASSET FLIGHT TEST PROGRAM

DATE             VEHICLE    BOOSTER       VELOCITY        ALTITUDE    RANGE
                                          (FEET/SECOND)   (FEET)      (NAUTICAL MILES)
Sept. 18, 1963   ASV-1      Thor          16,000          205,000     987
Mar. 24, 1964    ASV-2      Thor-Delta    18,000          195,000     1,800
July 22, 1964    ASV-3      Thor-Delta    19,500          225,000     1,830
Oct. 27, 1964    AEV-1      Thor          13,000          168,000     830
Dec. 8, 1964     AEV-2      Thor          13,000          187,000     620
Feb. 23, 1965    ASV-4      Thor-Delta    19,500          206,000     2,300

Hypersonic Aerothermodynamic Protection and the Space Shuttle

Certainly over much of the Shuttle's early conceptual period, advocates thought such logistical transatmospheric aerospace craft would employ hot structure thermal protection. But undertaking such structures on large airliner-size vehicles proved troublesome and thus premature. Then, as though given a gift, NASA learned that Lockheed had built a pilot plant and could mass-produce silica "tiles" that could be attached to a conventional aluminum structure, an approach far more appealing than designing a hot structure. Accordingly, when the Agency undertook development of the Space Shuttle in the 1970s, it selected this approach, meaning that the new Shuttle was, in effect, a simple aluminum airplane. Not surprisingly, Lockheed received a NASA subcontract in 1973 for the Shuttle's thermal-protection system.

Lockheed had begun its work more than a decade earlier, when investigators at Lockheed Missiles and Space began studying ceramic fiber mats, filing a patent on the technology in December 1960. Key people included R. M. Beasley, Ronald Banas, Douglas Izu, and Wilson Schramm. By 1965, subsequent Lockheed work had led to LI-1500, a material that was 89 percent porous and weighed 15 pounds per cubic foot (lb/ft3). Thicknesses of no more than an inch protected test surfaces during simulations of reentry heating. LI-1500 used methyl methacrylate (Plexiglas), which volatilized when hot, producing an outward flow of cool gas that protected the heat shield, though also compromising its reusability.[604]

Lockheed's work coincided with NASA plans in 1965 to build a space station as its main post-Apollo venture and, consequently, with the first great wave of interest in designing practical logistical Shuttle-like spacecraft to fly between Earth and the orbital stations. These typically were conceived as large winged two-stage-to-orbit systems with fly-back boosters and orbital spaceplanes. Lockheed's Maxwell Hunter devised an influential design, the Star Clipper, with two expendable propellant tanks and LI-1500 thermal protection.[605] The Star Clipper also was large enough to benefit from the Allen-Eggers blunt-body principle, which lowered its temperatures and heating rates during reentry. This made it possible to dispense with the outgassing impregnant, permitting use—and, more importantly, reuse—of unfilled LI-1500. Lockheed also introduced LI-900, a variant of LI-1500 with a porosity of 93 percent and a weight of only 9 pounds per cubic foot. As insulation, both LI-900 and LI-1500 were astonishing. Laboratory personnel found that they could heat a tile in a furnace until it was white hot, remove it, allow its surface to cool for a couple of minutes, and pick it up at its edges with their fingers, with its interior still glowing at white heat.[606]

Previous company work had amounted to general materials research. But Lockheed now understood in 1971 that NASA wished to build the Shuttle without simultaneously proceeding with the station, opening a strong possibility that the company could participate. The program had started with a Phase A preliminary study effort, advancing then to Phase B, which was much more detailed. Hot structures were initially ascendant but posed serious challenges, as NASA Langley researchers found when they tried to build a columbium heat shield suitable for the Shuttle. The exercise showed that despite the promise of reusability and long life, coatings were fragile and damaged easily, leading to rapid oxygen-induced embrittlement at high temperatures. Unprotected columbium oxidized particularly readily and, when hot, could burst into flame. Other refractory metals were available, but they were little understood because they had been used mostly in turbine blades.

Even titanium amounted to something of a black art. Only one firm, Lockheed, had significant experience with a titanium hot structure. That experience came from the Central Intelligence Agency-sponsored Blackbird strategic reconnaissance program, so most of the pertinent shop-floor experience was classified. The aerospace community knew that Lockheed had experienced serious difficulties in learning how to work with titanium, which for the Shuttle amounted to an open invitation to difficulties, delays, and cost overruns.

The complexity of a hot structure—with large numbers of clips, brackets, standoffs, frames, beams, and fasteners—also militated against its use. Each of the many panel geometries needed its own structural analysis, one that was to show with confidence that the panel could withstand creep, buckling, flutter, or stress under load, and in the early computer era this posed daunting analytical challenges. Hot structures were also known generally to have little tolerance for "overtemps," in which temperatures exceeded the structure's design point.[607]

Thus, having taken a long look at hot structures, NASA embraced the new Lockheed pilot plant and gave close examination to Shuttle designs that used tiles, which were formally called Reusable Surface Insulation (RSI). Again, the choice of hot structures versus RSI reflected the deep pockets of the Air Force, for hot structures were costly and complex. But RSI was inexpensive, flexible, and simple. It suited NASA's budget while hot structures did not, so the Agency chose it.

In January 1972, President Richard M. Nixon approved the Shuttle as a program, thereby raising it to the level of a Presidential initiative. Within days, Dale Myers, a senior official, announced that NASA had made the basic decision to use RSI. The North American Rockwell concept that won the $2.6 billion prime contract in July therefore specified RSI as well—but not Lockheed's. North American Rockwell's version came from General Electric and was made from mullite.[608]

Which was better, the version from GE or the one from Lockheed? Only tests would tell—and exposure to temperature cycles of 2,300 °F gave Lockheed a clear advantage. NASA then added acoustic tests that simulated the loud roars of rocket flight. This led to a "sudden-death shootout,” in which competing tiles went into single arrays at NASA Johnson. After 20 cycles, only Lockheed’s entrants remained intact. In separate tests, Lockheed’s LI-1500 withstood 100 cycles to 2,500 °F and survived a thermal overshoot to 3,000 °F as well as an acoustic overshoot to 174 decibels (dB).

Lockheed won the thermal-protection subcontract in 1973, with NASA specifying LI-900 as the baseline RSI. The firm responded by preparing to move beyond the pilot-plant level and to construct a full-scale production facility in Sunnyvale, CA. With this, tiles entered the mainstream of thermal protection systems available for spacecraft design, in much the same way that blunt bodies and ablative approaches had before them, first flying into space aboard the Space Shuttle Columbia in April 1981. But getting them operational and into space was far from easy.[609]