
The High-Speed Environment

During World War II, the whole of aeronautics used aluminum. There was no hypersonics; the very word did not exist, for it took until 1946 for the investigator Hsue-shen Tsien to introduce it. Germany’s V-2 was flying at Mach 5, but its nose cone was of mild steel, and no one expected that this simple design problem demanded a separate term for its flight regime.[1018]

A decade later, aeronautics had expanded to include all flight speeds because of three new engines: the liquid-fuel rocket, the ramjet, and the variable-stator turbojet. The turbojet promised power beyond Mach 3, while the ramjet proved useful beyond Mach 4. The Mach 6 X-15 was under contract. Intermediate-range missiles were in development, with ranges of 1,200 to 1,700 miles, and people regarded intercontinental missiles as preludes to satellite launchers.

A common set of descriptions presents the flight environments within which designers must work. Well beyond Mach 3, engineers accommodate aerodynamic heating through materials substitutions, and the aircraft themselves continue to accelerate and cruise much as they do at lower speeds. Beyond Mach 4, however, cruise becomes infeasible because of heating. The world airspeed record for air-breathing flight, one that stood for nearly the next half century, was set in 1958 by the Lockheed X-7, built of 4130 steel, at Mach 4.31 (2,881 mph). The vehicle had flown successfully at Mach 3.95, but it failed structurally in flight at Mach 4.31, and no aircraft has since approached such performance.[1019]

No aircraft has ever cruised at Mach 5, and an important reason involves structures and materials. “If I cruise in the atmosphere for 2 hours,” said Paul Czysz of McDonnell-Douglas, “I have a thousand times the heat load into the vehicle that the Shuttle gets on its quick transit of the atmosphere.”[1020] Aircraft indeed make brief visits to such speed regimes, but they don’t stay there; the best approach is to pop out of the atmosphere and then return, the hallmark of a true transatmospheric vehicle.

At Mach 4, the dominant concern is aerodynamic heating. At higher Mach numbers, other effects appear. A reentering intercontinental ballistic missile (ICBM) nose cone, at speeds above Mach 20, has enough kinetic energy to vaporize 5 times its weight in iron. Temperatures behind its bow shock reach 9,000 kelvins (K), hotter than the surface of the Sun. The research physicist Peter Rose has written that this velocity would be “large enough to dissociate all the oxygen molecules into atoms, dissociate about half of the nitrogen, and thermally ionize a considerable fraction of the air.”[1021]
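The vaporization figure can be checked with a rough order-of-magnitude calculation. The numbers below are assumptions, not from the text: a reentry speed of about 7.2 km/s for Mach 20-plus flight at altitude, and roughly 6 MJ/kg for the latent heat of vaporization of iron.

```python
# Back-of-envelope check: kinetic energy per kilogram of a reentering nose
# cone versus the energy needed to vaporize a kilogram of iron.
V = 7.2e3            # reentry speed in m/s (assumed)
L_VAP_IRON = 6.1e6   # latent heat of vaporization of iron, J/kg (approximate)

ke_per_kg = 0.5 * V**2            # kinetic energy per kilogram of vehicle
ratio = ke_per_kg / L_VAP_IRON    # kilograms of iron vaporized per kilogram

print(f"kinetic energy: {ke_per_kg / 1e6:.1f} MJ/kg")       # ~25.9 MJ/kg
print(f"iron vaporized: {ratio:.1f} kg per kg of vehicle")  # ~4, near the quoted 5
```

Heating during deceleration and ionization losses are ignored; the point is only that the ratio comes out in the neighborhood of the quoted factor of 5.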

Aircraft thus face a simple rule: they can cruise at speeds up to Mach 4 if built with suitable materials, but they cannot cruise at higher speeds; faster flight means brief transits rather than sustained cruise. The extreme conditions of such flight apply not only to entry into Earth’s atmosphere but also to entry into the atmosphere of Jupiter, which is far more demanding but which an entry probe of the Galileo spacecraft investigated in 1995, at Mach 50.[1022]

Other speed limits become important in the field of wind tunnel simulation. The Government’s first successful hypersonic wind tunnel was John Becker’s 11-inch facility, which entered service in 1947. It approached Mach 7, with compressed air giving run times of 40 seconds.[1023] A current facility, which is much larger and located at the National Aeronautics and Space Administration (NASA) Langley Research Center, is the Eight-Foot High-Temperature Tunnel, which also uses compressed air and operates near Mach 7.

The reason for such restrictions involves a fundamental limitation of compressed air, which liquefies if it expands too much in seeking higher speeds. Higher speeds indeed are achievable, but only by creating shock waves within an instrument for periods measured in milliseconds. Hence, the field of aerodynamics observes an experimental speed limit of Mach 7, which describes its wind tunnels, and an operational speed limit of Mach 4, within which cruising flight remains feasible. Compared with these velocities, the usual definition of hypersonics, describing flight beyond Mach 5, is seen to describe nothing in particular.
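The liquefaction limit follows from the isentropic expansion of a perfect gas: the total temperature of the supply air fixes how cold the test-section static flow becomes. The sketch below is a minimal illustration; the 80 K condensation threshold is an assumed round number (the constituents of air liquefy near 77 to 90 K at atmospheric pressure), and real condensation onset also depends on static pressure.

```python
# Why expanding compressed air to high Mach numbers risks liquefaction:
# static temperature falls as T = T0 / (1 + 0.5*(gamma - 1)*M^2).
GAMMA = 1.4

def static_temp(total_temp_k: float, mach: float) -> float:
    """Static temperature after isentropic expansion to a given Mach number."""
    return total_temp_k / (1.0 + 0.5 * (GAMMA - 1.0) * mach**2)

for mach in (4, 7, 10):
    factor = 1.0 + 0.5 * (GAMMA - 1.0) * mach**2
    needed = 80.0 * factor   # supply temperature needed to stay above ~80 K
    print(f"Mach {mach:2d}: 300 K supply air cools to {static_temp(300.0, mach):5.1f} K; "
          f"supply must exceed ~{needed:4.0f} K")
```

At Mach 7 the required supply temperature is already near 900 K, which is why heated, compressed air sufficed for Becker’s tunnel but not for much higher Mach numbers.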

Project 680J: Survivable Flight Control System YF-4E

In mid-1969, modifications began to convert the prototype McDonnell-Douglas YF-4E (USAF serial No. 62-12200) for the SFCS program. A quadruple-redundant analog computer-based three-axis fly-by-wire flight control system with integrated hydraulic servo-actuator packages was incorporated, and side stick controllers were added to both the front and back cockpits. Roll control was pure fly-by-wire with no mechanical backup. For initial testing, the Phantom’s mechanical flight control system was retained in the pitch and yaw axes as a safety backup. On April 29, 1972, McDonnell-Douglas test pilot Charles P. “Pete” Garrison flew the SFCS YF-4E for the first time from the McDonnell-Douglas factory at Lambert Field in St. Louis, MO. The mechanical flight control system was used for takeoff, with the pilot switching to the fly-by-wire system during climb-out. The aircraft was then flown to Edwards AFB for a variety of additional tests, including low-altitude supersonic flights. After the first 27 flights, which included 23 hours in the full three-axis fly-by-wire configuration, the mechanical flight control system was disabled. First flight in the pure fly-by-wire configuration occurred January 22, 1973. The SFCS YF-4E flew as a pure fly-by-wire aircraft for the remainder of its flight-test program, ultimately completing over 100 flights.[1138]

Whereas the earlier phases of the flight-test effort were primarily flown by McDonnell-Douglas test pilots, the next aspect of the SFCS program was focused on an Air Force evaluation of the operational suitability of fly-by-wire and an assessment of the readiness of the technology for transition into new aircraft designs. During this phase, 15 flights were accomplished by two Air Force test pilots (Lt. Col. C. W. Powell and Maj. R. C. Ettinger), who concluded that fly-by-wire was indeed ready and suitable for use in new designs. They also noted that flying qualities were generally excellent, especially during takeoffs and landings, and that the pitch transient normally encountered in the F-4 during rapid deceleration from supersonic to subsonic flight was nearly eliminated. Another aspect of the flight-test effort involved so-called technology transition and demonstration flights in the SFCS aircraft. At this time, the Air Force had embarked on the Lightweight Fighter (LWF) program. One of the two companies developing flight demonstrator aircraft (General Dynamics) had elected to use fly-by-wire in its new LWF design (the YF-16). A block of 11 flights in the SFCS YF-4E was allocated to three pilots assigned to the LWF test force at Edwards AFB (Lt. Col. Jim Ryder, Maj. Walt Hersman, and Maj. Mike Clarke). Based on their experiences flying the SFCS YF-4E, the LWF test force pilots were able to provide valuable inputs into the design, development, and flight test of the YF-16, directly contributing to the dramatic success of that program. An additional 10 flights were allocated to another 10 pilots, who included NASA test pilot Gary E. Krier and USAF Maj. Robert Barlow.[1139] Earlier, Krier had piloted the first flight of a digital fly-by-wire (DFBW) flight control system in the NASA DFBW F-8C on May 25, 1972. That event marked the first time that a piloted aircraft had been flown purely using a fly-by-wire flight control system without any mechanical backup provisions. Barlow, as a colonel, would command the Air Force Flight Dynamics Laboratory during execution of several important fly-by-wire flight research efforts. The Air Force YF-16 and the NASA DFBW F-8 programs are discussed in following sections.

Power-By-Wire Testbed

During 1997, NASA Dryden evaluated a single electrohydrostatic actuator installation on the NASA F-18 Systems Research Aircraft (SRA), with the primary goal being the flight demonstration of power-by-wire technology on a single primary flight control surface. The electrohydrostatic actuator, provided by the Air Force, replaced the F-18’s standard left aileron actuator and was evaluated throughout the aircraft’s flight envelope out to speeds of Mach 1.6. Numerous mission profiles were accomplished that included a full series of aerobatic maneuvers. The electrohydrostatic actuator accumulated 23.5 hours of flight time on the F-18 SRA between January and July 1997. It performed as well as the standard F-18 actuator and was shown to have more load capability than required by the aileron actuator specification for the aircraft.[1188]

At about the same time, a Joint Strike Fighter/Integrated Subsystems Technology program had been formed to reduce the risk of selected technology candidates, in particular the power-by-wire approach that was intended to replace cumbersome hydraulic actuation systems with all-electrical systems for flight surface actuation. A key to this effort was the AFTI F-16, which was modified to replace all of the standard hydraulic actuators on the primary flight control surfaces with electrohydrostatic actuators (EHAs) to operate the flaperons, horizontal tails, and rudder. Each electrohydrostatic actuator uses an internal electric motor to drive an integral hydraulic pump; it thus relies on local hydraulics for force transmission (similar to the approach used with the Powered Flight Control Units on the Vickers VC10 aircraft discussed earlier).[1189]

In a conventional F-16, the digital fly-by-wire flight control system sends out electrical command signals to each of the flight control actuators. These electrical signals drive the control valves (located with the actuators) that schedule the fluid from the high-pressure hydraulic pump to position the flight control surfaces. Dual engine-driven 3,000 pounds per square inch (psi) hydraulic systems power each primary control surface actuator to drive the control surfaces to the desired position. The standard F-16 hydraulic actuators operate continuously at 3,000 psi, and power is dumped into the actuators whether it is needed or not.[1190] In straight and level flight (where most aircraft, including even high-performance fighters, operate most of the time), the actual power requirement of the actuation system is low (only about 500 watts per actuator), and excess energy is dissipated as heat and is transferred into the fuel system.[1191]

With the electrohydrostatic power design tested in the AFTI/F-16, the standard fly-by-wire flight control system was relatively unchanged. However, the existing F-16 hydraulic power system was removed and replaced by a new power-by-wire system, consisting of an engine-driven Hamilton Sundstrand dual 270-volt direct current (DC) electrical power generation system (to provide redundancy) and Parker Aerospace electrohydrostatic actuators on the flaperons, rudder, and horizontal stabilizer. The new electrical system powers five dual power electronics units, one for each flight control surface actuator. Each power electronics unit regulates the DC electrical power that drives dual motor/pumps that are self-contained in each electrohydrostatic actuator. The dual motor/pumps convert electrical power into hydraulic power, allowing the piston on the actuators to move the control surfaces. The electrohydrostatic actuators operate at pressures ranging from 300 to 3,000 psi, providing power only on demand and generating much less heat. An electrical distribution and electrical actuation system simplifies secondary power and thermal management systems, because the need to provide secondary and emergency backup sources of hydraulic power for the flight control surfaces is eliminated. The electrohydrostatic system also provides more thermal margin, which can be applied to cooling other high-demand systems (such as avionics and electronic warfare), or, alternatively, the thermal management system weight and volume can be reduced, making new aircraft designs smaller, lighter, and more affordable. Highly integrated electrical subsystems, including power-by-wire, reportedly could reduce takeoff weight by 6 percent, vulnerable area by 15 percent, procurement cost by 5 percent, and total life-cycle cost by 2 to 3 percent compared with current fighters, based on Air Force and industry studies. The power-by-wire approach is now being used in the Lockheed Martin F-35 Lightning II, with the company estimating a reduction in aircraft weight of as much as 700 pounds because of weight reductions in the hydraulic system, the secondary power system, and the thermal management system, made possible because the electrical power-by-wire system produces less heat than the traditional hydraulic system that it replaces.[1192]
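The thermal advantage of power-on-demand can be illustrated with a toy duty-cycle model. Only the roughly 500-watt cruise demand per actuator comes from the text; the peak-power sizing, maneuvering fraction, and sortie length below are hypothetical.

```python
# Illustrative comparison of waste heat per actuator: a constant-pressure
# hydraulic supply (full power always delivered) versus an on-demand
# electrohydrostatic actuator (power metered to match demand; conversion
# losses ignored for simplicity).
CRUISE_DEMAND_W = 500.0     # per-actuator need in level flight (from text)
PEAK_DEMAND_W = 5_000.0     # hypothetical sizing point for maneuvering
MANEUVER_FRACTION = 0.05    # hypothetical share of a sortie spent maneuvering
SORTIE_HOURS = 2.0

def waste_heat_kwh(supplied_cruise_w: float, supplied_maneuver_w: float) -> float:
    """Energy supplied minus energy usefully demanded, in kWh per actuator."""
    t_man = SORTIE_HOURS * MANEUVER_FRACTION
    t_cru = SORTIE_HOURS - t_man
    supplied = supplied_cruise_w * t_cru + supplied_maneuver_w * t_man
    demanded = CRUISE_DEMAND_W * t_cru + PEAK_DEMAND_W * t_man
    return (supplied - demanded) / 1000.0

print(f"constant-pressure: {waste_heat_kwh(PEAK_DEMAND_W, PEAK_DEMAND_W):.2f} kWh wasted")
print(f"on-demand (EHA):   {waste_heat_kwh(CRUISE_DEMAND_W, PEAK_DEMAND_W):.2f} kWh wasted")
```

However crude, the model shows why most of the heat load disappears when power is delivered only when the surfaces actually move.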

The modified power-by-wire AFTI/F-16 was the first piloted aircraft of any type to fly with a totally electric control surface actuation system with no hydraulic or mechanical backup flight control capability of any kind. It was designed to have the same flight control system responses as an unmodified F-16. After the first power-by-wire AFTI/F-16 flight on October 24, 2000, at Fort Worth, Lockheed Martin test pilot Steve Barter stated that aircraft handling qualities with the power-by-wire modifications were indistinguishable from those of the unmodified AFTI/F-16. The aircraft was subsequently flown about 10 times, with flight control effectiveness of the power-by-wire system demonstrated during supersonic flight. Test pilots executed various flying quality maneuvers, including high-g turns, control pulses (in pitch, roll, and yaw), doublet inputs, and sideslips. The tests also included simulated low-altitude attack missions and an evaluation of the electrohydrostatic actuator and generator subsystems and their thermal behavior under mission loads.[1193]

NASA Dryden hosted the AFTI/F-16 program for 16 years, from 1982 to 1998. During that time, personnel from Dryden composed 50 percent of the AFTI joint test team. Dryden pilots who flew the AFTI/F-16 included Bill Dana, Dana Purifoy, Jim Smolka, Rogers Smith, and Steve Ishmael. Dryden responsibilities, in addition to its host role, included flight safety, operations, and maintenance. Mark Skoog, who served as the USAF AFTI/F-16 project manager for many years and later became a NASA test pilot, commented: “AFTI had the highest F-16 sortie success rate on base, due to Dryden maintenance personnel having tremendous expertise in tailoring their operations to the uniqueness of the vehicle. That includes all the other F-16s based at Edwards during those years, none of which were nearly as heavily modified as the AFTI.”[1194] A good summary of the AFTI/F-16’s accomplishments was provided by NASA test pilot Dana Purifoy: “Flying AFTI was a tremendous opportunity. The aircraft pioneered many important technologies including glass cockpit human factors, automated ground collision avoidance, integrated night vision capability and on-board data link operations. All of these technologies are currently being implemented to improve the next generation of both civil and military aircraft.”[1195] The AFTI F-16’s last flight at Dryden was on November 4, 1997. Over a period of 15 years, it made over 750 flights and was flown by 23 pilots from the U.S. Air Force, NASA, the U.S. Marine Corps, and the Swedish Air Force. The AFTI F-16 then served as an Air Force technology testbed. Experience and lessons learned were used to help develop the production DFBW flight control system used in the F-16. The F-16, the F-22, and the F-35, in particular, directly benefited from AFTI/F-16 research and technology maturation efforts. After 22 years as a research aircraft for NASA and the Air Force, the AFTI F-16 was flown to Wright-Patterson AFB, OH, on January 9, 2001, for display at the Air Force Museum.[1196]

Self-Repairing Flight Control System

The Self-Repairing Flight Control System (SRFCS) consists of software integrated into an aircraft’s digital flight control system to detect failures of, or damage to, the aircraft control surfaces. In the event of control surface damage, the remaining control surfaces are automatically reconfigured to maintain control, enabling pilots to complete their mission and land safely. The program, sponsored by the U.S. Air Force, demonstrated the ability of a flight control system to identify the failure of a control surface and reconfigure commands to other control devices, such as ailerons, rudders, elevators, and flaps, to continue the aircraft’s mission or allow it to be landed safely. As an example, if the horizontal elevator were damaged or failed in flight, the SRFCS would diagnose the failure and determine how the remaining flight control surfaces could be repositioned to compensate for the damaged or inoperable control surface. A visual warning to the pilot explained the type of failure that occurred and provided revised aircraft flight limits, such as reduced airspeed, angle of attack, and maneuvering loads. The SRFCS also had the capability of identifying failures in electrical, hydraulic, and mechanical systems. Built-in test and sensor data provided a diagnostic capability and identified failed components or system faults for subsequent ground maintenance repair. System malfunctions on an aircraft with a SRFCS can be identified and isolated at the time they occur and then repaired as soon as the aircraft is on the ground, eliminating lengthy postflight maintenance troubleshooting.[1267]
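The sources do not detail the SRFCS reconfiguration algorithm itself. A minimal sketch of one standard approach is shown below: redistribute the commanded moments over the remaining surfaces with a pseudo-inverse of the control-effectiveness matrix once a surface is flagged as failed. The matrix values and surface set are purely illustrative.

```python
import numpy as np

# B maps surface deflections [left aileron, right aileron, elevator, rudder]
# to [roll, pitch, yaw] moments; the numbers are made up for illustration.
B = np.array([
    [ 4.0, -4.0,  0.0,  0.0],   # roll
    [ 1.0,  1.0,  6.0,  0.0],   # pitch
    [ 0.0,  0.0,  0.0,  5.0],   # yaw
])

def allocate(moment_cmd: np.ndarray, failed: tuple = ()) -> np.ndarray:
    """Least-squares surface deflections achieving the demanded moments,
    with any failed surface contributing nothing."""
    B_eff = B.copy()
    for idx in failed:
        B_eff[:, idx] = 0.0      # failed surface produces no moment
    return np.linalg.pinv(B_eff) @ moment_cmd

demand = np.array([2.0, 3.0, 0.5])                  # demanded [roll, pitch, yaw]
print("all surfaces healthy:", allocate(demand).round(3))
print("elevator (index 2) out:", allocate(demand, failed=(2,)).round(3))
```

With the elevator zeroed out, the pitch demand automatically shifts onto the ailerons’ symmetric deflection, which is the flavor of reconfiguration the SRFCS demonstrated.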

The SRFCS was flown 25 times on the HIDEC F-15 at NASA Dryden between December 1989 and March 1990, with somewhat mixed results. The maintenance diagnostics aspect of the system was a general success, but there were frequent failures with the SRFCS. Simulated control system failures were induced, with the SRFCS correctly identifying every failure that it detected. However, it only sensed induced control system failures 61 percent of the time. The overall conclusion was that the SRFCS concept was promising, but it needed more development if it was to be successfully implemented into production aircraft.

NASA test pilot Jim Smolka flew the first SRFCS flight, on December 12, 1989, with test engineer Gerard Schkolnik in the rear cockpit; other SRFCS test pilots were Bill Dana and Tom McMurtry.[1268]

Damage-Tolerant Fan Casing

While most eyes were on the big picture of making major engine advancements through the years, some very specific problems were addressed with programs that are just as interesting to consider as the larger research endeavors. The casings that surround the jet engine’s turbomachinery are a case in point.

With the 1989 crash of United Airlines Flight 232 at Sioux City, IA, aviation safety officials became more interested in finding new materials capable of containing the shrapnel created when a jet engine’s blade or other component breaks free. In the case of the DC-10 involved in this particular crash, the fan disk of the No. 2 engine—the one located in the tail—separated from the engine and caused the powerplant to explode, creating a rain of shrapnel that could not be contained within the engine casing. The sharp metal fragments pierced the body of the aircraft and cut lines in all three of the aircraft’s hydraulic systems. As previously mentioned in this case study, the pilots on the DC-10 were able to steer their aircraft to a nearly controlled landing. The incident inspired NASA pilots to refine the idea of using only jet thrust to maneuver an airplane and undertake the Propulsion Controlled Aircraft program, which took full advantage of the earlier Digital Electronic Engine Control research. The Iowa accident also sent structures and materials experts off on a hunt to find a way to prevent accidents like this in the future.


The United Flight 232 example notwithstanding, the challenge for structures engineers is to design an engine casing that will contain a failed fan blade within the engine so that it has no chance to pierce the passenger compartment wall and threaten the safety of passengers or cause a catastrophic tear in the aircraft wall. Moreover, not only does the casing have to be strong enough to withstand any blade or shrapnel impacts, it must not lose its structural integrity during an emergency engine shutdown in flight. A damaged engine can take some 15 seconds to shut down, during which time cracks from the initial blade impacts can propagate in the fan case. Should the fan case totally fail, the resulting breakup of the already compromised turbomachinery could be catastrophic to the aircraft and all aboard.[1360]

As engineers considered the use of composite materials, two methods for containing blade damage within the engine casing were now available: the new softwall and the traditional hardwall. In the softwall concept, the casing was made of a sandwich-type aluminum structure overwound with dry aramid fibers. (Aramid fibers were introduced commercially by DuPont during the early 1960s and were known by the trade name Nomex.) The design allows broken blades and other shrapnel to pass through the “soft” aluminum and be stopped and contained within the aramid fiber wrap. In the hardwall approach, the casing is made of aluminum only and is built as a rigid wall to reflect blade bits and other collateral damage back into the casing interior. Of course, that vastly increases the risk that the shrapnel will be ingested through the engine and cause even greater damage, perhaps catastrophic. While that risk exists with the softwall design, it is not as substantial. Another benefit of the hardwall is that it maintains its structural soundness, or ductility, during a breakup of an engine. A softwall also features some amount of ductility, but the energy-absorbing properties of the aramid fibers are the major draw.[1361]

In 1994, NASA engineers at the Lewis Research Center began looking into better understanding engine fan case structures and conducted impact tests as part of the Enabling Propulsion Materials program. Various metallic materials and new ideas for lightweight fan containment structures were studied. By 1998, the research expanded to include investigations into use of polymer composites for engine fan casings. As additional composite materials were made available, NASA researchers sought to understand their properties and the appropriateness of those materials in terms of containment capability, damage tolerance, commercial viability, and any potential risk not yet identified for their use on jet engines.[1362]

In 2001, NASA awarded a Small Business Innovation Research (SBIR) grant to A&P Technology, Inc., of Cincinnati to develop a damage-tolerant fan casing for a jet engine. Long before composites came along, the company’s expertise was in braiding materials together, such as clotheslines and candlewicks. A&P—working together with the FAA, Ohio State University, and the University of Akron—was able to rapidly develop a prototype composite fan case that could be compared to the metal fan case. Computer simulations were key to the effort and serendipitously provided an opportunity to grow the industry’s understanding and ability to use those very same simulation capabilities. First, well-understood metallic casings undergoing a blade-out scenario were modeled, and the resulting codes were tested against the already-known results. Then came the trick of introducing code that would represent A&P’s composite casing and its reaction to a blade-out situation. The process was repeated for a composite material wrapped with a braided fiber material, and results were very promising.[1363]

The composite casing proposed by A&P used a triaxial carbon braid, which has a toughness superior to aluminum yet is lighter, which helps reduce fuel consumption. In tests of debris impact, the braided laminate performed better than the metal casing: in some cases, the composite structure absorbed the energy of the impact as the debris bounced off the wall, and in other cases, where the shrapnel penetrated the material, the damage to the wall was isolated to the impact point and did not spread. In a metal casing that was pierced, the resulting hole would instigate several cracks that would continue to propagate along the casing wall, appearing much like the spiderweb of cracks that appears on an automobile windshield when it is hit with a small stone on the freeway.

NASA continues to study the use of composite casings to better understand the potential effects of aging and/or degradation following the constant temperature, vibration, and pressure cycles a jet engine experiences during each flight. There also is interest in studying the effects of higher operating temperatures on the casing structure for possible use on future supersonic jets. (The effect of composite fan blades on casing containment also has been studied.)[1364]

A General Electric GEnx engine with a composite damage-tolerant fan casing is checked out before eventual installation on the new Boeing 787. General Electric.

While composites have found many uses in commercial and military aviation, the first use of an all-composite engine casing, provided by A&P, is set to be on GE’s GEnx turbofan designed for the Boeing 787. The braided casing weighs 350 pounds less per engine, and, when other engine installation hardware to handle the lighter powerplants is considered, the 787 should weigh 800 pounds less than a similarly equipped airliner using aluminum casings. The weight reduction also should provide a savings in fuel cost, increased payload, and/or a greater range for the aircraft.[1365]

NASA-Industry Wind Energy Program Large Horizontal-Axis Wind Turbines

The primary objective of the Federal Wind Energy Program and the specific objectives of NASA’s portion of the program were outlined in a follow-up technical paper presented in 1975 by Thomas, Savino, and Richard L. Puthoff. The paper noted that the overall objective of the program was “to develop the technology for practical cost-competitive wind-generator conversion systems that can be used for supplying significant amounts of energy to help meet the nation’s energy needs.”[1499] The specific objectives of NASA Lewis’s portion of the program were to: (1) identify cost-effective configurations and sizes of wind-conversion systems; (2) develop the technology needed to produce cost-effective, reliable systems; (3) design wind turbine generators that are compatible with user applications, especially with electric utility networks; (4) build up industry capability in the design and fabrication of wind turbine generators; and (5) transfer the technology from the program to industry for commercial application. To satisfy these objectives, NASA Lewis divided the development function into the three following areas: (1) design, fabrication, and testing of a 100-kilowatt experimental wind turbine generator; (2) optimizing the wind turbines for selected user operation; and (3) supporting research and technology for the systems.

The planned workload was divided further by assignment of different tasks to different NASA Research Centers and industry participants. NASA Lewis would provide project management and support in aerodynamics, instrumentation, structural dynamics, data reduction, machine design, facilities, and test operations. Other NASA Research Centers would provide consulting services within their areas of expertise. For example, Langley worked on aeroelasticity matters, Ames consulted on rotor dynamics, and Marshall provided meteorology support. Initial industry participants included Westinghouse, Lockheed Corporation, General Electric, Boeing, and Kaman Aerospace.

In order to undertake its project management role, NASA Lewis established the Center’s Wind Power Office, which consisted initially of three operational units—one covering the development of an experimental 100-kilowatt wind turbine, one handling the industry-built utility-operated wind turbines, and one providing supporting research and technology. The engineers in these offices basically worked together in a less formal structure, crossing over between various operational areas. Also, the internal organization apparently underwent several changes during the program’s existence. For example, in 1976, the program was directed by the Wind Power Office as part of the Solar Energy Branch. The first two office managers were Ronald Thomas and William Robbins. By 1982, the organization consisted of a Wind Energy Project Office, which was once again under the supervision of Thomas and was part of the Wind and Stationary Power Division. The office consisted of a project development and support section under the supervision of James P. Couch (who managed the Mod-2 project), a research and technology section headed by Patrick M. Finnegan, and a wind turbine analysis section under the direction of David A. Spera. By 1984, the program organization had changed again, with the Wind Energy Project Office, under the supervision of Darrell H. Baldwin, becoming part of the Energy Technology Division. The office consisted of a technology section under Richard L. Puthoff and an analysis section headed by David A. Spera. The last NASA Lewis wind energy program manager was Arthur Birchenough.

Dick Whitcomb and the Transonic-Supersonic Breakthrough

Whitcomb joined the research community at Langley in 1943 as a member of Stack’s Transonic Aerodynamics Branch working in the 8-foot High-Speed Tunnel (HST). Initially, NACA managers placed him in the Flight Instrument Research Division, but Whitcomb’s force of personality ensured that he would be working directly on problems related to aircraft design. As many of his colleagues and historians would attest, Whitcomb quickly became known for an analytical ability rooted in mathematics, instinct, and aesthetics.[145]

In 1945, Langley increased the power of the 8-foot HST to generate Mach 0.95 speeds, and Whitcomb was becoming increasingly familiar with transonic aerodynamics, which helped him in his developing investigation into the design of supersonic aircraft. The onset of drag created by shock waves at transonic speeds was the primary challenge. John Stack, Ezra Kotcher, and Lawrence D. Bell proved that breaking the sound barrier was possible when Chuck Yeager flew the Bell X-1 to Mach 1.06 (700 mph) on October 14, 1947. Designed in the style of a .50-caliber bullet with straight wings, the Bell X-1 was a successful supersonic airplane, but it was a rocket-powered research airplane designed specifically for and limited to that purpose. The X-1 would not offer designers the shape of future supersonic airplanes. Operational turbojet-powered aircraft designed for military missions were much heavier and would use up much of their fuel gradually accelerating toward Mach 1 to lessen transonic drag.[146] The key was to get operational aircraft through the transonic regime, which ranged from Mach 0.9 to Mach 1.1.

A very small body of transonic research existed when Whitcomb undertook his investigation. British researchers W. T. Lord of the Royal Aeronautical Establishment and G. N. Ward of the University of Manchester, along with the American Wallace D. Hayes, attempted to solve the problem of transonic drag through mathematical analyses shortly after World War II, in 1946. These studies produced mathematical treatments that did not lend themselves to the practical design and shaping of transonic and supersonic aircraft.[147]

Whitcomb’s analysis of available data generated by the NACA in ground and free-flight tests led him to submit a proposal for testing swept wing and fuselage combinations in the 8-foot HST in July 1948. There had been some success in delaying transonic drag by addressing the relationship between wing sweep and fuselage shape. Whitcomb believed that careful attention to arrangement and shape of the wing and fuselage would result in their counteracting each other. His goal was to reach a milestone in supersonic aircraft design. The tests, conducted from late 1949 to early 1950, revealed no significant decrease in drag at high subsonic (Mach 0.95) and low supersonic (Mach 1.2) speeds. The wing-fuselage combinations actually generated higher drag than their individual values combined. Whitcomb was at an impasse and realized he needed to refocus on learning more about the fundamental nature of transonic airflow.[148]

Just before Whitcomb had submitted his proposal for his wind tunnel tests, John Stack ordered the conversion of the 8-foot HST in the spring of 1948 to a slotted throat to enable research in the transonic regime. In theory, slots in the tunnel’s test section, or throat, would enable smooth operation at very high subsonic speeds and at low supersonic speeds. The initial conversion was not satisfactory because of uneven flow. Whitcomb and his colleagues, physicist Ray Wright and engineer Virgil S. Ritchie, hand-shaped the slots based on their visualization of smooth transonic flow. They also worked directly with Langley woodworkers to design and fabricate a channel at the downstream end of the test section that reintroduced air that traveled through the slots. Their painstaking work led to the inauguration of transonic operations within the 8-foot HST 7 months later, on October 6, 1950.[149] Whitcomb, as a young engineer, was helping to refine a tunnel configuration that was going to allow him to realize his potential as a visionary experimental aeronautical engineer.

The slotted-throat test section of the 8-foot High-Speed Tunnel. NASA.

The NACA distributed a confidential report on the new tunnel during the fall of 1948 to the military services and select manufacturers. By the following spring, rumors about the new tunnel had been circulating throughout the industry. The initial secrecy gave way to outright public acknowledgment of the NACA’s new transonic tunnels (including the 16-foot HST) with the awarding of the 1951 Collier Trophy to John Stack and 19 of his associates at Langley for the slotted wall. The Collier Trophy specifically recognized the importance of a research tool, a first in the 40-year history of the award. The NACA claimed that its slotted-throat transonic tunnels gave the United States a 2-year lead in the design of supersonic military aircraft.[150]

With the availability of the 8-foot HST and its slotted throat, the combined use of previously available wind tunnel components—the tunnel balance, pressure orifice, tuft surveys, and schlieren photographs—resulted in a new theoretical understanding of transonic drag. The schlieren photographs revealed three shock waves at transonic speeds. One was the familiar shock wave that formed at the nose of an aircraft as it pushed forward through the air. The other two were, according to Whitcomb, “fascinating new types” of shock waves never before observed: one where the fuselage and wings met, and another at the trailing edge of the wing. These shocks contributed to a new understanding that transonic drag was much larger in proportion to the size of the fuselage and wing than previously believed. Whitcomb speculated that these new shock waves were the cause of transonic drag.[151]

From SCAT Research to SST Development

The recently established FAA became the major advocate within the U. S. Government for a supersonic transport, with key personnel at three of the NACA’s former laboratories eager to help with this challenging new program. The Langley Research Center in Hampton, VA, (the NACA’s oldest and largest lab) and the Ames Research Center at Moffett Field in Sunnyvale, CA, both had airframe design expertise and facilities, while the Lewis Research Center in Cleveland, OH, specialized in the kind of advanced propulsion technologies needed for supersonic cruise.

The strategy for developing the SCAT depended heavily on leveraging technologies being developed for another Air Force bomber—one much larger, faster, and more advanced than the B-58. This would be the revolutionary B-70, designed to cruise several thousand miles at speeds of Mach 3. NACA experts had been helping the Air Force plan this giant intercontinental bomber since the mid-1950s (with aerodynamicist Alfred Eggers of the Ames Laboratory conceiving the innovative design for it to ride partially on compression lift created by its own supersonic shock waves). North American Aviation won the B-70 contract in 1958, but the projected expense of the program and advances in missile technology led President Dwight Eisenhower to cancel all but one prototype in 1959. The administration of President John Kennedy eventually approved production of two XB-70As. Their main purpose would be to serve as Mach 3 testbeds for what had become known simply as the Supersonic Transport (SST). NASA continued to refer to design concepts for the SST using the older acronym for Supersonic Commercial Air Transport. By 1962, these concepts had been narrowed down to three Langley designs (SCAT-4, SCAT-15, and SCAT-16) and one from Ames (SCAT-17). These became the baselines for industry studies and SST proposals.[345]

Even though Department of Defense resources (especially the Air Force’s) would be important in supporting the SST program, the aerospace industry made it clear that direct federal funding and assistance would be essential. Thus research and development (R&D) of the SST became a split responsibility between the Federal Aviation Agency and the National Aeronautics and Space Administration—with NASA conducting and sponsoring the supersonic research and the FAA in charge of the SST’s overall development. The first two leaders of the FAA, retired Lt. Gen. Elwood R. “Pete” Quesada (1958-1961) and Najeeb E. Halaby (1961-1965), were both staunch proponents of producing an SST, as to a slightly lesser degree was retired Gen. William F. “Bozo” McKee (1965-1968). As heads of an independent agency that reported directly to the president, they were at the same level as NASA Administrators T. Keith Glennan (1958-1961) and James E. Webb (1961-1968). The FAA and NASA administrators, together with Secretary of Defense Robert McNamara (somewhat of a skeptic on the SST program), provided interagency oversight and constituted the Presidential Advisory Committee (PAC) for the SST established in April 1964. This arrangement lasted until 1967, when the Federal Aviation Agency became the Federal Aviation Administration under the new Department of Transportation, whose secretary became responsible for the program.[346]

Much of NASA’s SST-related research involved advancing the state-of-the-art in such technologies as propulsion, fuels, materials, and aerodynamics. The latter included designing airframe configurations for sustained supersonic cruise at high altitudes, suitable subsonic maneuvering in civilian air traffic patterns at lower altitudes, safe takeoffs and landings at commercial airports, and acceptable noise levels—to include the still-puzzling matter of sonic booms.

Dealing with the sonic boom entailed a multifaceted approach: (1) performing flight tests to better quantify the fluid dynamics and atmospheric physics involved in generating and propagating shock waves, as well as their effects on structures and people; (2) conducting community surveys to gather public opinion data on sample populations exposed to booms; (3) building and using acoustic simulators to further evaluate human and structural responses in controlled settings; (4) performing field studies of possible effects on animals; (5) evaluating various aerodynamic configurations in wind tunnel experiments; and (6) analyzing flight test and wind tunnel data to refine theoretical constructs and mathematical models for lower-boom aircraft designs. Within NASA, the Langley Research Center was a focal point for sonic boom studies, with the Flight Research Center (FRC) at Edwards AFB conducting many of the supersonic tests.[347]

Although the NACA, especially at Langley and Ames, had been doing research on supersonic flight since World War II, none of its technical reports (and only one conference paper) published through 1957 dealt directly with sonic booms.[348] That situation began to change when Langley’s long-time manager and advocate of supersonic programs, John P. Stack, formalized the SCAT venture in 1958. During the next year, three Langley employees whose names would become well known in the field of sonic boom research began publishing NASA’s first scientific papers on the subject. These were Harry W. Carlson, a versatile supersonic aerodynamicist; Harvey H. Hubbard, chief of the Acoustics and Noise Control Division; and Domenic J. Maglieri, a young engineer who became Hubbard’s top sonic boom specialist. Carlson would tend to focus on wind tunnel experiments and sonic boom theory, while the other two men specialized in planning and monitoring field tests, then analyzing the data collected.[349] These research activities began to expand under the new pro-SST Kennedy Administration in 1961. After the president formally approved development of the supersonic transport in June 1963, sonic boom research took off. Just 3 years later, Langley’s experts, augmented by NASA contractors and grantees, had published 26 papers on sonic booms.[350]

Transatmospherics after NASP

Two developments have paced work in hypersonics since NASP died in 1995. Continuing advances in computers, aided markedly by advancements in wind tunnels, have brought forth computational fluid dynamics (CFD). Today, CFD simulates the aerodynamics of flight vehicles with increasing (though not perfect) fidelity. In addition, NASA and the Air Force have pursued a sequence of projects that now aim clearly at developing operational scramjet-powered military systems.

Early in the NASP effort, in 1984, Robert Whitehead of the Office of Naval Research spoke on CFD to its people. Robert Williams recalls that Whitehead presented the equations of fluid dynamics “so the computer could solve them, then showed that the computer technology was also there. We realized that we could compute our way to Mach 25 with high confidence.”[658] Unfortunately, in reality, DARPA could not do that. In 1987, the trade journal Aerospace America reported: “almost nothing is known about the effects of heat transfer, pressure gradient, three-dimensionality, chemical reactions, shock waves, and other influences on hypersonic transition.”[659] (This transition causes a flow to change from laminar to turbulent, a matter of fundamental importance.)

Code development did mature so that it could adequately support the next hypersonic system, NASA’s X-43A program. In supporting the X-43A effort, NASA’s most important code was GASP. NASP had used version 2.0; the X-43A used 3.0.[660] Like any flow code, it could not calculate the turbulence directly but had to model it. GASP 3.0 used the Baldwin-Lomax algebraic model, which Princeton’s Antony Jameson, a leading writer of flow codes, describes as “the most popular model in the industry, primarily because it’s easy to program.”[661] GASP 3.0 also uses “eddy-viscosity” models, which Stanford’s Peter Bradshaw rejects out of hand: “Eddy viscosity does not even deserve to be described as a ‘theory’ of turbulence!” More broadly, he adds, “Even the most sophisticated turbulence models are based on brutal simplifications” of the pertinent nonlinear partial differential equations.[662]
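For reference, the Baldwin-Lomax model that Jameson refers to is algebraic: in the inner layer, the eddy viscosity is computed directly from the local vorticity and wall distance, as published by Baldwin and Lomax in 1978 (quoted here for orientation, with their standard constants):

```latex
\mu_t \;=\; \rho\,\ell^{2}\,\lvert\omega\rvert,
\qquad
\ell \;=\; \kappa\,y\left(1 - e^{-y^{+}/A^{+}}\right),
\qquad
\kappa \approx 0.40,\; A^{+} = 26,
```

where \rho is the density, y the distance from the wall, \lvert\omega\rvert the magnitude of the vorticity, and y^{+} the wall coordinate; a separate outer-layer expression takes over away from the wall. Its appeal is plain: no additional differential equations need to be solved.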

Can increasing computer power make up for this? Calculations of the NASP era had been rated in gigaflops, billions of floating point operations per second (FLOPS).[663] An IBM computer has recently cracked the petaflop mark, at a quadrillion operations per second, and even greater performance is being contemplated.[664] At Stanford University’s Center for Turbulence Research, analyst Krishnan Mahesh studied flow within a commercial turbojet and found a mean pressure drop that differs from the observed value by only 2 percent. An earlier computation had given an error of 26 percent, an order of magnitude higher.[665] He used Large Eddy Simulation (LES), which calculates the larger turbulent eddies and models the small ones that have a more universal character. But John Anderson, a historian of fluid dynamics, notes that LES “is not viewed as an industry standard.” He sees no prospect other than direct numerical simulation (DNS), which directly calculates all scales of turbulence. “It’s clear-cut,” he adds. “The best way to calculate turbulence is to use DNS. Put in a fine enough grid and calculate the entire flow field, including the turbulence. You don’t need any kind of model and the turbulence comes out in the wash as part of the solution.” But in seeking to apply DNS, even petaflops aren’t enough. Use of DNS for practical problems in industry is “many decades down the road. Nobody to my knowledge has used DNS to deal with flow through a scramjet. That type of application is decades away.”[666]

With the limitations as well as benefits of CFD more readily apparent, it thus is significant that more traditional hypersonic test facilities are also improving. As just one example, NASA Langley’s largest hypersonic facility, the 8-foot High Temperature Tunnel (HTT), has been refitted to burn methane and use its combustion products, with oxygen replenishment, as the test gas. This heats the gas. As reviewed by the Journal of Spacecraft and Rockets: “the oxygen content of the freestream gas is representative of flight conditions as is the Mach number, total enthalpy, dynamic pressure, and Reynolds number.”[667]

One fruitful area within NASP had been its aggressive research on scramjets, which benefited substantially from NASA’s increasing investment in high-temperature hypersonic test facilities.[668]

Table 3 enumerates the range of hypersonic test facilities for scramjet and aerothermodynamic research available to researchers at the NASA Langley Research Center. Between 1987 and the end of 1994, Langley researchers ran over 1,500 tests on 10 NASP engine modules, over 1,200 of them in a single 3-year period, from the end of 1987 to 1990. After NASP wound down, Agency researchers ran nearly 700 tests on four other configurations between 1994 and 1996. These tests, ranging from Mach 4 to Mach 8, so encouraged scramjet proponents that they went ahead with plans for a much-scaled-back effort, the Hyper-X (later designated X-43A), which compared in some respects with the ASSET program undertaken after cancellation of the X-20 Dyna-Soar three decades earlier.[669]

TABLE 3. NASA LRC SCRAMJET PROPULSION AND AEROTHERMODYNAMIC TEST FACILITIES

| Facility Name | Mach | Reynolds Number (per ft.) | Size |
|---|---|---|---|
| 8-foot High Temperature Tunnel | 4, 5, 7 | 0.3-5.1 x 10^6 | 8-ft. dia. |
| Arc-Heated Scramjet Test Facility | 4.7-8.0 | 0.04-2.2 x 10^6 | 4-ft. dia. |
| Combustion-Heated Scramjet Test Facility | 3.5-6.0 | 1.0-6.8 x 10^6 | 42" x 30" |
| Direct-Connect Supersonic Combustion Test Facility | 4.0-7.5 | 1.8-31.0 x 10^6 | [Note (a)] |
| HYPULSE Shock Tunnel [Note (b)] | 5.0-25 | 0.5-2.5 x 10^6 | 7-ft. dia. |
| 15-inch Mach 6 High Temperature Tunnel | 6 | 0.5-8.0 x 10^6 | 15" dia. |
| 20-inch Mach 6 CF4 Tunnel | 6 | 0.05-0.7 x 10^6 | 20" dia. |
| 20-inch Mach 6 Tunnel | 6 | 0.5-8.0 x 10^6 | 20" x 20" |
| 31-inch Mach 10 Tunnel | 10 | 0.2-2.2 x 10^6 | 31" x 31" |

Source: Data from NASA LRC facility brochures.
(a) The DCSCTF test section varies: 1.52" x 3.46" with a Mach 2 nozzle and 1.50" x 6.69" with a Mach 2.7 nozzle.
(b) LRC's HYPULSE shock tunnel is at the GASL Division of Allied Aerospace Industries, Ronkonkoma, NY.

The X-43, managed at Langley Research Center by Vincent Rausch, a veteran of the earlier TAV and NASP efforts, began in 1995 as Hyper-X, coincident with the wind-down of NASP. It combined a GASL scramjet engine with a 100-inch-long by 60-inch-span slender lifting body and an Orbital Sciences Pegasus booster, this combination being carried to a launch altitude of 40,000 feet by NASA Dryden’s NB-52B Stratofortress. After launch, the Pegasus took the X-43 to approximately 100,000 feet, where it would separate, demonstrating scramjet ignition (using silane and then adding gaseous hydrogen) and operation at velocities as high as Mach 10.

Schematic layout of the Hyper-X (subsequently X-43A) scramjet test vehicle and its Orbital Sciences Pegasus winged booster, itself a hypersonic vehicle. NASA.

The X-43 program cost $230 million and consumed not quite a decade of development time. Built by Microcraft, Inc., of Tullahoma, TN, the X-43 used the shape of a Boeing study for a Mach 10 global reconnaissance and space access vehicle, conceived by a team under the leadership of George Orton. Langley Research Center furnished vital support, executing nearly 900 test runs of 4 engine configurations between 1996 and 2003.[670]

Microcraft completed three X-43A flight-test vehicles for testing by NASA Dryden Flight Research Center. Unfortunately, the first flight attempt failed in 2001, when the Pegasus booster shed a control fin after launch. A 3-year reexamination and review of the program led to a successful flight on March 27, 2004, the first successful hypersonic flight of a scramjet-powered airplane. The Pegasus boosted the X-43A to Mach 6.8. After separation, the X-43A burned silane, which ignites on contact with the air, for 3 seconds. Then it ramped down the silane and began injecting gaseous hydrogen, burning this gas for 8 seconds. This was the world’s first flight test of such a scramjet.[671]

That November, NASA did it again with its third X-43A. On November 16, it separated from its booster at 110,000 feet and Mach 9.7, and its engine burned for 10 to 12 seconds with silane off. On its face, this looked like the fastest air-breathing flight in history, but this speed (approximately 6,500 mph) resulted from its use of Pegasus, a rocket. The key point was that the scramjet worked, however briefly. During the flight, the X-43A experienced airframe temperatures as high as 3,600 °F.[672]

Meanwhile, the Air Force was preparing to take the next step with its HyTech program. Within it, Pratt & Whitney, now merged with Rocketdyne, has been a major participant. In January 2001, it demonstrated the Performance Test Engine (PTE), an airframe-integrated scramjet that operated at hypersonic speeds using the hydrocarbon fuel JP-7. Like the X-43A engine, though, the PTE was heavy. Its successor, the Ground Demonstrator Engine (GDE), was flight-weight. It also used fuel to cool the engine structure. One GDE went to Langley for testing in the HTT in 2005. It made the important demonstration that the cooling could be achieved using no more fuel than was to be employed for propulsion.

Next on the transatmospheric agenda is a new X-test vehicle, the X-51A, built by Boeing, with a scramjet by Pratt & Whitney Rocketdyne. These firms are also participants in a consortium that includes support from NASA, DARPA, and the Air Force. The X-51A scramjet is fuel-cooled, with the cooling allowing it to be built of Inconel 625 nickel alloy rather than an exotic superalloy. Lofted to Mach 4.7 by a modified Army Tactical Missile System (ATACMS) artillery rocket booster, the X-51A is intended to fly at Mach 7 for minutes at a time, burning JP-7, a hydrocarbon fuel used previously on the Lockheed SR-71. The X-51A uses ethylene to start the combustion; the flight then continues on JP-7. Following checkout trials beginning in late 2009, the X-51 made its first powered flight on May 26, 2010. After being air-launched from a B-52, it demonstrated successful hydrocarbon scramjet ignition and acceleration. Further tests will hopefully advance the era of practical scramjet-powered flight, likely beginning with long-range missiles. As this review indicates, the story of transatmospherics illustrates the complexity of hypersonics; the tenacity and dedication of NASA’s aerodynamics, structures, and propulsion community; and the Agency’s commitment to take on challenges, no matter how difficult, if the end promises to be the advancement of flight and humanity’s ability to utilize the air and space medium.[673]

The first Boeing X-51 WaveRider undergoing final preparations for flight, Edwards AFB, California, 2010. USAF.

Updating Simulator Prediction with Flight-Test Experience

Test pilots who “flew” the early simulators were skeptical of the results that they observed, because there was usually some aspect of the simulation that did not match the real airplane. Stick forces and control surface hinge moments were often not properly matched on the simulator, and thus the apparent effectiveness of the ailerons or elevators was often higher or lower than experienced with the airplane. For procedural trainers (used for checking out pilots in new airplanes), mathematical models were often changed erroneously based strictly on pilot comments, such as “the airplane rolls faster than the simulator.” Since these early simulators were based strictly on wind tunnel or theoretical aerodynamic predictions and calculated moments of inertia, the flight-test community began to explore methods for measuring and validating the mathematical models to improve the acceptance of simulators as valid tools for analysis and training. Ground procedures and support equipment were devised by NASA to measure the moments of inertia of small aircraft and were used for many of the research airplanes flown at DFRC.[725]
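The specific NASA ground procedures are not detailed here, but one common technique is to swing the aircraft as a compound pendulum. With the aircraft suspended so that it oscillates about a pivot a distance d above its center of gravity, measuring the period T of small oscillations gives the inertia about the swing axis:

```latex
T \;=\; 2\pi\sqrt{\frac{I_{\text{pivot}}}{m\,g\,d}}
\quad\Longrightarrow\quad
I_{cg} \;=\; \frac{m\,g\,d\,T^{2}}{4\pi^{2}} \;-\; m\,d^{2},
```

where m is the aircraft mass and the md^2 term transfers the measured pivot-axis inertia to the center of gravity by the parallel-axis theorem. Repeating the swing about different axes yields the full set of moments of inertia.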

A large inertia table was constructed in the Air Force Flight Test Center Weight and Balance facility at Edwards AFB for the purpose of measuring the inertia of large airplanes. Unfortunately, the system was never able to provide accurate results: fluctuations in temperature and humidity adversely affected the performance of the table’s sensitive bearings, and the concept was discarded.

During the X-15 flight-test program, NASA researchers at Edwards developed several methods for extracting the aerodynamic stability derivatives from specific flight-test maneuvers. Researchers then compared these results with wind tunnel or theoretical predictions and, where necessary, revised the simulator mathematical models to reflect the flight-test-derived information. For the X-15, the predictions were quite good, and only minor simulator corrections were needed to allow flight maneuvers to be replicated quite accurately on the simulator. The most useful of these methods was an automatic computer analysis of pulse-type maneuvers, originally referred to as Newton-Raphson Parameter Identification.[53, 54] This system evolved into a very useful tool subsequently used as an industry standard for identifying the real-world stability and control derivatives during early testing of new aircraft.[726] The resulting updates are usually also transplanted into the final training simulators to provide the pilots with the best possible duplication of the airplanes’ handling qualities. Bookkeeping methods for determining moments of inertia of a new aircraft (i.e., tracking the weight and location of each individual component or structural member during aircraft manufacture) have also been given more attention.
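The Newton-Raphson output-error method itself is beyond a short example, but the flavor of extracting derivatives from a pulse maneuver can be shown with a simpler equation-error fit. The sketch below generates synthetic roll-rate data from an assumed single-degree-of-freedom model, p_dot = Lp*p + Lda*da, then recovers the derivatives by least squares; all values are made up for illustration.

```python
import numpy as np

# Synthetic "flight data": roll-rate response to an aileron pulse.
L_P_TRUE, L_DA_TRUE = -2.0, 8.0        # true roll damping and aileron power
dt, n = 0.02, 500
da = np.zeros(n)
da[50:100] = 0.05                      # aileron pulse, radians
p = np.zeros(n)
for k in range(n - 1):                 # Euler integration of p_dot
    p[k + 1] = p[k] + dt * (L_P_TRUE * p[k] + L_DA_TRUE * da[k])
p_meas = p + np.random.default_rng(0).normal(0.0, 0.002, n)  # sensor noise

# Equation-error fit: regress measured p_dot on [p, da].
p_dot = np.gradient(p_meas, dt)
A = np.column_stack([p_meas, da])
(l_p, l_da), *_ = np.linalg.lstsq(A, p_dot, rcond=None)
print(f"identified L_p  = {l_p:6.2f}  (true {L_P_TRUE})")
print(f"identified L_da = {l_da:6.2f}  (true {L_DA_TRUE})")
```

A production output-error method instead iterates on the model parameters (via Newton-Raphson) until the simulated response best matches the recorded one, which handles noise and instrumentation biases far better than this direct regression.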

Characteristically, the predicted aerodynamics for a new airplane are often in error for at least a few of the derivatives. These errors are usually a result of either a discrepancy between the wind tunnel model that was tested and the actual airplane that was manufactured, or a misinterpretation or poor interpolation of the wind tunnel data. In some cases, these discrepancies have been significant and have led to major incidents (such as the HL-10 first flight described earlier). Another source of prediction errors for simulation is the prediction of the aeroelastic effects from applied air loads to the structure. These aeroelastic effects are quite complex and difficult to predict for a limber airplane. They usually require flight-test maneuvers to identify or validate the actual handling quality effects of structural deformation. There have been several small business aircraft that have been built, developed, and sold commercially wherein calculated predictions of the aerodynamics were the primary data source, and very little if any wind tunnel testing was ever accomplished. Accurate simulators for pilot training have been created by conducting a brief flight test of each airplane, performing required test maneuvers, then applying the flight-test parameter estimation methods developed by NASA. With a little attention during the flight-test program, a highly accurate mathematical model of a new airplane can be assembled and used to produce excellent simulators, even without wind tunnel data.[727]