
Transitioning from the Supersonic to the Hypersonic: X-7 to X-15

During the 1950s and early 1960s, aviation advanced from flight at high altitude and Mach 1 to flight in orbit at Mach 25. Within the atmosphere, a number of these advances stemmed from the use of the ramjet, at a time when turbojets could barely pass Mach 1 but ramjets could aim at Mach 3 and above. Ramjets needed an auxiliary rocket stage as a booster, which brought their general demise after high-performance afterburning turbojets succeeded in catching up. But in the heady days of the 1950s, the ramjet stood on the threshold of becoming a mainstream engine. Many plans and proposals existed to take advantage of ramjet power for a variety of aircraft and missile applications.

The burgeoning ramjet industry included Marquardt and Wright Aeronautical, though other firms such as Bendix developed them as well. There were also numerous hardware projects. One was the Air Force-Lockheed X-7, an air-launched high-speed propulsion, aerodynamic, and structures testbed. Two were surface-to-air ramjet-powered missiles: the Navy's ship-based Mach 2.5+ Talos and the Air Force's Mach 3+ Bomarc. Both went on to years of service, with the Talos flying "in anger" as a MiG-killer and antiradiation SAM-killer in Vietnam. The Air Force also was developing a 6,300-mile-range Mach 3+ cruise missile—the North American SM-64 Navaho—and a Mach 3+ interceptor fighter—the Republic XF-103. Neither entered the operational inventory. The Air Force canceled the troublesome Navaho in July 1957, weeks after the first flight of its rival, Atlas, but some flight hardware remained, and Navaho flew in test as far as 1,237 miles, though this was a rare success. The XF-103 was to fly at Mach 3.7 using a combined turbojet-ramjet engine. It was to be built largely of titanium, at a time when this metal was little understood; it thus lived for 6 years without approaching flight test. Still, its engine was built and underwent test in December 1956.[564]

The steel-structured X-7 proved surprisingly and consistently productive. The initial concept of the X-7 dated to December 1946 and constituted a three-stage vehicle. A B-29 (later a B-50) served as a "first stage" launch aircraft; a solid rocket booster functioned as a "second stage" accelerating it to Mach 2, at which the ramjet would take over. First flying in April 1951, the X-7 family completed 100 missions between 1955 and program termination in 1960. After achieving its Mach 3 design goal, the program kept going. In August 1957, an X-7 reached Mach 3.95 with a 28-inch diameter Marquardt ramjet. The following April, the X-7 attained Mach 4.31—2,881 mph—with a more-powerful 36-inch Marquardt ramjet. This established an air-breathing propulsion record that remains unsurpassed for a conventional subsonic combustion ramjet.[565]

At the same time that the X-7 was edging toward the hypersonic frontier, the NACA, Air Force, Navy, and North American Aviation had a far more ambitious project underway: the hypersonic X-15. This was Round Two, following the earlier Round One research airplanes that had taken flight faster than sound. The concept of the X-15 was first proposed by Robert Woods, a cofounder and chief engineer of Bell Aircraft (manufacturer of the X-1 and X-2), at three successive meetings of the NACA's influential Committee on Aerodynamics between October 1951 and June 1952. It was a time when speed was king, when ambitious technology-pushing projects were flying off the drawing board. These included the Navaho, X-2, and XF-103, and the first supersonic operational fighters—the Century series of the F-100, F-101, F-102, F-104, and F-105.[566]

Some contemplated even faster speeds. Walter Dornberger, former commander of the Nazi research center at Peenemunde turned senior Bell Aircraft Corporation executive, was advocating BoMi, a proposed skip-gliding "Bomber-Missile" intended for Mach 12. Dornberger supported Woods in his recommendations, which were adopted by the NACA's Executive Committee in July 1952. This gave them the status of policy, while the Air Force added its own support. This was significant because its budget was 300 times larger than that of the NACA.[567] The NACA alone lacked funds to build the X-15, but the Air Force could do this easily. It also covered the program's massive cost overruns. These took the airframe from $38.7 million to $74.5 million and the large engine from $10 million to $68.4 million, which was nearly as much as the airframe.[568]

The Air Force had its own test equipment at its Arnold Engineering Development Center (AEDC) at Tullahoma, TN, an outgrowth of the Theodore von Karman technical intelligence mission that Army Air Forces Gen. Henry H. "Hap" Arnold had sent into Germany at the end of the Second World War. The AEDC, with brand-new ground test and research facilities, took care to complement, not duplicate, those of the NACA. It specialized in air-breathing and rocket-engine testing. Its largest installation accommodated full-size engines and provided continuous flow at Mach 4.75. But the X-15 was to fly well above this, to over Mach 6, highlighting the national shortfall in hypersonic test capabilities existing at the time of its creation.[569]

While the Air Force had the deep pockets, the NACA—specifically Langley—conducted the research that furnished the basis for a design. This took the form of a 1954 feasibility study conducted by John Becker, assisted by structures expert Norris Dow, rocket expert Maxime Faget, configuration and controls specialist Thomas Toll, and test pilot James Whitten. Becker began by assuming that during reentry the vehicle should point its nose in the direction of flight. This proved impossible, as the heating was too high. He then considered that the vehicle might alleviate the problem by using lift, obtained by raising the nose, and found that the thermal environment became far more manageable. He concluded that the craft should enter with its nose high, presenting its flat undersurface to the atmosphere. The Allen-Eggers paper was in print, and he later wrote that "it was obvious to us that what we were seeing here was a new manifestation of H. J. Allen's 'blunt-body' principle."[570]

To address the rigors of the daunting aerothermodynamic environment, Norris Dow selected Inconel X (a nickel alloy from International Nickel) as the temperature-resistant superalloy that was to serve for the aircraft structure. Dow began by ignoring heating and calculated the skin gauges needed only from considerations of strength and stiffness. Then he determined the thicknesses needed to serve as a heat sink. He found that the thicknesses that would suffice for the latter were nearly the same as those that would serve merely for structural strength. This meant that he could design his airplane and include heat sink as a bonus, with little or no additional weight. Inconel X was a wise choice; with a density of 0.30 pounds per cubic inch, a tensile strength of over 200,000 pounds per square inch (psi), and yield strength of 160,000 psi, it was robust, and its melting temperature of over 2,500 °F ensured that the rigors of the anticipated 1,200 °F surface temperatures would not weaken it.[571]
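Dow's sizing logic is easy to replicate. The sketch below compares the two gauge calculations he performed; only the material properties come from the text, while the running load, heat pulse, specific heat, and safety factor are hypothetical round numbers chosen to illustrate how the two requirements can converge.

```python
# Minimal sketch of Norris Dow's heat-sink sizing logic for Inconel X skin.
# Loads and heat inputs are hypothetical placeholders, NOT X-15 design data.

DENSITY = 0.30             # lb/in^3, Inconel X (from the text)
SPECIFIC_HEAT = 0.11       # BTU/(lb*F), typical nickel-alloy value (assumed)
ALLOWABLE_STRESS = 96_000  # psi: 160,000 psi yield with an assumed 1.67 factor

def gauge_for_strength(running_load_lb_per_in: float) -> float:
    """Skin thickness (in) to carry a membrane running load, strength only."""
    return running_load_lb_per_in / ALLOWABLE_STRESS

def gauge_for_heat_sink(heat_pulse_btu_per_in2: float, temp_rise_f: float) -> float:
    """Skin thickness (in) to absorb a heat pulse with a given temperature rise."""
    return heat_pulse_btu_per_in2 / (DENSITY * SPECIFIC_HEAT * temp_rise_f)

# Hypothetical design point: compare the two requirements.
t_strength = gauge_for_strength(5_000.0)      # 5,000 lb/in running load
t_heat = gauge_for_heat_sink(2.0, 1_150.0)    # 2 BTU/in^2 pulse, rise near 1,200 F
print(f"strength gauge: {t_strength:.3f} in, heat-sink gauge: {t_heat:.3f} in")
```

For these assumed inputs the two gauges come out within a few thousandths of an inch of each other, which is the "heat sink as a bonus" result Dow observed.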

Work at Langley also addressed the important issue of stability. Just then, in 1954, this topic was in the forefront because it had nearly cost the life of the test pilot Chuck Yeager. On the previous December 12, he had flown the X-1A to Mach 2.44 (approximately 1,650 mph). This exceeded the plane’s stability limits; it went out of control and plunged out of the sky. Only Yeager’s skill as a pilot had saved him and his airplane. The problem of stability would be far more severe at higher speeds.[572]

Analysis, confirmed by experiments in the 11-inch wind tunnel, had shown that most of the stability imparted by an aircraft's tail surfaces was produced by its wedge-shaped forward portion. The aft portion contributed little because it experienced lower air pressure. Charles McLellan, another Langley aerodynamicist, now proposed to address the problem of hypersonic stability by using tail surfaces that would be wedge-shaped along their entire length. Subsequent tests in the 11-inch tunnel, as mentioned previously, confirmed that this solution worked. As a consequence, the size of the tail surfaces shrank from being almost as large as the wings to a more nearly conventional appearance.
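A rough way to see why the full-length wedge helps is Newtonian impact theory, in which surface pressure at hypersonic speed depends on local inclination to the flow. The sketch below assumes a 10-degree wedge half-angle; it illustrates the principle only and is not a reconstruction of McLellan's actual analysis.

```python
import math

# Newtonian estimate (Cp = 2 * sin^2(theta)) of surface pressure on a wedge
# face versus a slab (constant-thickness) aft face. The wedge half-angle and
# deflections below are assumed values for illustration.

def newtonian_cp(inclination_deg: float) -> float:
    """Pressure coefficient of a surface inclined to the flow, Newtonian flow."""
    return 2.0 * math.sin(math.radians(inclination_deg)) ** 2

WEDGE_HALF_ANGLE = 10.0   # deg, assumed
for deflection in (0.0, 5.0, 10.0):
    cp_wedge = newtonian_cp(WEDGE_HALF_ANGLE + deflection)  # inclined wedge face
    cp_slab = newtonian_cp(deflection)                      # slab face: deflection only
    print(f"deflection {deflection:4.1f} deg: Cp wedge {cp_wedge:.4f}, "
          f"Cp slab {cp_slab:.4f}")
```

Because the wedge faces stay inclined to the flow along their whole length, they sustain pressure everywhere, whereas a slab section nearly parallel to the flow contributes almost nothing until deflected.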

A schematic drawing of the X-15's internal layout. NASA.

This study made it possible to proceed toward program approval and the award of contracts both for the X-15 airframe and its powerplant, a 57,000-pound-thrust rocket engine burning a mix of liquid oxygen and anhydrous ammonia. But while the X-15 promised to advance the research airplane concept to over Mach 6, it demanded something more than the conventional aluminum and stainless steel structures of earlier craft such as the X-1 and X-2. Titanium was only beginning to enter use, primarily for reducing heating effects around jet engine exhausts and afterburners. Magnesium, which Douglas favored for its own high-speed designs, was flammable and lost strength at temperatures higher than 600 °F. Inconel X was heat-resistant, reasonably well known, and relatively easily worked. Accordingly, it was swiftly selected as the structural material of choice when Becker's Langley team assessed the possibility of designing and fabricating a rocket-boosted air-launched hypersonic research airplane. The Becker study, completed in April 1954, chose Mach 6 as the goal and proposed to fly to altitudes as great as 350,000 feet. Both marks proved remarkably prescient: the X-15 eventually flew to 354,200 feet in 1963 and Mach 6.70 in 1967. The altitude mark was above 100 kilometers and well above the sensible atmosphere. Hence, at that early date, more than 3 years before Sputnik, Becker and his colleagues already were contemplating piloted flight into space.[574]

The X-15: Pioneering Piloted Hypersonics

North American Aviation won the contract to build the X-15. It first flew under power in September 1959, by which time an Atlas had hurled an RVX-2 nose cone to its fullest range. However, as a hypersonic experiment, the X-15 was a complete airplane. It thus was far more complex than a simple reentry body, and it took several years of cautious flight-testing before it reached its peak speed of above Mach 6 and its peak altitude as well.

The North American X-15 at NASA's Flight Research Center (now the Dryden Flight Research Center) in 1961. NASA.

Testing began with the so-called "Little Engines," a pair of vintage Reaction Motors XLR11s that had earlier served in the X-1 series and the Douglas D-558-2 Skyrocket. Using these, the X-15 topped the records of the earlier X-2, reaching Mach 3.50 and 136,500 feet. Starting in 1961, using the "Big Engine"—the Thiokol XLR99 with its 57,000 pounds of thrust—the X-15 flew to its Mach 6 design speed and 50+ mile design altitude, with test pilot Maj. Robert White reaching Mach 6.04 and NASA pilot Joseph Walker an altitude of 354,200 feet. After a landing accident, the second X-15 was modified with external tanks and an ablative coating, with Air Force Maj. William "Pete" Knight subsequently flying this variant to Mach 6.70 (4,520 mph) in 1967. However, it sustained severe thermal damage, partly as a result of inadequate understanding of the interactions of impinging hypersonic shock-on-shock flows. It never flew again.[575]

The X-15's cautious buildup proved a wise approach, for it gave leeway when problems arose. Unexpected thermal expansion showed up during early high-Mach flights, producing localized buckling and deformation. The skin behind the wing leading edge buckled after the first flight to Mach 5.3, but modifications to the wings eliminated hot spots and prevented subsequent problems, enabling the airplane to reach beyond Mach 6. In addition, a flight to Mach 6.04 caused a windshield to crack because of thermal expansion. This forced redesign of its frame to incorporate titanium, which has a much lower coefficient of expansion. The problem—a rare case in which Inconel caused rather than resolved a heating problem—was fixed by this simple substitution.[576]

Altitude flights brought their own problems, involving potentially dangerous auxiliary power unit (APU) failures. These issues arose in 1962 as flights began to reach well above 100,000 feet; the APUs began to experience gear failure after their lubricating oil foamed and lost its effectiveness. A different oil had much less tendency to foam; it now became standard. Designers also enclosed the APU gearbox within a pressurized enclosure. The gear failures ceased.[577]

The X-15 substantially expanded the use of flight simulators. These had been in use since the famed Link Trainer of the Second World War, and analog computers had supported flight simulation since 1949; now, however, simulators took on a new role, supporting the development of control systems and flight equipment. Still, in 1955, when the X-15 program began, it was not at all customary to use flight simulators to support aircraft design and development. But program managers turned to them because they offered effective means to study new issues in cockpit displays, control systems, and aircraft handling qualities. A 1956 paper stated that simulation had "heretofore been considered somewhat of a luxury for high-speed aircraft," but now "has been demonstrated as almost a necessity," in all three axes, "to insure [sic] consistent and successful entries into the atmosphere." Indeed, pilots spent much more time practicing in simulators than they did in actual flight, as much as an hour per minute of actual flying time.[578]

The most important flight simulator was built by North American. Originally located in Los Angeles, it was moved in 1961 to NASA's Flight Research Center at the direction of Paul Bikle, the Center's Director. It replicated the X-15 cockpit and included actual hydraulic and control-system hardware. Three analog computers implemented the equations of motion governing translation and rotation of the X-15 about all three axes, transforming pilot inputs into instrument displays.[579]
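A modern digital sketch of what those analog machines solved continuously might look like the following; the inertias and commanded pitch moment are invented for illustration and are not X-15 values.

```python
# Digital sketch of the rotational equations of motion (Euler's rigid-body
# equations, principal axes) that the simulator's analog computers integrated
# continuously. All numbers are assumed magnitudes, not X-15 data.

IX, IY, IZ = 3_600.0, 80_000.0, 82_000.0   # slug*ft^2, assumed inertias

def rate_derivatives(p, q, r, Mx, My, Mz):
    """Body-axis angular accelerations from Euler's equations."""
    p_dot = (Mx - (IZ - IY) * q * r) / IX
    q_dot = (My - (IX - IZ) * r * p) / IY
    r_dot = (Mz - (IY - IX) * p * q) / IZ
    return p_dot, q_dot, r_dot

# Fixed-step integration of a brief pilot pitch input, the digital analog of
# what integrator amplifiers did in hardware.
p = q = r = 0.0
dt = 0.01
for step in range(200):                    # two seconds of simulated flight
    My = 5_000.0 if step < 100 else 0.0    # ft*lb pitch moment, then release
    dp, dq, dr = rate_derivatives(p, q, r, 0.0, My, 0.0)
    p, q, r = p + dp * dt, q + dq * dt, r + dr * dt
print(f"pitch rate after input: {q:.4f} rad/s")
```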

The North American simulator became critical in training X-15 pilots as they prepared to execute specific planned flights. A particular mission might take little more than 10 minutes, from ignition of the main engine to touchdown on the lakebed, but a test pilot could easily spend 10 hours making practice runs in this facility. Training began with repeated trials of the normal flight profile with the pilot in the simulator cockpit and a ground controller close at hand. The pilot was welcome to recommend changes, which often went into the flight plan. Next came rehearsals of off-design missions: too much thrust from the main engine, too high a pitch angle when leaving the stratosphere.

Much time was spent practicing for emergencies. The X-15 had an inertial reference unit that used analog circuitry to display attitude, altitude, velocity, and rate of climb. Pilots dealt with simulated failures in this unit as they worked to complete the normal mission or, at least, to execute a safe return. Similar exercises addressed failures in the stability augmentation system. When the flight plan raised issues of possible flight instability, tests in the simulator used highly pessimistic assumptions concerning stability of the vehicle. Other simulations introduced in-flight failures of the radio or Q-ball multifunction sensor. Premature engine shutdown imposed a requirement for safe landing on an alternate lakebed that was available for emergency use.[580]

The simulations indeed had realistic cockpit displays, but they left out an essential feature: the g-loads produced both by rocket thrust and by deceleration during reentry. In addition, a failure of the stability augmentation system during reentry could allow the airplane to oscillate in pitch and yaw. This changed the drag characteristics and imposed a substantial cyclical force.

To address such issues, investigators installed a flight simulator within the gondola of an existing centrifuge at the Naval Air Development Center in Johnsville, PA. The gondola could rotate on two axes while the centrifuge as a whole was turning. It not only produced g-forces; its g-forces increased during the simulated rocket burn. The centrifuge imposed such forces anew during reentry while adding a cyclical component to give the effect of an oscillation in yaw or pitch.[581]

There also were advances in pressure suits, under development since the 1930s. Already an early pressure suit had saved the life of Maj. Frank K. Everest during a high-altitude flight in the X-1, when it had suffered cabin decompression from a cracked canopy. Marine test pilot Lt. Col. Marion Carl had worn another during a flight to 83,235 feet in the D-558-2 Skyrocket in 1953, as had Capt. Iven Kincheloe during his record flight to 126,200 feet in the Bell X-2 in 1956. But these early suits, while effective in protecting pilots, were almost rigid when inflated, nearly immobilizing them. In contrast, the David G. Clark Company, a girdle manufacturer, introduced a fabric that contracted in circumference while it stretched in length. An exchange between these effects created a balance that maintained a constant volume, preserving a pilot's freedom of movement. The result was the Clark MC-2 suit, which, in addition to the X-15, formed the basis for American spacesuit development from Project Mercury forward. Refined as the A/P22S-2, the X-15's suit became the standard high-altitude pressure suit for NASA and the Air Force. It formed the basis for the Gemini suit and, after 1972, was adopted by the U.S. Navy as well, subsequently being employed by pilots and aircrew in the SR-71, U-2, and Space Shuttle.[582]

The X-15 also accelerated development of specialized instrumentation, including a unique gimbaled nose sensor developed by Northrop. It furnished precise speed and positioning data by evaluation of dynamic pressure ("q" in aero engineering shorthand) and thus was known as the Q-ball. The Q-ball took the form of a movable sphere set in the nose of the craft, giving it the appearance of the enlarged tip of a ballpoint pen. "The Q-ball is a go-no go item," NASA test pilot Joseph Walker told Time magazine reporters in 1961, adding: "Only if she checks okay do we go."[583] The X-15 also incorporated "cold jet" hydrogen peroxide reaction controls for maintaining vehicle attitude in the tenuous upper atmosphere, where dynamic pressure alone was too low for conventional aerodynamic controls to function. When Iven Kincheloe reached 126,200 feet, his X-2 was essentially a free ballistic object, uncontrollable in pitch, roll, and yaw as it reached peak altitude and then began its descent. This situation made reaction controls imperative for the new research airplane, and the NACA (later NASA) had evaluated them on a so-called "Iron Cross" simulator on the ground and then in flight on the Bell X-1B and on a modified Lockheed F-104 Starfighter. They then proved their worth on the X-15 and, as with the Clark pressure suit, were incorporated on Mercury and subsequent American spacecraft.

The X-15 introduced a side stick flight controller that the pilot would utilize during acceleration (when under loads of approximately 3 g's), relying on a conventional fighter-type control column for approach and landing. The third X-15 had a very different flight control system from the other two, differing greatly from the now-standard stability-augmented hydromechanical systems carried by operational military and civilian aircraft. It introduced a so-called "adaptive" flight control system, the MH-96. Built by Minneapolis Honeywell, the MH-96 relied on rate gyros, which sensed rates of motion in pitch, roll, and yaw. It also incorporated "gain," defined as the proportion between sensed rates of angular motion and a deflection of the ailerons or other controls. This variable gain, which changed automatically in response to flight conditions, functioned to maintain desired handling qualities across the spectrum of X-15 performance. The arrangement made it possible to blend reaction and aerodynamic controls on the same stick, with the blending occurring automatically in response to the values determined for gain as the X-15 flew out of the atmosphere and back again. Experience, alas, would reveal the MH-96 as an immature, troublesome system, one that, for all its ambition, posed significant headaches. It played an ultimately fatal role in the loss of X-15 pilot Maj. Michael Adams in 1967.[584]
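The MH-96's mechanization was analog and considerably more sophisticated, but the underlying idea can be sketched as follows: adjust loop gain to hold a desired response, and split a single stick command between aerodynamic surfaces and reaction jets as dynamic pressure falls. All constants and thresholds below are assumptions for illustration, not Honeywell's values.

```python
# Hedged sketch of the MH-96 concept: adaptive gain plus automatic blending
# of aerodynamic and reaction controls from one stick. Invented constants.

Q_FULL_AERO = 200.0   # psf, assumed dynamic pressure for full aero authority
Q_NO_AERO = 10.0      # psf, assumed dynamic pressure where jets take over

def blend_fraction(q_bar: float) -> float:
    """0 = reaction jets only, 1 = aerodynamic surfaces only."""
    f = (q_bar - Q_NO_AERO) / (Q_FULL_AERO - Q_NO_AERO)
    return min(1.0, max(0.0, f))

def adapt_gain(gain: float, rate_response: float, target: float = 1.0,
               step: float = 0.05) -> float:
    """Raise gain while response is sluggish, cut it when response overshoots."""
    return gain * (1.0 + step) if rate_response < target else gain * (1.0 - step)

def commands(stick: float, gain: float, q_bar: float):
    f = blend_fraction(q_bar)
    surface_cmd = gain * stick * f        # aerodynamic surface deflection
    jet_cmd = gain * stick * (1.0 - f)    # reaction-control thruster command
    return surface_cmd, jet_cmd

gain = 2.0
gain = adapt_gain(gain, rate_response=0.6)     # sluggish response: raise gain
for q_bar in (500.0, 100.0, 5.0):              # boost, high altitude, near vacuum
    print(q_bar, commands(stick=1.0, gain=gain, q_bar=q_bar))
```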

The three X-15s accumulated a total of 199 flights from 1959 through 1968. As airborne instruments of hypersonic research, they logged nearly 9 hours above Mach 3, close to 6 hours above Mach 4, and 87 minutes above Mach 5. Many concepts existed for X-15 derivatives and spinoffs: using it as a second stage to launch small satellite-lofting boosters, modifying it with a delta wing and scramjet, even making it the basis for some sort of orbital spacecraft. For a variety of reasons, NASA did not proceed with any of these. More significant, however, was the strong influence the X-15 exerted upon subsequent hypersonic projects, particularly the National Hypersonic Flight Research Facility (NHFRF, pronounced "nerf"), intended to reach Mach 8.

A derivative of the Air Force Flight Dynamics Laboratory’s X-24C study effort, NHFRF was also to cruise at Mach 6 for 40 seconds. A joint Air Force-NASA committee approved a proposal in July 1976 with an estimated program cost of $200 million, and NHFRF had strong support from NASA’s hypersonic partisans in the Langley and Dryden Centers. Unfortunately, its rising costs, at a time when the Shuttle demanded an ever-increasing proportion of the Agency’s budget and effort, doomed it, and it was canceled in September 1977. Overall, the X-15 set speed and altitude records that were not surpassed until the advent of the Space Shuttle.[585]

The X-20 Dyna-Soar

During the 1950s, as the X-15 was taking shape, a parallel set of initiatives sought to define a follow-on hypersonic program that could actually achieve orbit. They were inspired in large measure by the 1938-1944 Silbervogel ("Silver Bird") proposal of Austrian space flight advocate Eugen Sanger and his wife, mathematician Irene Sanger-Bredt, which greatly influenced postwar Soviet, American, and European thinking about hypersonics and long-range "antipodal" flight. Influenced by Sanger's work and urged onward by the advocacy of Walter Dornberger, Bell Aircraft Corporation in 1952 proposed the BoMi, intended to fly 3,500 miles. Bell officials gained funding from the Air Force's Wright Air Development Center (WADC) to study longer-range 4,000-mile and 6,000-mile systems under the aegis of Air Force project MX-2276.

Support took a giant step forward in February 1956, when Gen. Thomas Power, Chief of the Air Research and Development Command (ARDC, predecessor of Air Force Systems Command) and a future Air Force Chief of Staff, stated that the service should stop merely considering such radical craft and instead start building them. With this level of interest, events naturally moved rapidly. A month later, Bell received a study contract for Brass Bell, a follow-on Mach 15 rocket-lofted boost-glider for strategic reconnaissance. Power preferred another orbital glider concept, RoBo (for Rocket Bomber), which was to serve as a global strike system. To accelerate transition of hypersonics from the research to the operational community, the ARDC proposed its own concept, the Hypersonic Weapons Research and Development Supporting System (HYWARDS). With so many cooks in the kitchen, the Air Force needed a coordinated plan. An initial step came in December 1956, as Bell raised the velocity of Brass Bell to Mach 18. A month later, a group headed by John Becker, at Langley, recommended the same design goal for HYWARDS. RoBo still remained separate, but it emerged as a long-term project that could be operational by the mid-1970s.[586]

NACA researchers split along centerlines over the issue of what kind of wing design to employ for HYWARDS. At NACA Ames, Alfred Eggers and Clarence Syvertson emphasized achieving maximum lift. They proposed a high-wing configuration with a flat top, calculating its hypersonic lift-to-drag ratio (L/D) as 6.85 and measuring a value of 6.65 during hypersonic tunnel tests. Langley researchers John Becker and Peter Korycinski argued that Ames had the configuration "upside down." Emphasizing lighter weight, they showed that a flat-bottom Mach 18 shape gave a weight of 21,400 pounds, which rose only modestly at higher speeds. By contrast, the Ames "flat-top" weight was 27,600 pounds and rising steeply. NASA officials diplomatically described the Ames and Langley HYWARDS concepts respectively as "high L/D" and "low heating," but while the imbroglio persisted, there still was no acceptable design. It fell to Becker and Korycinski to break the impasse in August 1957, and they did so by considering heating. It was generally expected that such craft required active cooling. But Becker and his Langley colleagues found that a glider of global range achieved peak uncooled skin temperatures of 2,000 °F, which was survivable by using improved materials. Accordingly, the flat-bottom design needed no coolant, dramatically reducing both its weight and complexity.[587]
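The physics behind the "no coolant" conclusion is radiative equilibrium: an uncooled skin heats up until it re-radiates energy as fast as the boundary layer delivers it. A back-of-envelope version, using an assumed heat flux and emissivity rather than Becker's actual figures:

```python
# Radiative-equilibrium skin temperature: solve q = eps * sigma * T^4 for T.
# The heat flux and emissivity are assumed round numbers for illustration.

SIGMA = 5.670e-8    # W/(m^2*K^4), Stefan-Boltzmann constant
EPSILON = 0.8       # assumed surface emissivity

def equilibrium_temp_kelvin(heat_flux_w_m2: float) -> float:
    return (heat_flux_w_m2 / (EPSILON * SIGMA)) ** 0.25

def kelvin_to_fahrenheit(t_k: float) -> float:
    return (t_k - 273.15) * 9.0 / 5.0 + 32.0

q = 1.6e5           # W/m^2, assumed glide heating rate
t_k = equilibrium_temp_kelvin(q)
print(f"{kelvin_to_fahrenheit(t_k):.0f} F")   # about 2,000 F for this flux
```

Because the radiated power rises as the fourth power of temperature, a skin that can survive modestly higher temperatures can reject sharply higher heating rates without any coolant, which is what made the improved-materials argument so powerful.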

This was a seminal conclusion that reshaped hypersonic thinking and influenced all future development down to the Space Shuttle. In October 1957, coincident with the Soviet success with Sputnik, the ARDC issued a coordinated plan that anticipated building HYWARDS for research at 18,000 feet per second, following it with Brass Bell for reconnaissance at the same speed and then RoBo, which was to carry nuclear bombs into orbit. HYWARDS now took on the new name of Dyna-Soar, for "Dynamic Soaring,” an allusion to the Sanger-legacy skip-gliding hypersonic reentry. (It was later designated X-20.) To the NACA, it constituted a Round Three following the Round One X-1, X-2, and Skyrocket, and the Round Two X-15.

The flat-bottom configuration quickly showed that it was robust enough to accommodate flight at much higher speeds. In 1959, Herbert York, the Defense Director of Research and Engineering, stated that Dyna-Soar was to fly at 15,000 mph, lofted by the Martin Company's Titan I missile, though this was significantly below orbital speed. But during subsequent years the booster changed to the more-capable Titan II and then to the powerful Titan III-C. With two solid-fuel boosters augmenting its liquid hypergolic main stage, the Titan III-C could easily boost Dyna-Soar to the 18,000 mph necessary for it to achieve orbit. A new plan of December 1961 dropped suborbital missions and called for "the early attainment of orbital flight."[588]

This 1957 Langley trade study shows the weight advantage of flat-bottom reentry vehicles at higher Mach numbers, which led to the abandonment of high-wing designs in favor of flat-bottom ones such as the X-20 Dyna-Soar and the Space Shuttle. NASA.

By then, though, Dyna-Soar was in deep political trouble. It had been conceived initially as a prelude to the boost-glider Brass Bell for reconnaissance and to the orbital RoBo for bombardment. But Brass Bell gave way to a purpose-built concept for a small piloted station, the Manned Orbiting Laboratory (MOL), which could carry more sophisticated reconnaissance equipment. (Ironically, though a team of MOL astronauts was selected, MOL itself likewise was eventually canceled.) RoBo, a strategic weapon, fell out of the picture completely, for the success of the solid-propellant Minuteman ICBM established the silo-launched ICBM as the Nation's prime strategic force, augmented by the Navy's fleet of Polaris-launching ballistic missile submarines.[589]

This full-size mockup of the X-20 gives an indication of its small, compact design. USAF.

In mid-1961, Secretary of Defense Robert S. McNamara directed the Air Force to justify Dyna-Soar on military grounds. Service advocates responded by proposing a host of applications, including orbital reconnaissance, rescue, inspection of Soviet spacecraft, orbital bombardment, and use of the craft as a ferry vehicle. McNamara found these rationalizations unconvincing but was willing to allow the program to proceed as a research effort, at least for the time being. In an October 1961 memo to President John F. Kennedy, he proposed to "re-orient the program to solve the difficult technical problems involved in boosting a body of high lift into orbit, sustaining man in it and recovering the vehicle at a designated place."[590] This reorientation gave the project 2 more years of life.

Then in 1963, he asked what the Air Force intended to do with it after using it to demonstrate maneuvering entry. He insisted he could not justify continuing the program if it was a dead-end effort with no ultimate purpose. But it had little potential utility, for it was not a cargo rocket, nor could it carry substantial payloads, nor could it conduct long-duration missions. And so, in December 1963, McNamara canceled it, after 6 years of development time, a Government contract investment of $410 million, the expenditure of 16 million man-hours by nearly 8,000 contractor personnel, 14,000 hours of wind tunnel testing, 9,000 hours of simulator runs, and the preparation of 3,035 detailed technical reports.[591]

Ironically, by the time of its cancellation, the X-20 was so far advanced that the Air Force had already set aside a block of serial numbers for the 10 production aircraft. Its construction was well underway, Boeing having completed an estimated 42 percent of design and fabrication tasks.[592] Though the X-20 never flew, portions of its principal purposes were fulfilled by other programs. Even before cancellation, the Air Force launched the first of several McDonnell Aerothermodynamic/elastic Structural Systems Environmental Test (ASSET) vehicles: hot-structure, radiative-cooled, flat-bottom cone-cylinder shapes sharing important configuration similarities with the Dyna-Soar vehicle. Slightly later, its Project PRIME demonstrated cross-range maneuver after atmospheric entry. This used the Martin SV-5D lifting body, a vehicle differing significantly from the X-20 but one that complemented it nonetheless. In this fashion, the Air Force succeeded at least partially in obtaining lifting reentry data from winged vehicles and lifting bodies that widened the future prospects for reentry.

Hot Structures and Return from Space: X-20’s Legacy and ASSET

Dyna-Soar never flew, but it sharply extended both the technology and the temperature limits of hot structures and associated aircraft elements, at a time when the American space program was in its infancy.[593] The United States successfully returned a satellite from orbit in April 1959, while ICBM nose cones were still under test: the Discoverer II test vehicle, supporting development of the National Reconnaissance Office's secret Corona spy satellite, survived reentry. Unfortunately, it came down in Russian-occupied territory, far removed from its intended recovery area near Hawaii. Still, it offered proof that practical hypersonic reentry and recovery were at hand.

An ICBM nose cone quickly transited the atmosphere, whereas recoverable satellite reentry took place over a number of minutes. Hence a satellite encountered milder aerothermodynamic conditions that imposed strong heat but brought little or no ablation. For a satellite, the heat of ablation, measured in British thermal units (BTU) per pound of protective material, was usually irrelevant. Instead, insulative properties were more significant: Teflon, for example, had poor ablative properties but was an excellent insulator.[594]

Production Dyna-Soar vehicles would have had a four-flight service life before retirement or scrapping, depending upon a hot structure comprised of various materials, each with different but complementary properties. A hot structure typically used a strong material, capable of withstanding intermediate temperatures, to bear flight loads. Set off from it were outer panels of a temperature-resistant material that did not have to support loads but that could withstand greatly elevated temperatures, as high as 3,000 °F. In between was a lightweight insulator (in Dyna-Soar's case, Q-felt, a silica fiber from the firm of Johns Manville). It had a tendency to shrink, thus risking dangerous gaps where high heat could bypass it. But it exhibited little shrinkage above 2,000 °F and could withstand 3,000 °F. Once "preshrunk," the material qualified for operational use.[595]

For its primary structure, Dyna-Soar used Rene 41, a nickel alloy that included chromium, cobalt, and molybdenum. Its use was pioneered by General Electric for hot-section applications in its jet engines. The alloy had room temperature yield strength of 130,000 psi, declining slightly at 1,200 °F, and was still strong at 1,800 °F. Some of the X-20’s panels were molybdenum alloy, which offered clear advantages for such hot areas as the wing leading edges. D-36 columbium alloy covered most other areas of the vehicle, including the flat underside of the wings.

These panels had to resist flutter, which brought a risk of fatigue cracking that could permit the entry of superheated hypersonic flows capable of destroying the internal structure within seconds. Because of the risks to wind tunnels from hasty and ill-considered flutter testing (where a test model, for example, can disintegrate, damaging the interior of the tunnel), X-20 flutter testing consumed 18 months of Boeing's time. Its people started testing at modest stress levels and reached levels that exceeded the vehicle's anticipated design requirements.[596]

The X-20's nose cap had to function in a thermal and dynamic pressure environment even more extreme than that experienced by the X-15's Q-ball. It was a critical item that faced temperatures of 3,680 °F, accompanied by a daunting peak heat flux of 143 BTU per square foot per second. Both Boeing and its subcontractor Chance Vought pursued independent approaches to development, resulting in two different designs. Vought built its cap of siliconized graphite with an insulating layer of temperature-resistant zirconium oxide ceramic tiles. Their melting point was above 4,500 °F, and they covered the cap's forward area, held in place by thick zirconium oxide pins. The Boeing design was simpler, using a solid zirconium oxide nose cap reinforced against cracking with two screens of platinum-rhodium wire. Like the airframe, the nose caps were rated through four orbital flights and reentries.[597]

Generally, the design of the X-20 reflected the thinking of Langley's John Becker and Peter Korycinski. It relied on insulation and radiation of the accumulated thermal load for primary thermal protection. But portions of the vehicle demanded other approaches, with specialized areas and equipment requiring specialized solutions. Ball bearings, facing a 1,600 °F thermal environment, were fabricated as small spheres of Rene 41 nickel alloy covered with gold. Antifriction bearings used titanium carbide with nickel as a binder. Antenna windows had to survive hot hypersonic flows yet be transparent to radio waves. A mix of oxides of cobalt, aluminum, and nickel gave a coating that showed a suitable emittance while furnishing requisite temperature protection.

The pilot looked through five clear panes: three that faced forward and two on the sides. The three forward panes were covered by a jettisonable protective shield and could only be used below Mach 5 after reentry, but the side panes faced a less severe aerothermodynamic environment and were left unshielded. But could the X-20 be landed if the protective shield failed to jettison after reentry? NASA test pilot Neil Armstrong, later the first human to set foot upon the Moon, flew approaches using a modified Douglas F5D Skylancer. He showed it was possible to land the Dyna-Soar using only visual cues obtained through the side windows.

The cockpit, equipment bay, and a power bay were thermally isolated and cooled via a "water wall" using lightweight panels filled with a jelled water mix. The hydraulic system was cooled as well. To avoid overheating and bursting problems with conventional inflated rubber tires, Boeing designed the X-20 to incorporate tricycle landing skids with wire-brush landing pads.[598] Dyna-Soar, then, despite never having flown, significantly advanced the technology of hypersonic aerospace vehicle design. Its contributions were many and can be illustrated by examining the confidence with which engineers could approach the design of critical technical elements of a hypersonic craft in 1958 (the year North American began fabricating the X-15) and 1963 (the year Boeing began fabricating the X-20):[599]

TABLE 1
INDUSTRY HYPERSONIC "DESIGN CONFIDENCE" AS MEASURED BY ACHIEVABLE DESIGN TEMPERATURE CRITERIA, °F

ELEMENT               X-15      X-20
Nose cap              3,200     4,300
Surface panels        1,200     2,750
Primary structure     1,200     1,800
Leading edges         1,200     3,000
Control surfaces      1,200     1,800
Bearings              1,200     1,800

In short, within the 5 years that took the X-20 from a paper study to a project well underway, the "art of the possible" in hypersonics witnessed a one-third increase in possible nose cap temperatures, a more than doubling of the acceptable temperatures of surface panels and leading edges, and a 50-percent increase in the acceptable temperatures of primary structures, control surfaces, and bearings.

The winddown and cancellation of Dyna-Soar coincided with the first flight tests of the much smaller but nevertheless still very technically ambitious McDonnell ASSET hypersonic lifting reentry test vehicles. Lofted down the Atlantic Test Range on modified Thor and Thor-Delta boosters, they demonstrated reentry at over Mach 18. ASSET dated to 1959, when Air Force hypersonic advocates advanced it as a means of assessing the accuracy of existing hypersonic theory and predictive techniques. In 1961, McDonnell Aircraft, a manufacturer of fighter aircraft and also the Project Mercury spacecraft, began design and fabrication of ASSET's small, sharply swept, flat-bottom delta wing boost-gliders. They had a length of 69 inches and a span of 55 inches.

Though in many respects they resembled the soon-to-be-canceled X-20, unlike that larger, crewed transatmospheric vehicle, the ASSET gliders were more akin to lifting nose cone shapes. Instead of the X-20's primary reliance upon Rene 41, the ASSET gliders largely used columbium alloys, with molybdenum alloy on their forward lower heat shield, graphite wing leading edges, various insulative materials, and columbium, molybdenum, and graphite coatings as needed. There were also three nose caps: one fabricated from zirconium oxide rods, another from tungsten coated with thorium, and a third of siliconized graphite coated with zirconium oxide. Though all six ASSETs looked alike, they were built in two differing variants: four Aerothermodynamic Structural Vehicles (ASV) and two Aerothermodynamic Elastic Vehicles (AEV). The former reentered from higher velocities (between 16,000 and 19,500 feet per second) and altitudes (from 202,000 to 212,000 feet), necessitating use of two-stage Thor-Delta boosters. The latter (only one of which flew successfully) used a single-stage Thor booster and reentered at 13,000 feet per second from an altitude of 173,000 feet. It was a hypersonic flutter research vehicle, analyzing as well the behavior of a trailing-edge flap representing a hypersonic control surface. Both the ASV and AEV flew with a variety of experimental panels installed at various locations and fabricated by Boeing, Bell, and Martin.[600] The ASSET program conducted six flights between September 1963 and February 1965, all successful save for one AEV launch in March 1964. Though all were intended for recovery from the Atlantic, only one survived the rigors of parachute deployment, descent, and being plunged into the ocean. But that survivor, the ASV-3, proved to be in excellent condition, with the builder, International Harvester, rightly concluding it "could have been used again."[601] ASV-4, the best flight flown, was also the last one, with the final flight-test report declaring that it returned "the highest quality data of the ASSET program." It flew at a peak speed of Mach 18.4, including a hypersonic glide that covered 2,300 nautical miles.[602]

Overall, the ASSET program scored a host of successes. It was all the more impressive for the modest investment made in its development: just $21.2 million. It furnished the first proof of the magnitude and seriousness of upper-surface leeside heating and the dangers of hypersonic flow impingement into interior structures. It dealt with practical issues of fabrication, including fasteners and coatings. It contributed to understanding of hypersonic flutter and of the use of movable control surfaces. It also demonstrated successful use of an attitude-adjusting reaction control system, in near vacuum and at speeds much higher than those of the X-15. It complemented Dyna-Soar and left the aerospace industry believing that hot structure design technology would be the normative technical approach taken on future launch vehicles and orbital spacecraft.[603]

TABLE 2
MCDONNELL ASSET FLIGHT TEST PROGRAM

DATE             VEHICLE   BOOSTER      VELOCITY        ALTITUDE   RANGE
                                        (FEET/SECOND)   (FEET)     (NAUTICAL MILES)
Sept. 18, 1963   ASV-1     Thor         16,000          205,000      987
Mar. 24, 1964    ASV-2     Thor-Delta   18,000          195,000    1,800
July 22, 1964    ASV-3     Thor-Delta   19,500          225,000    1,830
Oct. 27, 1964    AEV-1     Thor         13,000          168,000      830
Dec. 8, 1964     AEV-2     Thor         13,000          187,000      620
Feb. 23, 1965    ASV-4     Thor-Delta   19,500          206,000    2,300

Hypersonic Aerothermodynamic Protection and the Space Shuttle

Certainly over much of the Shuttle's early conceptual period, advocates thought such logistical transatmospheric aerospace craft would employ hot structure thermal protection. But undertaking such structures on large airliner-size vehicles proved troublesome and thus premature. Then, as though given a gift, NASA learned that Lockheed had built a pilot plant and could mass-produce silica "tiles" that could be attached to a conventional aluminum structure, an approach far more appealing than designing a hot structure. Accordingly, when the Agency undertook development of the Space Shuttle in the 1970s, it selected this approach, meaning that the new Shuttle was, in effect, a simple aluminum airplane. Not surprisingly, Lockheed received a NASA subcontract in 1973 for the Shuttle's thermal-protection system.

Lockheed had begun its work more than a decade earlier, when investigators at Lockheed Missiles and Space began studying ceramic fiber mats, filing a patent on the technology in December 1960. Key people included R. M. Beasley, Ronald Banas, Douglas Izu, and Wilson Schramm. By 1965, subsequent Lockheed work had led to LI-1500, a material that was 89 percent porous and weighed 15 pounds per cubic foot (lb/ft3). Thicknesses of no more than an inch protected test surfaces during simulations of reentry heating. LI-1500 used methyl methacrylate (Plexiglas), which volatilized when hot, producing an outward flow of cool gas that protected the heat shield, though also compromising its reusability.[604]

Lockheed's work coincided with NASA plans in 1965 to build a space station as its main post-Apollo venture and, consequently, the first great wave of interest in designing practical logistical Shuttle-like spacecraft to fly between Earth and the orbital stations. These typically were conceived as large winged two-stage-to-orbit systems with fly-back boosters and orbital spaceplanes. Lockheed's Maxwell Hunter devised an influential design, the Star Clipper, with two expendable propellant tanks and LI-1500 thermal protection.[605] The Star Clipper also was large enough to benefit from the Allen-Eggers blunt-body principle, which lowered its temperatures and heating rates during reentry. This made it possible to dispense with the outgassing impregnant, permitting use—and, more importantly, reuse—of unfilled LI-1500. Lockheed also introduced LI-900, a variant of LI-1500 with a porosity of 93 percent and a weight of only 9 pounds per cubic foot. As insulation, both LI-900 and LI-1500 were astonishing. Laboratory personnel found that they could heat a tile in a furnace until it was white hot, remove it, allow its surface to cool for a couple of minutes, and pick it up at its edges with their fingers, with its interior still glowing at white heat.[606]

Previous company work had amounted to general materials research. But Lockheed now understood in 1971 that NASA wished to build the Shuttle without simultaneously proceeding with the station, opening a strong possibility that the company could participate. The program had started with a Phase A preliminary study effort, advancing then to Phase B, which was much more detailed. Hot structures were initially ascendant but posed serious challenges, as NASA Langley researchers found when they tried to build a columbium heat shield suitable for the Shuttle. The exercise showed that despite the promise of reusability and long life, coatings were fragile and damaged easily, leading to rapid oxygen-induced embrittlement at high temperatures. Unprotected columbium oxidized particularly readily and, when hot, could burst into flame. Other refractory metals were available, but they were little understood because they had been used mostly in turbine blades.

Even titanium amounted literally to a black art. Only one firm, Lockheed, had significant experience with a titanium hot structure. That experience came from the Central Intelligence Agency-sponsored Blackbird strategic reconnaissance program, so most of the pertinent shop-floor experience was classified. The aerospace community knew that Lockheed had experienced serious difficulties in learning how to work with titanium, which for the Shuttle amounted to an open invitation to difficulties, delays, and cost overruns.

The complexity of a hot structure—with large numbers of clips, brackets, standoffs, frames, beams, and fasteners—also militated against its use. Each of the many panel geometries needed its own structural analysis to show with confidence that the panel could withstand creep, buckling, flutter, or stress under load, and in the early computer era, this posed daunting analytical challenges. Hot structures were also known generally to have little tolerance for "overtemps," in which temperatures exceeded the structure's design point.[607]

Thus, having taken a long look at hot structures, NASA embraced the new Lockheed pilot plant and gave close examination to Shuttle designs that used tiles, which were formally called Reusable Surface Insulation (RSI). Again, the choice of hot structures versus RSI reflected the deep pockets of the Air Force, for hot structures were costly and complex. But RSI was inexpensive, flexible, and simple. It suited NASA's budget while hot structures did not, so the Agency chose it.

In January 1972, President Richard M. Nixon approved the Shuttle as a program, thereby raising it to the level of a Presidential initiative. Within days, Dale Myers, a senior official, announced that NASA had made the basic decision to use RSI. The North American Rockwell concept that won the $2.6 billion prime contract in July therefore specified RSI as well—but not Lockheed's. North American Rockwell's version came from General Electric and was made from mullite.[608]

Which was better, the version from GE or the one from Lockheed? Only tests would tell—and exposure to temperature cycles of 2,300 °F gave Lockheed a clear advantage. NASA then added acoustic tests that simulated the loud roars of rocket flight. This led to a "sudden-death shootout,” in which competing tiles went into single arrays at NASA Johnson. After 20 cycles, only Lockheed’s entrants remained intact. In separate tests, Lockheed’s LI-1500 withstood 100 cycles to 2,500 °F and survived a thermal overshoot to 3,000 °F as well as an acoustic overshoot to 174 decibels (dB).

Lockheed won the thermal-protection subcontract in 1973, with NASA specifying LI-900 as the baseline RSI. The firm responded by preparing to move beyond the pilot-plant level and to construct a full-scale production facility in Sunnyvale, CA. With this, tiles entered the mainstream of thermal protection systems available for spacecraft design, in much the same way that blunt bodies and ablative approaches had before them, first flying into space aboard the Space Shuttle Columbia in April 1981. But getting them operational and into space was far from easy.[609]

Structures and Their Aeroelastic Manifestations

Though an airplane looks rigidly solid, in fact it is a surprisingly flexible machine. The loadings it experiences in flight can manifest themselves in a variety of ways that affect and "move" the structure, and, as discussed previously, the flight control system itself can adversely affect the structure. The convoluted field in which aerodynamics and structures collide, both statically and dynamically, has led to some of the most complex and challenging problems that engineers, researchers, and designers have faced in the history of aeronautics.

The safety factor for a railroad bridge is usually "10," meaning that the structural members are sized to carry 10 times the design load without failing. Since weight is so crucial to the performance of an airplane, however, its structural safety factor is typically "1.5"; that is, the structure can fail if the loads are only 50 percent higher than the design value. As a result of the low aircraft design safety factor, aircraft structures receive far more attention during design than do bridge structures and are subject to much larger deformations when loaded. This structural deformation can also interact with the aerodynamics of an airplane, both dynamically and statically, independently from the control system interaction mentioned earlier.
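The weight consequence of that difference in safety factor is straightforward to illustrate; the member load and allowable stress below are hypothetical:

```python
# Factor-of-safety arithmetic: size the same tension member to a bridge-style
# factor of 10 and an aircraft-style factor of 1.5. Numbers are hypothetical.

DESIGN_LOAD = 30_000.0     # lb, assumed limit load in the member
YIELD_STRESS = 60_000.0    # psi, assumed material allowable

for factor in (10.0, 1.5):
    required_area = factor * DESIGN_LOAD / YIELD_STRESS   # in^2
    print(f"factor {factor:4.1f}: member area {required_area:.2f} in^2")
# The factor-10 member needs roughly 6.7 times the cross-section, and hence
# weight, of the factor-1.5 member, which is why aircraft accept the risk.
```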

Hot Structure Approaches

Another option for thermal protection during entry was the use of exotic, high-temperature materials for the external surface that could reradiate the heat back into space. This concept was proposed for the X-20 Dyna-Soar program, and the vehicle was well under construction at the time of cancellation.[755] In parallel with the X-20 program, the Air Force Flight Dynamics Laboratory developed a small radiative-cooled hot structure vehicle (essentially the first 4 feet of the X-20 Dyna-Soar's nose), called the McDonnell Aerothermodynamic/elastic Structural Systems Environmental Tests (ASSET). The ASSET design used the same materials and thermal protection concepts as the X-20 and first flew in September 1963, 3 months before cancellation of the Dyna-Soar. The fourth ASSET vehicle successfully completed a Mach 18.4 entry from 202,000 feet in 1965. Postflight examination indicated it survived the entry well, although the operational problems and manufacturing methods for these exotic materials were expensive and time-consuming. Since that time, joint NASA-Air Force-Navy-industry developmental programs such as the X-30 National Aero-Space Plane (NASP) effort of the late 1980s to early 1990s have advanced materials and fabrication technologies that, in due course, may be applied to future hypersonic systems.[756]

Structural Analysis Prior to Computers

Basic principles of structural analysis—static equilibrium, trusses, and beam theory—were known long before computers, or airplanes, existed. Bridges, towers and other buildings, and ships were designed by a combination of experience and some amount of analysis—more so as designs became larger and more ambitious during and after the Industrial Revolution.

With airplanes came much greater emphasis on weight minimization. Massive overdesign was no longer an acceptable means to achieve structural integrity. More rigorous analysis and structural sizing were required. Simplifications allowed the analysis of primary members under simple loading conditions:

• Slender beams: axial load, shear, bending, torsion.

• Trusses: members carry axial load only, joined to other such members at ends.

• Simple shells: pressure loading.

• Semi-monocoque (skin and stringer) structures: shear flow, etc.

• Superposition of loading conditions.

With these simplifications, primary structural members could be sized appropriately to the expected loads. In the days of wood, wire, and fabric, many aircraft structures could be analyzed as trusses: externally braced biplane wings; fuselage structures consisting of longerons, uprights, and cross braces, with diagonal braces or wires carrying torsion; landing gears; and engine mounts. As early as the First World War and in the 1920s, researchers were working to cover every required aspect of the problem: general analysis methods, analysis of wings, horizontal and vertical tails, gust loads, test methods, etc. The National Advisory Committee for Aeronautics (NACA) contributed significantly to the building of this early body of methodology.[787]

Structures with redundancy—multiple structural members capable of sharing one or more loading components—may be desirable for safety, but they posed new problems for analysis. Redundant structures cannot be analyzed by force equilibrium alone. A conservative simplification, often practiced in the early days of aviation, was to analyze the structure with redundant members missing. A more precise solution would require the consideration of displacements and "compatibility" conditions: members that are connected to one another must deform in such a manner that they move together at the point of connection. Analysis was feasible but time-consuming. Large-scale solutions to redundant ("statically indeterminate") structure problems would become practical with the aid of computers. Until then, more simplifications were made, and specific types of solutions—very useful ones—were developed.
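The simplest redundant case shows the method: two parallel members share one load, equilibrium alone gives one equation in two unknown forces, and the compatibility condition (equal elongation) closes the system. Numbers in this sketch are hypothetical:

```python
# Why redundancy defeats force equilibrium alone: two parallel bars carry one
# load P, so equilibrium (F1 + F2 = P) is one equation in two unknowns.
# Compatibility (equal elongation) supplies the missing equation.

P = 10_000.0                      # lb, applied load (assumed)
AREA_1, AREA_2 = 1.0, 0.5         # in^2, member cross-sections (assumed)
E = 10.0e6                        # psi, aluminum modulus
LENGTH = 40.0                     # in, both members

k1 = AREA_1 * E / LENGTH          # axial stiffness, lb/in
k2 = AREA_2 * E / LENGTH

# Compatibility: delta_1 = delta_2 = delta; equilibrium: F1 + F2 = P.
delta = P / (k1 + k2)             # common elongation
F1, F2 = k1 * delta, k2 * delta   # members share the load by stiffness
print(f"F1 = {F1:.0f} lb, F2 = {F2:.0f} lb, elongation = {delta:.4f} in")
```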

While these analysis methods were being developed, there was a lot of airplane building going on without very much analysis at all. In the "golden age of aviation," many airplanes were built in garages or at small companies that lacked the resources for extensive analysis. "In many cases people who flew the airplanes were the same people who carried out the analysis and design. They also owned the company. There was very little of what we now call structural analysis. Engineers were brought in and paid—not to design the aircraft—but to certify that the aircraft met certain safety requirements."[788]

Through the 1930s, as aircraft structures began to be formed out of aluminum, the semi-monocoque or skin-and-stringer structure became prevalent, and analysis methods were developed to suit. "In the 1930s, '40s, and '50s, techniques were being developed to analyze specific structural components, such as wing boxes and shear panels, with combined bending, torsion, and shear loads and with stiffeners on the skins."[789] A number of exact solutions to the differential equations for stress and strain in a structural member were known, but these generally exist only for very simple geometric shapes and very limited sets of loading conditions and boundary conditions. Exact solutions were of little practical value to the aircraft designer or stress analyst. Instead, "free body diagrams" were used to analyze structures at selected locations, or "stations." The structure was considered to be cut by a theoretical plane at the station of interest. All loads, applied and inertial, on the portion of the aircraft outboard of the cut had to be borne (reacted) by the structure at the cut.

In principle, this allowed the stress at any point in the structure to be analyzed—given the time to make an arbitrarily large number of these theoretical cuts through the aircraft. In practice, free body dia­grams were used to analyze the structure at key locations—selected fuselage stations, the root, and selected stations of wings and tail sur­faces. Structural members were left constant, or tapered appropriately, according to experience and judgment, between the analyzed sections. For major projects such as airliners or bombers, the analysis would be more thorough, and consequently, major design organizations had rooms full of people whose jobs were to perform the required calculations.
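As an illustration of the station-cut idea, the following Python sketch integrates an assumed elliptical lift distribution outboard of a theoretical cut to obtain the shear and bending moment the structure must react there. The semispan, lift, and station values are hypothetical:

```python
import numpy as np

# Free-body cut at a wing station: everything outboard of the cut must be
# reacted at the cut. Assumes an illustrative elliptical lift distribution;
# the numbers are invented, not from any real aircraft.

semispan = 15.0                      # m
total_lift_per_wing = 50_000.0       # N
y = np.linspace(0.0, semispan, 301)  # spanwise stations, root to tip
w = np.sqrt(1.0 - (y / semispan) ** 2)        # elliptical shape
w *= total_lift_per_wing / np.trapz(w, y)     # scale to the wing lift, N/m

def cut_loads(y_cut):
    """Shear (N) and bending moment (N*m) reacted by the structure at y_cut."""
    mask = y >= y_cut
    shear = np.trapz(w[mask], y[mask])
    moment = np.trapz(w[mask] * (y[mask] - y_cut), y[mask])
    return shear, moment

for station in (0.0, 5.0, 10.0):
    V, M = cut_loads(station)
    print(f"station y={station:4.1f} m: shear {V:9.0f} N, moment {M:10.0f} N*m")
```

Repeating the calculation at each station of interest reproduces, in a few milliseconds, what once occupied rooms of human calculators.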

The NACA also utilized this brute-force approach to large calcu­lations, and the people who performed the calculations—overwhelm­ingly women—were called "computers.” Annie J. Easley, who worked at the NASA Lewis (now Glenn) Research Center starting in 1955, recalls:

. . . we were called computers until we started to get the machines, and then we were changed over to either math tech­nicians or mathematicians. . . . The engineers and the scien­tists are working away in their labs and their test cells, and they come up with problems that need mathematical compu­tation. At that time, they would bring that portion to the com­puters, and our equipment then were the huge calculators, where you’d put in some numbers and it would clonk, clonk, clonk out some answers, and you would record them by hand. Could add, subtract, multiply, and divide. That was pretty much what those big machines, those big desktop machines, could do. If we needed to find a logarithm or an exponential, we then pulled out the tables.[790]

After World War II, with jet engines pushing aircraft into ever more demanding flight regimes, the analytical community sought to keep up. The NACA continued to improve the methodologies for calculating loads on various parts of an aircraft, and some of the reports generated during that time are still used by industry practitioners today. NACA Technical Report (TR) 1007, for horizontal tail loads in pitch maneuvers, is a good example, although it does not cover all of the conditions required by recent airworthiness regulations.[791]

For structural analysis, energy methods and matrix methods began to receive more attention. Energy methods work as follows: one first expresses the deflection of a member as a set of assumed shape functions, each multiplied by an (initially unknown) coefficient; expresses the total potential energy (the strain energy less the work done by the applied loads) in terms of these unknown coefficients; and finally, finds the values of the coefficients that make the total potential energy stationary. If the shape functions, from which the solution is built, satisfy the boundary conditions of the problem, then so does the final solution.
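A minimal symbolic sketch of the procedure, for a cantilever beam of length L and stiffness EI under a tip load P (the two polynomial shape functions are an arbitrary illustrative choice that happens to satisfy the clamped-end conditions):

```python
import sympy as sp

# Rayleigh-Ritz sketch: assume w(x) = c1*(x/L)**2 + c2*(x/L)**3, both shapes
# satisfying w(0) = w'(0) = 0 for a clamped root, then make the total
# potential energy Pi = U - P*w(L) stationary in c1, c2.

x, L, EI, P = sp.symbols("x L EI P", positive=True)
c1, c2 = sp.symbols("c1 c2")
w = c1 * (x / L) ** 2 + c2 * (x / L) ** 3     # assumed deflection shape

U = sp.Rational(1, 2) * EI * sp.integrate(sp.diff(w, x, 2) ** 2, (x, 0, L))
Pi = U - P * w.subs(x, L)                     # total potential energy

# Stationarity gives a 2x2 linear system in the unknown coefficients.
sol = sp.solve([sp.diff(Pi, c1), sp.diff(Pi, c2)], [c1, c2])
tip = sp.simplify(w.subs(x, L).subs(sol))
print(tip)   # -> P*L**3/(3*EI), the exact cantilever tip deflection
```

Because the exact cantilever deflection is itself a cubic, this basis recovers the classical answer exactly; with a richer structure, the same machinery yields an approximation that improves as shape functions are added.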

Energy methods were not new. The concept of energy minimization was introduced by Lord Rayleigh in the late 19th century and extended by Walter Ritz in two papers of 1908 and 1909.[792] Rayleigh and Ritz were particularly concerned with vibrations. Carlo Alberto Castigliano, an Italian engineer, published a dissertation in 1873 that included two important theorems for applying energy principles to forces and static displacements in structures.[793] However, in the early works, the shape functions were continuous over the domain of interest. The idea of breaking up (discretizing) a complex structure into many simple elements for numerical solution would lead to the concept of finite elements, but for this to be useful, computing technology needed to mature.

Applying Computational Structural Analysis to Flight Research

We now turn to an area of activity that provides, for aviation, the ulti­mate proof of design techniques and predictive capabilities: flight-test­ing. While there are many fascinating projects that could be discussed, we will consider only five that had particular relevance to the subject at hand, either because they collected data that were specifically intended to provide validation of computational predictions of structural behav­ior, or because they demonstrated unique structural design approaches.

Two of these are the YF-12 Thermal Loads project and the Rotor Aerodynamic Limits survey, both of which collected data for validating and improving predictive methods. The remaining three are the Highly Maneuverable Aircraft Technology (HiMAT) digital fly-by-wire (DFBW) enhanced agility composite-structured canard demonstrator, the AD-1 oblique wing demonstrator, and the Grumman X-29 forward-swept wing (FSW) research aircraft. These three projects exercised, in progressively more challenging ways, the concept of aeroelastic tailoring: that is, predicting airframe flexibility and having enough confidence in those predictions to design an airplane that takes advantage of elastic deformation, rather than just trying to minimize it. In all of these, NASA-rooted computational structural prediction proved of great, and occasionally even critical, significance.

The investigation of aircraft structural mechanics or, indeed, of almost any discipline, can be considered to include the following activ­ities: investigation by basic theory, computational analysis or simula­tion, laboratory test, and flight test (or, more generally, any test of the final product in its actual operating environment). Many arguments have been had over which is the most valuable. This author is of the opinion—based on his experience in the practice of engineering, on a certain amount of historical research, and on the teaching and example of mentors and peers—that theory, computation, laboratory test, and flight test all constitute imperfect but complementary views of reality. Thus, until someone comes up with a way to know the exact state of stress and deflection in every part of a vehicle under actual operating conditions, we must form our understanding of reality as a composite image, using what information we can gain from each available source:

• Flight test, obviously, is the best representation we have of an aircraft in actual operational conditions. However, our ability to interrogate the system is most severely compromised in this activity. Many data parameters are not available unless special instrumentation is installed, and some not even then; this is also the most difficult environment in which to obtain stable, high-quality data.

• Laboratory test offers better visibility into the opera­tion of specific parts of the system and better control of experimental parameters, at the price of some separa­tion from true operational conditions.

• Computation offers even greater opportunity to interrogate the value of any data parameter at any time(s) and to simulate conditions that might be impossible, difficult, or dangerous to test. Computation also eliminates all physical complications of running the experiment and all physical sources of noise and uncertainty. But in stepping out of the physical world and into the analytical world, the researcher also becomes subject to the limited fidelity of his computational method: what effects are and are not included in the computation and how well the computation represents physical reality.

• Theory is sometimes the best source of insight and of understanding what parameters might be changed to obtain some desired effect, but it does not provide the detailed quantitative data necessary to implement the solution.

In this light, the following flight programs are discussed. Much more could be said about each of them. The present discussion is necessarily confined to their significance to the development or validation of loads and structural computation methods.

Structural Analysis and Loads Prediction Facilities

Test facilities have an important role in verifying and improving analysis methods. A few test facilities that figured prominently in the development and validation of structural analysis methods are described below. In addition to those described, other "landmark" test facilities include large-scale launch vehicle structural test facilities at Johnson and Marshall Space Centers, and the crash dynamics test facility at Langley Research Center.

Structural Dynamics Laboratory (Ames Research Center, 1965)

During the 1960s, Ames and Langley collaborated on some of the struc­tural dynamics and buffet problems of spacecraft during ascent. (This collaboration occurred through some of the same meetings at NASA Headquarters that led to the development of NASTRAN.) To help assess the structural dynamic characteristics of boosters, and to build confi­dence in predictive methods, a large structural dynamics test facility was built at Ames (completed in 1965). This facility was large enough to hold a full-size Atlas or Titan II, had provisions for exciting the struc­tural modes of the test article, and could be evacuated to test the struc­tural damping characteristics in zero or reduced ambient air density.[1005] The facility was also used for research on buffet during reentry and land­ing impacts.[1006] Much of the structural dynamics research at Ames was discontinued or relocated during the early 1970s. The laboratory is long since deactivated, but the large, pentagonal tower still stands, housing a machine shop and a wind tunnel that can simulate Mars’s atmosphere by evacuating the chamber and then filling to low pressure with CO2.[1007]

Thermal Loads Laboratory (Dryden Flight Research Center, 1960s)

A 1973 accounting of NASA research facilities listed only one major ground laboratory at Dryden: the High Temperature Loads Calibration Laboratory.[1008] High supersonic and hypersonic flight research created a need (1) to test airframes on the ground under simultaneous thermal and structural loading conditions and (2) to calibrate loads instrumentation at elevated temperatures, so that the data obtained in flight could be reliably interpreted. These needs " . . . led to the construction of a laboratory for calibrating strain-gage installations to measure loads in an elevated temperature environment. The problems involved in measuring loads with strain gages. . . require the capability to heat and load aircraft under simulated flight conditions. . . . The laboratory has the capability of testing structural components and complete vehicles under the combined effects of loads and temperatures, and calibrating and evaluating flight loads instrumentation under [thermal] conditions expected in flight."[1009]

The laboratory is housed in a hangarlike building with attached shop, offices, and control room. Capabilities included:

• Hangar-door opening 40 feet high by 136 feet wide.

• Unobstructed test area 150 by 120 by 40 feet allowed the testing of aircraft up to and including, for example, a YF-12 or SR-71.

• Ten megawatts of electrical heating power via quartz lamps and reflectors.

• Temperatures up to 3,000 °F.

• Hydraulic power of 4.5 gallons/minute at 3,000 pounds per square inch (psi) to apply loads.

• Fourteen channels of closed-loop load or position control of up to 34 separate actuators.

• Sensors including strain gages, thermocouples, load cells, and position transducers.

Slots in the floor provided flexible locations for tiedown points, as well as routing for hydraulic and electrical power, instrumentation wir­ing, compressed air, or water (presumably for cooling). Closed-loop ana­log control of both mechanical load and heating was provided, to any desired preprogrammed time history.
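The closed-loop idea can be sketched in a few lines of Python. The toy below (plant constants, gains, and the 500 °F ramp profile are all invented for illustration, not facility values) drives a first-order thermal model so the test-article temperature tracks a preprogrammed time history, in the spirit of the laboratory's analog controllers:

```python
# Toy closed-loop heating sketch: a PI controller drives lamp power so a
# first-order thermal plant tracks a preprogrammed temperature profile.
# Every constant here is illustrative, not an actual facility parameter.

dt, t_end = 0.1, 600.0      # time step and run length, s
tau, gain = 120.0, 8.0      # plant time constant (s), degF per unit power
kp, ki = 4.0, 0.05          # PI gains, tuned by eye for this toy model
ambient = 70.0              # degF

T, integ = ambient, 0.0     # article temperature, integrator state
for step in range(int(t_end / dt)):
    t = step * dt
    target = ambient + 430.0 * min(t / 300.0, 1.0)  # ramp to 500 degF, hold
    err = target - T
    integ += err * dt
    power = max(0.0, kp * err + ki * integ)         # lamps can only heat
    T += dt * (gain * power + ambient - T) / tau    # first-order plant
    if step % 600 == 0:                             # report once a minute
        print(f"t={t:5.0f} s  target={target:6.1f} degF  actual={T:6.1f} degF")
```

The real facility added a second loop for mechanical load, with both loops slaved to the same preprogrammed time history so thermal and structural conditions could be applied together or independently.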

The facility was used in the YF-12 thermal loads project (discussed elsewhere in this paper), in Space Shuttle structural verification at high temperatures, and for a variety of other studies.[1010] The loads laboratory made contributions to the validation of computational methods by providing the opportunity to compare computational predictions with test data obtained under known, controlled thermal and structural loading conditions, applied together or independently as required. At the time of this writing, the facility is still in use.[1011]

Early Aircraft Fly-By-Wire Applications

By the 1950s, fully boosted flight controls were common, and the potential benefits of fly-by-wire were becoming increasingly apparent. Beginning during the Second World War and continuing postwar, fly-by­wire and power-by-wire flight control systems had been fielded in var­ious target drones and early guided missiles.[1114] However, most aircraft designers were reluctant to completely abandon mechanical linkages to
flight control surfaces in piloted aircraft, an attitude that would undergo an evolutionary change over the next two decades as a result of a broad range of NACA-NASA, Air Force, and foreign research efforts.

Beginning in 1952, the NACA Langley Aeronautical Laboratory began an effort to explore various aspects of fly-by-wire, including the use of a side stick controller.[1115] By 1954, flight-testing began with what was perhaps the first jet-powered fly-by-wire research aircraft, a modified former U.S. Navy Grumman F9F-2 Panther carrier-based jet fighter used as an NACA research aircraft. The primary objective of the NACA effort was to evaluate various automatic flight control systems, including those based on rate and normal acceleration feedback. Secondary objectives were to evaluate use of fly-by-wire with a side stick controller for pilot inputs. The existing F9F-2 hydraulic flight control system, with its mechanical linkages, was retained, with the NACA designing an auxiliary flight control system based on a fly-by-wire analog concept. A small, 4-inch-tall side stick controller was mounted at the end of the right ejection seat armrest. The controller was pivoted at the bottom and was used for both lateral (roll) and longitudinal (pitch) control. Only 4 pounds of force were required for full stick deflection. The control friction normally present in a hydromechanical system was completely eliminated by the electrically powered system. Additionally, the aircraft's fuel system was modified to enable fuel to be pumped aft to destabilize the aircraft by moving the center of gravity rearward. Another modification was the addition of a steel container mounted on the lower aft fuselage. This carried 250 pounds of lead shot to further destabilize the aircraft. In an emergency, the shot could be rapidly jettisoned to restabilize the aircraft. Fourteen pilots flew the modified F9F-2, including NACA test pilots William Alford[1116] and Donald L. Mallick.[1117] Using only the side stick controller, the pilots conducted takeoffs, stall approaches, acrobatics, and rapid precision maneuvers that included air-to-air target tracking, ground strafing runs, and precision approaches and landings. The test pilots quickly became used to flying with the side stick and found it comfortable and natural to use.[1118]

In mid-1956, after interviewing aircraft flight control experts from the Air Force Wright Air Development Center’s Flight Control Laboratory, Aviation Week magazine concluded:

The time may not be far away when the complex mechanical linkage between the pilot's control stick and the airplane's control surface (or booster valve system) is replaced with an electrical servo system. It has long been recognized that this "fly-by-wire" approach offered attractive possibilities for reducing weight and complexity. However, airplane designers and pilots have been reluctant to entrust such a vital function to electronics whose reliability record leaves much to be desired.[1119]

Even as the Aviation Week article was published, several noteworthy aircraft were under development that would incorporate various fly-by-wire approaches in their flight control systems. In 1956, the British Avro Vulcan B.2 bomber flew with a partial fly-by-wire system that operated in conjunction with hydraulically boosted, mechanically activated flight controls. The supersonic North American A-5 Vigilante Navy carrier-based attack bomber flew in 1958 with a pseudo-fly-by-wire flight control system. The Vigilante served the fleet for many years, but its highly complex design proved very difficult to maintain and operate in an aircraft carrier environment. By the mid-1960s, the General Dynamics F-111 was flying with triple-redundant, large-authority stability and command augmentation systems and fly-by-wire-controlled wing-mounted spoilers.[1120]

On the basic research side, the delta-winged British Short S.C.1, first flown in 1957, was a very small, single-seat Vertical Take-Off and Landing (VTOL) aircraft. It incorporated a triply redundant fly-by-wire flight control system with a mechanical backup capability. The outputs from the three independent fly-by-wire channels were compared, and a failure in a single channel was overridden by the other two (a minimal sketch of such voting logic follows the list below). A single channel failure was relayed to the pilot as a warning, enabling him to switch to the direct (mechanical) control system. The S.C.1 had three flight control modes, as described below, with the first two only being selectable prior to takeoff.[1121]

• Full fly-by-wire mode with aerodynamic surfaces and nozzles controlled electrically via three independent servo motors with triplex fail-safe operation in conjunction with three analog autostabilizer control systems.

• A hybrid mode in which the reaction nozzles were servo/ autostabilizer (fly-by-wire) controlled and the aerodynamic surfaces were linked directly to the pilot’s manual controls.

• A direct mode in which all controls were mechanically linked to the pilot control stick.
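The triplex voting principle itself is simple enough to sketch in a few lines of Python; the tolerance and channel values below are invented for illustration:

```python
# Minimal sketch of triplex fail-safe voting: three independent channels are
# compared, a single disagreeing channel is out-voted by the other two via
# mid-value selection, and the disagreement is flagged (to warn the pilot).

def triplex_vote(a: float, b: float, c: float, tolerance: float = 0.05):
    """Return (selected_output, failed_channel_index or None)."""
    outputs = [a, b, c]
    mid = sorted(outputs)[1]              # mid-value select
    failed = None
    for i, val in enumerate(outputs):
        if abs(val - mid) > tolerance:    # channel disagrees with the vote
            failed = i
    return mid, failed

# Channel 1 has drifted; channels 0 and 2 out-vote it.
cmd, bad = triplex_vote(0.42, 0.97, 0.41)
print(f"selected command: {cmd}, failed channel: {bad}")   # -> 0.42, channel 1
```

Mid-value selection has the useful property that a single hard-over failure, high or low, can never capture the output, which is exactly the fail-safe behavior the S.C.1 arrangement sought.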

The S.C.1 weighed about 8,000 pounds and was powered by four vertically mounted Rolls-Royce RB.108 lift engines, providing a total vertical thrust of 8,600 pounds. One RB.108 engine mounted horizontally in the rear fuselage provided thrust for forward flight. The lift engines were mounted vertically in side-by-side pairs in a central engine bay and could be swiveled to produce vectored thrust (up to 23 degrees forward for acceleration or -12 degrees for deceleration). Variable thrust nose, tail, and wingtip jet nozzles (powered by bleed air from the four lift engines) provided pitch, roll, and yaw control in hover and at low speeds during which the conventional aerodynamic controls were ineffective. The S.C.1 made its first flight (a conventional takeoff and landing) on April 2, 1957. It demonstrated tethered vertical flight on May 26, 1958, and free vertical flight on October 25, 1958. The first transition from vertical flight to conventional flight was made April 6, 1960.[1122]

During 10 years of flight-testing, the two S.C.1 aircraft made hundreds of flights and were flown by British, French, and NASA test pilots. A Royal Aircraft Establishment (RAE) report summarizing flight-test experience with the S.C.1 noted: "Of the visiting pilots, those from NASA [Langley's John P. "Jack" Reeder and Fred Drinkwater from Ames] flew the aircraft 6 or 7 times each. They were pilots of very wide experience, including flight in other VTOL aircraft and variable stability helicopters, which was of obvious assistance to them in assessing the S.C.1."[1123] On October 2, 1963, while hovering at an altitude of 30 feet, a gyro input malfunction in the flight control system produced uncontrollable pitch and roll oscillations that caused the second S.C.1 test aircraft (XG 905) to roll inverted and crash, killing Shorts test pilot J. R. Green. The aircraft was then rebuilt for additional flight-testing. The first S.C.1 (XG 900) was used for VTOL research until 1971 and is now part of the Science Museum aircraft collection at South Kensington, London. The second S.C.1 (XG 905) is in the Flight Experience exhibit at the Ulster Folk and Transport Museum in Northern Ireland, near where the aircraft was originally built by Short Brothers.

The Canadian Avro CF-105 Arrow supersonic interceptor flew for the first time in 1958. Revolutionary in many ways, it featured a dual channel, three-axis fly-by-wire flight control system designed without any mechanical backup flight control capability. In the CF-105, the pilot's control inputs were detected by pressure-sensitive transducers mounted in the pilot's control column. Electrical signals were sent from the transducers to an electronic control servo that operated the valves in the hydraulic system to move the various flight control surfaces. The CF-105 also incorporated artificial feel and stability augmentation systems.[1124] In a highly controversial decision, the Canadian government canceled the Arrow program in 1959 after five aircraft had been built and flown. Although only about 50 flight test hours had been accumulated, the Arrow had reached Mach 2.0 at an altitude of 50,000 feet. During its development, NACA Langley Aeronautical Laboratory assisted the CF-105 design team in a number of areas, including aerodynamics, performance, stability, and control. After the program was terminated, many Avro Canada engineers accepted jobs with NASA and British or American aircraft companies.[1125] Although it never entered production and details of its pioneering flight control system design were reportedly little known at the time, the CF-105 presaged later fly-by-wire applications.

NACA test data derived from the F9F-2 fly-by-wire experiment were used in development of the side stick controllers in the North American X-15 rocket research plane, with its adaptive flight control system.[1126] First flown in 1959, the X-15 eventually achieved a speed of Mach 6.7 and reached a peak altitude of 354,200 feet. One of the two side stick controllers in the X-15 cockpit (on the left console) operated the reaction thruster control system, critical to maintaining proper attitude control at high Mach numbers and extreme altitudes during descent back into the higher-density lower atmosphere. The other controller (on the right cockpit console) operated the conventional aerodynamic flight control surfaces. A CALSPAN NT-33 variable stability test aircraft equipped with a side stick controller and an NACA-operated North American F-107A (ex-USAF serial No. 55-5120), modified by NACA engineers with a side stick flight control system, were flown by X-15 test pilots during 1958-1959 to gain side stick control experience prior to flying the X-15.[1127]

Interestingly, the British VC10 jet transport, which first flew in 1962, has a quad channel flight control system that transmits electrical signals directly from the pilot's flight controls or the aircraft's autopilot via electrical wiring to self-contained electrohydraulic Powered Flight Control Units (PFCUs) in the wings and tail of the aircraft, adjacent to the flight control surfaces. Each VC10 PFCU consists of an individual small self-contained hydraulic system with an electrical pump and small reservoir. The PFCUs move the control surfaces based on electrical signals provided to the servo valves that are electrically connected to the cockpit flying controls.[1128] There are no mechanical linkages or hydraulic lines between the pilot and the PFCUs. The PFCUs drive the primary flight
control surfaces that consist of split rudders, ailerons, and elevators on separate electrical circuits. Thus, the VC10 has many of the attributes of fly-by-wire and power-by-wire flight control systems. It also features a backup capability that allows it to be flown using the hydraulically boosted variable incidence tail plane and differential spoilers that are operated via conventional mechanical linkages and separate hydraulic systems.[1129] The VC10K air refueling tanker was still in Royal Air Force (RAF) service as of 2009, and the latest Airbus airliner, the A380, uses the PFCU concept in its fly-by-wire flight control system.

The Anglo-French Concorde supersonic transport first flew in 1969 and was capable of sustained transatlantic supercruise at Mach 2.0 at cruising altitudes well above 50,000 feet. In support of the Concorde development effort, a two-seat Avro 707C delta-winged flight research aircraft was modified as a fly-by-wire technology testbed with a side stick controller. It flew 200 hours of fly-by-wire flight trials in the U.K. at Farnborough until September 1966.[1130] Concorde had a dual channel analog fly-by-wire flight control system with a backup mechanical capability. The mechanical system served in a follower role unless problems developed with the fly-by-wire control elements of the system, in which case it was automatically connected. Pilot movements of the cockpit controls operated signal transducers that generated commands to the flight control system. These commands were processed by an analog electrical controller that included the aircraft autopilot. Mechanically operated servo valves were replaced by electrically controlled ones. Much as with the CF-105, artificial feel forces were electrically provided to the Concorde pilots based on information generated by the electronic controller.[1131]

AFTI Phase I Testing

Phase I flight-testing was conducted by the AFTI/F-16 Joint Test Force from the NASA Dryden Flight Research Facility at Edwards AFB, CA, from July 10, 1982, through July 30, 1983. During this phase, five test pilots from NASA, the Air Force, and the U.S. Navy flew the aircraft. Initial flights checked out the aircraft's stability and control systems. Handling qualities were assessed in air-to-air and air-to-ground scenarios, as well as in formation flight and during approach and landing. The Voice Command System allowed the pilot to change switch positions, display formats, and modes simply by saying the correct word. Initial tests were of the system's ability to recognize words, with later testing conducted under increasing levels of noise, vibrations, and g-forces. Five pilots flew a total of 87 test sorties with the Voice Command System, with a general success rate approaching 90 percent. A prototype helmet-mounted sight was also evaluated. On July 30, 1983, the AFTI/F-16 aircraft was flown back to the General Dynamics facility at Fort Worth, TX, for modification for Phase II. During the Phase I test effort, 118 flight-test sorties were flown, totaling about 177 flight hours. In addition to evaluating the DFCS, the potential operational utility of task-tailored flight modes (that included decoupling of aircraft attitude and flight path) was also assessed. During these unconventional maneuvers, the AFTI/F-16 demonstrated that it could alter its nose position without changing flight path and change its flight path without changing aircraft attitude. The aircraft also performed coordinated horizontal turns without banking or sideslip.[1183] NASA test pilot Bill Dana recounted: "In Phase I we evaluated non-classic flight control modes. By deflecting the elevators and flaps in various relationships, it was possible to translate the aircraft vertically without changing pitch attitude or to pitch-point the airplane without changing your altitude. You could also translate laterally without using bank and yaw-point without translating the aircraft, by using rudder and canard inputs programmed together in the flight control computer."[1184]
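The decoupling Dana describes can be illustrated with a small control-mixing sketch: given lift and pitching-moment effectiveness for two surfaces, inverting the 2x2 effectiveness matrix yields the surface combination for pure vertical translation or pure pitch pointing. The numbers below are invented for illustration and bear no relation to actual AFTI/F-16 derivatives:

```python
import numpy as np

# Invented control-effectiveness matrix: rows are [lift, pitching moment]
# per degree of deflection; columns are [flap, elevator].
B = np.array([[0.020,  0.012],
              [0.008, -0.025]])

def mix(dCL_cmd, dCm_cmd):
    """Surface deflections (deg) producing the commanded lift/moment pair."""
    return np.linalg.solve(B, [dCL_cmd, dCm_cmd])

flap, elev = mix(0.05, 0.0)   # lift change, zero moment: vertical translation
print(f"vertical translation: flap {flap:+.1f} deg, elevator {elev:+.1f} deg")

flap, elev = mix(0.0, -0.01)  # moment change, zero lift: pitch pointing
print(f"pitch pointing:       flap {flap:+.1f} deg, elevator {elev:+.1f} deg")
```

In the real aircraft this mixing lived in the digital flight control computer, which is why such modes were practical on the AFTI/F-16 and not on mechanically linked predecessors.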

Highly Integrated Digital Electronic Control

The Highly Integrated Digital Electronic Control (HIDEC) evolved from the earlier DEEC research effort. Major elements of the HIDEC were
a Digital Electronic Flight Control System (DEFCS), engine-mounted DEECs, an onboard general-purpose computer, and an integrated architecture that provided connectivity between components. The HIDEC F-15A (USAF serial No. 71-0287) was modified to incorporate DEEC-equipped F100 engine model derivative (EMD) engines. A dual channel Digital Electronic Flight Control System augmented the standard hydromechanical flight control system in the F-15A and replaced its analog control augmentation system. The DEFCS was linked to the aircraft data buses to tie together all other electronic systems, including the aircraft's variable geometry engine inlet control system.[1261] Over a span of about 15 years, the HIDEC F-15 would be used to develop several modes of integrated propulsion and flight control systems. These integrated modes were Adaptive Engine Control System, Performance Seeking Control, Self-Repairing Flight Control System, and the Propulsion-Only Flight Control System. They are discussed separately in the following sections.[1262]

Advanced Turboprop Project—Yesterday and Today

The third engine-related effort to design a more fuel-efficient powerplant during this era did not focus on another idea for a turbojet configuration. Instead, engineers chose to study the feasibility of reintroducing the turbine-driven propeller to commercial airliners. An initial run of the numbers suggested that such an advanced turboprop promised the largest reduction in fuel cost, perhaps by as much as 20 to 30 percent over turbofan engines powering aircraft of similar performance. This compared with the goal of a 5-percent increase in fuel efficiency for the Engine Component Improvement program and a 10- to 15-percent increase in fuel efficiency for the E Cubed program.[1316]

But the implementation of an advanced turboprop was one of NASA’s more challenging projects, both in terms of its engineering and in secur­ing public acceptance. For years, the flying public had been conditioned to see the fanjet engine as the epitome of aeronautical advancement. Now they had to be "retrained” to accept the notion that a turbopropel­ler engine could be every bit as advanced, indeed, even more advanced, than the conventional fanjet engine. The idea was to have a jet engine
firing as usual with air being compressed and ignited with fuel and the exhaust expelled after first passing through a turbine. But instead of the turbine spinning a shaft that turned a fan at the front of the engine, the turbines would be spinning a shaft, which fed into a gearbox that turned another shaft that spun a series of unusually shaped propeller blades exterior to the engine casing.[1317]

Begun in 1976, the project soon grew into one of the larger NASA aeronautics endeavors in the history of the Agency to that point, eventu­ally involving 4 NASA Field Centers, 15 university grants, and more than 40 industrial contracts.[1318]

Early on in the program, it was recognized that the major areas of concern were going to be the efficiency of the propeller at cruise speeds, noise both on the ground and within the passenger cabin, the effect of the engine on the aerodynamics of the aircraft, and maintenance costs. Meeting those challenges was helped once again by the computer-aided, three-dimensional design programs created by the Lewis Research Center. An original look for an aircraft propeller was devised that changed the blade's sweep, twist, and thickness, giving the propellers the look of a series of scimitar-shaped swords sticking out of the jet engine. After much development and testing, the NASA-led team eventually found a solution to the design challenge and came up with a propeller shape and engine configuration that was promising in terms of meeting the fuel-efficiency goals and reduced noise by as much as 65 decibels.[1319]

In fact, by 1987, the new design had been awarded a patent, and the NASA-industry group received the coveted Collier Trophy for creating a new fuel-efficient turboprop propulsion system. Unfortunately, two unexpected variables came into play that stymied efforts to put the design into production.[1320]

A General Electric design for an Unducted Fan engine is tested during the early 1980s. General Electric.

The first had to do with the public's resistance to the idea of flying in an airliner powered by propellers—even though the blades were still being turned by a jet engine. It didn't matter that a standard turbofan jet also derived most of its thrust from a series of blades—which did, in fact, look more like a fan than a series of propellers. Surveys showed passengers had safety concerns about an exposed blade letting go and sending shrapnel into the cabin, right where they were sitting. Many passengers also believed an airliner equipped with an advanced turboprop was not as modern or reliable as a pure turbojet engine. Jets were in; propellers were old fashioned. The second thing that happened was that world fuel prices dropped to the lower levels that preceded the oil embargo, removing the very rationale for developing the new turboprop in the first place. While fuel-efficient jet engines were still needed, the "extra mile" in fuel efficiency the advanced turboprop provided was no longer required. As a result, NASA and its partners shelved the technology and waited to use the archived files another day.[1321]

The story of the Advanced Turboprop project had one more twist to it. While NASA and its team of contractor engineers were working on their new turboprop design, engineers at GE were quietly working on their own design, initially without NASA’s knowledge. NASA’s engine was distinguished by the fact that it had one row of blades, while GE’s ver­sion featured two rows of counter-rotating blades. GE’s design, which became known as the Unducted Fan (UDF), was unveiled in 1983 and demonstrated at the 1985 Paris Air Show. A summary of the UDF’s tech­nical features is described in a GE-produced report about the program:

The engine system consists of a modified F404 gas generator engine and counterrotating propulsor system, mechanically decoupled, and aerodynamically integrated through a mixing frame structure. Utilization of the existing F404 engine minimized engine hardware, cost, and timing requirements and provided an engine within the desired thrust class. The power turbine provides direct conversion of the gas generator horsepower into propulsive thrust without the requirement for a gearbox and associated hardware. Counterrotation utilizes the full propulsive efficiency by recovering the exit swirl between blade stages and converting it into thrust.[1322]

Although shelved during the late 1980s, the Advanced Turboprop and UDF technologies and concepts are being explored again as part of programs such as the Ultra-High Bypass Turbofan and Pratt & Whitney's Geared Turbofan. Neither engine is routinely flying yet on commercial airliners, but both concepts promise further reductions in noise, increases in fuel efficiency, and lower operating costs for the airlines—goals the aerospace community is constantly working to improve upon.

Several concepts are under study for an Ultra-High Bypass Turbofan, including a modernized version of the Advanced Turboprop that takes advantage of lessons learned from GE’s UDF effort. NASA has teamed with GE to start testing an open-rotor engine. For the NASA tests at Glenn Research Center, GE will run two rows of counter-rotating fan blades, with 12 blades in the front row and 10 blades in the back row. The composite fan blades are one-fifth subscale in size. Tests in
a low-speed wind tunnel will simulate low-altitude aircraft speeds for acoustic evaluation, while tests in a high-speed wind tunnel will simulate high-altitude cruise conditions in order to evaluate blade efficiency and performance.[1323]

"The tests mark a new journey for GE and NASA in the world of open rotor technology. These tests will help to tell us how confident we are in meeting the technical challenges of an open-rotor architecture. It’s a journey driven by a need to sharply reduce fuel consumption in future aircraft,” David Joyce, president of GE Aviation, said in a statement.[1324]

In an Ultra-High Bypass Turbofan, the amount of air going through the engine casing but not through the core compressor and combustion chamber is at least 10 times greater than the air going through the core. Such engines promise to be quieter, but there can be tradeoffs. For example, an Ultra-High Bypass Engine might have to operate at a reduced thrust or have its fan spin slower. While the engine would meet all the goals, it would fly slower, thus making passengers endure longer trips.
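As a quick worked illustration of the bypass-ratio definition above (the mass flows are invented for illustration, not from any particular engine):

```python
# Bypass ratio: airflow that bypasses the core versus airflow through it.
bypass_flow = 1_100.0   # kg/s through the fan duct only (assumed)
core_flow = 100.0       # kg/s through compressor and combustor (assumed)
bpr = bypass_flow / core_flow
print(f"bypass ratio: {bpr:.0f}:1")   # 11:1 -> "ultra-high" bypass territory
```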

In the case of Pratt & Whitney’s Geared Turbofan engine, the idea is to have an Ultra-High Bypass Ratio engine, yet spin the fan slower (to reduce noise and improve engine efficiency) than the core compressor blades and turbines, all of which traditionally spin at the same speed, as they are connected to the same central shaft. Pratt & Whitney designed a gearbox into the engine to allow for the central shaft to turn at one speed yet turn a second shaft connected to the fan at another speed.[1325]
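A back-of-the-envelope Python sketch (all numbers assumed for illustration) shows why the gearbox matters: at a typical low-pressure-shaft speed, a large fan's tips would run far supersonic, while a roughly 3:1 reduction keeps them subsonic, and therefore quieter and more efficient:

```python
import math

# Fan tip speed with and without a reduction gearbox. The shaft speed, fan
# diameter, and 3:1 ratio are illustrative assumptions, not PW1000G data.

shaft_rpm = 8_000.0       # low-pressure shaft speed, rpm (assumed)
fan_diameter = 2.0        # m (assumed)
speed_of_sound = 340.0    # m/s, roughly sea-level conditions

def tip_speed(rpm, dia):
    """Blade tip speed in m/s for a rotor of diameter dia at the given rpm."""
    return math.pi * dia * rpm / 60.0

direct = tip_speed(shaft_rpm, fan_diameter)
geared = tip_speed(shaft_rpm / 3.0, fan_diameter)   # ~3:1 reduction gearbox
print(f"direct drive: tip Mach {direct / speed_of_sound:.2f}")  # ~2.5
print(f"geared drive: tip Mach {geared / speed_of_sound:.2f}")  # ~0.8
```

The same decoupling lets the turbine stages run faster than the fan, so both ends of the shaft can operate nearer their own best speeds.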

Alan H. Epstein, a Pratt & Whitney vice president, testifying before the House Subcommittee on Transportation and Infrastructure in 2007, explained the potential benefits the company’s Geared Turbofan might bring to the aviation industry:

The Geared Turbofan engine promises a new level of very low noise while offering the airlines superior economics and envi­ronmental performance. For aircraft of 70 to 150 passenger size, the Geared Turbofan engine reduces the fuel burned,
and thus the CO2 produced, by more than 12% compared to today’s aircraft, while reducing cumulative noise levels about 20dB below the current Stage 4 regulations. This noise level, which is about half the level of today’s engines, is the equiva­lent difference between standing near a garbage disposal run­ning and listening to the sound of my voice right now.[1326]

Pratt & Whitney's PW1000G engine incorporating a geared turbofan was selected to be used on the Bombardier CSeries and Mitsubishi Regional Jet airliners beginning in 2013. The engine was first flight-tested in 2008, using an Airbus A340-600 airliner out of Toulouse, France.[1327]