Facing the Heat Barrier: A History of Hypersonics

X-15: The Technology

Four companies competed for the main contract, covering design and construction of the X-15: Republic, Bell, Douglas, and North American. Each of them brought a substantial amount of hands-on experience with advanced aircraft. Republic, for example, had Alexander Kartveli as its chief designer. He was a highly imaginative and talented man whose XF-105 was nearly ready for first flight and whose XF-103 was in development. Republic had also built a rocket plane, the XF-91. This was a jet fighter that incorporated the rocket engine of the X-1 for an extra boost in combat. It did not go into production, but it flew in flight tests.

Still, Republic placed fourth in the competition. Its concept rated “unsatisfac­tory” as a craft for hypersonic research, for it had a thin outer fuselage skin that appeared likely to buckle when hot. The overall proposal rated no better than aver­age in a number of important areas, while achieving low scores in Propulsion System and Tanks, Engine Installation, Pilot’s Instruments, Auxiliary Power, and Landing Gear. In addition, the company itself was judged as no more than “marginal” in the key areas of Technical Qualifications, Management, and Resources. The latter included availability of in-house facilities and of an engineering staff not committed to other projects.53

Bell Aircraft, another contender, was the mother of research airplanes, having built the X-l series as well as the X-2. This firm therefore had direct experience both with advanced heat-resistant metals and with the practical issues of powering piloted aircraft using liquid-fuel rocket engines. It even had an in-house group that was building such engines. Bell also was the home of the designers Robert Woods and Walter Dornberger, with the latter having presided over the V-2.

Dornberger’s Bomi concept already was introducing the highly useful concept of hot structures. These used temperature-resistant alloys such as stainless steel. Wings might be covered with numerous small and very hot metal panels, resembling shin­gles, that would radiate heat away from the aircraft. Overheating would be particu­larly severe along the leading edges of wings; these could be water-cooled. Insulation could protect an internal structure that would withstand the stresses and forces of flight; active cooling could protect a pilot’s cockpit and instrument compartment. Becker described these approaches as “the first hypersonic aircraft hot structures concepts to be developed in realistic meaningful detail.”54

Even so, Bell ranked third. Historian Dennis Jenkins writes that within the proposal, “almost every innovation they proposed was hedged in such a manner as to make the reader doubt that it would work. The proposal itself seemed rather poorly organized and was internally inconsistent (i.e., weights and other figures frequently differed between sections).”55 Yet the difficulties ran deeper and centered on the specifics of its proposed hot structure.

Bell adopted the insulated-structure approach, with the primary structure being of aluminum, the most familiar of aircraft materials and the best understood. Cor­rugated panels of Inconel X, mounted atop the aluminum, were to provide insula­tion. Freely-suspended panels of this alloy, contracting and expanding with ease, were to serve as the outer skin.

Yet this concept was quite unsuitable for the X-15, both on its technical merits and as a tool for research. A major goal of the program was to study aircraft struc­tures at elevated temperatures, and this would not be possible with a primary struc­ture of cool aluminum. There were also more specific deficiencies, as when Bell’s thermal analysis assumed that the expanding panels of the outer shell would prevent leakage of hot air from the boundary layer. However, the evaluation made the flat statement, “leakage is highly probable.” Aluminum might not withstand the result­ing heating, with the loss of even one such panel leading perhaps to destructive heating. Indeed, the Bell insulated structure appeared so sensitive that it could be trusted to successfully complete only three of 13 reference flights.56

Another contender, Douglas Aircraft, had shared honors with Bell in building previous experimental aircraft. Its background included the X-3 and the Skyrocket, which meant that Douglas also had people who knew how to integrate a liquid rocket engine with an airplane. This company’s concept came in second.

Its design avoided reliance on insulated structures, calling instead for use of a heat sink. The material was to be a lightweight magnesium alloy that had excellent heat capacity. Indeed, its properties were so favorable that it would reach temperatures of only 600°F, while an Inconel X heat-sink airplane would go to 1,200°F.

The North American X-15. (NASA)

Again, though, this concept missed the point. Managers wanted a vehicle that could cope successfully with temperatures of 1,200°F, to lay groundwork for operational fighters that could fly well beyond Mach 3. In addition, the concept had virtually no margin for temperature overshoots. Its design limit of 600°F was right on the edge of a regime in which its alloy lost strength rapidly. At 680°F, its strength could fall off by 90 percent. With magnesium being flammable, there was danger of fire within the primary structure itself, with the evaluation noting that “only a small area raised to the ignition temperature would be sufficient to destroy the aircraft.”57

Then there was North American, the home of Navaho. That missile had not flown, but its detailed design was largely complete and specified titanium in hot areas. This meant that the company knew something about using advanced metals. The firm also had a particularly strong rocket-engine group, which split off during 1955 to form a new corporate division called Rocketdyne. Indeed, engines built by that division had already been selected for Atlas.58

North American became the winner. It paralleled the thinking at Douglas by independently proposing its own heat-sink structure, with the material being Inco­nel X. This concept showed close similarities to that of Becker’s feasibility study a year earlier. Still, this was not to say that the deck was stacked in favor of Beck­er’s approach. He and his colleagues had pursued conceptual design in a highly impromptu fashion. The preliminary-design groups within industry were far more experienced, and it had appeared entirely possible that these experts, applying their seasoned judgment, might come up with better ideas. This did not happen. Indeed, the Bell and Douglas concepts failed even to meet an acceptable definition of the new research airplane. By contrast, the winning concept from North American amounted to a particularly searching affirmation of the work of Becker’s group.59

How had Bell and Douglas missed the boat? The government had set forth per­formance requirements, which these companies both had met. In the words of the North American proposal, “the specification performance can be obtained with very moderate structural temperatures.” However, “the airplane has been designed to tolerate much more severe heating in order to provide a practical temperature band within which exploration can be conducted.”

In Jenkins’s words, “the Bell proposal…was terrible—you walked away not entirely sure that Bell had committed themselves to the project. The exact opposite was true of the North American proposal. From the opening page you knew that North American understood what was trying to be accomplished with the X-15 program and had attempted to design an airplane that would help accomplish the task—not just meet the performance specifications (which did not fully describe the intent of the program).”60 That intent was to build an aircraft that could accomplish research at 1,200°F and not merely meet speed and altitude goals.

The overall process of proposal evaluation cast the competing concepts in sharp relief, heightening deficiencies and emphasizing sources of potential difficulty. These proposals also received numerical scores, while another basis for comparison involved estimated program costs:

Contractor            Score           Estimated cost
North American        81.5 percent    $56.1 million
Douglas Aircraft      80.1            36.4
Bell Aircraft         75.5            36.3
Republic Aviation     72.2            47.0

North American’s concept thus was far from perfect, while Republic’s represented a serious effort. In addition, it was clear that the Air Force—which was to foot most of the bill—was willing to pay for what it would get. The X-15 program thus showed budgetary integrity, with the pertinent agencies avoiding the temptation to do it on the cheap.61

On 30 September 1955, letters went out to North American as well as to the unsuccessful bidders, advising them of the outcome of the competition. With this, engineers now faced the challenge of building and flying the X-15 as a practical exercise in hypersonic technology. Accordingly, it broke new ground in such areas as metallurgy and fabrication, onboard instruments, reaction controls, pilot training, the pilot’s pressure suit, and flight simulation.62

Inconel X, a nickel alloy, showed good ductility when fully annealed and had some formability. When severely formed or shaped, though, it showed work-hardening, which made the metal brittle and prone to crack. Workers in the shop addressed this problem by forming some parts in stages, annealing the workpieces by heating them between each stage. Inconel X also was viewed as a weldable alloy, but some welds tended to crack, and this problem resisted solution for some time. The solution lay in making welds that were thicker than the parent material. After being ground flat, their surfaces were peened—bombarded with spherical shot—and rolled flush with the parent metal. After annealing, the welds often showed better crack resistance than the surrounding Inconel X.

A titanium alloy was specified for the internal structure of the wings. It proved difficult to weld, for it became brittle by reacting with oxygen and nitrogen in the air. It therefore was necessary to place welding fixtures within enclosures that could be purged with an inert gas such as helium and to use an oxygen-detecting device to determine the presence of air. With these precautions, it indeed proved possible to weld titanium while avoiding embrittlement.63

Greases and lubricants posed their own problems. Within the X-15, journal and antifriction bearings received some protection from heat and faced operating temperatures no higher than 600°F. This nevertheless was considerably hotter than engineers were accustomed to accommodating. At North American, candidate lubricants underwent evaluation by direct tests in heated bearings. Good greases protected bearing shafts for 20,000 test cycles and more. Poor greases gave rise to severe wearing of shafts after as few as 350 cycles.64

In contrast to conventional aircraft, the X-15 was to fly out of the sensible atmosphere and then re-enter, with its nose high. It also was prone to yaw while in near-vacuum. Hence, it needed a specialized instrument to determine angles of attack and of sideslip. This took form as the “Q-ball,” built by the Nortronics Division of Northrop Aircraft. It fitted into the tip of the X-15’s nose, giving it the appearance of a greatly enlarged tip of a ballpoint pen.

The ball itself was cooled with liquid nitrogen to withstand air temperatures as high as 3,500°F. Orifices set within the ball, along yaw and pitch planes, measured differential pressures. A servomechanism rotated the ball to equalize these pressures by pointing the ball’s forward tip directly into the onrushing airflow. With the direction of this flow thus established, the pilot could null out any sideslip. He also could raise the nose to a desired angle of attack. “The Q-ball is a go-no go item,” the test pilot Joseph Walker told Time magazine in 1961. “Only if she checks okay do we go.”65
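
The sketch below gives the nulling principle in concrete form: a servo rotates the ball until the differential pressure across paired orifices is driven to zero, at which point the ball's angle gives the flow direction. It is a minimal sketch assuming a simple linear pressure model; the names, gains, and numbers are illustrative, not Nortronics design values.

```python
# A minimal sketch of the Q-ball nulling principle, with an assumed linear
# pressure model and servo law; all values here are illustrative.

def differential_pressure(flow_angle_deg, ball_angle_deg):
    """Pressure difference across paired orifices; zero when the ball points into the flow."""
    return 0.8 * (flow_angle_deg - ball_angle_deg)   # arbitrary sensitivity

def null_seek(flow_angle_deg, ball_angle_deg=0.0, servo_gain=2.0, dt=0.01, steps=500):
    """Rotate the ball until the differential pressure is driven to zero."""
    for _ in range(steps):
        dp = differential_pressure(flow_angle_deg, ball_angle_deg)
        ball_angle_deg += servo_gain * dp * dt       # turn toward the flow
    return ball_angle_deg                            # ~ flow direction to display to the pilot

print(null_seek(flow_angle_deg=5.0))   # converges near 5 degrees of sideslip
```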

To steer the aircraft while in flight, the X-15 mounted aerodynamic controls. These retained effectiveness at altitudes well below 100,000 feet. However, they lost effectiveness between 90,000 and 100,000 feet. The X-15 therefore incorporated reaction controls, which were small thrusters fueled with hydrogen peroxide. Nose-mounted units controlled pitch and yaw. Other units, set near the wingtips, gave control of roll.

Attitude control of a hypersonic airplane using aerodynamic controls and reaction controls. (U.S. Air Force)

No other research airplane had ever flown with such thrusters, although the X-1B conducted early preliminary experiments and the X-2 came close to needing them in 1956. During a flight in September of that year, the test pilot Iven Kincheloe took it to 126,200 feet. At that altitude, its aerodynamic controls were useless. Kincheloe flew a ballistic arc, experiencing near-weightlessness for close to a minute. His airplane banked to the left, but he did not try to counter this movement, for he knew that his X-2 could easily go into a deadly tumble.66

In developing reaction controls, an important topic for study involved determin­ing the airplane handling qualities that pilots preferred. Initial investigations used an analog computer as a flight simulator. The “airplane” was disturbed slightly; a man used a joystick to null out the disturbance, achieving zero roll, pitch, and yaw. These experiments showed that pilots wanted more control authority for roll than for pitch or yaw. For the latter, angular accelerations of 2.5 degrees per second squared were acceptable. For roll, the preferred control effectiveness was two to four times greater.
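
A minimal single-axis sketch of this kind of experiment follows, assuming a simple proportional "pilot" model. Only the 2.5 degrees per second squared authority figure comes from the text; the disturbance, pilot gain, and time step are illustrative assumptions.

```python
# Single-axis sketch of the disturbance-nulling experiment with a proportional
# "pilot"; thruster acceleration is capped at the available control authority.

def settle_time(authority_deg_s2, disturbance_deg_s=3.0, pilot_gain=2.0, dt=0.02):
    """Time for the pilot model to null an initial angular-rate disturbance."""
    rate, t = disturbance_deg_s, 0.0
    while abs(rate) > 0.05:
        cmd = -pilot_gain * rate                                    # pilot opposes the rate...
        cmd = max(-authority_deg_s2, min(authority_deg_s2, cmd))    # ...within thruster authority
        rate += cmd * dt
        t += dt
    return t

print(settle_time(2.5))        # pitch or yaw authority from the text
print(settle_time(2.5 * 3))    # roughly the roll authority pilots preferred
```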

Flight test came next. The X-2 would have served splendidly for this purpose, but only two had been built, with both being lost in accidents. At NACA’s High-Speed Flight Station, investigators fell back on the X-1B, which was less capable but still useful. In preparation for its flights with reaction controls, the engineers built a simulator called the Iron Cross, which matched the dimensions and inertial characteristics of this research plane. A pilot, sitting well forward along the central arm, used a side-mounted control stick to actuate thrusters that used compressed
nitrogen. This simulator was mounted on a universal joint, which allowed it to move freely in yaw, pitch, and roll.

Reaction controls went into the X-1B late in 1957. The test pilot Neil Arm­strong, who walked on the Moon 12 years later, made three flights in this research plane before it was grounded in mid-1958 due to cracks in its fuel tank. Its peak altitude during these three flights was 55,000 feet, where its aerodynamic controls readily provided backup. The reaction controls then went into an F-104, which reached 80,000 feet and went on to see much use in training X-15 pilots. When the X-15 was in flight, these pilots had to transition from aerodynamic controls to reaction controls and back again. The complete system therefore provided overlap. It began blending in the reaction controls at approximately 85,000 feet, with most pilots switching to reaction controls exclusively by 100,000 feet.67

Since the war, with aircraft increasing in both speed and size, it had become increasingly impractical for a pilot to exert the physical strength to operate a plane’s ailerons and elevators merely by moving the control stick in the cockpit. Hydraulically-boosted controls thus were in the forefront, resembling power steering in a car. The X-15 used such hydraulics, which greatly eased the workload on a test pilot’s muscles. These hydraulic systems also opened the way for stability augmentation systems of increasing sophistication.

Stability augmentation represented a new refinement of the autopilot. Conven­tional autopilots used gyroscopes to detect deviations from a plane’s straight and level course. These instruments then moved an airplane’s controls so as to null these deviations to zero. For high-performance jet fighters, the next step was stability augmentation. Such aircraft often were unstable in flight, tending to yaw or roll; indeed, designers sometimes enhanced this instability to make them more maneu­verable. Still, it was quite wearying for a pilot to have to cope with this. A stability augmentation system made life in the cockpit much easier.

Such a system used rate gyros, which detected rates of movement in pitch, roll, and yaw at so many degrees per second. The instrument then responded to these rates, moving the controls somewhat like before to achieve a null. Each axis of this control had “gain,” defining the proportion or ratio between a sensed rate of angu­lar motion and an appropriate deflection of ailerons or other controls. Fixed-gain systems worked well; there also were variable-gain arrangements, with the pilot set­ting the value of gain within the cockpit. This addressed the fact that the airplane might need more gain in thin air at high altitude, to deflect these surfaces more strongly.68
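
The sketch below illustrates the idea of gain in such a system: a rate gyro senses angular rate, and the damper deflects a control surface in proportion, with a larger gain giving a stronger deflection for the same sensed rate, as a pilot might select in thin air at altitude. The rates, gains, and deflection limit are illustrative, not X-15 values.

```python
# A sketch of rate feedback with "gain," assuming a simple proportional damper.

def damper_deflection(sensed_rate_deg_s, gain, max_deflection_deg=10.0):
    """Deflect a control surface in proportion to the sensed rate, to oppose it."""
    cmd = -gain * sensed_rate_deg_s
    return max(-max_deflection_deg, min(max_deflection_deg, cmd))

print(damper_deflection(4.0, gain=0.5))   # fixed gain
print(damper_deflection(4.0, gain=1.5))   # higher gain, as a pilot might set in thin air
```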

The X-15 program built three of these aircraft. The first two used a stability augmentation system that incorporated variable gain, although in practice these aircraft flew well with constant values of gain, set in flight.69 The third replaced it with a more advanced arrangement that incorporated something new: adaptive gain. This was a variable gain, which changed automatically in response to flight conditions. Within the Air Force, the Flight Control Laboratory at WADC had laid groundwork with a program dating to 1955. Adaptive-gain controls flew aboard F-94 and F-101 test aircraft. The X-15 system, the Minneapolis Honeywell MH-96, made its first flight in December 1961.70

How did it work? When a pilot moved the control stick, as when changing the pitch, the existing value of gain in the pitch channel caused the aircraft to respond at a certain rate, measured by a rate gyro. The system held a stored value of the optimum pitch rate, which reflected preferred handling qualities. The adaptive-gain control compared the measured and desired rates and used the difference to deter­mine a new value for the gain. Responding rapidly, this system enabled the airplane to maintain nearly constant control characteristics over the entire flight envelope.71

The MH-96 made it possible to introduce the X-15’s blended aerodynamic and reaction controls on the same control stick. This blending occurred automatically in response to the changing gains. When the gains in all three channels—roll, pitch, and yaw—reached 80 percent of maximum, thereby indicating an imminent loss of effectiveness in the aerodynamic controls, the system switched to reaction controls. During re-entry, with the airplane entering the sensible atmosphere, the system returned to aerodynamic control when all the gains dropped to 60 percent.72
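
A rough sketch of the logic described in the last two paragraphs appears below: each channel's gain is nudged toward the value that makes the measured rate track the stored model rate, and the switch between aerodynamic and reaction controls keys off how close the gains sit to their maximum. Only the 80 and 60 percent thresholds come from the text; the update law and constants are illustrative assumptions, not the actual Honeywell mechanization.

```python
# Illustrative sketch of adaptive gain plus control blending; not the MH-96 design.

MAX_GAIN = 1.0

def adapt_gain(gain, measured_rate, model_rate, adapt_speed=0.2):
    """Raise the gain when the response is more sluggish than the stored model, lower it when sharper."""
    error = abs(model_rate) - abs(measured_rate)
    return min(MAX_GAIN, max(0.0, gain + adapt_speed * error))

def on_reaction_controls(gains, currently_on):
    """Switch to thrusters when all gains near maximum; switch back when they all drop."""
    fractions = [g / MAX_GAIN for g in gains.values()]
    if not currently_on and all(f >= 0.8 for f in fractions):
        return True     # aerodynamic surfaces losing effectiveness
    if currently_on and all(f <= 0.6 for f in fractions):
        return False    # back in the sensible atmosphere
    return currently_on

gains = {"roll": 0.75, "pitch": 0.85, "yaw": 0.90}
gains["pitch"] = adapt_gain(gains["pitch"], measured_rate=1.0, model_rate=1.5)
print(gains["pitch"], on_reaction_controls(gains, currently_on=False))
```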

The X-15 flight-control system thus stood three steps removed from the conven­tional stick-and-cable installations of World War II. It used hydraulically-boosted controls; it incorporated automatic stability augmentation; and with the MH-96, it introduced adaptive gain. Fly-by-wire systems lay ahead and represented the next steps, with such systems being built both in analog and digital versions.

Analog fly-by-wire systems exist within the F-16A and other aircraft. A digital system, as in the space shuttle, uses a computer that receives data both from the pilot and from the outside world. The pilot provides input by moving a stick or sidearm controller. These movements do not directly actuate the ailerons or rudder, as in days of old. Instead, they generate signals that tell a computer the nature of the desired maneuver. The computer then calculates a gain by applying control laws, which take account of the plane’s speed and altitude, as measured by onboard instruments. The computer then sends commands down a wire to hydraulic actuators co-mounted with the controls to move or deflect these surfaces so as to comply with the pilot’s wishes.73
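
The flow just described can be caricatured in a few lines, as below: the computer reads the stick, applies a control law whose gain is scheduled on measured speed and altitude, and sends a bounded surface command to the actuators. The scheduling formula and numbers are invented for illustration and have no connection to the shuttle's or the F-16's actual control laws.

```python
# A caricature of the digital fly-by-wire flow with an invented gain schedule.

def scheduled_gain(airspeed_kt, altitude_ft):
    """More gain in thin air at altitude, less at high dynamic pressure."""
    return 0.6 * (1.0 + altitude_ft / 100_000.0) * (300.0 / max(airspeed_kt, 150.0))

def surface_command(stick_fraction, airspeed_kt, altitude_ft, max_deflection_deg=20.0):
    """Turn a stick input into a bounded surface command sent to the actuators."""
    cmd = scheduled_gain(airspeed_kt, altitude_ft) * stick_fraction * max_deflection_deg
    return max(-max_deflection_deg, min(max_deflection_deg, cmd))

# The same stick input produces different deflections as flight conditions change.
print(surface_command(0.5, airspeed_kt=350.0, altitude_ft=20_000.0))
print(surface_command(0.5, airspeed_kt=200.0, altitude_ft=80_000.0))
```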

The MH-96 fell short of such arrangements in two respects. It was analog, not digital, and it was a control system, not a computer. Like other systems executing automatic control, the MH-96 could measure an observed quantity such as pitch rate, compare it to a desired value, and drive the difference to zero. But the MH-96 was wholly incapable of implementing a control law, programmed as an algebraic expression that required values of airspeed and altitude. Hence, while the X-15 with MH-96 stood three steps removed from the fighters of the recent war, it was two steps removed from the digital fly-by-wire control of the shuttle.

The X-15 also used flight simulators. These served both for pilot training and for development of onboard systems, including the reaction controls and the MH-96. The most important flight simulator was built by North American. It replicated the X-15 cockpit and included actual hydraulic and control-system hardware. Three analog computers implemented equations of motion that governed translation and rotation of the X-15 about all three axes, transforming pilot inputs into instrument displays.74

Flight simulators dated to the war. The famous Link Trainer introduced over half a million neophytes to their cockpits. The firm of Link Aviation added analog computers in 1949, within a trainer that simulated flight in a jet fighter.75 In 1955, when the X-15 program began, it was not at all customary to use flight simulators to support aircraft design and development. But program managers turned to such simulators because they offered effective means to study new issues in cockpit displays, control systems, and aircraft handling qualities.

Flight simulation showed its value quite early. An initial X-15 design proved excessively unstable and difficult to control. The cure lay in stability augmentation. A 1956 paper stated that this had “heretofore been considered somewhat of a luxury for high-speed aircraft,” but now “has been demonstrated as almost a necessity,” in all three axes, to ensure “consistent and successful entries” into the atmosphere.76

The North American simulator, which was transferred to the NACA Flight Research Center, became critical in training X-15 pilots as they prepared to execute specific planned flights. A particular mission might take little more than 10 minutes, from ignition of the main engine to touchdown on the lakebed, but a test pilot could easily spend 10 hours making practice runs in this facility. Training began with repeated trials of the normal flight profile, with the pilot in the simulator cockpit and a ground controller close at hand. The pilot was welcome to recommend changes, which often went into the flight plan. Next came rehearsals of off-design missions: too much thrust from the main engine, too high a pitch angle when leaving the stratosphere.

Much time was spent practicing for emergencies. The X-15 had an inertial reference unit that used analog circuitry to display attitude, altitude, velocity, and rate of climb. Pilots dealt with simulated failures in this unit, attempting to complete the normal mission or, at least, execute a safe return. Similar exercises addressed failures in the stability augmentation system. When the flight plan raised issues of possible flight instability, tests in the simulator used highly pessimistic assumptions concerning stability of the vehicle. Other simulated missions introduced in-flight failures of the radio or Q-ball. Premature engine shutdowns imposed a requirement for safe landing on an alternate lakebed, which was available for emergency use.77

The simulations indeed were realistic in their cockpit displays, but they left out an essential feature: the g-loads, produced both by rocket thrust and by deceleration during re-entry. In addition, a failure of the stability augmentation system, during re-entry, could allow the airplane to oscillate in pitch or yaw. This would change its drag characteristics, imposing a substantial cyclical force.

To address such issues, investigators installed a flight simulator within the gon­dola of a centrifuge at the Naval Air Development Center in Johnsville, Pennsylva­nia. The gondola could rotate on two axes while the centrifuge as a whole was turn­ing. It not only produced g-forces, but its g-forces increased during the simulated rocket burn. The centrifuge imposed such forces anew during reentry, while adding a cyclical component to give the effect of a yaw or pitch oscillation.78

Not all test pilots rode the centrifuge. William “Pete” Knight, who stood among the best, was one who did not. His training, coupled with his personal coolness and skill, enabled him to cope even with an extreme emergency. In 1967, during a planned flight to 250,000 feet, an X-15 experienced a complete electrical failure while climbing through 107,000 feet at Mach 4. This failure brought the shutdown of both auxiliary power units and hence of both hydraulic systems. Knight, the pilot, succeeded in restarting one of these units, which restored hydraulic power. He still had zero electrical power, but with his hydraulics, he now had both his aerodynamic and reaction controls. He rode his plane to a peak of 173,000 feet, re-entered the atmosphere, made a 180-degree turn, and glided to a safe landing on Mud Lake near Tonopah, Nevada.79

During such flights, as well as during some exercises in the centrifuge, pilots wore a pressure suit. Earlier models had already been good enough to allow the test pilot Marion Carl to reach 83,235 feet in the Douglas Skyrocket in 1953. Still, some of those versions left much to be desired. Time magazine, in 1952, discussed an Air Force model that allowed a pilot to breathe, but “with difficulty. His hands, not fully pressurized, swell up with blue venous blood. His throat is another trouble spot; the medicos have not yet learned how to pressurize a throat without strangling its owner.”80

The David G. Clark Company, a leading supplier of pressure suits for Air Force flight crews, developed a greatly improved model for the X-15. Such suits tended to become rigid and hard to bend when inflated. This is also true of a child’s long balloon, with an internal pressure that only slightly exceeds that of the atmosphere. The X-15 suit was to hold five pounds per square inch of pressure, or 720 pounds per square foot. The X-15 cockpit had its own counterbalancing pressure, but it could (and did) depressurize at high altitude. In such an event, the suit was to protect the test pilot rather than leave him immobile.

The solution used an innovative fabric that contracted in circumference while it stretched in length. With proper attention to the balance between these two effects, the suit maintained a constant volume when pressurized, enhancing a pilot’s freedom of movement. Gloves and boots were detachable and zipped to this fabric. The helmet was joined to the suit with a freely-swiveling ring that gave full mobility to the head. Oxygen flowed into the helmet; exhalant passed through valves in a neck seal and pressurized the suit. Becker later described it as “the first practical full-pressure suit for pilot protection in space.”81

Thus accoutered, protected for flight in near-vacuum, X-15 test pilots rode their rockets as they approached the edge of space and challenged the hypersonic frontier. They returned with results galore for project scientists—and for the nation.

The Fading, the Comeback

During the 1960s and 1970s, work in re-entry went from strength to strength. The same was certainly not true of scramjets, which reached a peak of activity in the Aerospaceplane era and then quickly faded. Partly it was their sheer difficulty, along with an appreciation that whatever scramjets might do tomorrow, rockets were already doing today. Yet the issues went deeper.

The 1950s saw the advent of antiaircraft missiles. Until then, the history of air power had been one of faster speeds and higher altitudes. At a stroke, though, it became clear that missiles held the advantage. A hot fighter plane, literally hot from aerodynamic heating, now was no longer a world-class dogfighter; instead it was a target for a heat-seeking missile.

When aircraft no longer could outrace defenders, they ceased to aim at speed records. They still needed speed, but not beyond a point at which this requirement would compromise other fighting qualities. Instead, aircraft were developed with an enhanced ability to fly low, where they could lose themselves in ground clutter, and became stealthy. In 1952, late in the dogfight era, Clarence “Kelly” Johnson designed the F-104 as the “missile with a man in it,” the ultimate interceptor. No one did this again, not after the real missiles came in.

This was bad news for ramjets. The ramjet had come to the fore around 1950, in projects such as Navaho, Bomarc, and the XF-103, because it offered Mach 3 at a time when turbojets could barely reach Mach 1. But Mach 3, when actually achieved in craft such as the XB-70 and SR-71, proved to be a highly specialized achievement that had little to do with practical air power. No one ever sent an SR-71 to conduct close air support at subsonic speed, while the XB-70 gave way to its predecessor, the B-52, because the latter could fly low whereas the XB-70 could not.

Ramjets also faltered on their merits. The ramjet was one of two new airbreathers that came forth after the war, with the other being the turbojet. Inevitably this set up a Darwinian competition in which one was likely to render the other extinct. Ramjets from the start were vulnerable, for while they had the advantage of speed, they needed an auxiliary boost from a rocket or turbojet. Nor was it small; the Navaho booster was fully as large as the winged missile itself.

The problem of compressor stall limited turbojet performance for a time. But from 1950 onward, several innovations brought means of dealing with it. They led to speedsters such as the F-104 and F-105, operational aircraft that topped Mach 2, along with the B-58 which also did this. The SR-71, in turn, exceeded Mach 3. This meant that there was no further demand for ramjets, which were not selected for new aircraft.

The ramjet thus died not only because its market was lost to advanced turbojets, but because the advent of missiles made it clear that there no longer was a demand for really fast aircraft. This, in turn, was bad news for scramjets. The scramjet was an advanced ramjet, likely to enter the aerospace mainstream only while ramjets remained there. The decline of the ramjet trade meant that there was no industry that might build scramjets, no powerful advocates that might press for them.

The scramjet still held the prospective advantage of being able to fly to orbit as a single stage. With Aerospaceplane, the Air Force took a long look at whether this was plausible, and the answer was no, at least not soon. With this, the scramjet lost both its rationale in the continuing pursuit of high speed and the prospect of an alternate mission—ascent to orbit—that might allow it to bypass this difficulty.

In its heyday the scramjet had stood on the threshold of mainstream research and development, with significant projects under way at General Electric and United Air­craft Research Laboratories, which was affiliated with Pratt & Whitney. As scram­jets faded, though, even General Applied Science Laboratories (GASL), a scramjet center that had been founded by Antonio Ferri himself, had to find other activities. For a time the only complete scramjet lab in business was at NASA-Langley.

And then—lightning struck. President Ronald Reagan announced the Strategic Defense Initiative (SDI), which brought the prospect of a massive new demand for access to space. The Air Force already was turning away from the space shuttle, while General Lawrence Skantze, head of the Air Force Systems Command, was strongly interested in alternatives. He had no background in scramjets, but he embraced the concept as his own. The result was the National Aerospace Plane (NASP) effort, which aimed at airplane-like flight to orbit.

In time SDI faded as well, while lessons learned by researchers showed that NASP offered no easy path to space flight. NASP faded in turn and with it went hopes for a new day for hypersonics. Final performance estimates for the prime NASP vehicle, the X-30, were not terribly far removed from the early and optimistic estimates that had made the project appear feasible. Still, the X-30 design was so sensitive that even modest initial errors could drive its size and cost beyond what the Pentagon was willing to accept.

X-15: Some Results

During the early 1960s, when the nation was agog over the Mercury astronauts, the X-15 pointed to a future in which piloted spaceplanes might fly routinely to orbit. The men of Mercury went water-skiing with Jackie Kennedy, but within their orbiting capsules, they did relatively little. Their flights were under automatic con­trol, which left them as passengers along for the ride. Even a monkey could do it. Indeed, a chimpanzee named Ham rode a Redstone rocket on a suborbital flight in January 1961, three months before Alan Shepard repeated it before the gaze of an astonished world. Later that year another chimp, Enos, orbited the Earth and returned safely. The much-lionized John Glenn did this only later.82

In the X-15, by contrast, only people entered the cockpit. A pilot fired the rocket, controlled its thrust, and set the angle of climb. He left the atmosphere, soared high over the top of the trajectory, and then used reaction controls to set up his re-entry. All the while, if anything went wrong, he had to cope with it on the spot and work to save himself and the plane. He maneuvered through re-entry, pulled out of his dive, and began to glide. Then, while Mercury capsules were using parachutes to splash clumsily near an aircraft carrier, the X-15 pilot eased his craft onto Rogers Dry Lake like a fighter.

All aircraft depend on propulsion for their performance, and the X-15’s engine installations allow the analyst to divide its career into three eras. It had been designed from the start to use the so-called Big Engine, with 57,000 pounds of thrust, but delays in its development brought a decision to equip it with two XLR11 rocket engines, which had served earlier in the X-1 series and the Douglas Skyrocket. Together they gave 16,000 pounds of thrust.

Flights with the XLR11s ran from June 1959 to February 1961. The best speed and altitude marks were Mach 3.50 in February 1961 and 136,500 feet in August 1961. These closely matched the corresponding numbers for the X-2 during 1956: Mach 3.196, 126,200 feet.83 The X-2 program had been ill-starred—it had had two operational aircraft, both of which were destroyed in accidents. Indeed, these research aircraft made only 20 flights before the program ended, prematurely, with the loss of the second flight vehicle. The X-15 with XLR11s thus amounted to X-2s that had been brought back from the dead, and that belatedly completed their intended flight program.

The Big Engine, the Reaction Motors XLR99, went into service in November 1960. It launched a program of carefully measured steps that brought the fall of one Mach number after another. A month after the last flight with XLR11s, in March 1961, the pilot Robert White took the X-15 past Mach 4. This was the first time a piloted aircraft had flown that fast, as White raised the speed mark by nearly a full Mach. Mach 5 fell, also to Robert White, four months later. In November 1961 White did it again, as he reached Mach 6.04. Once flights began with the Big Engine, it took only 15 of them to reach this mark and to double the maximum Mach that had been reached with the X-2.

Altitude flights were also on the agenda. The X-15 climbed to 246,700 feet in April 1962, matched this mark two months later, and then soared to 314,750 feet in July 1962. Again White was in the cockpit, and the Federation Aeronautique Inter­nationale, which keeps the world’s aviation records, certified this one as the absolute altitude record for its class. A year later, without benefit of the FAI, the pilot Joseph Walker reached 354,200 feet. He thus topped 100 kilometers, a nice round number that put him into space without question or cavil.84

The third era in the X-15’s history took shape as an extension of the second one. In November 1962, with this airplane’s capabilities largely demonstrated, a serious landing accident caused major damage and led to an extensive rebuild. The new aircraft, designated X-15A-2, retained the Big Engine but sported external tankage for a longer duration of engine burn. It also took on an ablative coating for enhanced thermal protection.

It showed anew the need for care in flight test. In mid-1962, and for that matter in 1966, the X-15’s best speed stood at 4,104 miles per hour, or Mach 5.92. (Mach number depends on both vehicle speed and air temperature. The flight to Mach 6.04 reached 4,093 miles per hour.) Late in 1966, flying the X-15A-2 without the ablator, Pete Knight raised this to Mach 6.33. Engineers then applied the ablator and mounted a dummy engine to the lower fin, with Knight taking this craft to Mach 4.94 in August 1967. Then in October he tried for more.
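
The parenthetical point is easy to make quantitative: Mach number is true airspeed divided by the local speed of sound, which varies with the square root of air temperature. The short computation below shows how slightly colder air yields a higher Mach number at a lower speed; the temperatures used are illustrative, not the measured conditions of these flights.

```python
# Mach number from true airspeed and air temperature; temperatures are illustrative.

from math import sqrt

GAMMA, R = 1.4, 287.05            # ratio of specific heats; gas constant, J/(kg K)
MPH_TO_MS = 0.44704

def mach(speed_mph, temp_kelvin):
    return speed_mph * MPH_TO_MS / sqrt(GAMMA * R * temp_kelvin)

print(mach(4104, 240.0))   # about 5.9 in warmer air
print(mach(4093, 228.0))   # about 6.0 in colder air, despite the lower speed
```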

But the X-15A-2, with both ablator and dummy engine, now was truly a new configuration. Further, it had only been certified with these additions in the flight to Mach 4.94 and could not be trusted at higher Mach. Knight took the craft to Mach 6.72, a jump of nearly two Mach numbers, and this proved to be too much. The ablator, when it came back, was charred and pitted so severely that it could not be restored for another flight. Worse, shock-impingement heating burned the engine off its pylon and seared a hole in the lower fin, disabling the propellant ejection system and threatening the craft’s vital hydraulics. No one ever tried to fly faster in the X-15.85

X-15 with dummy Hypersonic Research Engine mounted to the lower fin. (NASA)

It soon retired with honor, for in close to 200 powered flights, it had operated as a true instrument of hypersonic research. Its flight log showed nearly nine hours above Mach 3, close to six hours above Mach 4, and 87 minutes above Mach 5.86 It served as a flying wind tunnel and made an important contribution by yielding data that made it possible to critique the findings of experiments performed in ground-based tunnels. Tunnel test sections were small, which led to concern that their results might not be reliable when applied to full-size hypersonic aircraft. Such discrepancies appeared particularly plausible because wind tunnels could not reproduce the extreme temperatures of hypersonic flight.

The X-15 set many of these questions to rest. In Becker’s words, “virtually all of the flight pressures and forces were found to be in excellent agreement with the low-temperature wind-tunnel predictions.”87 In addition to lift and drag, this good agreement extended as well to wind-tunnel values of “stability derivatives,” which governed the aircraft’s handling qualities and its response to the aerodynamic con­trols. Errors due to temperature became important only beyond Mach 10 and were negligible below such speeds.

B-52 mother ship with X-15A-2. The latter mounted a dummy scramjet and carried external tanks as well as ablative thermal protection. (NASA)

But the X-15 brought surprises in boundary-layer flow and aerodynamic heat­ing. There was reason to believe that this flow would remain laminar, being stabi­lized in this condition by heat flow out of the boundary layer. This offered hope, for laminar flow, as compared to turbulent, meant less skin-friction drag and less heating. Instead, the X-15 showed mostly turbulent boundary layers. These resulted from small roughnesses and irregularities in the aircraft skin surface, which tripped the boundary layers into turbulence. Such skin roughness commonly produced tur­bulent boundary layers on conventional aircraft. The same proved to be true at Mach 6.

The X-15 had a conservative thermal design, giving large safety margins to cope with the prevailing lack of knowledge. The turbulent boundary layers might have brought large increases in the heat-transfer rates, limiting the X-15’s peak speed. But in another surprise, these rates proved to be markedly lower than expected. As a consequence, the measured skin temperatures often were substantially less than had been anticipated (based on existing theory as well as on wind-tunnel tests). These flight results, confirmed by repeated measurements, were also validated with further wind-tunnel work. They resisted explanation by theory, but a new empirical model used these findings to give a more accurate description of hypersonic heat­ing. Because this model predicted less heating and lower temperatures, it permitted design of vehicles that were lighter in weight.88

An important research topic involved observation of how the X-15 itself would stand up to thermal stresses. The pilot Joseph Walker stated that when his craft was accelerating and heating rapidly, “the airplane crackled like a hot stove.” This resulted from buckling of the skin. The consequences at times could be serious, as when hot air leaked into the nose wheel well and melted aluminum tubing while in flight. On other occasions, such leaks destroyed the nose tire.89

Fortunately, such problems proved manageable. For example, the skin behind the wing leading edge showed local buckling during the first flight to Mach 5.3. The leading edge was a solid bar of Inconel X that served as a heat sink, with thin slots or expansion joints along its length. The slots tripped the local airflow into turbulence, with an accompanying steep rise in heat transfer. This created hot spots, which led to the buckling. The cure lay in cutting additional expansion slots, covering them with thin Inconel tabs, and fastening the skin with additional rivets. The wing leading edge faced particularly severe heating, but these modifications prevented buckling as the X-15 went beyond Mach 6 in subsequent flights.

Buckling indeed was an ongoing problem, and an important way to deal with it lay in the cautious step-by-step program of advance toward higher speeds. This allowed problems of buckling to appear initially in mild form, whereas a sudden leap toward record-breaking performance might have brought such problems in forms so severe as to destroy the airplane. This caution showed its value anew as buckling problems proved to lie behind an ongoing difficulty in which the cockpit canopy windows repeatedly cracked.

An initial choice of soda-lime glass for these windows gave way to alumino-silicate glass, which had better heat resistance. The wisdom of this decision became clear in 1961, when a soda-lime panel cracked in the course of a flight to 217,000 feet. However, a subsequent flight to Mach 6.04 brought cracking of an alumino-silicate panel that was far more severe. The cause again was buckling, this time in the retainer or window frame. It was made of Inconel X; its buckle again produced a local hot spot, which gave rise to thermal stresses that even this heat-resistant glass could not withstand. The original retainers were replaced with new ones made of titanium, which had a significantly lower coefficient of thermal expansion. Again the problem disappeared.90

The step-by-step test program also showed its merits in dealing with panel flut­ter, wherein skin panels oscillated somewhat like a flag waving in the breeze. This brought a risk of cracking due to fatigue. Some surface areas showed flutter at con­ditions no worse than Mach 2.4 and dynamic pressure of 650 pounds per square foot, a rather low value. Wind-tunnel tests verified the flight results. Engineers rein­forced the panels with skin doublers and longitudinal stiffeners to solve the prob­lem. Flutter did not reappear, even at the much higher dynamic pressure of 2,000 pounds per square foot.91

Caution in flight test also proved beneficial in dealing with the auxiliary power units (APUs). The APU, built by General Electric, was a small steam turbine driven by hydrogen peroxide and rotating at 51,200 revolutions per minute. Each X-15 airplane mounted two of them for redundancy, with each unit using gears to drive an electric alternator and a pump for the hydraulic system. Either APU could carry the full electrical and hydraulic load, but failure of both was catastrophic. Lacking hydraulic power, a pilot would have been unable to operate his aerodynamic con­trols.

Midway through 1962 a sudden series of failures in a main gear began to show up. On two occasions, a pilot experienced complete gear failure and loss of one APU, forcing him to rely on the second unit as a backup. Following the second such flight, the other APU gear also proved to be badly worn. The X-15 aircraft then were grounded while investigators sought the source of the problem.

They traced it to a lubricating oil, one type of which had a tendency to foam when under reduced pressure. The gear failures coincided with an expansion of the altitude program, with most of the flights above 100,000 feet having taken place during 1962 and later. When the oil turned to foam, it lost its lubricating proper­ties. A different type had much less tendency to foam; it now became standard. Designers also enclosed the APU gearbox within a pressurized enclosure. Subse­quent flights again showed reliable APU operation, as the gear failures ceased.92

Within the X-15 flight-test program, the contributions of its research pilots were decisive. A review of the first 44 flights, through November 1961, showed that 13 of them would have brought loss of the aircraft in the absence of a pilot and of redun­dancies in onboard systems. The actual record showed that all but one of these mis­sions had been successfully flown, with the lone exception ending in an emergency landing that also went well.93

Still there were risks. The dividing line between a proficient flight and a disastrous one, between life and death for the pilot, could be narrow indeed, and the man who fell afoul of this was Major Mike Adams. His career in the cockpit dated to the Korean War. He graduated from the Experimental Test Pilot School, ranking first in his class, and then was accepted for the Aerospace Research Pilot School. Yeager himself was its director; his faculty included Frank Borman, Tom Stafford, and Jim McDivitt, all of whom went on to win renown as astronauts. Yeager and his selection board picked only the top one percent of this school’s applicants.94

Adams made his first X-15 flight in October 1966. The engine shut down pre­maturely, but although he had previously flown this craft only in a simulator, he successfully guided his plane to a safe landing on an emergency dry lakebed. A year later, in the fall of 1967, he trained for his seventh mission by spending 23 hours in the simulator. The flight itself took place on 15 November.

As he went over the top at 266,400 feet, his airplane made a slow turn to the right that left it yawing to one side by 15 degrees.95 Soon after, Adams made his mistake. His instrument panel included an attitude indicator with a vertical bar. He could select between two modes of display, whereby this bar could indicate either sideslip angle or roll angle. He was accustomed to reading it as a yaw or sideslip angle—but he had set it to display roll.

“It is most probable that the pilot misinterpreted the vertical bar and flew it as a sideslip indicator,” the accident report later declared. Radio transmissions from the ground might have warned him of his faulty attitude, but the ground controllers had no data on yaw. Adams might have learned more by looking out the window, but he had been carefully trained to focus on his instruments. Three other cockpit indicators displayed the correct values of heading and sideslip angle, but he appar­ently kept his eyes on the vertical bar. He seems to have felt vertigo, which he had trained to overcome by concentrating on that single vertical needle.96

Mistaking roll for sideslip, he used his reaction controls to set up a re-entry with his airplane yawed at ninety degrees. This was very wrong; it should have been pointing straight ahead with its nose up. At Mach 5 and 230,000 feet, he went into a spin. He fought his way out of it, recovering from the spin at Mach 4.7 and 120,000 feet. However, some of his instruments had been knocked badly awry. His inertial reference unit was displaying an altitude that was more than 100,000 feet higher than his true altitude. In addition, the MH-96 flight-control system made a fatal error.

It set up a severe pitch oscillation by operating at full gain, as it moved the horizontal stabilizers up and down to full deflection, rapidly and repeatedly. This system should have reduced its gain as the aircraft entered increasingly dense atmosphere, but instead it kept the gain at its highest value. The wild pitching produced extreme nose-up and nose-down attitudes that brought very high drag, along with decelerations as great as 15 g. Adams found himself immobilized, pinned in his seat by forces far beyond what his plane could withstand. It broke up at 62,000 feet, still traveling at Mach 3.9. The wings and tail came off; the fuselage fractured into three pieces. Adams failed to eject and died when he struck the ground.97

“We set sail on this new sea,” John Kennedy declared in 1962, “because there is new knowledge to be gained, and new rights to be won.” Yet these achievements came at a price, which Adams paid in full.98

Scramjets Pass Their Peak

From the outset, scramjets received attention for the propulsion of tactical missiles. In 1959 APL’s Gordon Dugger and Frederick Billig disclosed a concept that took the name SCRAM, Supersonic Combustion Ramjet Missile. Boosted by a solid-fuel rocket, SCRAM was to cruise at Mach 8.5 and an altitude of 100,000 feet, with a range of more than 400 miles. This cruise speed resulted in a temperature of 3,800°F at the nose, which was viewed as the limit attainable with coated materials.1

The APL researchers had a strong interest in fuels other than liquid hydrogen, which could not be stored. The standard fuel, a boron-rich blend, used ethyl decaborane. It ignited easily and gave some 25 percent more energy per pound than gasoline. Other tests used blends of pentaborane with heavy hydrocarbons, with the pentaborane promoting their ignition. The APL group went on to construct and test a complete scramjet of 10-inch diameter.2

Paralleling this Navy-sponsored work, the Air Force strengthened its own efforts in scramjets. In 1963 Weldon Worth, chief scientist at the Aero Propulsion Laboratory, joined with Antonio Ferri and recommended scramjets as a topic meriting attention. Worth proceeded by funding new scramjet initiatives at General Electric and Pratt & Whitney. This was significant; these firms were the nation’s leading builders of turbojet and turbofan engines.

GE’s complete scramjet was axisymmetric, with a movable centerbody that included the nose spike. It was water-cooled and had a diameter of nine inches, with this size being suited to the company’s test facility. It burned hydrogen, which was quite energetic. Yet the engine failed to deliver net thrust, with this force being more than canceled out by drag.3

The Pratt & Whitney effort drew on management and facilities at nearby United Aircraft Research Laboratories. Its engine also was axisymmetric and used a long cowl that extended well to the rear, forming the outer wall of the nozzle duct. This entire cowl moved as a unit, thereby achieving variable geometry for all three major components: inlet, combustor, and nozzle. The effort culminated in fabrication of a complete water-cooled test unit of 18-inch diameter.4

A separate Aero Propulsion Lab initiative, the Incremental Flight Test Vehicle (IFTV), also went forward for a time. It indeed had the status of a flight vehicle, with Marquardt holding the prime contract and taking responsibility for the engine. Lockheed designed and built the vehicle and conducted wind-tunnel tests at its Rye Canyon facility, close to Marquardt’s plant in Van Nuys, California.

The concept called for this craft to ride atop a solid-fuel Castor rocket, which was the second stage of the Scout launch vehicle. Castor was to accelerate the IFTV to 5,400 feet per second, with this missile then separating and entering free flight. Burning hydrogen, its engines were to operate for at least five seconds, adding an “increment” of velocity of at least 600 feet per second. Following launch over the Pacific from Vandenberg AFB, it was to telemeter its data to the ground.

This was the first attempt to develop a scramjet as the centerpiece of a flight pro­gram, and much of what could go wrong did go wrong. The vehicle grew in weight during development. It also increased its drag and found itself plagued for a time with inlets that failed to start. The scramjets themselves gave genuine net thrust but still fell short in performance.

The flight vehicle mounted four scramjets. The target thrust was 597 pounds. The best value was 477 pounds. However, the engines needed several hundred pounds of thrust merely to overcome drag on the vehicle and accelerate, and this reduction in performance meant that the vehicle could attain not quite half of the desired velocity increase of 600 feet per second.5
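
The arithmetic behind that shortfall is simple: only thrust in excess of drag accelerates the vehicle, so the attainable velocity increment scales with net thrust. The sketch below uses the 597-pound target and 477-pound achieved thrust from the text, with a hypothetical drag figure standing in for the "several hundred pounds" mentioned.

```python
# Net-thrust comparison; the drag value is a hypothetical stand-in.

TARGET_THRUST_LBF = 597.0
ACHIEVED_THRUST_LBF = 477.0
ASSUMED_DRAG_LBF = 360.0          # hypothetical

net_target = TARGET_THRUST_LBF - ASSUMED_DRAG_LBF
net_achieved = ACHIEVED_THRUST_LBF - ASSUMED_DRAG_LBF
print(net_achieved / net_target)  # ~0.49: roughly half the accelerating force,
                                  # hence roughly half the 600 ft/s increment
```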

Just then, around 1967, the troubles of the IFTV were mirrored by troubles in the overall scramjet program. Scramjets had held their promise for a time, with a NASA/Air Force Ad Hoc Working Group, in a May 1965 report, calling for an expanded program that was to culminate in a piloted hypersonic airplane. The SAB had offered its own favorable words, while General Bernard Schriever, head of the Air Force Systems Command—the ARDC, its name having changed in 1961— attempted to secure $50 million in new funding.6

He did not get it, and the most important reason was that conventional ramjets, their predecessors, had failed to win a secure role. The ramjet-powered programs of the 1950s, including Navaho and Bomarc, now appeared as mere sidelines within a grand transformation that took the Air Force in only 15 years from piston-pow­ered B-36 and B-50 bombers to the solid-fuel Minuteman ICBM and the powerful Titan III launch vehicle. The Air Force was happy with both and saw no reason for scramjet craft as alternatives. This was particularly true because Aerospaceplane had come up with nothing compelling.

The Aero Propulsion Laboratory had funded the IFTV and the GE and Pratt scramjets, but it had shown that it would support this engine only if it could be developed quickly and inexpensively. Neither had proved to be the case. The IFTV effort, for one, had escalated in cost from $3.5 million to $12 million, with its engine being short on power and its airframe having excessive drag and weight.7

After Schriever’s $50-million program failed to win support, Air Force scramjet efforts withered and died. More generally, between 1966 and 1968, three actions ended Air Force involvement in broad-based hypersonic research and brought an end to a succession of halcyon years. The Vietnam War gave an important reason for these actions, for the war placed great pressure on budgets and led to cancellation of many programs that lacked urgency.

The first decision ended Air Force support for the X-15. In July 1966 the joint NASA-Air Force Aeronautics and Astronautics Coordinating Board determined that NASA was to accept all budgetary responsibility for the X-15 as of 1 January 1968. This meant that NASA was to pay for further flights—which it refused to do. This brought an end to the prospect of using this research airplane for flight testing of hypersonic engines.8

The second decision, in August 1967, terminated IFTV. Arthur Thomas, the Marquardt program manager, later stated that it had been a major error to embark on a flight program before ground test had established attainable performance levels. When asked why this systematic approach had not been pursued, Thomas pointed to the pressure of a fast-paced schedule that ruled out sequential development. He added that Marquardt would have been judged “nonresponsive” if its proposal had called for sequential development at the outset. In turn, this tight schedule reflected the basic attitude of the Aero Propulsion Lab: to develop a successful scramjet quickly and inexpensively, or not to develop one at all.9

Then in September 1968 the Navy elected to close its Ordnance Aerophysics Laboratory (OAL). This facility had stood out because it could accommodate test engines of realistic size. In turn, its demise brought a premature end to the P & W scramjet effort. That project succeeded in testing its engine at OAL at Mach 5, but only about 20 runs were conducted before OAL shut down, which was far too few for serious development. Nor could this engine readily find a new home; its 18-inch diameter had been sized to fit the capabilities of OAL. This project therefore died both from withdrawal of Air Force support and from loss of its principal test facil­ity.10

As dusk fell on the Air Force hypersonics program, Antonio Ferri was among the first to face up to the consequences. After 1966 he became aware that no major new contracts would be coming from the Aero Propulsion Lab, and he decided to leave GASL, where he had been president. New York University gave him strong encouragement, offering him the endowed Astor Professorship. He took this appointment during the spring of 1967.11

He proceeded to build new research facilities in the Bronx, as New York University bought a parcel of land for his new lab. A landmark was a vacuum sphere for his wind tunnel, which his friend Louis Nucci called "the hallmark of hypersonic flow" as it sucks high-pressure air from a stored supply. Ferri had left a trail of such spheres at his previous appointments: NACA-Langley, Brooklyn Polytechnic, GASL. But his new facilities were far less capable than those of GASL, and his opportunities were correspondingly reduced. He set up a consulting practice within an existing firm, Advanced Technology Labs, and conducted analytical studies. Still, Nucci recalls that "Ferri's love was to do experiments. To have only [Advanced Technology Labs] was like having half a body."

GASL took a significant blow in August 1967, as the Air Force canceled IFTV. The company had been giving strong support to the developmental testing of its engine, and in Nucci's words, "we had to use our know-how in flow and combustion." Having taken over from Ferri as company president, he won a contract from the Department of Transportation to study the aerodynamics of high-speed trains running in tubes.

“We had to retread everybody,” Nucci adds. Boeing held a federal contract to develop a supersonic transport; GASL studied its sonic boom. GASL also investi­gated the “parasol wing,” a low-drag design that rode atop its fuselage at the end of a pylon. There also was work on pollution for the local utility, Long Island Light­ing Company, which hoped to reduce its smog-forming emissions. The company stayed alive, but its employment dropped from 80 people in 1967 to only 45 five years later.12

Marquardt made its own compromises. It now was building small rocket engines, including attitude-control thrusters for Apollo and later for the space shuttle. But it too left the field of hypersonics. Arthur Thomas had managed the company's work on IFTV, and as he recalls, "I was chief engineer and assistant general manager. I got laid off. We laid off two-thirds of our people in one day." He knew that there was no scramjet group he might join, but he hoped for the next-best thing: conventional ramjets, powering high-speed missiles. "I went all over the country," he continues. "Everything in ramjet missiles had collapsed." He had to settle for a job working with turbojets, at McDonnell Douglas in St. Louis.13

Did these people ever doubt the value of their work? "Never," says Billig. Nucci, Ferri's old friend, gives the same answer: "Never. He always had faith." The problem they faced was not to allay any doubts of their own, but to overcome the misgivings of others and to find backers who would give them new funding. From time to time a small opportunity appeared. Then, as Billig recalls, "we were highly competitive. Who was going to get the last bits of money? As money got tighter, competition got stronger. I hope it was a friendly competition, but each of us thought he could do the job best."14

Amid this dark night of hypersonic research, two candles still flickered. There was APL, where a small group continued to work on missiles powered by scramjets that were to burn conventional fuels. More significantly, there was the Hypersonic Propulsion Branch at NASA-Langley, which maintained itself as the one place where important work on hydrogen-fueled scramjets still could go forward. As scramjets died within the Air Force, the Langley group went ahead, first with its Hypersonic Research Engine (HRE) and then with more advanced airframe-integrated designs.

First Thoughts of Hypersonic Propulsion

Three new aircraft engines emerged from World War II: the turbojet, the ramjet, and the liquid rocket. The turbojet was not suitable for hypersonic flight, but the rocket and the ramjet both gave rise to related airbreathing concepts that seemed to hold promise.

Airbreathing rockets drew interest, but it was not possible to pump in outside air with a conventional compressor. Such rockets instead used liquid hydrogen fuel as a coolant, to liquefy air, with this liquid air being pumped to the engine. This arrangement wasted cooling power by also liquefying the air's nonflammable nitrogen, and so investigators sought ways to remove this nitrogen. They wanted a flow of nearly pure liquid oxygen, taken from the air, for use as the oxidizer.

Ramjets provided higher flight speeds than turbojets, but they too had limits. Antonio Ferri, one of Langley’s leading researchers, took the lead in conceiving of ramjets that appeared well suited to flight at hypersonic and perhaps even orbital speeds, at least on paper. Other investigators studied combined-cycle engines. The ejector ramjet, for one, sought to integrate a rocket with a ramjet, yielding a single compact unit that might fly from a runway to orbit.

Was it possible to design a flight vehicle that in fact would do this? Ferri thought so, as did his colleague Alexander Kartveli of Republic Aviation. Air Force officials encouraged such views by sponsoring a program of feasibility studies called Aerospaceplane. Designers at several companies contributed their own ideas.

These activities unfolded within a world where work with conventional rockets was advancing vigorously.1 In particular, liquid hydrogen was entering the main­stream of rocket engineering.2 Ramjets also won acceptance as standard military engines, powering such missiles as Navaho, Bomarc, Talos, and the X-7. With this background, for a time some people believed that even an Aerospaceplane might prove feasible.

Scramjets at NASA-Langley

The road to a Langley scramjet project had its start at North American Aviation, builder of the X-15. During 1962 manager Edwin Johnston crafted a proposal to modify one of the three flight vehicles to serve as a testbed for hypersonic engines.

This suggestion drew little initial interest, but in November a serious accident reopened the question. Though badly damaged, the aircraft, Tail Number 66671, proved to be repairable. It returned to flight in June 1964, with modifications that indeed gave it the option for engine testing.

The X-15 program thus had this flight-capable testbed in prospect during 1963, at a time when engines for test did not even exist on paper. It was not long, though, before NASA responded to its opportunity, as Hugh Dryden, the Agency’s Deputy Administrator, joined with Robert Seamans, the Associate Administrator, in approv­ing a new program that indeed sought to build a test engine. It took the name of Hypersonic Research Engine (HRE).

Three companies conducted initial studies: General Electric, Marquardt, and Garrett AiResearch. All eyes soon were on Garrett, as it proposed an axisymmetric configuration that was considerably shorter than the others. John Becker later wrote that it "was the smallest, simplest, easiest to cool, and had the best structural approach of the three designs." Moreover, Garrett had shown strong initiative through the leadership of its study manager, Anthony duPont.15

He was a member of the famous duPont family in the chemical industry. Casual and easygoing, he had already shown a keen eye for the technologies of the future. As early as 1954, as a student, he had applied for a patent on a wing made of composite materials. He flew as a co-pilot with Pan American, commemorating those days with a framed picture of a Stratocruiser airliner in his office. He went on to Douglas Aircraft, where he managed studies of Aerospaceplane. Then Clifford Garrett, who had a strong interest in scramjets, recruited him to direct his company's efforts.16

NASA's managers soon offered an opportunity to the HRE competitors. The Ordnance Aerophysics Laboratory was still in business, and any of them could spend a month there testing hardware—if they could build scramjet components on short notice. Drawing on $250,000 in company funds, duPont crafted a full-scale HRE combustor in only sixty days. At OAL, it yielded more than five hours of test data. Neither GE nor Marquardt showed similar adroitness, while duPont's initiative suggested that the final HRE combustor would be easy to build. With this plus the advantages noted by Becker, Garrett won the contract. In July 1966 the program then moved into a phase of engine development and test.17

Number 66671 was flying routinely, and it proved possible to build a dummy HRE that could be mounted to the lower fin of that X-15. This led to a flight-test program that approached disaster in October 1967, when the test pilot Pete Knight flew to Mach 6.72. "We burned the engine off," Knight recalls. "I was on my way back to Edwards; my concern was to get the airplane back in one piece." He landed safely, but historian Richard Hallion writes that the airplane "resembled burnt firewood…. It was the closest any X-15 came to structural failure induced by heating."18

Once again it went back to the shops, marked for extensive repair. Then in mid-November another X-15 was lost outright in the accident that killed its test pilot, Mike Adams. Suddenly the X-15 was down from three flight-rated airplanes to only one, and although Number 66671 was returned to the flight line the following June, it never flew again. Nor would it fly again with the HRE. This dummy engine had set up the patterns of airflow that had caused the shock-impingement heating that had nearly destroyed the aircraft.19

In a trice, then, the HRE program was completely turned on its head. It had begun with the expectation of using the X-15 for flight test of advanced engines, at a moment when no such engines existed. Now Garrett was building them—but the X-15 could not be allowed to fly with them. Indeed, it soon stopped flying altogether. Thus, during 1968, it became clear that the HRE could survive only through a complete shift in focus to ground test.

Test pilot William "Pete" Knight initiates his record flight, which reached Mach 6.72. (NASA)

Earlier plans had called for a hydrogen-cooled flightweight engine. Now the program’s research objectives were to be addressed using two separate wind-tunnel versions. Each was to have a diameter of 18 inches, with a configuration and flow path matching those of the earlier flight-rated concept. The test objectives then were divided between them.

A water-cooled Aerothermodynamic Integration Model (AIM) was to serve for hot-fire testing. Lacking provision for hydrogen cooling, it stood at the technical level of the General Electric and Pratt & Whitney test scramjets. In addition, continuing interest in flightweight hydrogen-cooled engine structures brought a requirement for the Structures Assembly Model (SAM), which did not burn fuel. It operated at high temperature in Langley's eight-foot diameter High Temperature Structures Tunnel, which reached Mach 7.20

SAM arrived at NASA-Langley in August 1970. Under test, its inlet lip showed robustness, for it stood up to the impact of small particles, some of which blocked thin hydrogen flow passages. Other impacts produced actual holes as large as 1/16 inch in diameter. The lip nevertheless rode through the subsequent shock-impingement heating without coolant starvation or damage from overheating. This represented an important advance in scramjet technology, for it demonstrated the feasibility of crafting a flightweight fuel-cooled structure that could withstand foreign object damage along with very severe heating.21

AIM was also on the agenda. It reached its test center at Plum Brook, Ohio, in August 1971, but the facility was not ready. It took a year before the program undertook data runs, and then most of another year before the first successful run. Indeed, of 63 test runs conducted across 18 months, 42 returned little or no useful data. Moreover, while scramjet advocates had hoped to achieve shock-free flow, the AIM certainly did not do this. In addition, only about half of the injected fuel actually burned. But shocks in the subsonic-combustion zone heated the downstream flow and unexpectedly enabled the rest of the fuel to burn. In Becker's words, "without this bonanza, AIM performance would have been far below its design values."22

The HRE was axisymmetric. A practical engine of this type would have been mounted in a pod, like a turbojet in an airliner. An airliner's jet engines use only a small portion of the air that flows past the wings and fuselage, but scramjets derive far less thrust from each pound of air they capture. Therefore, to give enough thrust for acceleration at high Mach, they must capture and process as much as possible of the air flowing along the vehicle.
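
The underlying relation is the standard momentum equation for an airbreathing engine, written here as a sketch (fuel mass flow and pressure terms neglected; it is not a formula from the source):

    F = \dot{m}\,(V_e - V_0)

Thrust equals the captured air mass flow times the velocity increment the engine adds to it. At hypersonic flight speeds the exhaust velocity V_e exceeds the flight velocity V_0 by only a modest margin, so the thrust per pound of captured air is small; the practical way to raise F enough for acceleration is to raise the mass flow, that is, to swallow most of the air streaming past the airframe.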

Podded engines like the HRE cannot do this. The axisymmetry of the HRE made it easy to study because it had a two-dimensional layout, but it was not suitable for an operational engine. The scramjet that indeed could capture and process most of the airflow is known as an airframe-integrated engine, in which much of the aircraft serves as part of the propulsion system. Its layout is three-dimensional and hence is more complex, but only an airframe-integrated concept has the additional power that can make it practical for propulsion.

Contributions to scramjet thrust from airframe integration. (NASA)

Paper studies of airframe-integrated concepts began at Langley in 1968, breaking completely with those of the HRE. These investigations considered the entire undersurface of a hypersonic aircraft as an element of the propulsion system. The forebody produced a strong oblique shock that precompressed the airflow prior to its entry into the inlet. The afterbody was curved and swept upward to form a half-nozzle. This concept gave a useful shape for the airplane while retaining the advantages of airframe-integrated scramjet operation.

Airframe-integrated scramjet concept. (Garrett Corp.)

Within the Hypersonic Propulsion Branch, John Henry and Shimer Pinckney developed the initial concept. Their basic installation was a module, rectangular in shape, with a number of them set side by side to encircle the lower fuselage and achieve the required high capture of airflow. Their inlet had a swept opening that angled backward at 48 degrees. This provided a cutaway that readily permitted spillage of airflow, which otherwise could choke the inlet when starting.

The bow shock gave greater compression of the flow at high Mach, thereby reducing the height of the cowl and the required size of the engine. At Mach 10 this reduction was by a factor of three. While this shock compressed the flow vertically, wedge-shaped sidewalls com­pressed it horizontally. This two-plane compression diminished changes in the inlet flow field with increasing Mach, making it possible to cover a broad Mach range in fixed geometry.

Like the inlet, the combustor was to cover a large Mach range in fixed geometry. This called for thermal compression, and Langley contracted with Antonio Ferri at New York University to conduct analyses. This brought Ferri back into the world of scramjets. The design called for struts as fuel injectors, swept at 48 degrees to paral­lel the inlet and set within the combustor flow path. They promised more effective fuel injection than the wall-mounted injectors of earlier designs.

The basic elements of the Langley concept thus included fixed geometry, air­frame integration, a swept inlet, thermal compression, and use of struts for fuel injection. These elements showed strong synergism, for in addition to the aircraft undersurface contributing to the work of the inlet and nozzle, the struts also served as part of the inlet and thereby made it shorter. This happened because the flow from the inlet underwent further compression as it passed between the struts.23

Fuel-injecting strut. Arrows show how hydrogen is injected either parallel or perpendicular to the flow. (Garrett Corporation)

Experimental work paced the Langley effort as it went forward during the 1970s and much of the 1980s. Early observations, published in 1970, showed that struts were practical for a large supersonic combustor in flight at Mach 8. This work sup­ported the selection of strut injection as the preferred mode.24

Initial investigations involved inlets and combustors that were treated as sepa­rate components. These represented preludes to studies made with complete engine modules at two critical simulated flight speeds: Mach 4 and Mach 7. At Mach 4 the inlet was particularly sensitive to unstarts. The inlet alone had worked well, as had the strut, but now it was necessary to test them together and to look for unpleas­ant surprises. The Langley researchers therefore built a heavily instrumented engine of nickel and tested it at GASL, thereby bringing new work in hypersonics to that center as well.

Mach 7 brought a different set of problems. Unstarts now were expected to be less of a difficulty, but it was necessary to show that the fuel indeed could mix and burn within the limited length of the combustor. Mach 7 also approached the limitations of available wind tunnels. A new Langley installation, the Arc-Heated Scramjet Test Facility, reached temperatures of 3,500°F and provided the appropri­ate flows.

Integration of scramjets with an aircraft. (NASA)

Airframe-integrated scramjet concept: the aircraft forebody precompresses air and thus provides a part of the inlet function; the vehicle afterbody serves as an external expansion nozzle for the scramjet; within each scramjet module, struts provide additional airflow compression and serve as injection stations for the hydrogen fuel. (NASA)

Airflow within an airframe-integrated scramjet. (NASA)

Separate engines operated at GASL and Langley. Both used heat sink, with the run times being correspondingly short. Because both engines were designed for use in research, they were built for easy substitution of components. An inlet, combus­tor, nozzle, or set of fuel-injecting struts could be installed without having to modify the rest of the engine. This encouraged rapid prototyping without having to con­struct entirely new scramjets.

More than 70 runs at Mach 4 were made during 1978, first with no fuel injec­tion to verify earlier results from inlet tests, and then with use of hydrogen. Simple theoretical calculations showed that “thermal choking” was likely, with heat addi­tion in the combustor limiting the achievable flow rate, and indeed it appeared. Other problems arose from fuel injection. The engine used three struts, a main one on the centerline flanked by two longer ones, and fuel from these side struts showed poor combustion when injected parallel to the flow. Some unwanted inlet-combus­tor interactions sharply reduced the measured thrust. These occurred because the engine ingested boundary-layer flow from the top inner surface of the wind-tunnel duct. This simulated the ingestion of an aircraft boundary layer by a flight engine.
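
The thermal choking seen in these runs is the behavior predicted by classical Rayleigh-flow analysis for heat addition in a constant-area duct; the relation below is a textbook result quoted only as an illustration, not a formula from the Langley reports:

    \frac{T_0}{T_0^{*}} = \frac{(\gamma + 1)\,M^2\,\bigl[\,2 + (\gamma - 1)\,M^2\,\bigr]}{\bigl(1 + \gamma M^2\bigr)^{2}}

Here T_0 is the local stagnation temperature and T_0^{*} its value at the point where heat addition drives the Mach number to unity; the maximum heat that can be added per unit mass is q_max = c_p (T_0^{*} - T_0). Burning more fuel than this limit allows forces the upstream flow to readjust, and in an engine that readjustment can reach all the way back to the inlet.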

The thermal choking and the other interactions were absent when the engine ran very fuel-lean, and the goal of the researchers was to eliminate them while burning as much fuel as possible. They eased the problem of thermal choking by returning to a fuel-injection method that had been used on the HRE, with some fuel being injected downstream at the wall. However, the combustor-inlet interactions proved to be more recalcitrant. They showed up when the struts were injecting only about half as much fuel as could burn in the available airflow, which was not the formula for a high-thrust engine.25

Mach 7 brought its own difficulties, as the Langley group ran off 90 tests between April 1977 and February 1979. Here too there were inlet-combustor interactions, ranging from increased inlet spillage that added drag and reduced the thrust, to complete engine unstarts. In a typical unstart, the engine would put out good thrust when running lean, and when the fuel flow increased, so did the measured force. In less than a second, though, the inlet would unstart and the measured thrust would fall to zero.26

No simple solution appeared capable of addressing these issues. This meant that in the wake of those tests, as had been true for more than a decade, the Langley group did not have a working scramjet. Rather, they had a research problem. They addressed it after 1980 with two new engines, the Parametric Scramjet and the Step-Strut Engine. The Parametric engine lacked a strut but was built for ease of modification. In 1986 the analysts Burton Northam and Griffin Anderson wrote:

This engine allows for easy variation of inlet contraction ratio, internal area ratio and axial fuel injection location. Sweep may be incorporated in the inlet portion of the engine, but the remainder of the engine is unswept. In fact, the hardware is designed in sections so that inlet sweep can be changed (by substituting new inlet sidewalls) without removing the engine from the wind tunnel.

The Parametric Scramjet explored techniques for alleviating combustor-inlet interactions at Mach 4. The Step-Strut design also addressed this issue, mounting a single long internal strut fitted with fuel injectors, with a swept leading edge that resembled a staircase. Northam and Anderson wrote that it "was also tested at Mach 4 and demonstrated good performance without combustor-inlet interaction."27

Performance of scramjets. Note that figures are missing from the axes. (NASA)

How, specifically, did Lang­ley develop a workable scram­jet? Answers remain classified, with Northam and Anderson noting that “several of the fig­ures have no dimension on the axes and a discussion of the fig­ures omits much of the detail.”

A 1998 review was no more helpful. However, as early as 1986 the Langley researchers openly published a plot showing data taken at Mach 4 and at Mach 7. Its curves of thrust showed that the scramjets of the mid-1980s indeed could produce net thrust. Even at Mach 7, at which the thrust was less, these engines could overcome the drag of a complete vehicle and produce acceleration. In the words of Northam and Anderson, "at both Mach 4 and Mach 7 flight conditions, there is ample thrust for acceleration and cruise."28

Ramjets As Military Engines

The ramjet and turbojet relied on fundamentally the same thermodynamic cycle. Both achieved compression of inflowing air, heated the compressed flow by burning fuel, and obtained thrust by allowing the hot airflow to expand through a nozzle. The turbojet achieved compression by using a rotating turbocompressor, which inevitably imposed a requirement to tap a considerable amount of power from its propulsive jet by placing the turbine within the flow. A ramjet dispensed with this turbomachinery, compressing the incoming flow by the simple method of processing it through a normal or oblique shock. This brought the promise of higher flight speed. However, a ramjet paid for this advantage by requiring an auxiliary boost, typically with a rocket, to accelerate it to speeds at which this shock could form.3
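
The compression available from a single normal shock can be sketched with the standard perfect-gas shock relation (γ = 1.4 assumed; the Mach 2.6 case is chosen only because it matches the Bomarc cruise speed cited below, and the numbers are illustrative rather than from the source):

    \frac{p_2}{p_1} = 1 + \frac{2\gamma}{\gamma + 1}\,\bigl(M_1^2 - 1\bigr) \approx 7.7 \quad \text{at } M_1 = 2.6

The flow behind such a shock is subsonic and already hot, ready for fuel to be added; the penalty is that a normal shock grows increasingly lossy as the flight Mach number rises, which is one reason practical inlets leaned on systems of oblique shocks.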

The X-7 served as a testbed for development of ramjet engines as mainstream propulsion units. With an initial concept that dated to December 1946, it took shape as a three-stage flight vehicle. The first stage, a B-29 and later a B-50 bomber, played the classic role of lifting the aircraft to useful altitudes. Such bombers also served in this fashion as mother ships for the X-l series, the X-2, and in time the X-15. For the X-7, a solid propellant rocket served as the second stage, accelerating the test aircraft to a speed high enough for sustained ramjet operation. The ramjet engine, slung beneath its fuselage, provided further acceleration along with high­speed cruise. Recovery was achieved using a parachute and a long nose spike that pierced the ground like a lance. This enabled the X-7 to remain upright, which protected the airframe and engine for possible re-use.

The X-7 craft were based at Holloman Air Force Base in New Mexico, which was an early center for missile flight test. The first flight took place in April 1951, with a ramjet of 20-inch diameter built by Wright Aeronautical. The X-7 soon took on the role of supporting developmental tests of a 28-inch engine built by the Marquardt Company and intended for use with the Bomarc missile. Flights with this 28-inch design began in December 1952 and achieved a substantial success the following April. The engine burned for some 20 seconds; the vehicle reached 59,500 feet and Mach 2.6. This exceeded the Mach 2.44 of the X-1A rocket plane in December 1953, piloted by Chuck Yeager. Significantly, although the X-7 was unpiloted, it remained aerodynamically stable during this flight. By contrast, the X-1A lost stability and fell out of the sky, dropping 51,000 feet before Yeager brought it back under control.4

The X-7 soon became a workhorse, running off some one hundred missions between 1955 and the end of the program in July 1960. It set a number of records, including range and flight time of 134 miles and 552 seconds, respectively. Its altitude mark of 106,000 feet, achieved with an airbreathing ramjet, compared with 126,200 feet for the rocket-powered X-2 research airplane.5

Other achievements involved speed. The vehicle had been built of heat-treated 4130 steel, with the initial goal being Mach 3. The program achieved this—and simply kept going. On 29 August 1957 it reached Mach 3.95 with a 28-inch Marquardt engine. Following launch from a B-50 at 33,700 feet, twin solid motors mounted beneath the wings boosted the craft to Mach 2.25. These boosters fell away; the ramjet ignited, and the vehicle began to climb at a 20-degree angle before leveling out at 54,500 feet. It then went into a very shallow dive. The engine continued to operate, as it ran for a total of 91 seconds, and acceleration continued until the craft attained its peak Mach at fuel exhaustion. It was recovered through use of its parachute and nose spike, and temperature-sensitive paints showed that it had experienced heating to more than 600°F. This heating also brought internal damage to the engine.6

Even so, the X-7 was not yet at the limit of its capabilities. Fitted with a 36-inch ramjet, again from Marquardt, it flew to Mach 4.31 on 17 April 1958. This time the drop from the B-50 came at 28,500 feet, with the engine igniting following rocket boost to Mach 1.99. It operated for 70.9 seconds, propelling the vehicle to a peak altitude of 60,000 feet. By then it was at Mach 3.5, continuing to accelerate as it transitioned to level flight. It reached its maximum Mach—and sustained a sharp drop in thrust three seconds later, apparently due to an engine tailpipe failure. Breakup of the vehicle occurred immediately afterward, with the X-7 being demolished.7

This flight set a record for airbreathing propulsion that stands to this day. Its speed of 2,881 miles per hour (mph) compares with the record for an afterburning
turbojet of 2,193 mph, set in an SR-71 in 1976.8 Moreover, while the X-7 was flying to glory, the Bomarc program that it supported was rolling toward opera­tional deployment.

The name "Bomarc" derives from the contractors Boeing and the Michigan Aeronautical Research Center, which conducted early studies. It was a single-stage, ground-launched antiaircraft missile that could carry a nuclear warhead. A built-in liquid-propellant rocket provided boost; it was replaced by a solid rocket in a later version. Twin ramjets sustained cruise at Mach 2.6. Range of the initial operational model was 250 miles, later extended to 440 miles.9

Specifications for this missile were written in September 1950. In January 1951 an Air Force letter contract designated Boeing as the prime contractor, with Marquardt Aircraft winning a subcontract to build its ramjet. The development of this engine went forward rapidly. In July 1953 officials of the Air Force's Air Materiel Command declared that work on the 28-inch engine was essentially complete.10

The Bomarc. (U. S. Air Force)

Flight tests were soon under way. An Air Force review notes that a test vehicle "traveled 70 miles in 1.5 minutes to complete a most successful test of 17 June 1954." The missile "cruised at Mach 3+ for 15 seconds and reached an altitude of 56,000 feet." In another flight in February 1955, it reached a peak altitude of 72,000 feet as its ramjet burned for 245 seconds. This success brought a decision to order Bomarc into production. Four more test missiles flew with their ramjets later that year, with all four scoring successes.11

Other activity turned Bomarc from a missile into a weapon system, integrating it with the electronic Semi-Automatic Ground Environment (SAGE) that controlled air defense within North America. In October 1958, Bomarcs scored a spectacular success. Controlled remotely from a SAGE center 1,500 miles away, two missiles
homed in on target drones that were more than 100 miles out to sea. The Bomarcs dived on them and made intercepts. The missiles were unarmed, but one of them actually rammed its target. A similar success occurred a year later when a Bomarc made a direct hit on a Regulus 2 supersonic target over the Gulf of Mexico. The missile first achieved operational status in September 1959. Three years later, Bomarc was in service at eight Air Force sites, with deployment of Canadian squadrons following. These missiles remained on duty until 1972.12

Paralleling Bomarc, the Navy pursued an independent effort that developed a ship-based antiaircraft missile named Talos, after a mythical defender of the island of Crete. It took shape at a major ramjet center, the Applied Physics Laboratory (APL) of Johns Hopkins University. Like Bomarc, Talos was nuclear-capable; Jane's gave its speed as Mach 2.5 and its range as 65 miles.

The Talos. (National Archives and Records Administration)

An initial version first flew in 1952, at New Mexico's White Sands Missile Range. A prototype of a nuclear-capable version made its own first flight in December 1953. The Korean War had sparked development of this missile, but the war ended in mid-1953 and the urgency diminished. When the Navy selected the light cruiser USS Galveston for the first operational deployment of Talos, the conversion of this ship became a four-year task. Nevertheless, Talos finally joined the fleet in 1958, with other cruisers installing it as well. It remained in service until 1980.13

One military ramjet project, that of Navaho, found itself broken in mid-stride. Although the ultimate version was to have intercontinental range, the emphasis during the 1950s was on an interim model with range of 2,500 miles, with the missile cruising at Mach 2.75 and 76,800 feet. The missile used a rocket-powered booster with liquid-fueled engines built by Rocketdyne. The airplane-like Navaho mounted two 51-inch ramjets from Wright Aeronautical, which gave it the capabil­ity to demonstrate long-duration supersonic cruise under ramjet power.14

Flight tests began in November 1956, with launches of complete missiles taking place at Cape Canaveral. The first four were flops; none even got far enough to permit ignition of the ramjets. In mid-July of 1957, three weeks after the first launch of an Atlas, the Air Force canceled Navaho. Lee Atwood, president of North American Aviation, recalls that Atlas indeed had greater promise: "Navaho would approach its target at Mach 3; a good antiaircraft missile might shoot it down. But Atlas would come in at Mach 20. There was no way that anyone would shoot it down."

There nevertheless was hardware for several more launches, and there was considerable interest in exercising the ramjets. Accordingly, seven more Navahos were launched during the following 18 months, with the best flight taking place in January 1958.

The Navaho. (Smithsonian Institution No. 77-10905)

The missile accelerated on rocket power and leveled off, the twin ramjet engines ignited, and it stabilized in cruise at 64,000 feet. It continued in this fashion for half an hour. Then, approaching the thousand-mile mark in range, its autopilot initiated a planned turnaround to enable this Navaho to fly back uprange. The turn was wide, and ground controllers responded by tightening it under radio control. This disturbed the airflow near the inlet of the right ramjet, which flamed out. The missile lost speed, its left engine also flamed out, and the vehicle fell into the Atlantic. It had been airborne for 42 minutes, covering 1,237 miles.15

Because the program had been canceled and the project staff was merely flying off its leftover hardware, there were no funds to address what clearly was a serious inlet problem. Still, Navaho at least had flown. By contrast, another project—the Air Force's XF-103 fighter, which aimed at Mach 3.7—never even reached the prototype stage.

The XF-103 in artist's rendering. (U. S. Air Force)

Its engine, also from Wright Aeronautical, combined a turbojet and ramjet within a single package. The ramjet doubled as an afterburner, with internal doors closing off the ramjet's long inlet duct. Conversion to pure ramjet operation took seven seconds. This turboramjet showed considerable promise. At Arnold Engineering Development Center, an important series of ground tests was slated to require as much as six weeks. They took only two weeks, with the engine running on the first day.

Unfortunately, the XF-103 outstayed its welcome. The project dated to 1951; it took until December 1956 to carry out the AEDC tests. Much of the reason for this long delay involved the plane's highly advanced design, which made extensive use of titanium. Still, the Mach 1.8 XF-104 took less than 18 months to advance from the start of engineering design to first flight, and the XF-103 was not scheduled to fly until 1960. The Air Force canceled it in August 1957, and aviation writer Richard DeMeis pronounced its epitaph: "No matter how promising or outstanding an aircraft may be, if development takes inordinately long, the mortality rate rises in proportion."15

Among the five cited programs, three achieved operational status, with the X-7 actually outrunning its initial planned performance. The feasibility of Navaho was never in doubt; the inlet problem was one of engineering development, not one that would call its practicality into question. Only the XF-103 encountered seri­ous problems of technology that lay beyond the state of the art. The ramjet of the 1950s thus was an engine whose time had come, and which had become part of mainstream design.

Ramjets As Military Engines

The scramjet. Oblique shocks in the isolator prevent disturbances in the combustor from propagat­ing upstream, where they would disrupt flow in the inlet. (Courtesy of Frederick Billig)

There was a ramjet industry, featuring the firms of Marquardt Aviation and Wright Aeronautical. Facilities for developmental testing existed, not only at these companies but at NACA-Lewis and the Navy’s Ordnance Aerophysics Laboratory, which had a large continuous-flow supersonic wind tunnel. With this background, a number of investigators looked ahead to engines derived from ramjets that could offer even higher performance.

The Advent of NASP

With test engines well on their way in development, there was the prospect of experimental aircraft that might exercise them in flight test. Such a vehicle might come forth as a successor to Number 66671, the X-15 that had been slated to fly the HRE. An aircraft of this type indeed took shape before long, with the designation X-30. However, it did not originate purely as a technical exercise. Its background lay in presidential politics.

The 1980 election took place less than a year after the Soviets invaded Afghan­istan. President Jimmy Carter had placed strong hope in arms control and had negotiated a major treaty with his Soviet counterpart, Leonid Brezhnev. But the incursion into Afghanistan took Carter by surprise and destroyed the climate of international trust that was essential for Senate ratification of this treaty. Reagan thus came to the White House with arms-control prospects on hold and with the Cold War once more in a deep freeze. He responded by launching an arms buildup that particularly included new missiles for Europe.29

Peace activist Randall Forsberg replied by taking the lead in calling for a nuclear freeze, urging the superpowers to halt the "testing, production and deployment of nuclear weapons" as an important step toward "lessening the risk of nuclear war." Her arguments touched a nerve within the general public, for within two years, support for a freeze topped 70 percent. Congressman Edward Markey introduced a nuclear-freeze resolution in the House of Representatives. It failed by a margin of only one vote, with Democratic gains in the 1982 mid-term elections making passage a near certainty. By the end of that year, half the states in the Union had adopted their own freeze resolutions, as did more than 800 cities, counties, and towns.30

To Reagan, a freeze was anathema. He declared that it "would be largely unverifiable…. It would reward the Soviets for their massive military buildup while preventing us from modernizing our aging and increasingly vulnerable forces." He asserted that Moscow held a "present margin of superiority" and that a freeze would leave America "prohibited from catching up."31

With the freeze ascendant, Admiral James Watkins, the Chief of Naval Opera­tions, took a central role in seeking an approach that might counter its political appeal. Exchanges with Robert McFarlane and John Poindexter, deputies within the National Security Council, drew his thoughts toward missile defense. Then in Janu­ary 1983 he learned that the Joint Chiefs were to meet with Reagan on 11 February. As preparation, he met with a group of advisors that included the physicist Edward Teller.

Trembling with passion, Teller declared that there was enormous promise in a new concept: the x-ray laser. This was a nuclear bomb that was to produce intense beams of x-rays that might be aimed to destroy enemy missiles. Watkins agreed that the broad concept of missile defense indeed was attractive. It could introduce a new prospect: that America might counter the Soviet buildup, not with a buildup of its own but by turning to its strength in advanced technology.

Watkins succeeded in winning support from his fellow Joint Chiefs, including the chairman, General John Vessey. Vessey then gave Reagan a half-hour briefing at the 11 February meeting, as he drew extensively on the views of Watkins. Reagan showed strong interest and told the Chiefs that he wanted a written proposal. Robert McFarlane, Deputy to the National Security Advisor, already had begun to explore concepts for missile defense. During the next several weeks his associates took the lead in developing plans for a program and budget.32

On 23 March 1983 Reagan spoke to the nation in a televised address. He dealt broadly with issues of nuclear weaponry. Toward the end of the speech, he offered new thoughts:

“Let me share with you a vision of the future which offers hope. It is that we embark on a program to counter the awesome Soviet missile threat with measures that are defensive. Let us turn to the very strengths in technology that spawned our great industrial base and that have given us the quality of life we enjoy today.

What if free people could live secure in the knowledge that their security did not rest upon the threat of instant U. S. retaliation to deter a Soviet attack, that we could intercept and destroy strategic ballistic missiles before they reached our own soil or that of our allies?…

I call upon the scientific community in our country, those who gave us nuclear weapons, to turn their great talents now to the cause of mankind and world peace, to give us the means of rendering these nuclear weapons impotent and obsolete.”33

The ensuing Strategic Defense Initiative never deployed weapons that could shoot down a missile. Yet from the outset it proved highly effective in shooting down the nuclear freeze. That movement reached its high-water mark in May 1983, as a strengthened Democratic majority in the House indeed passed Markey's resolution. But the Senate was still held by Republicans, and the freeze went no further. The SDI gave everyone something new to talk about. Reagan's speech helped him to regain the initiative, and in 1984 he swept to re-election with an overwhelming majority.34

The SDI brought the prospect of a major upsurge in traffic to orbit, with a flood of new military payloads. SDI supporters asserted that some one hundred orbiting satellites could provide an effective strategic defense, although the Union of Concerned Scientists, a center of criticism, declared that the number would be as large as 2,400. Certainly, though, an operational missile defense was likely to place new and extensive demands on means for access to space.

Within the Air Force Systems Command, there already was interest in a next-generation single-stage-to-orbit launch vehicle that was to use the existing Space Shuttle Main Engine. Lieutenant General Lawrence Skantze, Commander of the Air Force Systems Command's Aeronautical Systems Division (ASD), launched work in this area early in 1982 by directing the ASD planning staff to conduct an in-house study of post-shuttle launch vehicles. It then went forward under the leadership of Stanley Tremaine, the ASD's Deputy for Development Planning, who christened these craft as Transatmospheric Vehicles. In December 1984 Tremaine set up a TAV Program Office, directed by Lieutenant Colonel Vince Rausch.35

Transatmospheric Vehicle concepts, 1984. (U.S. Air Force)

Moreover, General Skantze was advancing into high-level realms of command, where he could make his voice heard. In August 1982 he went to Air Force Headquarters, where he took the post of Deputy Chief of Staff for Research, Development, and Acquisition. This gave him responsibility for all Air Force programs in these areas. In October 1983 he pinned on his fourth star as he took an appointment as Air Force Vice Chief of Staff. In August 1984 he became Commander of the Air Force Systems Command.36

He accepted these Washington positions amid growing military disenchantment with the space shuttle. Experience was showing that it was costly and required a long time to prepare for launch. There also was increasing concern for its safety, with a 1982 Rand Corporation study flatly predicting that as many as three shuttle orbiters would be lost to accidents during the life of the program. The Air Force was unwilling to place all its eggs in such a basket. In February 1984 Defense Secretary Caspar Weinberger approved a document stating that total reliance on the shuttle “represents an unacceptable national security risk.” Air Force Secretary Edward Aldridge responded by announcing that he would remove 10 payloads from the shuttle beginning in 1988 and would fly them on expendables.37

Just then the Defense Advanced Research Projects Agency was coming to the forefront as an important new center for studies of TAV-like vehicles. DARPA was already reviving the field of flight research with its X-29, which featured a forward-swept wing along with an innovative array of control systems and advanced materials. Robert Cooper, DARPA's director, held a strong interest in such projects and saw them as a way to widen his agency's portfolio. He found encouragement during 1982 as a group of ramjet specialists met with Richard De Lauer, the Undersecretary of Defense Research and Engineering. They urged him to keep the field alive with enough new funds to prevent them from having to break up their groups. De Lauer responded with letters that he sent to the Navy, Air Force, and DARPA, asking them to help.38

This provided an opening for Tony duPont, who had designed the HRE. He had taken a strong interest in combined-cycle concepts and decided that the scramLACE was the one he preferred. It was to eliminate the big booster that every ramjet needed, by using an ejector, but experimental versions weren't very powerful. DuPont thought he could do better by using the HRE as a point of departure, as he added an auxiliary inlet for LACE and a set of ejector nozzles upstream of the combustor. He filed for a patent on his engine in 1970 and won it two years later.39

In 1982 he still believed in it, and he learned that Anthony Tether was the DARPA man who had been attending TAV meetings. The two men met several times, with Tether finally sending him up to talk with Cooper. Cooper listened to duPont and sent him over to Robert Williams, one of DARPA's best aerodynamicists. Cooper declares that Williams "was the right guy; he knew the most in this area. This wasn't his specialty, but he was an imaginative fellow."40

Williams had come up within the Navy, working at its David Taylor research center. His specialty was helicopters; he had initiated studies of the X-wing, which was to stop its rotor in midair and fly as a fixed-wing aircraft. He also was interested in high-speed flight. He had studied a missile that was to fight what the Navy called the "outer air battle," which might use a scramjet. This had brought him into discussions with Fred Billig, who also worked for the Navy and helped him learn hypersonic propulsion. He came to DARPA in 1981 and joined its Tactical Technologies Office, where he became known as the man to see if anyone was interested in scramjets.41

Williams now phoned duPont and gave him a test: "I've got a very ambitious problem for you. If you think the airplane can do this, perhaps we can promote a program. Cooper has asked me to check you out." The problem was to achieve single-stage-to-orbit flight with a scramjet and a suite of heat-resistant materials, and duPont recalls his response: "I stayed up all night; I was more and more intrigued with this. Finally I called him back: 'Okay, Bob, it's not impossible. Now what?'"42

DuPont had been using a desktop computer, and Williams and Tether responded to his impromptu calculations by giving him $30,000 to prepare a report. Soon Williams was broadening his circle of scramjet specialists by talking with old-timers such as Arthur Thomas, who had been conducting similar studies a quarter-century earlier, and who quickly became skeptical. DuPont had patented his propulsion concept, but Thomas saw it differently: “I recognized it as a Marquardt engine. Tony called it the duPont cycle, which threw me off, but I recognized it as our engine. He claimed he’d improved it.” In fact, “he’d made a mistake in calculating the heat capacity of air. So his engine looked so much better than ours.”

Thomas nevertheless signed on to contribute to the missionary work, joining Williams and duPont in giving presentations to other conceptual-design groups. At Lockheed and Boeing, they found themselves talking to other people who knew scramjets. As Thomas recalls, “The people were amazed at the component efficien­cies that had been assumed in the study. They got me aside and asked if I really believed it. Were these things achievable? Tony was optimistic everywhere: on mass fraction, on air drag of the vehicle, on inlet performance, on nozzle perfor­mance, on combustor performance. The whole thing, across the board. But what salved our conscience was that even if these weren’t all achieved, we still could have something worth while. Whatever we got would still be exciting.”43

Williams recalls that in April 1984, "I put together a presentation for Cooper called 'Resurrection of the Aerospaceplane.' He had one hour; I had 150 slides. He came in, sat down, and said Go. We blasted through those slides. Then there was silence. Cooper said, 'I want to spend a day on this.'" After hearing additional briefings, he approved a $5.5-million effort known as Copper Canyon, which brought an expanded program of studies and analyses.44

Copper Canyon represented an attempt to show how the SDI could achieve its access to space, and a number of high-level people responded favorably when Cooper asked to give a briefing. He and Williams made a presentation to George Keyworth, Reagan's science advisor. They then briefed the White House Science Council. Keyworth recalls that "here were people who normally would ask questions for hours. But after only about a half-hour, David Packard said, 'What's keeping us? Let's do it!'" Packard had served as Deputy Secretary of Defense.45

During 1985, as Copper Canyon neared conclusion, the question arose of expanding the effort with support from NASA and the Air Force. Cooper attended a classified review and as he recalls, “I went into that meeting with a high degree of skepticism.” But technical presentations brought him around: “For each major problem, there were three or four plausible ways to deal with it. That’s extraordi­nary. Usually it’s—‘Well, we don’t know exactly how we’ll do it, but we’ll do it.’ Or, ‘We have a way to do it, which may work.’ It was really a surprise to me; I couldn’t pick any obvious holes in what they had done. I could find no reason why they couldn’t go forward.”46

Further briefings followed. Williams gave one to Admiral Watkins, whom Cooper describes as “very supportive, said he would commit the Navy to support of the program.” Then in July, Cooper accompanied Williams as they gave a presenta­tion to General Skantze.

They displayed their viewgraphs and in Cooper’s words, “He took one look at our concept and said, ‘Yeah, that’s what I meant. I invented that idea.’” Not even the stars on his shoulders could give him that achievement, but his endorsement reflected the fact that he was dissatisfied with the TAV studies. He had come away appreciating that he needed something better than rocket engines—and here it was. “His enthusiasm came from the fact that this was all he had anticipated,” Cooper continues. “He felt as if he owned it.”

Skantze wanted more than viewgraphs. He wanted to see duPont's engine in operation. A small version was under test at GASL, without LACE but definitely with its ejector, and one technician had said, "This engine really does put out static thrust, which isn't obvious for a ramjet." Skantze saw the demonstration and came away impressed. Then, Williams adds, "the Air Force system began to move with the speed of a spaceplane. In literally a week and a half, the entire Air Force senior command was briefed."

Later that year the Secretary of Defense, Caspar Weinberger, granted a briefing. With him were members of his staff, along with senior people from NASA and the military services. After giving the presentation, Williams recalls that "there was silence in the room. The Secretary said, 'Interesting,' and turned to his staff. Of course, all the groundwork had been laid. All of the people there had been briefed, and we could go for a yes-or-no decision. We had essentially total unanimity around the table, and he decided that the program would proceed as a major Defense Department initiative. With this, we moved immediately to issue requests for proposal to industry."47

Initial version of the duPont engine under test at GASL. (GASL)

In January 1986 the TAV effort was formally terminated. At Wright-Patterson AFB, the staff of its program office went over to a new Joint Program Office that now supported what was called the National Aerospace Plane. It brought together rep­resentatives from the Air Force, Navy, and NASA. Program management remained at DARPA, where Williams retained his post as the overall manager.48

In this fashion, NASP became a significant federal initiative. It benefited from a rare alignment of the political stars, for Reagan's SDI cried out for better launch vehicles and Skantze was ready to offer them. Nor did funding appear to be a problem, at least initially. Reagan had shown favor to aerospace through such acts as approving NASA's space station in 1984. Pentagon spending had surged, and DARPA's Cooper was asserting that an X-30 might be built for an affordable cost.

Yet NASP was a leap into the unknown. Its scramjets now were in the forefront but not because the Langley research had shown that they were ready. Instead they were a focus of hope because Reagan wanted SDI, SDI needed better access to space, and Skantze wanted something better than rockets.

The people who were making Air Force decisions, such as Skantze, did not know much about these engines. The people who did know them, such as Thomas, were well aware of duPont’s optimism. There thus was abundant opportunity for high hope to give way to hard experience.

Origins of the Scramjet

The airflow within a ramjet was subsonic. This resulted from its passage through one or more shocks, which slowed, compressed, and heated the flow. This was true even at high speed, with the Mach 4.31 flight of the X-7 also using a subsonic-com­bustion ramjet. Moreover, because shocks become stronger with increasing Mach, ramjets could achieve greater internal compression of the flow at higher speeds. This increase in compression improved the engine’s efficiency.

Comparative performance of scramjets and other engines. Airbreathers have very high performance because they are "energy machines," which burn fuel to heat air. Rockets have much lower performance because they are "momentum machines," which physically expel flows of mass. (Courtesy of William Escher)

Still, there were limits to a ramjet’s effectiveness. Above Mach 5, designers faced increasingly difficult demands for thermal protection of an airframe and for cooling of the ramjet duct. With the internal flow being very hot, it became more difficult to add still more heat by burning fuel, without overtaxing the materials or the cool­ing arrangements. If the engine were to run lean to limit the temperature rise in the combustor, its thrust would fall off. At still higher Mach levels, the issue of heat addition through combustion threatened to become moot. With high internal tem­peratures promoting dissociation of molecules of air, combustion reactions would not go to completion and hence would cease to add heat.
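
A rough ideal-gas estimate illustrates the problem; the relation is the standard one for stagnation temperature, and the γ = 1.4 and the stratospheric ambient temperature of about 390°R used below are assumptions for illustration, not figures from the source:

    T_0 = T_\infty \left(1 + \frac{\gamma - 1}{2}\,M_\infty^{2}\right)

At Mach 6 this gives a stagnation temperature near 390 × 8.2 ≈ 3,200°R, or roughly 2,700°F, before any fuel is burned. Real-gas effects trim the figure somewhat, but the trend shows why adding still more heat through combustion, within the limits of available materials and cooling, became so difficult.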

A promising way around this problem involved doing away with a requirement for subsonic internal flow. Instead this airflow was to be supersonic and was to sus­tain combustion. Right at the outset, this approach reduced the need for internal cooling, for this airflow would not heat up excessively if it was fast enough. This relatively cool internal airflow also could continue to gain heat through combus­tion. It would avoid problems due to dissociation of air or failure of chemical reac­tions in combustion to go to completion. On paper, there now was no clear upper limit to speed. Such a vehicle might even fly to orbit.

Yet while a supersonic-combustion ramjet offered tantalizing possibilities, right at the start it posed a fundamental issue: was it feasible to burn fuel in the duct of such an engine without producing shock waves? Such shocks could produce severe internal heating, destroying the benefits of supersonic combustion by slowing the flow to subsonic speeds. Rather than seeking to achieve shock-free supersonic com­bustion in a duct, researchers initially bypassed this difficulty by addressing a sim­pler problem: demonstration of combustion in a supersonic free-stream flow.

The earliest pertinent research appears to have been performed at the Applied Physics Laboratory (APL), during or shortly after World War II. Machine gunners in aircraft were accustomed to making their streams of bullets visible by making every twentieth round a tracer, which used a pyrotechnic. They hoped that a gunner could walk his bullets into a target by watching the glow of the tracers, but experience showed that the pyrotechnic action gave these bullets trajectories of their own. The Navy then engaged two research centers to look into this. In Aberdeen, Maryland, Ballistic Research Laboratories studied the deflection of the tracer rounds themselves. Near Washington, DC, APL treated the issue as a new effect in aerodynamics and sought to make use of it.

Investigators conducted tests in a Mach 1.5 wind tunnel, burning hydrogen at the base of a shell. A round in flight experienced considerable drag at its base, but the experiments showed that this combustion set up a zone of higher pressure that canceled the drag. This work did not demonstrate supersonic combustion, for while the wind-tunnel flow was supersonic, the flow near the base was subsonic. Still, this work introduced APL to topics that later proved pertinent to supersonic-combustion ramjets (which became known as scramjets).17

NACA’s Lewis Flight Propulsion Laboratory, the agency’s center for studies of engines, emerged as an early nucleus of interest in this topic. Initial work involved theoretical studies of heat addition to a supersonic flow. As early as 1950, the Lewis investigators Irving Pinkel and John Serafini treated this problem in a two-dimensional case, as in flow over a wing or past an axisymmetric body. In 1952 they specifically treated heat addition under a supersonic wing. They suggested that this might produce more lift than could be obtained by burning the same amount of fuel in a turbojet to power an airplane.18

This conclusion immediately raised the question of whether it was possible to demonstrate supersonic combustion in a wind tunnel. Supersonic tunnels produced airflows having very low pressure, which added to the experimental difficulties. However, researchers at Lewis had shown that aluminum borohydride could promote the ignition of pentane fuel at air pressures as low as 0.03 atmosphere. In 1953 Robert Dorsch and Edward Fletcher launched a research program that sought to ignite pure borohydride within a supersonic flow. Two years later they declared that they had succeeded. Subsequent work showed that at Mach 3, combustion of this fuel under a wing more than doubled the lift.19

Also at Lewis, the aerodynamicists Richard Weber and John MacKay published the first important open-literature study of theoretical scramjet performance in 1958. Because they were working entirely with equations, they too bypassed the problem of attaining shock-free flow in a supersonic duct by simply positing that it was feasible. They treated the problem using one-dimensional gas dynamics, corresponding to flow in a duct with properties at any location being uniform across the diameter. They restricted their treatment to flow velocities from Mach 4 to 7.

They discussed the issue of maximizing the thrust and the overall engine efficiency. They also considered the merits of various types of inlet, showing that a suitable choice could give a scramjet an advantage over a conventional ramjet. Supersonic combustion failed to give substantial performance improvements or to lead to an engine of lower weight. Even so, they wrote that “the trends developed herein indicate that the [scramjet] will offer superior performance at higher hypersonic flight speeds.”20

An independent effort proceeded along similar lines at Marquardt, where investigators again studied scramjet performance by treating the flow within an engine duct using one-dimensional gasdynamic theory. In addition, Marquardt researchers carried out their own successful demonstration of supersonic combustion in 1957. They injected hydrogen into a supersonic airflow, with the hydrogen and the air having the same velocity. This work overcame objections from skeptics, who had argued that the work at NACA-Lewis had not truly demonstrated supersonic combustion. The Marquardt experimental arrangement was simpler, and its results were less equivocal.21

The Navy’s Applied Physics Laboratory, home of Talos, also emerged as an early center of interest in scramjets. As had been true at NACA-Lewis and at Marquardt, this group came to the concept by way of external burning under a supersonic wing. William Avery, the leader, developed an initial interest in supersonic combustion around 1955, for he saw the conventional ramjet facing increasingly stiff competition from both liquid rockets and afterburning turbojets. (Two years later such competition killed Navaho.) Avery believed that he could use supersonic combustion to extend the performance of ramjets.

His initial opportunity came early in 1956, when the Navy’s Bureau of Ordnance set out to examine the technological prospects for the next 20 years. Avery took on the task of assembling APL’s contribution. He picked scramjets as a topic to study, but he was well aware of an objection. In addition to questioning the fundamental feasibility of shock-free supersonic combustion in a duct, skeptics considered that a hypersonic inlet might produce large pressure losses in the flow, with consequent negation of an engine’s thrust.

Avery sent this problem through Talos management to a young engineer, James Keirsey, who had helped with Talos engine tests. Keirsey knew that if a hypersonic ramjet was to produce useful thrust, it would appear as a small difference between two large quantities: gross thrust and total drag. In view of uncertainties in both these numbers, he was unable to state with confidence that such an engine would work. Still he did not rule it out, and his “maybe” gave Avery reason to pursue the topic further.

Avery decided to set up a scramjet group and to try to build an engine for test in a wind tunnel. He hired Gordon Dugger, who had worked at NACA-Lewis. Dugger’s first task was to decide which of several engine layouts, both ducted and unducted, was worth pursuing. He and Avery selected an external-burning configuration with the shape of a broad upside-down triangle. The forward slope, angled downward, was to compress the incoming airflow. Fuel could be injected at the apex, with the upward slope at the rear allowing the exhaust to expand. This approach again bypassed the problem of producing shock-free flow in a duct. The use of external burning meant that this concept could produce lift as well as thrust.

Dugger soon became concerned that this layout might be too simple to be effective. Keirsey suggested placing a very short cowl at the apex, thereby easing problems of ignition and combustion. This new design lent itself to incorporation within the wings of a large aircraft of reasonably conventional configuration. At low speeds the wide triangle could retract until it was flat and flush with the wing undersurface, leaving the cowl to extend into the free stream. Following acceleration to supersonic speed, the two shapes would extend and assume their triangular shape, then function as an engine for further acceleration.

Wind-tunnel work also proceeded at APL. During 1958 this center had a Mach 5 facility under construction, and Dugger brought in a young experimentalist named Frederick Billig to work with it. His first task was to show that he too could demonstrate supersonic combustion, which he tried to achieve using hydrogen as his fuel. He tried electric ignition; an APL history states that he “generated gigantic arcs,” but “to no avail.” Like the NACA-Lewis investigators, he turned to fuels that ignited particularly readily. His choice, triethyl aluminum, reacts spontaneously, and violently, on contact with air.

“The results of the tests on 5 March 1959 were dramatic,” the APL history continues. “A vigorous white flame erupted over the rear of [the wind-tunnel model] the instant the triethyl aluminum fuel entered the tunnel, jolting the model against its support. The pressures measured on the rear surface jumped upward.” The device produced less than a pound of thrust. But it generated considerable lift, supporting calculations that had shown that external burning could increase lift. Later tests showed that much of the combustion indeed occurred within supersonic regions of the flow.22

By the late 1950s small scramjet groups were active at NACA-Lewis, Marquardt, and APL. There also were individual investigators, such as James Nicholls of the University of Michigan. Still it is no small thing to invent a new engine, even as an extension of an existing type such as the ramjet. The scramjet needed a really high-level advocate, to draw attention within the larger realms of aerodynamics and propulsion. The man who took on this role was Antonio Ferri.

He had headed the supersonic wind tunnel in Guidonia, Italy. Then in 1943 the Nazis took control of that country and Ferri left his research to command a band of partisans who fought the Nazis with considerable effectiveness. This made him a marked man, and it was not only Germans who wanted him. An American agent, Moe Berg, was also on his trail. Berg found him and persuaded him to come to the States. The war was still on and immigration was nearly impossible, but Berg persuaded William Donovan, the head of his agency, to seek support from President Franklin Roosevelt himself. Berg had been famous as a baseball catcher in civilian life, and when Roosevelt learned that Ferri now was in the hands of his agent, he remarked, “I see Berg is still catching pretty well.”23

At NACA-Langley after the war, he rose in management and became director of the Gas Dynamics Branch in 1949. He wrote an important textbook, Elements of Aerodynamics of Supersonic Flows (Macmillan, 1949). Holding a strong fondness for the academic world, he took a professorship at Brooklyn Polytechnic Institute in 1951, where in time he became chairman of his department. He built up an aerodynamics laboratory at Brooklyn Poly and launched a new activity as a consultant. Soon he was working for major companies, drawing so many contracts that his graduate students could not keep up with them. He responded in 1956 by founding a company, General Applied Science Laboratories (GASL). With financial backing from the Rockefellers, GASL grew into a significant center for research in high-speed flight.24

He was a formidable man. Robert Sanator, a former student, recalls that “you had to really want to be in that course, to learn from him. He was very fast. His mind was constantly moving, redefining the problem, and you had to be fast to keep up with him. He expected people to perform quickly, rapidly.” John Erdos, another ex-student, adds that “if you had been a student of his and later worked for him, you could never separate the professor-student relationship from your normal working relationship.” He remained Dr. Ferri to these people, never Tony, even when they rose to leadership within their companies.25

He came early to the scramjet. Taking this engine as his own, he faced its technical difficulties squarely and asserted that they could be addressed, giving examples of approaches that held promise. He repeatedly emphasized that scramjets could offer performance far higher than that of rockets. He presented papers at international conferences, bringing these ideas to a wider audience. In turn, his strong professional reputation ensured that he was taken seriously. He also performed experiments as he sought to validate his claims. More than anyone else, Ferri turned the scramjet from an idea into an invention, which might be developed and made practical.

His path to the scramjet began during the 1950s, when his work as a consultant brought him into a friendship with Alexander Kartveli at Republic Aviation. Louis Nucci, Ferri’s longtime colleague, recalls that the two men “made good sparks. They were both Europeans and learned men; they liked opera and history.” They also complemented each other professionally, as Kartveli focused on airplane design while Ferri addressed difficult problems in aerodynamics and propulsion. The two men worked together on the XF-103 and fed off each other, each encouraging the other to think bolder thoughts. Among the boldest was a view that there were no natural limits to aircraft speed or performance. Ferri put forth this idea initially; Kartveli then supported it with more detailed studies.26

The key concept, again, was the scramjet. Holding a strong penchant for experimentation, Ferri conducted research at Brooklyn Poly. In September 1958, at a conference in Madrid, he declared that steady combustion, without strong shocks, had been accomplished in a supersonic airstream at Mach 3.0. This placed him midway in time between the supersonic-combustion demonstrations at Marquardt and at APL.27

Shock-free flow in a duct continued to loom as a major problem. The Lewis, Marquardt, and APL investigators had all bypassed this issue by treating external combustion in the supersonic flow past a wing, but Ferri did not flinch. He took the problem of shock-free flow as a point of departure, thereby turning the ducted scramjet from a wish into a serious topic for investigation.

In supersonic wind tunnels, shock-free flow was an everyday affair. However, the flow in such tunnels achieved its supersonic Mach values by expanding through a nozzle. By contrast, flow within a scramjet was to pass through a supersonic inlet and then be strongly heated within a combustor. The inlet actually had the purpose of producing a shock, an oblique one that was to slow and compress the flow while allowing it to remain supersonic. However, the combustion process was only too likely to produce unwanted shocks, which would limit an engine’s thrust and performance.

Nicholls, at Michigan, proposed to make a virtue of necessity by turning a combustor shock to advantage. Such a shock would produce very strong heating of the flow. If the fuel and air had been mixed upstream, then this combustor shock could produce ignition. Ferri would have none of this. He asserted that “by using a suitable design, formation of shocks in the burner can be avoided.”28

Specifically, he started with a statement by NACA’s Weber and MacKay on combustors. These researchers had already written that the combustor needed a diverging shape, like that of a rocket nozzle, to overcome potential limits on the airflow rate due to heat addition (“thermal choking”). Ferri proposed that within such a combustor, “fuel is injected parallel to the stream to eliminate formation of shocks…. The fuel gradually mixes with the air and burns…and the combustion process can take place without the formation of shocks.” Parallel injection might take place by building the combustor with a step or sudden widening. The flow could expand as it passed the step, thereby avoiding a shock, while the fuel could be injected at the step.29

Ferri also made an intriguing contribution in dealing with inlets, which are critical to the performance of scramjets. He did this by introducing a new concept called “thermal compression.” One approaches it by appreciating that a process of heat addition can play the role of a normal shock wave. When an airflow passes through such a shock, it slows in speed and therefore diminishes in Mach, while its temperature and pressure go up. The same consequences occur when a supersonic airflow is heated. It therefore follows that a process of heat addition can substitute for a normal shock.30
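
A short calculation illustrates the analogy. The sketch below, which is not taken from Ferri’s papers, applies the classical Rayleigh relations for heat addition to frictionless flow in a constant-area duct; the Mach numbers chosen are purely illustrative.

```python
# Sketch of the heat-addition/shock analogy, using the classical Rayleigh-flow
# relations for frictionless flow in a constant-area duct. The Mach numbers in
# the example call are illustrative choices, not figures from the text.

GAMMA = 1.4

def rayleigh_ratios(m1: float, m2: float):
    """Static-pressure, static-temperature, and total-temperature ratios when
    heat addition takes a constant-area flow from Mach m1 to Mach m2."""
    f = (1.0 + GAMMA * m1**2) / (1.0 + GAMMA * m2**2)
    p_ratio = f                                   # p2 / p1
    t_ratio = f**2 * (m2 / m1)**2                 # T2 / T1
    t0_ratio = t_ratio * (1 + 0.5*(GAMMA-1)*m2**2) / (1 + 0.5*(GAMMA-1)*m1**2)
    return p_ratio, t_ratio, t0_ratio

# Heating that slows the flow from Mach 3 to Mach 2:
p, t, t0 = rayleigh_ratios(3.0, 2.0)
print(f"p2/p1 = {p:.2f}, T2/T1 = {t:.2f}, T02/T01 = {t0:.2f}")
# -> roughly p2/p1 = 2.06, T2/T1 = 1.89, T02/T01 = 1.21 (> 1 confirms heat added).
# As with a shock, the Mach number falls while pressure and temperature rise.
```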

Practical inlets use oblique shocks, which are two-dimensional. Such shocks afford good control of the aerodynamics of an inlet. If heat addition is to substitute for an oblique shock, it too must be two-dimensional. Heat addition in a duct is one-dimensional, but Ferri proposed that numerous small burners, set within a flow, could achieve the desired two-dimensionality. By turning individual burners on or off, and by regulating the strength of each one’s heating, he could produce the desired pattern of heating that in fact would accomplish the substitution of heating for shock action.31

Why would one want to do this? The nose of a hypersonic aircraft produces a strong bow shock, an oblique shock that accomplishes initial compression of the airflow. The inlet rides well behind the nose and features an enclosing cowl. The cowl, in turn, has a lip or outer rim. For best effectiveness, the inlet should sustain a “shock-on-lip” condition. The shock should not impinge within the inlet, for only the lip is cooled in the face of shock-impingement heating. But the shock also should not ride outside the inlet, or the inlet will fail to capture all of the shock-compressed airflow.

To maintain the shock-on-lip condition across a wide Mach range, an inlet requires variable geometry. This is accomplished mechanically, using sliding seals that must not allow leakage of very hot boundary-layer air. Ferri’s principle of thermal compression raised the prospect that an inlet could use fixed geometry, which was far simpler. It would do this by modulating its burners rather than by physically moving inlet hardware.

Thermal compression brought an important prospect of flexibility. At a given value of Mach, there typically was only one arrangement of a variable-geometry inlet that would produce the desired shock that would compress the flow. By contrast, the thermal-compression process might be adjusted at will simply by controlling the heating. Ferri proposed to do this by controlling the velocity of injection of the fuel. He wrote that “the heat release is controlled by the mixing process, [which] depends on the difference of velocity of the air and of the injected gas.” Shock-free internal flow appeared feasible: “The fuel is injected parallel to the stream to eliminate formation of shocks [and] the combustion process can take place without the formation of shocks.” He added,

“The preliminary analysis of supersonic combustion ramjets…indicates that combustion can occur in a fixed-geometry burner-nozzle combination through a large range of Mach numbers of the air entering the combustion region. Because the Mach number entering the burner is permitted to vary with flight Mach number, the inlet and therefore the complete engine does not require variable geometry. Such an engine can operate over a large range of flight Mach numbers and, therefore, can be very attractive as an accelerating engine.”32

There was more. As noted, the inlet was to produce a bow shock of specified character, to slow and compress the incoming air. But if the inflow was too great, the inlet would disgorge its shock. This shock, now outside the inlet, would disrupt the flow within the inlet and hence in the engine, with the drag increasing and the thrust falling off sharply. This was known as an unstart.

Supersonic turbojets, such as the Pratt & Whitney J58 that powered the SR-71 to speeds beyond Mach 3, typically were fitted with an inlet that featured a conical spike at the front, a centerbody that was supposed to translate back and forth to adjust the shock to suit the flight Mach number. Early in the program, it often did not work.33 The test pilot James Eastham was one of the first to fly this spy plane, and he recalls what happened when one of his inlets unstarted.

“An unstart has your full and undivided attention, right then. The airplane gives a very pronounced yaw; then you are very preoccupied with getting the inlet started again. The speed falls off; you begin to lose altitude. You follow a procedure, putting the spikes forward and opening the bypass doors. Then you would go back to the automatic positioning of the spike—which many times would unstart it again. And when you unstarted on one side, sometimes the other side would also unstart. Then you really had to give it a good massage.”34

The SR-71 initially used a spike-positioning system from Hamilton Standard. It proved unreliable, and Eastham recalls that at one point, “unstarts were literally stopping the whole program.”35 This problem was eventually overcome through development of a more capable spike-positioning system, built by Honeywell.36 Still, throughout the development and subsequent flight career of the SR-71, the positioning of inlet spikes was always done mechanically. In turn, the movable spike represented a prime example of variable geometry.

Scramjets faced similar issues, particularly near Mach 4. Ferri’s thermal-compression principle applied here as well—and raised the prospect of an inlet that might fight against unstarts by using thermal rather than mechanical arrangements. An inlet with thermal compression then might use fixed geometry all the way to orbit, while avoiding unstarts in the bargain.

Ferri presented his thoughts publicly as early as 1960. He went on to give a far more detailed discussion in May 1964, at the Royal Aeronautical Society in London. This was the first extensive presentation on hypersonic propulsion for many in the audience, and attendees responded effusively.

One man declared that “this lecture opened up enormous possibilities. Where they had, for lack of information, been thinking of how high in flight speed they could stretch conventional subsonic burning engines, it was now clear that they should be thinking of how far down they could stretch supersonic burning engines.” A. D. Baxter, a Fellow of the Society, added that Ferri “had given them an insight into the prospects and possibilities of extending the speed range of the airbreathing engine far beyond what most of them had dreamed of; in fact, assailing the field which until recently was regarded as the undisputed regime of the rocket.”37

Not everyone embraced thermal compression. “The analytical basis was rather weak,” Marquardt’s Arthur Thomas commented. “It was something that he had in his head, mostly. There were those who thought it was a lot of baloney.” Nor did Ferri help his cause in 1968, when he published a Mach 6 inlet that offered “much better performance” at lower Mach “because it can handle much higher flow.” His paper contained not a single equation.38

But Fred Billig was one who accepted the merits of thermal compression and gave his own analyses. He proposed that at Mach 5, thermal compression could increase an engine’s specific impulse, an important measure of its performance, by 61 percent. Years later he recalled Ferri’s “great capability for visualizing, a strong physical feel. He presented a full plate of ideas, not all of which have been realized.”39

The Decline of NASP

NASP was one of Reagan’s programs, and for a time it seemed likely that it would not long survive the change in administrations after he left office in 1989. That fiscal year brought a high-water mark for the program, as its budget peaked at $320 million. During the spring of that year officials prepared budgets for FY 1991, which President George H. W. Bush would send to Congress early in 1990. Military spending was already trending downward, and within the Pentagon, analyst David Chu recommended canceling all Defense Department spending for NASP. The new Secretary of Defense, Richard Cheney, accepted this proposal. With this, NASP appeared dead.

NASP had a new program manager, Robert Barthelemy, who had replaced Williams. Working through channels, he found support in the White House from Vice President Dan Quayle. Quayle chaired the National Space Council, which had been created by law in 1958 and that just then was active for the first time in a decade. He used it to rescue NASP. He led the Space Council to recommend proceeding with the program under a reduced but stable budget, and with a schedule slip. This plan won acceptance, giving the program leeway to face a new issue: excessive technical optimism.49

X-30 concept of 1985. (NASA)

During 1984, amid the Copper Canyon activities, Tony duPont devised a conceptual configuration that evolved into the program’s baseline. It had a gross weight of 52,650 pounds, which included a 2,500-pound payload that it was to carry to polar orbit. Its weight of fuel was 28,450 pounds. The propellant mass fraction, the ratio of these quantities, then was 0.54.50

The fuel had low density and was bulky, demanding high weight for the tankage and airframe. To save weight, duPont’s concept had no landing gear. It lacked reserves of fuel; it was to reach orbit by burning its last drops. Once there it could not execute a controlled deorbit, for it lacked maneuvering rockets as well as fuel and oxidizer for them. DuPont also made no provision for a reserve of weight to accommodate normal increases during development.51

Williams’s colleagues addressed these deficiencies, although they continued to accept duPont’s optimism in the areas of vehicle drag and engine performance. The new concept had a gross weight of 80,000 pounds. Its engines gave a specific impulse of 1,400 seconds, averaged over the trajectory, which corresponded to a mean exhaust velocity of 45,000 feet per second. (That of the SSME was 453.5 seconds in vacuum, or 14,590 feet per second.) The effective velocity increase for the X-30 was calculated at 47,000 feet per second, with orbital velocity being 25,000 feet per second; the difference represented loss due to drag. This version of the X-30 was designated the “government baseline” and went to the contractors for further study.52
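
The conversion behind these figures is straightforward: exhaust velocity equals specific impulse multiplied by standard gravity. The short sketch below merely checks the quoted arithmetic; it is not material from the program documents.

```python
# Check of the quoted figures: exhaust velocity = specific impulse * g0 (ft/s^2).

G0 = 32.174  # standard gravity, ft/s^2

def exhaust_velocity_fps(isp_seconds: float) -> float:
    """Mean exhaust velocity corresponding to a given specific impulse."""
    return isp_seconds * G0

print(f"{exhaust_velocity_fps(1400.0):.0f} ft/s")   # X-30 average: ~45,000 ft/s
print(f"{exhaust_velocity_fps(453.5):.0f} ft/s")    # SSME in vacuum: ~14,590 ft/s

# The required effective velocity increase of 47,000 ft/s, against an orbital
# velocity of 25,000 ft/s, leaves roughly 22,000 ft/s charged to drag and other
# losses over the ascent.
```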

The initial round of contract awards was announced in April 1986. Five airframe firms developed new conceptual designs, introducing their own estimates of drag and engine performance along with their own choices of materials. They gave the following weight estimates for the X-30:

Rockwell International
McDonnell Douglas
General Dynamics
Boeing
Lockheed

A subsequent downselection, in October 1987, eliminated the two heaviest concepts while retaining Rockwell, McDonnell Douglas, and General Dynamics for further work.53

What brought these weight increases? Much of the reason lay in a falloff in estimated engine performance, which fell as low as 1,070 seconds of averaged specific impulse. New estimates of drag pushed the required effective velocity increase during ascent to as much as 52,000 feet per second.

A 1989 technical review, sponsored by the National Research Council, showed what this meant. The chairman, Jack Kerrebrock, was an experienced propulsion specialist from MIT. His panel included other men of similar background: Seymour Bogdonoff of Princeton, Artur Mager of Marquardt, Frank Marble from Caltech. Their report stated that for the X-30 to reach orbit as a single stage, “a fuel fraction of approximately 0.75 is required.”54

X-30 concept of 1990, which had grown considerably. (U.S. Air Force)

One gains insight by considering three hydrogen-fueled rocket stages of NASA and calculating their values of propellant mass fraction if both their hydrogen and oxygen tanks were filled with NASP fuel. This was slush hydrogen, a slurry of the solid and liquid. The stages are the S-II and S-IVB of Apollo and the space shuttle’s external tank. Liquid hydrogen has 1/16 the density of liquid oxygen. With NASP slush having 1.16 times the density of liquid hydrogen,55 the propellant mass fractions are as follows:56

S-IVB, third stage of the Saturn V: 0.722
S-II, second stage of the Saturn V: 0.753
External Tank: 0.868
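
The bookkeeping behind this comparison can be sketched as follows. The densities follow the text (slush at 1.16 times liquid hydrogen, liquid hydrogen at 1/16 the density of liquid oxygen); the tank volumes and dry weight in the example call are hypothetical placeholders, not the actual figures for these stages.

```python
# Sketch: fill both tanks of a stage with slush hydrogen and recompute its
# propellant mass fraction. Densities follow the text; the example volumes and
# dry weight are hypothetical placeholders, not actual S-II/S-IVB/ET values.

RHO_LH2 = 4.43                 # lb/ft^3, liquid hydrogen (approximate)
RHO_SLUSH = 1.16 * RHO_LH2     # lb/ft^3, NASP slush hydrogen
RHO_LOX = 16.0 * RHO_LH2       # lb/ft^3, per the 1/16 ratio cited above

def slush_mass_fraction(vol_h2_ft3: float, vol_o2_ft3: float, dry_weight_lb: float) -> float:
    """Propellant mass fraction if both tanks carry slush hydrogen."""
    slush_weight = (vol_h2_ft3 + vol_o2_ft3) * RHO_SLUSH
    return slush_weight / (slush_weight + dry_weight_lb)

# Hypothetical stage: 40,000 ft^3 of hydrogen tankage, 12,000 ft^3 of oxygen
# tankage, 70,000 lb dry. The real stages yield the 0.72-0.87 values tabulated above.
print(f"{slush_mass_fraction(40_000, 12_000, 70_000):.3f}")
```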

The S-II, which comes close to Kerrebrock’s value of 0.75, was an insulated shell that mounted five rocket engines. It withstood compressive loads along its length that resulted from the weight of the S-IVB and the Apollo moonship but did not require reinforcement to cope with major bending loads. It was constructed of aluminum alloy and lacked landing gear, thermal protection, wings, and a flight deck.

How then did NASP offer an X-30 concept that constituted a true hypersonic airplane rather than a mere rocket stage? The answer lay in adding weight to the fuel, which boosted the propellant mass fraction. The vehicle was not to reach orbit entirely on slush-fueled scramjets but was to use a rocket for final ascent. It used tanked oxygen—with nearly 14 times the density of slush hydrogen. In addition, design requirements specified a tripropellant system that was to burn liquid methane during the early part of the flight. This fuel had less energy than hydrogen, but it too added weight because it was relatively dense. The recommended mix called for 69 percent hydrogen, 20 percent oxygen, and 11 percent methane.57

Evolution of the X-30. The government baseline of 1986 had Isp of 1,400 seconds, delta-V to reach orbit of 47,000 feet per second, and propellant mass fraction of 0.54. Its 1992 counterpart had less Isp, more drag, propellant mass fraction of 0.75, and could not reach orbit. (NASP National Program Office)

In 1984, with optimism at its height, Cooper had asserted that the X-30 would be the size of an SR-71 and could be ready in three years. DuPont argued that his concept could lead to a “5-5-50” program by building a 50,000-pound vehicle in five years for $5 billion.58 Eight years later, in October 1990, the program had a new chosen configuration. It was rectangular in cross section, with flat sides. Three scramjet engines were to provide propulsion. Two small vertical stabilizers were at the rear, giving better stability than a single large one. A single rocket engine of approximately 60,000 pounds of thrust, integrated into the airframe, completed the layout. Other decisions selected the hot structure as the basic approach to thermal protection. The primary structure was to be of titanium-matrix composite, with insulated panels of carbon to radiate away the heat.59

This 1990 baseline design showed little resemblance to its 1984 ancestor. As revised in 1992, it no longer was to fly to a polar orbit but would take off on a due-east launch from Kennedy Space Center, thereby gaining some 1,340 feet per second of launch velocity. Its gross weight was quoted at 400,000 pounds, some 40 percent heavier than the General Dynamics weight that had been the heaviest acceptable in the 1987 downselect. Yet even then the 1992 concept was expected to fall short of orbit by some 3,000 feet per second. An uprated version, with a gross weight of at least 450,000 pounds, appeared necessary to reach orbital velocity. The prospective program budget came to $15 billion or more, with the time to first flight being eight to ten years.60
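
The quoted gain from a due-east launch is simply the eastward velocity of Earth’s rotation at the latitude of Kennedy Space Center. The sketch below checks it, using standard values for Earth’s radius, the sidereal day, and a launch-site latitude near 28.5 degrees north; none of these numbers come from the text.

```python
# Quick check on the quoted 1,340 ft/s: eastward velocity from Earth's rotation
# at the latitude of Kennedy Space Center. The radius, sidereal day, and
# latitude below are standard reference values, not figures from the text.

import math

EARTH_RADIUS_FT = 20_925_000   # ~6,378 km, equatorial
SIDEREAL_DAY_S = 86_164        # seconds
KSC_LATITUDE_DEG = 28.5

equatorial_speed = 2 * math.pi * EARTH_RADIUS_FT / SIDEREAL_DAY_S   # ~1,526 ft/s
eastward_boost = equatorial_speed * math.cos(math.radians(KSC_LATITUDE_DEG))
print(f"{eastward_boost:.0f} ft/s")   # ~1,340 ft/s, matching the quoted figure
```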

During 1992 both the Defense Science Board (DSB) and Congress’s General Accounting Office (GAO) conducted major program reviews. The immediate issue was whether to proceed as planned by making a commitment that would actually build and fly the X-30. Such a decision would take the program from its ongoing phase of research and study into a new phase of mainstream engineering development.

Both reviews focused on technology, but international issues were in the background, for the Cold War had just ended. The Soviet Union had collapsed in 1991, with communists falling from power while that nation dissolved into 15 constituent states. Germany had already reunified; the Berlin Wall had fallen, and the whole of Eastern Europe had won independence from Moscow. The western border of Russia now approximated that of 1648, at the end of the Thirty Years’ War. Two complete tiers of nominally independent nations now stood between Russia and the West.

These developments greatly diminished the military urgency of NASP, while the reviews’ conclusions gave further reason to reduce its priority. The GAO noted that program managers had established 38 technical milestones that were to be satisfied before proceeding to mainstream development. These covered the specific topics of X-30 design, propulsion, structures and materials, and use of slush hydrogen as a fuel. According to the contractors themselves, only 17 of those milestones—fewer than half—were to be achieved by September 1993. The situation was particularly worrisome in the critical area of structures and materials, for which only six of 19 milestones were slated for completion. The GAO therefore recommended delaying a commitment to mainstream development “until critical technologies are developed and demonstrated.”61

The DSB concurred, highlighting specific technical deficiencies. The most important involved the prediction of scramjet performance and of boundary-layer transition. In the latter, an initially laminar or smoothly flowing boundary layer becomes turbulent. This brings large increases in heat transfer and skin friction, a major source of drag. The locations of transition thus had to be known.

The scramjet-performance problem arose because of basic limitations in the capabilities of ground-test facilities. The best of them could accommodate a complete engine, with inlet, combustor, and nozzle, but could conduct tests only below Mach 8. “Even at Mach 8,” the DSB declared, “the scramjet cycle is just beginning to be established and consequently, there is uncertainty associated with extrapolating the results into the higher Mach regime. At speeds above Mach 8, only small components of the scramjet can be tested.” This brought further uncertainty when predicting the performance of complete engines.

Boundary-layer transition to turbulence also demanded attention: “It is essential to understand the boundary-layer behavior at hypersonic speeds in order to ensure thermal survival of the airplane structure as designed, as well as to accurately predict the propulsion system performance and airplane drag. Excessive conservatism in boundary-layer predictions will lead to an overweight design incapable of achieving [single stage to orbit], while excessive optimism will lead to an airplane unable to survive in the hypersonic flight environment.”

The DSB also showed strong concern over issues of control in flight of the X-30 and its engines. These were not simple matters of using ailerons or pushing throttles. The report stated that “controllability issues for NASP are so complex, so widely ranging in dynamics and frequency, and so interactive between technical disciplines as to have no parallels in aeronautical history…the most fundamental initial requirements for elementary aircraft control are not yet fully comprehended.” An onboard computer was to manage the vehicle and its engines in flight, but an understanding of the pertinent forces and moments “is still in an embryonic state.” Active cooling of the vehicle demanded a close understanding of boundary-layer transition. Active cooling of the engine called for resolution of “major uncertainties… connected with supersonic burning.” In approaching these issues, “very great uncertainties exist at a fundamental level.”

The DSB echoed the GAO in calling for extensive additional research before proceeding into mainstream development of the X-30:

We have concluded [that] fundamental uncertainties will continue to exist in at least four critical areas: boundary-layer transition; stability and controllability; propulsion performance; and structural and subsystem weight. Boundary-layer transition and scramjet performance cannot be validated in existing ground-test facilities, and the weight estimates have insufficient reserves for the inevitable growth attendant to material allowables, fastening and joining, and detailed configuration issues…. Using optimistic assumptions on transition and scramjet performance, and the present weight estimates on material performance and active cooling, the vehicle design does not yet close; the velocity achieved is short of orbital requirements.62

Faced with the prospect that the flight trajectory of the X-30 would merely amount to a parabola, budget makers turned the curve of program funding into a parabola as well. The total budget had held at close to $250 million during FY 1990 and 1991, falling to $205 million in 1992. But in 1993 it took a sharp dip to $140 million. The NASP National Program Office tried to rescue the situation by proposing a six-year program with a budget of $2 billion, called Hyflite, that was to conduct a series of unmanned flight tests. The Air Force responded with a new technical group, the Independent Review Team, that turned thumbs down on Hyflite and called instead for a “minimum” flight test program. Such an effort was to address the key problem of reducing uncertainties in scramjet performance at high Mach.

The National Program Office came back with a proposal for a new program called HySTP. Its budget request came to $400 million over five years, which would have continued the NASP effort at a level only slightly higher than its allocation of $60 million for FY 1994. Yet even this minimal program budget proved to be unavailable. In January 1995 the Air Force declined to approve the HySTP budget and initiated the formal termination of the NASP program.63

In this fashion, NASP lived and died. Like SDI and the space station, one could view it as another in a series of exercises in Reaganesque optimism that fell short. Yet from the outset, supporters of NASP had emphasized that it was to make important contributions in such areas as propulsion, hypersonic aerodynamics, computational fluid dynamics, and materials. The program indeed did these things and thereby laid groundwork for further developments.