Facing the Heat Barrier: A History of Hypersonics

Aerospaceplane

“I remember when Sputnik was launched,” says Arthur Thomas, a leader in early work on scramjets at Marquardt. The date was 4 October 1957. “I was doing analysis of scramjet boosters to go into orbit. We were claiming back in those days that we could get the cost down to a hundred dollars per pound by using airbreathers.” He adds that “our job was to push the frontiers. We were extremely excited and optimistic that we were really on the leading edge of something that was going to be big.”49

At APL, other investigators proposed what may have been the first concept for a hypersonic airplane that merited consideration. In an era when the earliest jet airliners were only beginning to enter service, William Avery leaped beyond the supersonic transport to the hypersonic transport, at least in his thoughts. His colleague Eugene Pietrangeli developed a concept for a large aircraft with a wingspan of 102 feet and length of 175 feet, fitted with turbojets and with the Dugger-Keirsey external-burning scramjet, with its short cowl, under each wing. It was to accelerate to Mach 3.6 using the turbojets, then go over to scramjet propulsion and cruise at Mach 7. Carrying 130 passengers, it was to cross the country in half an hour and achieve a range of 7,000 miles. Its weight of 600,000 pounds was nearly twice that of the Boeing 707 Intercontinental, largest of that family of jetliners.50

Within the Air Force, an important prelude to similar concepts came in 1957 with Study Requirement 89774. It invited builders of large missiles to consider what modifications might make them reusable. It was not hard to envision that they might return to a landing on a runway by fitting them with wings and jet engines, but most such rocket stages were built of aluminum, which raised serious issues of thermal protection. Still, Convair at least had a useful point of departure. Its Atlas used stainless steel, which had considerably better heat resistance.51

The Convair concept envisioned a new version of this missile, fitted out as a reusable first stage for a launch vehicle. Its wings were to use the X-15’s structure. A crew compartment, set atop a rounded nose, recalled that company’s B-36 heavy bomber. To ease the thermal problem, designers were aware that this stage, having burned its propellants, would be light in weight. It therefore could execute a hypersonic glide while high in the atmosphere, losing speed slowly and diminishing the rate of heating.52

It did not take long before Convair officials began to view this reusable Atlas as merely a first step into space, for the prospect of LACE opened new vistas. Beginning late in 1957, using a combination of Air Force and in-house funding, the company launched paper studies of a new concept called Space Plane. It took shape as a large single-stage vehicle with highly swept delta wings and a length of 235 feet. Propulsion was to feature a combination of ramjets and LACE with ACES, installed as separate engines, with the ACES being of the distillation type. The gross weight at takeoff, 450,000 pounds, was to include 270,000 pounds of liquid hydrogen.


Convair’s Space Plane concept. (Art by Dennis Jenkins)

Space Plane was to take off from a runway, using LACE and ACES while pumping the oxygen-rich condensate directly to the LACE combustion chambers. It would climb to 40,000 feet and Mach 3, cut off the rocket, and continue to fly using hydrogen-fueled ramjets. It was to use ACES for air collection while cruising at Mach 5.5 and 66,000 feet, trading liquid hydrogen for oxygen-rich liquid air while taking on more than 600,000 pounds of this oxidizer. Now weighing more than a million pounds, Space Plane would reach Mach 7 on its ramjets, then shut them down and go over completely to rocket power. Drawing on its stored oxidizer, it could fly to orbit while carrying a payload of 38,000 pounds.

The concept was born in exuberance. Its planners drew on estimates “that by 1970 the orbital payload accumulated annually would be somewhere between two million and 20 million pounds.” Most payloads were to run near 10,000 pounds, thereby calling for a schedule of three flights per day. Still the concept lacked an important element, for if scramjets were nowhere near the state of the art, at Convair they were not even the state of the imagination.53 Space Plane, as noted, used ramjets with subsonic combustion, installing them in pods like turbojets on a B-52. Scramjets lay beyond the thoughts of other companies as well. Thus, Northrop expected to use LACE with its Propulsive Fluid Accumulator (PROFAC) concept, which also was to cruise in the atmosphere while building up a supply of liquefied air. Like Space Plane, PROFAC also specified conventional ramjets.54

But Republic Aviation was home to the highly imaginative Kartveli, with Ferri being just a phone call away. Here the scramjet was very much a part of people’s thinking. Like the Convair designers, Kartveli looked ahead to flight to orbit with a single stage. He also expected that this goal was too demanding to achieve in a single jump, and he anticipated that intermediate projects would lay groundwork. He presented his thoughts in August 1960 at a national meeting of the Institute of Aeronautical Sciences.55

The XF-103 had been dead and buried for three years, but Kartveli had crafted the F-105, which topped Mach 2 as early as 1956 and went forward into production. He now expected to continue with a Mach 2.3 fighter-bomber with enough power to lift off vertically as if levitating and to cruise at 75,000 feet. Next on the agenda was a strategic bomber powered by nuclear ramjets, which would use atomic power to heat internal airflow, with no need to burn fuel. It would match the peak speed of the X-7 by cruising at Mach 4.25, or 2,800 mph, and at 85,000 feet.56

Kartveli set Mach 7, or 5,000 mph, as the next goal. He anticipated achieving this speed with another bomber that was to cruise at 120,000 feet. Propulsion was to come from two turbojets and two ramjets, with this concept pressing the limits of subsonic combustion. Then for flight to orbit, his masterpiece was slated for Mach 25. It was to mount four J58 turbojets, modified to burn hydrogen, along with four scramjets. Ferri had convinced him that such engines could accelerate this craft all the way to orbit, with much of the gain in speed taking place while flying at 200,000 feet. A small rocket engine might provide a final boost into space, but Kartveli placed his trust in Ferri’s scramjets, planning to use neither LACE nor ACES.57

These concepts drew attention, and funding, from the Aero Propulsion Laboratory at Wright-Patterson Air Force Base. Its technical director, Weldon Worth, had been closely involved with ramjets since the 1940s. Within a world that the turbojet had taken by storm, he headed a Nonrotating Engine Branch that focused on ramjets and liquid-fuel rockets. Indeed, he regarded the ramjet as holding the greater promise, taking this topic as his own while leaving the rockets to his deputy, Lieutenant Colonel Edward Hall. He launched the first Air Force studies of hypersonic propulsion as early as 1957. In October 1959 he chaired a session on scramjets at the Second USAF Symposium on Advanced Propulsion Concepts.

In the wake of this meeting, he built on the earlier SR-89774 efforts and launched a new series of studies called Aerospaceplane. It did not aim at anything so specific as a real airplane that could fly to orbit. Rather, it supported design studies and conducted basic research in advanced propulsion, seeking to develop a base for the evolution of such craft in the distant future. Marquardt and GASL became heavily involved, as did Convair, Republic, North American, GE, Lockheed, Northrop, and Douglas Aircraft.58

The new effort broadened the scope of the initial studies, while encouraging companies to pursue their concepts to greater depth. Convair, for one, had issued single-volume reports on Space Plane in October 1959, April 1960, and December 1960. In February 1961 it released an 11-volume set of studies, with each of them addressing a specific topic such as Aerodynamic Heating, Propulsion, Air Enrichment Systems, Structural Analysis, and Materials.59

Aerospaceplane proved too hot to keep under wraps, as a steady stream of disclosures presented concept summaries to the professional community and the general public. Aviation Week, hardly shy in these matters, ran a full-page article in October 1960:

USAF PLANS RADICAL SPACE PLANE

Studies costing $20 million sought in next budget, Earth-to-orbit vehicle would need no large booster.60

At the Los Angeles Times, the aerospace editor Marvin Miles published headlined stories of his own. The first appeared in November:

LOCKHEED WORKING ON PLANE ABLE TO GO INTO ORBIT ALONE

Air Force Interested in Project61

Two months later another of his articles ran as a front-page headline:

HUGE BOOSTER NOT NEEDED BY AIR FORCE SPACE PLANE

Proposed Wing Vehicle Would Take Off, Return Like Conventional Craft

It particularly cited Convair’s Space Plane, with a Times artist presenting a view of this craft in flight.62

Participants in the new studies took to the work with enthusiasm matching that of Arthur Thomas at Marquardt. Robert Sanator, a colleague of Kartveli at Republic, recalls the excitement: “This one had everything. There wasn’t a single thing in it that was off-the-shelf. Whatever problem there was in aerospace—propulsion, materials, cooling, aerodynamics—Aerospaceplane had it. It was a lifetime work and it had it all. I naturally jumped right in.”63

Aerospaceplane also drew attention from the Air Force’s Scientific Advisory Board, which set up an ad hoc committee to review its prospects. Its chairman, Alexander Flax, was the Air Force’s chief scientist. Members specializing in propulsion included Ferri, along with Seymour Bogdonoff of Princeton University, a leading experimentalist; Perry Pratt of Pratt & Whitney, who had invented the twin-spool turbojet; NASA’s Alfred Eggers; and the rocket specialist George P. Sutton. There also were hands-on program managers: Robert Widmer of Convair, builder of the Mach 2 B-58 bomber, and Harrison Storms of North American, who had shaped the X-15 and the Mach 3 XB-70 bomber.64

This all-star group came away deeply skeptical of the prospects for Aerospaceplane. Its report, issued in December 1960, addressed a number of points and gave an overall assessment:

The proposed designs for Aerospace Plane…appear to violate no physical principles, but the attractive performance depends on an estimated combination of optimistic assumptions for the performance of components and subsystems. There are practically no experimental data which support these assumptions.

Aerodynamics

In March 1984, with the Copper Canyon studies showing promise, a classified program review was held near San Diego. In the words of George Baum, a close
associate of Robert Williams, “We had to put together all the technology pieces to make it credible to the DARPA management, to get them to come out to a meeting in La Jolla and be willing to sit down for three full days. It wasn’t hard to get people out to the West Coast in March; the problem was to get them off the beach.”

One of the attendees, Robert Whitehead of the Office of Naval Research, gave a talk on CFD. Was the mathematics ready; were computers at hand? Williams recalls that “he explained, in about 15 minutes, the equations of fluid mechanics, in a memorable way. With a few simple slides, he could describe their nature in almost an offhand manner, laying out these equations so the computer could solve them, then showing that the computer technology was also there. We realized that we could compute our way to Mach 25, with high confidence. That was a high point of the presentations.”1

Development of CFD prior to NASP. In addition to vast improvement in computers, there also was similar advance in the performance of codes. (NASA)

Whitehead’s point of departure lay in the fundamental equations of fluid flow: the Navier-Stokes equations, named for the nineteenth-century physicists Claude-Louis-Marie Navier and Sir George Stokes. They form a set of nonlinear partial differential equations that contain 60 partial derivative terms. Their physical content is simple, comprising the basic laws of conservation of mass, momentum, and energy, along with an equation of state. Yet their solutions, when available, cover the entire realm of fluid mechanics.2
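For reference, a compact modern statement of these equations, written here in vector form for a compressible fluid (a summary added for clarity, not a reproduction of Whitehead’s slides), is:

$$
\begin{aligned}
&\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = 0 &&\text{(mass)}\\
&\frac{\partial (\rho\mathbf{u})}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}\otimes\mathbf{u}) = -\nabla p + \nabla\cdot\boldsymbol{\tau} &&\text{(momentum)}\\
&\frac{\partial (\rho E)}{\partial t} + \nabla\cdot\big[(\rho E + p)\,\mathbf{u}\big] = \nabla\cdot(\boldsymbol{\tau}\cdot\mathbf{u}) - \nabla\cdot\mathbf{q} &&\text{(energy)}\\
&p = \rho R T &&\text{(equation of state)}
\end{aligned}
$$

Here $\boldsymbol{\tau}$ is the viscous stress tensor and $\mathbf{q}$ the heat-flux vector; writing out the viscous and heat-conduction terms in three dimensions produces the dozens of partial-derivative terms mentioned above.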

An example of an important development, contemporaneous with Whitehead’s presentation, was a 1985 treatment of flow over a complete X-24C vehicle at Mach 5.95. The authors, Joseph Shang and S. J. Scheer, were at the Air Force’s Wright Aeronautical Laboratories. They used a Cray X-MP supercomputer and gave the following comparison between computed results and experimental data:

                      CD         CL         L/D
Experimental data     0.03676    0.03173    1.158
Numerical results     0.03503    0.02960    1.183
Percent error         4.71       6.71       2.16

(Source: AIAA Paper 85-1509)

Availability of test facilities. Continuous-flow wind tunnels are far below the requirements of realistic simulation of full-size aircraft in flight. Impulse facilities, such as shock tunnels, come close to the requirements but are limited by their very short run times. (NASA)

In that year the state of the art permitted extensive treatments of scramjets. Complete three-dimensional simulations of inlets were available, along with two-dimensional discussions of scramjet flow fields that covered the inlet, combustor, and nozzle. In 1984 Fred Billig noted that simulation of flow through an inlet using complete Navier-Stokes equations typically demanded a grid of 80,000 points and up to 12,000 time steps, with each run demanding four hours on a Control Data Cyber 203 supercomputer. A code adapted for supersonic flow was up to a hundred times faster. This made it useful for rapid surveys of a number of candidate inlets, with full Navier-Stokes treatments being reserved for a few selected choices.4

CFD held particular promise because it had the potential of overcoming the limitations of available facilities. These limits remained in place all through the NASP era. A 1993 review found “adequate” test capability only for classical aerodynamic experiments in a perfect gas, namely helium, which could support such work to Mach 20. Between Mach 13 and 17 there was “limited” ability to conduct tests that exhibited real-gas effects, such as molecular excitation and dissociation. Still, available facilities were too small to capture effects associated with vehicle size, such as determining the location of boundary-layer transition to turbulence.

For scramjet studies, the situation was even worse. There was “limited” ability to test combustors out to Mach 7, but at higher Mach the capabilities were “inadequate.” Shock tunnels supported studies of flows in rarefied air from Mach 16 upward, but the whole of the nation’s capacity for such tests was “inadequate.” Some facilities existed that could study complete engines, either by themselves or in airframe-integrated configurations, but again the whole of this capability was “inadequate.”5

Yet it was an exaggeration in 1984, and remains one to this day, to propose that CFD could remedy these deficiencies by computing one’s way to orbital speeds “with high confidence.” Experience has shown that CFD falls short in two areas: prediction of transition to turbulence, which sharply increases drag due to skin friction, and in the simulation of turbulence itself.

For NASP, it was vital not only to predict transition but to understand the properties of turbulence after it appeared. One could see this by noting that hypersonic propulsion differs substantially from propulsion of supersonic aircraft. In the latter, the art of engine design allows engineers to ensure that there is enough margin of thrust over drag to permit the vehicle to accelerate. A typical concept for a Mach 3 supersonic airliner, for instance, calls for gross thrust from the engines of 123,000 pounds, with ram drag at the inlets of 54,500. The difference, nearly 70,000 pounds of thrust, is available to overcome skin-friction drag during cruise, or to accelerate.

At Mach 6, a representative hypersonic-transport design shows gross thrust of 330,000 pounds and ram drag of 220,000. Again there is plenty of margin for what, after all, is to be a cruise vehicle. But in hypersonic cruise at Mach 12, the numbers typically are 2.1 million pounds for gross thrust—and 1.95 million for ram drag! Here the margin comes to only 150,000 pounds of thrust, which is narrow indeed. It could vanish if skin-friction drag proves to be higher than estimated, perhaps because of a poor forecast of the location of transition. The margin also could vanish if the thrust is low, due to the use of optimistic turbulence models.6
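To put these figures side by side (the percentages are simple arithmetic on the numbers quoted above, added here for emphasis):

$$
\begin{aligned}
\text{Mach 3:}\quad & 123{,}000 - 54{,}500 = 68{,}500\ \text{lb of margin} \approx 56\%\ \text{of gross thrust}\\
\text{Mach 6:}\quad & 330{,}000 - 220{,}000 = 110{,}000\ \text{lb} \approx 33\%\\
\text{Mach 12:}\quad & 2{,}100{,}000 - 1{,}950{,}000 = 150{,}000\ \text{lb} \approx 7\%
\end{aligned}
$$

At Mach 12, then, an error of only a few percent in either the thrust or the drag estimate is enough to consume the entire margin, which is why predictions of transition and turbulence carried so much weight.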

Any high-Mach scramjet-powered craft must not only cruise but accelerate. In turn, the thrust that drives this acceleration appears as a small difference between two large quantities: net thrust, which accounts for losses within the engines, and total drag. Accordingly, valid predictions concerning transition and turbulence are matters of the first importance.

NASP-era analysts fell back on the “eN method,” which gave a greatly simplified summary of the pertinent physics but still gave results that were often viewed as useful. It used the Navier-Stokes equations to solve for the overall flow in the laminar boundary layer, upstream of transition. This method then introduced new and simple equations derived from the original Navier-Stokes. These were linear and traced the growth of a small disturbance as one followed the flow downstream. When it had grown by a factor of 22,000—e10, with N = 10—the analyst accepted that transition to turbulence had occurred.7

Experimentally determined locations of the onset of transition to turbulent flow. The strong scatter of the data points defeats attempts to find a predictive rule. (NASA)
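In symbols, the criterion as it is usually written (a schematic statement, not a transcription of any particular NASP code) is:

$$
N(x) = \int_{x_0}^{x} -\alpha_i(\xi)\, d\xi, \qquad \frac{A}{A_0} = e^{N}, \qquad \text{transition assumed when } N \approx 10,\ \ e^{10} \approx 2.2\times 10^{4}.
$$

Here $A_0$ is the amplitude of a small disturbance where it first begins to grow, $-\alpha_i$ is its local amplification rate from the linearized equations, and the choice $N \approx 10$ is an empirical calibration rather than a law of nature.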

One can obtain a solution in this fashion, but transition results from local roughnesses along a surface, and these can lead to results that vary dramatically. Thus, the repeated re-entries of the space shuttle, during dozens of missions, might have given numerous nearly identical data sets. In fact, transition has occurred at Mach numbers from 6 to 19! A 1990 summary presented data from wind tunnels, ballistic ranges, and tests of re-entry vehicles in free flight. There was a spread of as much as 30 to one in the measured locations of transition, with the free-flight data showing transition positions that typically were five times farther back from a nose or leading edge than positions observed using other methods. At Mach 7, observed locations covered a range of 20 to one.8

One may ask whether transition can be predicted accurately even in principle because it involves minute surface roughnesses whose details are not known a priori and may even change in the course of a re-entry. More broadly, the state of transition was summarized in a 1987 review of problems in NASP hypersonics that was written by three NASA leaders in CFD:

Almost nothing is known about the effects of heat transfer, pressure gradient, three-dimensionality, chemical reactions, shock waves, and other influences on hypersonic transition. This is caused by the difficulty of conducting meaningful hypersonic transition experiments in noisy ground-based facilities and the expense and difficulty of carrying out detailed and carefully controlled experiments in flight where it is quiet. Without an adequate, detailed database, development of effective transition models will be impossible.9

Matters did not improve in subsequent years. In 1990 Mujeeb Malik, a leader in studies of transition, noted “the long-held view that conventional, noisy ground facilities are simply not suitable for simulation of flight transition behavior.” A subsequent critique added that “we easily recognize that there is today no reasonably reliable predictive capability for engineering applications” and commented that “the reader…is left with some feeling of helplessness and discouragement.”10 A contemporary review from the Defense Science Board pulled no punches: “Boundary layer transition…cannot be validated in existing ground test facilities.”11

There was more. If transition could not be predicted, it also was not generally possible to obtain a valid simulation, from first principles, of a flow that was known to be turbulent. The Navier-Stokes equations carried the physics of turbulence at all scales. The problem was that in flows of practical interest, the largest turbulent eddies were up to 100,000 times bigger than the smallest ones of concern. This meant that complete numerical simulations were out of the question.
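A rough count shows why. Taking the stated ratio of largest to smallest eddies at face value (the factor of 100,000 is the text’s figure; the rest is simple arithmetic), a direct simulation must resolve that range of scales in each of three spatial dimensions:

$$
N_{\text{grid}} \sim \left(\frac{L_{\text{largest}}}{\ell_{\text{smallest}}}\right)^{3} \approx \left(10^{5}\right)^{3} = 10^{15}\ \text{points},
$$

with the time step tied to the smallest eddies as well. No computer of the NASP era came remotely close to such a calculation for a full vehicle.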

Late in the nineteenth century the physicist Osborne Reynolds tried to bypass this difficulty by rederiving these equations in averaged form. He considered the flow velocity at any point as comprising two elements: a steady-flow part and a turbulent part that contained all the motion due to the eddies. Using the Navier-Stokes equations, he obtained equations for averaged quantities, with these quantities being based on the turbulent velocities.
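In modern notation (again a summary added for clarity, written for incompressible flow for brevity), Reynolds split each velocity component into a mean and a fluctuating part and averaged the equations:

$$
u_i = \overline{u}_i + u_i', \qquad \overline{u_i'} = 0, \qquad
\frac{\partial \overline{u}_i}{\partial t} + \overline{u}_j \frac{\partial \overline{u}_i}{\partial x_j}
= -\frac{1}{\rho}\frac{\partial \overline{p}}{\partial x_i}
+ \frac{\partial}{\partial x_j}\!\left(\nu \frac{\partial \overline{u}_i}{\partial x_j} - \overline{u_i' u_j'}\right).
$$

The averaged equations look like the originals except for the new correlation terms $\overline{u_i' u_j'}$, the Reynolds stresses, which are additional unknowns; the compressible case that matters for hypersonics is messier still.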

He found, though, that the new equations introduced additional unknowns. Other investigators, pursuing this approach, succeeded in deriving additional equations for these extra unknowns—only to find that these introduced still more unknowns. Reynolds’s averaging procedure thus led to an infinite regress, in which at every stage there were more unknown variables describing the turbulence than there were equations with which to solve for them. This contrasted with the Navier-Stokes equations themselves, which in principle could be solved because the number of these equations and the number of their variables was equal.

This infinite regress demonstrated that it was not sufficient to work from the Navier-Stokes equations alone—something more was needed. This situation arose because the averaging process did not preserve the complete physical content of the Navier-Stokes formulation. Information had been lost in the averaging. The problem of turbulence thus called for additional physics that could replace the lost information, end the regress, and give a set of equations for turbulent flow in which the number of equations again would match the number of unknowns.12

The standard means to address this issue has been a turbulence model. This takes the form of one or more auxiliary equations, either algebraic or partial-differential, which are solved simultaneously with the Navier-Stokes equations in Reynolds-averaged form. In turn, the turbulence model attempts to derive one or more quantities that describe the turbulence and to do so in a way that ends the regress.

Viscosity, a physical property of every liquid and gas, provides a widely used point of departure. It arises at the molecular level, and the physics of its origin is well understood. In a turbulent flow, one may speak of an “eddy viscosity” that arises by analogy, with the turbulent eddies playing the role of molecules. This quantity describes how rapidly an ink drop will mix into a stream—or a parcel of hydrogen into the turbulent flow of a scramjet combustor.13
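The simplest mathematical expression of this analogy is the Boussinesq eddy-viscosity relation, stated here in its common incompressible form as background rather than as a quotation from any NASP document:

$$
-\overline{u_i' u_j'} = \nu_t \left(\frac{\partial \overline{u}_i}{\partial x_j} + \frac{\partial \overline{u}_j}{\partial x_i}\right) - \frac{2}{3}\, k\, \delta_{ij},
\qquad k = \tfrac{1}{2}\,\overline{u_k' u_k'},
$$

where the eddy viscosity $\nu_t$ is a property of the flow rather than of the fluid and must be supplied by a model: algebraic models prescribe it directly, while two-equation models compute it from additional modeled transport equations.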

Like the eN method in studies of transition, eddy viscosity presents a view of turbulence that is useful and can often be made to work, at least in well-studied cases. The widely used Baldwin-Lomax model is of this type, and it uses constants derived from experiment. Antony Jameson of Princeton University, a leading writer of flow codes, described it in 1990 as “the most popular turbulence model in the industry, primarily because it’s easy to program.”14

This approach indeed gives a set of equations that are solvable and avoid the regress, but the analyst pays a price: Eddy viscosity lacks standing as a concept supported by fundamental physics. Peter Bradshaw of Stanford University virtually rejects it out of hand, declaring, “Eddy viscosity does not even deserve to be described as a ‘theory’ of turbulence!” He adds more broadly, “The present state is that even the most sophisticated turbulence models are based on brutal simplification of the N-S equations and hence cannot be relied on to predict a large range of flows with a fixed set of empirical coefficients.”15

Other specialists gave similar comments throughout the NASP era. Thomas Coakley of NASA-Ames wrote in 1983 that “turbulence models that are now used for complex, compressible flows are not well advanced, being essentially the same models that were developed for incompressible attached boundary layers and shear flows. As a consequence, when applied to compressible flows they yield results that vary widely in terms of their agreement with experimental measurements.”16

A detailed critique of existing models, given in 1985 by Budugur Lakshminarayana of Pennsylvania State University, gave pointed comments on algebraic models, which included Baldwin-Lomax. This approach “provides poor predictions” for flows with “memory effects,” in which the physical character of the turbulence does not respond instantly to a change in flow conditions but continues to show the influence of upstream effects. Such a turbulence model “is not suitable for flows with curvature, rotation, and separation. The model is of little value in three-dimensional complex flows and in situations where turbulence transport effects are important.”

“Two-equation models,” which used two partial differential equations to give more detail, had their own faults. In the view of Lakshminarayana, they “fail to capture many of the features associated with complex flows.” This class of models “fails for flows with rotation, curvature, strong swirling flows, three-dimensional flows, shock-induced separation, etc.”17

Rather than work with eddy viscosity, some investigators used “Reynolds stress” models. Reynolds stresses were not true stresses of the kind that contribute to drag. Rather, they were terms that appeared in the Reynolds-averaged Navier-Stokes equations alongside other terms that indeed represented stress. Models of this type offered greater physical realism, but again this came at the price of severe computational difficulty.18

A group at NASA-Langley, headed by Thomas Gatski, offered words of caution in 1990: “…even in the low-speed incompressible regime, it has not been possible to construct a turbulence closure model which can be applied over a wide class of flows…. In general, Reynolds stress closure models have not been very successful in handling the effects of rotation or three-dimensionality even in the incompressible regime; therefore, it is not likely that these effects can be treated successfully in the compressible regime with existing models.”19

Anatol Roshko of Caltech, widely viewed as a dean of aeronautics, has his own view: “History proves that each time you get into a new area, the existing models are found to be inadequate.” Such inadequacies have been seen even in simple flows, such as flow over a flat plate. The resulting skin friction is known to an accuracy of around one percent. Yet values calculated from turbulence models can be in error by up to 10 percent. “You can always take one of these models and fix it so it gives the right answer for a particular case,” says Bradshaw. “Most of us choose the flat plate. So if you can’t get the flat plate right, your case is indeed piteous.”20

Another simple case is flow within a channel that suddenly widens. Downstream of the point of widening, the flow shows a zone of strongly whirling circulation. It narrows until the main flow reattaches, flowing in a single zone all the way to the now wider wall. Can one predict the location of this reattachment point? “This is a very severe test,” says John Lumley of Cornell University. “Most of the simple models have trouble getting reattachment within a factor of two.” So-called “k-epsilon models,” he says, are off by that much. Even so, NASA’s Tom Coakley describes them as “the most popular two-equation model,” whereas Princeton University’s Jameson speaks of them as “probably the best engineering choice around” for such problems as…flow within a channel.21

Turbulence models have a strongly empirical character and therefore often fail to predict the existence of new physics within a flow. This has been seen to cause difficulties even in the elementary case of steady flow past a cylinder at rest, a case so simple that it is presented in undergraduate courses. Nor do turbulence models cope with another feature of some flows: their strong sensitivity to slight changes in conditions. A simple example is the growth of a mixing layer.

In this scenario, two flows that have different velocities proceed along opposite sides of a thin plate, which terminates within a channel. The mixing layer then forms and grows at the interface between these streams. In Roshko’s words, “a one-percent periodic disturbance in the free stream completely changes the mixing layer growth.” This has been seen in experiments and in highly detailed solutions of the Navier-Stokes equations that solve the complete equations using a very fine grid. It has not been seen in solutions of Reynolds-averaged equations that use turbulence models.22

And if simple flows of this type bring such difficulties, what can be said of hypersonics? Even in the free stream that lies at some distance from a vehicle, one finds strong aerodynamic heating along with shock waves and the dissociation, recombination, and chemical reaction of air molecules. Flow along the aircraft surface adds a viscous boundary layer that undergoes shock impingement, while flow within the engine adds the mixing and combustion of fuel.

As William Dannevik of Lawrence Livermore National Laboratory describes it, “There’s a fully nonlinear interaction among several fields: an entropy field, an acoustic field, a vortical field.” By contrast, in low-speed aerodynamics, “you can often reduce it down to one field interacting with itself.” Hypersonic turbulence also brings several channels for the flow and exchange of energy: internal energy, density, and vorticity. The experimental difficulties can be correspondingly severe.23

Roshko sees some similarity between turbulence modeling and the astronomy of Ptolemy, who flourished when the Roman Empire was at its height. Ptolemy represented the motions of the planets using epicycles and deferents in a purely empirical fashion and with no basis in physical theory. “Many of us have used that example,” Roshko declares. “It’s a good analogy. People were able to continually keep on fixing up their epicyclic theory, to keep on accounting for new observations, and they were completely wrong in knowing what was going on. I don’t think we’re that badly off, but it’s illustrative of another thing that bothers some people. Every time some new thing comes around, you’ve got to scurry and try to figure out how you’re going to incorporate it.”24

A 1987 review concluded, “In general, the state of turbulence modeling for supersonic, and by extension, hypersonic, flows involving complex physics is poor.” Five years later, late in the NASP era, little had changed, for a Defense Science Board program review pointed to scramjet development as the single most important issue that lay beyond the state of the art.25

Within NASP, these difficulties meant that there was no prospect of computing one’s way to orbit, or of using CFD to make valid forecasts of high-Mach engine performance. In turn, these deficiencies forced the program to fall back on its test facilities, which had their own limitations.

NACA-Langley and John Becker

During the war the Germans failed to match the Allies in production of airplanes, but they were well ahead in technical design. This was particularly true in the important area of jet propulsion. They fielded an operational jet fighter, the Me-262, and while the Yankees were well along in developing the Lockheed P-80 as a riposte, the war ended before any of those jets could see combat. Nor was the Me-262 a last-minute work of desperation. It was a true air weapon that showed better speed and acceleration than the improved P-80A in flight test, while demonstrating an equal rate of climb.28 Albert Speer, Hitler’s minister of armaments, asserted in his autobiographical Inside the Third Reich (1970) that by emphasizing production of such fighters and by deploying the Wasserfall antiaircraft missile that was in development, the Nazis “would have beaten back the Western Allies’ air offensive against our industry from the spring of 1944 on.”29 The Germans thus might have prolonged the war until the advent of nuclear weapons.

Wartime America never built anything resembling the big Mach 4.4 wind tunnels at Peenemunde, but its researchers at least constructed facilities that could compare with the one at Aachen. The American installations did not achieve speeds to match Aachen’s Mach 3.3, but they had larger test sections. Arthur Kantrowitz, a young physicist from Columbia University who was working at Langley, built a nine-inch tunnel that reached Mach 2.5 when it entered operation in 1942. (Aachen’s had been four inches.) Across the country, at NACA’s Ames Aeronautical Laboratory, two other wind tunnels entered service during 1945. Their test sections measured one by three feet, and their flow speeds reached Mach 2.2.30

The Navy also was active. It provided $4.5 million for the nation’s first really large supersonic tunnel, with a test section six feet square. Built at NACA-Ames, operating at Mach 1.3 to 1.8, this installation used 60,000 horsepower and entered service soon after the war.31 The Navy also set up its Ordnance Aerophysics Laboratory in Daingerfield, Texas, adjacent to the Lone Star Steel Company, which had air compressors that this firm made available. The supersonic tunnel that resulted covered a range of Mach 1.25 to 2.75, with a test section of 19 by 27.5 inches. It became operational in June 1946, alongside a similar installation that served for high-speed engine tests.32

Theorists complemented the wind-tunnel builders. In April 1947 Theodore von Karman, a professor at Caltech who was widely viewed as the dean of American aerodynamicists, gave a review and survey of supersonic flow theory in an address to the Institute of Aeronautical Sciences. His lecture, published three months later in the Journal of the Aeronautical Sciences, emphasized that supersonic flow theory now was mature and ready for general use. Von Karman pointed to a plethora of available methods and solutions that not only gave means to attack a number of important design problems but also gave independent approaches that could permit cross-checks on proposed solutions.

John Stack, a leading Langley aerodynamicist, noted that Prandtl had given a similarly broad overview of subsonic aerodynamics a quarter-century earlier. Stack declared, “Just as Prandtl’s famous paper outlined the direction for the engineer in the development of subsonic aircraft, Dr. von Karman’s lecture outlines the direction for the engineer in the development of supersonic aircraft.”33

Yet the United States had no facility, and certainly no large one, that could reach Mach 4.4. As a stopgap, the nation got what it wanted by seizing German wind tunnels. A Mach 4.4 tunnel was shipped to the Naval Ordnance Laboratory in White Oak, Maryland. Its investigators had fabricated a Mach 5.18 nozzle and had conducted initial tests in January 1945. In 1948, in Maryland, this capability became routine.34 Still, if the U.S. was to advance beyond the Germans and develop the true hypersonic capability that Germany had failed to achieve, the nation would have to rely on independent research.

The man who pursued this research, and who built America’s first hypersonic tunnel, was Langley’s John Becker. He had been at that center since 1936; during the latter part of the war he was assistant chief of Stack’s Compressibility Research Division. He specifically was in charge of Langley’s 16-Foot High-Speed Tunnel, which had fought its war by investigating cooling problems in aircraft motors as well as the design of propellers. This facility contributed particularly to tests of the B-50 bomber and to the aerodynamic shapes of the first atomic bombs. It also assisted development of the Pratt & Whitney R-2800 Double Wasp, a widely used piston engine that powered several important wartime fighter planes, along with the DC-6 airliner and the C-69 transport, the military version of Lockheed’s Constellation.35

It was quite a jump from piston-powered warbirds to hypersonics, but Becker willingly made the leap. The V-2, flying at Mach 5, gave him his justification. In a memo to Langley’s chief of research, dated 3 August 1945, Becker noted that planned facilities were to reach no higher than Mach 3. He declared that this was inadequate: “When it is considered that all of these tunnels will be used, to a large extent, to develop supersonic missiles and projectiles of types which have already been operated at Mach numbers as high as 5.0, it appears that there is a definite need for equipment capable of higher test Mach numbers.”

Within this memo, he outlined a design concept for “a supersonic tunnel having a test section four-foot square and a maximum test Mach number of 7.0.” It was to achieve continuous flow, being operated by a commercially-available compressor of 2,400 horsepower. To start the flow, the facility was to hold air within a tank that was compressed to seven atmospheres. This air was to pass through the wind tunnel before exhausting into a vacuum tank. With pressure upstream pushing the flow and with the evacuated tank pulling it, airspeeds within the test section would be high indeed. Once the flow was started, the compressor would maintain it.

A preliminary estimate indicated that this facility would cost $350,000. This was no mean sum, and Becker’s memo proposed to lay groundwork by first building a model of the big tunnel, with a test section only one foot square. He recommended that this subscale facility should “be constructed and tested before proceeding with a four-foot-square tunnel.” He gave an itemized cost estimate that came to $39,550, including $10,000 for installation and $6,000 for contingency.

Becker’s memo ended in formal fashion: “Approval is requested to proceed with the design and construction of a model supersonic tunnel having a one-foot-square test section at Mach number 7.0. If successful, this model tunnel would not only provide data for the design of economical high Mach number supersonic wind tunnels, but would itself be a very useful research tool.”36

On 6 August, three days after Becker wrote this memo, the potential usefulness of this tool increased enormously. On that day, an atomic bomb destroyed Hiroshima. With this, it now took only modest imagination to envision nuclear-tipped V-2s as weapons of the future. The standard V-2 had carried only a one-ton conventional warhead and lacked both range and accuracy. It nevertheless had been technically impressive, particularly since there was no way to shoot it down. But an advanced version with an atomic warhead would be far more formidable.

John Stack strongly supported Becker’s proposal, which soon reached the desk of George Lewis, NACA’s Director of Aeronautical Research. Lewis worked at NACA’s Washington Headquarters but made frequent visits to Langley. Stack discussed the proposal with Lewis in the course of such a visit, and Lewis said, “Let’s do it.”

Just then, though, there was little money for new projects. NACA faced a postwar budget cut, which took its total appropriation from $40.9 million in FY 1945 to $24 million in FY 1946. Lewis therefore said to Stack, “John, you know I’m a sucker for a new idea, but don’t call it a wind tunnel because I’ll be in trouble with having to raise money in a formal way. That will necessitate Congressional review and approval. Call it a research project.” Lewis designated it as Project 506 and obtained approval from NACA’s Washington office on 18 December.37

A month later, in January 1946, Becker raised new issues in a memo to Stack. He was quite concerned that the high Mach would lead to so low a temperature that air in the flow would liquefy. To prevent this, he called for heating the air, declaring that “a temperature of 600°F in the pressure tank is essential.” He expected to achieve this by using “a small electrical heater.”

The pressure in that tank was to be considerably higher than in his plans of August. The tank would hold a pressure of 100 atmospheres. Instead of merely starting the flow, with a powered compressor sustaining it in continuous operation, this pressure tank now was to hold enough air for operating times of 40 seconds. This would resolve uncertainties in the technical requirements for continuous operation. Continuous flows were still on the agenda but not for the immediate future. Instead, this wind tunnel was to operate as a blowdown facility.

Here, in outline, was a description of the installation as finally built. Its test section was 11 inches square. Its pressure tank held 50 atmospheres. It never received a compressor system for continuous flow, operating throughout its life entirely as a blowdown wind tunnel. But by heating its air, it indeed operated routinely at speeds close to Mach 7.38

Taking the name of 11-Inch Hypersonic Tunnel, it operated successfully for the first time on 26 November 1947. It did not heat its compressed air directly within the pressure tank, relying instead on an electric resistance heater as a separate component. This heater raised the air to temperatures as high as 900°F, eliminating air liquefaction in the test section with enough margin for Mach 8. Specialized experiments showed clearly that condensation took place when the initial temperature was not high enough to prevent it. Small particles promoted condensation by serving as nuclei for the formation of droplets. Becker suggested that such particles could have formed through the freezing of CO2, which is naturally present in air. Subsequent research confirmed this conjecture.39


The facility showed early problems as well as a long-term problem. The early difficulties centered on the air heater, which showed poor internal heat conduction, requiring as much as five hours to reach a suitably uniform temperature distribution. In addition, copper tubes within the heater produced minute particles of copper oxide, due to oxidation of this metal at high temperature. These particles, blown within the hypersonic airstream, damaged test models and instruments. Becker attacked the problem of slow warmup by circulating hot air through the heater. To eliminate the problem of oxidation, he filled the heater with nitrogen while it was warming up.40

A more recalcitrant difficulty arose because the hot airflow, entering the nozzle, heated it and caused it to undergo thermal expansion. The change in its dimensions was not large, but the nozzle design was highly sensitive to small changes, with this expansion causing the dynamic pressure in the airflow to vary by up to 13 percent in the course of a run. Run times were as long as 90 seconds, and because of this, data taken at the beginning of a test did not agree with similar data recorded a minute later. Becker addressed this by fixing the angle of attack of each test model. He did not permit the angle to vary during a run, even though variation of this angle would have yielded more data. He also made measurements at a fixed time during each run.41

The wind tunnel itself represented an important object for research. No similar facility had ever been built in America, and it was necessary to learn how to use it most effectively. Nozzle design represented an early topic for experimental study. At Mach 7, according to standard tables, the nozzle had to expand by a ratio of 104.1 to 1. This nozzle resembled that of a rocket engine. With an axisymmetric design, a throat of one-inch diameter would have opened into a channel having a diameter slightly greater than 10 inches. However, nozzles for Becker’s facility proved difficult to develop.
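The figure of 104.1 follows from the standard isentropic area-ratio relation for air (γ = 1.4), and the diameter quoted above is simply the square root of that ratio; the check, added here for the reader, runs:

$$
\frac{A_e}{A^*} = \frac{1}{M}\left[\frac{2}{\gamma+1}\left(1 + \frac{\gamma-1}{2}M^2\right)\right]^{\frac{\gamma+1}{2(\gamma-1)}}
= \frac{1}{7}\left(\frac{10.8}{1.2}\right)^{3} = \frac{9^3}{7} \approx 104.1,
\qquad
d_e = d^{*}\sqrt{104.1} \approx 10.2\ \text{inches for a one-inch throat.}
$$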

Conventional practice, carried over from supersonic wind tunnels, called for a two-dimensional nozzle. It featured a throat in the form of a narrow slit, having the full width of the main channel and opening onto that channel. However, for flow at Mach 7, this slit was to be only about 0.1 inch high. Hence, there was considerable interest in nozzles that might be less sensitive to small errors in fabrication.42

Initial work focused on a two-step nozzle. The first step was flat and constant in height, allowing the flow to expand to 10 inches wide in the horizontal plane and to reach Mach 4.36. The second step maintained this width while allowing the flow to expand to 10.5 inches in height, thus achieving Mach 7. But this nozzle performed poorly, with investigators describing its flow as “entirely unsatisfactory for use in a wind tunnel.” The Mach number reached 6.5, but the flow in the test section was “not sufficiently uniform for quantitative wind-tunnel test purposes.” This was due to “a thick boundary layer which developed in the first step” along the flat parallel walls set closely together at the top and bottom.43

A two-dimensional, single-step nozzle gave much better results. Its narrow slit-like throat indeed proved sensitive; this was the nozzle that gave the variation with time of the dynamic pressure. Still, except for this thermal-expansion effect, this nozzle proved “far superior in all respects” when compared with the two-step nozzle. In turn, the thermal expansion in time proved amenable to correction. This expansion occurred because the nozzle was made of steel. The commercially available alloy Invar had a far lower coefficient of thermal expansion. A new nozzle, fabricated from this material, entered service in 1954 and greatly reduced problems due to expansion of the nozzle throat.44

Another topic of research addressed the usefulness of the optical techniques used for flow visualization. The test gas, after all, was simply air. Even when it formed shock waves near a model under test, the shocks could not be seen with the unaided eye. Therefore, investigators were accustomed to using optical instruments when studying a flow. Three methods were in use: interferometry, schlieren, and shadowgraph. These respectively observed changes in air density, density gradient, and the rate of change of the gradient.

Such instruments had been in use for decades. Ernst Mach, of the eponymous Mach number, had used a shadowgraph as early as 1887 to photograph shock waves produced by a speeding bullet. Theodor Meyer, a student of Prandtl, used schlieren to visualize supersonic flow in a nozzle in 1908. Interferometry gave the most detailed photos and the most information, but an interferometer was costly and difficult to operate. Shadowgraphs gave the least information but were the least costly and easiest to use. Schlieren apparatus was intermediate in both respects and was employed often.45

Still, all these techniques depended on the flow having a minimum density. One could not visualize shock waves in a vacuum because they did not exist. Highly rarefied flows gave similar difficulties, and hypersonic flows indeed were rarefied. At Mach 7, a flow of air fell in pressure to less than one part in 4000 of its initial value, reducing an initial pressure of 40 atmospheres to less than one-hundredth of an atmosphere.46 Higher test-section pressures would have required correspondingly higher pressures in the tank and upstream of the nozzle. But low test-section pressures were desirable because they were physically realistic. They corresponded to conditions in the upper atmosphere, where hypersonic missiles were to fly.
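The figure of one part in 4000 is the isentropic stagnation-to-static pressure ratio for air at Mach 7 (γ = 1.4), reproduced here as a check:

$$
\frac{p_0}{p} = \left(1 + \frac{\gamma-1}{2}M^2\right)^{\frac{\gamma}{\gamma-1}} = (10.8)^{3.5} \approx 4.1\times 10^{3},
$$

so a 40-atmosphere supply upstream of the nozzle drops to roughly 0.01 atmosphere, about 7 mm of mercury, in the test section.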

Becker reported in 1950 that the limit of usefulness of the schlieren method “is reached at a pressure of about 1 mm of mercury for slender test models at M = 7.0.”47 This corresponded to the pressure in the atmosphere at 150,000 feet, and there was interest in reaching the equivalent of higher altitudes still. A consultant, Joseph Kaplan, recommended using nitrogen as a test gas and making use of an afterglow that persists momentarily within this gas when it has been excited by an electrical discharge. With the nitrogen literally glowing in the dark, it became much easier to see shock waves and other features of the flow field at very low pressures.

“The nitrogen afterglow appears to be usable at static pressures as low as 100 microns and perhaps lower,” Becker wrote.48 This corresponded to pressures of barely a ten-thousandth of an atmosphere, which exist near 230,000 feet. It also corresponded to the pressure in the test section of a blowdown wind tunnel with air in the tank at 50 atmospheres and the flow at Mach 13.8.49 Clearly, flow visualization would not be a problem.

Condensation, nozzle design, and flow visualization were important topics in their own right. Nor were they merely preliminaries. They addressed an important reason for building this tunnel: to learn how to design and use subsequent hypersonic facilities. In addition, although this 11-inch tunnel was small, there was much interest in using it for studies in hypersonic aerodynamics.

This early work had a somewhat elementary character, like the hypersonic experiments of Erdmann at Peenemunde. When university students take initial courses in aerodynamics, their textbooks and lab exercises deal with simple cases such as flow over a flat plate. The same was true of the first aerodynamic experiments with the 11-inch tunnel. The literature held a variety of theories for calculating lift, drag, and pressure distributions at hypersonic speeds. The experiments produced data that permitted comparison with theory—to check their accuracy and to determine circumstances under which they would fail to hold.

One set of tests dealt with cone-cylinder configurations at Mach 6.86. These amounted to small and simplified representations of a missile and its nose cone. The test models included cones, cylinders with flat ends, and cones with cylindrical afterbodies, studied at various angles of attack. For flow over a cone, the British researchers Geoffrey I. Taylor and J. W. Maccoll published a treatment in 1933. This quantitative discussion was a cornerstone of supersonic theory and showed its merits anew at this high Mach number. An investigation showed that it held “with a high degree of accuracy.”

The method of characteristics, devised by Prandtl and Busemann in 1929, was a standard analytical method for designing surfaces for supersonic flow, including wings and nozzles. It was simple enough to lend itself to hand computation, and it gave useful results at lower supersonic speeds. Tests in the 11-inch facility showed that it continued to give good accuracy in hypersonic flow. For flow with angle of attack, a theory put forth by Antonio Ferri, a leading Italian aerodynamicist, produced “very good results.” Still, not all preexisting theories proved to be accurate. One treatment gave good results for drag but overestimated some pressures and values of lift.50

Boundary-layer effects proved to be important, particularly in dealing with hypersonic wings. Tests examined a triangular delta wing and a square wing, the latter having several airfoil sections. Existing theories gave good results for lift and drag at modest angles of attack. However, predicted pressure distributions were often in error. This resulted from flow separation at high angles of attack—and from the presence of thick laminar boundary layers, even at zero angle of attack. These findings held high significance, for the very purpose of a hypersonic wing was to generate a pressure distribution that would produce lift, without making the vehicle unstable and prone to go out of control while in flight.

The aerodynamicist Charles McLellan, who had worked with Becker in designing the 11-inch tunnel and who had become its director, summarized the work within the Journal of the Aeronautical Sciences. He concluded that near Mach 7, the aerodynamic characteristics of wings and bodies “can be predicted by available theoretical methods with the same order of accuracy usually obtainable at lower speeds, at least for cases in which the boundary layer is laminar.”51

At hypersonic speeds, boundary layers become thick because they sustain large temperature changes between the wall and the free stream. Mitchel Bertram, a colleague of McLellan, gave an approximate theory for the laminar hypersonic boundary layer on a flat plate. Using the 11-inch tunnel, he showed good agreement between his theory and experiment in several significant cases. He noted that boundary-layer effects could increase drag coefficients at least threefold, when compared with values using theories that include only free-stream flow and ignore the boundary layer. This emphasized anew the importance of the boundary layer in producing hypersonic skin friction.52

These results were fundamental, both for aerodynamics and for wind-tunnel design. With them, the 11-inch tunnel entered into a brilliant career. It had been built as a pilot facility, to lay groundwork for a much larger hypersonic tunnel that could sustain continuous flows. This installation, the Continuous Flow Hypersonic Tunnel (CFHT), indeed was built. Entering service in 1962, it had a 31-inch test section and produced flows at Mach 10.53

Still, it took a long time for this big tunnel to come on line, and all through the 1950s the 11-inch facility continued to grow in importance. At its peak, in 1961, it conducted more than 2,500 test runs, for an average of 10 per working day. It remained in use until 1972.54 It set the pace with its use of the blowdown principle, which eliminated the need for costly continuous-flow compressors. Its run times proved to be adequate, and the CFHT found itself hard-pressed to offer much that was new. It had been built for continuous operation but found itself used in a blowdown mode most of the time. Becker wrote that his 11-inch installation “far exceeded” the CFHT “in both the importance and quality of its research output.” He described it as “the only ‘pilot tunnel’ in NACA history to become a major research facility in its own right.”55

Yet while the work of this wind tunnel was fundamental to the development of hypersonics, in 1950 the field of hypersonics was not fundamental to anything in particular. Plenty of people expected that America in time would build missiles and aircraft for flight at such speeds, but in that year no one was doing so. This soon changed, and the key year was 1954. In that year the Air Force embraced the X-15, a hypersonic airplane for which studies in the 11-inch tunnel proved to be essential. Also in that year, advances in the apparently unrelated field of nuclear weaponry brought swift and emphatic approval for the development of the ICBM. With this, hypersonics vaulted to the forefront of national priority.

On LACE and ACES

We consider the estimated LACE-ACES performance very optimistic. In several cases complete failure of the project would result from any significant performance degradation from the present estimates…. Obviously the advantages claimed for the system will not be available unless air can be condensed and purified very rapidly during flight. The figures reported indicate that about 0.8 ton of air per second would have to be processed.

In conventional, i.e., ordinary commercial equipment, this would require a distillation column having a cross section on the order of 500 square feet…. It is proposed to increase the capacity of equipment of otherwise conventional design by using centrifugal force. This may be possible, but as far as the Committee knows this has never been accomplished.

On other propulsion systems:

When reduced to a common basis and compared with the best of current technology, all assumed large advances in the state-of-the-art…. On the basis of the best of current technology, none of the schemes could deliver useful payloads into orbits.

On vehicle design:

We are gravely concerned that too much emphasis may be placed on the more glamorous aspects of the Aerospace Plane resulting in neglect of what appear to be more conventional problems. The achievement of low structural weight is equally important… as is the development of a highly successful propulsion system.

Regarding scramjets, the panel was not impressed with claims that supersonic combustion had been achieved in existing experiments:

These engine ideas are based essentially upon the feasibility of diffusion deflagration flames in supersonic flows. Research should be immediately initiated using existing facilities… to substantiate the feasibility of this type of combustion.

The panelists nevertheless gave thumbs-up to the Aerospaceplane effort as a continuing program of research. Their report urged a broadening of topics, placing greater emphasis on scramjets, structures and materials, and two-stage-to-orbit configurations. The array of proposed engines was “all sufficiently interesting so that research on all of them should be continued and emphasized.”65

As the studies went forward in the wake of this review, new propulsion concepts continued to flourish. Lockheed was in the forefront. This firm had initiated company-funded work during the spring of 1959 and had a well-considered single-stage concept two years later. An artist’s rendering showed nine separate rocket nozzles at its tail. The vehicle also mounted four ramjets, set in pods beneath the wings.

Convair’s Space Plane had used separated nitrogen as a propellant, heating it in the LACE precooler and allowing it to expand through a nozzle to produce thrust. Lockheed’s Aerospace Plane turned this nitrogen into an important system element, with specialized nitrogen rockets delivering 125,000 pounds of thrust. These rockets certainly did not overcome the drag produced by air collection, for that would have turned the vehicle into a perpetual motion machine, but they made a valuable contribution.66


Lockheed’s Aerospaceplane concept. The alternate hypersonic in-flight refueling system approach called for propellant transfer at Mach 6. (Art by Dennis Jenkins)


Republic’s Aerospaceplane concept showed extensive engine-airframe integration. (Republic Aviation)

For takeoff, Lockheed expected to use Turbo-LACE. This was a LACE variant that sought again to reduce the inherently hydrogen-rich operation of the basic system. Rather than cool the air until it was liquid, Turbo-LACE chilled it deeply but allowed it to remain gaseous. Being very dense, it could pass through a turbocompressor and reach pressures in the hundreds of psi. This saved hydrogen because less was needed to accomplish this cooling. The Turbo-LACE engines were to operate at chamber pressures of 200 to 250 psi, well below the internal pressure of standard rockets but high enough to produce 300,000 pounds of thrust by using turbocompressed oxygen.67

Republic Aviation continued to emphasize the scramjet. A new configuration broke with the practice of mounting these engines within pods, as if they were turbojets. Instead, this design introduced the important topic of engine-airframe integration by setting forth a concept that amounted to a single enormous scramjet fitted with wings and a tail. A conical forward fuselage served as an inlet spike. The inlets themselves formed a ring encircling much of the vehicle. Fuel tankage filled most of its capacious internal volume.

This design study took two views regarding the potential performance of its engines. One concept avoided the use of LACE or ACES, assuming again that this craft could scram all the way to orbit. Still, it needed engines for takeoff so turbo­ramjets were installed, with both Pratt & Whitney and General Electric providing candidate concepts. Republic thus was optimistic at high Mach but conservative at low speed.

The other design introduced LACE and ACES both for takeoff and for final ascent to orbit and made use of yet another approach to derichening the hydrogen. This was SuperLACE, a concept from Marquardt that placed slush hydrogen rather than standard liquid hydrogen in the main tank. The slush consisted of liquid that contained a considerable amount of solidified hydrogen. It therefore stood at the freezing point of hydrogen, 14 K, which was markedly lower than the 21 K of liquid hydrogen at the boiling point.68

SuperLACE reduced its use of hydrogen by shunting part of the flow, warmed in the LACE heat exchanger, into the tank. There it mixed with the slush, chilling again to liquid while melting some of the hydrogen ice. Careful control of this flow ensured that while the slush in the tank gradually turned to liquid and rose toward the 21 K boiling point, it did not get there until the air-collection phase of a flight was finished. As an added bonus, the slush was noticeably denser than the liquid, enabling the tank to hold more fuel.69

LACE and ACES remained in the forefront, but there also was much interest in conventional rocket engines. Within the Aerospaceplane effort, this approach took the name POBATO, Propellants On Board At Takeoff. These rocket-powered vehicles gave points of comparison for the more exotic types that used LACE and scramjets, but here too people used their imaginations. Some POBATO vehicles ascended vertically in a classic liftoff, but others rode rocket sleds along a track while angling sharply upward within a cradle.70

In Denver, the Martin Company took rocket-powered craft as its own, for this firm expected that a next-generation launch vehicle of this type could be ready far sooner than one based on advanced airbreathing engines. Its concepts used vertical liftoff, while giving an opening for the ejector rocket. Martin introduced a concept of its own, RENE (Rocket Engine Nozzle Ejector), and conducted experiments at the Arnold Engineering Development Center. These tests went forward during 1961, using a liquid rocket engine with a nozzle of 5-inch diameter set within a shroud of 17-inch width. Test conditions corresponded to flight at Mach 2 and 40,000 feet, with the shrouds or surrounding ducts having various lengths to achieve increasingly thorough mixing. The longest duct gave the best performance, increasing the rated 2,000-pound thrust of the rocket to as much as 3,100 pounds.71
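
The thrust augmentation implied by these figures can be checked directly. The short sketch below, in Python, does only that arithmetic and introduces no data beyond the two thrust values quoted above.

```python
# Thrust augmentation implied by the 1961 RENE tests; both numbers are quoted
# in the text above.
rocket_thrust_lb = 2_000      # rated thrust of the bare rocket engine
augmented_thrust_lb = 3_100   # best result, with the longest mixing duct

ratio = augmented_thrust_lb / rocket_thrust_lb
print(f"Augmentation ratio: {ratio:.2f}x, a {100 * (ratio - 1):.0f}% gain from ejector action")
```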

A complementary effort at Marquardt sought to demonstrate the feasibility of LACE. The work started with tests of heat exchangers built by Garrett AiResearch that used liquid hydrogen as the working fluid. A company-made film showed dark liquid air coming down in a torrent, as seen through a porthole. Further tests used this liquefied air in a small thrust chamber. The arrangement made no attempt to derichen the hydrogen flow; even though it ran very fuel-rich, it delivered up to 275 pounds of thrust. As a final touch, Marquardt crafted a thrust chamber of 18-inch diameter and simulated LACE operation by feeding it with liquid air and gaseous hydrogen from tanks. It showed stable combustion, delivering thrust as high as 5,700 pounds.72

Within the Air Force, the SAB’s Ad Hoc Committee on Aerospaceplane contin­ued to provide guidance along with encouraging words. A review of July 1962 was less skeptical in tone than the one of 18 months earlier, citing “several attractive arguments for a continuation of this program at a significant level of funding”:

It will have the military advantages that accrue from rapid response times and considerable versatility in choice of landing area. It will have many of the advantages that have been demonstrated in the X-15 program, namely, a real pay-off in rapidly developing reliability and operational pace that comes from continuous re-use of the same hardware again and again. It may turn out in the long run to have a cost effectiveness attractiveness… the cost per pound may eventually be brought to low levels. Finally, the Aerospaceplane program will develop the capability for flights in the atmosphere at hypersonic speeds, a capability that may be of future use to the Defense Department and possibly to the airlines.73

Single-stage-to-orbit (SSTO) was on the agenda, a topic that merits separate comment. The space shuttle is a stage-and-a-half system; it uses solid boosters plus a main stage, with all engines burning at liftoff. It is a measure of progress, or its lack, in astronautics that the Soviet R-7 rocket that launched the first Sputniks was also stage-and-a-half.74 The concept of SSTO has tantalized designers for decades, with these specialists being highly ingenious and ready to show a can-do spirit in the face of challenges.

This approach certainly is elegant. It also avoids the need to launch two rockets to do the work of one, and if the Earth’s gravity field resembled that of Mars, SSTO would be the obvious way to proceed. Unfortunately, the Earth’s field is consider­ably stronger. No SSTO has ever reached orbit, either under rocket power or by using scramjets or other airbreathers. The technical requirements have been too severe.

The SAB panel members attended three days of contractor briefings and reached a firm conclusion: “It was quite evident to the Committee from the presentation of nearly all the contractors that a single stage to orbit Aerospaceplane remains a highly speculative effort.” Reaffirming a recommendation from its 1960 review, the group urged new emphasis on two-stage designs. It recommended attention to “development of hydrogen fueled turbo ramjet power plants capable of accelerating the first stage to Mach 6.0 to 10.0…. Research directed toward the second stage which will ultimately achieve orbit should be concentrated in the fields of high pressure hydrogen rockets and supersonic burning ramjets and air collection and enrichment systems.”

Convair, home of Space Plane, had offered single-stage configurations as early as 1960. By 1962 its managers concluded that technical requirements placed such a vehicle out of reach for at least the next 20 years. The effort shifted toward a two-stage concept that took form as the 1964 Point Design Vehicle. With a gross takeoff weight of 700,000 pounds, the baseline approach used turboramjets to reach Mach 5. It cruised at that speed while using ACES to collect liquid oxygen, then accelerated anew using ramjets and rockets. Stage separation occurred at Mach 8.6 and 176,000 feet, with the second stage reaching orbit on rocket power. The payload was 23,000 pounds with turboramjets in the first stage, increasing to 35,000 pounds with the more speculative SuperLACE.

The documentation of this 1964 Point Design, filling 16 volumes, was issued during 1963. An important advantage of the two-stage approach proved to lie in the opportunity to optimize the design of each stage for its task. The first stage was a Mach 8 aircraft that did not have to fly to orbit and that carried its heavy wings, structure, and ACES equipment only to staging velocity. The second-stage design showed strong emphasis on re-entry; it had a blunted shape along with only modest requirements for aerodynamic performance. Even so, this Point Design pushed the state of the art in materials. The first stage specified superalloys for the hot underside along with titanium for the upper surface. The second stage called for coated refrac­tory metals on its underside, with superalloys and titanium on its upper surfaces.76

Although more attainable than its single-stage predecessors, the Point Design still relied on untested technologies such as ACES, while anticipating use in aircraft structures of exotic metals that had been studied merely as turbine blades, if indeed they had gone beyond the status of laboratory samples. The opportunity nevertheless existed for still greater conservatism in an airbreathing design, and the man who pursued it was Ernst Steinhoff. He had been present at the creation, having worked with Wernher von Braun on Germany’s wartime V-2, where he headed up the development of that missile’s guidance. After 1960 he was at the Rand Corporation, where he examined Aerospaceplane concepts and became convinced that single-stage versions would never be built. He turned to two-stage configurations and came up with an outline of a new one: ROLS, the Recoverable Orbital Launch System. During 1963 he took the post of chief scientist at Holloman Air Force Base and proceeded to direct a formal set of studies.77

The name of ROLS had been seen as early as 1959, in one of the studies that had grown out of SR-89774, but this concept was new. Steinhoff considered that the staging velocity could be as low as Mach 3. At once this raised the prospect that the first stage might take shape as a modest technical extension of the XB-70, a large bomber designed for flight at that speed, which at the time was being readied for flight test. ROLS was to carry a second stage, dropping it from the belly like a bomb, with that stage flying on to orbit. An ACES installation would provide the liquid oxidizer prior to separation, but to reach from Mach 3 to orbital speed, the second stage had to be simple indeed. Steinhoff envisioned a long vehicle resembling a tor­pedo, powered by hydrogen-burning rockets but lacking wings and thermal protec­tion. It was not reusable and would not reenter, but it would be piloted. A project report stated, “Crew recovery is accomplished by means of a reentry capsule of the Gemini-Apollo class. The capsule forms the nose section of the vehicle and serves as the crew compartment for the entire vehicle.”78

ROLS appears in retrospect as a mirror image of NASA’s eventual space shuttle, which adopted a technically simple booster—a pair of large solid-propellant rockets—while packaging the main engines and most other costly systems within a fully recoverable orbiter. By contrast, ROLS used a simple second stage and a highly intricate first stage, in the form of a large delta-wing airplane that mounted eight turbojet engines. Its length of 335 feet was more than twice that of a B-52. Weighing 825,000 pounds at takeoff, ROLS was to deliver a payload of 30,000 pounds to orbit.79

Such two-stage concepts continued to emphasize ACES, while still offering a role for LACE. Experimental test and development of these concepts therefore remained on the agenda, with Marquardt pursuing further work on LACE. The earlier tests, during 1960 and 1961, had featured an off-the-shelf thrust chamber that had seen use in previous projects. The new work involved a small LACE engine, the MA117, that was designed from the start as an integrated system.

LACE had a strong suit in its potential for a very high specific impulse, Isp. This is the ratio of thrust to propellant flow rate and has dimensions of seconds. It is a key measure of performance, is equivalent to exhaust velocity, and expresses the engine’s fuel economy. Pratt & Whitney’s RL10, for instance, burned hydrogen and oxygen to give thrust of 15,000 pounds with an Isp of 433 seconds.80 LACE was an airbreather, and its Isp could be enormously higher because it took its oxidizer from the atmosphere rather than carrying it in an onboard tank. The term “propellant flow rate” referred to tanked propellants, not to oxidizer taken from the air. For LACE this meant fuel only.

The basic LACE concept produced a very fuel-rich exhaust, but approaches such as RENE and SuperLACE promised to reduce the hydrogen flow substantially. Indeed, such concepts raised the prospect that a LACE system might use an optimized mixture ratio of hydrogen and oxidizer, with this ratio being selected to give the highest Isp. The MA117 achieved this performance artificially by using a large flow of liquid hydrogen to liquefy air and a much smaller flow for the thrust chamber. Hot-fire tests took place during December 1962, and a company report stated that “the system produced 83% of the idealized theoretical air flow and 81% of the idealized thrust. These deviations are compatible with the simplifications of the idealized analysis.”81

The best performance run delivered 0.783 pounds per second of liquid air, which burned a flow of 0.0196 pounds per second of hydrogen. Thrust was 73 pounds; Isp reached 3,717 seconds, more than eight times that of the RL10. Tests of the MA117 continued during 1963, with the best measured values of Isp topping 4,500 seconds.82
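
These figures follow directly from the definition of specific impulse as thrust divided by tanked propellant flow, with only the hydrogen counted for LACE. The sketch below, in Python, reproduces that arithmetic using the numbers quoted above.

```python
# Specific impulse as thrust divided by tanked-propellant flow. For LACE only
# the hydrogen counts, since the oxidizer is taken from the air.
def specific_impulse(thrust_lbf: float, tanked_flow_lb_per_s: float) -> float:
    return thrust_lbf / tanked_flow_lb_per_s  # seconds

isp_ma117 = specific_impulse(73.0, 0.0196)   # best MA117 run, figures above
isp_rl10 = 433.0                             # quoted for the RL10

print(f"MA117 Isp ~ {isp_ma117:.0f} s")                # ~3,700 s, close to the quoted 3,717 s
print(f"Ratio to RL10 ~ {isp_ma117 / isp_rl10:.1f}x")  # more than eight times
```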

In a separate effort, the Marquardt manager Richard Knox directed the preliminary design of a much larger LACE unit, the MA116, with a planned thrust of 10,000 pounds. On paper, it achieved substantial derichening by liquefying only one-fifth of the airflow and using this liquid air in precooling, while deeply cooling the rest of the airflow without liquefaction. A turbocompressor then was to pump this chilled air into the thrust chamber. A flow of less than four pounds per second of liquid hydrogen was to serve both as fuel and as primary coolant, with the anticipated Isp exceeding 3,000 seconds.83

New work on RENE also flourished. The Air Force had a cooperative agreement with NASA’s Marshall Space Flight Center, where Fritz Pauli had developed a subscale rocket engine that burned kerosene with liquid oxygen for a thrust of 450 pounds. Twelve of these small units, mounted to form a ring, gave a basis for this new effort. The earlier work had placed the rocket motor squarely along the centerline of the duct. In the new design, the rocket units surrounded the duct, leaving it unobstructed and potentially capable of use as an ejector ramjet. The cluster was tested successfully at Marshall in September 1963 and then went to the Air Force’s AEDC. As in the RENE tests of 1961, the new configuration gave a thrust increase of as much as 52 percent.84

While work on LACE and ejector rockets went forward, ACES stood as a par­ticularly critical action item. Operable ACES systems were essential for the practical success of LACE. Moreover, ACES had importance distinctly its own, for it could provide oxidizer to conventional hydrogen-burning rocket engines, such as those of ROLS. As noted earlier, there were two techniques for air separation: by chemi­cal methods and through use of a rotating fractional distillation apparatus. Both approaches went forward, each with its own contractor.

In Cambridge, Massachusetts, the small firm of Dynatech took up the challenge of chemical separation, launching its effort in May 1961. Several chemical reac­tions appeared plausible as candidates, with barium and cobalt offering particular promise:

2BaO2 ⇌ 2BaO + O2

2Co3O4 ⇌ 6CoO + O2

The double arrows indicate reversibility. The oxidation reactions were exother­mic, occurring at approximately 1,600°F for barium and 1,800°F for cobalt. The reduction reactions, which released the oxygen, were endothermic, allowing the oxides to cool as they yielded this gas.

Dynatech’s separator unit consisted of a long rotating drum with its interior divided into four zones using fixed partitions. A pebble bed of oxide-coated particles lined the drum interior; containment screens held the particles in place while allowing the drum to rotate past the partitions with minimal leakage. The zones exposed the oxide alternately to high-pressure ram air for oxidation and to low pressure for reduction. The separation was to take place in flight, at speeds of Mach 4 to Mach 5, but an inlet could slow the internal airflow to as little as 50 feet per second, increasing the residence time of air within a unit. The company proposed that an array of such separators weighing just under 10 tons could handle 2,000 pounds per second of airflow while producing liquid oxygen of 65 percent purity.85

Ten tons of equipment certainly counts within a launch vehicle, even though it included the weight of the oxygen liquefaction apparatus. Still it was vastly lighter than the alternative: the rotating distillation system. The Linde Division of Union Carbide pursued this approach. Its design called for a cylindrical tank containing the distillation apparatus, measuring nine feet long by nine feet in diameter and rotating at 570 revolutions per minute. With a weight of 9,000 pounds, it was to process 100 pounds per second of liquefied air—which made it 10 times as heavy as the Dynatech system, per pound of product. The Linde concept promised liquid oxygen of 90 percent purity, substantially better than the chemical system could offer, but the cited 9,000-pound weight left out additional weight for the LACE equipment that provided this separator with its liquefied air.86
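
A quick way to see the weight comparison is to express each separator as pounds of equipment per pound-per-second of air processed. The sketch below does that arithmetic; the 20,000-pound figure is an assumption standing in for the “just under 10 tons” quoted for the Dynatech array.

```python
# Specific weight of the two ACES separators, in pounds of equipment per
# pound-per-second of air processed. The 20,000-lb figure stands in for the
# "just under 10 tons" quoted for the Dynatech array.
dynatech = 20_000 / 2_000   # lb per (lb/s)  -> about 10
linde = 9_000 / 100         # lb per (lb/s)  -> about 90

print(f"Dynatech: {dynatech:.0f} lb per (lb/s); Linde: {linde:.0f} lb per (lb/s); "
      f"ratio ~ {linde / dynatech:.0f}x")   # on the order of the tenfold figure in the text
```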

A study at Convair, released in October 1963, gave a clear preference to the Dynatech concept. Returning to the single-stage Space Plane of prior years, Convair engineers considered a version with a weight at takeoff of 600,000 pounds, using either the chemical or the distillation ACES. The effort concluded that the Dynatech separator offered a payload to orbit of 35,800 pounds using barium and 27,800 pounds with cobalt. The Linde separator reduced this payload to 9,500 pounds. Moreover, because it had less efficiency, it demanded an additional 31,000 pounds of hydrogen fuel, along with an increase in vehicle volume of 10,000 cubic feet.87

The turn toward feasible concepts such as ROLS, along with the new emphasis on engineering design and test, promised a bright future for Aerospaceplane studies. However, a commitment to serious research and development was another matter. Advanced test facilities were critical to such an effort, but in August 1963 the Air Force canceled plans for a large Mach 14 wind tunnel at AEDC. This decision gave a clear indication of what lay ahead.88

A year earlier Aerospaceplane had received a favorable review from the SAB Ad Hoc Committee. The program nevertheless had its critics, who existed particularly within the SAB’s Aerospace Vehicles and Propulsion panels. In October 1963 they issued a report that dealt with proposed new bombers and vertical-takeoff-and-landing craft, as well as with Aerospaceplane, but their view was unmistakable on that topic:

The difficulties the Air Force has encountered over the past three years in identifying an Aerospaceplane program have sprung from the facts that the requirement for a fully recoverable space launcher is at present only vaguely defined, that today’s state-of-the-art is inadequate to support any real hardware development, and the cost of any such undertaking will be extremely large…. [T]he so-called Aerospaceplane program has had such an erratic history, has involved so many clearly infeasible factors, and has been subject to so much ridicule that from now on this name should be dropped. It is also recommended that the Air Force increase the vigilance that no new program achieves such a difficult position.89

Aerospaceplane lost still more of its rationale in December, as Defense Secretary Robert McNamara canceled Dyna-Soar. This program was building a mini-space shuttle that was to fly to orbit atop a Titan III launch vehicle. This craft was well along in development at Boeing, but program reviews within the Pentagon had failed to find a compelling purpose. McNamara thus disposed of it.90

Prior to this action, it had been possible to view Dyna-Soar as a prelude to opera­tional vehicles of that general type, which might take shape as Aerospaceplanes. The cancellation of Dyna-Soar turned the Aerospaceplane concept into an orphan, a long-term effort with no clear relation to anything currently under way. In the wake of McNamara’s decision, Congress deleted funds for further Aerospaceplane studies, and Defense Department officials declined to press for its restoration within the FY 1964 budget, which was under consideration at that time. The Air Force carried forward with new conceptual studies of vehicles for both launch and hypersonic cruise, but these lacked the focus on advanced airbreathing propulsion that had characterized Aerospaceplane.91

There nevertheless was real merit to some of the new work, for this more realistic and conservative direction pointed out a path that led in time toward NASA’s space shuttle. The Martin Company made a particular contribution. It had designed no Aerospaceplanes; rather, using company funding, its technical staff had examined concepts called Astrorockets, with the name indicating the propulsion mode. Scramjets and LACE won little attention at Martin, but all-rocket vehicles were another matter. A concept of 1964 had a planned liftoff weight of 1,250 tons, making it intermediate in size between the Saturn I-B and Saturn V. It was a two-stage fully reusable configuration, with both stages having delta wings and flat undersides. These undersides fitted together at liftoff, belly to belly.


Martin’s Astrorocket. (U. S. Air Force)

The design concepts of that era were meant to offer glimpses of possible futures, but for this Astrorocket, the future was only seven years off. It clearly foreshadowed a class of two-stage fully reusable space shuttles, fitted with delta wings, that came to the forefront in NASA-sponsored studies of 1971. The designers at Martin were not clairvoyant; they drew on the background of Dyna-Soar and on studies at NASA-Ames of winged re-entry vehicles. Still, this concept demonstrated that some design exercises were returning to the mainstream.92

Further work on ACES also proceeded, amid unfortunate results at Dynatech. That company’s chemical separation processes had depended for success on having a very large area of reacting surface within the pebble-bed air separators. This appeared achievable through such means as using finely divided oxide powders or porous particles impregnated with oxide. But the research of several years showed that the oxide tended to sinter at high temperatures, markedly diminishing the reacting sur­face area. This did not make chemical separation impossible, but it sharply increased the size and weight of the equipment, which robbed this approach of its initially strong advantage over the Linde distillation system. This led to abandonment of Dynatech’s approach.93

Linde’s system was heavy and drastically less elegant than Dynatech’s alternative, but it amounted largely to a new exercise in mechanical engineering and went forward to successful completion. A prototype operated in test during 1966, and while limits to the company’s installed power capacity prevented the device from processing the rated flow of 100 pounds of air per second, it handled 77 pounds per second, yielding a product stream of oxygen that was up to 94 percent pure. Studies of lighter-weight designs also proceeded. In 1969 Linde proposed to build a distillation air separator, rated again at 100 pounds per second, weighing 4,360 pounds. This was only half the weight allowance of the earlier configuration.94

In the end, though, Aerospaceplane failed to identify new propulsion concepts that held promise and that could be marked for mainstream development. The program’s initial burst of enthusiasm had drawn on a view that the means were in hand, or soon would be, to leap beyond the liquid-fuel rocket as the standard launch vehicle and to pursue access to orbit using methods that were far more advanced. The advent of the turbojet, which had swiftly eclipsed the piston engine, was on everyone’s mind. Yet for all the ingenuity behind the new engine concepts, they failed to deliver. What was worse, serious technical review gave no reason to believe that they could deliver.

In time it would become clear that hypersonics faced a technical wall. Only limited gains were achievable in airbreathing propulsion, with single-stage-to-orbit remaining out of reach and no easy way at hand to break through to the really advanced performance for which people hoped.


Propulsion

In the spring of 1992 the NASP Joint Program Office presented a final engine design called the E22A. It had a length of 60 feet and included an inlet ramp, cowled inlet, combustor, and nozzle. An isolator, located between the inlet and combustor, sought to prevent unstarts by processing flow from the inlet through a series of oblique shocks, which accommodated the increased backpressure from the combustor.

Program officials then constructed two accurately scaled test models. The Sub­scale Parametric Engine (SXPE) was built to one-eighth scale and had a length of eight feet. It was tested from April 1993 to March 1994. The Concept Demonstra­tor Engine (CDE), which followed, was built to a scale of 30 percent. Its length topped 16 feet, and it was described as “the largest airframe-integrated scramjet engine ever tested.”26

In working with the SXPE, researchers had an important goal in achieving com­bustion of hydrogen within its limited length. To promote rapid ignition, the engine used a continuous flow of a silane-hydrogen mixture as a pilot, with the silane ignit­ing spontaneously on exposure to air. In addition, to promote mixing, the model incorporated an accurate replication of the spacing between the fuel-injecting struts and ramps, with this spacing being preserved at the model’s one-eighth scale. The combustor length required to achieve the desired level of mixing then scaled in this fashion as well.

The larger CDE was tested within the Eight-Foot High-Temperature Tunnel, which was Langley’s biggest hypersonic facility. The tests mapped the flowfield entering the engine, determined the performance of the inlet, and explored the potential performance of the design. Investigators varied the fuel flow rate, using the combustors to vary its distribution within the engine.

Boundary-layer effects are important in scramjets, and the tests might have replicated the boundary layers of a full-scale engine by operating at correspondingly higher flow densities. For the CDE, at 30 percent scale, the appropriate density would have been 1/0.3 or 3.3 times that of the atmospheric density at flight altitude. For the SXPE, at one-eighth scale, the test density would have shown an eightfold increase over atmospheric. However, the SXPE used an arc-heated test facility that was limited in the power that drove its arc, and it provided its engine with air at only one-fiftieth of that density. The High Temperature Tunnel faced limits on its flow rate and delivered its test gas at only one-sixth of the appropriate density.

Engineers sought to compensate by using analytical methods to determine the drag in a full-scale engine. Still, this inability to replicate boundary-layer effects meant that the wind-tunnel tests gave poor simulations of internal drag within the test engines. This could have led to erroneous estimates of true thrust, net of drag. In turn, this showed that even when working with large test models and with test facilities of impressive size, true simulations of the boundary layer were ruled out from the start.27
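
The density requirement follows from Reynolds-number similarity: at a given speed and temperature, a model at a fraction of full scale needs its flow density increased by the inverse of that fraction. The sketch below restates the mismatch for the two engines under that simple inverse-scale assumption, using the shortfall fractions quoted above.

```python
# Under Reynolds-number similarity at fixed velocity and temperature, the test
# density must rise as the inverse of the model scale. Shortfall fractions are
# the ones quoted in the text.
cases = {
    "CDE, 30% scale":  {"scale": 0.30,  "achieved_fraction": 1 / 6},
    "SXPE, 1/8 scale": {"scale": 0.125, "achieved_fraction": 1 / 50},
}
for name, c in cases.items():
    needed = 1.0 / c["scale"]                   # multiple of flight-altitude density
    achieved = needed * c["achieved_fraction"]  # what the facility could supply
    print(f"{name}: needed ~{needed:.1f}x flight density, "
          f"got ~{achieved:.2f}x (a factor of {needed / achieved:.0f} short)")
```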

For takeoff from a runway, the X-30 was to use a Low-Speed System (LSS). It comprised two principal elements: the Special System, an ejector ramjet; and the Low Speed Oxidizer System, which used LACE.28 The two were highly synergistic. The ejector used a rocket, which might have been suitable for the final ascent to orbit, with ejector action increasing its thrust during takeoff and acceleration. By giving an exhaust velocity that was closer to the vehicle velocity, the ejector also increased the fuel economy.

The LACE faced the standard problem of requiring far more hydrogen than could be burned in the air it liquefied. The ejector accomplished some derichening by providing a substantial flow of entrained air that burned some of the excess. Additional hydrogen, warmed in the LACE heat exchanger, went into the fuel tanks, which were full of slush hydrogen. By melting the slush into conventional liquid hydrogen (LH2), some LACE coolant was recycled to stretch the vehicle’s fuel supply.29

There was good news in at least one area of LACE research: deicing. LACE systems have long been notorious for their tendency to clog with frozen moisture within the air that they liquefy. “The largest LACE ever built made around half a pound per second of liquid air,” Paul Czysz of McDonnell Douglas stated in 1986. “It froze up at six percent relative humidity in the Arizona desert, in 38 seconds.” Investigators went on to invent more than a dozen methods for water alleviation. The most feasible approach called for injecting antifreeze into the system, to enable the moisture to condense out as liquid water without freezing. A rotary separator eliminated the water, with the dehumidified air being so cold as to contain very little residual water vapor.30

The NASP program was not run by shrinking violets, and its managers stated that its LACE was not merely to operate during hot days in the desert near Phoenix. It was to function even on rainy days, for the X-30 was to be capable of flight from anywhere in the world. At NASA-Lewis, James Van Fossen built a water-alleviation system that used ethylene glycol as the antifreeze, spraying it directly onto the cold tubes of a heat exchanger. Water, condensing on those tubes, dissolved some of the glycol and remained liquid as it swept downstream with the flow. He reported that this arrangement protected the system against freezing at temperatures as low as -55°F, with the moisture content of the chilled air being reduced to 0.00018 pounds in each pound of this air. This represented removal of at least 99 percent of the humidity initially present in the airflow.31

Pratt & Whitney conducted tests of a LACE precooler that used this arrange­ment. A company propulsion manager, Walt Lambdin, addressed a NASP technical review meeting in 1991 and reported that it completely eliminated problems of reduced performance of the precooler due to formation of ice. With this, the prob­lem of ice in a LACE system appeared amenable to control.32

It was also possible to gain insight into the LACE state of the art by considering contemporary work that was under way in Japan. The point of departure in that country was the H-2 launch vehicle, which first flew to orbit in February 1994. It was a two-stage expendable rocket, with a liquid-fueled core flanked by two solid boosters. LACE was pertinent because a long-range plan called for upgrades that could replace the solid strap-ons with new versions using LACE engines.33

Mitsubishi Heavy Industries was developing the H-2’s second-stage engine, designated LE-5. It burned hydrogen and oxygen to produce 22,000 pounds of thrust. As an initial step toward LACE, this company built heat exchangers to liquefy air for this engine. In tests conducted during 1987 and 1988, the Mitsubishi heat exchanger demonstrated liquefaction of more than three pounds of air for every pound of LH2. This was close to four to one, the theoretical limit based on the thermal properties of LH2 and of air. Still, it takes 34.6 pounds of air to burn a pound of hydrogen, and an all-LACE LE-5 was to run so fuel-rich that its thrust was to be only 6,000 pounds.
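
The degree of fuel-rich operation follows from two numbers quoted above: the roughly four-to-one limit on air liquefied per pound of hydrogen and the 34.6-to-one ratio needed for complete combustion. A minimal sketch of that arithmetic appears below.

```python
# How fuel-rich an all-LACE LE-5 would run: the heat exchanger can liquefy
# close to 4 lb of air per lb of hydrogen, while complete combustion needs
# 34.6 lb of air per lb of hydrogen. Both figures are quoted above.
air_liquefied_per_lb_h2 = 4.0
air_needed_per_lb_h2 = 34.6

excess = air_needed_per_lb_h2 / air_liquefied_per_lb_h2
print(f"Hydrogen flow is ~{excess:.1f}x what the liquefied air can burn")
# Roughly 8 to 9 times stoichiometric, consistent with the sharp drop in thrust
# from 22,000 lb for the rocket LE-5 to about 6,000 lb for the all-LACE version.
```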

But the Mitsubishi group found their own path to prevention of ice buildup. They used a freeze-thaw process, melting ice by switching periodically to the use of ambient air within the cooler after its tubes had become clogged with ice from LH2. The design also provided spaces between the tubes and allowed a high-speed airflow to blow ice from them.34

LACE nevertheless remained controversial, and even with the moisture problem solved, there remained the problem of weight. Czysz noted that an engine with 100,000 pounds of thrust would need 600 pounds per second of liquid air: “The largest liquid-air plant in the world today is the AiResearch plant in Los Angeles, at 150 pounds per second. It covers seven acres. It contains 288,000 tubes welded to headers and 59 miles of 3/32-inch tubing.”35

Still, no law required the use of so much tubing, and advocates of LACE have long been inventive. A 1963 Marquardt concept called for an engine with 10,000 pounds of thrust, which might have been further increased by using an ejector. This appeared feasible because LACE used LH2 as the refrigerant. This gave far greater effectiveness than the AiResearch plant, which produced its refrigerant on the spot by chilling air through successive stages.36
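
Czysz’s figure implies an air requirement of roughly six pounds per second for every thousand pounds of LACE thrust. The sketch below applies that proportion, as an assumption, to the 10,000-pound Marquardt engine just mentioned; the linear scaling is purely illustrative, not a rule stated in the text.

```python
# Liquid-air demand scaled linearly from Czysz's figure of 600 lb/s of air for
# an engine of 100,000 lb thrust; the linear scaling is an illustrative
# assumption, not a design rule from the text.
air_per_lb_thrust = 600.0 / 100_000.0   # lb/s of liquid air per lb of thrust

for thrust_lb in (10_000, 100_000):
    print(f"{thrust_lb:>7,} lb thrust -> ~{thrust_lb * air_per_lb_thrust:.0f} lb/s of liquid air")
# The AiResearch plant, by comparison, is quoted above at 150 lb/s.
```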

For LACE heat exchangers, thin-walled tubing was essential. The Japanese model, which was sized to accommodate the liquid-hydrogen flow rate of the LE-5, used 5,400 tubes and weighed 304 pounds, which is certainly noticeable when the engine is to put out no more than 6,000 pounds of thrust. During the mid-1960s investigators at Marquardt and AiResearch fabricated tubes with wall thicknesses as low as 0.001 inch, or one mil. Such tubes had not been used in any heat exchanger subassemblies, but 2-mil tubes of stainless steel had been crafted into a heat exchanger core module with a length of 18 inches.37

Even so, this remained beyond the state of the art for NASP, a quarter-century later. Weight estimates for the X-30 LACE heat exchanger were based on the assumed use of 3-mil Weldalite tubing, but a 1992 Lockheed review stated, “At present, only small quantities of suitable, leak free, 3-mil tubing have been fabricated.” The plans of that year called for construction of test prototypes using 6-mil Weldalite tubing, for which “suppliers have been able to provide significant quantities.” Still, a doubled thickness of the tubing wall was not the way to achieve low weight.38

Other weight problems arose in seeking to apply an ingenious technique for derichening the product stream by increasing the heat capacity of the LH2 coolant. Molecular hydrogen, H2, has two atoms in its molecule and exists in two forms: para and ortho, which differ in the orientation of the spins of their nuclei. The ortho form has parallel spin vectors, while the para form has spin vectors that are oppositely aligned. The ortho molecule amounts to a higher-energy form and loses energy as heat when it transforms into the para state. The reaction therefore is exothermic.

The two forms exist in different equilibrium concentrations, depending on the temperature of the bulk hydrogen. At room temperature the gas is about 25 percent para and 75 percent ortho. When liquefied, the equilibrium state is 100 percent para. Hence it is not feasible to prepare LH2 simply by liquefying the room-tem­perature gas. The large component of ortho will relax to para over several hours, producing heat and causing the liquid to boil away. The gas thus must be exposed to a catalyst to convert it to the para form before it is liquefied.

These aspects of fundamental chemistry also open the door to a molecular shift that is endothermic and that absorbs heat. One achieves this again by using a catalyst to convert the LH2 from para to ortho. This reaction requires heat, which is obtained from the liquefying airflow within the LACE. As a consequence, the air chills more readily when using a given flow of hydrogen refrigerant. This effect is sufficiently strong to increase the heat-sink capacity of the hydrogen by as much as 25 percent.39

This concept also dates to the 1960s. Experiments showed that ruthenium metal deposited on aluminum oxide provided a suitable catalyst. For 90 percent para-to-ortho conversion, the LACE required a “beta,” a ratio of mass to flow rate, of five to seven pounds of this material for each pound per second of hydrogen flow. Data published in 1988 showed that a beta of five pounds could achieve 85 percent conversion, with this value showing improvement during 1992. However, X-30 weight estimates assumed a beta of two pounds, and this performance remained out of reach.40
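
The weight penalty of the catalyst scales directly with the engine’s hydrogen flow through the beta ratio. The sketch below illustrates that scaling with a purely hypothetical flow rate; only the beta values come from the text.

```python
# Catalyst mass implied by the "beta" ratio (lb of ruthenium-on-alumina catalyst
# per lb/s of hydrogen flow). The hydrogen flow rate below is hypothetical,
# chosen only to illustrate the scaling; it is not an X-30 design value.
betas = {
    "demonstrated (about 85% conversion)": 5.0,
    "assumed in X-30 weight estimates": 2.0,
}
hydrogen_flow_lb_per_s = 100.0   # illustrative assumption

for label, beta in betas.items():
    mass = beta * hydrogen_flow_lb_per_s
    print(f"beta {label}: {beta:.0f} lb -> {mass:,.0f} lb of catalyst "
          f"at {hydrogen_flow_lb_per_s:.0f} lb/s of hydrogen")
```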

During takeoff, the X-30 was to be capable of operating from existing runways and of becoming airborne at speeds similar to those of existing aircraft. The low-speed system, along with its accompanying LACE and ejector systems, therefore needed substantial levels of thrust. The ejector, again, called for a rocket exhaust to serve as a primary flow within a duct, entraining an airstream as the secondary flow. Ejectors gave good performance across a broad range of flight speeds, showing an effectiveness that increased with Mach. In the SR-71 at Mach 2.2, they accounted for 14 percent of the thrust in afterburner; at Mach 3.2 this was 28.4 percent. Nor did the SR-71 ejectors burn fuel. They functioned entirely as aerodynamic devices.41

It was easy to argue during the 1980s that their usefulness might be increased still further. The most important unclassified data had been published during the 1950s. A good engine needed a high pressure increase, but during the mid-1960s studies at Marquardt recommended a pressure rise by a factor of only about 1.5, when turbojets were showing increases that were an order of magnitude higher.42 The best theoretical treatment of ejector action dated to 1974. Its author, NASA’s B. H. Anderson, also wrote a computer program called REJECT that predicted the performance of supersonic ejectors. However, he had done this in 1974, long before the tools of CFD were in hand. A 1989 review noted that since then “little attention has been directed toward a better understanding of the details of the flow mechanism and behavior.”43

Within the NASP program, then, the ejector ramjet stood as a classic example of a problem that was well suited to new research. Ejectors were known to have good effectiveness, which might be increased still further and which stood as a good topic for current research techniques. CFD offered an obvious approach, and NASP activities supplemented computational work with an extensive program of experi­ment.44

The effort began at GASL, where Tony duPont’s ejector ramjet went on a static test stand during 1985 and impressed General Skantze. DuPont’s engine design soon took the title of the Government Baseline Engine and remained a topic of active experimentation during 1986 and 1987. Some work went forward at NASA-Langley, where the Combustion Heated Scramjet Test Facility exercised ejectors over the range of Mach 1.2 to 3.5. NASA-Lewis hosted further tests, at Mach 0.06 and from Mach 2 to 3.5 within its 10-by-10-foot Supersonic Wind Tunnel.

The Lewis engine was built to accommodate growth of boundary layers and placed a 17-degree wedge ramp upstream of the inlet. Three flowpaths were mounted side by side, but only the center duct was fueled; the others were “dummies” that gave data on unfueled operation for comparison. The primary flow had a pressure of 1,000 pounds per square inch and a temperature of 1,340°F, which simulated a fuel-rich rocket exhaust. The experiments studied the impact of fuel-to-air ratio on performance, although the emphasis was on development of controls.

Even so, the performance left much to be desired. Values of fuel-to-air ratio greater than 0.52, with unity representing complete combustion, at times brought “buzz” or unwanted vibration of the inlet structure. Even with no primary flow, the inlet failed to start. The main burner never achieved thermal choking, where the flow rate would rise to the maximum permitted by heat from burning fuel. Ingestion of the boundary layer significantly degraded engine performance. Thrust measurements were described as “no good” due to nonuniform thermal expansion across a break between zones of measurement. As a contrast to this litany of woe, operation of the primary gave a welcome improvement in the isolation of the inlet from the combustor.

Also at GASL, again during 1987, an ejector from Boeing underwent static test. It used a markedly different configuration that featured an axisymmetric duct and a fuel-air mixer. The primary flow was fuel-rich, with temperatures and pressures similar to those of NASA-Lewis. On the whole, the results of the Boeing tests were encouraging. Combustion efficiencies appeared to exceed 95 percent, while mea­sured values of thrust, entrained airflow, and pressures were consistent with com­pany predictions. However, the mixer performance was no more than marginal, and its length merited an increase for better performance.45

In 1989 Pratt & Whitney emerged as a major player, beginning with a subscale ejector that used a flow of helium as the primary. It underwent tests at company facilities within the United Technologies Research Center. These tests addressed the basic issue of attempting to increase the entrainment of secondary flow, for which non-combustible helium was useful. Then, between 1990 and 1992, Pratt built three versions of its Low Speed Component Integration Rig (LSCIR), testing them all within facilities of Marquardt.

LSCIR-1 used a design that included a half-scale X-30 flowpath. It included an inlet, front and main combustors, and nozzle, with the inlet cowl featuring fixed geometry. The tests operated using ambient air as well as heated air, with and without fuel in the main combustor, while the engine operated as a pure ramjet for several runs. Thermal choking was achieved, with measured combustion efficiencies lying within 2 percent of values suitable for the X-30. But the inlet was unstarted for nearly all the runs, which showed that it needed variable geometry. This refinement was added to LSCIR-2, which was put through its paces in July 1991, at Mach 2.7. The test sequence would have lasted longer but was terminated prematurely due to a burnthrough of the front combustor, which had been operating at 1,740°F. Thrust measurements showed only limited accuracy due to flow separation in the nozzle.

LSCIR-3 followed within months. The front combustor was rebuilt with a larger throat area to accommodate increased flow and received a new ignition system that used silane. This gas ignited spontaneously on contact with air. In tests, leaks devel­oped between the main combustor, which was actively cooled, and the uncooled nozzle. A redesigned seal eliminated the leakage. The work also validated a method for calculating heat flux to the wall due to impingement of flow from primaries.

Other results were less successful. Ignition proceeded well enough using pure silane, but a mix of silane and hydrogen failed as an ignitant. Problems continued to recur due to inlet unstarts and nozzle flow separation. The system produced 10,000 pounds of thrust at Mach 0.8 and 47,000 pounds at Mach 2.7, but this perfor­mance still was rated as low.

Within the overall LSS program, a Modified Government Baseline Engine went under test at NASA-Lewis during 1990, at Mach 3.5. The system now included hydraulically-operated cowl and nozzle flaps that provided variable geometry, along with an isolator with flow channels that amounted to a bypass around the combustor. This helped to prevent inlet unstarts.

Once more the emphasis was on development of controls, with many tests oper­ating the system as a pure ramjet. Only limited data were taken with the primaries on. Ingestion of the boundary layer gave significant degradation in engine perfor­mance, but in other respects most of the work went well. The ramjet operations were successful. The use of variable geometry provided reliable starting of the inlet, while operation in the ejector mode, with primaries on, again improved the inlet isolation by diminishing the effect of disturbances propagating upstream from the combustor.46

Despite these achievements, a 1993 review at Rocketdyne gave a blunt conclu­sion: “The demonstrated performance of the X-30 special system is lower than the performance level used in the cycle deck…the performance shortfall is primarily associated with restrictions on the amount of secondary flow.” (Secondary flow is entrained by the ejector’s main flow.) The experimental program had taught much concerning the prevention of inlet unstarts and the enhancement of inlet-combus­tor isolation, but the main goal—enhanced performance of the ejector ramjet—still lay out of reach.

Simple enlargement of a basic design offered little promise; Pratt & Whitney had tried that, in LSCIR-3, and had found that this brought inlet flow separation along with reduced inlet efficiency. Then in March 1993, further work on the LSS was canceled due to budget cuts. NASP program managers took the view that they could accelerate an X-30 using rockets for takeoff, as an interim measure, with the LSS being added at a later date. Thus, although the LSS was initially the critical item in duPont’s design, in time it was put on hold and held off for another day.47

Nose Cones and Re-entry

The ICBM concept of the early 1950s, called Atlas, was intended to carry an atomic bomb as a warhead, and there were two things wrong with this missile. It was unacceptably large and unwieldy, even with a warhead of reduced weight. In addition, to compensate for this limited yield, Atlas demanded unattainable accuracy in aim. But the advent of the hydrogen bomb solved both problems. The weight issue went away because projected H-bombs were much lighter, which meant that Atlas could be substantially smaller. The accuracy issue also disappeared. Atlas now could miss its target by several miles and still destroy it, by the simple method of blowing away everything that lay between the aim point and the impact point.

Studies by specialists, complemented by direct tests of early H-bombs, brought a dramatic turnaround during 1954 as Atlas vaulted to priority. At a stroke, its design­ers faced the re-entry problem. They needed a lightweight nose cone that could protect the warhead against the heat of atmosphere entry, and nothing suitable was in sight. The Army was well along in research on this problem, but its missiles did not face the severe re-entry environment of Atlas and its re-entry studies were not directly applicable.

The Air Force approached this problem systematically. It began by working with the aerodynamicist Arthur Kantrowitz, who introduced the shock tube as an instrument that could momentarily reproduce flow conditions that were pertinent. Tests with rockets, notably the pilotless X-17, complemented laboratory experi­ments. The solution to the problem of nose-cone design came from George Sutton, a young physicist who introduced the principle of ablation. Test nose cones soon were in flight, followed by prototypes of operational versions.

Widening Prospects for Re-entry

The classic spaceship has wings, and throughout much of the 1950s both NACA and the Air Force struggled to invent such a craft. Design studies addressed issues as fundamental as whether this hypersonic rocket plane should have one particular wing-body configuration, or whether it should be upside down. The focus of the work was Dyna-Soar, a small version of the space shuttle that was to ride to orbit atop a Titan III. It brought remarkable engineering advances, but Pentagon policy makers, led by Defense Secretary Robert McNamara, saw it as offering little more than technical development, with no mission that could offer a military justification. In December 1963 he canceled it.

Better prospects attended NASA’s effort in manned spaceflight, which culmi­nated in the Apollo piloted flights to the Moon. Apollo used no wings; rather, it relied on a simple cone that used the Allen-Eggers blunt-body principle. Still, its demands were stringent. It had to re-enter successfully with twice the energy of an entry from Earth orbit. Then it had to navigate a corridor, a narrow range of alti­tudes, to bleed off energy without either skipping back into space or encountering g-forces that were too severe. By doing these things, it showed that hypersonics was ready for this challenge.

Materials

No aircraft has ever cruised at Mach 5, and an important reason involves struc­tures and materials. “If I cruise in the atmosphere for two hours,” says Paul Czysz of McDonnell Douglas, “I have a thousand times the heat load into the vehicle that the shuttle gets on its quick transit of the atmosphere.” The thermal environment of
the X-30 was defined by aerodynamic heating and by the separate issue of flutter.48

A single concern dominated issues of structural design: The vehicle was to fly as low as possible in the atmosphere during ascent to orbit. Re-entry called for flight at higher altitudes, and the loads during ascent therefore were higher than those of re-entry. Ascent at lower altitude—200,000 feet, for instance, rather than 250,000—increased the drag on the X-30. But it also increased the thrust, giving a greater margin between thrust and drag that led to increased acceleration. Consider­ations of ascent, not re-entry, therefore shaped the selection of temperature-resistant materials.

Yet the aircraft could not fly too low, or it would face limits set by aerodynamic flutter. This resulted from forces on the vehicle that were not steady but oscillated, at frequencies of oscillation that changed as the vehicle accelerated and lost weight. The wings tended to vibrate at characteristic frequencies, as when bent upward and released to flex up and down. If the frequency of an aerodynamic oscillation matched that at which the wings were prone to flex, the aerodynamic forces could tear the wings off. Stiffness in materials, not strength, was what resisted flutter, and the vehicle was to fly a “flutter-limited trajectory,” staying high enough to avoid the problem.

Ascent trajectory of an airbreather. (NASA)

The mechanical properties of metals depend on their fine-grained structure. An ingot of metal consists of a mass of interlaced grains or crystals, and small grains give higher strength. Quenching, plunging hot metal into water, yields small grains but often makes the metal brittle or hard to form. Alloying a metal, as by adding small quantities of carbon to make steel, is another traditional practice. However, some additives refuse to dissolve or separate out from the parent metal as it cools.

To overcome such restrictions, techniques of powder metallurgy were in the fore­front. These methods gave direct control of the microstructure of metals by forming
them from powder, with the grains of powder sintering or welding together by being pressed in a mold at high temperature. A manufacturer could control the grain size independently of any heat-treating process. Powder metallurgy also overcame restrictions on alloying by mixing in the desired additives as powdered ingredients.

Several techniques existed to produce the powders. Grinding a metal slab to saw­dust was the simplest, yielding relatively coarse grains. “Splat-cooling” gave better control. It extruded molten metal onto the chilled rim of a rotating wheel, which cooled it instantly into a thin ribbon. This represented a quenching process that produced a fine-grained microstructure in the metal. The ribbon then was chemi­cally treated with hydrogen, which made it brittle, so that it could be ground into a fine powder. Heating the powder then drove off the hydrogen.

The Plasma Rotating Electrode Process, developed by the firm of Nuclear Metals, showed particular promise. The parent metal was shaped into a cylinder that rotated at up to 30,000 revolutions per minute and served as an electrode. An electric arc melted the spinning metal, which threw off droplets within an atmosphere of cool inert helium. The droplets plummeted in temperature by thousands of degrees within milliseconds, and their microstructures were so fine as to approach an amor­phous state. Their molecules did not form crystals, even tiny ones, but arranged themselves in formless patterns. This process, called “rapid solidification,” promised particular gains in high-temperature strength.

Standard titanium alloys, for instance, lost strength at temperatures above 700 to 900°F. By using rapid solidification, McDonnell Douglas raised this limit to 1,100°F prior to 1986. Philip Parrish, the manager of powder metallurgy at DARPA, noted that his agency had spent some $30 million on rapid-solidification technology since 1975. In 1986 he described it as “an established technology. This technology now can stand alongside such traditional methods as ingot casting or drop forging.”49

Nevertheless 1,100°F was not enough, for it appeared that the X-30 needed a material that was rated at 1,700°F. This stemmed from the fact that for several years, NASP design and trajectory studies indicated that a flight vehicle indeed would face such temperatures on its fuselage. But after 1990 the development of new baseline configurations led to an appreciation that the pertinent areas of the vehicle would face temperatures no higher than 1,500°F. At that temperature, advanced titanium alloys could serve in “metal matrix composites,” with thin-gauge metals being rein­forced with fibers.

The new composition came from the firm of Titanium Metals and was designated Beta-21S. That company developed it specifically for the X-30 and patented it in 1989. It consisted of titanium along with 15 percent molybdenum, 2.8 percent columbium, 3 percent aluminum, and 0.2 percent silicon. Resistance to oxidation proved to be its strong suit, with this alloy showing resistance that was two orders of magnitude greater than that of conventional aircraft titanium. Tests showed that it


Comparison of some matrix alloys. (NASA)

also could be exposed repeatedly to leaks of gaseous hydrogen without being subject to embrittlement. Moreover, it lent itself readily to being rolled to foil-gauge thick­nesses of 4 to 5 mil when metal matrix composites were fabricated.50

Such titanium-matrix composites were used in representative X-30 structures. The Non-Integral Fuselage Tank Article (NIFTA) represented a section of X-30 fuselage at one-fourth scale. It was oblong in shape, eight feet long and measuring four by seven feet in cross section, and it contained a splice. Its skin thickness was 0.040 inches, about the same as for the X-30. It held an insulated tank that could hold either liquid nitrogen or LH2 in tests, which stood as a substantial engineering item in its own right.

The tank had a capacity of 940 gallons and was fabricated of graphite-epoxy composite. No liner protected the tankage on the inside, for graphite-epoxy was impervious to damage by LH2. However, the exterior was insulated with two half-inch thicknesses of Q-felt, a quartz-fiber batting with density of only 3.5 pounds per cubic foot. A thin layer of Astroquartz high-temperature cloth covered the Q-felt. This insulation filled space between the tank wall and the surrounding wall of the main structure, with both this space and the Q-felt being purged with helium.51

The test sequence for NIFTA duplicated the most severe temperatures and stresses of an ascent to orbit. These stresses began on the ground, with the vehicle being heavy with fuel and subject to a substantial bending load. There was also a
large shear load, with portions of the vehicle being pulled transversely in opposite directions. This happened because the landing gear pushed upward to support the entire weight of the craft, while the weight of the hydrogen tank pushed downward only a few feet away. Other major bending and shear loads arose during subsonic climbout, with the X-30 executing a pullup maneuver.

Significant stresses arose near Mach 6 and resulted from temperature differences across the thickness of the stiffened skin. Its outer temperature was to be 800°F, but the tops of the stiffeners, a few inches away, were to be 350°F. These stiffeners were spot-welded to the skin panels, which raised the issue of whether the welds would hold amid the different thermal expansions. Then between Mach 10 and 16, the vehicle was to reach peak temperatures of 1,300°F. The temperature differences between the top and bottom of the vehicle also would be at their maximum.

The tests combined both thermal and mechanical loads and were conducted within a vacuum chamber at Wyle Laboratories during 1991. Banks of quartz lamps applied up to 1.5 megawatts of heat, while jacks imposed bending or shear forces that reached 100 percent of the design limits. Most tests placed nonflammable liquid nitrogen in the tank for safety, but the last of them indeed used LH2. With this supercold fuel at -423°F, the lamps raised the exterior temperature of NIFTA to the full 1,300°F, while the jacks applied the full bending load. A 1993 paper noted “100% successful completion of these tests,” including the one with LH2 that had been particularly demanding.52

NIFTA, again, was at one-fourth scale. In a project that ran from 1991 through the summer of 1994, McDonnell Douglas engineers designed and fabricated the substantially larger Full Scale Assembly. Described as “the largest and most repre­sentative NASP fuselage structure built,” it took shape as a component measuring 10 by 12 feet. It simulated a section of the upper mid-fuselage, just aft of the crew compartment.

A 1994 review declared that it “was developed to demonstrate manufacturing and assembly of a full scale fuselage panel incorporating all the essential structural details of a flight vehicle fuselage assembly.” Crafted in flightweight, it used individual panels of titanium-matrix composite that were as large as four by eight feet. These were stiffened with longitudinal members of the same material and were joined to circumferential frames and fittings of Ti-1100, a titanium alloy that used no fiber reinforcement. The complete assembly posed manufacturing challenges because the panels were of minimum thickness, having thinner gauges than had been used previously. The finished article was completed just as NASP was reaching its end, but it showed that the thin panels did not introduce significant problems.53

The firm of Textron manufactured the fibers, designated SCS-6 and -9, that reinforced the composites. As a final touch, in 1992 this company opened the world’s first manufacturing plant dedicated to the production of titanium-matrix composites. “We could get the cost down below a thousand dollars a pound if we had enough volume,” Bill Grant, a company manager, told Aerospace America. His colleague Jim Henshaw added, “We think SCS/titanium composites are fully developed for structural applications.”54

Such materials served to 1,500°F, but on the X-30 substantial areas were to with­stand temperatures approaching 3,000°F, which is hotter than molten iron. If a steelworker were to plunge a hand into a ladle of this metal, the hand would explode from the sudden boiling of water in its tissues. In such areas, carbon-carbon was necessary. It had not been available for use in Dyna-Soar, but the Pentagon spent $200 million to fund its development between 1970 and 1985.55

Much of this supported the space shuttle, on which carbon-carbon protected such hot areas as the nose cap and wing leading edges. For the X-30, these areas expanded to cover the entire nose and much of the vehicle undersurface, along with the rudders and both the top and bottom surfaces of the wings. The X-30 was to execute 150 test flights, exposing its heat shield to prolonged thermal soaks while still in the atmosphere. This raised the problem of protection against oxidation.56


Selection of NASP materials based on temperature. (General Accounting Office)

Standard approaches called for mixing oxidation inhibitors into the carbon matrix and covering the surface with a coating of silicon carbide. However, there was a mismatch between the thermal expansions of the coating and the carbon-carbon substrate, which led to cracks. An interlayer of glass-forming sealant, placed between them, produced an impervious barrier that softened at high temperatures to fill the cracks. But these glasses did not flow readily at temperatures below 1,500°F. This meant that air could penetrate the coating and reach the carbon through open cracks to cause loss by oxidation.57

The goal was to protect carbon-carbon against oxidation for all 150 of those test flights, or 250 hours. These missions included 75 to orbit and 75 in hypersonic cruise. The work proceeded initially by evaluating several dozen test samples that were provided by commercial vendors. Most of these materials proved to resist oxi­dation for only 10 to 20 hours, but one specimen from the firm of Hitco reached 70 hours. Its surface had been grooved to promote adherence of the coating, and it gave hope that long operational life might be achieved.58

Complementing the study of vendors’ samples, researchers ordered new types of carbon-carbon and conducted additional tests. The most durable came from the firm of Rohr, with a coating by Science Applications International. It easily withstood 2,000°F for 200 hours and was still going strong at 2,500°F when the tests stopped after 150 hours. This excellent performance stemmed from its use of large quantities of oxidation inhibitors, which promoted long life, and of multiple glass layers in the coating.

But even the best of these carbon-carbons showed far poorer performance when tested in arcjets at 2,500°F. The high-speed airflows forced oxygen into cracks and pores within the material, while promoting evaporation of the glass sealants. Power­ful roars within the arcjets imposed acoustic loads that contributed to cracking, with other cracks arising from thermal shock as test specimens were suddenly plunged into a hot flow stream. The best results indicated lifetimes of less than two hours.

Fortunately, actual X-30 missions were to impose 2,500°F temperatures for only a few minutes during each launch and reentry. Even a single hour of lifetime therefore could permit panels of carbon-carbon to serve for a number of flights. A 1992 review concluded that “maximum service temperatures should be limited to 2,800°F; above this temperature the silicon-based coating systems afford little prac­tical durability,” due to active oxidation. In addition, “periodic replacement of parts may be inevitable.”59

New work on carbon-carbon, reported in 1993, gave greater encouragement as it raised the prospect of longer lifetimes. The effort evaluated small samples rather than fabricated panels and again used the arcjet installations of NASA-Johnson and Ames. Once again there was an orders-of-magnitude difference in the observed lifetimes of the carbon-carbon, but now the measured lifetimes extended into the hundreds of minutes. A formulation from the firm of Carbon-Carbon Advanced Technologies gave the best results, suggesting 25 reuses for orbital missions of the X-30 and 50 reuses for the less-demanding missions of hypersonic cruise.60

There also was interest in using carbon-carbon for primary structure. Here the property that counted was not its heat resistance but its light weight. In an important experiment, the firm of LTV fabricated half of an entire wing box of this material. An airplane’s wing box is a major element of aircraft structure that joins the wings and provides a solid base for attachment of the fuselage fore and aft. Indeed, one could compare it with the keel of a ship. It extends to left and right of the aircraft centerline, and LTV’s box constituted the portion to the left of this line. Built at full scale, it represented a hot-structure wing proposed by General Dynamics. It measured five by eight feet with a maximum thickness of 16 inches. Three spars ran along its length; five ribs were mounted transversely, and the complete assembly weighed 802 pounds.

The test plan called for it to be pulled upward at the tip to reproduce the bending loads of a wing in flight. Torsion or twisting was to be applied by pulling more strongly on the front or rear spar. The maximum load corresponded to having the X-30 execute a pullup maneuver at Mach 2.2, with the wing box at room temperature. With the ascent continuing and the vehicle undergoing aerodynamic heating, the next key event brought the maximum difference in the temperatures of the top and bottom of the wing box, with the former being 994°F and the latter at 1,671°F. At that moment the load on the wing box corresponded to 34 percent of the Mach 2.2 maximum. Farther along, the wing box was to reach its peak temperature, 1,925°F, on the lower surface. These three points were to be reproduced through mechanical forces applied at the ends of the spars and through the use of graphite heaters.

But several key parts delaminated during their fabrication, seriously compromis­ing the ability of the wing box to bear its specified load. Plans to impose the peak or Mach 2.2 load were abandoned, with the maximum planned load being reduced to the 34 percent associated with the maximum temperature difference. For the same reason, the application of torsion was deleted from the test program. Amid these reductions in the scope of the structural tests, two exercises went forward during December 1991. The first took place at room temperature and successfully reached the mark of 34 percent, without causing further damage to the wing box.

The second test, a week later, reproduced the condition of peak temperature difference while briefly applying the calculated load of 34 percent. The plan then called for further heating to the peak temperature of 1,925°F. As the wing box approached this value, a problem arose due to the use of metal fasteners in its assembly. Some were made from coated columbium and were rated for 2,300°F, but most were of a nickel alloy that had a permissible temperature of 2,000°F. However, an instrumented nickel-alloy fastener overheated and reached 2,147°F. The wing box showed a maximum temperature of 1,917°F at that moment, and the test was terminated because the strength of the fasteners now was in question. This test nevertheless

counted as a success because it had come within 8°F of the specified temperature.61

Both tests thus were marked as having achieved their goals, but their merits were largely in the mind of the beholder. The entire project would have been far more impressive if it had avoided delamination, successfully achieved the Mach 2.2 peak load, incorporated torsion, and subjected the wing box to repeated cycles of bending, torsion, and heating. This effort stood as a bold leap toward a future in which carbon-carbon might take its place as a mainstream material, suitable for a hot primary structure, but it was clear that this future would not arrive during the NASP program.

Then there was beryllium. It had only two-thirds the density of aluminum and possessed good strength, but its temperature range was limited. The conventional metal had a limit of some 850°F, but an alloy from Lockheed called Lockalloy, which contained 38 percent aluminum, was rated only for 600°F. It had never become a mainstream engineering material like titanium, but for NASP it offered the advan­tage of high thermal conductivity. Work with titanium had greatly increased its tem­peratures of use, and there was hope of achieving similar results with beryllium.

Initial efforts used rapid-solidification techniques and sought temperature limits as high as 1,500°F. These attempts bore no fruit, and from 1988 onward the temperature goal fell lower and lower. In May 1990 a program review shifted the emphasis away from high-temperature formulations toward the development of beryllium as a material suitable for use at cryogenic temperatures. Standard forms of this metal became unacceptably brittle when only slightly colder than -100°F, but cryo-beryllium proved to be out of reach as well. By 1992 investigators were working with ductile alloys of beryllium and were sacrificing all prospect of use at temperatures beyond a few hundred degrees but were winning only modest improvements in low-temperature capability. Terence Ronald, the NASP materials director, wrote in 1995 of rapid-solidification versions with temperature limits as low as 500°F, which was not what the X-30 needed to reach orbit.62

In sum, the NASP materials effort scored a major advance with Beta-21S, but the genuinely radical possibilities failed to emerge. These included carbon-carbon as primary structure, along with alloys of beryllium that were rated for temperatures well above 1,000°F. The latter, if available, might have led to a primary structure with the strength and temperature resistance of Beta-21S but with less than half the weight. Indeed, such weight savings would have ramified through the entire design, leading to a configuration that would have been smaller and lighter overall.

Overall, work with materials fell well short of its goals. In dealing with structures and materials, the contractors and the National Program Office established 19 program milestones that were to be accomplished by September 1993. A General Accounting Office program review, issued in December 1992, noted that only six of them would indeed be completed.63 This slow progress encouraged conservatism in drawing up the bill of materials, but this conservatism carried a penalty.

When the scramjets faltered in their calculated performance and the X-30 gained weight while falling short of orbit, designers lacked recourse to new and very light materials—structural carbon-carbon, high-temperature beryllium—that might have saved the situation. With this, NASP spiraled to its end. It also left its support­ers with renewed appreciation for rockets as launch vehicles, which had been flying to orbit for decades.


The Move Toward Missiles

In August 1945 it took little imagination to envision that the weapon of the future would be an advanced V-2, carrying an atomic bomb as the warhead and able to cross oceans. It took rather more imagination, along with technical knowledge, to see that this concept was so far beyond the state of the art as not to be worth pursu­ing. Thus, in December Vannevar Bush, wartime head of the Office of Scientific Research and Development, gave his views in congressional testimony:

“There has been a great deal said about a 3,000 miles high-angle rocket.

In my opinion, such a thing is impossible for many years. The people have been writing these things that annoy me, have been talking about a 3,000 mile high-angle rocket shot from one continent to another, carrying an atomic bomb and so directed as to be a precise weapon which would land exactly on a certain target, such as a city. I say, technically, I don’t think anyone in the world knows how to do such a thing, and I feel confident that it will not be done for a very long period of time to come. I think we can leave that out of our thinking.”1

Propulsion and re-entry were major problems, but guidance was worse. For intercontinental range, the Air Force set the permitted miss distance at 5,000 feet and then at 1,500 feet. The latter equaled the error of experienced bombardiers who were using radar bombsights to strike at night from 25,000 feet. The view at the Pentagon was that an ICBM would have to do as well when flying all the way to Moscow. This accuracy corresponded to hitting a golf ball a mile and having it make a hole in one. Moreover, each ICBM was to do this entirely through auto­matic control.2
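The golf-ball analogy can be checked with rough arithmetic, assuming the regulation 4.25-inch cup and the intercontinental range of about 5,500 nautical miles cited below for Atlas:

\[ \frac{1{,}500\ \text{ft}}{5{,}500\ \text{nmi} \times 6{,}076\ \text{ft/nmi}} \approx 4.5\times10^{-5}, \qquad \frac{4.25\ \text{in}}{63{,}360\ \text{in/mi}} \approx 6.7\times10^{-5}. \]

The two angular ratios are of the same order of magnitude, so the comparison is apt.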

The Air Force therefore emphasized bombers during the early postwar years, paying little attention to missiles. Its main program, such as it was, called for a mis­sile that was neither ballistic nor intercontinental. It was a cruise missile, which was to solve its guidance problem by steering continually. The first thoughts dated to November 1945. At North American Aviation, chief engineer Raymond Rice and chief scientist William Bollay proposed to “essentially add wings to the V-2 and design a missile fundamentally the same as the A-9.”

Like the supersonic wind tunnel at the Naval Ordnance Laboratory, here was another concept that was to carry a German project to completion. The initial design had a specified range of 500 miles,3 which soon increased. Like the A-9, this missile—designated MX-770—was to follow a boost-glide trajectory and then extend its range with a supersonic glide. But by 1948 the U. S. Air Force had won its independence from the Army and had received authority over missile programs with ranges of 1,000 miles and more. Shorter-range missiles remained the con­cern of the Army. Accordingly, late in February, Air Force officials instructed North American to stretch the range of the MX-770 to a thousand miles.

A boost-glide trajectory was not well suited for a doubled range. At Wright Field, the Air Force development center, Colonel M. S. Roth proposed to increase the range by adding ramjets.4 This drew on work at Wright, where the Power Plant Laboratory had a Nonrotating Engine Branch that was funding development of both ramjets and rocket engines. Its director, Weldon Worth, dealt specifically with ramjets.5 A modification of the MX-770 design added two ramjet engines, mounting them singly at the tips of the vertical fins.6 The missile also received a new name: Navaho. This reflected a penchant at North American for names beginning with “NA.”7

Then, within a few months during 1949 and 1950, the prospect of world war emerged. In 1949 the Soviets exploded their first atomic bomb. At nearly the same time, China’s Mao Zedong defeated the Nationalists of Chiang Kai-shek and pro­claimed the People’s Republic of China. The Soviets had already shown aggressive­ness by subverting the democratic government of Czechoslovakia and by blockading Berlin. These new developments raised the prospect of a unified communist empire armed with the industry that had defeated the Nazis, wielding atomic weapons, and deploying the limitless manpower of China.

President Truman responded both publicly and with actions that were classi­fied. In January 1950 he announced a stepped-up nuclear program, directing “the Atomic Energy Commission to continue its work on all forms of atomic weapons, including the so-called hydrogen or super bomb.” In April he gave his approval to a secret policy document, NSC-68. It stated that the United States would resist com­munist expansion anywhere in the world and would devote up to twenty percent of the gross national product to national defense.8 Then in June, in China’s back yard, North Korea invaded the South, and America again was at war.

These events had consequences for the missile program, as the design and mis­sion of Navaho changed dramatically during 1950. Bollay’s specialists, working with Air Force counterparts, showed that they could anticipate increases in its range to as much as 5,500 nautical miles. Conferences among Air Force officials, held at the Pentagon in August, set this intercontinental range as a long-term goal. A letter from Major General Donald Putt, Director of Research and Development within the Air Materiel Command, became the directive instructing North American to pursue this objective. An interim version, Navaho II, with range of 2,500 nautical miles, appeared technically feasible. The full-range Navaho III represented a long­term project that was slated to go forward as a parallel effort.

The thousand-mile Navaho of 1948 had taken approaches based on the V-2 to their limit. Navaho II, the initial focus of effort, took shape as a two-stage missile with a rocket-powered booster. The booster was to use two such engines, each with thrust of 120,000 pounds. A ramjet-powered second stage was to ride it during initial ascent, accelerating to the supersonic speed at which the ramjet engines could produce their rated thrust. This second stage was then to fly onward as a cruise mis­sile, at a planned flight speed of Mach 2.75.9

A rival to Navaho soon emerged. At Convair, structural analyst Karel Bossart held a strong interest in building an ICBM. As a prelude, he had built three rockets in the shape of a subscale V-2 and had demonstrated his ideas for lightweight struc­ture in flight test. The Rand Corporation, an influential Air Force think tank, had been keeping an eye on this work and on the burgeoning technology of missiles. In December 1950 it issued a report stating that long-range ballistic missiles now were in reach. A month later the Air Force responded by giving Bossart, and Convair, a new study contract. In August 1951 he christened this missile Atlas, after Convair’s parent company, the Atlas Corporation.

The initial concept was a behemoth. Carrying an 8,000-pound warhead, it was to weigh 670,000 pounds, stand 160 feet tall by 12 feet in diameter, and use seven of Bollay’s new 120,000-pound engines. It was thoroughly unwieldy and repre­sented a basis for further studies rather than a concept for a practical weapon. Still, it stood as a milestone. For the first time, the Air Force had a concept for an ICBM that it could pursue using engines that were already in development.10

For the ICBM to compete with Navaho, it had to shrink considerably. Within the Air Force’s Air Research and Development Command, Brigadier General John Sessums, a strong advocate of long-range missiles, proposed that this could be done by shrinking the warhead. The size and weight of Atlas were to scale in proportion with the weight of its atomic weapon, and Sessums asserted that new developments in warhead design indeed would give high yield while cutting the weight.

He carried his argument to the Air Staff, which amounted to the Air Force’s board of directors. This brought further studies, which indeed led to a welcome reduction in the size of Atlas. The concept of 1953 called for a length of 110 feet and a loaded weight of 440,000 pounds, with the warhead tipping the scale at only 3,000 pounds. The number of engines went down from seven to five.11

There also was encouraging news in the area of guidance. Radio guidance was out of the question for an operational missile; it might be jammed or the ground-based guidance center might be destroyed in an attack. Instead, missile guidance was to be entirely self-contained. All concepts called for the use of sensitive accelerometers along with an onboard computer, to determine velocity and location. Navaho was to add star trackers, which were to null out errors by tracking stars even in daylight. In addition, Charles Stark Draper of MIT was pursuing inertial guidance, which was to use no external references of any sort. His 1949 system was not truly inertial, for it included a magnetic compass and a Sun-seeker. But when flight-tested aboard a B-29, over distances as great as 1,737 nautical miles, it showed a mean error of only 5 nautical miles.12

For Atlas, though, the permitted miss distance remained at 1,500 feet, with the range being 5,500 nautical miles. The program plan of October 1953 called for a leisurely advance over the ensuing decade, with research and development being completed only “sometime after 1964,” and operational readiness being achieved in 1965. The program was to emphasize work on the major components: propulsion, guidance, nose cone, lightweight structure. In addition, it was to conduct extensive ground tests before proceeding toward flight.13

This concept continued to call for an atomic bomb as the warhead, but by then the hydrogen bomb was in the picture. The first test version, named Mike, detonated at Eniwetok Atoll in the Pacific on 1 November 1952. Its fireball spread so far and fast as to terrify distant observers, expanding until it was more than three miles across. “The thing was enormous,” one man said. “It looked as if it blotted out the whole horizon, and I was standing 30 miles away.” The weapons designer Theodore Taylor described it as “so huge, so brutal—as if things had gone too far. When the heat reached the observers, it stayed and stayed and stayed, not for seconds but for minutes.” Mike yielded 10.4 megatons, nearly a thousand times greater than the 13 kilotons of the Hiroshima bomb of 1945.

Mike weighed 82 tons.14 It was not a weapon; it was a physics experiment. Still, its success raised the prospect that warheads of the future might be smaller and yet might increase sharply in explosive power. Theodore von Karman, chairman of the Air Force Scientific Advisory Board, sought estimates from the Atomic Energy Commission of the size and weight of future bombs. The AEC refused to release this information. Lieutenant General James Doolittle, Special Assistant to the Air Force Chief of Staff, recommended creating a special panel on nuclear weapons within the SAB. This took form in March 1953, with the mathematician John von Neumann as its chairman. Its specialists included Hans Bethe, who later won the Nobel Prize, and Norris Bradbury, who headed the nation’s nuclear laboratory at Los Alamos, New Mexico.

In June this group reported that a thermonuclear warhead with the 3,000-pound Atlas weight could have a yield of half a megaton. This was substantially higher than that of the pure-fission weapons considered previously. It gave renewed strength to the prospect of a less stringent aim requirement, for Atlas now might miss by far more than 1,500 feet and still destroy its target.

Three months later the Air Force Special Weapons Center issued its own esti­mate, anticipating that a hydrogen bomb of half-megaton yield could weigh as little as 1,500 pounds. This immediately opened the prospect of a further reduction in the size of Atlas, which might fall in weight from 440,000 pounds to as little as 240,000. Such a missile also would need fewer engines.15
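A rough check, assuming the simple proportional scaling that Sessums invoked, shows how the lighter warhead translated into a smaller missile:

\[ 440{,}000\ \text{lb} \times \frac{1{,}500\ \text{lb}}{3{,}000\ \text{lb}} \;=\; 220{,}000\ \text{lb}, \]

which is close to the 240,000 pounds estimated above; the actual redesign also changed the number of engines and the structure, so strict proportionality was never expected.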

Also during September, Bruno Augenstein of the Rand Corporation launched a study that sought ways to accelerate the development of an ICBM. In Washing­ton, Trevor Gardner was Special Assistant for Research and Development, report­ing to the Air Force Secretary. In October he set up his own review committee. He recruited von Neumann to serve anew as its chairman and then added a dazzling array of talent from Caltech, Bell Labs, MIT, and Hughes Aircraft. In Gardner’s words, “The aim was to create a document so hot and of such eminence that no one could pooh-pooh it.”16

He called his group the Teapot Committee. He wanted particularly to see it call for less stringent aim, for he believed that a 1,500-foot miss distance was preposterous. The Teapot Committee drew on findings by Augenstein’s group at Rand, which endorsed a 1,500-pound warhead and a three-mile miss distance. The formal Teapot report, issued in February 1954, declared “the military requirement” on miss distance “should be relaxed from the present 1,500 feet to at least two, and probably three, nautical miles.” Moreover, “the warhead weight might be reduced as far as 1,500 pounds, the precise figure to be determined after the Castle tests and by missile systems optimization.”17

The latter recommendation invoked Operation Castle, a series of H-bomb tests that began a few weeks later. The Mike shot of 1952 had used liquid deuterium, a form of liquid hydrogen. It existed at temperatures close to absolute zero and demanded much care in handling. But the Castle series was to test devices that used lithium deuteride, a dry powder that resembled salt. The Mike approach had been chosen because it simplified the weapons physics, but a dry bomb using lithium promised to be far more practical.

The first such bomb was detonated on 1 March as Castle Bravo. It produced 15 megatons, as its fireball expanded to almost four miles in diameter. Other Castle H-bombs performed similarly, as Castle Romeo went to 11 megatons and Castle Yankee, a variant of Romeo, reached 13.5 megatons. “I was on a ship that was 30 miles away,” the physicist Marshall Rosenbluth recalls about Bravo, “and we had this horrible white stuff raining out on us.” It was radioactive fallout that had condensed from vaporized coral. “It was pretty frightening. There was a huge fireball with these turbulent rolls going in and out. The thing was glowing. It looked to me like a diseased brain.” Clearly, though, bombs of the lithium type could be as powerful as anyone wished—and these test bombs were readily weaponizable.18

The Castle results, strongly complementing the Rand and Teapot reports, cleared the way for action. Within the Pentagon, Gardner took the lead in pushing for Atlas. On 11 March he met with Air Force Secretary Harold Talbott and with the Chief of Staff, General Nathan Twining. He proposed a sped-up program that would nearly double the Fiscal Year (FY) 1955 Atlas budget and would have the first missiles ready to launch as early as 1958. General Thomas White, the Vice Chief of Staff, weighed in with his own endorsement later that week, and Talbott responded by directing Twining to accelerate Atlas immediately.

White carried the ball to the Air Staff, which held responsibility for recommending approval of new programs. He told its members that “ballistic missiles were here to stay, and the Air Staff had better realize this fact and get on with it.” Then on 14 May, having secured concurrence from the Secretary of Defense, White gave Atlas the highest Air Force development priority and directed its acceleration “to the maximum extent that technology would allow.” Gardner declared that White’s order meant “the maximum effort possible with no limitation as to funding.”19

This was a remarkable turnaround for a program that at the moment lacked even a proper design. Many weapon concepts have gone as far as the prototype stage without winning approval, but Atlas gained its priority at a time when the accepted configuration still was the 440,000-pound, five-engine concept of 1953. Air Force officials still had to establish a formal liaison with the AEC to win access to information on projected warhead designs. Within the AEC, lightweight bombs still were well in the future. A specialized device, tested in the recent series as Castle Nectar, delivered 1.69 megatons but weighed 6,520 pounds. This was four times the warhead weight proposed for Atlas.

But in October the AEC agreed that it could develop warheads weighing 1,500 to 1,700 pounds, with a yield of one megaton. This opened the door to a new Atlas design having only three engines. It measured 75 feet long and 10 feet in diameter, with a weight of 240,000 pounds—and its miss distance could be as great as five miles. This took note of the increased yield of the warhead and further eased the problem of guidance. The new configuration won Air Force approval in December.20

Winged Spacecraft and Dyna-Soar

Boost-glide rockets, with wings, entered the realm of advanced conceptual design with postwar studies at Bell Aircraft called Bomi, Bomber Missile. The director of the work, Walter Dornberger, had headed Germany’s wartime rocket development program and had been in charge of the V-2. The new effort involved feasibility studies that sought to learn what might be done with foreseeable technology, but Bomi was a little too advanced for some of Dornberger’s colleagues. Historian Roy Houchin writes that when Dornberger faced “abusive and insulting remarks” from an Air Force audience, he responded by declaring that his Bomi would be receiving more respect if he had had the chance to fly it against the United States during the war. In Houchin’s words, “The silence was deafening.”1


The initial Bomi concept, dating back to 1951, took form as an in-house effort. It called for a two-stage rocket, with both stages being piloted and fitted with delta wings. The lower stage was mostly of aluminum, with titanium leading edges and nose; the upper stage was entirely of titanium and used radiative cooling. With an initial range of 3,500 miles, it was to come over the target above 100,000 feet and at speeds greater than Mach 4. Operational concepts called for bases in England or Spain, targets in the western Soviet Union, and a landing site in northern Africa.2

During the spring of 1952, Bell officials sought funds for further study from Wright Air Development Center (WADC). A year passed, and WADC responded with a firm no. The range was too short. Thermal protection and onboard cooling raised unanswered questions. Values assumed for L/D appeared highly optimistic, and no information was available on stability, control, or aerodynamic flutter at the proposed speeds. Bell responded by offering to consider higher speeds and greater range. Basic feasibility then lay even farther in the future, but the Air Force’s interest in the Atlas ICBM meant that it wanted missiles of longer range, even though shorter-range designs could be available sooner. An intercontinental Bomi at least could be evaluated as a potential alternative to Atlas, and it might find additional roles such as strategic reconnaissance.3

In April 1954, with that ICBM very much in the ascendancy, WADC awarded Bell its desired study contract. Bomi now had an Air Force designation, MX-2276. Bell examined versions of its two-stage concept with 4,000- and 6,000-mile ranges while introducing a new three-stage configuration with the stages mounted belly-to-back. Liftoff thrust was to be 1.2 million pounds, compared with 360,000 for the three-engine Atlas. Bomi was to use a mix of liquid oxygen and liquid fluorine, the latter being highly corrosive and hazardous, whereas Atlas needed only liquid oxygen, which was much safer. The new Bomi was to reach 22,000 feet per second, slightly less than Atlas, but promised a truly global glide range of 12,000 miles. Even so, Atlas clearly was preferable.4

But the need for reconnaissance brought new life to the Bell studies. At WADC, in parallel with initiatives that were sparking interest in unpiloted reconnaissance satellites, officials defined requirements for Special Reconnaissance System 118P. These called initially for a range of 3,500 miles at altitudes above 100,000 feet. Bell won funding in September 1955, as a follow-on to its recently completed MX-2276 activity, and proposed a two-stage vehicle with a Mach 15 glider. In March 1956 the company won a new study contract for what now was called Brass Bell. It took shape as a fairly standard advanced concept of the mid-1950s, with a liquid-fueled expendable first stage boosting a piloted craft that showed sharply swept delta wings. The lower stage was conventional in design, burning Atlas propellants with uprated Atlas engines, but the glider retained the company’s preference for fluorine. Officials at Bell were well aware of its perils, but John Sloop at NACA-Lewis was successfully testing a fluorine rocket engine with 20,000 pounds of thrust, and this gave hope.5

The Brass Bell study contract went into force at a moment when prospects for boost-glide were taking a sharp step upward. In February 1956 General Thomas Power, head of the Air Research and Development Command (ARDC), stated that the Air Force should stop merely considering such radical concepts and begin developing them. High on his list was a weapon called Robo, Rocket Bomber, for which several firms were already conducting in-house work as a prelude to funded study contracts. Robo sought to advance beyond Brass Bell, for it was to circle the globe and hence required near-orbital speed. In June ARDC Headquarters set forth System Requirement 126 that defined the scope of the studies. Convair, Douglas, and North American won the initial awards, with Martin, Bell, and Lockheed later participating as well.

The X-15 by then was well along in design, but it clearly was inadequate for the performance requirements of Brass Bell and Robo. This raised the prospect of a new and even more advanced experimental airplane. At ARDC Headquarters, Major George Colchagoff took the initiative in pursuing studies of such a craft, which took the name HYWARDS: Hypersonic Weapons Research and Development Supporting System. In November 1956 the ARDC issued System Requirement 131, thereby placing this new X-plane on the agenda as well.6

The initial HYWARDS concept called for a flight speed of Mach 12. However, in December Bell Aircraft raised the speed of Brass Bell to Mach 18. This increased the boost-glide range to 6,300 miles, but it opened a large gap between the perfor­mance of the two craft, inviting questions as to the applicability of HYWARDS results. In January a group at NACA-Langley, headed by John Becker, weighed in with a report stating that Mach 18, or 18,000 feet per second, was appropriate for HYWARDS. The reason was that “at this speed boost gliders approached their peak heating environment. The rapidly increasing flight altitudes at speeds above Mach 18 caused a reduction in the heating rates.”7
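Becker's point can be illustrated with a standard equilibrium-glide argument; this sketch is not from the Langley report and ignores angle of attack and trajectory detail. If lift, reduced by centrifugal relief, must support the weight, then the required air density falls steeply as velocity \(V\) approaches circular orbital velocity \(V_c\); and since stagnation-point heating rate varies roughly as \(\sqrt{\rho}\,V^3\),

\[ \tfrac{1}{2}\rho V^2 S C_L = W\left(1-\frac{V^2}{V_c^2}\right) \;\Rightarrow\; \rho \propto \frac{1-V^2/V_c^2}{V^2}, \qquad \dot{q} \;\propto\; \sqrt{\rho}\,V^3 \;\propto\; V^2\sqrt{1-\frac{V^2}{V_c^2}}, \]

an expression that rises with speed up to a substantial fraction of \(V_c\) and then falls as the equilibrium glide altitude climbs, which is the behavior Becker's group cited near Mach 18.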

With the prospect now strong that Brass Bell and HYWARDS would have the same flight speed, there was clear reason not to pursue them as separate projects but to consolidate them into a single program. A decision at Air Force Headquarters, made in March 1957, accomplished this and recognized their complementary char­acters. They still had different goals, with HYWARDS conducting flight research and Brass Bell being the operational reconnaissance system, but HYWARDS now was to stand as a true testbed.8

Robo still was a separate project, but events during 1957 brought it into the fold as well. In June an ad hoc review group, which included members from ARDC and WADC, looked at Robo concepts from contractors. Robert Graham, a NACA attendee, noted that most proposals called for “a boost-glide vehicle which would fly at Mach 20-25 at an altitude above 150,000 feet.” This was well beyond the state of the art, but the panel concluded that with several years of research, an experimental craft could enter flight test in 1965, an operational hypersonic glider in 1968, and Robo in 1974.9

On 10 October—less than a week after the Soviets launched their first Sputnik—ARDC endorsed this three-part plan by issuing a lengthy set of reports, “Abbreviated Systems Development Plan, System 464L—Hypersonic Strategic Weapon System.” It looked ahead to a research vehicle capable of 18,000 feet per second and 350,000 feet, to be followed by Brass Bell with the same speed and 170,000 feet, and finally Robo, rated at 25,000 feet per second and 300,000 feet but capable of orbital flight.

The ARDC’s Lieutenant Colonel Carleton Strathy, a division chief and a strong advocate of program consolidation, took the proposed plan to Air Force Head­quarters. He won endorsement from Brigadier General Don Zimmerman, Deputy


Top and side views of Dyna-Soar. (U. S. Air Force)

Director of Development Planning, and from Brigadier General Homer Boushey, Deputy Director of Research and Development. NACA’s John Crowley, Associate Director for Research, gave strong approval to the proposed test vehicle, viewing it as a logical step beyond the X-15. On 25 November, having secured support from his superiors, Boushey issued Development Directive 94, allocating $3 million to proceed with more detailed studies following a selection of contractors.10

The new concept represented another step in the sequence that included Eugen Sanger’s Silbervogel, his suborbital skipping vehicle, and, among live rocket craft, the X-15. It was widely viewed as a tribute to Sanger, who was still living. It took the name Dyna-Soar, which drew on “dynamic soaring,” Sanger’s name for his skipping technique, and which also stood for “dynamic ascent and soaring flight,” or boost-glide. Boeing and Martin emerged as the finalists in June 1958, with their roles being defined in November 1959. Boeing was to take responsibility for the winged spacecraft. Martin, described as the associate contractor, was to provide the Titan missile that would serve as the launch vehicle.11

The program now demanded definition of flight modes, configuration, struc­ture, and materials. The name of Sanger was on everyone’s lips, but his skipping flight path had already proven to be uncompetitive. He and his colleague Bredt had treated its dynamics, but they had not discussed the heating. That task fell to NACA’s Allen and Eggers, along with their colleague Stanford Neice.

In 1954, following their classic analysis of ballistic re-entry, Eggers and Allen turned their attention to comparison of this mode with boost-glide and skipping entries. They assumed the use of active cooling and found that boost-glide held the advantage:

The glide vehicle developing lift-drag ratios in the neighborhood of 4 is far superior to the ballistic vehicle in ability to convert velocity into range. It has the disadvantage of having far more heat convected to it; however, it has the compensating advantage that this heat can in the main be radiated back to the atmosphere. Consequently, the mass of coolant material may be kept relatively low.

A skip vehicle offered greater range than the alternatives, in line with Sanger’s advocacy of this flight mode. But it encountered more severe heating, along with high aerodynamic loads that necessitated a structurally strong and therefore heavy vehicle. Extra weight meant extra coolant, with the authors noting that “ulti­mately the coolant is being added to cool coolant. This situation must obviously be avoided.” They concluded that “the skip vehicle is thought to be the least promising of the three types of hypervelocity vehicle considered here.”12
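The ability of a glider to convert velocity into range, which Eggers and Allen emphasize, is often summarized by the equilibrium-glide range formula for constant lift-drag ratio; the expression below is a standard approximation and is not quoted from their paper.

\[ R \;\approx\; \frac{L}{D}\cdot\frac{R_E}{2}\,\ln\!\frac{1}{1-(V/V_c)^2}, \]

where \(R_E\) is the Earth's radius and \(V_c\) the circular orbital velocity. With L/D of 4 and an entry speed of 22,000 feet per second, this gives a glide range on the order of 8,000 to 9,000 nautical miles, which shows why even a modest lift-drag ratio stretches range so dramatically compared with a ballistic trajectory.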

Following this comparative assessment of flight modes, Eggers worked with his colleague Clarence Syvertson to address the issue of optimum configuration. This issue had been addressed for the X-15; it was a mid-wing airplane that generally resembled the high-performance fighters of its era. In treating Dyna-Soar, following the Robo review of mid-1957, NACA’s Robert Graham wrote that “high-wing, mid­wing and low-wing configurations were proposed. All had a highly swept wing, and a small angle cone as the fuselage or body.” This meant that while there was agree­ment on designing the fuselage, there was no standard way to design the wing.13

Eggers and Syvertson proceeded by treating the design problem entirely as an exercise in aerodynamics. They concluded that the highest values of L/D were attainable by using a high-wing concept with the fuselage mounted below as a slender half-cone and the wing forming a flat top. Large fins at the wing tips, canted sharply downward, directed the airflow under the wings downward and increased the lift. Working with a hypersonic wind tunnel at NACA-Ames, they measured a maximum L/D of 6.65 at Mach 5, in good agreement with a calculated value of 6.85.14

This configuration had attractive features, not the least of which was that the base of its half-cone could readily accommodate a rocket engine. Still, it was not long before other specialists began to argue that it was upside down. Instead of having a flat top with the fuselage below, it was to be flipped to place the wing below the fuselage, giving it a flat bottom. This assertion came to the forefront during Becker’s HYWARDS study, which identified its preferred velocity as 18,000 feet per second. His colleague Peter Korycinski worked with Becker to develop heating analyses of flat-top and flat-bottom candidates, with Roger Anderson and others within Langley’s Structures Division providing estimates for the weight of thermal protection.

A simple pair of curves, plotted on graph paper, showed that under specified assumptions the flat-bottom weight at that velocity was 21,400 pounds and was increasing at a modest rate at higher speeds. The flat-top weight was 27,600 pounds and was rising steeply. Becker wrote that the flat-bottom craft placed its fuselage “in the relatively cool shielded region on the top or lee side of the wing—i. e., the wing was used in effect as a partial heat shield for the fuselage— This ‘flat-bottomed’ design had the least possible critical heating area…and this translated into least circulating coolant, least area of radiative heat shields, and least total thermal protection in flight.”15

These approaches—flat-top at Ames, flat-bottom at Langley—brought a debate between these centers that continued through 1957. At Ames, the continuing strong interest in high L/D reflected an ongoing emphasis on excellent supersonic aerody­namics for military aircraft, which needed high L/D as a matter of course. To ease the heating problem, Ames held for a time to a proposed speed of 11,000 feet per second, slower than the Langley concept but lighter in weight and more attainable in technology while still offering a considerable leap beyond the X-15. Officials at NACA diplomatically described the Ames and Langley HYWARDS concepts respectively as “high L/D” and “low heating,” but while the debate continued, there remained no standard approach to the design of wings for a hypersonic glider.16

There was a general expectation that such a craft would require active cooling. Bell Aircraft, which had been studying Bomi, Brass Bell, and lately Robo, had the most experience in the conceptual design of such arrangements. Its Brass Bell of 1957, designed to enter its glide at 18,000 feet per second and 170,000 feet in alti­tude, featured an actively cooled insulated hot structure. The primary or load-bear­ing structure was of aluminum and relied on cooling in a closed-loop arrangement that used water-glycol as the coolant. Wing leading edges had their own closed-loop cooling system that relied on a mix of sodium and potassium metals. Liquid hydro­gen, pumped initially to 1,000 pounds per square inch, flowed first through a heat exchanger and cooled the heated water-glycol, then proceeded to a second heat exchanger to cool the hot sodium-potassium. In an alternate design concept, this gas cooled the wing leading edges directly, with no intermediate liquid-metal cool­ant loop. The warmed hydrogen ran a turbine within an onboard auxiliary power unit and then was exhausted overboard. The leading edges reached a maximum temperature of 1,400°F, for which Inconel X was a suitable material.17

During August of that year Becker and Korycinski launched a new series of studies that further examined the heating and thermal protection of their flat-bottom glider. They found that for a glider of global range, flying with angle of attack of 45 degrees, an entry trajectory near the upper limit of permissible altitudes gave peak uncooled skin temperatures of 2,000°F. This appeared achievable with improved metallic or ceramic hot structures. Accordingly, no coolant at all was required!18

This conclusion, published in 1959, influenced the configura­tion of subsequent boost-glide vehi­cles—Dyna-Soar, the space shut­tle—much as the Eggers-Allen paper of 1953 had defined the blunt-body shape for ballistic entry. Prelimi­nary and unpublished results were in hand more than a year prior to publication, and when the prospect emerged of eliminating active cool­ing, the concepts that could do this were swept into prominence. They were of the flat-bottom type, with Dyna-Soar being the first to proceed into mainstream development.

This uncooled configuration proved robust enough to accommodate substantial increases in flight speed and performance. In 1959 Herbert York, the Defense Director of Research and Engineering, stated that Dyna-Soar was to fly at 15,000 miles per hour. This was well above the planned speed of Brass Bell but still below orbital velocity. During subsequent years the booster changed from Martin’s Titan I to the more capable Titan II and then to the powerful Titan III-C, which could easily boost it to orbit. A new plan, approved in December 1961, dropped suborbital missions and called for “the early attainment of orbital flight.” Subsequent planning anticipated that Dyna-Soar would reach orbit with the Titan III upper stage, execute several circuits of the Earth, and then come down from orbit by using this stage as a retrorocket.19

After that, though, advancing technical capabilities ran up against increasingly stringent operational requirements. The Dyna-Soar concept had grown out of HYWARDS, being intended initially to serve as a testbed for the reconnaissance


Full-scale model of Dyna-Soar, on display at an Air Force exhibition in 1962. The scalloped pat­tern on the base was intended to suggest Sanger’s skipping entry. (Boeing Company archives)

boost-glider Brass Bell and for the manned rocket-powered bomber Robo. But the rationale for both projects became increasingly questionable during the early 1960s. The hypersonic Brass Bell gave way to a new concept, the Manned Orbiting Laboratory (MOL), which was to fly in orbit as a small space station while astronauts took reconnaissance photos. Robo fell out of the picture completely, for the success of the Minuteman ICBM, which used solid propellant, established such missiles as the nation’s prime strategic force. Some people pursued new concepts that continued to hold out hope for Dyna-Soar applications, with satellite interception standing in the forefront. The Air Force addressed this with studies of its Saint project, but Dyna-Soar proved unsuitable for such a mission.20

Dyna-Soar was a potentially superb technology demonstrator, but Defense Sec­retary Robert McNamara took the view that it had to serve a military role in its own right or lead to a follow-on program with clear military application. The cost of Dyna-Soar was approaching a billion dollars, and in October 1963 he declared that he could not justify spending such a sum if it was a dead-end program with no ultimate purpose. He canceled it on 10 December, noting that it was not to serve as a cargo rocket, could not carry substantial payloads, and could not stay in orbit for


Artist’s rendering showing Dyna-Soar in orbit. (Boeing Company archives)

long durations. He approved MOL as a new program, thereby giving the Air Force continuing reason to hope that it would place astronauts in orbit, but stated that Dyna-Soar would serve only “a very narrow objective.”21

At that moment the program called for production of 10 flight vehicles, and Boeing had completed some 42 percent of the necessary tasks. McNamara’s decision therefore was controversial, particularly because the program still had high-level supporters. These included Eugene Zuckert, Air Force Secretary; Alexander Flax, Assistant Secretary for Research and Development; and Brockway McMillan, Zuckert’s Under Secretary and Flax’s predecessor as Assistant Secretary. Still, McNamara gave more attention to Harold Brown, the Defense Director of Research and Engineering, who made the specific proposal that McNamara accepted: to cancel Dyna-Soar and proceed instead with MOL.22

Dyna-Soar never flew. The program had expended $410 million when canceled, but the schedule still called for another $373 million, and the vehicle was still some two and a half years away from its first flight. Even so, its technology remained avail­able for further development, contributing to the widening prospects for reentry that marked the era.23