Facing the Heat Barrier: A History of Hypersonics

Widening Prospects for Re-entry

The classic spaceship has wings, and throughout much of the 1950s both NACA and the Air Force struggled to invent such a craft. Design studies addressed issues as fundamental as whether this hypersonic rocket plane should have one particular wing-body configuration, or whether it should be upside down. The focus of the work was Dyna-Soar, a small version of the space shuttle that was to ride to orbit atop a Titan III. It brought remarkable engineering advances, but Pentagon policy makers, led by Defense Secretary Robert McNamara, saw it as offering little more than technical development, with no mission that could offer a military justification. In December 1963 he canceled it.

Better prospects attended NASA’s effort in manned spaceflight, which culminated in the Apollo piloted flights to the Moon. Apollo used no wings; rather, it relied on a simple cone that used the Allen-Eggers blunt-body principle. Still, its demands were stringent. It had to re-enter successfully with twice the energy of an entry from Earth orbit. Then it had to navigate a corridor, a narrow range of altitudes, to bleed off energy without either skipping back into space or encountering g-forces that were too severe. By doing these things, it showed that hypersonics was ready for this challenge.


No aircraft has ever cruised at Mach 5, and an important reason involves structures and materials. “If I cruise in the atmosphere for two hours,” says Paul Czysz of McDonnell Douglas, “I have a thousand times the heat load into the vehicle that the shuttle gets on its quick transit of the atmosphere.” The thermal environment of the X-30 was defined by aerodynamic heating and by the separate issue of flutter.48

A single concern dominated issues of structural design: The vehicle was to fly as low as possible in the atmosphere during ascent to orbit. Re-entry called for flight at higher altitudes, and the loads during ascent therefore were higher than those of re-entry. Ascent at lower altitude—200,000 feet, for instance, rather than 250,000—increased the drag on the X-30. But it also increased the thrust, giving a greater margin between thrust and drag that led to increased acceleration. Considerations of ascent, not re-entry, therefore shaped the selection of temperature-resistant materials.

Yet the aircraft could not fly too low, or it would face limits set by aerodynamic flutter. This resulted from forces on the vehicle that were not steady but oscillated, at frequencies of oscillation that changed as the vehicle accelerated and lost weight. The wings tended to vibrate at characteristic frequencies, as when bent upward and released to flex up and down. If the frequency of an aerodynamic oscillation matched that at which the wings were prone to flex, the aerodynamic forces could tear the wings off. Stiffness in materials, not strength, was what resisted flutter, and the vehicle was to fly a “flutter-limited trajectory,” staying high enough to avoid the problem.
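The flutter constraint above can be sketched quantitatively. The following is an illustrative textbook simplification, not a figure or formula from the NASP program: flutter sets in roughly when dynamic pressure exceeds a critical value fixed by the structure's stiffness, which is why flying higher, where density is lower, keeps the vehicle below the limit.

```latex
% Illustrative flutter criterion (textbook simplification, not NASP data).
% Dynamic pressure:
q = \tfrac{1}{2}\,\rho V^{2}
% Flutter onset occurs roughly when q exceeds a critical value q_F set by
% structural stiffness. For a given structure q_F is fixed, so the
% flutter-limited speed scales as
V_F \propto \sqrt{\frac{2\,q_F}{\rho}}
% As altitude rises, the density \rho falls and V_F rises, giving the
% "flutter-limited trajectory" its altitude floor.
```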

Materials

The mechanical properties of metals depend on their fine-grained structure. An ingot of metal consists of a mass of interlaced grains or crystals, and small grains give higher strength. Quenching, plunging hot metal into water, yields small grains but often makes the metal brittle or hard to form. Alloying a metal, as by adding small quantities of carbon to make steel, is another traditional practice. However, some additives refuse to dissolve or separate out from the parent metal as it cools.

Ascent trajectory of an airbreather. (NASA)

To overcome such restrictions, techniques of powder metallurgy were in the forefront. These methods gave direct control of the microstructure of metals by forming them from powder, with the grains of powder sintering or welding together by being pressed in a mold at high temperature. A manufacturer could control the grain size independently of any heat-treating process. Powder metallurgy also overcame restrictions on alloying by mixing in the desired additives as powdered ingredients.

Several techniques existed to produce the powders. Grinding a metal slab to sawdust was the simplest, yielding relatively coarse grains. “Splat-cooling” gave better control. It extruded molten metal onto the chilled rim of a rotating wheel, which cooled it instantly into a thin ribbon. This represented a quenching process that produced a fine-grained microstructure in the metal. The ribbon then was chemically treated with hydrogen, which made it brittle, so that it could be ground into a fine powder. Heating the powder then drove off the hydrogen.

The Plasma Rotating Electrode Process, developed by the firm of Nuclear Metals, showed particular promise. The parent metal was shaped into a cylinder that rotated at up to 30,000 revolutions per minute and served as an electrode. An electric arc melted the spinning metal, which threw off droplets within an atmosphere of cool inert helium. The droplets plummeted in temperature by thousands of degrees within milliseconds, and their microstructures were so fine as to approach an amorphous state. Their molecules did not form crystals, even tiny ones, but arranged themselves in formless patterns. This process, called “rapid solidification,” promised particular gains in high-temperature strength.

Standard titanium alloys, for instance, lost strength at temperatures above 700 to 900°F. By using rapid solidification, McDonnell Douglas raised this limit to 1,100°F prior to 1986. Philip Parrish, the manager of powder metallurgy at DARPA, noted that his agency had spent some $30 million on rapid-solidification technology since 1975. In 1986 he described it as “an established technology. This technology now can stand alongside such traditional methods as ingot casting or drop forging.”49

Nevertheless 1,100°F was not enough, for it appeared that the X-30 needed a material that was rated at 1,700°F. This stemmed from the fact that for several years, NASP design and trajectory studies indicated that a flight vehicle indeed would face such temperatures on its fuselage. But after 1990 the development of new baseline configurations led to an appreciation that the pertinent areas of the vehicle would face temperatures no higher than 1,500°F. At that temperature, advanced titanium alloys could serve in “metal matrix composites,” with thin-gauge metals being reinforced with fibers.

The new composition came from the firm of Titanium Metals and was designated Beta-21S. That company developed it specifically for the X-30 and patented it in 1989. It consisted of titanium along with 15 percent molybdenum, 2.8 percent columbium, 3 percent aluminum, and 0.2 percent silicon. Resistance to oxidation proved to be its strong suit, with this alloy showing resistance that was two orders of magnitude greater than that of conventional aircraft titanium. Tests showed that it also could be exposed repeatedly to leaks of gaseous hydrogen without being subject to embrittlement. Moreover, it lent itself readily to being rolled to foil-gauge thicknesses of 4 to 5 mil when metal matrix composites were fabricated.50

Comparison of some matrix alloys. (NASA)

Such titanium-matrix composites were used in representative X-30 structures. The Non-Integral Fuselage Tank Article (NIFTA) represented a section of X-30 fuselage at one-fourth scale. It was oblong in shape, eight feet long and measuring four by seven feet in cross section, and it contained a splice. Its skin thickness was 0.040 inches, about the same as for the X-30. It held an insulated tank that could hold either liquid nitrogen or LH2 in tests, which stood as a substantial engineering item in its own right.

The tank had a capacity of 940 gallons and was fabricated of graphite-epoxy composite. No liner protected the tankage on the inside, for graphite-epoxy was impervious to damage by LH2. However, the exterior was insulated with two half-inch thicknesses of Q-felt, a quartz-fiber batting with density of only 3.5 pounds per cubic foot. A thin layer of Astroquartz high-temperature cloth covered the Q-felt. This insulation filled space between the tank wall and the surrounding wall of the main structure, with both this space and the Q-felt being purged with helium.51

The test sequence for NIFTA duplicated the most severe temperatures and stresses of an ascent to orbit. These stresses began on the ground, with the vehicle being heavy with fuel and subject to a substantial bending load. There was also a
large shear load, with portions of the vehicle being pulled transversely in opposite directions. This happened because the landing gear pushed upward to support the entire weight of the craft, while the weight of the hydrogen tank pushed downward only a few feet away. Other major bending and shear loads arose during subsonic climbout, with the X-30 executing a pullup maneuver.

Significant stresses arose near Mach 6 and resulted from temperature differences across the thickness of the stiffened skin. Its outer temperature was to be 800°F, but the tops of the stiffeners, a few inches away, were to be 350°F. These stiffeners were spot-welded to the skin panels, which raised the issue of whether the welds would hold amid the different thermal expansions. Then between Mach 10 and 16, the vehicle was to reach peak temperatures of 1,300°F. The temperature differences between the top and bottom of the vehicle also would be at their maximum.

The tests combined both thermal and mechanical loads and were conducted within a vacuum chamber at Wyle Laboratories during 1991. Banks of quartz lamps applied up to 1.5 megawatts of heat, while jacks imposed bending or shear forces that reached 100 percent of the design limits. Most tests placed nonflammable liquid nitrogen in the tank for safety, but the last of them indeed used LH2. With this supercold fuel at -423°F, the lamps raised the exterior temperature of NIFTA to the full 1,300°F, while the jacks applied the full bending load. A 1993 paper noted “100% successful completion of these tests,” including the one with LH2 that had been particularly demanding.52

NIFTA, again, was at one-fourth scale. In a project that ran from 1991 through the summer of 1994, McDonnell Douglas engineers designed and fabricated the substantially larger Full Scale Assembly. Described as “the largest and most representative NASP fuselage structure built,” it took shape as a component measuring 10 by 12 feet. It simulated a section of the upper mid-fuselage, just aft of the crew compartment.

A 1994 review declared that it “was developed to demonstrate manufacturing and assembly of a full scale fuselage panel incorporating all the essential structural details of a flight vehicle fuselage assembly.” Crafted in flightweight, it used individual panels of titanium-matrix composite that were as large as four by eight feet. These were stiffened with longitudinal members of the same material and were joined to circumferential frames and fittings of Ti-1100, a titanium alloy that used no fiber reinforcement. The complete assembly posed manufacturing challenges because the panels were of minimum thickness, having thinner gauges than had been used previously. The finished article was completed just as NASP was reaching its end, but it showed that the thin panels did not introduce significant problems.53

The firm of Textron manufactured the fibers, designated SCS-6 and -9, that reinforced the composites. As a final touch, in 1992 this company opened the world’s first manufacturing plant dedicated to the production of titanium-matrix composites. “We could get the cost down below a thousand dollars a pound if we had enough volume,” Bill Grant, a company manager, told Aerospace America. His colleague Jim Henshaw added, “We think SCS/titanium composites are fully developed for structural applications.”54

Such materials served to 1,500°F, but on the X-30 substantial areas were to withstand temperatures approaching 3,000°F, which is hotter than molten iron. If a steelworker were to plunge a hand into a ladle of this metal, the hand would explode from the sudden boiling of water in its tissues. In such areas, carbon-carbon was necessary. It had not been available for use in Dyna-Soar, but the Pentagon spent $200 million to fund its development between 1970 and 1985.55

Much of this supported the space shuttle, on which carbon-carbon protected such hot areas as the nose cap and wing leading edges. For the X-30, these areas expanded to cover the entire nose and much of the vehicle undersurface, along with the rudders and both the top and bottom surfaces of the wings. The X-30 was to execute 150 test flights, exposing its heat shield to prolonged thermal soaks while still in the atmosphere. This raised the problem of protection against oxidation.56


Selection of NASP materials based on temperature. (General Accounting Office)

Standard approaches called for mixing oxidation inhibitors into the carbon matrix and covering the surface with a coating of silicon carbide. However, there was a mismatch between the thermal expansions of the coating and the carbon-carbon substrate, which led to cracks. An interlayer of glass-forming sealant, placed between them, produced an impervious barrier that softened at high temperatures to fill the cracks. But these glasses did not flow readily at temperatures below 1,500°F. This meant that air could penetrate the coating and reach the carbon through open cracks to cause loss by oxidation.57

The goal was to protect carbon-carbon against oxidation for all 150 of those test flights, or 250 hours. These missions included 75 to orbit and 75 in hypersonic cruise. The work proceeded initially by evaluating several dozen test samples that were provided by commercial vendors. Most of these materials proved to resist oxidation for only 10 to 20 hours, but one specimen from the firm of Hitco reached 70 hours. Its surface had been grooved to promote adherence of the coating, and it gave hope that long operational life might be achieved.58

Complementing the study of vendors’ samples, researchers ordered new types of carbon-carbon and conducted additional tests. The most durable came from the firm of Rohr, with a coating by Science Applications International. It easily withstood 2,000°F for 200 hours and was still going strong at 2,500°F when the tests stopped after 150 hours. This excellent performance stemmed from its use of large quantities of oxidation inhibitors, which promoted long life, and of multiple glass layers in the coating.

But even the best of these carbon-carbons showed far poorer performance when tested in arcjets at 2,500°F. The high-speed airflows forced oxygen into cracks and pores within the material, while promoting evaporation of the glass sealants. Powerful roars within the arcjets imposed acoustic loads that contributed to cracking, with other cracks arising from thermal shock as test specimens were suddenly plunged into a hot flow stream. The best results indicated lifetimes of less than two hours.

Fortunately, actual X-30 missions were to impose 2,500°F temperatures for only a few minutes during each launch and reentry. Even a single hour of lifetime therefore could permit panels of carbon-carbon to serve for a number of flights. A 1992 review concluded that “maximum service temperatures should be limited to 2,800°F; above this temperature the silicon-based coating systems afford little practical durability,” due to active oxidation. In addition, “periodic replacement of parts may be inevitable.”59
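The arithmetic behind that conclusion is simple. Taking the roughly one hour of arcjet lifetime at 2,500°F, and assuming some three to five minutes of peak heating per flight (a figure consistent with the text's "a few minutes" but not stated explicitly in the source), a single panel could serve for on the order of a dozen or two flights:

```latex
% Illustrative estimate; the 3-5 minute per-flight exposure is an assumption.
N_{\text{flights}} \approx \frac{t_{\text{life}}}{t_{\text{per flight}}}
  \approx \frac{60\ \text{min}}{3\text{--}5\ \text{min}} \approx 12\text{--}20
```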

New work on carbon-carbon, reported in 1993, gave greater encouragement as it raised the prospect of longer lifetimes. The effort evaluated small samples rather than fabricated panels and again used the arcjet installations of NASA-Johnson and Ames. Once again there was an orders-of-magnitude difference in the observed lifetimes of the carbon-carbon, but now the measured lifetimes extended into the hundreds of minutes. A formulation from the firm of Carbon-Carbon Advanced Technologies gave the best results, suggesting 25 reuses for orbital missions of the X-30 and 50 reuses for the less-demanding missions of hypersonic cruise.60

There also was interest in using carbon-carbon for primary structure. Here the property that counted was not its heat resistance but its light weight. In an important experiment, the firm of LTV fabricated half of an entire wing box of this material. An airplane’s wing box is a major element of aircraft structure that joins the wings and provides a solid base for attachment of the fuselage fore and aft. Indeed, one could compare it with the keel of a ship. It extends to left and right of the aircraft centerline, and LTV’s box constituted the portion to the left of this line. Built at full scale, it represented a hot-structure wing proposed by General Dynamics. It measured five by eight feet with a maximum thickness of 16 inches. Three spars ran along its length; five ribs were mounted transversely, and the complete assembly weighed 802 pounds.

The test plan called for it to be pulled upward at the tip to reproduce the bending loads of a wing in flight. Torsion or twisting was to be applied by pulling more strongly on the front or rear spar. The maximum load corresponded to having the X-30 execute a pullup maneuver at Mach 2.2, with the wing box at room temperature. With the ascent continuing and the vehicle undergoing aerodynamic heating, the next key event brought the maximum difference in the temperatures of the top and bottom of the wing box, with the former being 994°F and the latter at 1,671°F. At that moment the load on the wing box corresponded to 34 percent of the Mach 2.2 maximum. Farther along, the wing box was to reach its peak temperature, 1,925°F, on the lower surface. These three points were to be reproduced through mechanical forces applied at the ends of the spars and through the use of graphite heaters.

But several key parts delaminated during their fabrication, seriously compromising the ability of the wing box to bear its specified load. Plans to impose the peak or Mach 2.2 load were abandoned, with the maximum planned load being reduced to the 34 percent associated with the maximum temperature difference. For the same reason, the application of torsion was deleted from the test program. Amid these reductions in the scope of the structural tests, two exercises went forward during December 1991. The first took place at room temperature and successfully reached the mark of 34 percent, without causing further damage to the wing box.

The second test, a week later, reproduced the condition of peak temperature difference while briefly applying the calculated load of 34 percent. The plan then called for further heating to the peak temperature of 1,925°F. As the wing box approached this value, a problem arose due to the use of metal fasteners in its assembly. Some were made from coated columbium and were rated for 2,300°F, but most were of a nickel alloy that had a permissible temperature of 2,000°F. However, an instrumented nickel-alloy fastener overheated and reached 2,147°F. The wing box showed a maximum temperature of 1,917°F at that moment, and the test was terminated because the strength of the fasteners now was in question. This test nevertheless counted as a success because it had come within 8°F of the specified temperature.61

Both tests thus were marked as having achieved their goals, but their merits were largely in the mind of the beholder. The entire project would have been far more impressive if it had avoided delamination, successfully achieved the Mach 2.2 peak load, incorporated torsion, and subjected the wing box to repeated cycles of bending, torsion, and heating. This effort stood as a bold leap toward a future in which carbon-carbon might take its place as a mainstream material, suitable for a hot primary structure, but it was clear that this future would not arrive during the NASP program.

Then there was beryllium. It had only two-thirds the density of aluminum and possessed good strength, but its temperature range was limited. The conventional metal had a limit of some 850°F, but an alloy from Lockheed called Lockalloy, which contained 38 percent aluminum, was rated only for 600°F. It had never become a mainstream engineering material like titanium, but for NASP it offered the advantage of high thermal conductivity. Work with titanium had greatly increased its temperatures of use, and there was hope of achieving similar results with beryllium.

Initial efforts used rapid-solidification techniques and sought temperature limits as high as 1,500°F. These attempts bore no fruit, and from 1988 onward the temperature goal fell lower and lower. In May 1990 a program review shifted the emphasis away from high-temperature formulations toward the development of beryllium as a material suitable for use at cryogenic temperatures. Standard forms of this metal became unacceptably brittle when only slightly colder than -100°F, but cryo-beryllium proved to be out of reach as well. By 1992 investigators were working with ductile alloys of beryllium and were sacrificing all prospect of use at temperatures beyond a few hundred degrees but were winning only modest improvements in low-temperature capability. Terence Ronald, the NASP materials director, wrote in 1995 of rapid-solidification versions with temperature limits as low as 500°F, which was not what the X-30 needed to reach orbit.62

In sum, the NASP materials effort scored a major advance with Beta-21S, but the genuinely radical possibilities failed to emerge. These included carbon-carbon as primary structure, along with alloys of beryllium that were rated for temperatures well above 1,000°F. The latter, if available, might have led to a primary structure with the strength and temperature resistance of Beta-21S but with less than half the weight. Indeed, such weight savings would have ramified through the entire design, leading to a configuration that would have been smaller and lighter overall.

Overall, work with materials fell well short of its goals. In dealing with structures and materials, the contractors and the National Program Office established 19 program milestones that were to be accomplished by September 1993. A General Accounting Office program review, issued in December 1992, noted that only six of them would indeed be completed.63 This slow progress encouraged conservatism in drawing up the bill of materials, but this conservatism carried a penalty.

When the scramjets faltered in their calculated performance and the X-30 gained weight while falling short of orbit, designers lacked recourse to new and very light materials—structural carbon-carbon, high-temperature beryllium—that might have saved the situation. With this, NASP spiraled to its end. It also left its supporters with renewed appreciation for rockets as launch vehicles, which had been flying to orbit for decades.


The Move Toward Missiles

In August 1945 it took little imagination to envision that the weapon of the future would be an advanced V-2, carrying an atomic bomb as the warhead and able to cross oceans. It took rather more imagination, along with technical knowledge, to see that this concept was so far beyond the state of the art as not to be worth pursuing. Thus, in December Vannevar Bush, wartime head of the Office of Scientific Research and Development, gave his views in congressional testimony:

“There has been a great deal said about a 3,000 miles high-angle rocket. In my opinion, such a thing is impossible for many years. The people have been writing these things that annoy me, have been talking about a 3,000 mile high-angle rocket shot from one continent to another, carrying an atomic bomb and so directed as to be a precise weapon which would land exactly on a certain target, such as a city. I say, technically, I don’t think anyone in the world knows how to do such a thing, and I feel confident that it will not be done for a very long period of time to come. I think we can leave that out of our thinking.”1

Propulsion and re-entry were major problems, but guidance was worse. For intercontinental range, the Air Force set the permitted miss distance at 5,000 feet and then at 1,500 feet. The latter equaled the error of experienced bombardiers who were using radar bombsights to strike at night from 25,000 feet. The view at the Pentagon was that an ICBM would have to do as well when flying all the way to Moscow. This accuracy corresponded to hitting a golf ball a mile and having it make a hole in one. Moreover, each ICBM was to do this entirely through automatic control.2
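The golf-ball analogy holds up under a rough check. Taking an intercontinental range of about 5,500 nautical miles (the figure quoted later for Atlas) and the standard 4.25-inch golf hole (a figure from the rules of golf, not from the source), the two angular precisions come out the same order of magnitude:

```latex
% Missile: 1,500-ft miss over 5,500 nmi (1 nmi \approx 6,076 ft)
\frac{1{,}500\ \text{ft}}{5{,}500 \times 6{,}076\ \text{ft}} \approx 4.5\times10^{-5}
% Golf: a 4.25-in hole at one mile (63,360 in)
\frac{4.25\ \text{in}}{63{,}360\ \text{in}} \approx 6.7\times10^{-5}
```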

The Air Force therefore emphasized bombers during the early postwar years, paying little attention to missiles. Its main program, such as it was, called for a missile that was neither ballistic nor intercontinental. It was a cruise missile, which was to solve its guidance problem by steering continually. The first thoughts dated to November 1945. At North American Aviation, chief engineer Raymond Rice and chief scientist William Bollay proposed to “essentially add wings to the V-2 and design a missile fundamentally the same as the A-9.”

Like the supersonic wind tunnel at the Naval Ordnance Laboratory, here was another concept that was to carry a German project to completion. The initial design had a specified range of 500 miles,3 which soon increased. Like the A-9, this missile—designated MX-770—was to follow a boost-glide trajectory and then extend its range with a supersonic glide. But by 1948 the U.S. Air Force had won its independence from the Army and had received authority over missile programs with ranges of 1,000 miles and more. Shorter-range missiles remained the concern of the Army. Accordingly, late in February, Air Force officials instructed North American to stretch the range of the MX-770 to a thousand miles.

A boost-glide trajectory was not well suited for a doubled range. At Wright Field, the Air Force development center, Colonel M. S. Roth proposed to increase the range by adding ramjets.4 This drew on work at Wright, where the Power Plant Laboratory had a Nonrotating Engine Branch that was funding development of both ramjets and rocket engines. Its director, Weldon Worth, dealt specifically with ramjets.5 A modification of the MX-770 design added two ramjet engines, mounting them singly at the tips of the vertical fins.6 The missile also received a new name: Navaho. This reflected a penchant at North American for names beginning with “NA.”7

Then, within a few months during 1949 and 1950, the prospect of world war emerged. In 1949 the Soviets exploded their first atomic bomb. At nearly the same time, China’s Mao Zedong defeated the Nationalists of Chiang Kai-shek and proclaimed the People’s Republic of China. The Soviets had already shown aggressiveness by subverting the democratic government of Czechoslovakia and by blockading Berlin. These new developments raised the prospect of a unified communist empire armed with the industry that had defeated the Nazis, wielding atomic weapons, and deploying the limitless manpower of China.

President Truman responded both publicly and with actions that were classified. In January 1950 he announced a stepped-up nuclear program, directing “the Atomic Energy Commission to continue its work on all forms of atomic weapons, including the so-called hydrogen or super bomb.” In April he gave his approval to a secret policy document, NSC-68. It stated that the United States would resist communist expansion anywhere in the world and would devote up to twenty percent of the gross national product to national defense.8 Then in June, in China’s back yard, North Korea invaded the South, and America again was at war.

These events had consequences for the missile program, as the design and mission of Navaho changed dramatically during 1950. Bollay’s specialists, working with Air Force counterparts, showed that they could anticipate increases in its range to as much as 5,500 nautical miles. Conferences among Air Force officials, held at the Pentagon in August, set this intercontinental range as a long-term goal. A letter from Major General Donald Putt, Director of Research and Development within the Air Materiel Command, became the directive instructing North American to pursue this objective. An interim version, Navaho II, with range of 2,500 nautical miles, appeared technically feasible. The full-range Navaho III represented a long-term project that was slated to go forward as a parallel effort.

The thousand-mile Navaho of 1948 had taken approaches based on the V-2 to their limit. Navaho II, the initial focus of effort, took shape as a two-stage missile with a rocket-powered booster. The booster was to use two such engines, each with thrust of 120,000 pounds. A ramjet-powered second stage was to ride it during initial ascent, accelerating to the supersonic speed at which the ramjet engines could produce their rated thrust. This second stage was then to fly onward as a cruise missile, at a planned flight speed of Mach 2.75.9

A rival to Navaho soon emerged. At Convair, structural analyst Karel Bossart held a strong interest in building an ICBM. As a prelude, he had built three rockets in the shape of a subscale V-2 and had demonstrated his ideas for lightweight structure in flight test. The Rand Corporation, an influential Air Force think tank, had been keeping an eye on this work and on the burgeoning technology of missiles. In December 1950 it issued a report stating that long-range ballistic missiles now were in reach. A month later the Air Force responded by giving Bossart, and Convair, a new study contract. In August 1951 he christened this missile Atlas, after Convair’s parent company, the Atlas Corporation.

The initial concept was a behemoth. Carrying an 8,000-pound warhead, it was to weigh 670,000 pounds, stand 160 feet tall by 12 feet in diameter, and use seven of Bollay’s new 120,000-pound engines. It was thoroughly unwieldy and represented a basis for further studies rather than a concept for a practical weapon. Still, it stood as a milestone. For the first time, the Air Force had a concept for an ICBM that it could pursue using engines that were already in development.10

For the ICBM to compete with Navaho, it had to shrink considerably. Within the Air Force’s Air Research and Development Command, Brigadier General John Sessums, a strong advocate of long-range missiles, proposed that this could be done by shrinking the warhead. The size and weight of Atlas were to scale in proportion with the weight of its atomic weapon, and Sessums asserted that new developments in warhead design indeed would give high yield while cutting the weight.

He carried his argument to the Air Staff, which amounted to the Air Force’s board of directors. This brought further studies, which indeed led to a welcome reduction in the size of Atlas. The concept of 1953 called for a length of 110 feet and a loaded weight of 440,000 pounds, with the warhead tipping the scale at only 3,000 pounds. The number of engines went down from seven to five.11

There also was encouraging news in the area of guidance. Radio guidance was out of the question for an operational missile; it might be jammed or the ground-based guidance center might be destroyed in an attack. Instead, missile guidance was to be entirely self-contained. All concepts called for the use of sensitive accelerometers along with an onboard computer, to determine velocity and location. Navaho was to add star trackers, which were to null out errors by tracking stars even in daylight. In addition, Charles Stark Draper of MIT was pursuing inertial guidance, which was to use no external references of any sort. His 1949 system was not truly inertial, for it included a magnetic compass and a Sun-seeker. But when flight-tested aboard a B-29, over distances as great as 1,737 nautical miles, it showed a mean error of only 5 nautical miles.12

For Atlas, though, the permitted miss distance remained at 1,500 feet, with the range being 5,500 nautical miles. The program plan of October 1953 called for a leisurely advance over the ensuing decade, with research and development being completed only "sometime after 1964," and operational readiness being achieved in 1965. The program was to emphasize work on the major components: propulsion, guidance, nose cone, lightweight structure. In addition, it was to conduct extensive ground tests before proceeding toward flight.13

This concept continued to call for an atomic bomb as the warhead, but by then the hydrogen bomb was in the picture. The first test version, named Mike, detonated at Eniwetok Atoll in the Pacific on 1 November 1952. Its fireball spread so far and fast as to terrify distant observers, expanding until it was more than three miles across. "The thing was enormous," one man said. "It looked as if it blotted out the whole horizon, and I was standing 30 miles away." The weapons designer Theodore Taylor described it as "so huge, so brutal—as if things had gone too far. When the heat reached the observers, it stayed and stayed and stayed, not for seconds but for minutes." Mike yielded 10.4 megatons, nearly a thousand times greater than the 13 kilotons of the Hiroshima bomb of 1945.

Mike weighed 82 tons.14 It was not a weapon; it was a physics experiment. Still, its success raised the prospect that warheads of the future might be smaller and yet might increase sharply in explosive power. Theodore von Karman, chairman of the Air Force Scientific Advisory Board, sought estimates from the Atomic Energy Commission of the size and weight of future bombs. The AEC refused to release this information. Lieutenant General James Doolittle, Special Assistant to the Air Force Chief of Staff, recommended creating a special panel on nuclear weapons within the SAB. This took form in March 1953, with the mathematician John von Neumann as its chairman. Its specialists included Hans Bethe, who later won the Nobel Prize, and Norris Bradbury, who headed the nation's nuclear laboratory at Los Alamos, New Mexico.

In June this group reported that a thermonuclear warhead with the 3,000-pound Atlas weight could have a yield of half a megaton. This was substantially higher than that of the pure-fission weapons considered previously. It gave renewed strength to the prospect of a less stringent aim requirement, for Atlas now might miss by far more than 1,500 feet and still destroy its target.
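The logic connecting higher yield to relaxed accuracy rests on the cube-root scaling of blast effects, which the text leaves implicit. A rough sketch (the exponent is standard in weapons-effects work; the yield figures are merely illustrative):

```latex
% Blast damage radius scales roughly with the cube root of yield:
r \propto Y^{1/3}, \qquad
\frac{r_2}{r_1} = \left(\frac{500\ \mathrm{kt}}{50\ \mathrm{kt}}\right)^{1/3} \approx 2.15
```

A tenfold jump in yield thus stretches the damage radius by a bit more than a factor of two, and the permissible miss distance grows in proportion.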

Three months later the Air Force Special Weapons Center issued its own estimate, anticipating that a hydrogen bomb of half-megaton yield could weigh as little as 1,500 pounds. This immediately opened the prospect of a further reduction in the size of Atlas, which might fall in weight from 440,000 pounds to as little as 240,000. Such a missile also would need fewer engines.15

Also during September, Bruno Augenstein of the Rand Corporation launched a study that sought ways to accelerate the development of an ICBM. In Washington, Trevor Gardner was Special Assistant for Research and Development, reporting to the Air Force Secretary. In October he set up his own review committee. He recruited von Neumann to serve anew as its chairman and then added a dazzling array of talent from Caltech, Bell Labs, MIT, and Hughes Aircraft. In Gardner's words, "The aim was to create a document so hot and of such eminence that no one could pooh-pooh it."16

He called his group the Teapot Committee. He wanted particularly to see it call for less stringent aim, for he believed that a 1,500-foot miss distance was preposterous. The Teapot Committee drew on findings by Augenstein's group at Rand, which endorsed a 1,500-pound warhead and a three-mile miss distance. The formal Teapot report, issued in February 1954, declared "the military requirement" on miss distance "should be relaxed from the present 1,500 feet to at least two, and probably three, nautical miles." Moreover, "the warhead weight might be reduced as far as 1,500 pounds, the precise figure to be determined after the Castle tests and by missile systems optimization."17

The latter recommendation invoked Operation Castle, a series of H-bomb tests that began a few weeks later. The Mike shot of 1952 had used liquid deuterium, a form of liquid hydrogen. It existed at temperatures close to absolute zero and demanded much care in handling. But the Castle series was to test devices that used lithium deuteride, a dry powder that resembled salt. The Mike approach had been chosen because it simplified the weapons physics, but a dry bomb using lithium promised to be far more practical.

The first such bomb was detonated on 1 March as Castle Bravo. It produced 15 megatons, as its fireball expanded to almost four miles in diameter. Other Castle H-bombs performed similarly, as Castle Romeo went to 11 megatons and Castle Yankee, a variant of Romeo, reached 13.5 megatons. “I was on a ship that was 30 miles away,” the physicist Marshall Rosenbluth recalls about Bravo, “and we had this horrible white stuff raining out on us.” It was radioactive fallout that had condensed from vaporized coral. “It was pretty frightening. There was a huge fireball with these turbulent rolls going in and out. The thing was glowing. It looked to me like a diseased brain.” Clearly, though, bombs of the lithium type could be as powerful as anyone wished—and these test bombs were readily weaponizable.18

The Castle results, strongly complementing the Rand and Teapot reports, cleared the way for action. Within the Pentagon, Gardner took the lead in pushing for Atlas. On 11 March he met with Air Force Secretary Harold Talbott and with the Chief of Staff, General Nathan Twining. He proposed a sped-up program that would nearly double the Fiscal Year (FY) 1955 Atlas budget and would have the first missiles ready to launch as early as 1958. General Thomas White, the Vice Chief of Staff, weighed in with his own endorsement later that week, and Talbott responded by directing Twining to accelerate Atlas immediately.

White carried the ball to the Air Staff, which held responsibility for recommending approval of new programs. He told its members that "ballistic missiles were here to stay, and the Air Staff had better realize this fact and get on with it." Then on 14 May, having secured concurrence from the Secretary of Defense, White gave Atlas the highest Air Force development priority and directed its acceleration "to the maximum extent that technology would allow." Gardner declared that White's order meant "the maximum effort possible with no limitation as to funding."19

This was a remarkable turnaround for a program that at the moment lacked even a proper design. Many weapon concepts have gone as far as the prototype stage without winning approval, but Atlas gained its priority at a time when the accepted configuration still was the 440,000-pound, five-engine concept of 1953. Air Force officials still had to establish a formal liaison with the AEC to win access to information on projected warhead designs. Within the AEC, lightweight bombs still were well in the future. A specialized device, tested in the recent series as Castle Nectar, delivered 1.69 megatons but weighed 6,520 pounds. This was four times the warhead weight proposed for Atlas.

But in October the AEC agreed that it could develop warheads weighing 1,500 to 1,700 pounds, with a yield of one megaton. This opened the door to a new Atlas design having only three engines. It measured 75 feet long and 10 feet in diameter, with a weight of 240,000 pounds—and its miss distance could be as great as five miles. This took note of the increased yield of the warhead and further eased the problem of guidance. The new configuration won Air Force approval in December.20

Winged Spacecraft and Dyna-Soar

Boost-glide rockets, with wings, entered the realm of advanced conceptual design with postwar studies at Bell Aircraft called Bomi, Bomber Missile. The director of the work, Walter Dornberger, had headed Germany’s wartime rocket development program and had been in charge of the V-2. The new effort involved feasibility studies that sought to learn what might be done with foreseeable technology, but Bomi was a little too advanced for some of Dornberger’s colleagues. Historian Roy Houchin writes that when Dornberger faced “abusive and insulting remarks” from an Air Force audience, he responded by declaring that his Bomi would be receiving more respect if he had had the chance to fly it against the United States during the war. In Houchin’s words, “The silence was deafening.”1


The initial Bomi concept, dating back to 1951, took form as an in-house effort. It called for a two-stage rocket, with both stages being piloted and fitted with delta wings. The lower stage was mostly of aluminum, with titanium leading edges and nose; the upper stage was entirely of titanium and used radiative cooling. With an initial range of 3,500 miles, it was to come over the target above 100,000 feet and at speeds greater than Mach 4. Operational concepts called for bases in England or Spain, targets in the western Soviet Union, and a landing site in northern Africa.2

During the spring of 1952, Bell officials sought funds for further study from Wright Air Development Center (WADC). A year passed, and WADC responded with a firm no. The range was too short. Thermal protection and onboard cooling raised unanswered questions. Values assumed for L/D appeared highly optimistic, and no information was available on stability, control, or aerodynamic flutter at the proposed speeds. Bell responded by offering to consider higher speeds and greater range. Basic feasibility then lay even farther in the future, but the Air Force's interest in the Atlas ICBM meant that it wanted missiles of longer range, even though shorter-range designs could be available sooner. An intercontinental Bomi at least could be evaluated as a potential alternative to Atlas, and it might find additional roles such as strategic reconnaissance.3

In April 1954, with that ICBM very much in the ascendancy, WADC awarded Bell its desired study contract. Bomi now had an Air Force designation, MX-2276. Bell examined versions of its two-stage concept with 4,000- and 6,000-mile ranges while introducing a new three-stage configuration with the stages mounted belly-to-back. Liftoff thrust was to be 1.2 million pounds, compared with 360,000 for the three-engine Atlas. Bomi was to use a mix of liquid oxygen and liquid fluorine, the latter being highly corrosive and hazardous, whereas Atlas needed only liquid oxygen, which was much safer. The new Bomi was to reach 22,000 feet per second, slightly less than Atlas, but promised a truly global glide range of 12,000 miles. Even so, Atlas clearly was preferable.4

But the need for reconnaissance brought new life to the Bell studies. At WADC, in parallel with initiatives that were sparking interest in unpiloted reconnaissance satellites, officials defined requirements for Special Reconnaissance System 118P. These called initially for a range of 3,500 miles at altitudes above 100,000 feet. Bell won funding in September 1955, as a follow-on to its recently completed MX-2276 activity, and proposed a two-stage vehicle with a Mach 15 glider. In March 1956 the company won a new study contract for what now was called Brass Bell. It took shape as a fairly standard advanced concept of the mid-1950s, with a liquid-fueled expendable first stage boosting a piloted craft that showed sharply swept delta wings. The lower stage was conventional in design, burning Atlas propellants with uprated Atlas engines, but the glider retained the company's preference for fluorine. Officials at Bell were well aware of its perils, but John Sloop at NACA-Lewis was successfully testing a fluorine rocket engine with 20,000 pounds of thrust, and this gave hope.5

The Brass Bell study contract went into force at a moment when prospects for boost-glide were taking a sharp step upward. In February 1956 General Thomas Power, head of the Air Research and Development Command (ARDC), stated that the Air Force should stop merely considering such radical concepts and begin developing them. High on his list was a weapon called Robo, Rocket Bomber, for which several firms were already conducting in-house work as a prelude to funded study contracts. Robo sought to advance beyond Brass Bell, for it was to circle the globe and hence required near-orbital speed. In June ARDC Headquarters set forth System Requirement 126 that defined the scope of the studies. Convair, Douglas, and North American won the initial awards, with Martin, Bell, and Lockheed later participating as well.

The X-15 by then was well along in design, but it clearly was inadequate for the performance requirements of Brass Bell and Robo. This raised the prospect of a new and even more advanced experimental airplane. At ARDC Headquarters, Major George Colchagoff took the initiative in pursuing studies of such a craft, which took the name HYWARDS: Hypersonic Weapons Research and Development Supporting System. In November 1956 the ARDC issued System Requirement 131, thereby placing this new X-plane on the agenda as well.6

The initial HYWARDS concept called for a flight speed of Mach 12. However, in December Bell Aircraft raised the speed of Brass Bell to Mach 18. This increased the boost-glide range to 6,300 miles, but it opened a large gap between the performance of the two craft, inviting questions as to the applicability of HYWARDS results. In January a group at NACA-Langley, headed by John Becker, weighed in with a report stating that Mach 18, or 18,000 feet per second, was appropriate for HYWARDS. The reason was that "at this speed boost gliders approached their peak heating environment. The rapidly increasing flight altitudes at speeds above Mach 18 caused a reduction in the heating rates."7

With the prospect now strong that Brass Bell and HYWARDS would have the same flight speed, there was clear reason not to pursue them as separate projects but to consolidate them into a single program. A decision at Air Force Headquarters, made in March 1957, accomplished this and recognized their complementary characters. They still had different goals, with HYWARDS conducting flight research and Brass Bell being the operational reconnaissance system, but HYWARDS now was to stand as a true testbed.8

Robo still was a separate project, but events during 1957 brought it into the fold as well. In June an ad hoc review group, which included members from ARDC and WADC, looked at Robo concepts from contractors. Robert Graham, a NACA attendee, noted that most proposals called for "a boost-glide vehicle which would fly at Mach 20-25 at an altitude above 150,000 feet." This was well beyond the state of the art, but the panel concluded that with several years of research, an experimental craft could enter flight test in 1965, an operational hypersonic glider in 1968, and Robo in 1974.9

On 10 October—less than a week after the Soviets launched their first Sputnik—ARDC endorsed this three-part plan by issuing a lengthy set of reports, "Abbreviated Systems Development Plan, System 464L—Hypersonic Strategic Weapon System." It looked ahead to a research vehicle capable of 18,000 feet per second and 350,000 feet, to be followed by Brass Bell with the same speed and 170,000 feet, and finally Robo, rated at 25,000 feet per second and 300,000 feet but capable of orbital flight.

The ARDC's Lieutenant Colonel Carleton Strathy, a division chief and a strong advocate of program consolidation, took the proposed plan to Air Force Headquarters. He won endorsement from Brigadier General Don Zimmerman, Deputy Director of Development Planning, and from Brigadier General Homer Boushey, Deputy Director of Research and Development. NACA's John Crowley, Associate Director for Research, gave strong approval to the proposed test vehicle, viewing it as a logical step beyond the X-15. On 25 November, having secured support from his superiors, Boushey issued Development Directive 94, allocating $3 million to proceed with more detailed studies following a selection of contractors.10

Top and side views of Dyna-Soar. (U.S. Air Force)

The new concept represented another step in the sequence that included Eugen Sanger's Silbervogel, his suborbital skipping vehicle, and among live rocket craft, the X-15. It was widely viewed as a tribute to Sanger, who was still living. It took the name Dyna-Soar, which drew on "dynamic soaring," Sanger's name for his skipping technique, and which also stood for "dynamic ascent and soaring flight," or boost-glide. Boeing and Martin emerged as the finalists in June 1958, with their roles being defined in November 1959. Boeing was to take responsibility for the winged spacecraft. Martin, described as the associate contractor, was to provide the Titan missile that would serve as the launch vehicle.11

The program now demanded definition of flight modes, configuration, structure, and materials. The name of Sanger was on everyone's lips, but his skipping flight path had already proven to be uncompetitive. He and his colleague Bredt had treated its dynamics, but they had not discussed the heating. That task fell to NACA's Allen and Eggers, along with their colleague Stanford Neice.

In 1954, following their classic analysis of ballistic re-entry, Eggers and Allen turned their attention to comparison of this mode with boost-glide and skipping entries. They assumed the use of active cooling and found that boost-glide held the advantage:

The glide vehicle developing lift-drag ratios in the neighborhood of 4 is far superior to the ballistic vehicle in ability to convert velocity into range. It has the disadvantage of having far more heat convected to it; however, it has the compensating advantage that this heat can in the main be radiated back to the atmosphere. Consequently, the mass of coolant material may be kept relatively low.

A skip vehicle offered greater range than the alternatives, in line with Sanger's advocacy of this flight mode. But it encountered more severe heating, along with high aerodynamic loads that necessitated a structurally strong and therefore heavy vehicle. Extra weight meant extra coolant, with the authors noting that "ultimately the coolant is being added to cool coolant. This situation must obviously be avoided." They concluded that "the skip vehicle is thought to be the least promising of the three types of hypervelocity vehicle considered here."12

Following this comparative assessment of flight modes, Eggers worked with his colleague Clarence Syvertson to address the issue of optimum configuration. This issue had been addressed for the X-15; it was a mid-wing airplane that generally resembled the high-performance fighters of its era. In treating Dyna-Soar, following the Robo review of mid-1957, NACA's Robert Graham wrote that "high-wing, mid-wing and low-wing configurations were proposed. All had a highly swept wing, and a small angle cone as the fuselage or body." This meant that while there was agreement on designing the fuselage, there was no standard way to design the wing.13

Eggers and Syvertson proceeded by treating the design problem entirely as an exercise in aerodynamics. They concluded that the highest values of L/D were attainable by using a high-wing concept with the fuselage mounted below as a slender half-cone and the wing forming a flat top. Large fins at the wing tips, canted sharply downward, directed the airflow under the wings downward and increased the lift. Working with a hypersonic wind tunnel at NACA-Ames, they measured a maximum L/D of 6.65 at Mach 5, in good agreement with a calculated value of 6.85.14

This configuration had attractive features, not the least of which was that the base of its half-cone could readily accommodate a rocket engine. Still, it was not long before other specialists began to argue that it was upside down. Instead of having a flat top with the fuselage below, it was to be flipped to place the wing below the fuselage, giving it a flat bottom. This assertion came to the forefront during Becker's HYWARDS study, which identified its preferred velocity as 18,000 feet per second. His colleague Peter Korycinski worked with Becker to develop heating analyses of flat-top and flat-bottom candidates, with Roger Anderson and others within Langley's Structures Division providing estimates for the weight of thermal protection.

A simple pair of curves, plotted on graph paper, showed that under specified assumptions the flat-bottom weight at that velocity was 21,400 pounds and was increasing at a modest rate at higher speeds. The flat-top weight was 27,600 pounds and was rising steeply. Becker wrote that the flat-bottom craft placed its fuselage "in the relatively cool shielded region on the top or lee side of the wing—i.e., the wing was used in effect as a partial heat shield for the fuselage…. This 'flat-bottomed' design had the least possible critical heating area…and this translated into least circulating coolant, least area of radiative heat shields, and least total thermal protection in flight."15

These approaches—flat-top at Ames, flat-bottom at Langley—brought a debate between these centers that continued through 1957. At Ames, the continuing strong interest in high L/D reflected an ongoing emphasis on excellent supersonic aerodynamics for military aircraft, which needed high L/D as a matter of course. To ease the heating problem, Ames held for a time to a proposed speed of 11,000 feet per second, slower than the Langley concept but lighter in weight and more attainable in technology while still offering a considerable leap beyond the X-15. Officials at NACA diplomatically described the Ames and Langley HYWARDS concepts respectively as "high L/D" and "low heating," but while the debate continued, there remained no standard approach to the design of wings for a hypersonic glider.16

There was a general expectation that such a craft would require active cooling. Bell Aircraft, which had been studying Bomi, Brass Bell, and lately Robo, had the most experience in the conceptual design of such arrangements. Its Brass Bell of 1957, designed to enter its glide at 18,000 feet per second and 170,000 feet in altitude, featured an actively cooled insulated hot structure. The primary or load-bearing structure was of aluminum and relied on cooling in a closed-loop arrangement that used water-glycol as the coolant. Wing leading edges had their own closed-loop cooling system that relied on a mix of sodium and potassium metals. Liquid hydrogen, pumped initially to 1,000 pounds per square inch, flowed first through a heat exchanger and cooled the heated water-glycol, then proceeded to a second heat exchanger to cool the hot sodium-potassium. In an alternate design concept, this gas cooled the wing leading edges directly, with no intermediate liquid-metal coolant loop. The warmed hydrogen ran a turbine within an onboard auxiliary power unit and then was exhausted overboard. The leading edges reached a maximum temperature of 1,400°F, for which Inconel X was a suitable material.17

During August of that year Becker and Korycinski launched a new series of studies that further examined the heating and thermal protection of their flat-bottom glider. They found that for a glider of global range, flying with angle of attack of 45 degrees, an entry trajectory near the upper limit of permissible altitudes gave peak uncooled skin temperatures of 2,000°F. This appeared achievable with improved metallic or ceramic hot structures. Accordingly, no coolant at all was required!18

This conclusion, published in 1959, influenced the configuration of subsequent boost-glide vehicles—Dyna-Soar, the space shuttle—much as the Eggers-Allen paper of 1953 had defined the blunt-body shape for ballistic entry. Preliminary and unpublished results were in hand more than a year prior to publication, and when the prospect emerged of eliminating active cooling, the concepts that could do this were swept into prominence. They were of the flat-bottom type, with Dyna-Soar being the first to proceed into mainstream development.

This uncooled configuration proved robust enough to accommodate substantial increases in flight speed and performance. In 1959 Herbert York, the Defense Director of Research and Engineering, stated that Dyna-Soar was to fly at 15,000 miles per hour. This was well above the planned speed of Brass Bell but still below orbital velocity. During subsequent years the booster changed from Martin's Titan I to the more capable Titan II and then to the powerful Titan III-C, which could easily boost it to orbit. A new plan, approved in December 1961, dropped suborbital missions and called for "the early attainment of orbital flight." Subsequent planning anticipated that Dyna-Soar would reach orbit with the Titan III upper stage, execute several circuits of the Earth, and then come down from orbit by using this stage as a retrorocket.19

After that, though, advancing technical capabilities ran up against increasingly stringent operational requirements. The Dyna-Soar concept had grown out of HYWARDS, being intended initially to serve as a testbed for the reconnaissance boost-glider Brass Bell and for the manned rocket-powered bomber Robo. But the rationale for both projects became increasingly questionable during the early 1960s. The hypersonic Brass Bell gave way to a new concept, the Manned Orbiting Laboratory (MOL), which was to fly in orbit as a small space station while astronauts took reconnaissance photos. Robo fell out of the picture completely, for the success of the Minuteman ICBM, which used solid propellant, established such missiles as the nation's prime strategic force. Some people pursued new concepts that continued to hold out hope for Dyna-Soar applications, with satellite interception standing in the forefront. The Air Force addressed this with studies of its Saint project, but Dyna-Soar proved unsuitable for such a mission.20

Full-scale model of Dyna-Soar, on display at an Air Force exhibition in 1962. The scalloped pattern on the base was intended to suggest Sanger's skipping entry. (Boeing Company archives)

Dyna-Soar was a potentially superb technology demonstrator, but Defense Secretary Robert McNamara took the view that it had to serve a military role in its own right or lead to a follow-on program with clear military application. The cost of Dyna-Soar was approaching a billion dollars, and in October 1963 he declared that he could not justify spending such a sum if it was a dead-end program with no ultimate purpose. He canceled it on 10 December, noting that it was not to serve as a cargo rocket, could not carry substantial payloads, and could not stay in orbit for long durations. He approved MOL as a new program, thereby giving the Air Force continuing reason to hope that it would place astronauts in orbit, but stated that Dyna-Soar would serve only "a very narrow objective."21

Artist's rendering showing Dyna-Soar in orbit. (Boeing Company archives)

At that moment the program called for production of 10 flight vehicles, and Boeing had completed some 42 percent of the necessary tasks. McNamara's decision therefore was controversial, particularly because the program still had high-level supporters. These included Eugene Zuckert, Air Force Secretary; Alexander Flax, Assistant Secretary for Research and Development; and Brockway McMillan, Zuckert's Under Secretary and Flax's predecessor as Assistant Secretary. Still, McNamara gave more attention to Harold Brown, the Defense Director of Research and Engineering, who made the specific proposal that McNamara accepted: to cancel Dyna-Soar and proceed instead with MOL.22

Dyna-Soar never flew. The program had expended $410 million when canceled, but the schedule still called for another $373 million, and the vehicle was still some two and a half years away from its first flight. Even so, its technology remained available for further development, contributing to the widening prospects for re-entry that marked the era.23

Hypersonics After NASP

On 7 December 1995 the entry probe of the Galileo spacecraft plunged into the atmosphere of Jupiter. It did not plummet directly downward but sliced into that planet's hydrogen-rich envelope at a gentle angle as it followed a trajectory that took it close to Jupiter's edge. The probe entered at Mach 50, with its speed of 29.5 miles per second being four times that of a return to Earth from the Moon. Peak heating came to 11,800 BTU per square foot-second, corresponding to a radiative equilibrium temperature of 12,000°F. The heat load totaled 141,800 BTU per square foot, enough to boil 150 pounds of water for each square foot of heatshield surface.1 The deceleration peaked at 228 g, which was tantamount to slamming from a speed of 5,000 miles per hour to a standstill in a single second. Yet the probe survived. It deployed a parachute and transmitted data from every one of its onboard instruments for nearly an hour, until it was overwhelmed within the depths of the atmosphere.2
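The quoted figures are internally consistent, as a quick check shows (taking g = 32.2 ft/s², 1 mile = 5,280 ft, and roughly 970 BTU/lb for the heat of vaporization of water; the arithmetic is ours, not the source's):

```latex
% 5,000 mph expressed in feet per second, then in g's per second of stopping:
5{,}000\ \tfrac{\mathrm{mi}}{\mathrm{hr}} \times
  \tfrac{5{,}280\ \mathrm{ft/mi}}{3{,}600\ \mathrm{s/hr}}
  \approx 7{,}330\ \tfrac{\mathrm{ft}}{\mathrm{s}}, \qquad
\frac{7{,}330\ \mathrm{ft/s^2}}{32.2\ \mathrm{ft/s^2}} \approx 228\,g
% Heat load versus boiling water (latent heat ~970 BTU/lb):
\frac{141{,}800\ \mathrm{BTU/ft^2}}{970\ \mathrm{BTU/lb}} \approx 146\ \mathrm{lb/ft^2}
```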

It used an ablative heatshield, and as an exercise in re-entry technology, the design was straightforward. The nose cap was of chopped and molded carbon phenolic composite; the rest of the main heatshield was tape-wrapped carbon phenolic. The maximum thickness was 5.75 inches. The probe also mounted an aft heatshield, which was of phenolic nylon. The value of these simple materials, under the extreme conditions of Jupiter atmosphere entry, showed beyond doubt that the problem of re-entry was well in hand.3

Other activities have done less well. The X-33 and X-34 projects, which sought to build next-generation shuttles using NASP materials, failed utterly. Test scramjets have lately taken to flight but only infrequently. Still, work in CFD continues to flourish. Today's best supercomputers offer a million times more power than the ancestral Illiac 4, the top computer of the mid-1970s. This ushers in the important new topic of Large Eddy Simulation (LES). It may enable us to learn, via computation, just how good scramjets may become.


The DC-X, which flew, and the X-33, which did not. (NASA)

Approaching the Nose Cone

An important attribute of a nose cone was its shape, and engineers were reducing drag to a minimum by crafting high-speed airplanes that displayed the ultimate in needle-nose streamlining. The X-3 research aircraft, designed for Mach 2, had a long and slender nose that resembled a church steeple. Atlas went even further, with an early concept having a front that resembled a flagpole. This faired into a long and slender cone that could accommodate the warhead.21

This intuitive approach fell by the wayside in 1953, as the NACA-Ames aerodynamicists H. Julian Allen and Alfred Eggers carried through an elegant analysis of the motion and heating of a re-entering nose cone. This work showed that they were masters of the simplifying assumption. To make such assumptions successfully represents a high art, for the resulting solutions must capture the most essential aspects of the pertinent physics while preserving mathematical tractability. Their paper stands to this day as a landmark. Quite probably, it is the single most important paper ever written in the field of hypersonics.

They calculated total heat input to a re-entry vehicle, seeking shapes that would minimize this. That part of the analysis enabled them to critique the assertion that a slender and sharply-pointed shape was best. For a lightweight nose cone, which would slow significantly in the atmosphere due to drag, they found a surprising result: the best shape, minimizing the total heat input, was blunt rather than sharp.

The next issue involved the maximum rate of heat transfer when averaged over an entire vehicle. To reduce this peak heating rate to a minimum, a nose cone of realistic weight might be either very sharp or very blunt. Missiles of intermediate slenderness gave considerably higher peak heating rates and “were definitely to be avoided.”

This result applied to the entire vehicle, but heat-transfer rates were highest at the nose-cone tip. It was particularly important to minimize the heating at the tip, and again their analysis showed that a blunt nose cone would be best. As Allen and Eggers put it, “not only should pointed bodies be avoided, but the rounded nose should have as large a radius as possible.”22

How could this be? The blunt body set up a very strong shock wave, which produced intense heating of the airflow. However, most of this heat was carried away in the flow. The boundary layer served to insulate the vehicle, and relatively little of this heat reached its surface. By contrast, a sharp and slender nose cone produced a shock that stood very close to this surface. At the tip, the boundary layer was too thin to offer protection. In addition, skin friction produced still more heating, for the boundary layer now received energy from shock-heated air flowing close to the vehicle surface.23

This paper was published initially as a classified document, but it took time to achieve its full effect. The Air Force did not adopt its principle for nose-cone design until 1956.24 Still, this analysis outlined the shape of things to come. Blunt heat shields became standard on the Mercury, Gemini, and Apollo capsules. The space shuttle used its entire undersurface as a heat shield that was particularly blunt, raising its nose during re-entry to present this undersurface to the flow.

Yet while analysis could indicate the general shape for a nose cone, only experiment could demonstrate the validity of a design. At a stroke, Becker’s Mach 7 facility, which had been far in the forefront only recently, suddenly became inadequate. An ICBM nose cone was to re-enter the atmosphere at speeds above Mach 20. Its kinetic energy would vaporize five times its weight of iron. Temperatures behind the bow shock would reach 9000 K, hotter than the surface of the Sun. Research scientist Peter Rose wrote that this velocity would be “large enough to dissociate all the oxygen molecules into atoms, dissociate about half of the nitrogen, and thermally ionize a considerable fraction of the air.”25

Though hot, the 9000 K air actually would be cool, considering its situation, because its energy would go into dissociating molecules of gas. However, the ions and dissociated atoms were only too likely to recombine at the surface of the nose cone, thereby delivering additional heat. Such chemical effects also might trip the boundary layer from laminar to turbulent flow, with the rate of heat transfer increasing substantially as a result. In the words of Rose:

“The presence of free-atoms, electrons, and molecules in excited states can be expected to complicate heat transfer through the boundary layer by additional modes of energy transport, such as atom diffusion, carrying the energy of dissociation. Radiation by transition from excited energy states may contribute materially to radiative heat transfer. There is also a possibility of heat transfer by electrons and ions. The existence of large amounts of energy in any of these forms will undoubtedly influence the familiar flow phenomena.”26

Within the Air Force, the Aircraft Panel of the Scientific Advisory Board (SAB) issued a report in October 1954 that looked ahead to the coming decade:

“In the aerodynamics field, it seems to us pretty clear that over the next 10 years the most important and vital subject for research and development is the field of hypersonic flows; and in particular, hypersonic flows with [temperatures at a nose-cone tip] which may run up to the order of thousands of degrees. This is one of the fields in which an ingenious and clever application of the existing laws of mechanics is probably not adequate. It is one in which much of the necessary physical knowledge still remains unknown at present and must be developed before we arrive at a true understanding and competence. The reason for this is that the temperatures which are associated with these velocities are higher than temperatures which have been produced on the globe, except in connection with the nuclear developments of the past 10 or 15 years and that there are problems of dissociation, relaxation times, etc., about which the basic physics is still unknown.”27

The Atlas program needed a new experimental technique, one that could overcome the fact that conventional wind tunnels produced low temperatures due to their use of expanding gases, and hence the pertinent physics and chemistry associated with the heat of re-entry were not replicated. Its officials found what they wanted at a cocktail party.

This social gathering took place at Cornell University around Thanksgiving of 1954. The guests included university trustees along with a number of deans and senior professors. One trustee, Victor Emanuel, was chairman of Avco Corporation, which already was closely involved in work on the ICBM. He had been in Washington and had met with Air Force Secretary Harold Talbott, who told him of his concern about problems of re-entry. Emanuel raised this topic at the party while talking with the dean of engineering, who said, “I believe we have someone right here who can help you.”28

That man was Arthur Kantrowitz, a former researcher at NACA-Langley who had taken a faculty position at Cornell following the war. While at Langley during the late 1930s, he had used a $5,000 budget to try to invent controlled thermonuclear fusion. He did not get very far. Indeed, he failed to gain results that were sufficient even to enable him to write a paper, leaving subsequent pioneers in controlled fusion to start again from scratch. Still, as he recalls, “I continued my interest in high temperatures with the hope that someday I could find something that I could use to do fusion.”29

In 1947 this led him to the shock tube. This instrument produced very strong shocks in a laboratory, overcoming the limits of wind tunnels. It used a driving gas at high pressure in a separate chamber. This gas burst through a thin diaphragm to generate the shock, which traveled down a long tube that was filled with a test gas. High-speed instruments could observe this shock. They also could study a small model immersed within the hot flow at high Mach that streamed immediately behind the shock.30

When Kantrowitz came to the shock tube, it already was half a century old. The French chemist Paul Vieille built the first such devices prior to 1900, using them to demonstrate that a shock wave travels faster than the speed of sound. He proposed that his apparatus could prove useful in studying mine explosions, which took place in shafts that resembled his long tubes.31

The next important shock-tube researcher, Britain’s William Payman, worked prior to World War II. He used diaphragm-bursting pressures as high as 1100 pounds per square inch and introduced high-speed photography to observe the shocked flows. He and his colleagues used the shock tube for experimental verification of equations in gasdynamics that govern the motion of shock waves.32

At Princeton University during that war, the physicist Walter Bleakney went further. He used shock tubes as precision instruments, writing, “It has been found that successive ‘shots’ in the tube taken with the same initial conditions reproduce one another to a surprising degree. The velocity of the incident shock can be reproduced to 0.1 percent.” He praised the versatility of the device, noting its usefulness “for studying a great variety of problems in fluid dynamics.” In addition to observations of shocks themselves, the instrument could address “problems of detonation and allied phenomena. The tube may be used as a wind tunnel with a Mach number variable over an enormous range.” This was the role it took during the ICBM program.33

At Cornell, Kantrowitz initiated a reach for high temperatures. This demanded particularly high pressure in the upstream chamber. Payman had simply used compressed air from a thick-walled tank, but Kantrowitz filled his upstream chamber with a highly combustible mix of hydrogen and oxygen. Seeking the highest temperatures, he avoided choosing air as a test gas, for its diatomic molecules absorbed energy when they dissociated or broke apart, which limited the temperature rise. He turned instead to argon, a monatomic gas that could not dissociate, and reached 18,000 K.

He was a professor at Cornell, with graduate students. One of them, Edwin Resler, wrote a dissertation in 1951, “High Temperature Gases Produced by Strong Shock Waves.” In Kantrowitz’s hands, the versatility of this instrument appeared anew. With argon as the test gas, it served for studies of thermal ionization, a physical effect separate from dissociation in which hot atoms lost electrons and became electrically charged. Using nitrogen or air, the shock tube examined dissociation as well, which increased with the higher temperatures of stronger shocks. Higher Mach values also lay within reach. As early as 1952, Kantrowitz wrote that “it is possible to obtain shock Mach numbers in the neighborhood of 25 with reasonable pressures and shock tube sizes.”34

Other investigators also worked with these devices. Raymond Seeger, chief of aerodynamics at the Naval Ordnance Laboratory, built one. R. N. Hollyer conducted experiments at the University of Michigan. At NACA-Langley, the first shock tube entered service in 1951. The Air Force also was interested. The 1954 report of the SAB pointed to “shock tubes and other devices for producing extremely strong shocks” as an “experimental technique” that could give new insights into fundamental problems of hypersonics.35

Thus, when Emanuel met Kantrowitz at that cocktail party, this academic physicist indeed was in a position to help the Atlas effort. He had already gained hands-on experience by conducting shock-tube experiments at temperatures and shock velocities that were pertinent to re-entry of an ICBM. Emanuel then staked him to a new shock-tube center, Avco Research Laboratory, which opened for business early in 1955.

Kantrowitz wanted the highest shock velocities, which he obtained by using lightweight helium as the driver gas. He heated the helium strongly by adding a mixture of gaseous hydrogen and oxygen. Too little helium led to violent burning with unpredictable detonations, but use of 70 percent helium by weight gave a controlled burn that was free of detonations. The sudden heating of this driver gas also ruptured the diaphragm.

Standard optical instruments, commonly used in wind-tunnel work, were available for use with shock tubes as well. These included the shadowgraph, schlieren apparatus, and Mach-Zehnder interferometer. To measure the speed of the shock, it proved useful to install ionization-sensitive pickups that responded to changes in electrical resistance as shock waves passed. Several such pickups, spaced along the length of the tube, gave good results at speeds up to Mach 16.

Within the tube, the shock raced ahead of the turbulent mix of driver gases. Between the shock and the driver gases lay a “homogeneous gas sample” (HGS), a cylindrical slug of test gas moving nearly with the speed of the shock. The measured speed of the shock, together with standard laws of gasdynamics, permitted a complete calculation of the pressure, temperature, and internal energy of the HGS. Even when the HGS experienced energy absorption due to dissociation of its constituent molecules, it was possible to account for this through a separate calculation.36
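The “standard laws of gasdynamics” in question are the normal-shock relations. The sketch below uses their ideal-gas form; it ignores the dissociation and ionization corrections that the Avco workers handled separately, so for air at high Mach the real post-shock temperature falls well below what these formulas give.

```python
def post_shock_state(mach, gamma=1.4, t1=300.0, p1=101325.0):
    """Ideal-gas normal-shock relations: conditions behind a shock of
    the given Mach number running into gas at t1 (K) and p1 (Pa).
    Returns (pressure, temperature) behind the shock."""
    m2 = mach * mach
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (m2 - 1.0)
    rho_ratio = (gamma + 1.0) * m2 / ((gamma - 1.0) * m2 + 2.0)
    t_ratio = p_ratio / rho_ratio
    return p1 * p_ratio, t1 * t_ratio

# Argon is monatomic (gamma = 5/3) and cannot dissociate, so the ideal
# relations come close to reality. A Mach-14 shock from room temperature
# gives roughly 18,600 K, near the 18,000 K Kantrowitz reached.
p2, t2 = post_shock_state(14.0, gamma=5.0 / 3.0)
```

This also shows why Kantrowitz chose argon: with no dissociation to absorb energy, nearly all of the shock’s work appears as temperature.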

The HGS swept over a small model of a nose cone placed within the stream. The time for passage was of the order of 100 microseconds, with the shock tube thus operating as a “wind tunnel” having this duration for a test. This nevertheless was long enough for photography. In addition, specialized instruments permitted study of heat transfer. These included thin-gauge resistance thermometers for temperature measurements and thicker-gauge calorimeters to determine heat transfer rates.

Metals increase their electrical resistance in response to a temperature rise. Both the thermometers and the calorimeters relied on this effect. To follow the sudden temperature increase behind the shock, the thermometer needed a metal film that was thin indeed, and Avco researchers achieved a thickness of 0.3 microns. They did this by using a commercial product, Liquid Bright Platinum No. 05, from Hanovia Chemical and Manufacturing Company. This was a mix of organic compounds of platinum and gold, dissolved in oils. Used as a paint, it was applied with a brush and dried in an oven.

The calorimeters used bulk platinum foil that was a hundred times thicker, at 0.03 millimeters. This thickness diminished their temperature rise and allowed the observed temperature increase to be interpreted as a rate of heat transfer. Both the thermometers and calorimeters were mounted to the surface of nose-cone models, which typically had the shape of a hemisphere that faired smoothly into a cylinder at the rear. The models were made of Pyrex, a commercial glass that did not readily crack. In addition, it was a good insulator.37

The investigator Shao-Chi Lin also used a shock tube to study thermal ionization, which made the HGS electrically conductive. To measure this conductivity, Lin used a nonconducting shock tube made of glass and produced a magnetic field within its interior. The flow of the conducting HGS displaced the magnetic lines of force, which he observed. He calibrated the system by shooting a slug of metal having known conductivity through the field at a known speed. Measured HGS conductivities showed good agreement with values calculated from theory, over a range from Mach 10 to Mach 17.5. At this highest flow speed, the conductivity of air was an order of magnitude greater than that of seawater.38

With shock tubes generating new data, there was a clear need to complement the data with new solutions in aerodynamics and heat transfer. The original Allen-Eggers paper had given a fine set of estimates, but they left out such realistic effects as dissociation, recombination, ionization, and changes in the ratio of specific heats. Again, it was necessary to make simplifying assumptions. Still, the first computers were at hand, which meant that solutions did not have to be in closed form. They might be equations that were solvable electronically.

Recombination of ions and of dissociated diatomic molecules—oxygen and nitrogen—was particularly important at high Mach, for this chemical process could deliver additional heat within the boundary layer. Two simplified cases stood out. In “equilibrium flow,” the recombination took place instantly, responding immediately to the changing temperature and pressure within the boundary layer. The extents of ionization and dissociation were then simple point functions of the temperature and pressure at any location, and they could be calculated directly.

The other limiting case was “frozen flow.” One hesitates to describe a 9000 K airstream as “frozen,” but here it meant that the chemical state of the boundary layer retained its condition within the free stream behind the bow shock. Essentially this meant that recombination proceeded so slowly that the changing conditions within the boundary layer had no effect on the degrees of dissociation and ionization. These again could be calculated directly, although this time as a consequence of conditions behind the shock rather than in the boundary layer. Frozen flow occurred when the air was rarefied.

These approximations avoided the need to deal with the chemistry of finite reaction rates, wherein recombination would not instantly respond to the rapidly varying flow conditions across the thickness of a boundary layer but would lag behind the changes. In 1956 the aerodynamicist Lester Lees proposed a heat-transfer theory that specifically covered those two limiting cases.39 Then in 1957, Kantrowitz’s colleagues at Avco Research Laboratory went considerably further.

The Avco lab had access to the talent of nearby MIT. James Fay, a professor of mechanical engineering, joined with Avco’s Frederick Riddell to treat anew the problem of heat transfer in dissociated air. Finite reaction-rate chemistry was at the heart of their agenda, and again they needed a simplifying assumption: that the airflow velocity was zero. However, this condition was nearly true at the forward tip of a nose cone, where the heating was most severe.

Starting with a set of partial differential equations, they showed that these equations reduced to a set of nonlinear ordinary differential equations. Using an IBM 650 computer, they found that a numerical solution of these nonlinear equations was reasonably straightforward. In dealing with finite-rate chemistry, they introduced a “reaction rate parameter” that attempted to capture the resulting effects. They showed that a re-entering nose cone could fall through 100,000 feet while transitioning from the frozen to the equilibrium regime. Within this transition region, the boundary layer could be expected to be partly frozen, near the free stream, and partly in equilibrium, near the wall.

The Fay-Riddell theory appeared in the February 1958 Journal of the Aeronautical Sciences. That same issue presented experimental results, also from Avco, that tested the merits of this treatment. The researchers obtained shock-tube data with shock Mach numbers as high as 17.5. At this Mach, the corresponding speed of 17,500 feet per second approached the velocity of a satellite in orbit. Pressures within the shock-tube test gas simulated altitudes of 20,000, 70,000, and 120,000 feet, with equilibrium flow occurring in the models’ boundary layers even at the highest equivalent height above the ground.

Most data were taken with calorimeters, although data points from thin-gauge thermometers gave good agreement. The measurements showed scatter but fit neatly on curves calculated from the Fay-Riddell theory. The Lees theory underpredicted heat-transfer rates at the nose-cone tip, calling for rates up to 30 percent lower than those observed. Here, within a single issue of that journal, two papers from Avco gave good reason to believe that theoretical and experimental tools were at hand to learn the conditions that a re-entering ICBM nose cone would face during its moments of crisis.40
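The full Fay-Riddell method requires integrating the boundary-layer equations, but its practical descendants are simple stagnation-point correlations. The sketch below uses the Sutton-Graves form; the constant is the value commonly quoted for Earth’s atmosphere in SI units, and the whole thing should be read as an order-of-magnitude estimator, not as the Fay-Riddell theory itself.

```python
import math

def stagnation_heating(rho, velocity, nose_radius, k=1.7415e-4):
    """Sutton-Graves-style stagnation-point convective heating estimate.

    rho: freestream density, kg/m^3
    velocity: flight speed, m/s
    nose_radius: nose radius of curvature, m
    k: Earth-air constant (commonly quoted value; an assumption here)

    Returns an approximate heating rate in W/m^2. Note the cubic
    dependence on speed and the inverse-square-root dependence on
    nose radius, both echoes of the blunt-body analyses in the text.
    """
    return k * math.sqrt(rho / nose_radius) * velocity**3
```

The scalings, not the absolute numbers, are the point: doubling the speed multiplies the heating eightfold, while quadrupling the nose radius halves it.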

Still, this was not the same as actually building a nose cone that could survive this crisis. This problem called for a separate set of insights. These came from the U.S. Army and, independently, from George Sutton of General Electric.

The Technology of Dyna-Soar

Its thermal environment during re-entry was less severe than that of an ICBM nose cone, allowing designers to avoid not only active structural cooling but ablative thermal protection as well. This meant that it could be reusable; it did not have to change out its thermal protection after every flight. Even so, its environment imposed temperatures and heat loads that pervaded the choice of engineering solutions throughout the vehicle.

Dyna-Soar used radiatively-cooled hot structure, with the primary or load-bearing structure being of Rene 41. Trusses formed the primary structure of the wings and fuselage, with many of their beams meeting at joints that were pinned rather than welded. Thermal gradients, imposing differential expansion on separate beams, caused these members to rotate at the pins. This accommodated the gradients without imposing thermal stress.

Rene 41 was selected as a commercially available superalloy that had the best available combination of oxidation resistance and high-temperature strength. Its yield strength, 130,000 psi at room temperature, fell off only slightly at 1,200°F and retained useful values at 1,800°F. It could be processed as sheet, strip, wire, tubes, and forgings. Used as the primary structure of Dyna-Soar, it supported a design specification that indeed called for reusability. The craft was to withstand at least four re-entries under the most severe conditions permitted.

As an alloy, Rene 41 had a standard composition of 19 percent chromium, 11 percent cobalt, 10 percent molybdenum, 3 percent titanium, and 1.5 percent aluminum, along with 0.09 percent carbon and 0.006 percent boron, with the balance being nickel. It gained strength through age hardening, with the titanium and aluminum precipitating within the nickel as an intermetallic compound. Age-hardening weldments initially showed susceptibility to cracking, which occurred in parts that had been strained through welding or cold working. A new heat-treatment process permitted full aging without cracking, with the fabricated assemblies showing no significant tendency to develop cracks.24

As a structural material, the relatively mature state of Rene 41 reflected the fact that it had already seen use in jet engines. It nevertheless lacked the temperature resistance necessary for use in the metallic shingles or panels that were to form the outer skin of the vehicle, reradiating the heat while withstanding temperatures as high as 3,000°F. Here there was far less existing art, and investigators at Boeing had to find their way through a somewhat roundabout path.

Four refractory or temperature-resistant metals initially stood out: tantalum, tungsten, molybdenum, and columbium. Tantalum was too heavy, and tungsten was not available commercially as sheet. Columbium also appeared to be ruled out, for it required an antioxidation coating, but vendors were unable to coat it without rendering it brittle. Molybdenum alloys also faced embrittlement due to recrystallization produced by a prolonged soak at high temperature in the course of coating formation. A promising alloy, Mo-0.5Ti, overcame this difficulty through addition of 0.07 percent zirconium. The alloy that resulted, Mo-0.5Ti-0.07Zr, was called TZM. For a time it appeared as a highly promising candidate for the outer panels.25

Wing design also promoted its use, for the craft mounted a delta wing with a leading-edge sweep of 73 degrees. Though built for hypersonic re-entry from orbit, it resembled the supersonic delta wings of contemporary aircraft such as the B-58 bomber. However, this wing was designed using the Allen-Eggers blunt-body principle, with the leading edge being curved or blunted to reduce the rate of heating. The wing sweep then reduced equilibrium temperatures along the leading edge to levels compatible with the use of TZM.26

Boeing’s metallurgists nevertheless held an ongoing interest in columbium because in uncoated form it showed superior ease of fabrication and lack of brittleness. A new Boeing-developed coating method eliminated embrittlement, putting columbium back in the running. A survey of its alloys showed that they all lacked the hot strength of TZM. Columbium nevertheless retained its attractiveness because it promised less weight. Based on coatability, oxidation resistance, and thermal emissivity, the preferred alloy was Cb-10Ti-5Zr, called D-36. It replaced TZM in many areas of the vehicle but proved to lack strength against creep at the highest temperatures. Moreover, coated TZM gave more of a margin against oxidation than coated D-36, again at the most extreme temperatures. D-36 indeed was chosen to cover most of the vehicle, including the flat underside of the wing. But TZM retained its advantage for such hot areas as the wing leading edges.27

The vehicle had some 140 running feet of leading edges and 140 square feet of associated area. This included leading edges of the vertical fins and elevons as well as of the wings. In general, D-36 served where temperatures during re-entry did not exceed 2,700°F, while TZM was used for temperatures between 2,700 and 3,000°F. In accordance with the Stefan-Boltzmann law, all surfaces radiated heat at a rate proportional to the fourth power of the temperature. Hence for equal emissivities, a surface at 3,000°F radiated 43 percent more heat than one at 2,700°F.28
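The arithmetic behind that figure is a one-line application of the Stefan-Boltzmann law, remembering that the temperatures must be absolute (Rankine, for Fahrenheit inputs):

```python
def radiated_flux_ratio(hot_F, cool_F):
    """Ratio of radiated heat flux between two surfaces of equal
    emissivity, per the Stefan-Boltzmann T^4 law. Fahrenheit inputs
    are converted to absolute (Rankine) by adding 459.67."""
    return ((hot_F + 459.67) / (cool_F + 459.67)) ** 4

# 3,000 F versus 2,700 F: a ratio of about 1.44, the roughly
# 43 percent excess cited in the text.
ratio = radiated_flux_ratio(3000.0, 2700.0)
```

The fourth-power law is what made radiative cooling workable at all: a modest rise in allowable skin temperature buys a large increase in the heat the panels can reject.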

Panels of both TZM and D-36 demanded antioxidation coatings. These coatings were formed directly on the surfaces as metallic silicides (silicon compounds), using a two-step process that employed iodine as a chemical intermediary. Boeing introduced a fluidized-bed method for application of the coatings that cut the time for preparation while enhancing uniformity and reliability. In addition, a thin layer of silicon carbide, applied to the surface, gave the vehicle its distinctive black color. It enhanced the emissivity, lowering temperatures by as much as 200°F.

Development testing featured use of an oxyacetylene torch, operated with excess oxygen, which heated small samples of coated refractory sheet to temperatures as high as 3,000°F, measured by optical pyrometer. Test durations ran as long as four hours, with a published review noting that failures of specimens “were easily detected by visual observation as soon as they occurred.” This work showed that although TZM had better oxidation resistance than D-36, both coated alloys could resist oxidation for more than two hours at 3,000°F. This exceeded design requirements. Similar tests applied stress to hot samples by hanging weights from them, thereby demonstrating their ability to withstand stress of 3,100 psi, again at 3,000°F.29

Other tests showed that complete panels could withstand aerodynamic flutter. This issue was important; a report of the Aerospace Vehicles Panel of the Air Force Scientific Advisory Board (SAB)—a panel on panels, as it were—came out in April 1962 and singled out the problem of flutter, citing it as one that called for critical attention. The test program used two NASA wind tunnels: the 4 by 4-foot Unitary facility at Langley that covered a range of Mach 1.6 to 2.8 and the 11 by 11-foot Unitary installation at Ames for Mach 1.2 to 1.4. Heaters warmed test samples to 840°F as investigators started with steel panels and progressed to versions fabricated from Rene nickel alloy.

“Flutter testing in wind tunnels is inherently dangerous,” a Boeing review declared. “To carry the test to the actual flutter point is to risk destruction of the test specimen. Under such circumstances, the safety of the wind tunnel itself is jeopardized.” Panels under test were as large as 24 by 45 inches; actual flutter could easily have brought failure through fatigue, with parts of a specimen being blown through the tunnel at supersonic speed. The work therefore proceeded by starting at modest dynamic pressures, 400 and 500 pounds per square foot, and advancing over 18 months to levels that exceeded the design requirement of close to 1,400 pounds per square foot. The Boeing report concluded that the success of this test program, which ran through mid-1962, “indicates that an adequate panel flutter capability has been achieved.”30

Between the outer panels and the inner primary structure, a corrugated skin of Rene 41 served as a substructure. On the upper wing surface and upper fuselage, where temperatures were no higher than 2,000°F, the thermal-protection panels were also of Rene 41 rather than of a refractory. Measuring 12 by 45 inches, these panels were spot-welded directly to the corrugations of the substructure. For the wing undersurface, and for other areas that were hotter than 2,000°F, designers specified an insulated structure. Standoff clips, each with four legs, were riveted to the underlying corrugations and supported the refractory panels, which also were 12 by 45 inches in size.

The space between the panels and the substructure was to be filled with insulation. A survey of candidate materials showed that most of them exhibited a strong tendency to shrink at high temperatures. This was undesirable; it increased the rate of heat transfer and could create uninsulated gaps at seams and corners. Q-felt, a silica fiber from Johns-Manville, also showed shrinkage. However, nearly all of it occurred at 2,000°F and below; above 2,000°F, further shrinkage was negligible. This meant that Q-felt could be “pre-shrunk” through exposure to temperatures above 2,000°F for several hours. The insulation that resulted had density no greater than 6.2 pounds per cubic foot, one-tenth that of water. In addition, it withstood temperatures as high as 3,000°F.31

TZM outer panels, insulated with Q-felt, proved suitable for wing leading edges. These were designed to withstand equilibrium temperatures of 2,825°F and short-duration overtemperatures of 2,900°F. However, the nose cap faced temperatures of 3,680°F, along with a peak heat flux of 143 BTU per square foot-second. This cap had a radius of curvature of 7.5 inches, making it far less blunt than the Project Mercury heat shield that had a radius of 120 inches.32 Its heating was correspondingly more severe. Reliable thermal protection of the nose was essential, and so the program conducted two independent development efforts that used separate approaches. The firm of Chance Vought pursued the main line of activity, while Boeing also devised its own nose-cap design.
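The claim that heating was “correspondingly more severe” follows from the blunt-body result that stagnation-point heating varies roughly as the inverse square root of nose radius. A minimal check, using the two radii from the text:

```python
import math

# Stagnation heating scales roughly as 1/sqrt(nose radius), so the
# Dyna-Soar cap at 7.5 inches sees about four times the heating rate
# of Mercury's 120-inch shield under otherwise equal conditions.
dyna_soar_radius_in = 7.5
mercury_radius_in = 120.0
severity = math.sqrt(mercury_radius_in / dyna_soar_radius_in)
# severity = 4.0
```

The factor of four, together with the higher temperatures it implies, is why the nose cap justified two independent development efforts.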

The work at Vought began with a survey of materials that paralleled Boeing’s review of refractory metals for the thermal-protection panels. Molybdenum and columbium had no strength to speak of at the pertinent temperatures, but tungsten retained useful strength even at 4,000°F. However, this metal could not be welded, while no known coating could protect it against oxidation. Attention then turned to nonmetallic materials, including ceramics.

Ceramics of interest existed as oxides such as silica and magnesia, which meant that they could not undergo further oxidation. Magnesia proved to be unsuitable because it had low thermal emittance, while silica lacked strength. However, carbon in the form of graphite showed clear promise. It held considerable industrial experience; it was light in weight, while its strength actually increased with temperature. It oxidized readily but could be protected up to 3,000°F by treating it with silicon, in a vacuum and at high temperatures, to form a thin protective layer of silicon carbide. Near the stagnation point, the temperatures during re-entry would exceed that level. This brought the concept of a nose cap with siliconized graphite as the primary material, with an insulating layer of a temperature-resistant ceramic covering its forward area. With graphite having good properties as a heat sink, it would rise in temperature uniformly and relatively slowly, while remaining below the 3,000°F limit through the full time of re-entry.

Suitable grades of graphite proved to be available commercially from the firm of National Carbon. Candidate insulators included hafnia, thoria, magnesia, ceria, yttria, beryllia, and zirconia. Thoria was the most refractory but was very dense and showed poor resistance to thermal shock. Hafnia brought problems of availability and of reproducibility of properties. Zirconia stood out. Zirconium, its parent metal, had found use in nuclear reactors; the ceramic was available from the Zirconium Corporation of America. It had a melting point above 4,500°F, was chemically stable and compatible with siliconized graphite, offered high emittance with low thermal conductivity, provided adequate resistance to thermal shock and thermal stress, and lent itself to fabrication.33

For developmental testing, Vought used two in-house facilities that simulated the flight environment, particularly during re-entry. A ramjet, fueled with JP-4 and running with air from a wind tunnel, produced an exhaust with velocity up to 4,500 feet per second and temperature up to 3,500°F. It also generated acoustic levels above 170 decibels, reproducing the roar of a Titan III booster and showing that samples under test could withstand the resulting stresses without cracking. A separate installation, built specifically for the Dyna-Soar program, used an array of propane burners to test full-size nose caps.

The final Vought design used a monolithic shell of siliconized graphite that was covered over its full surface by zirconia tiles held in place with thick zirconia pins. This arrangement relieved thermal stresses by permitting mechanical movement of the tiles. A heat shield stood behind the graphite, fabricated as a thick disk-shaped container made of coated TZM sheet metal and filled with Q-felt. The nose cap attached to the vehicle with a forged ring and clamp that also were of coated TZM. The cap as a whole relied on radiative cooling. It was designed to be reusable; like the primary structure, it was to withstand four re-entries under the most severe conditions permitted.34

The backup Boeing effort drew on that company’s own test equipment. Study of samples used the Plasma Jet Subsonic Splash Facility, which created a jet with temperature as high as 8,000°F that splashed over the face of a test specimen. Full-scale nose caps went into the Rocket Test Chamber, which burned gasoline to produce a nozzle exit velocity of 5,800 feet per second and an acoustic level of 154 decibels. Both installations were capable of long-duration testing, reproducing conditions during re-entries that could last for 30 minutes.35

The Boeing concept used a monolithic zirconia nose cap that was reinforced against cracking with two screens of platinum-rhodium wire. The surface of the cap was grooved to relieve thermal stress. Like its counterpart from Vought, this design also installed a heat shield that used Q-felt insulation. However, there was no heat sink behind the zirconia cap. This cap alone provided thermal protection at the nose through radiative cooling. Lacking both pinned tiles and an inner shell, its design was simpler than that of Vought.36

Its fabrication bore comparison to the age-old work of potters, who shape wet clay on a rotating wheel and fire the resulting form in a kiln. Instead of using a potter’s wheel, Boeing technicians worked with a steel die with an interior in the shape of a bowl. A paper honeycomb, reinforced with Elmer’s Glue and laid in place, defined the pattern of stress-relieving grooves within the nose cap surface. The working material was not moist clay, but a mix of zirconia powder with binders, internal lubricants, and wetting agents.

With the honeycomb in position against the inner face of the die, a specialist loaded the die by hand, filling the honeycomb with the damp mix and forming layers of mix that alternated with the wire screens. The finished layup, still in its die, went into a hydraulic press. A pressure of 27,000 psi compacted the form, reducing its porosity for greater strength and less susceptibility to cracks. The cap was dried at 200°F, removed from its die, dried further, and then fired at 3,300°F for 10 hours. The paper honeycomb burned out in the course of the firing. Following visual and x-ray inspection, the finished zirconia cap was ready for machining to shape in the attachment area, where the TZM ring-and-clamp arrangement was to anchor it to the fuselage.37

The nose cap, outer panels, and primary structure all were built to limit their temperatures through passive methods: radiation and insulation. Active cooling also played a role, reducing temperatures within the pilot’s compartment and two equipment bays. These used a “water wall,” which mounted absorbent material between sheet-metal panels to hold a mix of water and a gel. The gel retarded flow of this fluid, while the absorbent wicking kept it distributed uniformly to prevent hot spots.

During re-entry, heat reached the water walls as it penetrated into the vehicle. Some of the moisture evaporated as steam, transferring heat to a set of redundant water-glycol cooling loops resembling those proposed for Brass Bell of 1957. In Dyna-Soar, liquid hydrogen from an onboard supply flowed through heat exchangers and cooled these loops. Brass Bell had called for its warmed hydrogen to flow through a turbine, operating the onboard Auxiliary Power Unit. Dyna-Soar used an arrangement that differed only slightly: a catalytic bed to combine the stream of warm hydrogen with oxygen that again came from an onboard supply. This produced gas that drove the turbine of the Dyna-Soar APU, which provided both hydraulic and electric power.

A cooled hydraulic system also was necessary to move the control surfaces as on a conventional aircraft. The hydraulic fluid’s operating temperature was limited to 400°F by using the fluid itself as an initial heat-transfer medium. It flowed through an intermediate water-glycol loop that removed its heat by cooling with hydrogen. Major hydraulic system components, including pumps, were mounted within an actively cooled compartment. Control-surface actuators, along with their associated valves and plumbing, were insulated using inch-thick blankets of Q-felt. Through this combination of passive and active cooling, the Dyna-Soar program avoided the need to develop truly high-temperature hydraulic arrangements, remaining instead within the state of the art.38

Specific vehicle parts and components brought their own thermal problems. Bearings, both ball and antifriction, needed strength to carry mechanical loads at high temperatures. For ball bearings, the cobalt-base superalloy Stellite 19 was known to be acceptable up to 1,200°F. Investigation showed that it could perform under high load for short durations at 1,350°F. However, Dyna-Soar needed ball bearings qualified for 1,600°F and obtained them as spheres of Rene 41 plated with gold. The vehicle also needed antifriction bearings as hinges for control surfaces, and here there was far less existing art. The best available bearings used stainless steel and were suitable only to 600°F, whereas Dyna-Soar again faced a requirement of 1,600°F. A survey of 35 candidate materials led to selection of titanium carbide with nickel as a binder.39

Antenna windows demanded transparency to radio waves at similarly high temperatures. A separate program of materials evaluation led to selection of alumina, with the best grade being available from the Coors Porcelain Company. Its emittance had the low value of 0.4 at 2,500°F, which meant that waveguides beneath these windows faced thermal damage even though they were made of columbium alloy. A mix of oxides of cobalt, aluminum, and nickel gave a suitable coating when fired at 3,000°F, raising the emittance to approximately 0.8.40
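The physics behind the coating choice can be sketched with the Stefan-Boltzmann law: a radiatively cooled surface settles at the temperature where it re-radiates the incoming flux, so its equilibrium temperature scales as emittance to the minus one-quarter power. The heat flux below is an assumed, illustrative figure, not a Dyna-Soar design value:

```python
# Radiative-equilibrium sketch: a surface that sheds heat only by radiation
# settles at T = (q / (eps * sigma))**0.25 (Stefan-Boltzmann law).
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def equilibrium_temp_K(q_in, emittance):
    """Surface temperature at which radiated flux balances the input flux."""
    return (q_in / (emittance * SIGMA)) ** 0.25

q = 2.0e5  # assumed aerodynamic heat flux, W/m^2 (illustrative only)
t_bare = equilibrium_temp_K(q, 0.4)    # uncoated alumina window
t_coated = equilibrium_temp_K(q, 0.8)  # oxide-coated window
print(f"emittance 0.4: {t_bare:.0f} K   emittance 0.8: {t_coated:.0f} K")
# Doubling the emittance lowers the equilibrium temperature by a
# factor of 2**0.25, roughly 16 percent.
```

The one-quarter-power dependence explains why even a modest emittance coating noticeably relieved the waveguides below the window.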

The pilot needed his own windows. The three main ones, facing forward, were the largest yet planned for a manned spacecraft. They had double panes of fused silica, with infrared-reflecting coatings on all surfaces except the outermost. This inhibited the inward flow of heat by radiation, reducing the load on the active cooling of the pilot’s compartment. The window frames expanded when hot; to hold the panes in position, the frames were fitted with springs of Rene 41. The windows also needed thermal protection, and so they were covered with a shield of D-36.

This shield was to be jettisoned following re-entry, around Mach 5, but this raised a question: what if it remained attached? The cockpit had two other windows, one on each side, which faced a less severe environment and were to be left unshielded throughout a flight. The test pilot Neil Armstrong flew approaches and landings with a modified Douglas F5D fighter and showed that it was possible to land Dyna-Soar safely with side vision only.41

The vehicle was to touch down at 220 knots. It lacked wheeled landing gear, for inflated rubber tires would have demanded their own cooled compartments. For the same reason, it was not possible to use a conventional oil-filled strut as a shock absorber. The craft therefore deployed tricycle landing skids. The two main skids, from Goodyear, were of Waspaloy nickel steel and mounted wire bristles of Rene 41. These gave a high coefficient of friction, enabling the vehicle to skid to a stop in a planned length of 5,000 feet while accommodating runway irregularities. In place of the usual oleo strut, a long rod of Inconel stretched at the moment of touchdown and took up the energy of impact, thereby serving as a shock absorber. The nose skid, from Bendix, was forged from Rene 41 and had an undercoat of tungsten carbide to resist wear. Fitted with its own energy-absorbing Inconel rod, the front skid had a reduced coefficient of friction, which helped to keep the craft pointing straight ahead during slideout.42

Through such means, the Dyna-Soar program took long strides toward establishing hot structures as a technology suitable for operational use during re-entry from orbit. The X-15 had introduced heat sink fabricated from Inconel X, a nickel steel. Dyna-Soar went considerably further, developing radiation-cooled insulated structures fabricated from Rene 41 superalloy and from refractory materials. A chart from Boeing made the point that in 1958, prior to Dyna-Soar, the state of the art for advanced aircraft structures involved titanium and stainless steel, with temperature limits of 600°F. The X-15, with its Inconel X, could withstand temperatures above 1,200°F. Against this background, Dyna-Soar brought substantial advances in the temperature limits of aircraft structures:43





[Table: advances in temperature limits for the nose cap, surface panels, primary structure, leading edges, and control surfaces; the temperature values did not survive in this copy.]






Meanwhile, while Dyna-Soar was going forward within the Air Force, NASA had its own approaches to putting man in space.

Heat Shields for Mercury and Corona

In November 1957, a month after the first Sputnik reached orbit, the Soviets again startled the world by placing a much larger satellite into space, which held the dog Laika as a passenger. This clearly presaged the flight of cosmonauts, and the question then was how the United States would respond. No plans were ready at the moment, but whatever America did, it would have to be done quickly.

HYWARDS, the nascent Dyna-Soar, was proceeding smartly. In addition, at North American Aviation the company’s chief engineer, Harrison Storms, was in Washington, DC, with a concept designated X-15B. Fitted with thermal protection for return from orbit, it was to fly into space atop a cluster of three liquid-fueled boosters for an advanced Navaho, each with thrust of 415,000 pounds.44 However, neither HYWARDS nor the X-15B could be ready soon. Into this breach stepped Maxime Faget of NACA-Langley, who had already shown a talent for conceptual design during the 1954 feasibility study that led to the original X-15.

In 1958 he was a branch chief within Langley’s Pilotless Aircraft Research Division. Working on speculation, amid full awareness that the Army or Air Force might win the man-in-space assignment, he initiated a series of paper calculations and wind-tunnel tests of what he described as a “simple nonlifting satellite vehicle which follows a ballistic path in reentering the atmosphere.” He noted that an “attractive feature of such a vehicle is that the research and production experiences of the ballistic-missile programs are applicable to its design and construction,” and “since it follows a ballistic path, there is a minimum requirement for autopilot, guidance, or control equipment.”45

In seeking a suitable shape, Faget started with the heat shield. Invoking the Allen-Eggers principle, he at first considered a flat face. However, it proved to trap heat by interfering with the rapid airflow that could carry this heat away. This meant that there was an optimum bluntness, as measured by radius of curvature.

Calculating thermal loads and heat-transfer rates using theories of Lees and of Fay and Riddell, and supplementing these estimates with experimental data from his colleague William Stoney, he considered a series of shapes. The least blunt was a cone with a rounded tip that faced the airflow. It had the highest heat input and the highest peak heating rate. A sphere gave better results in both areas, while the best estimates came with a gently rounded surface that faced the flow. It had only two-thirds the total heat input of the rounded cone—and less than one-third the peak heating rate. It also was the bluntest shape of those considered, and it was selected.46
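The trade Faget was working can be illustrated with the Sutton-Graves stagnation-point correlation, a later engineering shorthand rather than the Lees or Fay-Riddell theories he actually used. The density, speed, and nose radii below are assumed values chosen only to show the trend:

```python
# Sutton-Graves stagnation-point heating sketch: q = k * sqrt(rho / R_n) * V**3.
# With the commonly quoted Earth-air constant and SI inputs, q comes out in W/m^2.
# All numerical inputs are illustrative assumptions, not Mercury design figures.
K_EARTH = 1.7415e-4  # Sutton-Graves constant for Earth air (SI)

def stagnation_heating(rho, radius_m, velocity_ms):
    """Peak convective heating rate at the stagnation point, W/m^2."""
    return K_EARTH * (rho / radius_m) ** 0.5 * velocity_ms ** 3

rho = 3e-4   # assumed air density at peak heating, kg/m^3
v = 7500.0   # near-orbital entry speed, m/s
sharp = stagnation_heating(rho, 0.3, v)  # small nose radius
blunt = stagnation_heating(rho, 2.0, v)  # large (blunt) nose radius
print(f"sharp nose: {sharp / 1e4:.0f} W/cm^2   blunt nose: {blunt / 1e4:.0f} W/cm^2")
# Heating falls as 1 / sqrt(R_n): the blunter the face, the gentler the peak rate.
```

The inverse-square-root dependence on nose radius is why the bluntest candidate shape gave the lowest peak heating rate.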

With a candidate heat-shield shape in hand, he turned his attention to the complete manned capsule. An initial concept had the shape of a squat dome that was recessed slightly from the edge of the shield, like a circular Bundt cake that does not quite extend to the rim of its plate. The lip of this heat shield was supposed to produce separated flow over the afterbody to reduce its heating. When tested in a wind tunnel, however, it proved to be unstable at subsonic speeds.

Faget’s group eliminated the open lip and exchanged the domed afterbody for a tall cone with a rounded tip that was to re-enter with its base end forward. It proved to be stable in this attitude, but tests in the 11-inch Langley hypersonic wind tunnel showed that it transferred too much heat to the afterbody. Moreover, its forward tip did not give enough room for its parachutes. This brought a return to the domed afterbody, which now was somewhat longer and had a cylinder on top to stow the chutes. Further work evolved the domed shape into a funnel, a conic frustum that retained the cylinder. This configuration provided a basis for design of the Mercury and later of the Gemini capsules, both of which were built by the firm of McDonnell Aircraft.47

Choice of thermal protection quickly emerged as a critical issue. Fortunately, the thermal environment of a re-entering satellite proved to be markedly less demanding than that of an ICBM. The two vehicles were similar in speed and kinetic energy, but an ICBM was to slam back into the atmosphere at a steep angle, decelerating rapidly due to drag and encountering heating that was brief but very severe. Re-entry from orbit was far easier, taking place over a number of minutes. Indeed, experimental work showed that little if any ablation was to be expected under the relatively mild conditions of satellite entry.

But satellite entry involved high total heat input, while its prolonged duration imposed a new requirement for good materials properties as insulators. They also had to stay cool through radiation. It thus became possible to critique the usefulness of ICBM nose-cone ablators for the prospective new role of satellite re-entry.48

Heat of ablation, in BTU per pound, had been a standard figure of merit. For satellite entry, however, with little energy being carried away by ablation, it could be irrelevant. Phenolic glass, a fine ICBM material with a measured heat of 9,600 BTU per pound, was unusable for a satellite because it had an unacceptably high thermal conductivity. This meant that the prolonged thermal soak of re-entry could have time enough to fry a spacecraft. Teflon, by contrast, had a measured heat only one-third as large. It nevertheless made a superb candidate because of its excellent properties as an insulator.49
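A rough sketch of why conductivity dominated: the time for a heat pulse to soak through a shield scales as thickness squared divided by thermal diffusivity. The diffusivity values below are assumed, order-of-magnitude figures for illustration, not measured properties of the actual materials:

```python
# Thermal-soak sketch: the characteristic time for heat to diffuse through a
# slab of thickness L is roughly L**2 / alpha, where alpha is the thermal
# diffusivity. Diffusivities here are rough, assumed figures.

def soak_time_s(thickness_m, diffusivity_m2s):
    """Characteristic 1-D conduction time through a slab, seconds."""
    return thickness_m ** 2 / diffusivity_m2s

L = 0.025  # roughly one-inch shield, m
good_insulator = soak_time_s(L, 1.0e-7)  # assumed low diffusivity (Teflon-like)
poor_insulator = soak_time_s(L, 4.0e-7)  # assumed higher diffusivity
print(f"good insulator: ~{good_insulator / 60:.0f} min soak; "
      f"poor insulator: ~{poor_insulator / 60:.0f} min soak")
# If the soak time is comparable to the minutes-long entry, the heat pulse
# reaches the structure -- regardless of the material's heat of ablation.
```

The quadratic scaling also shows why a brief, severe ICBM entry and a long, mild satellite entry rank the same materials so differently.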

Such results showed that it was not necessary to reopen the problem of thermal protection for satellite entry. With appropriate caveats, the experience and research techniques of the ICBM problem could carry over to this new realm. This background made it possible for the Central Intelligence Agency to build operational orbital re-entry vehicles at a time when nose cones for Atlas were still in flight test.

This happened beginning in 1958, when Richard Bissell, a senior manager within the CIA, launched a highly classified reconnaissance program called Corona. General Electric, which was building nose cones for Atlas, won a contract to build the film-return capsule. The company selected ablation as the thermal-protection method, with phenolic nylon as the ablative material.50

The second Corona launch, in April 1959, flew successfully and became the world’s first craft to return safely from orbit. It was supposed to come down near Hawaii, and a ground controller transmitted a command to have the capsule begin re-entry at a particular time. However, he forgot to press a certain button. The director of the recovery effort, Lieutenant Colonel Charles “Moose” Mathison, then learned that it would actually come down near the Norwegian island of Spitsbergen.

Mathison telephoned a friend in Norway’s air force, Major General Tufte Johnsen, and told him to watch for a small spacecraft that was likely to be descending by parachute. Johnsen then phoned a mining company executive on the island and had him send out ski patrols. A three-man patrol soon returned with news: They had seen the orange parachute as the capsule drifted downward near the village of Barentsburg. That was not good because its residents were expatriate Russians. General Nathan Twining, Chairman of the Joint Chiefs, summarized the craft’s fate in a memo: “From concentric circular tracks found in the snow at the suspected impact point and leading to one of the Soviet mining concessions on the island, we strongly suspect that the Soviets are in possession of the capsule.”51

Meanwhile, NASA’s Maxime Faget was making decisions concerning thermal protection for his own program, which now had the name Project Mercury. He was well aware of ablation but preferred heat sink. It was heavier, but he doubted that industrial contractors could fabricate an ablative heat shield that had adequate reliability.52

The suitability of ablation could not be tested by flying a subscale heat shield atop a high-speed rocket. Nothing less would do than to conduct a full-scale test using an Atlas ICBM as a booster. This missile was still in development, but in December 1958 the Air Force Ballistic Missile Division agreed to provide one Atlas C within six months, along with eight Atlas Ds over the next several years. This made it possible to test an ablative heat shield for Mercury as early as September 1959.53

The contractor for this shield was General Electric. The ablative material, phenolic-fiberglass, lacked the excellent insulating properties of Teflon or phenolic-nylon. Still, it had flown successfully as a ballistic-missile nose cone. The project engineer Aleck Bond adds that “there was more knowledge and experience with fiberglass-phenolic than with other materials. A great deal of ground-test information was available…. There was considerable background and experience in the fabrication, curing, and machining of assemblies made of Fiberglass.” These could be laid up and cured in an autoclave.54

The flight test was called Big Joe, and it showed conservatism. The shield was heavy, with a density of 108 pounds per cubic foot, and designers added a large safety factor by specifying that it was to be twice as thick as calculations showed to be necessary. The flight was to be suborbital, with a range of 1,800 miles, but was to simulate a re-entry from orbit that was relatively steep and therefore demanding, producing higher temperatures on the face of the shield and on the afterbody.55

Liftoff came after 3 a.m., a time chosen to coincide with dawn in the landing area so as to give ample daylight for search and recovery. “The night sky lit up and the beach trembled with the roar of the Rocketdyne engines,” notes NASA’s history of Project Mercury. Two of those engines were to fall away during ascent, but they remained as part of the Atlas, increasing its weight and reducing its peak velocity by some 3,000 feet per second. What was more, the capsule failed to separate. It had an onboard attitude-control system that was to use spurts of compressed nitrogen gas to turn it around, to enter the atmosphere blunt end first. But this system used up all its nitrogen trying fruitlessly to swing the big Atlas that remained attached. Separation finally occurred at an altitude of 345,000 feet, while people waited to learn what would happen.56

The capsule performed better than planned. Even without effective attitude control, its shape and distribution of weights gave it enough inherent stability to turn itself around entirely through atmospheric drag. Its reduced speed at re-entry meant that its heat load was only 42 percent of the planned value of 7,100 BTU per square foot. But a particularly steep flight-path angle gave a peak heating rate of 77 percent of the intended value, thereby subjecting the heat shield to a usefully severe test. The capsule came down safely in the Atlantic, some 500 miles short of the planned impact area, but the destroyer USS Strong was not far away and picked it up a few hours later.
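The quoted fractions work out as follows (planned load and achieved percentages taken from the text):

```python
# Quick arithmetic check on the Big Joe figures: a 42-percent heat load
# against the planned value of 7,100 BTU per square foot.
planned_heat_load = 7100.0  # BTU per square foot, from the text
actual_heat_load = 0.42 * planned_heat_load
print(f"actual heat load: ~{actual_heat_load:.0f} BTU per square foot")
# Just under 3,000 BTU per square foot -- well under plan -- yet the steep
# flight path still drove the peak heating rate to 77 percent of intent.
```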

Subsequent examination showed that the heating had been uniform over the face of the heat shield. This shield had been built as an ablating laminate with a thickness of 1.075 inches, supported by a structural laminate half as thick. However, charred regions extended only to a depth of 0.20 inch, with further discoloration reaching to 0.35 inch. Weight loss due to ablation came to only six pounds, in line with experimental findings that had shown that little ablation indeed would occur.57

The heat shield not only showed fine thermal performance; it also sustained no damage on striking the water. This validated the manufacturing techniques used in its construction. The overall results from this flight test were sufficiently satisfactory to justify the choice of ablation for Mercury. This made it possible to drop heat sink from consideration and to go over completely to ablation, not only for Mercury but also for Gemini, which followed.58

The X-33 and X-34

During the early 1990s, as NASP passed its peak of funding and began to falter, two new initiatives showed that there still was much continuing promise in rockets. The startup firm of Orbital Sciences Corporation had set out to become the first company to develop a launch vehicle as a commercial venture, and this rocket, called Pegasus, gained success on its first attempt. This occurred in April 1990, as NASA’s B-52 took off from Edwards AFB and dropped it into flight. Its first stage mounted wings and tail surfaces. Its third stage carried a small satellite and placed it in orbit.4

In a separate effort, the Strategic Defense Initiative Office funded the DC-X project of McDonnell Douglas. This single-stage vehicle weighed some 40,000 pounds when fueled and flew with four RL10 rocket engines from Pratt & Whitney. It took off and landed vertically, like Flash Gordon’s rocket ship, using rocket thrust during the descent and avoiding the need for a parachute. It went forward as an exercise in rapid prototyping, with the contract being awarded in August 1991 and the DC-X being rolled out in April 1993. It demonstrated both reusability and low cost, flying with a ground crew of only 15 people along with three more in its control center. It flew no higher than a few thousand feet, but it became the first rocket in history to abort a flight and execute a normal landing.5

The Clinton Administration came to Washington in January 1993. Dan Goldin, the NASA Administrator, soon chartered a major new study of launch options called Access to Space. Arnold Aldrich, Associate Administrator for Space Systems Development, served as its director. With NASP virtually on its deathbed, the work comprised three specific investigations. Each addressed a particular path toward a new generation of launch vehicles, which could include a new shuttle.

Managers at NASA Headquarters and at NASA-Johnson considered how upgrades to current expendables, and to the existing shuttle, might maintain them in service through the year 2030. At NASA-Marshall, a second group looked at prospects for new expendables that could replace existing rockets, including the shuttle, beginning in 2005. A collaboration between Headquarters and Marshall also considered a third approach: development of an entirely new reusable launch vehicle, to replace the shuttle and current expendables beginning in 2008.6

Engineers in industry were ready with ideas of their own. At Lockheed’s famous Skunk Works, manager David Urie already had a concept for a fully reusable single-stage vehicle that was to fly to orbit. It used a lifting-body configuration that drew on an in-house study of a vehicle to rescue crews from the space station. Urie’s design was to be built as a hot structure with metal external panels for thermal protection and was to use high-performing rocket engines from Rocketdyne that would burn liquid hydrogen and liquid oxygen. This concept led to the X-33.7

Orbital Sciences was also stirring the pot. During the spring of 1993, this company conducted an internal study that examined prospects for a Pegasus follow-on. Pegasus used solid propellant in all three of its stages, but the new effort specifically considered the use of liquid propellants for higher performance. Its concept took shape as an air-launched two-stage vehicle, with the first stage being winged and fully reusable while the second stage, carried internally, was to fly to orbit without being recovered. Later that year executives of Orbital Sciences approached officials of NASA-Marshall to ask whether they might be interested, for this concept might complement that of Lockheed by lifting payloads of much lesser weight. This initiative led in time to the X-34.8

NASA’s Access to Space report was in print in January 1994. Managers of the three option investigations had sought to make as persuasive a case as possible for their respective alternatives, and the view prevailed that technology soon would be in hand to adopt Lockheed’s approach. In the words of the report summary,

The study concluded that the most beneficial option is to develop and deploy a fully reusable single-stage-to-orbit (SSTO) pure-rocket launch vehicle fleet incorporating advanced technologies, and to phase out current systems beginning in the 2008 time period….

The study determined that while the goal of achieving SSTO fully reusable rocket launch vehicles had existed for a long time, recent advances in technology made such a vehicle feasible and practical in the near term provided that necessary technologies were matured and demonstrated prior to start of vehicle development.9

Within weeks NASA followed with a new effort, the Advanced Launch Technology Program. It sought to lay technical groundwork for a next-generation shuttle, as it solicited initiatives from industry that were to pursue advances in structures, thermal protection, and propulsion.10

The Air Force had its own needs for access to space and had generally been more conservative than NASA. During the late 1970s, while that agency had been building the shuttle, the Air Force had pursued the Titan 34D as a new version of its Titan 3. More recently that service had gone forward with its upgraded Titan 4.11 In May 1994 Lieutenant General Thomas Moorman, Vice Commander of the Air Force’s Space Command, released his own study, known as the Space Launch Modernization Plan. It considered a range of options that paralleled NASA’s, including development of “a new reusable launch system.” However, whereas NASA had embraced SSTO as its preferred direction, the Air Force study did not even mention this as a serious prospect. Nor did it recommend a selected choice of launch system. In a cover letter to the Deputy Secretary of Defense, John Deutch, Moorman wrote that “this study does not recommend a specific program approach” but was intended to “provide the Department of Defense a range of choices.” Still, the report made a number of recommendations, one of which proved to carry particular weight: “Assign DOD the lead role in expendable launch vehicles and NASA the lead in reusables.”12

The NASA and Air Force studies both went to the White House, where in August the Office of Science and Technology Policy issued a new National Space Transportation Policy. It divided the responsibilities for new launch systems in the manner that the Air Force had recommended and gave NASA the opportunity to pursue its own wishes as well:

The Department of Defense (DoD) will be the lead agency for improvement and evolution of the current U.S. expendable launch vehicle (ELV) fleet, including appropriate technology development.

The National Aeronautics and Space Administration (NASA) will provide for the improvement of the Space Shuttle system, focusing on reliability, safety, and cost-effectiveness.

The National Aeronautics and Space Administration will be the lead agency for technology development and demonstration for next generation reusable space transportation systems, such as the single-stage-to-orbit concept.13

The Pentagon’s assignment led to the Evolved Expendable Launch Vehicle Program, which brought development of the Delta 4 family and of new versions of the Atlas.14

The new policy broke with past procurement practices, whereby NASA had paid the full cost of the necessary research and development and had purchased flight vehicles under contract. Instead, the White House took the view that the private sector could cover these costs, developing the next space shuttle as if it were a new commercial airliner. NASA’s role still was critical, but this was to be the longstanding role of building experimental flight craft to demonstrate pertinent technologies. The policy document made this clear:

The objective of NASA’s technology development and demonstration effort is to support government and private sector decisions by the end of this decade on development of an operational next generation reusable launch system.

Research shall be focused on technologies to support a decision no later than December 1996 to proceed with a sub-scale flight demonstration which would prove the concept of single-stage-to-orbit….

It is envisioned that the private sector could have a significant role in managing the development and operation of a new reusable space transportation system. In anticipation of this role, NASA shall actively involve the private sector in planning and evaluating its launch technology activities.15

This flight demonstrator became the X-33, with the smaller X-34 being part of the program as well. In mid-October NASA issued Cooperative Agreement Notices, which resembled requests for proposals, for the two projects. At a briefing to industry representatives held at NASA-Marshall on 19 October 1994, agency officials presented year-by-year projections of their spending plans. The X-33 was to receive $660 million in federal funds—later raised to $941 million—while the X-34 was slated for $70 million. Contractors were to add substantial amounts of their own and to cover the cost of overruns. Orbital Sciences was a potential bidder and held no contract, but its president, David Thompson, was well aware that he needed deeper pockets. He turned to Rockwell International and set up a partnership.16

The X-34 was the first to go to contract, as NASA selected the Orbital Sciences proposal in March 1995. Matching NASA’s $70 million, this company and Rockwell each agreed to put up $60 million, which meant that the two corporations together were to provide more than 60 percent of the funding. Their partnership, called American Space Lines, anticipated developing an operational vehicle, the X-34B, that would carry 2,500 pounds to orbit. Weighing 108,500 pounds when fully fueled, it was to fly from the NASA Boeing 747 that served as the shuttle’s carrier aircraft. Its length of 88 feet compared with 122 feet for the space shuttle orbiter.17

Very quickly an imbroglio developed over the choice of rocket engine for NASA’s test craft. The contract called for use of a Russian engine, the Energomash RD-120, which was being marketed by Pratt & Whitney. Rockwell, which owned Rocketdyne, soon began demanding that its less powerful RS-27 engine be used instead. “The bottom line is Rockwell came in two weeks ago and said ‘Use our engine or we’ll walk,’” a knowledgeable industry observer told Aviation Week.18

As the issue remained unresolved, Orbital Sciences missed program milestone dates for airframe design and for selecting between configurations. Early in November NASA responded by handing Orbital a 14-day suspension notice. This led to further discussions, but even the personal involvement of Dan Goldin failed to resolve the matter. In addition, the X-34B concept had grown to as much as 140,000 pounds. Within the program, strong private-sector involvement meant that private-sector criteria of profitability were important, and Orbital determined that the new and heavy configuration carried substantial risk of financial loss. Early in 1996 company officials called for a complete redesign of NASA’s X-34 that would substantially reduce its size. The agency responded by issuing a stop-work order. Rockwell then made its move by bailing out as well. With this, the X-34 appeared dead.

But it soon returned to life, as NASA prepared to launch it anew. It now was necessary to go back to square one and again ask for bids and proposals, and again Orbital Sciences was in the running, this time without a partner. The old X-34 had amounted to a prototype of the operational X-34B, approaching it in size and weight while also calling for use of NASA’s Boeing 747. The company’s new concept was only 58 feet long compared with 83; its gross weight was to be 45,000 pounds rather than 120,000. It was not to launch payloads into orbit but was to serve as a technology demonstrator for an eventual (and larger) first stage by flying to Mach 8. In June 1996 NASA selected Orbital again as the winner, choosing its proposal over competing concepts from such major players as McDonnell Douglas, Northrop Grumman, Rockwell, and the Lockheed Martin Skunk Works.19

Preparations for the X-33 had meanwhile been going forward as well. Design studies had been under way, with Lockheed Martin, Rockwell, and McDonnell Douglas as the competitors. In July 1996 Vice President Albert Gore announced that Lockheed had won the prize. This company envisioned a commercial SSTO craft named VentureStar as its eventual goal. It was to carry a payload of 59,000 pounds to low Earth orbit, topping the 51,000 pounds of the shuttle. Lockheed’s X-33 amounted to a version of this vehicle built at 53 percent scale. It was to fly to Mach 15, well short of orbital velocity, but would subject its thermal protection to a demanding test.20

No rocket craft of any type had ever flown to orbit as a single stage. NASA hoped that vehicles such as VentureStar not only would do this but would achieve low cost, cutting the cost of a pound in orbit from the $10,000 of the space shuttle to as little as $1,000.21 The X-33 was to demonstrate the pertinent technology, which was being pursued under NASA’s Advanced Launch Technology Program of 1994. Developments based on this program were to support the X-34 as well.

Lightweight structures were essential, particularly for the X-33. Accordingly, there was strong interest in graphite-composite tanks and primary structure. This represented a continuation of NASP activity, which had anticipated a main hydrogen tank of graphite-epoxy. The DC-X supported the new work, as NASA took it over and renamed it the DC-XA. Its oxygen tank had been aluminum; a new one, built in Russia, used an aluminum-lithium alloy. Its hydrogen tank, also of aluminum, gave way to one of graphite-epoxy with lightweight foam for internal insulation. This material also served for an intertank structure and a feedline and valve assembly.22

Rapid turnaround offered a particularly promising road to low launch costs, and the revamped DC-XA gave support in this area as well. Two launches, conducted in June 1996, demonstrated turnaround and reflight in only 26 hours, again with its ground crew of only 15.23

Thermal protection raised additional issues. The X-34 was to fly only to Mach 8 and drew on space shuttle technology. Its surface was to be protected with insulation blankets that resembled those in use on the shuttle orbiter. These included the High Heat Blanket for the X-34 undersurface, rated for 2,000°F, with a Nextel 440 fabric and Saffil batting. The nose cap as well as the wing and rudder leading edges were protected with Fibrous Refractory Composite Insulation, which formed the black silica tiles of the shuttle orbiter. For the X-34, these tiles were to be impregnated with silicone to make them water resistant, impermeable to flows of hot gas, and easier to repair.24

VentureStar faced the demands of entry from orbit, but its re-entry environment was to be more benign than that of the shuttle. The shuttle orbiter was compact in size and relatively heavy and lost little of its orbital energy until well into the atmosphere. By contrast, VentureStar would resemble a big lightweight balloon when it re-entered after expending its propellants. The VentureStar thermal protection system was to be tested in flight on the X-33. It had the form of a hot structure, with radiative surface panels of carbon-carbon, Inconel 617 nickel alloy, and titanium, depending on the temperature.25

In an effort separate from that of the X-33, elements of this thermal protection were given a workout by being mounted to the space shuttle Endeavour and tested during re-entry. Thoughts of such tests dated to 1981 and finally were realized during Mission STS-77 in May 1996. Panels of Inconel 617 and of Ti-1100 titanium, measuring 7 by 10 inches, were mounted in recessed areas of the fuselage that lay near the vertical tail and which were heated only to approximately 1,000°F during re-entry. Both materials were rated for considerably higher temperatures, but this successful demonstration put one more arrow in NASA’s quiver.26

For both VentureStar and its supporting X-33, light weight was critical. The X-30 of NASP had been designed for SSTO operation, with a structural mass fraction—the ratio of unfueled weight to fully fueled weight—of 25 percent.27 This requirement was difficult to achieve because most of the fuel was slush hydrogen, which has a very low density. This ballooned the size of the X-30 and increased the surface area that needed structural support and thermal protection. VentureStar was to use rockets, which had less performance than scramjets. It therefore needed more fuel, and its structural mass fraction, including payload, engines, and thermal protection, was less than 12 percent. However, this fuel included a great deal of liquid oxygen, which was denser than water and drove up the weight of the propellant. This low structural mass fraction therefore appeared within reach, and for the X-33, the required value was considerably less stringent. Its design called for an empty weight of 63,000 pounds and a loaded weight of 273,000, for a structural mass fraction of 23 percent.28
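The structural mass fractions quoted above follow directly from the stated weights. The sketch below simply recomputes the X-33 figure; the function name is illustrative, and the only inputs are the empty and loaded weights given in the text.

```python
def structural_mass_fraction(empty_lb: float, loaded_lb: float) -> float:
    """Ratio of unfueled (empty) weight to fully fueled (loaded) weight."""
    return empty_lb / loaded_lb

# X-33 design figures from the text: 63,000 lb empty, 273,000 lb loaded.
x33 = structural_mass_fraction(63_000, 273_000)
print(f"X-33: {x33:.1%}")  # about 23 percent, as stated
```

The same ratio frames the harder goals mentioned in the text: the X-30 target of 25 percent and VentureStar's figure of under 12 percent, where a smaller fraction means a lighter structure relative to the fueled vehicle and is correspondingly harder to build.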

Even this design goal imposed demands, for while liquid oxygen was dense and compact, liquid hydrogen still was bulky and again enlarged the surface area. Designers thus made extensive use of lightweight composites, specifying graphite-epoxy for the hydrogen tanks. A similar material, graphite-bismaleimide, was to serve for load-bearing trusses as well as for the outer shell that was to support the thermal protection. This represented the X-30’s road not taken, for the NASP thermal environment during ascent had been so severe that its design had demanded a primary structure of titanium-matrix composite, which was heavier. The lessened requirements of VentureStar’s thermal protection meant that Lockheed could propose to reach orbit using materials that were considerably less heavy—that indeed were lighter than aluminum. The X-33 design saved additional weight because it was to be unpiloted, needing no flight deck and no life-support system for a crew.29

But aircraft often gain weight during development, and the X-33 was no exception. Starting in mid-1996 with a dry weight of 63,000 pounds, it was at 80,000 a year later, although a weight-reduction exercise trimmed this to 73,000.30 Managers responded by cutting the planned top speed from Mach 15 or more to Mach 13.8. Jerry Rising, vice president at the Skunk Works that was the X-33’s home, explained that such a top speed still would permit validation of the thermal protection in flight test. The craft would lift off from Edwards AFB and follow a boost-glide trajectory, reaching a peak altitude of 300,000 feet. The vehicle then would be lower in the atmosphere than previously planned, and the heating rate would consequently be higher, properly exercising the thermal protection. The X-33 then was to glide onward to a landing at Malmstrom AFB in northern Montana, 950 miles from Edwards.31

The original program plan called for rollout of a complete flight vehicle on 1 November 1998. When that date arrived, though, the effort faced a five-month schedule slip. This resulted from difficulties with the rocket engines.32 Then in December, two days before Christmas, the program received a highly unwelcome present. A hydrogen fuel tank, under construction at a Lockheed Martin facility in Sunnyvale, California, sustained major damage within an autoclave. An inner wall of the tank showed delamination over 90 percent of its area, while another wall sprang loose from its frame. The tank had been inspected using ultrasound, but this failed to disclose the incipient problem, which raised questions as to the adequacy of inspection procedures as well as of the tank design itself. Another delay, this one of up to seven months, was at hand.

By May 1999 the weight at main engine cutoff was up to 83,000 pounds, including unburned residual propellant. Cleon Lacefield, the Lockheed Martin program manager, continued to insist bravely that the vehicle would reach at least Mach 13, but working engineers told Aviation Week that the top speed had been Mach 10 for quite some time and that “the only way it’s getting to Malmstrom is on the back of a truck.”33 The commercial VentureStar concept threatened to be far more demanding, and during that month Peter Teets, president and CEO of Lockheed Martin, told the U.S. Senate Commerce and Science Committee that he could not expect to attract the necessary private-sector financing. “Wall Street has spoken,” he declared. “They have picked the status quo; they will finance systems with existing technology. They will not finance VentureStar.”34

By then the VentureStar design had gone over to aluminum tanks. These were heavier than tanks of graphite-epoxy, but the latter brought unacceptable technical risks because no autoclave existed that was big enough to fabricate such tankage. Lockheed Martin designers reshaped VentureStar and accepted a weight increase from 2.6 million pounds to 3.3 million. (It had been 2.2 million in 1996.) The use of graphite-epoxy in the X-33 tank now no longer was relevant to VentureStar, but this was what the program held in hand, and a change to aluminum would have added still more weight to the X-33.

During 1999 a second graphite-epoxy hydrogen tank was successfully assembled at Lockheed Martin and then was shipped to NASA-Marshall for structural tests. Early in November it experienced its own failure, showing delamination and a ripped outer skin along with several fractures or breaks in the skin. Engineers had been concerned for months about structural weakness, with one knowledgeable specialist telling Aviation Week, “That tank belonged in a junkyard, not a test stand.” The program now was well on its way to becoming an orphan. It was not beloved by NASA, which refused to increase its share of funding above $941 million, while the in-house cost at Lockheed Martin was mounting steadily.35

The X-33 effort nevertheless lingered through the year 2000. This was an election year, not a good time to cancel a billion-dollar federal program, and Al Gore was running for president. He had announced the contract award in 1996, and in the words of a congressional staffer, “I think NASA will have a hard time walking away from the X-33 until after the election. For better or worse, Al Gore now has ownership of it. They can’t admit it’s a failure.”36

The X-34 was still in the picture, as a substantial effort in its own right. Its loaded weight of 47,000 pounds approached the 56,000 of the X-15 with external tanks, built more than 30 years earlier.37 Yet despite this reduced weight, the X-34 was to reach Mach 8, substantially exceeding the Mach 6.7 of the X-15. This reflected the use of advanced materials, for whereas the X-15 had been built of heavy Inconel X, the X-34 design specified lightweight composites for the primary structure and fuel tank, along with aluminum for the liquid-oxygen tank.38

Its construction went forward without major mishaps because it was much smaller than the X-33. The first of the X-34 vehicles reached completion in February 1999, but during the next two years it never came close to powered flight. The reason was that the X-34 program called for use of an entirely new engine, the 60,000-pound-thrust Fastrac of NASA-Marshall that burned liquid oxygen and kerosene. This engine encountered development problems, and because it was not ready, the X-34 could not fly under power.39

Early in March 2001, with George W. Bush in the White House, NASA pulled the plug. Arthur Stephenson, director of NASA-Marshall, canceled the X-34. This reflected the influence of the Strategic Defense Initiative Office, which had maintained a continuing interest in low-cost access to orbit and had determined that the X-34’s costs outweighed the benefits. Stephenson also announced that the cooperative agreement between NASA and Lockheed Martin, which had supported the X-33, would expire at the end of the month. He then pronounced an epitaph on both programs: “One of the things we have learned is that our technology has not yet advanced to the point that we can successfully develop a new reusable launch vehicle that substantially improves safety, reliability, and affordability.”40

One could say that the X-30 effort went farther than the X-33, for the former successfully exercised a complete hydrogen tank within its NIFTA project, whereas the latter did not. But the NIFTA tank was subscale, whereas those of the X-33 were full-size units intended for flight. The reason that NIFTA appears to have done better is that NASP never got far enough to build and test a full-size tank for its hydrogen slush. Because that tank also was to have been of graphite-epoxy, as with the X-33, it is highly plausible that the X-30 would have run aground on the same shoal of composite-tank structural failure that sank Lockheed Martin’s rocket craft.41


In 1953, on the eve of the Atlas go-ahead, investigators were prepared to consider several methods for thermal protection of its nose cone. The simplest was the heat sink, with a heat shield of thick copper absorbing the heat of re-entry. An alternative approach, the hot structure, called for an outer covering of heat-resistant shingles that were to radiate away the heat. A layer of insulation, inside the shingles, was to protect the primary structure. The shingles, in turn, overlapped and could expand freely.

A third approach, transpiration cooling, sought to take advantage of the light weight and high heat capacity of boiling water. The nose cone was to be filled with this liquid; strong g-forces during deceleration in the atmosphere were to press the water against the hot inner skin. The skin was to be porous, with internal steam pressure forcing the fluid through the pores and into the boundary layer. Once injected, steam was to carry away heat. It would also thicken the boundary layer, reducing its temperature gradient and hence its rate of heat transfer. In effect, the nose cone was to stay cool by sweating.41

Still, each of these approaches held difficulties. Though potentially valuable, transpiration cooling was poorly understood as a topic for design. The hot-structure concept raised questions of suitably refractory metals along with the prospect of losing the entire nose cone if a shingle came off. The heat-sink approach was likely to lead to high weight. Even so, it seemed to be the most feasible way to proceed, and early Atlas designs specified use of a heat-sink nose cone.42

The Army had its own activities. Its missile program was separate from that of the Air Force and was centered in Huntsville, Alabama, with the redoubtable Wernher von Braun as its chief. He and his colleagues came to Huntsville in 1950 and developed the Redstone missile as an uprated V-2. It did not need thermal protection, but the next missile would have longer range and would certainly need it.43

Von Braun was an engineer. He did not set up a counterpart of Avco Research Laboratory, but his colleagues nevertheless proceeded to invent their way toward a nose cone. Their concern lay at the tip of a rocket, but their point of departure came at the other end. They were accustomed to steering their missiles by using jet vanes, large tabs of heat-resistant material that dipped into the exhaust. These vanes then deflected the exhaust, changing the direction of flight. Von Braun’s associates thus had long experience in testing materials by placing them within the blast of a rocket engine. This practice carried over to their early nose-cone work.44

The V-2 had used vanes of graphite. In November 1952, these experimenters began testing new materials, including ceramics. They began working with nose-cone models late in 1953. In July 1954 they tested their first material of a new type: a reinforced plastic, initially a hard melamine resin strengthened with glass fiber. New test facilities entered service in June 1955, including a rocket engine with thrust of 20,000 pounds and a jet diameter of 14.5 inches.45

The pace accelerated after November of that year, as Von Braun won approval from Defense Secretary Charles Wilson to proceed with development of his next missile. This was Jupiter, with a range of 1,500 nautical miles.46 It thus was markedly less demanding than Atlas in its thermal-protection requirements, for it was to re-enter the atmosphere at Mach 15 rather than Mach 20 and higher. Even so, the Huntsville group stepped up its work by introducing new facilities. These included a rocket engine of 135,000 pounds of thrust for use in nose-cone studies.

The effort covered a full range of thermal-protection possibilities. Transpiration cooling, for one, raised unpleasant new issues. Convair fabricated test nose cones with water tanks that had porous front walls. The pressure in a tank could be adjusted to deliver the largest flow of steam when the heat flux was greatest. But this technique led to hot spots, where inadequate flow brought excessive temperatures. Transpiration thus fell by the wayside.

Heat sink drew attention, with graphite holding promise for a time. It was light in weight and could withstand high temperatures. But it also was a good heat conductor, which raised problems in attaching it to a substructure. Blocks of graphite also contained voids and other defects, which made them unusable.

By contrast, hot structures held promise. Researchers crafted lightweight shingles of tungsten and molybdenum backed by layers of polished corrugated steel and aluminum, to provide thermal insulation along with structural support. When the shingles topped 3,250°F, the innermost layer stayed cool and remained below 200°F. Clearly, hot structures had a future.

The initial work with a reinforced plastic, in 1954, led to many more tests of similar materials. Engineers tested such resins as silicones, phenolics, melamines, Teflon, epoxies, polyesters, and synthetic rubbers. Filler materials included soft glass, fibers of silicon dioxide and aluminum silicate, mica, quartz, asbestos, nylon, graphite, beryllium, beryllium oxide, and cotton.


Jupiter missile with ablative nose cone. (U.S. Army)

Fiber-reinforced polymers proved to hold particular merit. The studies focused on plastics reinforced with glass fiber, with a commercially available material, Micarta 259-2, demonstrating noteworthy promise. The Army stayed with this choice as it moved toward flight test of subscale nose cones in 1957. The first one used Micarta 259-2 for the plastic, with a glass cloth as the filler.47

In this fashion the Army ran well ahead of the Air Force. Yet the Huntsville work did not influence the Atlas effort, and the reasons ran deeper than interservice rivalry. The relevance of that work was open to question because Atlas faced a far more demanding re-entry environment. In addition, Jupiter faced competition from Thor, an Air Force missile of similar range. It was highly likely that only one would enter production, so Air Force designers could not merely become apt pupils of the Army. They had to do their own work, seeking independent approaches and trying to do better than Von Braun.

Amid this independence, George Sutton came to the re-entry problem. He had received his Ph.D. at Caltech in 1955 at age 27, jointly in mechanical engineering and physics. His only experience within the aerospace industry had been a summer job at the Jet Propulsion Laboratory, but he jumped into re-entry with both feet after taking his degree. He joined Lockheed and became closely involved in studying materials suitable for thermal protection. Then he was recruited by General Electric, leaving sunny California and arriving in snowy Schenectady, New York, early in 1956.

Heat sinks for Atlas were ascendant at that time, with Lester Lees’s heat-transfer theory appearing to give an adequate account of the thermal environment. Sutton was aware of the issues and wrote a paper on heat-sink nose cones, but his work soon led him in a different direction. There was interest in storing data within a small capsule that would ride with a test nose cone and that might survive re-entry if the main cone were to be lost. This capsule needed its own thermal protection, and it was important to achieve light weight. Hence it could not use a heat sink. Sutton’s management gave him a budget of $75,000 to try to find something more suitable.48

This led him to re-examine the candidate materials that he had studied at Lockheed. He also learned that other GE engineers were working on a related problem. They had built liquid-propellant rocket engines for the Army’s Hermes program, with these missiles being steered by jet vanes in the fashion of the V-2 and Redstone. The vanes were made from alternating layers of glass cloth and thermosetting resins. They had become standard equipment on the Hermes A-3, but some of them failed due to delamination. Sutton considered how to avoid this:

“I theorized that heating would char the resin into a carbonaceous mass of relatively low strength. The role of the fibers should be to hold the carbonaceous char to virgin, unheated substrate. Here, low thermal conductivity was essential to minimize the distance from the hot, exposed surface to the cool substrate, to minimize the mass of material that had to be held by the fibers as well as the degradation of the fibers. The char itself would eventually either be vaporized or be oxidized either by boundary layer oxygen or by CO2 in the boundary layer. The fibers would either melt or also vaporize. The question was how to fabricate the material so that the fibers interlocked the resin, which was the opposite design philosophy to existing laminates in which the resin interlocks the fibers. I believed that a solution might be the use of short fibers, randomly oriented in a soup of resin, which was then molded into the desired shape. I then began to plan the experiments to test this hypothesis.”49

Sutton had no pipeline to Huntsville, but his plan echoed that of Von Braun. He proceeded to fabricate small model nose cones from candidate fiber-reinforced plastics, planning to test them by immersion in the exhaust of a rocket engine. GE was developing an engine for the first stage of the Vanguard program; prototypes were at hand, along with test stands. Sutton arranged for an engine to produce an exhaust that contained free oxygen to achieve oxidation of the carbon-rich char.

He used two resins along with five types of fiber reinforcement. The best performance came with the use of Refrasil reinforcement, a silicon-dioxide fiber. Both resins yielded composites with a heat capacity of 6,300 BTU per pound or greater. This was astonishing. The materials had a density of 1.6 times that of water. Yet they absorbed more than six times as much heat, pound for pound, as boiling water!50
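The comparison with boiling water is easy to reproduce. As a rough check, the sketch below divides the measured 6,300 BTU per pound by the latent heat of vaporization of water; the latent-heat figure of about 970 BTU per pound is a standard handbook value assumed here, not one given in the text.

```python
# Heat absorbed per pound, in BTU.
composite_btu_per_lb = 6300        # measured value for Sutton's composites (from the text)
water_latent_btu_per_lb = 970      # heat to boil off one pound of water (assumed handbook value)

ratio = composite_btu_per_lb / water_latent_btu_per_lb
print(f"ratio: {ratio:.1f}")       # roughly 6.5, i.e. "more than six times"
```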

Here was a new form of thermal protection: ablation. An ablative heat shield could absorb energy through latent heat, when melting or evaporating, and through sensible heat, with its temperature rise. In addition, an outward flow of ablating volatiles thickened the boundary layer, which diminished the heat flux. Ablation promised all the advantages of transpiration cooling, within a system that could be considerably lighter and yet more capable.51
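The energy accounting behind that promise can be sketched in a few lines. The numbers below are purely illustrative, not measured values for any real ablator; they simply show how the three mechanisms named above, sensible heat, latent heat, and boundary-layer blockage, add up to an effective heat of ablation per pound of material lost.

```python
def effective_heat_of_ablation(cp_btu_lb_f, delta_t_f, latent_btu_lb, blockage_btu_lb):
    """Sum the mechanisms by which each pound of ablator absorbs or blocks heat."""
    sensible = cp_btu_lb_f * delta_t_f   # warming the material to its ablation temperature
    return sensible + latent_btu_lb + blockage_btu_lb

# Illustrative inputs (assumed for the sketch, not from the text):
q_eff = effective_heat_of_ablation(
    cp_btu_lb_f=0.3,       # specific heat of a charring plastic, BTU per lb per deg F
    delta_t_f=4500,        # temperature rise from ambient to the ablating surface
    latent_btu_lb=1000,    # decomposition and vaporization of the resin
    blockage_btu_lb=2500,  # heat kept out by volatiles thickening the boundary layer
)
print(f"{q_eff:.0f} BTU per pound")
```

The blockage term is what set ablation apart from a simple heat sink: the outflowing gases reduced the heat that ever reached the surface, so the effective figure could exceed anything the material could absorb by heat capacity alone.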

Sutton presented his experimental results in June 1957 at a technical conference held at the firm of Ramo-Wooldridge in Los Angeles. This company was providing technical support to the Air Force’s Atlas program management. Following this talk, George Solomon, one of that firm’s leading scientists, rose to his feet and stated that ablation was the solution to the problem of thermal protection.

The Army thought so too. It had invented ablation on its own, considerably earlier and amid far deeper investigation. Indeed, at the moment when Sutton gave his talk, Von Braun was only two months away from a successful flight test of a subscale nose cone. People might argue whether the Soviets were ahead of the United States in missiles, but there was no doubt that the Army was ahead of the Air Force in nose cones. Jupiter was already slated for an ablative cone, but Thor was to use heat sink, as was the intercontinental Atlas.

Already, though, new information was available concerning transition from laminar to turbulent flow over a nose cone. Turbulent heating would be far more severe, and these findings showed that copper, the best heat-sink material, was inadequate for an ICBM. Materials testing now came to the forefront, and this work needed new facilities. A rocket-engine exhaust could reproduce the rate of heat transfer, but in Kantrowitz’s words, “a rocket is not hot enough.”52 It could not duplicate the temperatures of re-entry.

A shock tube indeed gave a suitably hot flow, but its duration of less than a millisecond was hopelessly inadequate for testing ablative materials. Investigators needed a new type of wind tunnel that could produce a continuous flow, but at temperatures far greater than were available. Fortunately, such an installation did not have to reproduce the hypersonic Mach numbers of re-entry; it sufficed to duplicate the necessary temperatures within the flow. The instrument that did this was the arc tunnel.

It heated the air with an electric arc, which amounted to a man-made stroke of lightning. Such arcs were in routine use in welding; Avco’s Thomas Brogan noted that they reached 6500 K, “a temperature which would exist at the [tip] of a blunt body flying at 22,000 feet per second.” In seeking to develop an arc-heated wind tunnel, a point of departure lay in West Germany, where researchers had built a “plasma jet.”53

This device swirled water around a long carbon rod that served as the cathode. The motion of the water helped to keep the arc focused on the anode, which was also of carbon and which held a small nozzle. The arc produced its plasma as a mix of very hot steam and carbon vapor, which was ejected through the nozzle. This invention achieved pressures of 50 atmospheres, with the plasma temperature at the nozzle exit being measured at 8000 K. The carbon cathode eroded relatively slowly, while the water supply was easily refilled. The plasma jet therefore could operate for fairly long times.54

At NACA-Langley, an experimental arc tunnel went into operation in May 1957. It differed from the German plasma jet by using an electric arc to heat a flow of air, nitrogen, or helium. With a test section measuring only seven millimeters square, it was a proof-of-principle instrument rather than a working facility. Still, its plasma temperatures ranged from 5800 to 7000 K, which was well beyond the reach of a conventional hypersonic wind tunnel.55

At Avco, Kantrowitz paid attention when he heard the word “plasma.” He had been studying such ionized gases ever since he had tried to invent controlled fusion. His first arc tunnel was rated only at 130 kilowatts, a limited power level that restricted the simulated altitude to between 165,000 and 210,000 feet. Its hot plasma flowed from its nozzle at Mach 3.4, but when this flow stagnated against samples of quartz, the temperature corresponded to flight velocities as high as 21,000 feet per second. Tests showed good agreement between theory and experiment, with measured surface temperatures of 2700 K falling within three percent of calculated values. The investigators concluded that opaque quartz “will effectively absorb about 4000 BTU per pound for ICBM and [intermediate-range] trajectories.”56

In Huntsville, Von Braun’s colleagues found their way as well to the arc tunnel. They also learned of the initial work in Germany. In addition, the small California firm of Plasmadyne acquired such a device and then performed experiments under contract to the Army. In 1958 Rolf Buhler, a company scientist, discovered that when he placed a blunt rod of graphite in the flow, the rod became pointed. Other investigators attributed this result to the presence of a cool core in the arc-heated jet, but Sutton succeeded in deriving this observed shape from theory.

This immediately raised the prospect of nose cones that after all might be sharply pointed rather than blunt. Such re-entry bodies would not slow down in the upper atmosphere, perhaps making themselves tempting targets for antiballistic missiles, but would continue to fall rapidly. Graphite still had the inconvenient features noted previously, but a new material, pyrolytic graphite, promised to ease the problem of its high thermal conductivity.

Pyrolytic graphite was made by chemical vapor deposition. One placed a temperature-resistant form in an atmosphere of gaseous hydrocarbons. The hot surface broke up the gas molecules, a process known as pyrolysis, and left carbon on the surface. The thermal conductivity then was considerably lower in a direction normal to the surface than when parallel to it. The low value of this conductivity, in the normal direction, made such graphite attractive.57

Having whetted its appetite with the 130-kilowatt facility, Avco went on to build one that was two orders of magnitude more powerful. It used a 15-megawatt power supply and obtained this from a bank of 2,000 twelve-volt truck batteries, with motor-generators to charge them. They provided direct current for run times of up to a minute and could be recharged in an hour.58

With this, Avco added the high-power arc tunnel to the existing array of hypersonic flow facilities. These included aerodynamic wind tunnels such as Becker’s, along with plasma jets and shock tubes. And while the array of ground installations proliferated, the ICBM program was moving toward a different kind of test: full-scale flight.

Gemini and Apollo

An Apollo spacecraft, returning from the Moon, had twice the kinetic energy of a flight in low orbit and an aerodynamic environment that was nearly three times as severe. Its trajectory also had to thread a needle in its accuracy. Too steep a return would subject its astronauts to excessive g-forces. Too shallow a re-entry meant that it would show insufficient loss of speed within the upper atmosphere and would fly back into space, to make a final entry and then land at an unplanned location. For a simple ballistic trajectory, this “corridor” was as little as seven miles wide, from top to bottom.59
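The "twice the kinetic energy" figure follows from the entry velocities alone, since kinetic energy scales with the square of speed. In the sketch below, the speeds (about 36,000 feet per second for lunar return, about 25,500 for return from low orbit) are typical round numbers assumed for illustration, not figures from the text.

```python
# Kinetic energy per unit mass scales as v**2, so the comparison needs no mass.
v_lunar_return = 36_000   # ft/s, approximate entry speed returning from the Moon (assumed)
v_orbital = 25_500        # ft/s, approximate entry speed from low Earth orbit (assumed)

energy_ratio = (v_lunar_return / v_orbital) ** 2
print(f"energy ratio: {energy_ratio:.2f}")  # close to 2, as the text states
```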

At the outset, these issues raised two problems that were to be addressed in flight test. The heat shield had to be qualified, in tests that resembled those of the X-17 but took place at much higher velocity. In addition, it was necessary to show that a re-entering spacecraft could maneuver with some precision. It was vital to broaden the corridor, and the only way to do this was to use lift. This meant demonstrating successful maneuvers that had to be planned in advance, using data from tests in ground facilities at near-orbital speeds, when such facilities were most prone to error.

Apollo’s Command Module, which was to execute the re-entry, lacked wings. Still, spacecraft of this general type could show lift-to-drag ratios of 0.1 or 0.2 by flying at a nonzero angle of attack, thereby tilting the heat shield and turning it into a lifting surface. Such values were far below those achievable with wings, but they brought useful flexibility during re-entry by permitting maneuver, thereby achieving a more accurate splashdown.

As early as 1958, Faget and his colleagues had noted three methods for trimming a capsule to a nonzero angle of attack. Continuous thrust from a reaction-control system could do this, tilting the craft from its equilibrium attitude. A drag flap could do it as well, by producing a modest amount of additional air resistance on one side of the vehicle. The simplest method required no onboard mechanism that might fail in flight and expended no reaction-control propellant. It called for nothing more than a nonsymmetrical distribution of weight within the spacecraft, creating an offset in the location of the center of gravity. During re-entry, this offset would trim the craft to a tilted attitude automatically, because of the extra weight on one side. An astronaut could steer his capsule by using attitude control to roll it about its long axis, thereby controlling the orientation of the lift vector.60
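The steering scheme just described—a lift of fixed magnitude, aimed only by rolling the capsule—can be sketched in a few lines. The function name and the numerical values below are illustrative assumptions, not from the source:

```python
import math

def lift_components(l_over_d, drag, bank_angle_deg):
    """Resolve a capsule's fixed trim lift into vertical and lateral
    components for a given roll (bank) angle about the velocity vector.
    The capsule cannot change its lift magnitude in flight; rolling to
    reorient the lift vector is its only means of steering."""
    lift = l_over_d * drag                # lift magnitude fixed by the c.g. offset trim
    phi = math.radians(bank_angle_deg)
    vertical = lift * math.cos(phi)       # controls sink rate, hence downrange
    lateral = lift * math.sin(phi)        # controls cross-range
    return vertical, lateral

# Lift-up (0 deg) stretches the trajectory; lift-down (180 deg) steepens it.
print(lift_components(0.2, 1000.0, 0))    # all lift vertical, none lateral
print(lift_components(0.2, 1000.0, 90))   # all lift lateral
```

Rolling continuously, so that the lateral component averages out, gives the effect of reduced lift—a further refinement that mission planners could exploit.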

This center-of-gravity offset went into the Gemini capsules that followed those of Project Mercury. The first manned Gemini flight carried the astronauts Virgil “Gus” Grissom and John Young on a three-orbit mission in March 1965. Following re-entry, they splashed down 60 miles short of the carrier USS Intrepid, which was on the aim point. This raised questions as to the adequacy of the preflight hypersonic wind-tunnel tests that had provided estimates of the spacecraft L/D used in mission planning.

The pertinent data had come from only two facilities. The Langley 11-inch tunnel had given points near Mach 7, while an industrial hotshot installation covered Mach 15 to 22, which was close to orbital speed. The latter facility lacked instruments of adequate precision and had produced data points that showed a large scatter. Researchers had averaged and curve-fit the measurements, but it was clear that this work had introduced inaccuracies.61

During that year, flight data became available from the Grissom-Young mission and from three others, yielding direct measurements of flight angle of attack and L/D. To resolve the discrepancies, investigators at the Air Force’s Arnold Engineering Development Center undertook further studies using two additional facilities. Tunnel F, a hotshot, had a 100-inch-diameter test section and reached Mach 20, heating nitrogen with an electric arc and achieving run times of 0.05 to 0.1 seconds. Tunnel L was a low-density, continuous-flow installation that also used arc-heated nitrogen. The Langley 11-inch data was viewed as valid and was retained in the reanalysis.

This work gave an opportunity to benchmark data from continuous-flow and hotshot tunnels against flight data, at very high Mach numbers. Size did not matter, for the big Tunnel F accommodated a model at one-fifteenth scale that incorporated much detail, whereas Tunnel L used models at scales of 1/120 and 1/180, the latter being nearly small enough to fit on a tie tack. Even so, the flight data points gave a good fit to curves derived using both tunnels. Billy Griffith, supervising the tests, concluded: “Generally, excellent agreement exists” between data from these sources.

The preflight data had brought estimated values of L/D that were too high by 60 percent. This led to a specification for the re-entry trim angle that proved to be off by 4.7 degrees, which produced the miss at splashdown. Julius Lukasiewicz, longtime head of the Von Karman Gas Dynamics Facility at AEDC, later added that if AEDC data had been available prior to the Grissom-Young flight, “the impact point would have been predicted to within ±10 miles.”62

The same need for good data reappeared during Apollo. The first of its unmanned flight missions took place during 1966, atop the Saturn I-B. The initial launch, designated AS-201, flew suborbitally and covered 5,000 miles. A failure in the reaction controls produced uncontrolled lift during entry, but the craft splashed down 38 miles from its recovery ship. AS-202, six months later, was also suborbital. It executed a proper lifting entry—and undershot its designated aim point by 205 miles. This showed that its L/D had also been mispredicted.63

Estimates of the Apollo L/D had relied largely on experimental data taken during 1962 at Cornell Aeronautical Laboratory at Mach 15.8 and at AEDC at Mach 18.7. Again these measurements lacked accuracy, and once more Billy Griffith of AEDC stepped forward to direct a comprehensive set of new measurements. In addition to Tunnels F and L, used previously, the new work used Tunnels A, B, and C, which with the other facilities covered a range from Mach 3 to 20. To account for effects due to model supports in the wind tunnels, investigators also used a gun range that fired small models as free-flight projectiles, at Mach 6.0 to 8.5.

The 1962 estimates of Apollo L/D proved to be off by 20 percent, with the trim angle being in error by 3 degrees.64 As with the Gemini data, these results showed anew that one could not obtain reliable data by working with a limited range of facilities. But when investigators broadened their reach to use more facilities, and sought accuracy through such methods as elimination of model-support errors, they indeed obtained results that matched flight test. This happened twice, with both Gemini and Apollo, with researchers finally getting the accurate estimates they needed.

These studies dealt with aerodynamic data at hypervelocity. In a separate series, other flights sought data on the re-entry environment that could narrow the range of acceptable theories of hypervelocity heating. Two such launches constituted Project Fire, which flew spacecraft that were approximately two feet across and had the general shape of Apollo’s Command Module. Three layers of beryllium served as calorimeters, with measured temperature rises corresponding to total absorbed heat. Three layers of phenolic-asbestos alternated with those layers to provide thermal protection. Windows of fused quartz, which is both heat-resistant and transparent over a broad range of optical wavelengths, permitted radiometers to directly observe the heat flux due to radiation, at selected locations. These included the nose, where heating was most intense.
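The calorimeters worked on the basic relation between absorbed heat and temperature rise, Q = m·c·ΔT. A minimal sketch follows; beryllium’s specific heat is a real material property, but the mass and temperature figures are illustrative assumptions:

```python
# Calorimetry: total absorbed heat inferred from a slab's temperature rise,
# using Q = m * c * dT.  Beryllium's unusually high specific heat
# (roughly 0.44 BTU per pound per deg F near room temperature) let a thin
# layer soak up substantial heat before melting.
def absorbed_heat(mass_lb, specific_heat_btu_per_lb_f, delta_t_f):
    """Heat absorbed (BTU) by a slab of given mass (lb) undergoing a
    temperature rise (deg F), assuming uniform heating and no losses."""
    return mass_lb * specific_heat_btu_per_lb_f * delta_t_f

# Illustrative figures only: a 1-lb beryllium slab heated by 500 deg F.
print(absorbed_heat(1.0, 0.44, 500.0))  # about 220 BTU
```

In practice the layers were instrumented with thermocouples, and the no-losses assumption held only until the outer surface began to melt—which is why the flight controller ejected each spent calorimeter in turn.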

The Fire spacecraft rode atop Atlas boosters, with flights taking place in April 1964 and May 1965. Following cutoff of the Atlas, an Antares solid-fuel booster, modified from the standard third stage of the Scout booster, gave the craft an additional 17,000 feet per second and propelled it into the atmosphere at an angle of nearly 15 degrees, considerably steeper than the range of angles that were acceptable for an Apollo re-entry. This increased the rate of heating and enhanced the contribution from radiation. Each beryllium calorimeter gave useful data until its outer surface began to melt, which took only 2.5 seconds as the heating approached its maximum. When decelerations due to drag reached specified levels, an onboard controller ejected the remnants of each calorimeter in turn, along with its underlying layer of phenolic-asbestos. Because these layers served as insulation, each ejection exposed a cool beryllium surface as well as a clean set of quartz windows.

Fire 1 entered the atmosphere at 38,000 feet per second, markedly faster than the 35,000 feet per second of operational Apollo missions. Existing theories gave a range in estimates of total peak heating rate from 790 to 1,200 BTU per square foot-second. The returned data fell neatly in the middle of this range. Fire 2 did much the same, re-entering at 37,250 feet per second and giving a measured peak heating rate of just over 1,000 BTU per square foot-second. Radiative heating indeed was significant, amounting to some 40 percent of this total. But the measured values, obtained by radiometer, were at or below the minimum estimates obtained using existing theories.65

This work showed that radiative heating was no source of concern. It also validated the estimates of total heating that had been used in designing the Apollo heat shield. A separate flight test, in August 1964, placed a small vehicle—the R-4—atop a five-stage version of the Scout. As with the X-17, this fifth stage ignited relatively late in the flight, accelerating the test vehicle to its peak speed when it was deep in the upper atmosphere. This speed, 28,000 feet per second, was considerably below that of an Apollo entry. But the increased air density subjected this craft to a particularly high heating rate.66

This was a materials-testing flight. The firm of Avco had been developing ablators of lower and lower weight and had come up with its 5026-39 series. They used epoxy-novolac as the resin, with phenolic microballoons added to the silica-fiber filler of an earlier series. Used with a structural honeycomb made of phenolic reinforced with fiberglass, it cut the density to 35 pounds per cubic foot and, with subsequent improvements, to as little as 31 pounds per cubic foot. This was less than three-tenths the density of the ancestral phenolic-fiberglass of Mercury—which merely orbited the Earth and did not fly back from the Moon.67

The new material had the designation Avcoat 5026-39G. The new flight sought to qualify it under its most severe design conditions, corresponding to re-entry at the bottom of the corridor with deceleration of 20 g. The peak aerodynamic load occurred at Mach 16.4 and 102,000 feet. Observed ablation rates proved to be much higher than expected. In fact, the ablative heat shield eroded away completely! This caused serious concern, for if that were to happen during a manned mission, the spacecraft would burn up in the atmosphere and would kill its astronauts.68

The relatively high air pressure had subjected the heat shield to dynamic pressures three times higher than those of an Apollo re-entry. Those intense dynamic pressures corresponded to a hypersonic wind that had blown away the ablative char soon after it had formed. This char was important; it protected the underlying virgin ablator, and when it was severely thinned or removed, the erosion rate on the test heat shield increased markedly.

Much the same happened in October 1965, when another subscale heat shield underwent flight test atop another multistage solid rocket, the Pacemaker, that accelerated its test vehicle to Mach 10.6 at 67,500 feet. These results showed that failure to duplicate the true re-entry environment in flight test could introduce unwarranted concern, causing what analysts James Pavlosky and Leslie St. Leger described as “unnecessary anxiety and work.”69

An additional Project Fire flight could indeed have qualified the heat shield under fully realistic re-entry conditions, but NASA officials had gained confidence through their ability to understand the quasi-failure of the R-4. Rather than conduct further ad hoc heat-shield flight tests, they chose to merge its qualification with unmanned flights of complete Apollo spacecraft. Following three shots aboard the Saturn I-B that went no further than Earth orbit, and which included AS-201 and -202, the next flight lifted off in November 1967. It used a Saturn V to simulate a true lunar return.

No larger rocket had ever flown. This one was immense, standing 36 stories tall. The anchorman Walter Cronkite gave commentary from a nearby CBS News studio, and as this behemoth thundered upward atop a dazzling pillar of yellow-white flame, Cronkite shouted, “Oh, my God, our building is shaking! Part of the roof has come in here!” The roar was as loud as a major volcanic eruption. People saw the ascent in Jacksonville, 150 miles away.70

Heat-shield qualification stood as a major goal. The upper stages operated in sequence, thrusting the spacecraft to an apogee of 11,242 miles. It spent several hours coasting, oriented with the heat shield in the cold soak of shadow to achieve the largest possible thermal gradient around the shield. Re-ignition of the main engine pushed the spacecraft into re-entry at 35,220 feet per second relative to the atmosphere of the rotating Earth. Flying with an effective L/D of 0.365, it came down 10 miles from the aim point and only six miles from the recovery ship, close enough for news photos that showed a capsule in the water with one of its chutes still billowing.

The heat shield now was ready for the Moon, for it had survived a peak heating rate of 425 BTU per square foot-second and a total heat load of 37,522 BTU per pound. Operational lunar flights imposed loads and heating rates that were markedly less demanding. In the words of Pavlosky and St. Leger, “the thermal protection subsystem was overdesigned.”71

A 1968 review took something of an offhand view of what once had been seen as an extraordinarily difficult problem. This report stated that thermal performance of ablative material “is one of the lesser criteria in developing a TPS.” Significant changes had been made to enhance access for inspection, relief of thermal stress, manufacturability, performance near windows and other penetrations, and control of the center of gravity to achieve design values of L/D, “but never to obtain better thermal performance of the basic ablator.”72

Thus, on the eve of the first lunar landing, specialists in hypersonics could look at a technology of re-entry whose prospects had widened significantly. A suite of materials now existed that were suitable for re-entry from orbit, having high emissivity to keep the temperature down, along with low thermal conductivity to prevent overheating during the prolonged heat soak. Experience had shown how careful research in ground facilities could produce reliable results and could permit maneuvering entry with accuracy in aim. This had been proven to be feasible for missions as demanding as lunar return.

Dyna-Soar had not flown, but it introduced metallic hot structures that brought the prospect of reusability. It also introduced wings for high L/D and particular freedom during maneuver. Indeed, by 1970 there was only one major frontier in re-entry: the development of a lightweight heat shield that was simpler than the hot structure of Dyna-Soar and was reusable. This topic was held over for the following decade, amid the development of the space shuttle.
