
NACA-Langley and John Becker

During the war the Germans failed to match the Allies in production of airplanes, but they were well ahead in technical design. This was particularly true in the important area of jet propulsion. They fielded an operational jet fighter, the Me-262, and while the Yankees were well along in developing the Lockheed P-80 as a riposte, the war ended before any of those jets could see combat. Nor was the Me-262 a last-minute work of desperation. It was a true air weapon that showed better speed and acceleration than the improved P-80A in flight test, while demonstrating an equal rate of climb.28 Albert Speer, Hitler’s minister of armaments, asserted in his autobiographical Inside the Third Reich (1970) that by emphasizing production of such fighters and by deploying the Wasserfall antiaircraft missile that was in development, the Nazis “would have beaten back the Western Allies’ air offensive against our industry from the spring of 1944 on.”29 The Germans thus might have prolonged the war until the advent of nuclear weapons.

Wartime America never built anything resembling the big Mach 4.4 wind tunnels at Peenemunde, but its researchers at least constructed facilities that could compare with the one at Aachen. The American installations did not achieve speeds to match Aachen’s Mach 3.3, but they had larger test sections. Arthur Kantrowitz, a young physicist from Columbia University who was working at Langley, built a nine-inch tunnel that reached Mach 2.5 when it entered operation in 1942. (Aachen’s had been four inches.) Across the country, at NACA’s Ames Aeronautical Laboratory, two other wind tunnels entered service during 1945. Their test sections measured one by three feet, and their flow speeds reached Mach 2.2.30

The Navy also was active. It provided $4.5 million for the nation’s first really large supersonic tunnel, with a test section six feet square. Built at NACA-Ames and operating at Mach 1.3 to 1.8, this installation used 60,000 horsepower and entered service soon after the war.31 The Navy also set up its Ordnance Aerophysics Laboratory in Daingerfield, Texas, adjacent to the Lone Star Steel Company, which had air compressors that this firm made available. The supersonic tunnel that resulted covered a range of Mach 1.25 to 2.75, with a test section of 19 by 27.5 inches. It became operational in June 1946, alongside a similar installation that served for high-speed engine tests.32

Theorists complemented the wind-tunnel builders. In April 1947 Theodore von Karman, a professor at Caltech who was widely viewed as the dean of American aerodynamicists, gave a review and survey of supersonic flow theory in an address to the Institute of the Aeronautical Sciences. His lecture, published three months later in the Journal of the Aeronautical Sciences, emphasized that supersonic flow theory now was mature and ready for general use. Von Karman pointed to a plethora of available methods and solutions that not only gave means to attack a number of important design problems but also gave independent approaches that could permit cross-checks on proposed solutions.

John Stack, a leading Langley aerodynamicist, noted that Prandtl had given a similarly broad overview of subsonic aerodynamics a quarter-century earlier. Stack declared, “Just as Prandtl’s famous paper outlined the direction for the engineer in the development of subsonic aircraft, Dr. von Karman’s lecture outlines the direction for the engineer in the development of supersonic aircraft.”33

Yet the United States had no facility, and certainly no large one, that could reach Mach 4.4. As a stopgap, the nation got what it wanted by seizing German wind tunnels. A Mach 4.4 tunnel was shipped to the Naval Ordnance Laboratory in White Oak, Maryland. Its investigators had fabricated a Mach 5.18 nozzle and had conducted initial tests in January 1945. In 1948, in Maryland, this capability became routine.34 Still, if the U.S. was to advance beyond the Germans and develop the true hypersonic capability that Germany had failed to achieve, the nation would have to rely on independent research.

The man who pursued this research, and who built America’s first hypersonic tunnel, was Langley’s John Becker. He had been at that center since 1936; during the latter part of the war he was assistant chief of Stack’s Compressibility Research Division. He specifically was in charge of Langley’s 16-Foot High-Speed Tunnel, which had fought its war by investigating cooling problems in aircraft motors as well as the design of propellers. This facility contributed particularly to tests of the B-50 bomber and to the aerodynamic shapes of the first atomic bombs. It also assisted development of the Pratt & Whitney R-2800 Double Wasp, a widely used piston engine that powered several important wartime fighter planes, along with the DC-6 airliner and the C-69 transport, the military version of Lockheed’s Constellation.35

It was quite a jump from piston-powered warbirds to hypersonics, but Becker willingly made the leap. The V-2, flying at Mach 5, gave him his justification. In a memo to Langley’s chief of research, dated 3 August 1945, Becker noted that planned facilities were to reach no higher than Mach 3. He declared that this was inadequate: “When it is considered that all of these tunnels will be used, to a large extent, to develop supersonic missiles and projectiles of types which have already been operated at Mach numbers as high as 5.0, it appears that there is a definite need for equipment capable of higher test Mach numbers.”

Within this memo, he outlined a design concept for “a supersonic tunnel having a test section four-foot square and a maximum test Mach number of 7.0.” It was to achieve continuous flow, being operated by a commercially available compressor of 2,400 horsepower. To start the flow, the facility was to hold air within a tank that was compressed to seven atmospheres. This air was to pass through the wind tunnel before exhausting into a vacuum tank. With pressure upstream pushing the flow and with the evacuated tank pulling it, airspeeds within the test section would be high indeed. Once the flow was started, the compressor would maintain it.

A preliminary estimate indicated that this facility would cost $350,000. This was no mean sum, and Becker’s memo proposed to lay groundwork by first building a model of the big tunnel, with a test section only one foot square. He recommended that this subscale facility should “be constructed and tested before proceeding with a four-foot-square tunnel.” He gave an itemized cost estimate that came to $39,550, including $10,000 for installation and $6,000 for contingency.

Becker’s memo ended in formal fashion: “Approval is requested to proceed with the design and construction of a model supersonic tunnel having a one-foot-square test section at Mach number 7.0. If successful, this model tunnel would not only provide data for the design of economical high Mach number supersonic wind tunnels, but would itself be a very useful research tool.”36

On 6 August, three days after Becker wrote this memo, the potential usefulness of this tool increased enormously. On that day, an atomic bomb destroyed Hiroshima. With this, it now took only modest imagination to envision nuclear-tipped V-2s as weapons of the future. The standard V-2 had carried only a one-ton conventional warhead and lacked both range and accuracy. It nevertheless had been technically impressive, particularly since there was no way to shoot it down. But an advanced version with an atomic warhead would be far more formidable.

John Stack strongly supported Becker’s proposal, which soon reached the desk of George Lewis, NACA’s Director of Aeronautical Research. Lewis worked at NACA’s Washington Headquarters but made frequent visits to Langley. Stack discussed the proposal with Lewis in the course of such a visit, and Lewis said, “Let’s do it.”

Just then, though, there was little money for new projects. NACA faced a postwar budget cut, which took its total appropriation from $40.9 million in FY 1945 to $24 million in FY 1946. Lewis therefore said to Stack, “John, you know I’m a sucker for a new idea, but don’t call it a wind tunnel because I’ll be in trouble with having to raise money in a formal way. That will necessitate Congressional review and approval. Call it a research project.” Lewis designated it as Project 506 and obtained approval from NACA’s Washington office on 18 December.37

A month later, in January 1946, Becker raised new issues in a memo to Stack. He was quite concerned that the high Mach would lead to so low a temperature that air in the flow would liquefy. To prevent this, he called for heating the air, declaring that “a temperature of 600°F in the pressure tank is essential.” He expected to achieve this by using “a small electrical heater.”

The pressure in that tank was to be considerably higher than in his plans of August. The tank would hold a pressure of 100 atmospheres. Instead of merely starting the flow, with a powered compressor sustaining it in continuous operation, this pressure tank now was to hold enough air for operating times of 40 seconds. This would resolve uncertainties in the technical requirements for continuous operation. Continuous flows were still on the agenda but not for the immediate future. Instead, this wind tunnel was to operate as a blowdown facility.

Here, in outline, was a description of the installation as finally built. Its test section was 11 inches square. Its pressure tank held 50 atmospheres. It never received a compressor system for continuous flow, operating throughout its life entirely as a blowdown wind tunnel. But by heating its air, it indeed operated routinely at speeds close to Mach 7.38

Taking the name of 11-Inch Hypersonic Tunnel, it operated successfully for the first time on 26 November 1947. It did not heat its compressed air directly within the pressure tank, relying instead on an electric resistance heater as a separate component. This heater raised the air to temperatures as high as 900°F, eliminating air liquefaction in the test section with enough margin for Mach 8. Specialized experiments showed clearly that condensation took place when the initial temperature was not high enough to prevent it. Small particles promoted condensation by serving as nuclei for the formation of droplets. Becker suggested that such particles could have formed through the freezing of CO2, which is naturally present in air. Subsequent research confirmed this conjecture.39


The facility showed early problems as well as a long-term problem. The early difficulties centered on the air heater, which showed poor internal heat conduction, requiring as much as five hours to reach a suitably uniform temperature distribution. In addition, copper tubes within the heater produced minute particles of copper oxide, due to oxidation of this metal at high temperature. These particles, blown within the hypersonic airstream, damaged test models and instruments. Becker attacked the problem of slow warmup by circulating hot air through the heater. To eliminate the problem of oxidation, he filled the heater with nitrogen while it was warming up.40

A more recalcitrant difficulty arose because the hot airflow, entering the nozzle, heated it and caused it to undergo thermal expansion. The change in its dimensions was not large, but the nozzle design was highly sensitive to small changes, with this expansion causing the dynamic pressure in the airflow to vary by up to 13 percent in the course of a run. Run times were as long as 90 seconds, and because of this, data taken at the beginning of a test did not agree with similar data recorded a minute later. Becker addressed this by fixing the angle of attack of each test model. He did not permit the angle to vary during a run, even though variation of this angle would have yielded more data. He also made measurements at a fixed time during each run.41

The wind tunnel itself represented an important object for research. No similar facility had ever been built in America, and it was necessary to learn how to use it most effectively. Nozzle design represented an early topic for experimental study. At Mach 7, according to standard tables, the nozzle had to expand by a ratio of 104.1 to 1. This nozzle resembled that of a rocket engine. With an axisymmetric design, a throat of one-inch diameter would have opened into a channel having a diameter slightly greater than 10 inches. However, nozzles for Becker’s facility proved difficult to develop.

Conventional practice, carried over from supersonic wind tunnels, called for a two-dimensional nozzle. It featured a throat in the form of a narrow slit, having the full width of the main channel and opening onto that channel. However, for flow at Mach 7, this slit was to be only about 0.1 inch high. Hence, there was considerable interest in nozzles that might be less sensitive to small errors in fabrication.42
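
The expansion ratio, the 10-inch exit channel, and the 0.1-inch slit all follow from the standard isentropic area-ratio relation; a quick check, assuming a perfect gas with γ = 1.4:

\[
\frac{A}{A^{*}} = \frac{1}{M}\left[\frac{2}{\gamma+1}\left(1+\frac{\gamma-1}{2}M^{2}\right)\right]^{\frac{\gamma+1}{2(\gamma-1)}} = \frac{1}{7}\,(9.0)^{3} \approx 104.1 \quad \text{at } M = 7.
\]

An axisymmetric nozzle with a one-inch throat therefore needs an exit diameter of \(\sqrt{104.1} \approx 10.2\) inches, while a two-dimensional nozzle of constant width, with an exit channel some 10.5 inches high, needs a throat slit of only about \(10.5/104.1 \approx 0.1\) inch.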

Initial work focused on a two-step nozzle. The first step was flat and constant in height, allowing the flow to expand to 10 inches wide in the horizontal plane and to reach Mach 4.36. The second step maintained this width while allowing the flow to expand to 10.5 inches in height, thus achieving Mach 7. But this nozzle performed poorly, with investigators describing its flow as “entirely unsatisfactory for use in a wind tunnel.” The Mach number reached 6.5, but the flow in the test section was “not sufficiently uniform for quantitative wind-tunnel test purposes.” This was due to “a thick boundary layer which developed in the first step” along the flat parallel walls set closely together at the top and bottom.43

A two-dimensional, single-step nozzle gave much better results. Its narrow slitlike throat indeed proved sensitive; this was the nozzle that gave the variation with time of the dynamic pressure. Still, except for this thermal-expansion effect, this nozzle proved “far superior in all respects” when compared with the two-step nozzle. In turn, the thermal expansion in time proved amenable to correction. This expansion occurred because the nozzle was made of steel. The commercially available alloy Invar had a far lower coefficient of thermal expansion. A new nozzle, fabricated from this material, entered service in 1954 and greatly reduced problems due to expansion of the nozzle throat.44

Another topic of research addressed the usefulness of the optical techniques used for flow visualization. The test gas, after all, was simply air. Even when it formed shock waves near a model under test, the shocks could not be seen with the unaided eye. Therefore, investigators were accustomed to using optical instruments when studying a flow. Three methods were in use: interferometry, schlieren, and shadowgraph. These respectively observed changes in air density, density gradient, and the rate of change of the gradient.

Such instruments had been in use for decades. Ernst Mach, of the eponymous Mach number, had used a shadowgraph as early as 1887 to photograph shock waves produced by a speeding bullet. Theodor Meyer, a student of Prandtl, used schlieren to visualize supersonic flow in a nozzle in 1908. Interferometry gave the most detailed photos and the most information, but an interferometer was costly and difficult to operate. Shadowgraphs gave the least information but were the least costly and easiest to use. Schlieren apparatus was intermediate in both respects and was employed often.45

Still, all these techniques depended on the flow having a minimum density. One could not visualize shock waves in a vacuum because they did not exist. Highly rarefied flows gave similar difficulties, and hypersonic flows indeed were rarefied. At Mach 7, a flow of air fell in pressure to less than one part in 4000 of its initial value, reducing an initial pressure of 40 atmospheres to less than one-hundredth of an atmosphere.46 Higher test-section pressures would have required correspondingly higher pressures in the tank and upstream of the nozzle. But low test-section pressures were desirable because they were physically realistic. They corresponded to conditions in the upper atmosphere, where hypersonic missiles were to fly.
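
The figure of one part in 4000 is a direct consequence of isentropic expansion; a quick check, again assuming a perfect gas with γ = 1.4:

\[
\frac{p_{0}}{p} = \left(1+\frac{\gamma-1}{2}M^{2}\right)^{\frac{\gamma}{\gamma-1}} = \left(1+0.2\times 7^{2}\right)^{3.5} = 10.8^{3.5} \approx 4100 \quad \text{at } M = 7,
\]

so a tank at 40 atmospheres feeds a test section at roughly 0.01 atmosphere.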

Becker reported in 1950 that the limit of usefulness of the schlieren method “is reached at a pressure of about 1 mm of mercury for slender test models at M = 7.0.”47 This corresponded to the pressure in the atmosphere at 150,000 feet, and there was interest in reaching the equivalent of higher altitudes still. A consultant, Joseph Kaplan, recommended using nitrogen as a test gas and making use of an afterglow that persists momentarily within this gas when it has been excited by an electrical discharge. With the nitrogen literally glowing in the dark, it became much easier to see shock waves and other features of the flow field at very low pressures.

“The nitrogen afterglow appears to be usable at static pressures as low as 100 microns and perhaps lower,” Becker wrote.48 This corresponded to pressures of barely a ten-thousandth of an atmosphere, which exist near 230,000 feet. It also corresponded to the pressure in the test section of a blowdown wind tunnel with air in the tank at 50 atmospheres and the flow at Mach 13.8.49 Clearly, flow visualization would not be a problem.
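
The same isentropic relation, evaluated at Mach 13.8, bears out this figure:

\[
\frac{p_{0}}{p} = \left(1+0.2\times 13.8^{2}\right)^{3.5} \approx 3.7\times 10^{5},
\]

so 50 atmospheres in the tank gives a static pressure near \(1.3\times 10^{-4}\) atmosphere, which is indeed about 100 microns of mercury.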

Condensation, nozzle design, and flow visualization were important topics in their own right. Nor were they merely preliminaries. They addressed an important reason for building this tunnel: to learn how to design and use subsequent hypersonic facilities. In addition, although this 11-inch tunnel was small, there was much interest in using it for studies in hypersonic aerodynamics.

This early work had a somewhat elementary character, like the hypersonic experiments of Erdmann at Peenemunde. When university students take initial courses in aerodynamics, their textbooks and lab exercises deal with simple cases such as flow over a flat plate. The same was true of the first aerodynamic experiments with the 11-inch tunnel. The literature held a variety of theories for calculating lift, drag, and pressure distributions at hypersonic speeds. The experiments produced data that permitted comparison with theory—to check their accuracy and to determine circumstances under which they would fail to hold.

One set of tests dealt with cone-cylinder configurations at Mach 6.86. These amounted to small and simplified representations of a missile and its nose cone. The test models included cones, cylinders with flat ends, and cones with cylindrical afterbodies, studied at various angles of attack. For flow over a cone, the British researchers Geoffrey I. Taylor and J. W. Maccoll published a treatment in 1933. This quantitative discussion was a cornerstone of supersonic theory and showed its merits anew at this high Mach number. An investigation showed that it held “with a high degree of accuracy.”

The method of characteristics, devised by Prandtl and Busemann in 1929, was a standard analytical method for designing surfaces for supersonic flow, including wings and nozzles. It was simple enough to lend itself to hand computation, and it gave useful results at lower supersonic speeds. Tests in the 11-inch facility showed that it continued to give good accuracy in hypersonic flow. For flow with angle of attack, a theory put forth by Antonio Ferri, a leading Italian aerodynamicist, produced “very good results.” Still, not all preexisting theories proved to be accurate. One treatment gave good results for drag but overestimated some pressures and values of lift.50

Boundary-layer effects proved to be important, particularly in dealing with hypersonic wings. Tests examined a triangular delta wing and a square wing, the latter having several airfoil sections. Existing theories gave good results for lift and drag at modest angles of attack. However, predicted pressure distributions were often in error. This resulted from flow separation at high angles of attack—and from the presence of thick laminar boundary layers, even at zero angle of attack. These findings held high significance, for the very purpose of a hypersonic wing was to generate a pressure distribution that would produce lift, without making the vehicle unstable and prone to go out of control while in flight.

The aerodynamicist Charles McLellan, who had worked with Becker in designing the 11-inch tunnel and who had become its director, summarized the work within the Journal of the Aeronautical Sciences. He concluded that near Mach 7, the aerodynamic characteristics of wings and bodies “can be predicted by available theoretical methods with the same order of accuracy usually obtainable at lower speeds, at least for cases in which the boundary layer is laminar.”51

At hypersonic speeds, boundary layers become thick because they sustain large temperature changes between the wall and the free stream. Mitchel Bertram, a colleague of McLellan, gave an approximate theory for the laminar hypersonic boundary layer on a flat plate. Using the 11-inch tunnel, he showed good agreement between his theory and experiment in several significant cases. He noted that boundary-layer effects could increase drag coefficients at least threefold, when compared with values using theories that include only free-stream flow and ignore the boundary layer. This emphasized anew the importance of the boundary layer in producing hypersonic skin friction.52

These results were fundamental, both for aerodynamics and for wind-tunnel design. With them, the 11-inch tunnel entered into a brilliant career. It had been built as a pilot facility, to lay groundwork for a much larger hypersonic tunnel that could sustain continuous flows. This installation, the Continuous Flow Hypersonic Tunnel (CFHT), indeed was built. Entering service in 1962, it had a 31-inch test section and produced flows at Mach 10.53

Still, it took a long time for this big tunnel to come on line, and all through the 1950s the 11-inch facility continued to grow in importance. At its peak, in 1961, it conducted more than 2,500 test runs, for an average of 10 per working day. It remained in use until 1972.54 It set the pace with its use of the blowdown principle, which eliminated the need for costly continuous-flow compressors. Its run times proved to be adequate, and the CFHT found itself hard-pressed to offer much that was new. It had been built for continuous operation but found itself used in a blowdown mode most of the time. Becker wrote that his 11-inch installation “far exceeded” the CFHT “in both the importance and quality of its research output.” He described it as “the only ‘pilot tunnel’ in NACA history to become a major research facility in its own right.”55

Yet while the work of this wind tunnel was fundamental to the development of hypersonics, in 1950 the field of hypersonics was not fundamental to anything in particular. Plenty of people expected that America in time would build missiles and aircraft for flight at such speeds, but in that year no one was doing so. This soon changed, and the key year was 1954. In that year the Air Force embraced the X-15, a hypersonic airplane for which studies in the 11-inch tunnel proved to be essential. Also in that year, advances in the apparently unrelated field of nuclear weaponry brought swift and emphatic approval for the development of the ICBM. With this, hypersonics vaulted to the forefront of national priority.

On LACE and ACES

We consider the estimated LACE-ACES performance very optimistic. In several cases complete failure of the project would result from any significant performance degradation from the present estimates…. Obviously the advantages claimed for the system will not be available unless air can be condensed and purified very rapidly during flight. The figures reported indicate that about 0.8 ton of air per second would have to be processed.

In conventional, i.e., ordinary commercial equipment, this would require a distillation column having a cross section on the order of 500 square feet…. It is proposed to increase the capacity of equipment of otherwise conventional design by using centrifugal force. This may be possible, but as far as the Committee knows this has never been accomplished.

On other propulsion systems:

When reduced to a common basis and compared with the best of current technology, all assumed large advances in the state-of-the-art…. On the basis of the best of current technology, none of the schemes could deliver useful payloads into orbits.

On vehicle design:

We are gravely concerned that too much emphasis may be placed on the more glamorous aspects of the Aerospace Plane resulting in neglect of what appear to be more conventional problems. The achievement of low structural weight is equally important… as is the development of a highly successful propulsion system.

Regarding scramjets, the panel was not impressed with claims that supersonic combustion had been achieved in existing experiments:

These engine ideas are based essentially upon the feasibility of diffusion deflagration flames in supersonic flows. Research should be immediately initiated using existing facilities… to substantiate the feasibility of this type of combustion.

The panelists nevertheless gave thumbs-up to the Aerospaceplane effort as a continuing program of research. Their report urged a broadening of topics, placing greater emphasis on scramjets, structures and materials, and two-stage-to-orbit configurations. The array of proposed engines was “all sufficiently interesting so that research on all of them should be continued and emphasized.”65

As the studies went forward in the wake of this review, new propulsion concepts continued to flourish. Lockheed was in the forefront. This firm had initiated company-funded work during the spring of 1959 and had a well-considered single-stage concept two years later. An artist’s rendering showed nine separate rocket nozzles at its tail. The vehicle also mounted four ramjets, set in pods beneath the wings.

Convair’s Space Plane had used separated nitrogen as a propellant, heating it in the LACE precooler and allowing it to expand through a nozzle to produce thrust. Lockheed’s Aerospace Plane turned this nitrogen into an important system element, with specialized nitrogen rockets delivering 125,000 pounds of thrust. This thrust certainly did not overcome the drag produced by air collection, for that would have turned the vehicle into a perpetual motion machine. Even so, the nitrogen rockets made a valuable contribution.66


Lockheed’s Aerospaceplane concept. The alternate hypersonic in-flight refueling system approach called for propellant transfer at Mach 6. (Art by Dennis Jenkins)


Republic’s Aerospaceplane concept showed extensive engine-airframe integration. (Republic Aviation)

For takeoff, Lockheed expected to use Turbo-LACE. This was a LACE variant that sought again to reduce the inherently hydrogen-rich operation of the basic system. Rather than cool the air until it was liquid, Turbo-LACE chilled it deeply but allowed it to remain gaseous. Being very dense, it could pass through a turbocompressor and reach pressures in the hundreds of psi. This saved hydrogen because less was needed to accomplish this cooling. The Turbo-LACE engines were to operate at chamber pressures of 200 to 250 psi, well below the internal pressure of standard rockets but high enough to produce 300,000 pounds of thrust by using turbocompressed oxygen.67

Republic Aviation continued to emphasize the scramjet. A new configuration broke with the practice of mounting these engines within pods, as if they were turbojets. Instead, this design introduced the important topic of engine-airframe integration by setting forth a concept that amounted to a single enormous scramjet fitted with wings and a tail. A conical forward fuselage served as an inlet spike. The inlets themselves formed a ring encircling much of the vehicle. Fuel tankage filled most of its capacious internal volume.

This design study took two views regarding the potential performance of its engines. One concept avoided the use of LACE or ACES, assuming again that this craft could scram all the way to orbit. Still, it needed engines for takeoff, so turboramjets were installed, with both Pratt & Whitney and General Electric providing candidate concepts. Republic thus was optimistic at high Mach but conservative at low speed.

The other design introduced LACE and ACES both for takeoff and for final ascent to orbit and made use of yet another approach to derichening the hydrogen. This was SuperLACE, a concept from Marquardt that placed slush hydrogen rather than standard liquid hydrogen in the main tank. The slush consisted of liquid that contained a considerable amount of solidified hydrogen. It therefore stood at the freezing point of hydrogen, 14 K, which was markedly lower than the 21 K of liquid hydrogen at the boiling point.68

SuperLACE reduced its use of hydrogen by shunting part of the flow, warmed in the LACE heat exchanger, into the tank. There it mixed with the slush, chilling again to liquid while melting some of the hydrogen ice. Careful control of this flow ensured that while the slush in the tank gradually turned to liquid and rose toward the 21 K boiling point, it did not get there until the air-collection phase of a flight was finished. As an added bonus, the slush was noticeably denser than the liquid, enabling the tank to hold more fuel.69

LACE and ACES remained in the forefront, but there also was much interest in conventional rocket engines. Within the Aerospaceplane effort, this approach took the name POBATO, Propellants On Board At Takeoff. These rocket-powered vehicles gave points of comparison for the more exotic types that used LACE and scramjets, but here too people used their imaginations. Some POBATO vehicles ascended vertically in a classic liftoff, but others rode rocket sleds along a track while angling sharply upward within a cradle.70

In Denver, the Martin Company took rocket-powered craft as its own, for this firm expected that a next-generation launch vehicle of this type could be ready far sooner than one based on advanced airbreathing engines. Its concepts used vertical liftoff, while giving an opening for the ejector rocket. Martin introduced a concept of its own called RENE, Rocket Engine Nozzle Ejector, and conducted experiments at the Arnold Engineering Development Center. These tests went forward during 1961, using a liquid rocket engine with a nozzle of 5-inch diameter set within a shroud of 17-inch width. Test conditions corresponded to flight at Mach 2 and 40,000 feet, with the shrouds or surrounding ducts having various lengths to achieve increasingly thorough mixing. The longest duct gave the best performance, increasing the rated 2,000-pound thrust of the rocket to as much as 3,100 pounds.71

A complementary effort at Marquardt sought to demonstrate the feasibility of LACE. The work started with tests of heat exchangers built by Garrett AiResearch that used liquid hydrogen as the working fluid. A company-made film showed dark liquid air coming down in a torrent, as seen through a porthole. Further tests used this liquefied air in a small thrust chamber. The arrangement made no attempt to derichen the hydrogen flow; even though it ran very fuel-rich, it delivered up to 275 pounds of thrust. As a final touch, Marquardt crafted a thrust chamber of 18-inch diameter and simulated LACE operation by feeding it with liquid air and gaseous hydrogen from tanks. It showed stable combustion, delivering thrust as high as 5,700 pounds.72

Within the Air Force, the SAB’s Ad Hoc Committee on Aerospaceplane continued to provide guidance along with encouraging words. A review of July 1962 was less skeptical in tone than the one of 18 months earlier, citing “several attractive arguments for a continuation of this program at a significant level of funding”:

It will have the military advantages that accrue from rapid response times and considerable versatility in choice of landing area. It will have many of the advantages that have been demonstrated in the X-15 program, namely, a real pay-off in rapidly developing reliability and operational pace that comes from continuous re-use of the same hardware again and again. It may turn out in the long run to have a cost effectiveness attractiveness… the cost per pound may eventually be brought to low levels. Finally, the Aerospaceplane program will develop the capability for flights in the atmosphere at hypersonic speeds, a capability that may be of future use to the Defense Department and possibly to the airlines.73

Single-stage-to-orbit (SSTO) was on the agenda, a topic that merits separate comment. The space shuttle is a stage-and-a-half system; it uses solid boosters plus a main stage, with all engines burning at liftoff. It is a measure of progress, or its lack, in astronautics that the Soviet R-7 rocket that launched the first Sputniks was also stage-and-a-half.74 The concept of SSTO has tantalized designers for decades, with these specialists being highly ingenious and ready to show a can-do spirit in the face of challenges.

This approach certainly is elegant. It also avoids the need to launch two rockets to do the work of one, and if the Earth’s gravity field resembled that of Mars, SSTO would be the obvious way to proceed. Unfortunately, the Earth’s field is considerably stronger. No SSTO has ever reached orbit, either under rocket power or by using scramjets or other airbreathers. The technical requirements have been too severe.

The SAB panel members attended three days of contractor briefings and reached a firm conclusion: “It was quite evident to the Committee from the presentation of nearly all the contractors that a single stage to orbit Aerospaceplane remains a highly speculative effort.” Reaffirming a recommendation from its 1960 review, the group urged new emphasis on two-stage designs. It recommended attention to “development of hydrogen fueled turbo ramjet power plants capable of accelerating the first stage to Mach 6.0 to 10.0…. Research directed toward the second stage which will ultimately achieve orbit should be concentrated in the fields of high pressure hydrogen rockets and supersonic burning ramjets and air collection and enrichment systems.”75

Convair, home of Space Plane, had offered single-stage configurations as early as 1960. By 1962 its managers concluded that technical requirements placed such a vehicle out of reach for at least the next 20 years. The effort shifted toward a two-stage concept that took form as the 1964 Point Design Vehicle. With a gross takeoff weight of 700,000 pounds, the baseline approach used turboramjets to reach Mach 5. It cruised at that speed while using ACES to collect liquid oxygen, then accelerated anew using ramjets and rockets. Stage separation occurred at Mach 8.6 and 176,000 feet, with the second stage reaching orbit on rocket power. The payload was 23,000 pounds with turboramjets in the first stage, increasing to 35,000 pounds with the more speculative SuperLACE.

The documentation of this 1964 Point Design, filling 16 volumes, was issued during 1963. An important advantage of the two-stage approach proved to lie in the opportunity to optimize the design of each stage for its task. The first stage was a Mach 8 aircraft that did not have to fly to orbit and that carried its heavy wings, structure, and ACES equipment only to staging velocity. The second-stage design showed strong emphasis on re-entry; it had a blunted shape along with only modest requirements for aerodynamic performance. Even so, this Point Design pushed the state of the art in materials. The first stage specified superalloys for the hot underside along with titanium for the upper surface. The second stage called for coated refractory metals on its underside, with superalloys and titanium on its upper surfaces.76

Although more attainable than its single-stage predecessors, the Point Design still relied on untested technologies such as ACES, while anticipating the use in aircraft structures of exotic metals that had been studied merely as turbine blades, if indeed they had gone beyond the status of laboratory samples. The opportunity nevertheless existed for still greater conservatism in an airbreathing design, and the man who pursued it was Ernst Steinhoff. He had been present at the creation, having worked with Wernher von Braun on Germany’s wartime V-2, where he headed up the development of that missile’s guidance. After 1960 he was at the Rand Corporation, where he examined Aerospaceplane concepts and became convinced that single-stage versions would never be built. He turned to two-stage configurations and came up with an outline of a new one: ROLS, the Recoverable Orbital Launch System. During 1963 he took the post of chief scientist at Holloman Air Force Base and proceeded to direct a formal set of studies.77

The name of ROLS had been seen as early as 1959, in one of the studies that had grown out of SR-89774, but this concept was new. Steinhoff considered that the staging velocity could be as low as Mach 3. At once this raised the prospect that the first stage might take shape as a modest technical extension of the XB-70, a large bomber designed for flight at that speed, which at the time was being readied for flight test. ROLS was to carry a second stage, dropping it from the belly like a bomb, with that stage flying on to orbit. An ACES installation would provide the liquid oxidizer prior to separation, but to reach from Mach 3 to orbital speed, the second stage had to be simple indeed. Steinhoff envisioned a long vehicle resembling a torpedo, powered by hydrogen-burning rockets but lacking wings and thermal protection. It was not reusable and would not reenter, but it would be piloted. A project report stated, “Crew recovery is accomplished by means of a reentry capsule of the Gemini-Apollo class. The capsule forms the nose section of the vehicle and serves as the crew compartment for the entire vehicle.”78

ROLS appears in retrospect as a mirror image of NASA’s eventual space shuttle, which adopted a technically simple booster—a pair of large solid-propellant rockets—while packaging the main engines and most other costly systems within a fully recoverable orbiter. By contrast, ROLS used a simple second stage and a highly intricate first stage, in the form of a large delta-wing airplane that mounted eight turbojet engines. Its length of 335 feet was more than twice that of a B-52. Weighing 825,000 pounds at takeoff, ROLS was to deliver a payload of 30,000 pounds to orbit.79

Such two-stage concepts continued to emphasize ACES, while still offering a role for LACE. Experimental test and development of these concepts therefore remained on the agenda, with Marquardt pursuing further work on LACE. The earlier tests, during 1960 and 1961, had featured an off-the-shelf thrust chamber that had seen use in previous projects. The new work involved a small LACE engine, the MA117, that was designed from the start as an integrated system.

LACE had a strong suit in its potential for a very high specific impulse, Isp. This is the ratio of thrust to propellant flow rate and has dimensions of seconds. It is a key measure of performance, is equivalent to exhaust velocity, and expresses the engine’s fuel economy. Pratt & Whitney’s RL10, for instance, burned hydrogen and oxygen to give thrust of 15,000 pounds with an Isp of 433 seconds.80 LACE was an airbreather, and its Isp could be enormously higher because it took its oxidizer from the atmosphere rather than carrying it in an onboard tank. The term “propellant flow rate” referred to tanked propellants, not to oxidizer taken from the air. For LACE this meant fuel only.

The basic LACE concept produced a very fuel-rich exhaust, but approaches such as RENE and SuperLACE promised to reduce the hydrogen flow substantially. Indeed, such concepts raised the prospect that a LACE system might use an optimized mixture ratio of hydrogen and oxidizer, with this ratio being selected to give the highest Isp. The MA117 achieved this performance artificially by using a large flow of liquid hydrogen to liquefy air and a much smaller flow for the thrust chamber. Hot-fire tests took place during December 1962, and a company report stated that “the system produced 83% of the idealized theoretical air flow and 81% of the idealized thrust. These deviations are compatible with the simplifications of the idealized analysis.”81

The best performance run delivered 0.783 pounds per second of liquid air, which burned a flow of 0.0196 pounds per second of hydrogen. Thrust was 73 pounds; Isp reached 3,717 seconds, more than eight times that of the RL10. Tests of the MA117 continued during 1963, with the best measured values of Isp topping 4,500 seconds.82
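
These numbers are internally consistent; a quick check, counting only the tanked hydrogen as propellant in accordance with the airbreather convention noted above:

\[
I_{sp} = \frac{F}{\dot{w}_{\mathrm{H_2}}} = \frac{73\ \mathrm{lb}}{0.0196\ \mathrm{lb/s}} \approx 3700\ \mathrm{seconds},
\]

matching the reported 3,717 seconds to within rounding of the measured thrust and fuel flow. The RL10, by contrast, drew its entire flow of roughly \(15{,}000/433 \approx 35\) pounds per second of hydrogen and oxygen from onboard tanks.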

In a separate effort, the Marquardt manager Richard Knox directed the preliminary design of a much larger LACE unit, the MA116, with a planned thrust of 10,000 pounds. On paper, it achieved substantial derichening by liquefying only one-fifth of the airflow and using this liquid air in precooling, while deeply cooling the rest of the airflow without liquefaction. A turbocompressor then was to pump this chilled air into the thrust chamber. A flow of less than four pounds per second of liquid hydrogen was to serve both as fuel and as primary coolant, with the anticipated Isp exceeding 3,000 seconds.83

New work on RENE also flourished. The Air Force had a cooperative agreement with NASA’s Marshall Space Flight Center, where Fritz Pauli had developed a subscale rocket engine that burned kerosene with liquid oxygen for a thrust of 450 pounds. Twelve of these small units, mounted to form a ring, gave a basis for this new effort. The earlier work had placed the rocket motor squarely along the centerline of the duct. In the new design, the rocket units surrounded the duct, leaving it unobstructed and potentially capable of use as an ejector ramjet. The cluster was tested successfully at Marshall in September 1963 and then went to the Air Force’s AEDC. As in the RENE tests of 1961, the new configuration gave a thrust increase of as much as 52 percent.84

While work on LACE and ejector rockets went forward, ACES stood as a particularly critical action item. Operable ACES systems were essential for the practical success of LACE. Moreover, ACES had importance distinctly its own, for it could provide oxidizer to conventional hydrogen-burning rocket engines, such as those of ROLS. As noted earlier, there were two techniques for air separation: by chemical methods and through use of a rotating fractional distillation apparatus. Both approaches went forward, each with its own contractor.

In Cambridge, Massachusetts, the small firm of Dynatech took up the challenge of chemical separation, launching its effort in May 1961. Several chemical reactions appeared plausible as candidates, with barium and cobalt offering particular promise:

2BaO2 ⇌ 2BaO + O2
2Co3O4 ⇌ 6CoO + O2

The double arrows indicate reversibility. The oxidation reactions were exothermic, occurring at approximately 1,600°F for barium and 1,800°F for cobalt. The reduction reactions, which released the oxygen, were endothermic, allowing the oxides to cool as they yielded this gas.

Dynatech’s separator unit consisted of a long rotating drum with its interior divided into four zones using fixed partitions. A pebble bed of oxide-coated particles lined the drum interior; containment screens held the particles in place while allowing the drum to rotate past the partitions with minimal leakage. The zones exposed the oxide alternately to high-pressure ram air for oxidation and to low pressure for reduction. The separation was to take place in flight, at speeds of Mach 4 to Mach 5, but an inlet could slow the internal airflow to as little as 50 feet per second, increasing the residence time of air within a unit. The company proposed that an array of such separators weighing just under 10 tons could handle 2,000 pounds per second of airflow while producing liquid oxygen of 65 percent purity.85

Ten tons of equipment certainly counts within a launch vehicle, even though it included the weight of the oxygen liquefaction apparatus. Still, it was vastly lighter than the alternative: the rotating distillation system. The Linde Division of Union Carbide pursued this approach. Its design called for a cylindrical tank containing the distillation apparatus, measuring nine feet long by nine feet in diameter and rotating at 570 revolutions per minute. With a weight of 9,000 pounds, it was to process 100 pounds per second of liquefied air—which made it 10 times as heavy as the Dynatech system, per pound of product. The Linde concept promised liquid oxygen of 90 percent purity, substantially better than the chemical system could offer, but the cited 9,000-pound weight left out additional weight for the LACE equipment that provided this separator with its liquefied air.86
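
The tenfold comparison follows from the stated capacities; a check of the arithmetic:

\[
\text{Dynatech: } \frac{20{,}000\ \mathrm{lb}}{2{,}000\ \mathrm{lb/s}} = 10\ \frac{\mathrm{lb}}{\mathrm{lb/s}}, \qquad \text{Linde: } \frac{9{,}000\ \mathrm{lb}}{100\ \mathrm{lb/s}} = 90\ \frac{\mathrm{lb}}{\mathrm{lb/s}},
\]

a ninefold disparity in weight per unit of throughput, roughly the factor of 10 cited here.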

A study at Convair, released in October 1963, gave a clear preference to the Dynatech concept. Returning to the single-stage Space Plane of prior years, Convair engineers considered a version with a weight at takeoff of 600,000 pounds, using either the chemical or the distillation ACES. The effort concluded that the Dynatech separator offered a payload to orbit of 35,800 pounds using barium and 27,800 pounds with cobalt. The Linde separator reduced this payload to 9,500 pounds. Moreover, because it had less efficiency, it demanded an additional 31,000 pounds of hydrogen fuel, along with an increase in vehicle volume of 10,000 cubic feet.87

The turn toward feasible concepts such as ROLS, along with the new emphasis on engineering design and test, promised a bright future for Aerospaceplane studies. However, a commitment to serious research and development was another matter. Advanced test facilities were critical to such an effort, but in August 1963 the Air Force canceled plans for a large Mach 14 wind tunnel at AEDC. This decision gave a clear indication of what lay ahead.88

A year earlier Aerospaceplane had received a favorable review from the SAB Ad Hoc Committee. The program nevertheless had its critics, who existed particularly within the SAB’s Aerospace Vehicles and Propulsion panels. In October 1963 they issued a report that dealt with proposed new bombers and vertical-takeoff-and-landing craft, as well as with Aerospaceplane, but their view was unmistakable on that topic:

The difficulties the Air Force has encountered over the past three years in identifying an Aerospaceplane program have sprung from the facts that the requirement for a fully recoverable space launcher is at present only vaguely defined, that today’s state-of-the-art is inadequate to support any real hardware development, and the cost of any such undertaking will be extremely large…. [T]he so-called Aerospaceplane program has had such an erratic history, has involved so many clearly infeasible factors, and has been subject to so much ridicule that from now on this name should be dropped. It is also recommended that the Air Force increase the vigilance that no new program achieves such a difficult position.89

Aerospaceplane lost still more of its rationale in December, as Defense Secretary Robert McNamara canceled Dyna-Soar. This program was building a mini-space shuttle that was to fly to orbit atop a Titan III launch vehicle. This craft was well along in development at Boeing, but program reviews within the Pentagon had failed to find a compelling purpose. McNamara thus disposed of it.90

Prior to this action, it had been possible to view Dyna-Soar as a prelude to operational vehicles of that general type, which might take shape as Aerospaceplanes. The cancellation of Dyna-Soar turned the Aerospaceplane concept into an orphan, a long-term effort with no clear relation to anything currently under way. In the wake of McNamara’s decision, Congress deleted funds for further Aerospaceplane studies, and Defense Department officials declined to press for its restoration within the FY 1964 budget, which was under consideration at that time. The Air Force carried forward with new conceptual studies of vehicles for both launch and hypersonic cruise, but these lacked the focus on advanced airbreathing propulsion that had characterized Aerospaceplane.91

There nevertheless was real merit to some of the new work, for this more realistic and conservative direction pointed out a path that led in time toward NASA’s space shuttle. The Martin Company made a particular contribution. It had designed no Aerospaceplanes; rather, using company funding, its technical staff had examined concepts called Astrorockets, with the name indicating the propulsion mode. Scramjets and LACE won little attention at Martin, but all-rocket vehicles were another matter. A concept of 1964 had a planned liftoff weight of 1,250 tons, making it intermediate in size between the Saturn I-B and Saturn V. It was a two-stage fully reusable configuration, with both stages having delta wings and flat undersides. These undersides fitted together at liftoff, belly to belly.


Martin’s Astrorocket. (U.S. Air Force)

The design concepts of that era were meant to offer glimpses of possible futures, but for this Astrorocket, the future was only seven years off. It clearly foreshadowed a class of two-stage fully reusable space shuttles, fitted with delta wings, that came to the forefront in NASA-sponsored studies of 1971. The designers at Martin were not clairvoyant; they drew on the background of Dyna-Soar and on studies at NASA-Ames of winged re-entry vehicles. Still, this concept demonstrated that some design exercises were returning to the mainstream.92

Further work on ACES also proceeded, amid unfortunate results at Dynatech. That company’s chemical separation processes had depended for success on having a very large area of reacting surface within the pebble-bed air separators. This appeared achievable through such means as using finely divided oxide powders or porous particles impregnated with oxide. But the research of several years showed that the oxide tended to sinter at high temperatures, markedly diminishing the reacting surface area. This did not make chemical separation impossible, but it sharply increased the size and weight of the equipment, which robbed this approach of its initially strong advantage over the Linde distillation system. This led to abandonment of Dynatech’s approach.93

Linde’s system was heavy and drastically less elegant than Dynatech’s alternative, but it amounted largely to a new exercise in mechanical engineering and went forward to successful completion. A prototype operated in test during 1966, and while limits to the company’s installed power capacity prevented the device from processing the rated flow of 100 pounds of air per second, it handled 77 pounds per second, yielding a product stream of oxygen that was up to 94 percent pure. Studies of lighter-weight designs also proceeded. In 1969 Linde proposed to build a distillation air separator, rated again at 100 pounds per second, weighing 4,360 pounds. This was only half the weight allowance of the earlier configuration.94

In the end, though, Aerospaceplane failed to identify new propulsion concepts that held promise and that could be marked for mainstream development. The program’s initial burst of enthusiasm had drawn on a view that the means were in hand, or soon would be, to leap beyond the liquid-fuel rocket as the standard launch vehicle and to pursue access to orbit using methods that were far more advanced. The advent of the turbojet, which had swiftly eclipsed the piston engine, was on everyone’s mind. Yet for all the ingenuity behind the new engine concepts, they failed to deliver. What was worse, serious technical review gave no reason to believe that they could deliver.

In time it would become clear that hypersonics faced a technical wall. Only limited gains were achievable in airbreathing propulsion, with single-stage-to-orbit remaining out of reach and no easy way at hand to break through to the really advanced performance for which people hoped.


Propulsion

In the spring of 1992 the NASP Joint Program Office presented a final engine design called the E22A. It had a length of 60 feet and included an inlet ramp, cowled inlet, combustor, and nozzle. An isolator, located between the inlet and combustor, sought to prevent unstarts by processing flow from the inlet through a series of oblique shocks, which accommodated the increased backpressure from the combustor.

Program officials then constructed two accurately scaled test models. The Subscale Parametric Engine (SXPE) was built to one-eighth scale and had a length of eight feet. It was tested from April 1993 to March 1994. The Concept Demonstrator Engine (CDE), which followed, was built to a scale of 30 percent. Its length topped 16 feet, and it was described as “the largest airframe-integrated scramjet engine ever tested.”26

In working with the SXPE, researchers had an important goal: achieving combustion of hydrogen within its limited length. To promote rapid ignition, the engine used a continuous flow of a silane-hydrogen mixture as a pilot, with the silane igniting spontaneously on exposure to air. In addition, to promote mixing, the model incorporated an accurate replication of the spacing between the fuel-injecting struts and ramps, with this spacing being preserved at the model’s one-eighth scale. The combustor length required to achieve the desired level of mixing then scaled in this fashion as well.

The larger CDE was tested within the Eight-Foot High-Temperature Tunnel, which was Langley’s biggest hypersonic facility. The tests mapped the flowfield entering the engine, determined the performance of the inlet, and explored the potential performance of the design. Investigators varied the fuel flow rate, using the combustors to vary its distribution within the engine.

Boundary-layer effects are important in scramjets, and the tests might have replicated the boundary layers of a full-scale engine by operating at correspondingly higher flow densities. For the CDE, at 30 percent scale, the appropriate density would have been 1/0.3 or 3.3 times the atmospheric density at flight altitude. For the SXPE, at one-eighth scale, the test density would have shown an eightfold increase over atmospheric. However, the SXPE used an arc-heated test facility that was limited in the power that drove its arc, and it provided its engine with air at only one-fiftieth of that density. The High-Temperature Tunnel faced limits on its flow rate and delivered its test gas at only one-sixth of the appropriate density.
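
These density requirements follow from Reynolds-number similarity; a sketch of the scaling argument, assuming the tunnel matches flight velocity and temperature (hence viscosity):

\[
Re = \frac{\rho V L}{\mu}, \qquad L_{\mathrm{model}} = s\,L_{\mathrm{full}} \;\Longrightarrow\; \rho_{\mathrm{test}} = \frac{\rho_{\mathrm{flight}}}{s},
\]

which gives \(1/0.3 \approx 3.3\) times atmospheric density for the 30-percent CDE and eight times atmospheric for the one-eighth-scale SXPE.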

Engineers sought to compensate by using analytical methods to determine the drag in a full-scale engine. Still, this inability to replicate boundary-layer effects meant that the wind-tunnel tests gave poor simulations of internal drag within the test engines. This could have led to erroneous estimates of true thrust, net of drag. In turn, this showed that even when working with large test models and with test facilities of impressive size, true simulations of the boundary layer were ruled out from the start.27

For takeoff from a runway, the X-30 was to use a Low-Speed System (LSS). It comprised two principal elements: the Special System, an ejector ramjet; and the Low Speed Oxidizer System, which used LACE.28 The two were highly synergistic. The ejector used a rocket, which might have been suitable for the final ascent to orbit, with ejector action increasing its thrust during takeoff and acceleration. By giving an exhaust velocity that was closer to the vehicle velocity, the ejector also increased the fuel economy.

The LACE faced the standard problem of requiring far more hydrogen than could be burned in the air it liquefied. The ejector accomplished some derichening by providing a substantial flow of entrained air that burned some of the excess. Additional hydrogen, warmed in the LACE heat exchanger, went into the fuel tanks, which were full of slush hydrogen. By melting the slush into conventional liquid hydrogen (LH2), some LACE coolant was recycled to stretch the vehicle’s fuel supply.29

There was good news in at least one area of LACE research: deicing. LACE systems have long been notorious for their tendency to clog with frozen moisture within the air that they liquefy. “The largest LACE ever built made around half a pound per second of liquid air,” Paul Czysz of McDonnell Douglas stated in 1986. “It froze up at six percent relative humidity in the Arizona desert, in 38 seconds.” Investigators went on to invent more than a dozen methods for water alleviation. The most feasible approach called for injecting antifreeze into the system, to enable the moisture to condense out as liquid water without freezing. A rotary separator eliminated the water, with the dehumidified air being so cold as to contain very little residual water vapor.30

The NASP program was not run by shrinking violets, and its managers stated that its LACE was not merely to operate during hot days in the desert near Phoenix. It was to function even on rainy days, for the X-30 was to be capable of flight from anywhere in the world. At NASA-Lewis, James Van Fossen built a water-alleviation system that used ethylene glycol as the antifreeze, spraying it directly onto the cold tubes of a heat exchanger. Water, condensing on those tubes, dissolved some of the glycol and remained liquid as it swept downstream with the flow. He reported that this arrangement protected the system against freezing at temperatures as low as -55°F, with the moisture content of the chilled air being reduced to 0.00018 pounds in each pound of this air. This represented removal of at least 99 percent of the humidity initially present in the airflow.31

Pratt & Whitney conducted tests of a LACE precooler that used this arrangement. A company propulsion manager, Walt Lambdin, addressed a NASP technical review meeting in 1991 and reported that it completely eliminated problems of reduced performance of the precooler due to formation of ice. With this, the problem of ice in a LACE system appeared amenable to control.32

It was also possible to gain insight into the LACE state of the art by considering contemporary work that was under way in Japan. The point of departure in that country was the H-2 launch vehicle, which first flew to orbit in February 1994. It was a two-stage expendable rocket, with a liquid-fueled core flanked by two solid boosters. LACE was pertinent because a long-range plan called for upgrades that could replace the solid strap-ons with new versions using LACE engines.33

Mitsubishi Heavy Industries was developing the H-2’s second-stage engine, designated LE-5. It burned hydrogen and oxygen to produce 22,000 pounds of thrust. As an initial step toward LACE, this company built heat exchangers to liquefy air for this engine. In tests conducted during 1987 and 1988, the Mitsubishi heat exchanger demonstrated liquefaction of more than three pounds of air for every pound of LH2. This was close to four to one, the theoretical limit based on the thermal properties of LH2 and of air. Still, it takes 34.6 pounds of air to burn a pound of hydrogen, and an all-LACE LE-5 was to run so fuel-rich that its thrust was to be only 6,000 pounds.
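
The mismatch between these two ratios is what forced such fuel-rich operation, and it is worth a line of arithmetic. Here is a minimal sketch using only the figures in the paragraph above; the value of 3.4 is an assumed number near the Mitsubishi result.

    # Figures from the text: stoichiometric combustion of hydrogen takes
    # 34.6 lb of air per lb of fuel, while a LACE heat exchanger can
    # liquefy only about 3 to 4 lb of air per lb of LH2 coolant.
    STOICH_AIR_PER_LB_H2 = 34.6
    LIQUEFIED_AIR_PER_LB_LH2 = 3.4   # assumed, near the Mitsubishi result

    burnable = LIQUEFIED_AIR_PER_LB_LH2 / STOICH_AIR_PER_LB_H2
    print(f"share of the hydrogen flow that the liquefied air can burn: "
          f"{burnable:.0%}")   # roughly 10 percent

Roughly 90 percent of the hydrogen coolant thus passes through unburned, which is why the ejector’s entrained airflow mattered so much for derichening.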

But the Mitsubishi group found their own path to prevention of ice buildup. They used a freeze-thaw process, periodically switching the cooler to a flow of ambient air to melt the ice after the LH2-chilled tubes had become clogged. The design also provided spaces between the tubes and allowed a high-speed airflow to blow ice from them.34

LACE nevertheless remained controversial, and even with the moisture problem solved, there remained the problem of weight. Czysz noted that an engine with 100,000 pounds of thrust would need 600 pounds per second of liquid air: “The largest liquid-air plant in the world today is the AiResearch plant in Los Angeles, at 150 pounds per second. It covers seven acres. It contains 288,000 tubes welded to headers and 59 miles of 3/32-inch tubing.”35

Still, no law required the use of so much tubing, and advocates of LACE have long been inventive. A 1963 Marquardt concept called for an engine with 10,000 pounds of thrust, which might have been further increased by using an ejector. This appeared feasible because LACE used LH2 as the refrigerant. This gave far greater effectiveness than the AiResearch plant, which produced its refrigerant on the spot by chilling air through successive stages.36

For LACE heat exchangers, thin-walled tubing was essential. The Japanese model, which was sized to accommodate the liquid-hydrogen flow rate of the LE-5, used 5,400 tubes and weighed 304 pounds, which is certainly noticeable when the engine is to put out no more than 6,000 pounds of thrust. During the mid-1960s investigators at Marquardt and AiResearch fabricated tubes with wall thicknesses as low as 0.001 inch, or one mil. Such tubes had not been used in any heat exchanger subassemblies, but 2-mil tubes of stainless steel had been crafted into a heat exchanger core module with a length of 18 inches.37

Even so, this remained beyond the state of the art for NASP, a quarter-century later. Weight estimates for the X-30 LACE heat exchanger were based on the assumed use of 3-mil Weldalite tubing, but a 1992 Lockheed review stated, “At present, only small quantities of suitable, leak free, 3-mil tubing have been fabricated.” The plans of that year called for construction of test prototypes using 6-mil Weldalite tubing, for which “suppliers have been able to provide significant quantities.” Still, a doubled thickness of the tubing wall was not the way to achieve low weight.38

Other weight problems arose in seeking to apply an ingenious technique for derichening the product stream by increasing the heat capacity of the LH2 coolant. Molecular hydrogen, H2, has two atoms in its molecule and exists in two forms: para and ortho, which differ in the orientation of the spins of their nuclei. The ortho form has parallel spin vectors, while the para form has spin vectors that are oppositely aligned. The ortho molecule amounts to a higher-energy form and loses energy as heat when it transforms into the para state. The reaction therefore is exothermic.

The two forms exist in different equilibrium concentrations, depending on the temperature of the bulk hydrogen. At room temperature the gas is about 25 percent para and 75 percent ortho. When liquefied, the equilibrium state is 100 percent para. Hence it is not feasible to prepare LH2 simply by liquefying the room-temperature gas. The large component of ortho will relax to para over several hours, producing heat and causing the liquid to boil away. The gas thus must be exposed to a catalyst to convert it to the para form before it is liquefied.

These aspects of fundamental chemistry also open the door to a molecular shift that is endothermic and that absorbs heat. One achieves this again by using a catalyst, this time to convert the LH2 from para to ortho. This reaction requires heat, which is obtained from the liquefying airflow within the LACE. As a consequence, the air chills more readily when using a given flow of hydrogen refrigerant. This effect is sufficiently strong to increase the heat-sink capacity of the hydrogen by as much as 25 percent.39

This concept also dates to the 1960s. Experiments showed that ruthenium metal deposited on aluminum oxide provided a suitable catalyst. For 90 percent para-to-ortho conversion, the LACE required a “beta,” a ratio of catalyst mass to hydrogen flow rate, of five to seven pounds of this material for each pound per second of hydrogen flow. Data published in 1988 showed that a beta of five pounds could achieve 85 percent conversion, with this value showing improvement during 1992. However, X-30 weight estimates assumed a beta of two pounds, and this performance remained out of reach.40
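
The weight penalty implied by beta can be illustrated directly. In the sketch below, the hydrogen flow rate of 100 pounds per second is purely hypothetical, chosen only to show the scaling; the beta values are those given in the text.

    # "Beta" as defined in the text: pounds of ruthenium-on-alumina
    # catalyst per pound-per-second of hydrogen flow.
    h2_flow = 100.0   # lb/s of hydrogen, an illustrative figure only

    for label, beta in [("demonstrated, 1988 data", 5.0),
                        ("assumed in X-30 weight estimates", 2.0)]:
        print(f"beta = {beta:.0f} lb per lb/s -> "
              f"{beta * h2_flow:,.0f} lb of catalyst ({label})")

The factor of 2.5 between the demonstrated and the assumed beta is precisely the shortfall that kept this performance out of reach.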

During takeoff, the X-30 was to be capable of operating from existing runways and of becoming airborne at speeds similar to those of existing aircraft. The low-speed system, along with its accompanying LACE and ejector systems, therefore needed substantial levels of thrust. The ejector, again, called for a rocket exhaust to serve as a primary flow within a duct, entraining an airstream as the secondary flow. Ejectors gave good performance across a broad range of flight speeds, showing an effectiveness that increased with Mach. In the SR-71 at Mach 2.2, they accounted for 14 percent of the thrust in afterburner; at Mach 3.2 this was 28.4 percent. Nor did the SR-71 ejectors burn fuel. They functioned entirely as aerodynamic devices.41

It was easy to argue during the 1980s that their usefulness might be increased still further. The most important unclassified data had been published during the 1950s. A good engine needed a high pressure increase, but during the mid-1960s studies at Marquardt recommended a pressure rise by a factor of only about 1.5, when turbojets were showing increases that were an order of magnitude higher.42 The best theoretical treatment of ejector action dated to 1974. Its author, NASA’s B. H. Anderson, also wrote a computer program called REJECT that predicted the performance of supersonic ejectors. But that work came long before the tools of CFD were in hand, and a 1989 review noted that since then “little attention has been directed toward a better understanding of the details of the flow mechanism and behavior.”43

Within the NASP program, then, the ejector ramjet stood as a classic example of a problem that was well suited to new research. Ejectors were known to have good effectiveness, which might be increased still further and which stood as a good topic for current research techniques. CFD offered an obvious approach, and NASP activities supplemented computational work with an extensive program of experi­ment.44

The effort began at GASL, where Tony duPont’s ejector ramjet went on a static test stand during 1985 and impressed General Skantze. DuPont’s engine design soon took the title of the Government Baseline Engine and remained a topic of active experimentation during 1986 and 1987. Some work went forward at NASA-Langley, where the Combustion Heated Scramjet Test Facility exercised ejectors over the range of Mach 1.2 to 3.5. NASA-Lewis hosted further tests, at Mach 0.06 and from Mach 2 to 3.5 within its 10- by 10-foot Supersonic Wind Tunnel.

The Lewis engine was built to accommodate growth of boundary layers and placed a 17-degree wedge ramp upstream of the inlet. Three flowpaths were mounted side by side, but only the center duct was fueled; the others were “dummies” that gave data on unfueled operation for comparison. The primary flow had a pressure of 1,000 pounds per square inch and a temperature of 1,340°F, which simulated a fuel-rich rocket exhaust. The experiments studied the impact of fuel-to-air ratio on performance, although the emphasis was on development of controls.

Even so, the performance left much to be desired. Values of fuel-to-air ratio greater than 0.52, with unity representing complete combustion, at times brought “buzz” or unwanted vibration of the inlet structure. Even with no primary flow, the inlet failed to start. The main burner never achieved thermal choking, where the flow rate would rise to the maximum permitted by heat from burning fuel. Ingestion of the boundary layer significantly degraded engine performance. Thrust measurements were described as “no good” due to nonuniform thermal expansion across a break between zones of measurement. As a contrast to this litany of woe, operation of the primary gave a welcome improvement in the isolation of the inlet from the combustor.

Also at GASL, again during 1987, an ejector from Boeing underwent static test. It used a markedly different configuration that featured an axisymmetric duct and a fuel-air mixer. The primary flow was fuel-rich, with temperatures and pressures similar to those of NASA-Lewis. On the whole, the results of the Boeing tests were encouraging. Combustion efficiencies appeared to exceed 95 percent, while measured values of thrust, entrained airflow, and pressures were consistent with company predictions. However, the mixer performance was no more than marginal, and its length merited an increase for better performance.45

In 1989 Pratt & Whitney emerged as a major player, beginning with a subscale ejector that used a flow of helium as the primary. It underwent tests at company facilities within the United Technologies Research Center. These tests addressed the basic issue of attempting to increase the entrainment of secondary flow, for which non-combustible helium was useful. Then, between 1990 and 1992, Pratt built three versions of its Low Speed Component Integration Rig (LSCIR), testing them all within facilities of Marquardt.

LSCIR-1 used a design that included a half-scale X-30 flowpath. It included an inlet, front and main combustors, and nozzle, with the inlet cowl featuring fixed geometry. The tests operated using ambient air as well as heated air, with and without fuel in the main combustor, while the engine operated as a pure ramjet for several runs. Thermal choking was achieved, with measured combustion efficiencies lying within 2 percent of values suitable for the X-30. But the inlet was unstarted for nearly all the runs, which showed that it needed variable geometry. This refinement was added to LSCIR-2, which was put through its paces in July 1991, at Mach 2.7. The test sequence would have lasted longer but was terminated prematurely due to a burnthrough of the front combustor, which had been operating at 1,740°F. Thrust measurements showed only limited accuracy due to flow separation in the nozzle.

LSCIR-3 followed within months. The front combustor was rebuilt with a larger throat area to accommodate increased flow and received a new ignition system that used silane. This gas ignited spontaneously on contact with air. In tests, leaks developed between the main combustor, which was actively cooled, and the uncooled nozzle. A redesigned seal eliminated the leakage. The work also validated a method for calculating heat flux to the wall due to impingement of flow from the primaries.

Other results were less successful. Ignition proceeded well enough using pure silane, but a mix of silane and hydrogen failed as an ignitant. Problems continued to recur due to inlet unstarts and nozzle flow separation. The system produced 10,000 pounds of thrust at Mach 0.8 and 47,000 pounds at Mach 2.7, but this performance still was rated as low.

Within the overall LSS program, a Modified Government Baseline Engine went under test at NASA-Lewis during 1990, at Mach 3.5. The system now included hydraulically operated cowl and nozzle flaps that provided variable geometry, along with an isolator with flow channels that amounted to a bypass around the combustor. This helped to prevent inlet unstarts.

Once more the emphasis was on development of controls, with many tests operating the system as a pure ramjet. Only limited data were taken with the primaries on. Ingestion of the boundary layer gave significant degradation in engine performance, but in other respects most of the work went well. The ramjet operations were successful. The use of variable geometry provided reliable starting of the inlet, while operation in the ejector mode, with primaries on, again improved the inlet isolation by diminishing the effect of disturbances propagating upstream from the combustor.46

Despite these achievements, a 1993 review at Rocketdyne gave a blunt conclusion: “The demonstrated performance of the X-30 special system is lower than the performance level used in the cycle deck…the performance shortfall is primarily associated with restrictions on the amount of secondary flow.” (Secondary flow is entrained by the ejector’s main flow.) The experimental program had taught much concerning the prevention of inlet unstarts and the enhancement of inlet-combustor isolation, but the main goal—enhanced performance of the ejector ramjet—still lay out of reach.

Simple enlargement of a basic design offered little promise; Pratt & Whitney had tried that in LSCIR-3 and had found that this brought inlet flow separation along with reduced inlet efficiency. Then in March 1993, further work on the LSS was canceled due to budget cuts. NASP program managers took the view that they could accelerate an X-30 using rockets for takeoff, as an interim measure, with the LSS being added at a later date. Thus, although the LSS had initially been the critical item in duPont’s design, in time it was deferred to another day.47

Nose Cones and Re-entry

The ICBM concept of the early 1950s, called Atlas, was intended to carry an atomic bomb as a warhead, and there were two things wrong with this missile. It was unacceptably large and unwieldy, even with a warhead of reduced weight. In addition, to compensate for this limited yield, Atlas demanded unattainable accuracy in aim. But the advent of the hydrogen bomb solved both problems. The weight issue went away because projected H-bombs were much lighter, which meant that Atlas could be substantially smaller. The accuracy issue also disappeared. Atlas now could miss its target by several miles and still destroy it, by the simple method of blowing away everything that lay between the aim point and the impact point.

Studies by specialists, complemented by direct tests of early H-bombs, brought a dramatic turnaround during 1954 as Atlas vaulted to priority. At a stroke, its designers faced the re-entry problem. They needed a lightweight nose cone that could protect the warhead against the heat of atmosphere entry, and nothing suitable was in sight. The Army was well along in research on this problem, but its missiles did not face the severe re-entry environment of Atlas, and its re-entry studies were not directly applicable.

The Air Force approached this problem systematically. It began by working with the aerodynamicist Arthur Kantrowitz, who introduced the shock tube as an instrument that could momentarily reproduce the pertinent flow conditions. Tests with rockets, notably the pilotless X-17, complemented laboratory experiments. The solution to the problem of nose-cone design came from George Sutton, a young physicist who introduced the principle of ablation. Test nose cones soon were in flight, followed by prototypes of operational versions.

Widening Prospects for Re-entry

The classic spaceship has wings, and throughout much of the 1950s both NACA and the Air Force struggled to invent such a craft. Design studies addressed issues as fundamental as whether this hypersonic rocket plane should have one particular wing-body configuration, or whether it should be upside down. The focus of the work was Dyna-Soar, a small version of the space shuttle that was to ride to orbit atop a Titan III. It brought remarkable engineering advances, but Pentagon policy makers, led by Defense Secretary Robert McNamara, saw it as offering little more than technical development, with no mission that could offer a military justification. In December 1963 he canceled it.

Better prospects attended NASA’s effort in manned spaceflight, which culminated in the Apollo piloted flights to the Moon. Apollo used no wings; rather, it relied on a simple cone that used the Allen-Eggers blunt-body principle. Still, its demands were stringent. It had to re-enter successfully with twice the energy of an entry from Earth orbit. Then it had to navigate a corridor, a narrow range of altitudes, to bleed off energy without either skipping back into space or encountering g-forces that were too severe. By doing these things, it showed that hypersonics was ready for this challenge.

Materials

No aircraft has ever cruised at Mach 5, and an important reason involves structures and materials. “If I cruise in the atmosphere for two hours,” says Paul Czysz of McDonnell Douglas, “I have a thousand times the heat load into the vehicle that the shuttle gets on its quick transit of the atmosphere.” The thermal environment of the X-30 was defined by aerodynamic heating and by the separate issue of flutter.48

A single concern dominated issues of structural design: The vehicle was to fly as low as possible in the atmosphere during ascent to orbit. Re-entry called for flight at higher altitudes, and the loads during ascent therefore were higher than those of re-entry. Ascent at lower altitude—200,000 feet, for instance, rather than 250,000—increased the drag on the X-30. But it also increased the thrust, giving a greater margin between thrust and drag that led to increased acceleration. Considerations of ascent, not re-entry, therefore shaped the selection of temperature-resistant materials.

Yet the aircraft could not fly too low, or it would face limits set by aerodynamic flutter. This resulted from forces on the vehicle that were not steady but oscillated, at frequencies of oscillation that changed as the vehicle accelerated and lost weight. The wings tended to vibrate at characteristic frequencies, as when bent upward and released to flex up and down. If the frequency of an aerodynamic oscillation matched that at which the wings were prone to flex, the aerodynamic forces could tear the wings off. Stiffness in materials, not strength, was what resisted flutter, and the vehicle was to fly a “flutter-limited trajectory,” staying high enough to avoid the problem.

Ascent trajectory of an airbreather. (NASA)

The mechanical properties of metals depend on their fine-grained structure. An ingot of metal consists of a mass of interlaced grains or crystals, and small grains give higher strength. Quenching, plunging hot metal into water, yields small grains but often makes the metal brittle or hard to form. Alloying a metal, as by adding small quantities of carbon to make steel, is another traditional practice. However, some additives refuse to dissolve or separate out from the parent metal as it cools.

To overcome such restrictions, techniques of powder metallurgy were in the forefront. These methods gave direct control of the microstructure of metals by forming them from powder, with the grains of powder sintering or welding together by being pressed in a mold at high temperature. A manufacturer could control the grain size independently of any heat-treating process. Powder metallurgy also overcame restrictions on alloying by mixing in the desired additives as powdered ingredients.

Several techniques existed to produce the powders. Grinding a metal slab to sawdust was the simplest, yielding relatively coarse grains. “Splat-cooling” gave better control. It extruded molten metal onto the chilled rim of a rotating wheel, which cooled it instantly into a thin ribbon. This represented a quenching process that produced a fine-grained microstructure in the metal. The ribbon then was chemically treated with hydrogen, which made it brittle, so that it could be ground into a fine powder. Heating the powder then drove off the hydrogen.

The Plasma Rotating Electrode Process, developed by the firm of Nuclear Metals, showed particular promise. The parent metal was shaped into a cylinder that rotated at up to 30,000 revolutions per minute and served as an electrode. An electric arc melted the spinning metal, which threw off droplets within an atmosphere of cool inert helium. The droplets plummeted in temperature by thousands of degrees within milliseconds, and their microstructures were so fine as to approach an amorphous state. Their molecules did not form crystals, even tiny ones, but arranged themselves in formless patterns. This process, called “rapid solidification,” promised particular gains in high-temperature strength.

Standard titanium alloys, for instance, lost strength at temperatures above 700 to 900°F. By using rapid solidification, McDonnell Douglas raised this limit to 1,100°F prior to 1986. Philip Parrish, the manager of powder metallurgy at DARPA, noted that his agency had spent some $30 million on rapid-solidification technology since 1975. In 1986 he described it as “an established technology. This technology now can stand alongside such traditional methods as ingot casting or drop forging.”49

Nevertheless, 1,100°F was not enough, for it appeared that the X-30 needed a material rated at 1,700°F. For several years, NASP design and trajectory studies indicated that a flight vehicle indeed would face such temperatures on its fuselage. But after 1990 the development of new baseline configurations led to an appreciation that the pertinent areas of the vehicle would face temperatures no higher than 1,500°F. At that temperature, advanced titanium alloys could serve in “metal matrix composites,” with thin-gauge metals being reinforced with fibers.

The new composition came from the firm of Titanium Metals and was designated Beta-21S. That company developed it specifically for the X-30 and patented it in 1989. It consisted of titanium along with 15 percent molybdenum, 2.8 percent columbium, 3 percent aluminum, and 0.2 percent silicon. Resistance to oxidation proved to be its strong suit, with this alloy showing resistance that was two orders of magnitude greater than that of conventional aircraft titanium. Tests showed that it also could be exposed repeatedly to leaks of gaseous hydrogen without being subject to embrittlement. Moreover, it lent itself readily to being rolled to foil-gauge thicknesses of 4 to 5 mil when metal matrix composites were fabricated.50

Comparison of some matrix alloys. (NASA)

Such titanium-matrix composites were used in representative X-30 structures. The Non-Integral Fuselage Tank Article (NIFTA) represented a section of X-30 fuselage at one-fourth scale. It was oblong in shape, eight feet long and measuring four by seven feet in cross section, and it contained a splice. Its skin thickness was 0.040 inches, about the same as for the X-30. It held an insulated tank that could hold either liquid nitrogen or LH2 in tests, which stood as a substantial engineering item in its own right.

The tank had a capacity of 940 gallons and was fabricated of graphite-epoxy composite. No liner protected the tankage on the inside, for graphite-epoxy was impervious to damage by LH2. However, the exterior was insulated with two half-inch thicknesses of Q-felt, a quartz-fiber batting with density of only 3.5 pounds per cubic foot. A thin layer of Astroquartz high-temperature cloth covered the Q-felt. This insulation filled space between the tank wall and the surrounding wall of the main structure, with both this space and the Q-felt being purged with helium.51

The test sequence for NIFTA duplicated the most severe temperatures and stresses of an ascent to orbit. These stresses began on the ground, with the vehicle being heavy with fuel and subject to a substantial bending load. There was also a large shear load, with portions of the vehicle being pulled transversely in opposite directions. This happened because the landing gear pushed upward to support the entire weight of the craft, while the weight of the hydrogen tank pushed downward only a few feet away. Other major bending and shear loads arose during subsonic climbout, with the X-30 executing a pullup maneuver.

Significant stresses arose near Mach 6 and resulted from temperature differences across the thickness of the stiffened skin. Its outer temperature was to be 800°F, but the tops of the stiffeners, a few inches away, were to be 350°F. These stiffeners were spot-welded to the skin panels, which raised the issue of whether the welds would hold amid the different thermal expansions. Then between Mach 10 and 16, the vehicle was to reach peak temperatures of 1,300°F. The temperature differences between the top and bottom of the vehicle also would be at their maximum.

The tests combined both thermal and mechanical loads and were conducted within a vacuum chamber at Wyle Laboratories during 1991. Banks of quartz lamps applied up to 1.5 megawatts of heat, while jacks imposed bending or shear forces that reached 100 percent of the design limits. Most tests placed nonflammable liquid nitrogen in the tank for safety, but the last of them indeed used LH2. With this supercold fuel at -423°F, the lamps raised the exterior temperature of NIFTA to the full 1,300°F, while the jacks applied the full bending load. A 1993 paper noted “100% successful completion of these tests,” including the one with LH2 that had been particularly demanding.52

NIFTA, again, was at one-fourth scale. In a project that ran from 1991 through the summer of 1994, McDonnell Douglas engineers designed and fabricated the substantially larger Full Scale Assembly. Described as “the largest and most representative NASP fuselage structure built,” it took shape as a component measuring 10 by 12 feet. It simulated a section of the upper mid-fuselage, just aft of the crew compartment.

A 1994 review declared that it “was developed to demonstrate manufacturing and assembly of a full scale fuselage panel incorporating all the essential structural details of a flight vehicle fuselage assembly.” Crafted in flightweight, it used individual panels of titanium-matrix composite that were as large as four by eight feet. These were stiffened with longitudinal members of the same material and were joined to circumferential frames and fittings of Ti-1100, a titanium alloy that used no fiber reinforcement. The complete assembly posed manufacturing challenges because the panels were of minimum thickness, having thinner gauges than had been used previously. The finished article was completed just as NASP was reaching its end, but it showed that the thin panels did not introduce significant problems.53

The firm of Textron manufactured the fibers, designated SCS-6 and -9, that reinforced the composites. As a final touch, in 1992 this company opened the world’s first manufacturing plant dedicated to the production of titanium-matrix composites. “We could get the cost down below a thousand dollars a pound if we had enough volume,” Bill Grant, a company manager, told Aerospace America. His colleague Jim Henshaw added, “We think SCS/titanium composites are fully developed for structural applications.”54

Such materials served to 1,500°F, but on the X-30 substantial areas were to withstand temperatures approaching 3,000°F, which is hotter than molten iron. If a steelworker were to plunge a hand into a ladle of this metal, the hand would explode from the sudden boiling of water in its tissues. In such areas, carbon-carbon was necessary. It had not been available for use in Dyna-Soar, but the Pentagon spent $200 million to fund its development between 1970 and 1985.55

Much of this supported the space shuttle, on which carbon-carbon protected such hot areas as the nose cap and wing leading edges. For the X-30, these areas expanded to cover the entire nose and much of the vehicle undersurface, along with the rudders and both the top and bottom surfaces of the wings. The X-30 was to execute 150 test flights, exposing its heat shield to prolonged thermal soaks while still in the atmosphere. This raised the problem of protection against oxidation.56

Selection of NASP materials based on temperature. (General Accounting Office)

Standard approaches called for mixing oxidation inhibitors into the carbon matrix and covering the surface with a coating of silicon carbide. However, there was a mismatch between the thermal expansions of the coating and the carbon-carbon substrate, which led to cracks. An interlayer of glass-forming sealant, placed between them, produced an impervious barrier that softened at high temperatures to fill the cracks. But these glasses did not flow readily at temperatures below 1,500°F. This meant that air could penetrate the coating and reach the carbon through open cracks to cause loss by oxidation.57

The goal was to protect carbon-carbon against oxidation for all 150 of those test flights, or 250 hours. These missions included 75 to orbit and 75 in hypersonic cruise. The work proceeded initially by evaluating several dozen test samples that were provided by commercial vendors. Most of these materials proved to resist oxidation for only 10 to 20 hours, but one specimen from the firm of Hitco reached 70 hours. Its surface had been grooved to promote adherence of the coating, and it gave hope that long operational life might be achieved.58

Complementing the study of vendors’ samples, researchers ordered new types of carbon-carbon and conducted additional tests. The most durable came from the firm of Rohr, with a coating by Science Applications International. It easily withstood 2,000°F for 200 hours and was still going strong at 2,500°F when the tests stopped after 150 hours. This excellent performance stemmed from its use of large quantities of oxidation inhibitors, which promoted long life, and of multiple glass layers in the coating.

But even the best of these carbon-carbons showed far poorer performance when tested in arcjets at 2,500°F. The high-speed airflows forced oxygen into cracks and pores within the material, while promoting evaporation of the glass sealants. Powerful roars within the arcjets imposed acoustic loads that contributed to cracking, with other cracks arising from thermal shock as test specimens were suddenly plunged into a hot flow stream. The best results indicated lifetimes of less than two hours.

Fortunately, actual X-30 missions were to impose 2,500°F temperatures for only a few minutes during each launch and re-entry. Even a single hour of lifetime therefore could permit panels of carbon-carbon to serve for a number of flights. A 1992 review concluded that “maximum service temperatures should be limited to 2,800°F; above this temperature the silicon-based coating systems afford little practical durability,” due to active oxidation. In addition, “periodic replacement of parts may be inevitable.”59

New work on carbon-carbon, reported in 1993, gave greater encouragement as it raised the prospect of longer lifetimes. The effort evaluated small samples rather than fabricated panels and again used the arcjet installations of NASA-Johnson and Ames. Once again there was an orders-of-magnitude difference in the observed lifetimes of the carbon-carbon, but now the measured lifetimes extended into the hundreds of minutes. A formulation from the firm of Carbon-Carbon Advanced Technologies gave the best results, suggesting 25 reuses for orbital missions of the X-30 and 50 reuses for the less-demanding missions of hypersonic cruise.60

There also was interest in using carbon-carbon for primary structure. Here the property that counted was not its heat resistance but its light weight. In an important experiment, the firm of LTV fabricated half of an entire wing box of this material. An airplane’s wing box is a major element of aircraft structure that joins the wings and provides a solid base for attachment of the fuselage fore and aft. Indeed, one could compare it with the keel of a ship. It extends to left and right of the aircraft centerline, and LTV’s box constituted the portion to the left of this line. Built at full scale, it represented a hot-structure wing proposed by General Dynamics. It measured five by eight feet with a maximum thickness of 16 inches. Three spars ran along its length; five ribs were mounted transversely, and the complete assembly weighed 802 pounds.

The test plan called for it to be pulled upward at the tip to reproduce the bending loads of a wing in flight. Torsion or twisting was to be applied by pulling more strongly on the front or rear spar. The maximum load corresponded to having the X-30 execute a pullup maneuver at Mach 2.2, with the wing box at room temperature. With the ascent continuing and the vehicle undergoing aerodynamic heating, the next key event brought the maximum difference in the temperatures of the top and bottom of the wing box, with the former being 994°F and the latter at 1,671°F. At that moment the load on the wing box corresponded to 34 percent of the Mach 2.2 maximum. Farther along, the wing box was to reach its peak temperature, 1,925°F, on the lower surface. These three points were to be reproduced through mechanical forces applied at the ends of the spars and through the use of graphite heaters.

But several key parts delaminated during their fabrication, seriously compromising the ability of the wing box to bear its specified load. Plans to impose the peak or Mach 2.2 load were abandoned, with the maximum planned load being reduced to the 34 percent associated with the maximum temperature difference. For the same reason, the application of torsion was deleted from the test program. Amid these reductions in the scope of the structural tests, two exercises went forward during December 1991. The first took place at room temperature and successfully reached the mark of 34 percent, without causing further damage to the wing box.

The second test, a week later, reproduced the condition of peak temperature difference while briefly applying the calculated load of 34 percent. The plan then called for further heating to the peak temperature of 1,925°F. As the wing box approached this value, a problem arose due to the use of metal fasteners in its assembly. Some were made from coated columbium and were rated for 2,300°F, but most were of a nickel alloy that had a permissible temperature of 2,000°F. However, an instrumented nickel-alloy fastener overheated and reached 2,147°F. The wing box showed a maximum temperature of 1,917°F at that moment, and the test was terminated because the strength of the fasteners now was in question. This test nevertheless counted as a success because it had come within 8°F of the specified temperature.61

Both tests thus were marked as having achieved their goals, but their merits were largely in the mind of the beholder. The entire project would have been far more impressive if it had avoided delamination, successfully achieved the Mach 2.2 peak load, incorporated torsion, and subjected the wing box to repeated cycles of bending, torsion, and heating. This effort stood as a bold leap toward a future in which carbon-carbon might take its place as a mainstream material, suitable for a hot primary structure, but it was clear that this future would not arrive during the NASP program.

Then there was beryllium. It had only two-thirds the density of aluminum and possessed good strength, but its temperature range was limited. The conventional metal had a limit of some 850°F, while an alloy from Lockheed called Lockalloy, which contained 38 percent aluminum, was rated only for 600°F. It had never become a mainstream engineering material like titanium, but for NASP it offered the advantage of high thermal conductivity. Work with titanium had greatly increased its temperatures of use, and there was hope of achieving similar results with beryllium.

Initial efforts used rapid-solidification techniques and sought temperature limits as high as 1,500°F. These attempts bore no fruit, and from 1988 onward the temperature goal fell lower and lower. In May 1990 a program review shifted the emphasis away from high-temperature formulations toward the development of beryllium as a material suitable for use at cryogenic temperatures. Standard forms of this metal became unacceptably brittle when only slightly colder than -100°F, but cryo-beryllium proved to be out of reach as well. By 1992 investigators were working with ductile alloys of beryllium, sacrificing all prospect of use at temperatures beyond a few hundred degrees yet winning only modest improvements in low-temperature capability. Terence Ronald, the NASP materials director, wrote in 1995 of rapid-solidification versions with temperature limits as low as 500°F, which was not what the X-30 needed to reach orbit.62

In sum, the NASP materials effort scored a major advance with Beta-21S, but the genuinely radical possibilities failed to emerge. These included carbon-carbon as primary structure, along with alloys of beryllium rated for temperatures well above 1,000°F. The latter, if available, might have led to a primary structure with the strength and temperature resistance of Beta-21S but with less than half the weight. Indeed, such weight savings would have ramified through the entire design, leading to a configuration that would have been smaller and lighter overall.

Overall, work with materials fell well short of its goals. In dealing with structures and materials, the contractors and the National Program Office established 19 program milestones that were to be accomplished by September 1993. A General Accounting Office program review, issued in December 1992, noted that only six of them would indeed be completed.63 This slow progress encouraged conservatism in drawing up the bill of materials, but this conservatism carried a penalty.

When the scramjets faltered in their calculated performance and the X-30 gained weight while falling short of orbit, designers lacked recourse to new and very light materials—structural carbon-carbon, high-temperature beryllium—that might have saved the situation. With this, NASP spiraled to its end. It also left its supporters with renewed appreciation for rockets as launch vehicles, which had been flying to orbit for decades.


The Move Toward Missiles

In August 1945 it took little imagination to envision that the weapon of the future would be an advanced V-2, carrying an atomic bomb as the warhead and able to cross oceans. It took rather more imagination, along with technical knowledge, to see that this concept was so far beyond the state of the art as not to be worth pursuing. Thus, in December Vannevar Bush, wartime head of the Office of Scientific Research and Development, gave his views in congressional testimony:

“There has been a great deal said about a 3,000 miles high-angle rocket. In my opinion, such a thing is impossible for many years. The people who have been writing these things that annoy me, have been talking about a 3,000 mile high-angle rocket shot from one continent to another, carrying an atomic bomb and so directed as to be a precise weapon which would land exactly on a certain target, such as a city. I say, technically, I don’t think anyone in the world knows how to do such a thing, and I feel confident that it will not be done for a very long period of time to come. I think we can leave that out of our thinking.”1

Propulsion and re-entry were major problems, but guidance was worse. For intercontinental range, the Air Force set the permitted miss distance at 5,000 feet and then at 1,500 feet. The latter equaled the error of experienced bombardiers who were using radar bombsights to strike at night from 25,000 feet. The view at the Pentagon was that an ICBM would have to do as well when flying all the way to Moscow. This accuracy corresponded to hitting a golf ball a mile and having it make a hole in one. Moreover, each ICBM was to do this entirely through automatic control.2
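
The golf analogy can be checked with a few lines of arithmetic. The sketch below is illustrative only: the 5,000-mile range is an assumed round number for an intercontinental shot, and the 4.25-inch figure is the regulation diameter of a golf hole.

    # A rough check of the hole-in-one analogy, comparing miss distance
    # as a fraction of range in each case.
    MILE_FT = 5280.0
    icbm_ratio = 1500.0 / (5000.0 * MILE_FT)        # permitted miss / range (assumed)
    golf_ratio = (4.25 / 12.0 / 2.0) / MILE_FT      # hole radius / one mile

    print(f"ICBM: {icbm_ratio:.1e}   hole-in-one: {golf_ratio:.1e}")
    # Both come out to a few parts in a hundred thousand, so the
    # comparison is roughly fair.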

The Air Force therefore emphasized bombers during the early postwar years, paying little attention to missiles. Its main program, such as it was, called for a missile that was neither ballistic nor intercontinental. It was a cruise missile, which was to solve its guidance problem by steering continually. The first thoughts dated to November 1945. At North American Aviation, chief engineer Raymond Rice and chief scientist William Bollay proposed to “essentially add wings to the V-2 and design a missile fundamentally the same as the A-9.”

Like the supersonic wind tunnel at the Naval Ordnance Laboratory, here was another concept that was to carry a German project to completion. The initial design had a specified range of 500 miles,3 which soon increased. Like the A-9, this missile—designated MX-770—was to follow a boost-glide trajectory and then extend its range with a supersonic glide. But by 1948 the U.S. Air Force had won its independence from the Army and had received authority over missile programs with ranges of 1,000 miles and more. Shorter-range missiles remained the concern of the Army. Accordingly, late in February, Air Force officials instructed North American to stretch the range of the MX-770 to a thousand miles.

A boost-glide trajectory was not well suited for a doubled range. At Wright Field, the Air Force development center, Colonel M. S. Roth proposed to increase the range by adding ramjets.4 This drew on work at Wright, where the Power Plant Laboratory had a Nonrotating Engine Branch that was funding development of both ramjets and rocket engines. Its director, Weldon Worth, dealt specifically with ramjets.5 A modification of the MX-770 design added two ramjet engines, mounting them singly at the tips of the vertical fins.6 The missile also received a new name: Navaho. This reflected a penchant at North American for names beginning with “NA.”7

Then, within a few months during 1949 and 1950, the prospect of world war emerged. In 1949 the Soviets exploded their first atomic bomb. At nearly the same time, China’s Mao Zedong defeated the Nationalists of Chiang Kai-shek and proclaimed the People’s Republic of China. The Soviets had already shown aggressiveness by subverting the democratic government of Czechoslovakia and by blockading Berlin. These new developments raised the prospect of a unified communist empire armed with the industry that had defeated the Nazis, wielding atomic weapons, and deploying the limitless manpower of China.

President Truman responded both publicly and with actions that were classified. In January 1950 he announced a stepped-up nuclear program, directing “the Atomic Energy Commission to continue its work on all forms of atomic weapons, including the so-called hydrogen or super bomb.” In April he gave his approval to a secret policy document, NSC-68. It stated that the United States would resist communist expansion anywhere in the world and would devote up to twenty percent of the gross national product to national defense.8 Then in June, in China’s back yard, North Korea invaded the South, and America again was at war.

These events had consequences for the missile program, as the design and mission of Navaho changed dramatically during 1950. Bollay’s specialists, working with Air Force counterparts, showed that they could anticipate increases in its range to as much as 5,500 nautical miles. Conferences among Air Force officials, held at the Pentagon in August, set this intercontinental range as a long-term goal. A letter from Major General Donald Putt, Director of Research and Development within the Air Materiel Command, became the directive instructing North American to pursue this objective. An interim version, Navaho II, with range of 2,500 nautical miles, appeared technically feasible. The full-range Navaho III represented a long-term project that was slated to go forward as a parallel effort.

The thousand-mile Navaho of 1948 had taken approaches based on the V-2 to their limit. Navaho II, the initial focus of effort, took shape as a two-stage missile with a rocket-powered booster. The booster was to use two such engines, each with thrust of 120,000 pounds. A ramjet-powered second stage was to ride it during initial ascent, accelerating to the supersonic speed at which the ramjet engines could produce their rated thrust. This second stage was then to fly onward as a cruise missile, at a planned flight speed of Mach 2.75.9

A rival to Navaho soon emerged. At Convair, structural analyst Karel Bossart held a strong interest in building an ICBM. As a prelude, he had built three rockets in the shape of a subscale V-2 and had demonstrated his ideas for lightweight structure in flight test. The Rand Corporation, an influential Air Force think tank, had been keeping an eye on this work and on the burgeoning technology of missiles. In December 1950 it issued a report stating that long-range ballistic missiles now were in reach. A month later the Air Force responded by giving Bossart, and Convair, a new study contract. In August 1951 he christened this missile Atlas, after Convair’s parent company, the Atlas Corporation.

The initial concept was a behemoth. Carrying an 8,000-pound warhead, it was to weigh 670,000 pounds, stand 160 feet tall by 12 feet in diameter, and use seven of Bollay’s new 120,000-pound engines. It was thoroughly unwieldy and represented a basis for further studies rather than a concept for a practical weapon. Still, it stood as a milestone. For the first time, the Air Force had a concept for an ICBM that it could pursue using engines that were already in development.10

For the ICBM to compete with Navaho, it had to shrink considerably. Within the Air Force’s Air Research and Development Command, Brigadier General John Sessums, a strong advocate of long-range missiles, proposed that this could be done by shrinking the warhead. The size and weight of Atlas were to scale in proportion with the weight of its atomic weapon, and Sessums asserted that new developments in warhead design indeed would give high yield while cutting the weight.

He carried his argument to the Air Staff, which amounted to the Air Force’s board of directors. This brought further studies, which indeed led to a welcome reduction in the size of Atlas. The concept of 1953 called for a length of 110 feet and a loaded weight of 440,000 pounds, with the warhead tipping the scale at only 3,000 pounds. The number of engines went down from seven to five.11

There also was encouraging news in the area of guidance. Radio guidance was out of the question for an operational missile; it might be jammed, or the ground-based guidance center might be destroyed in an attack. Instead, missile guidance was to be entirely self-contained. All concepts called for the use of sensitive accelerometers along with an onboard computer, to determine velocity and location. Navaho was to add star trackers, which were to null out errors by tracking stars even in daylight. In addition, Charles Stark Draper of MIT was pursuing inertial guidance, which was to use no external references of any sort. His 1949 system was not truly inertial, for it included a magnetic compass and a Sun-seeker. But when flight-tested aboard a B-29, over distances as great as 1,737 nautical miles, it showed a mean error of only 5 nautical miles.12

For Atlas, though, the permitted miss distance remained at 1,500 feet, with the range being 5,500 nautical miles. The program plan of October 1953 called for a leisurely advance over the ensuing decade, with research and development being completed only “sometime after 1964,” and operational readiness being achieved in 1965. The program was to emphasize work on the major components: propulsion, guidance, nose cone, lightweight structure. In addition, it was to conduct extensive ground tests before proceeding toward flight.13

This concept continued to call for an atomic bomb as the warhead, but by then the hydrogen bomb was in the picture. The first test version, named Mike, detonated at Eniwetok Atoll in the Pacific on 1 November 1952. Its fireball spread so far and fast as to terrify distant observers, expanding until it was more than three miles across. “The thing was enormous,” one man said. “It looked as if it blotted out the whole horizon, and I was standing 30 miles away.” The weapons designer Theodore Taylor described it as “so huge, so brutal—as if things had gone too far. When the heat reached the observers, it stayed and stayed and stayed, not for seconds but for minutes.” Mike yielded 10.4 megatons, nearly a thousand times greater than the 13 kilotons of the Hiroshima bomb of 1945.

Mike weighed 82 tons.14 It was not a weapon; it was a physics experiment. Still, its success raised the prospect that warheads of the future might be smaller and yet might increase sharply in explosive power. Theodore von Karman, chairman of the Air Force Scientific Advisory Board, sought estimates from the Atomic Energy Commission of the size and weight of future bombs. The AEC refused to release this information. Lieutenant General James Doolittle, Special Assistant to the Air Force Chief of Staff, recommended creating a special panel on nuclear weapons within the SAB. This took form in March 1953, with the mathematician John von Neumann as its chairman. Its specialists included Hans Bethe, who later won the Nobel Prize, and Norris Bradbury, who headed the nation’s nuclear laboratory at Los Alamos, New Mexico.

In June this group reported that a thermonuclear warhead with the 3,000-pound Atlas weight could have a yield of half a megaton. This was substantially higher than that of the pure-fission weapons considered previously. It gave renewed strength to the prospect of a less stringent aim requirement, for Atlas now might miss by far more than 1,500 feet and still destroy its target.

Three months later the Air Force Special Weapons Center issued its own estimate, anticipating that a hydrogen bomb of half-megaton yield could weigh as little as 1,500 pounds. This immediately opened the prospect of a further reduction in the size of Atlas, which might fall in weight from 440,000 pounds to as little as 240,000. Such a missile also would need fewer engines.15

Also during September, Bruno Augenstein of the Rand Corporation launched a study that sought ways to accelerate the development of an ICBM. In Washington, Trevor Gardner was Special Assistant for Research and Development, reporting to the Air Force Secretary. In October he set up his own review committee. He recruited von Neumann to serve anew as its chairman and then added a dazzling array of talent from Caltech, Bell Labs, MIT, and Hughes Aircraft. In Gardner’s words, “The aim was to create a document so hot and of such eminence that no one could pooh-pooh it.”16

He called his group the Teapot Committee. He wanted particularly to see it call for less stringent aim, for he believed that a 1,500-foot miss distance was preposterous. The Teapot Committee drew on findings by Augenstein’s group at Rand, which endorsed a 1,500-pound warhead and a three-mile miss distance. The formal Teapot report, issued in February 1954, declared “the military requirement” on miss distance “should be relaxed from the present 1,500 feet to at least two, and probably three, nautical miles.” Moreover, “the warhead weight might be reduced as far as 1,500 pounds, the precise figure to be determined after the Castle tests and by missile systems optimization.”17

The latter recommendation invoked Operation Castle, a series of H-bomb tests that began a few weeks later. The Mike shot of 1952 had used liquid deuterium, a form of liquid hydrogen. It existed at temperatures close to absolute zero and demanded much care in handling. But the Castle series was to test devices that used lithium deuteride, a dry powder that resembled salt. The Mike approach had been chosen because it simplified the weapons physics, but a dry bomb using lithium promised to be far more practical.

The first such bomb was detonated on 1 March as Castle Bravo. It produced 15 megatons, as its fireball expanded to almost four miles in diameter. Other Castle H-bombs performed similarly, as Castle Romeo went to 11 megatons and Castle Yankee, a variant of Romeo, reached 13.5 megatons. “I was on a ship that was 30 miles away,” the physicist Marshall Rosenbluth recalls about Bravo, “and we had this horrible white stuff raining out on us.” It was radioactive fallout that had condensed from vaporized coral. “It was pretty frightening. There was a huge fireball with these turbulent rolls going in and out. The thing was glowing. It looked to me like a diseased brain.” Clearly, though, bombs of the lithium type could be as powerful as anyone wished—and these test bombs were readily weaponizable.18

The Castle results, strongly complementing the Rand and Teapot reports, cleared the way for action. Within the Pentagon, Gardner took the lead in pushing for Atlas. On 11 March he met with Air Force Secretary Harold Talbott and with the Chief of Staff, General Nathan Twining. He proposed a sped-up program that would nearly double the Fiscal Year (FY) 1955 Atlas budget and would have the first missiles ready to launch as early as 1958. General Thomas White, the Vice Chief of Staff, weighed in with his own endorsement later that week, and Talbott responded by directing Twining to accelerate Atlas immediately.

White carried the ball to the Air Staff, which held responsibility for recommending approval of new programs. He told its members that “ballistic missiles were here to stay, and the Air Staff had better realize this fact and get on with it.” Then on 14 May, having secured concurrence from the Secretary of Defense, White gave Atlas the highest Air Force development priority and directed its acceleration “to the maximum extent that technology would allow.” Gardner declared that White’s order meant “the maximum effort possible with no limitation as to funding.”19

This was a remarkable turnaround for a program that at the moment lacked even a proper design. Many weapon concepts have gone as far as the prototype stage without winning approval, but Atlas gained its priority at a time when the accepted configuration still was the 440,000-pound, five-engine concept of 1953. Air Force officials still had to establish a formal liaison with the AEC to win access to information on projected warhead designs. Within the AEC, lightweight bombs still were well in the future. A specialized device, tested in the recent series as Castle Nectar, delivered 1.69 megatons but weighed 6,520 pounds. This was four times the warhead weight proposed for Atlas.

But in October the AEC agreed that it could develop warheads weighing 1,500 to 1,700 pounds, with a yield of one megaton. This opened the door to a new Atlas design having only three engines. It measured 75 feet long and 10 feet in diameter, with a weight of 240,000 pounds—and its miss distance could be as great as five miles. This took note of the increased yield of the warhead and further eased the problem of guidance. The new configuration won Air Force approval in December.20

Winged Spacecraft and Dyna-Soar

Boost-glide rockets, with wings, entered the realm of advanced conceptual design with postwar studies at Bell Aircraft called Bomi, Bomber Missile. The director of the work, Walter Dornberger, had headed Germany’s wartime rocket development program and had been in charge of the V-2. The new effort involved feasibility studies that sought to learn what might be done with foreseeable technology, but Bomi was a little too advanced for some of Dornberger’s colleagues. Historian Roy Houchin writes that when Dornberger faced “abusive and insulting remarks” from an Air Force audience, he responded by declaring that his Bomi would be receiving more respect if he had had the chance to fly it against the United States during the war. In Houchin’s words, “The silence was deafening.”1

The initial Bomi concept, dating back to 1951, took form as an in-house effort. It called for a two-stage rocket, with both stages being piloted and fitted with delta wings. The lower stage was mostly of aluminum, with titanium leading edges and nose; the upper stage was entirely of titanium and used radiative cooling. With an initial range of 3,500 miles, it was to come over the target above 100,000 feet and at speeds greater than Mach 4. Operational concepts called for bases in England or Spain, targets in the western Soviet Union, and a landing site in northern Africa.2

During the spring of 1952, Bell officials sought funds for further study from Wright Air Development Center (WADC). A year passed, and WADC responded with a firm no. The range was too short. Thermal protection and onboard cooling raised unanswered questions. Values assumed for L/D appeared highly optimistic, and no information was available on stability, control, or aerodynamic flutter at the proposed speeds. Bell responded by offering to consider higher speeds and greater range. Basic feasibility then lay even farther in the future, but the Air Force’s interest in the Atlas ICBM meant that it wanted missiles of longer range, even though shorter-range designs could be available sooner. An intercontinental Bomi at least could be evaluated as a potential alternative to Atlas, and it might find additional roles such as strategic reconnaissance.3

In April 1954, with that ICBM very much in the ascendancy, WADC awarded Bell its desired study contract. Bomi now had an Air Force designation, MX-2276. Bell examined versions of its two-stage concept with 4,000- and 6,000-mile ranges while introducing a new three-stage configuration with the stages mounted belly-to-back. Liftoff thrust was to be 1.2 million pounds, compared with 360,000 for the three-engine Atlas. Bomi was to use a mix of liquid oxygen and liquid fluorine, the latter being highly corrosive and hazardous, whereas Atlas needed only liquid oxygen, which was much safer. The new Bomi was to reach 22,000 feet per second, slightly less than Atlas, but promised a truly global glide range of 12,000 miles. Even so, Atlas clearly was preferable.4

But the need for reconnaissance brought new life to the Bell studies. At WADC, in parallel with initiatives that were sparking interest in unpiloted reconnaissance satellites, officials defined requirements for Special Reconnaissance System 118P. These called initially for a range of 3,500 miles at altitudes above 100,000 feet. Bell won funding in September 1955, as a follow-on to its recently completed MX-2276 activity, and proposed a two-stage vehicle with a Mach 15 glider. In March 1956 the company won a new study contract for what now was called Brass Bell. It took shape as a fairly standard advanced concept of the mid-1950s, with a liquid-fueled expendable first stage boosting a piloted craft that showed sharply swept delta wings. The lower stage was conventional in design, burning Atlas propellants with uprated Atlas engines, but the glider retained the company’s preference for fluorine. Officials at Bell were well aware of its perils, but John Sloop at NACA-Lewis was successfully testing a fluorine rocket engine with 20,000 pounds of thrust, and this gave hope.5

The Brass Bell study contract went into force at a moment when prospects for boost-glide were taking a sharp step upward. In February 1956 General Thomas Power, head of the Air Research and Development Command (ARDC), stated that the Air Force should stop merely considering such radical concepts and begin developing them. High on his list was a weapon called Robo, Rocket Bomber, for which several firms were already conducting in-house work as a prelude to funded study contracts. Robo sought to advance beyond Brass Bell, for it was to circle the globe and hence required near-orbital speed. In June ARDC Headquarters set forth System Requirement 126 that defined the scope of the studies. Convair, Douglas, and North American won the initial awards, with Martin, Bell, and Lockheed later participating as well.

The X-15 by then was well along in design, but it clearly was inadequate for the performance requirements of Brass Bell and Robo. This raised the prospect of a new and even more advanced experimental airplane. At ARDC Headquarters, Major George Colchagoff took the initiative in pursuing studies of such a craft, which took the name HYWARDS: Hypersonic Weapons Research and Development Supporting System. In November 1956 the ARDC issued System Requirement 131, thereby placing this new X-plane on the agenda as well.6

The initial HYWARDS concept called for a flight speed of Mach 12. However, in December Bell Aircraft raised the speed of Brass Bell to Mach 18. This increased the boost-glide range to 6,300 miles, but it opened a large gap between the performance of the two craft, inviting questions as to the applicability of HYWARDS results. In January a group at NACA-Langley, headed by John Becker, weighed in with a report stating that Mach 18, or 18,000 feet per second, was appropriate for HYWARDS. The reason was that “at this speed boost gliders approached their peak heating environment. The rapidly increasing flight altitudes at speeds above Mach 18 caused a reduction in the heating rates.”7

With the prospect now strong that Brass Bell and HYWARDS would have the same flight speed, there was clear reason not to pursue them as separate projects but to consolidate them into a single program. A decision at Air Force Headquarters, made in March 1957, accomplished this and recognized their complementary characters. They still had different goals, with HYWARDS conducting flight research and Brass Bell being the operational reconnaissance system, but HYWARDS now was to stand as a true testbed.8

Robo still was a separate project, but events during 1957 brought it into the fold as well. In June an ad hoc review group, which included members from ARDC and WADC, looked at Robo concepts from contractors. Robert Graham, a NACA attendee, noted that most proposals called for “a boost-glide vehicle which would fly at Mach 20-25 at an altitude above 150,000 feet.” This was well beyond the state of the art, but the panel concluded that with several years of research, an experimental craft could enter flight test in 1965, an operational hypersonic glider in 1968, and Robo in 1974.9

On 10 October—less than a week after the Soviets launched their first Sputnik—ARDC endorsed this three-part plan by issuing a lengthy set of reports, “Abbreviated Systems Development Plan, System 464L—Hypersonic Strategic Weapon System.” It looked ahead to a research vehicle capable of 18,000 feet per second and 350,000 feet, to be followed by Brass Bell with the same speed and 170,000 feet, and finally Robo, rated at 25,000 feet per second and 300,000 feet but capable of orbital flight.

The ARDC’s Lieutenant Colonel Carleton Strathy, a division chief and a strong advocate of program consolidation, took the proposed plan to Air Force Headquarters. He won endorsement from Brigadier General Don Zimmerman, Deputy Director of Development Planning, and from Brigadier General Homer Boushey, Deputy Director of Research and Development. NACA’s John Crowley, Associate Director for Research, gave strong approval to the proposed test vehicle, viewing it as a logical step beyond the X-15. On 25 November, having secured support from his superiors, Boushey issued Development Directive 94, allocating $3 million to proceed with more detailed studies following a selection of contractors.10

Top and side views of Dyna-Soar. (U. S. Air Force)

The new concept represented another step in the sequence that included Eugen Sanger’s Silbervogel, his suborbital skipping vehicle, and, among live rocket craft, the X-15. It was widely viewed as a tribute to Sanger, who was still living. It took the name Dyna-Soar, which drew on “dynamic soaring,” Sanger’s name for his skipping technique, and which also stood for “dynamic ascent and soaring flight,” or boost-glide. Boeing and Martin emerged as the finalists in June 1958, with their roles being defined in November 1959. Boeing was to take responsibility for the winged spacecraft. Martin, described as the associate contractor, was to provide the Titan missile that would serve as the launch vehicle.11

The program now demanded definition of flight modes, configuration, struc­ture, and materials. The name of Sanger was on everyone’s lips, but his skipping flight path had already proven to be uncompetitive. He and his colleague Bredt had treated its dynamics, but they had not discussed the heating. That task fell to NACA’s Allen and Eggers, along with their colleague Stanford Neice.

In 1954, following their classic analysis of ballistic re-entry, Eggers and Allen turned their attention to comparison of this mode with boost-glide and skipping entries. They assumed the use of active cooling and found that boost-glide held the advantage:

The glide vehicle developing lift-drag ratios in the neighborhood of 4 is far superior to the ballistic vehicle in ability to convert velocity into range. It has the disadvantage of having far more heat convected to it; however, it has the compensating advantage that this heat can in the main be radiated back to the atmosphere. Consequently, the mass of coolant material may be kept relatively low.

A skip vehicle offered greater range than the alternatives, in line with Sanger’s advocacy of this flight mode. But it encountered more severe heating, along with high aerodynamic loads that necessitated a structurally strong and therefore heavy vehicle. Extra weight meant extra coolant, with the authors noting that “ultimately the coolant is being added to cool coolant. This situation must obviously be avoided.” They concluded that “the skip vehicle is thought to be the least promising of the three types of hypervelocity vehicle considered here.”12
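The flavor of their range comparison can be recovered from the standard equilibrium-glide formula of modern textbooks (offered here as a gloss on the analysis, not the authors’ own notation). For entry speed V, circular orbital speed V_c, and Earth radius R_E, the glide range is approximately

\[
\text{Range} \approx \frac{R_E}{2}\,\frac{L}{D}\,\ln\frac{1}{1 - V^2/V_c^2}.
\]

With L/D = 4 and V = 22,000 feet per second (V_c being about 26,000 feet per second), this gives roughly 10,000 statute miles, which shows concretely how a glider of modest L/D could “convert velocity into range” on a global scale.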

Following this comparative assessment of flight modes, Eggers worked with his colleague Clarence Syvertson to address the issue of optimum configuration. This issue had been addressed for the X-15; it was a mid-wing airplane that generally resembled the high-performance fighters of its era. In treating Dyna-Soar, following the Robo review of mid-1957, NACA’s Robert Graham wrote that “high-wing, mid-wing and low-wing configurations were proposed. All had a highly swept wing, and a small angle cone as the fuselage or body.” This meant that while there was agreement on designing the fuselage, there was no standard way to design the wing.13

Eggers and Syvertson proceeded by treating the design problem entirely as an exercise in aerodynamics. They concluded that the highest values of L/D were attainable by using a high-wing concept with the fuselage mounted below as a slender half-cone and the wing forming a flat top. Large fins at the wing tips, canted sharply downward, directed the airflow under the wings downward and increased the lift. Working with a hypersonic wind tunnel at NACA-Ames, they measured a maximum L/D of 6.65 at Mach 5, in good agreement with a calculated value of 6.85.14

This configuration had attractive features, not the least of which was that the base of its half-cone could readily accommodate a rocket engine. Still, it was not long before other specialists began to argue that it was upside down. Instead of having a flat top with the fuselage below, it was to be flipped to place the wing below the fuselage, giving it a flat bottom. This assertion came to the forefront during Becker’s HYWARDS study, which identified its preferred velocity as 18,000 feet per second. His colleague Peter Korycinski worked with Becker to develop heating analyses of flat-top and flat-bottom candidates, with Roger Anderson and others within Langley’s Structures Division providing estimates for the weight of thermal protection.

A simple pair of curves, plotted on graph paper, showed that under specified assumptions the flat-bottom weight at that velocity was 21,400 pounds and was increasing at a modest rate at higher speeds. The flat-top weight was 27,600 pounds and was rising steeply. Becker wrote that the flat-bottom craft placed its fuselage “in the relatively cool shielded region on the top or lee side of the wing—i.e., the wing was used in effect as a partial heat shield for the fuselage…. This ‘flat-bottomed’ design had the least possible critical heating area…and this translated into least circulating coolant, least area of radiative heat shields, and least total thermal protection in flight.”15

These approaches—flat-top at Ames, flat-bottom at Langley—brought a debate between these centers that continued through 1957. At Ames, the continuing strong interest in high L/D reflected an ongoing emphasis on excellent supersonic aerodynamics for military aircraft, which needed high L/D as a matter of course. To ease the heating problem, Ames held for a time to a proposed speed of 11,000 feet per second, slower than the Langley concept but lighter in weight and more attainable in technology while still offering a considerable leap beyond the X-15. Officials at NACA diplomatically described the Ames and Langley HYWARDS concepts respectively as “high L/D” and “low heating,” but while the debate continued, there remained no standard approach to the design of wings for a hypersonic glider.16

There was a general expectation that such a craft would require active cooling. Bell Aircraft, which had been studying Bomi, Brass Bell, and lately Robo, had the most experience in the conceptual design of such arrangements. Its Brass Bell of 1957, designed to enter its glide at 18,000 feet per second and 170,000 feet in altitude, featured an actively cooled insulated hot structure. The primary or load-bearing structure was of aluminum and relied on cooling in a closed-loop arrangement that used water-glycol as the coolant. Wing leading edges had their own closed-loop cooling system that relied on a mix of sodium and potassium metals. Liquid hydrogen, pumped initially to 1,000 pounds per square inch, flowed first through a heat exchanger and cooled the heated water-glycol, then proceeded to a second heat exchanger to cool the hot sodium-potassium. In an alternate design concept, this gas cooled the wing leading edges directly, with no intermediate liquid-metal coolant loop. The warmed hydrogen ran a turbine within an onboard auxiliary power unit and then was exhausted overboard. The leading edges reached a maximum temperature of 1,400°F, for which Inconel X was a suitable material.17
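The appeal of hydrogen in this arrangement lay in its unmatched heat capacity. The sketch below illustrates the governing energy balance in Python; every number in it is an illustrative assumption chosen for the example, not a Bell design value.

```python
# Illustrative energy balance for a closed-loop cooling scheme of the
# Brass Bell type: hydrogen absorbs the heat collected by the water-glycol
# and liquid-metal loops. All values are assumptions, not design data.

CP_H2 = 14300.0   # J/(kg*K), approximate specific heat of hydrogen gas
T_IN = 50.0       # K, assumed hydrogen temperature entering the exchangers
T_OUT = 900.0     # K, assumed temperature leaving toward the turbine

def hydrogen_flow(heat_load_watts: float) -> float:
    """Hydrogen mass flow (kg/s) needed to absorb the given heat load."""
    return heat_load_watts / (CP_H2 * (T_OUT - T_IN))

q = 2.0e6  # W, assumed combined load from both coolant loops
print(f"required hydrogen flow: {hydrogen_flow(q):.2f} kg/s")
```

On these assumptions, even a megawatt-class heat load demands only a fraction of a kilogram of hydrogen per second, which is why the warmed gas could be spared to drive the auxiliary power unit before going overboard.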

During August of that year Becker and Korycinski launched a new series of studies that further examined the heating and thermal protection of their flat-bottom glider. They found that for a glider of global range, flying with angle of attack of 45 degrees, an entry trajectory near the upper limit of permissible altitudes gave peak uncooled skin temperatures of 2,000°F. This appeared achievable with improved metallic or ceramic hot structures. Accordingly, no coolant at all was required!18

This conclusion, published in 1959, influenced the configuration of subsequent boost-glide vehicles—Dyna-Soar, the space shuttle—much as the Eggers-Allen paper of 1953 had defined the blunt-body shape for ballistic entry. Preliminary and unpublished results were in hand more than a year prior to publication, and when the prospect emerged of eliminating active cooling, the concepts that could do this were swept into prominence. They were of the flat-bottom type, with Dyna-Soar being the first to proceed into mainstream development.

This uncooled configuration proved robust enough to accommodate substantial increases in flight speed and performance. In 1959 Herbert York, the Defense Director of Research and Engineering, stated that Dyna-Soar was to fly at 15,000 miles per hour. This was well above the planned speed of Brass Bell but still below orbital velocity. During subsequent years the booster changed from Martin’s Titan I to the more capable Titan II and then to the powerful Titan III-C, which could easily boost it to orbit. A new plan, approved in December 1961, dropped suborbital missions and called for “the early attainment of orbital flight.” Subsequent planning anticipated that Dyna-Soar would reach orbit with the Titan III upper stage, execute several circuits of the Earth, and then come down from orbit by using this stage as a retrorocket.19

After that, though, advancing technical capabilities ran up against increasingly stringent operational requirements. The Dyna-Soar concept had grown out of HYWARDS, being intended initially to serve as a testbed for the reconnaissance boost-glider Brass Bell and for the manned rocket-powered bomber Robo. But the rationale for both projects became increasingly questionable during the early 1960s. The hypersonic Brass Bell gave way to a new concept, the Manned Orbiting Laboratory (MOL), which was to fly in orbit as a small space station while astronauts took reconnaissance photos. Robo fell out of the picture completely, for the success of the Minuteman ICBM, which used solid propellant, established such missiles as the nation’s prime strategic force. Some people pursued new concepts that continued to hold out hope for Dyna-Soar applications, with satellite interception standing in the forefront. The Air Force addressed this with studies of its Saint project, but Dyna-Soar proved unsuitable for such a mission.20

Full-scale model of Dyna-Soar, on display at an Air Force exhibition in 1962. The scalloped pattern on the base was intended to suggest Sanger’s skipping entry. (Boeing Company archives)

Dyna-Soar was a potentially superb technology demonstrator, but Defense Secretary Robert McNamara took the view that it had to serve a military role in its own right or lead to a follow-on program with clear military application. The cost of Dyna-Soar was approaching a billion dollars, and in October 1963 he declared that he could not justify spending such a sum if it was a dead-end program with no ultimate purpose. He canceled it on 10 December, noting that it was not to serve as a cargo rocket, could not carry substantial payloads, and could not stay in orbit for long durations. He approved MOL as a new program, thereby giving the Air Force continuing reason to hope that it would place astronauts in orbit, but stated that Dyna-Soar would serve only “a very narrow objective.”21

Artist’s rendering showing Dyna-Soar in orbit. (Boeing Company archives)

At that moment the program called for production of 10 flight vehicles, and Boeing had completed some 42 percent of the necessary tasks. McNamara’s decision therefore was controversial, particularly because the program still had high-level supporters. These included Eugene Zuckert, Air Force Secretary; Alexander Flax, Assistant Secretary for Research and Development; and Brockway McMillan, Zuckert’s Under Secretary and Flax’s predecessor as Assistant Secretary. Still, McNamara gave more attention to Harold Brown, the Defense Director of Research and Engineering, who made the specific proposal that McNamara accepted: to cancel Dyna-Soar and proceed instead with MOL.22

Dyna-Soar never flew. The program had expended $410 million when canceled, but the schedule still called for another $373 million, and the vehicle was still some two and a half years away from its first flight. Even so, its technology remained available for further development, contributing to the widening prospects for reentry that marked the era.23

Hypersonics After NASP

On 7 December 1995 the entry probe of the Galileo spacecraft plunged into the atmosphere of Jupiter. It did not plummet directly downward but sliced into that planet’s hydrogen-rich envelope at a gentle angle as it followed a trajectory that took it close to Jupiter’s edge. The probe entered at Mach 50, with its speed of 29.5 miles per second being four times that of a return to Earth from the Moon. Peak heating came to 11,800 BTU per square foot-second, corresponding to a radiative equilibrium temperature of 12,000°F. The heat load totaled 141,800 BTU per square foot, enough to boil 150 pounds of water for each square foot of heatshield surface.1 The deceleration peaked at 228 g, which was tantamount to slamming from a speed of 5,000 miles per hour to a standstill in a single second. Yet the probe survived. It deployed a parachute and transmitted data from every one of its onboard instruments for nearly an hour, until it was overwhelmed within the depths of the atmosphere.2
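These figures hang together arithmetically. Dividing the heat load by the mass of boiled water gives

\[
\frac{141{,}800\ \text{BTU/ft}^2}{150\ \text{lb/ft}^2} \approx 945\ \text{BTU/lb},
\]

close to the roughly 970 BTU per pound needed to vaporize water. Likewise, 228 g is about 7,300 feet per second squared, while 5,000 miles per hour is about 7,300 feet per second, so that speed would indeed be shed in roughly one second.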

It used an ablative heatshield, and as an exercise in re-entry technology, the design was straightforward. The nose cap was of chopped and molded carbon phenolic composite; the rest of the main heatshield was tape-wrapped carbon phenolic. The maximum thickness was 5.75 inches. The probe also mounted an aft heatshield, which was of phenolic nylon. The value of these simple materials, under the extreme conditions of Jupiter atmosphere entry, showed beyond doubt that the problem of re-entry was well in hand.3

Other activities have done less well. The X-33 and X-34 projects, which sought to build next-generation shuttles using NASP materials, failed utterly. Test scramjets have lately taken to flight but only infrequently. Still, work in CFD continues to flourish. Today’s best supercomputers offer a million times more power than the ancestral Illiac 4, the top computer of the mid-1970s. This ushers in the important new topic of Large Eddy Simulation (LES). It may enable us to learn, via computation, just how good scramjets may become.

The DC-X, which flew, and the X-33, which did not. (NASA)

Approaching the Nose Cone

An important attribute of a nose cone was its shape, and engineers were reducing drag to a minimum by crafting high-speed airplanes that displayed the ultimate in needle-nose streamlining. The X-3 research aircraft, designed for Mach 2, had a long and slender nose that resembled a church steeple. Atlas went even further, with an early concept having a front that resembled a flagpole. This faired into a long and slender cone that could accommodate the warhead.21

This intuitive approach fell by the wayside in 1953, as the NACA-Ames aerodynamicists H. Julian Allen and Alfred Eggers carried through an elegant analysis of the motion and heating of a re-entering nose cone. This work showed that they were masters of the simplifying assumption. To make such assumptions successfully represents a high art, for the resulting solutions must capture the most essential aspects of the pertinent physics while preserving mathematical tractability. Their paper stands to this day as a landmark. Quite probably, it is the single most important paper ever written in the field of hypersonics.

They calculated total heat input to a re-entry vehicle, seeking shapes that would minimize this. That part of the analysis enabled them to critique the assertion that a slender and sharply-pointed shape was best. For a lightweight nose cone, which would slow significantly in the atmosphere due to drag, they found a surprising result: the best shape, minimizing the total heat input, was blunt rather than sharp.

The next issue involved the maximum rate of heat transfer when averaged over an entire vehicle. To reduce this peak heating rate to a minimum, a nose cone of realistic weight might be either very sharp or very blunt. Missiles of intermediate slenderness gave considerably higher peak heating rates and “were definitely to be avoided.”

This result applied to the entire vehicle, but heat-transfer rates were highest at the nose-cone tip. It was particularly important to minimize the heating at the tip, and again their analysis showed that a blunt nose cone would be best. As Allen and Eggers put it, “not only should pointed bodies be avoided, but the rounded nose should have as large a radius as possible.”22

How could this be? The blunt body set up a very strong shock wave, which produced intense heating of the airflow. However, most of this heat was carried away in the flow. The boundary layer served to insulate the vehicle, and relatively little of this heat reached its surface. By contrast, a sharp and slender nose cone produced a shock that stood very close to this surface. At the tip, the boundary layer was too thin to offer protection. In addition, skin friction produced still more heating, for the boundary layer now received energy from shock-heated air flowing close to the vehicle surface.23
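Later correlations compressed this result into a single scaling law. In the form commonly quoted today (a modern summary, not Allen and Eggers’s own notation), the stagnation-point heating rate varies as

\[
\dot q_s \propto \sqrt{\frac{\rho}{R_N}}\;V^3,
\]

where ρ is the air density, V the flight speed, and R_N the nose radius. Doubling the nose radius thus cuts peak tip heating by nearly 30 percent, which is the quantitative content of “as large a radius as possible.”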

This paper was published initially as a classified document, but it took time to achieve its full effect. The Air Force did not adopt its principle for nose-cone design until 1956.24 Still, this analysis outlined the shape of things to come. Blunt heat shields became standard on the Mercury, Gemini, and Apollo capsules. The space shuttle used its entire undersurface as a heat shield that was particularly blunt, raising its nose during re-entry to present this undersurface to the flow.

Yet while analysis could indicate the general shape for a nose cone, only experiment could demonstrate the validity of a design. At a stroke, Becker’s Mach 7 facility, which had been far in the forefront only recently, suddenly became inadequate. An ICBM nose cone was to re-enter the atmosphere at speeds above Mach 20. Its kinetic energy would vaporize five times its weight of iron. Temperatures behind the bow shock would reach 9000 K, hotter than the surface of the Sun. Research scientist Peter Rose wrote that this velocity would be “large enough to dissociate all the oxygen molecules into atoms, dissociate about half of the nitrogen, and thermally ionize a considerable fraction of the air.”25

Though hot, the 9000 K air actually would be cool, considering its situation, because its energy would go into dissociating molecules of gas. However, the ions and dissociated atoms were only too likely to recombine at the surface of the nose cone, thereby delivering additional heat. Such chemical effects also might trip the boundary layer from laminar to turbulent flow, with the rate of heat transfer increasing substantially as a result. In the words of Rose:

“The presence of free-atoms, electrons, and molecules in excited states can be expected to complicate heat transfer through the boundary layer by additional modes of energy transport, such as atom diffusion, carrying the energy of dissociation. Radiation by transition from excited energy states may contribute materially to radiative heat transfer. There is also a possibility of heat transfer by electrons and ions. The existence of large amounts of energy in any of these forms will undoubtedly influence the familiar flow phenomena.”26

Within the Air Force, the Aircraft Panel of the Scientific Advisory Board (SAB) issued a report in October 1954 that looked ahead to the coming decade:

“In the aerodynamics field, it seems to us pretty clear that over the next 10 years the most important and vital subject for research and development is the field of hypersonic flows; and in particular, hypersonic flows with [temperatures at a nose-cone tip] which may run up to the order of thousands of degrees. This is one of the fields in which an ingenious and clever application of the existing laws of mechanics is probably not adequate. It is one in which much of the necessary physical knowledge still remains unknown at present and must be developed before we arrive at a true understanding and competence. The reason for this is that the temperatures which are associated with these velocities are higher than temperatures which have been produced on the globe, except in connection with the nuclear developments of the past 10 or 15 years and that there are problems of dissociation, relaxation times, etc., about which the basic physics is still unknown.”27

The Atlas program needed a new experimental technique, one that could overcome the fact that conventional wind tunnels produced low temperatures due to their use of expanding gases, and hence the pertinent physics and chemistry associated with the heat of re-entry were not replicated. Its officials found what they wanted at a cocktail party.

This social gathering took place at Cornell University around Thanksgiving of 1954. The guests included university trustees along with a number of deans and senior professors. One trustee, Victor Emanuel, was chairman of Avco Corporation, which already was closely involved in work on the ICBM. He had been in Washington and had met with Air Force Secretary Harold Talbott, who told him of his concern about problems of re-entry. Emanuel raised this topic at the party while talking with the dean of engineering, who said, “I believe we have someone right here who can help you.”28

That man was Arthur Kantrowitz, a former researcher at NACA-Langley who had taken a faculty position at Cornell following the war. While at Langley during the late 1930s, he had used a $5,000 budget to try to invent controlled thermonuclear fusion. He did not get very far. Indeed, he failed to gain results that were sufficient even to enable him to write a paper, leaving subsequent pioneers in controlled fusion to start again from scratch. Still, as he recalls, “I continued my interest in high temperatures with the hope that someday I could find something that I could use to do fusion.”29

In 1947 this led him to the shock tube. This instrument produced very strong shocks in a laboratory, overcoming the limits of wind tunnels. It used a driving gas at high pressure in a separate chamber. This gas burst through a thin diaphragm to generate the shock, which traveled down a long tube that was filled with a test gas. High-speed instruments could observe this shock. They also could study a small model immersed within the hot flow at high Mach that streamed immediately behind the shock.30

When Kantrowitz came to the shock tube, it already was half a century old. The French chemist Paul Vieille built the first such devices prior to 1900, using them to demonstrate that a shock wave travels faster than the speed of sound. He proposed that his apparatus could prove useful in studying mine explosions, which took place in shafts that resembled his long tubes.31

The next important shock-tube researcher, Britain’s William Payman, worked prior to World War II. He used diaphragm-bursting pressures as high as 1,100 pounds per square inch and introduced high-speed photography to observe the shocked flows. He and his colleagues used the shock tube for experimental verification of equations in gasdynamics that govern the motion of shock waves.32

At Princeton University during that war, the physicist Walter Bleakney went further. He used shock tubes as precision instruments, writing, “It has been found that successive ‘shots’ in the tube taken with the same initial conditions reproduce one another to a surprising degree. The velocity of the incident shock can be reproduced to 0.1 percent.” He praised the versatility of the device, noting its usefulness “for studying a great variety of problems in fluid dynamics.” In addition to observations of shocks themselves, the instrument could address “problems of detonation and allied phenomena. The tube may be used as a wind tunnel with a Mach number variable over an enormous range.” This was the role it took during the ICBM program.33

At Cornell, Kantrowitz initiated a reach for high temperatures. This demanded particularly high pressure in the upstream chamber. Payman had simply used compressed air from a thick-walled tank, but Kantrowitz filled his upstream chamber with a highly combustible mix of hydrogen and oxygen. Seeking the highest temperatures, he avoided choosing air as a test gas, for its diatomic molecules absorbed energy when they dissociated or broke apart, which limited the temperature rise. He turned instead to argon, a monatomic gas that could not dissociate, and reached 18,000 K.
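The perfect-gas shock relations show why argon served him so well. The sketch below applies the standard Rankine-Hugoniot temperature ratio, assuming a calorically perfect gas and a room-temperature initial state; it neglects the ionization that in practice absorbs energy and caps the real temperature, so the numbers are estimates only.

```python
# Ideal-gas static-temperature ratio across a normal shock
# (Rankine-Hugoniot relation). For argon, gamma = 5/3; ionization
# is neglected, so the values overstate the real temperatures.

def shock_temperature_ratio(mach: float, gamma: float) -> float:
    """Return T2/T1 across a normal shock at the given Mach number."""
    m2 = mach * mach
    return ((2.0 * gamma * m2 - (gamma - 1.0))
            * ((gamma - 1.0) * m2 + 2.0)) / ((gamma + 1.0) ** 2 * m2)

T1 = 300.0  # K, assumed initial test-gas temperature
for mach in (5, 10, 14, 25):
    t2 = T1 * shock_temperature_ratio(mach, gamma=5.0 / 3.0)
    print(f"Mach {mach:>2}: T2 = {t2:,.0f} K")
```

On these assumptions a shock of about Mach 14 already yields Kantrowitz’s 18,000 K, while a diatomic gas such as nitrogen would come out far cooler at the same Mach number, both because its lower ratio of specific heats reduces the ideal-gas value and because dissociation soaks up energy.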

He was a professor at Cornell, with graduate students. One of them, Edwin Resler, wrote a dissertation in 1951, “High Temperature Gases Produced by Strong Shock Waves.” In Kantrowitz’s hands, the versatility of this instrument appeared anew. With argon as the test gas, it served for studies of thermal ionization, a physical effect separate from dissociation in which hot atoms lost electrons and became electrically charged. Using nitrogen or air, the shock tube examined dissociation as well, which increased with the higher temperatures of stronger shocks. Higher Mach values also lay within reach. As early as 1952, Kantrowitz wrote that “it is possible to obtain shock Mach numbers in the neighborhood of 25 with reasonable pressures and shock tube sizes.”34

Other investigators also worked with these devices. Raymond Seeger, chief of aerodynamics at the Naval Ordnance Laboratory, built one. R. N. Hollyer conducted experiments at the University of Michigan. At NACA-Langley, the first shock tube entered service in 1951. The Air Force also was interested. The 1954 report of the SAB pointed to “shock tubes and other devices for producing extremely strong shocks” as an “experimental technique” that could give new insights into fundamental problems of hypersonics.35

Thus, when Emanuel met Kantrowitz at that cocktail party, this academic physicist indeed was in a position to help the Atlas effort. He had already gained hands-on experience by conducting shock-tube experiments at temperatures and shock velocities that were pertinent to re-entry of an ICBM. Emanuel then staked him to a new shock-tube center, Avco Research Laboratory, which opened for business early in 1955.

Kantrowitz wanted the highest shock velocities, which he obtained by using lightweight helium as the driver gas. He heated the helium strongly by adding a mixture of gaseous hydrogen and oxygen. Too little helium led to violent burning with unpredictable detonations, but use of 70 percent helium by weight gave a controlled burn that was free of detonations. The sudden heating of this driver gas also ruptured the diaphragm.

Standard optical instruments, commonly used in wind-tunnel work, were available for use with shock tubes as well. These included the shadowgraph, schlieren apparatus, and Mach-Zehnder interferometer. To measure the speed of the shock, it proved useful to install ionization-sensitive pickups that responded to changes in electrical resistance as shock waves passed. Several such pickups, spaced along the length of the tube, gave good results at speeds up to Mach 16.

Within the tube, the shock raced ahead of the turbulent mix of driver gases. Between the shock and the driver gases lay a “homogeneous gas sample” (HGS), a cylindrical slug of test gas moving nearly with the speed of the shock. The measured speed of the shock, together with standard laws of gasdynamics, permitted a complete calculation of the pressure, temperature, and internal energy of the HGS. Even when the HGS experienced energy absorption due to dissociation of its constituent molecules, it was possible to account for this through a separate calculation.36

The HGS swept over a small model of a nose cone placed within the stream. The time for passage was of the order of 100 microseconds, with the shock tube thus operating as a “wind tunnel” having this duration for a test. This nevertheless was long enough for photography. In addition, specialized instruments permitted study of heat transfer. These included thin-gauge resistance thermometers for temperature measurements and thicker-gauge calorimeters to determine heat transfer rates.

Metals increase their electrical resistance in response to a temperature rise. Both the thermometers and the calorimeters relied on this effect. To follow the sudden temperature increase behind the shock, the thermometer needed a metal film that was thin indeed, and Avco researchers achieved a thickness of 0.3 microns. They did this by using a commercial product, Liquid Bright Platinum No. 05, from Hanovia Chemical and Manufacturing Company. This was a mix of organic compounds of platinum and gold, dissolved in oils. Used as a paint, it was applied with a brush and dried in an oven.

The calorimeters used bulk platinum foil that was a hundred times thicker, at 0.03 millimeters. This thickness diminished their temperature rise and allowed the observed temperature increase to be interpreted as a rate of heat transfer. Both the thermometers and calorimeters were mounted to the surface of nose-cone models, which typically had the shape of a hemisphere that faired smoothly into a cylinder at the rear. The models were made of Pyrex, a commercial glass that did not readily crack. In addition, it was a good insulator.37
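The working relation for such a calorimeter is simple (stated here as a modern gloss on the Avco practice, not a quotation from it). A foil of density ρ, specific heat c, and thickness δ, thick enough that its temperature rise stays modest during the test, absorbs a heat flux

\[
\dot q = \rho\,c\,\delta\,\frac{dT}{dt},
\]

so the slope of the measured temperature trace gives the heat-transfer rate directly. For 0.03-millimeter platinum the product ρcδ is on the order of 100 joules per square meter per kelvin, enough to register a clean, measurable rise even within a 100-microsecond test.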

The investigator Shao-Chi Lin also used a shock tube to study thermal ionization, which made the HGS electrically conductive. To measure this conductivity, Lin used a nonconducting shock tube made of glass and produced a magnetic field within its interior. The flow of the conducting HGS displaced the magnetic lines of force, which he observed. He calibrated the system by shooting a slug of metal having known conductivity through the field at a known speed. Measured HGS conductivities showed good agreement with values calculated from theory, over a range from Mach 10 to Mach 17.5. At this highest flow speed, the conductivity of air was an order of magnitude greater than that of seawater.38

With shock tubes generating new data, there was a clear need to complement the data with new solutions in aerodynamics and heat transfer. The original Allen-Eggers paper had given a fine set of estimates, but they left out such realistic effects as dissociation, recombination, ionization, and changes in the ratio of specific heats. Again, it was necessary to make simplifying assumptions. Still, the first computers were at hand, which meant that solutions did not have to be in closed form. They might be equations that were solvable electronically.

Recombination of ions and of dissociated diatomic molecules—oxygen and nitrogen—was particularly important at high Mach, for this chemical process could deliver additional heat within the boundary layer. Two simplified cases stood out. In “equilibrium flow,” the recombination took place instantly, responding immediately to the changing temperature and pressure within the boundary layer. The extent of ionization and dissociation was then a simple point function of the temperature and pressure at any location and could be calculated directly.

The other limiting case was “frozen flow.” One hesitates to describe a 9000 K airstream as “frozen,” but here it meant that the chemical state of the boundary layer retained its condition within the free stream behind the bow shock. Essentially this meant that recombination proceeded so slowly that the changing conditions within the boundary layer had no effect on the degrees of dissociation and ionization. These again could be calculated directly, although this time as a consequence of conditions behind the shock rather than in the boundary layer. Frozen flow occurred when the air was rarefied.
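In modern notation the distinction between the two limits is drawn with a Damköhler number, the ratio of the time a fluid element spends in the boundary layer to the characteristic time of the recombination chemistry:

\[
\mathrm{Da} = \frac{\tau_{\text{flow}}}{\tau_{\text{chem}}}, \qquad \mathrm{Da} \gg 1:\ \text{equilibrium flow}, \qquad \mathrm{Da} \ll 1:\ \text{frozen flow}.
\]

Rarefied air lengthens the chemical time, since recombination requires three-body collisions that become scarce at low density, and so drives the boundary layer toward the frozen limit.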

These approximations avoided the need to deal with the chemistry of finite reaction rates, wherein recombination would not instantly respond to the rapidly varying flow conditions across the thickness of a boundary layer but would lag behind the changes. In 1956 the aerodynamicist Lester Lees proposed a heat-transfer theory that specifically covered those two limiting cases.39 Then in 1957, Kantrowitz’s colleagues at Avco Research Laboratory went considerably further.

The Avco lab had access to the talent of nearby MIT. James Fay, a professor of mechanical engineering, joined with Avco’s Frederick Riddell to treat anew the problem of heat transfer in dissociated air. Finite reaction-rate chemistry was at the heart of their agenda, and again they needed a simplifying assumption: that the airflow velocity was zero. However, this condition was nearly true at the forward tip of a nose cone, where the heating was most severe.

Starting with a set of partial differential equations, they showed that these equations reduced to a set of nonlinear ordinary differential equations. Using an IBM 650 computer, they found that a numerical solution of these nonlinear equations was reasonably straightforward. In dealing with finite-rate chemistry, they introduced a “reaction rate parameter” that attempted to capture the resulting effects. They showed that a re-entering nose cone could fall through 100,000 feet while transitioning from the frozen to the equilibrium regime. Within this transition region, the boundary layer could be expected to be partly frozen, near the free stream, and partly in equilibrium, near the wall.
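The correlation distilled from those numerical solutions is usually quoted in a form like the following, given here as the standard textbook statement of the equilibrium case (minor variations appear across sources):

\[
\dot q_w = 0.76\,\mathrm{Pr}^{-0.6}\,(\rho_e \mu_e)^{0.4}\,(\rho_w \mu_w)^{0.1}\sqrt{\left(\frac{du_e}{dx}\right)_s}\,(h_s - h_w)\left[1 + \left(\mathrm{Le}^{0.52} - 1\right)\frac{h_D}{h_s}\right],
\]

where the subscripts e and w denote the boundary-layer edge and the wall, h_s is the stagnation enthalpy, h_D the dissociation enthalpy, Le the Lewis number, and (du_e/dx)_s the stagnation-point velocity gradient. The bracketed factor is where the chemistry enters: it measures how much of the flow’s energy is locked up in dissociation and released by recombination at the wall.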

The Fay-Riddell theory appeared in the February 1958 Journal of the Aeronautical Sciences. That same issue presented experimental results, also from Avco, that tested the merits of this treatment. The researchers obtained shock-tube data with shock Mach numbers as high as 17.5. At this Mach, the corresponding speed of 17,500 feet per second approached the velocity of a satellite in orbit. Pressures within the shock-tube test gas simulated altitudes of 20,000, 70,000, and 120,000 feet, with equilibrium flow occurring in the models’ boundary layers even at the highest equivalent height above the ground.

Most data were taken with calorimeters, although data points from thin-gauge thermometers gave good agreement. The measurements showed scatter but fit neatly on curves calculated from the Fay-Riddell theory. The Lees theory underpredicted heat-transfer rates at the nose-cone tip, calling for rates up to 30 percent lower than those observed. Here, within a single issue of that journal, two papers from Avco gave good reason to believe that theoretical and experimental tools were at hand to learn the conditions that a re-entering ICBM nose cone would face during its moments of crisis.40

Still, this was not the same as actually building a nose cone that could survive this crisis. This problem called for a separate set of insights. These came from the U. S. Army and were also developed independently by an individual: George Sutton of General Electric.