Facing the Heat Barrier: A History of Hypersonics

The Technology of Dyna-Soar

Its thermal environment during re-entry was less severe than that of an ICBM nose cone, allowing designers to avoid not only active structural cooling but ablative thermal protection as well. This meant that it could be reusable; it did not have to change out its thermal protection after every flight. Even so, its environment imposed temperatures and heat loads that pervaded the choice of engineering solutions throughout the vehicle.

Dyna-Soar used radiatively-cooled hot structure, with the primary or load-bearing structure being of Rene 41. Trusses formed the primary structure of the wings and fuselage, with many of their beams meeting at joints that were pinned rather than welded. Thermal gradients, imposing differential expansion on separate beams, caused these members to rotate at the pins. This accommodated the gradients without imposing thermal stress.

Rene 41 was selected as a commercially available superalloy that had the best available combination of oxidation resistance and high-temperature strength. Its yield strength, 130,000 psi at room temperature, fell off only slightly at 1,200°F and retained useful values at 1,800°F. It could be processed as sheet, strip, wire, tubes, and forgings. Used as the primary structure of Dyna-Soar, it supported a design specification that indeed called for reusability. The craft was to withstand at least four re-entries under the most severe conditions permitted.

As an alloy, Rene 41 had a standard composition of 19 percent chromium, 11 percent cobalt, 10 percent molybdenum, 3 percent titanium, and 1.5 percent aluminum, along with 0.09 percent carbon and 0.006 percent boron, with the balance being nickel. It gained strength through age hardening, with the titanium and aluminum precipitating within the nickel as an intermetallic compound. Age-hardening weldments initially showed susceptibility to cracking, which occurred in parts that had been strained through welding or cold working. A new heat-treatment process permitted full aging without cracking, with the fabricated assemblies showing no significant tendency to develop cracks.24

As a structural material, the relatively mature state of Rene 41 reflected the fact that it had already seen use in jet engines. It nevertheless lacked the temperature resistance necessary for use in the metallic shingles or panels that were to form the outer skin of the vehicle, reradiating the heat while withstanding temperatures as high as 3,000°F. Here there was far less existing art, and investigators at Boeing had to find their way through a somewhat roundabout path.

Four refractory or temperature-resistant metals initially stood out: tantalum, tungsten, molybdenum, and columbium. Tantalum was too heavy, and tungsten was not available commercially as sheet. Columbium also appeared to be ruled out, for it required an antioxidation coating, but vendors were unable to coat it without rendering it brittle. Molybdenum alloys also faced embrittlement due to recrystallization produced by a prolonged soak at high temperature in the course of coating formation. A promising alloy, Mo-0.5Ti, overcame this difficulty through addition of 0.07 percent zirconium. The alloy that resulted, Mo-0.5Ti-0.07Zr, was called TZM. For a time it appeared as a highly promising candidate for all the other panels.25

Wing design also promoted its use, for the craft mounted a delta wing with a leading-edge sweep of 73 degrees. Though built for hypersonic re-entry from orbit, it resembled the supersonic delta wings of contemporary aircraft such as the B-58 bomber. However, this wing was designed using the Eggers-Allen blunt-body principle, with the leading edge being curved or blunted to reduce the rate of heating. The wing sweep then reduced equilibrium temperatures along the leading edge to levels compatible with the use of TZM.26

Boeing's metallurgists nevertheless held an ongoing interest in columbium because in uncoated form it showed superior ease of fabrication and lack of brittleness. A new Boeing-developed coating method eliminated embrittlement, putting columbium back in the running. A survey of its alloys showed that they all lacked the hot strength of TZM. Columbium nevertheless retained its attractiveness because it promised less weight. Based on coatability, oxidation resistance, and thermal emissivity, the preferred alloy was Cb-10Ti-5Zr, called D-36. It replaced TZM in many areas of the vehicle but proved to lack strength against creep at the highest temperatures. Moreover, coated TZM gave more of a margin against oxidation than coated D-36, again at the most extreme temperatures. D-36 indeed was chosen to cover most of the vehicle, including the flat underside of the wing. But TZM retained its advantage for such hot areas as the wing leading edges.27

The vehicle had some 140 running feet of leading edges and 140 square feet of associated area. This included leading edges of the vertical fins and elevons as well as of the wings. In general, D-36 served where temperatures during re-entry did not exceed 2,700°F, while TZM was used for temperatures between 2,700 and 3,000°F. In accordance with the Stefan-Boltzmann law, all surfaces radiated heat at a rate proportional to the fourth power of the temperature. Hence for equal emissivities, a surface at 3,000°F radiated 43 percent more heat than one at 2,700°F.28
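The 43 percent figure can be checked directly from the fourth-power law once the temperatures are converted to an absolute scale; the conversion from Fahrenheit to Rankine is the only assumption in this sketch:

```python
# Stefan-Boltzmann: radiated flux scales as the fourth power of absolute
# temperature. Converting the panel temperatures to Rankine and taking the
# ratio reproduces the comparison quoted in the text.

def to_rankine(deg_f: float) -> float:
    """Convert degrees Fahrenheit to the absolute Rankine scale."""
    return deg_f + 459.67

ratio = (to_rankine(3000.0) / to_rankine(2700.0)) ** 4
print(f"A 3,000 F surface radiates {(ratio - 1) * 100:.1f}% more heat "
      "than a 2,700 F surface of equal emissivity.")
```

The exact result is about 43.7 percent, consistent with the roughly 43 percent cited.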

Panels of both TZM and D-36 demanded antioxidation coatings. These coatings were formed directly on the surfaces as metallic silicides (silicon compounds), using a two-step process that employed iodine as a chemical intermediary. Boeing introduced a fluidized-bed method for application of the coatings that cut the time for preparation while enhancing uniformity and reliability. In addition, a thin layer of silicon carbide, applied to the surface, gave the vehicle its distinctive black color. It enhanced the emissivity, lowering temperatures by as much as 200°F.

Development testing featured use of an oxyacetylene torch, operated with excess oxygen, which heated small samples of coated refractory sheet to temperatures as high as 3,000°F, measured by optical pyrometer. Test durations ran as long as four hours, with a published review noting that failures of specimens “were easily detected by visual observation as soon as they occurred.” This work showed that although TZM had better oxidation resistance than D-36, both coated alloys could resist oxidation for more than two hours at 3,000°F. This exceeded design requirements. Similar tests applied stress to hot samples by hanging weights from them, thereby demonstrating their ability to withstand stress of 3,100 psi, again at 3,000°F.29

Other tests showed that complete panels could withstand aerodynamic flutter. This issue was important; a report of the Aerospace Vehicles Panel of the Air Force Scientific Advisory Board (SAB)—a panel on panels, as it were—came out in April 1962 and singled out the problem of flutter, citing it as one that called for critical attention. The test program used two NASA wind tunnels: the 4 by 4-foot Unitary facility at Langley that covered a range of Mach 1.6 to 2.8 and the 11 by 11-foot Unitary installation at Ames for Mach 1.2 to 1.4. Heaters warmed test samples to 840°F as investigators started with steel panels and progressed to versions fabricated from Rene nickel alloy.

“Flutter testing in wind tunnels is inherently dangerous,” a Boeing review declared. “To carry the test to the actual flutter point is to risk destruction of the test specimen. Under such circumstances, the safety of the wind tunnel itself is jeopardized.” Panels under test were as large as 24 by 45 inches; actual flutter could easily have brought failure through fatigue, with parts of a specimen being blown through the tunnel at supersonic speed. The work therefore proceeded by starting at modest dynamic pressures, 400 and 500 pounds per square foot, and advancing over 18 months to levels that exceeded the design requirement of close to 1,400 pounds per square foot. The Boeing report concluded that the success of this test program, which ran through mid-1962, “indicates that an adequate panel flutter capability has been achieved.”30

Between the outer panels and the inner primary structure, a corrugated skin of Rene 41 served as a substructure. On the upper wing surface and upper fuselage, where temperatures were no higher than 2,000°F, the thermal-protection panels were also of Rene 41 rather than of a refractory. Measuring 12 by 45 inches, these panels were spot-welded directly to the corrugations of the substructure. For the wing undersurface, and for other areas that were hotter than 2,000°F, designers specified an insulated structure. Standoff clips, each with four legs, were riveted to the underlying corrugations and supported the refractory panels, which also were 12 by 45 inches in size.

The space between the panels and the substructure was to be filled with insulation. A survey of candidate materials showed that most of them exhibited a strong tendency to shrink at high temperatures. This was undesirable; it increased the rate of heat transfer and could create uninsulated gaps at seams and corners. Q-felt, a silica fiber from Johns-Manville, also showed shrinkage. However, nearly all of it occurred at 2,000°F and below; above 2,000°F, further shrinkage was negligible. This meant that Q-felt could be “pre-shrunk” through exposure to temperatures above 2,000°F for several hours. The insulation that resulted had density no greater than 6.2 pounds per cubic foot, one-tenth that of water. In addition, it withstood temperatures as high as 3,000°F.31

TZM outer panels, insulated with Q-felt, proved suitable for wing leading edges. These were designed to withstand equilibrium temperatures of 2,825°F and short-duration overtemperatures of 2,900°F. However, the nose cap faced temperatures of 3,680°F, along with a peak heat flux of 143 BTU per square foot-second. This cap had a radius of curvature of 7.5 inches, making it far less blunt than the Project Mercury heat shield that had a radius of 120 inches.32 Its heating was correspondingly more severe. Reliable thermal protection of the nose was essential, and so the program conducted two independent development efforts that used separate approaches. The firm of Chance Vought pursued the main line of activity, while Boeing also devised its own nose-cap design.
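The claim that a smaller radius means more severe heating can be made quantitative with the usual stagnation-point correlation, in which heating rate varies as the inverse square root of nose radius. That scaling is an assumption of this sketch, not a figure from the text:

```python
import math

# Stagnation-point heating at fixed flight conditions scales roughly as
# 1/sqrt(nose radius). Comparing the Dyna-Soar nose cap with the much
# blunter Mercury heat shield at the same entry conditions:

dyna_soar_radius_in = 7.5    # nose-cap radius of curvature, inches
mercury_radius_in = 120.0    # Mercury heat-shield radius, inches

heating_ratio = math.sqrt(mercury_radius_in / dyna_soar_radius_in)
print(f"Heating at the Dyna-Soar nose is roughly {heating_ratio:.0f} times "
      "that of a Mercury-style shield at the same conditions.")
```

Under this scaling alone, the 7.5-inch cap sees about four times the stagnation heating of the 120-inch shield, which is why the nose cap demanded its own development effort.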

The work at Vought began with a survey of materials that paralleled Boeing's review of refractory metals for the thermal-protection panels. Molybdenum and columbium had no strength to speak of at the pertinent temperatures, but tungsten retained useful strength even at 4,000°F. However, this metal could not be welded, while no known coating could protect it against oxidation. Attention then turned to nonmetallic materials, including ceramics.

Ceramics of interest existed as oxides such as silica and magnesia, which meant that they could not undergo further oxidation. Magnesia proved to be unsuitable because it had low thermal emittance, while silica lacked strength. However, carbon in the form of graphite showed clear promise. It held considerable industrial experience; it was light in weight, while its strength actually increased with temperature. It oxidized readily but could be protected up to 3,000°F by treating it with silicon, in a vacuum and at high temperatures, to form a thin protective layer of silicon carbide. Near the stagnation point, the temperatures during re-entry would exceed that level. This brought the concept of a nose cap with siliconized graphite as the primary material, with an insulating layer of a temperature-resistant ceramic covering its forward area. With graphite having good properties as a heat sink, it would rise in temperature uniformly and relatively slowly, while remaining below the 3,000°F limit through the full time of re-entry.

Suitable grades of graphite proved to be available commercially from the firm of National Carbon. Candidate insulators included hafnia, thoria, magnesia, ceria, yttria, beryllia, and zirconia. Thoria was the most refractory but was very dense and showed poor resistance to thermal shock. Hafnia brought problems of availability and of reproducibility of properties. Zirconia stood out. Zirconium, its parent metal, had found use in nuclear reactors; the ceramic was available from the Zirconium Corporation of America. It had a melting point above 4,500°F, was chemically stable and compatible with siliconized graphite, offered high emittance with low thermal conductivity, provided adequate resistance to thermal shock and thermal stress, and lent itself to fabrication.33

For developmental testing, Vought used two in-house facilities that simulated the flight environment, particularly during re-entry. A ramjet, fueled with JP-4 and running with air from a wind tunnel, produced an exhaust with velocity up to 4,500 feet per second and temperature up to 3,500°F. It also generated acoustic levels above 170 decibels, reproducing the roar of a Titan III booster and showing that samples under test could withstand the resulting stresses without cracking. A separate installation, built specifically for the Dyna-Soar program, used an array of propane burners to test full-size nose caps.

The final Vought design used a monolithic shell of siliconized graphite that was covered over its full surface by zirconia tiles held in place using thick zirconia pins. This arrangement relieved thermal stresses by permitting mechanical movement of the tiles. A heat shield stood behind the graphite, fabricated as a thick disk-shaped container made of coated TZM sheet metal and filled with Q-felt. The nose cap attached to the vehicle with a forged ring and clamp that also were of coated TZM. The cap as a whole relied on radiative cooling. It was designed to be reusable; like the primary structure, it was to withstand four re-entries under the most severe conditions permitted.34

The backup Boeing effort drew on that company’s own test equipment. Study of samples used the Plasma Jet Subsonic Splash Facility, which created a jet with temperature as high as 8,000°F that splashed over the face of a test specimen. Full-scale nose caps went into the Rocket Test Chamber, which burned gasoline to produce a nozzle exit velocity of 5,800 feet per second and an acoustic level of 154 decibels. Both installations were capable of long-duration testing, reproducing conditions during re-entries that could last for 30 minutes.35

The Boeing concept used a monolithic zirconia nose cap that was reinforced against cracking with two screens of platinum-rhodium wire. The surface of the cap was grooved to relieve thermal stress. Like its counterpart from Vought, this design also installed a heat shield that used Q-felt insulation. However, there was no heat sink behind the zirconia cap. This cap alone provided thermal protection at the nose through radiative cooling. Lacking both pinned tiles and an inner shell, its design was simpler than that of Vought.36

Its fabrication bore comparison to the age-old work of potters, who shape wet clay on a rotating wheel and fire the resulting form in a kiln. Instead of using a potter’s wheel, Boeing technicians worked with a steel die with an interior in the shape of a bowl. A paper honeycomb, reinforced with Elmer’s Glue and laid in place, defined the pattern of stress-relieving grooves within the nose cap surface. The working material was not moist clay, but a mix of zirconia powder with binders, internal lubricants, and wetting agents.

With the honeycomb in position against the inner face of the die, a specialist loaded the die by hand, filling the honeycomb with the damp mix and forming layers of mix that alternated with the wire screens. The finished layup, still in its die, went into a hydraulic press. A pressure of 27,000 psi compacted the form, reducing its porosity for greater strength and less susceptibility to cracks. The cap was dried at 200°F, removed from its die, dried further, and then fired at 3,300°F for 10 hours. The paper honeycomb burned out in the course of the firing. Following visual and x-ray inspection, the finished zirconia cap was ready for machining to shape in the attachment area, where the TZM ring-and-clamp arrangement was to anchor it to the fuselage.37

The nose cap, outer panels, and primary structure all were built to limit their temperatures through passive methods: radiation and insulation. Active cooling also played a role, reducing temperatures within the pilot's compartment and two equipment bays. These used a “water wall,” which mounted absorbent material between sheet-metal panels to hold a mix of water and a gel. The gel retarded flow of this fluid, while the absorbent wicking kept it distributed uniformly to prevent hot spots.

During re-entry, heat reached the water walls as it penetrated into the vehicle. Some of the moisture evaporated as steam, transferring heat to a set of redundant water-glycol cooling loops resembling those proposed for Brass Bell of 1957. In Dyna-Soar, liquid hydrogen from an onboard supply flowed through heat exchangers and cooled these loops. Brass Bell had called for its warmed hydrogen to flow through a turbine, operating the onboard Auxiliary Power Unit. Dyna-Soar used an arrangement that differed only slightly: a catalytic bed to combine the stream of warm hydrogen with oxygen that again came from an onboard supply. This produced gas that drove the turbine of the Dyna-Soar APU, which provided both hydraulic and electric power.

A cooled hydraulic system also was necessary to move the control surfaces as on a conventional aircraft. The hydraulic fluid operating temperature was limited to 400°F by using the fluid itself as an initial heat-transfer medium. It flowed through an intermediate water-glycol loop that removed its heat by cooling with hydrogen. Major hydraulic system components, including pumps, were mounted within an actively cooled compartment. Control-surface actuators, along with their associated valves and plumbing, were insulated using inch-thick blankets of Q-felt. Through this combination of passive and active cooling methods, the Dyna-Soar program avoided a need to attempt to develop truly high-temperature hydraulic arrangements, remaining instead within the state of the art.38

Specific vehicle parts and components brought their own thermal problems. Bearings, both ball and antifriction, needed strength to carry mechanical loads at high temperatures. For ball bearings, the cobalt-base superalloy Stellite 19 was known to be acceptable up to 1,200°F. Investigation showed that it could perform under high load for short durations at 1,350°F. However, Dyna-Soar needed ball bearings qualified for 1,600°F and obtained them as spheres of Rene 41 plated with gold. The vehicle also needed antifriction bearings as hinges for control surfaces, and here there was far less existing art. The best available bearings used stainless steel and were suitable only to 600°F, whereas Dyna-Soar again faced a requirement of 1,600°F. A survey of 35 candidate materials led to selection of titanium carbide with nickel as a binder.39

Antenna windows demanded transparency to radio waves at similarly high temperatures. A separate program of materials evaluation led to selection of alumina, with the best grade being available from the Coors Porcelain Company. Its emittance had the low value of 0.4 at 2,500°F, which meant that waveguides beneath these windows faced thermal damage even though they were made of columbium alloy. A mix of oxides of cobalt, aluminum, and nickel gave a suitable coating when fired at 3,000°F, raising the emittance to approximately 0.8.40

The pilot needed his own windows. The three main ones, facing forward, were the largest yet planned for a manned spacecraft. They had double panes of fused silica, with infrared-reflecting coatings on all surfaces except the outermost. This inhibited the inward flow of heat by radiation, reducing the load on the active cooling of the pilot’s compartment. The window frames expanded when hot; to hold the panes in position, the frames were fitted with springs of Rene 41. The windows also needed thermal protection, and so they were covered with a shield of D-36.

The cockpit was supposed to be jettisoned following re-entry, around Mach 5, but this raised a question: what if it remained attached? The cockpit had two other windows, one on each side, which faced a less severe environment and were to be left unshielded throughout a flight. The test pilot Neil Armstrong flew approaches and landings with a modified Douglas F5D fighter and showed that it was possible to land Dyna-Soar safely with side vision only.41

The vehicle was to touch down at 220 knots. It lacked wheeled landing gear, for inflated rubber tires would have demanded their own cooled compartments. For the same reason, it was not possible to use a conventional oil-filled strut as a shock absorber. The craft therefore deployed tricycle landing skids. The two main skids, from Goodyear, were of Waspaloy nickel steel and mounted wire bristles of Rene 41. These gave a high coefficient of friction, enabling the vehicle to skid to a stop in a planned length of 5,000 feet while accommodating runway irregularities. In place of the usual oleo strut, a long rod of Inconel stretched at the moment of touchdown and took up the energy of impact, thereby serving as a shock absorber. The nose skid, from Bendix, was forged from Rene 41 and had an undercoat of tungsten carbide to resist wear. Fitted with its own energy-absorbing Inconel rod, the front skid had a reduced coefficient of friction, which helped to keep the craft pointing straight ahead during slideout.42

Through such means, the Dyna-Soar program took long strides toward establishing hot structures as a technology suitable for operational use during re-entry from orbit. The X-15 had introduced heat sink fabricated from Inconel X, a nickel steel. Dyna-Soar went considerably further, developing radiation-cooled insulated structures fabricated from Rene 41 superalloy and from refractory materials. A chart from Boeing made the point that in 1958, prior to Dyna-Soar, the state of the art for advanced aircraft structures involved titanium and stainless steel, with temperature limits of 600°F. The X-15 with its Inconel X could withstand temperatures above 1,200°F. Against this background, Dyna-Soar brought substantial advances in the temperature limits of aircraft structures:43

TEMPERATURE LIMITS BEFORE AND AFTER DYNA-SOAR (in °F)

Element               1958     1963
Nose cap             3,200    4,300
Surface panels       1,200    2,750
Primary structure    1,200    1,800
Leading edges        1,200    3,000
Control surfaces     1,200    1,800
Bearings             1,200    1,800

Meanwhile, while Dyna-Soar was going forward within the Air Force, NASA had its own approaches to putting man in space.

Heat Shields for Mercury and Corona

In November 1957, a month after the first Sputnik reached orbit, the Soviets again startled the world by placing a much larger satellite into space, which held the dog Laika as a passenger. This clearly presaged the flight of cosmonauts, and the question then was how the United States would respond. No plans were ready at the moment, but whatever America did, it would have to be done quickly.

HYWARDS, the nascent Dyna-Soar, was proceeding smartly. In addition, at North American Aviation the company’s chief engineer, Harrison Storms, was in Washington, DC, with a concept designated X-15B. Fitted with thermal protection for return from orbit, it was to fly into space atop a cluster of three liquid-fueled boosters for an advanced Navaho, each with thrust of 415,000 pounds.44 However, neither HYWARDS nor the X-15B could be ready soon. Into this breach stepped Maxime Faget of NACA-Langley, who had already shown a talent for conceptual design during the 1954 feasibility study that led to the original X-15.

In 1958 he was a branch chief within Langley's Pilotless Aircraft Research Division. Working on speculation, amid full awareness that the Army or Air Force might win the man-in-space assignment, he initiated a series of paper calculations and wind-tunnel tests of what he described as a “simple nonlifting satellite vehicle which follows a ballistic path in reentering the atmosphere.” He noted that an “attractive feature of such a vehicle is that the research and production experiences of the ballistic-missile programs are applicable to its design and construction,” and “since it follows a ballistic path, there is a minimum requirement for autopilot, guidance, or control equipment.”45

In seeking a suitable shape, Faget started with the heat shield. Invoking the Allen-Eggers principle, he at first considered a flat face. However, it proved to trap heat by interfering with the rapid airflow that could carry this heat away. This meant that there was an optimum bluntness, as measured by radius of curvature.

Calculating thermal loads and heat-transfer rates using theories of Lees and of Fay and Riddell, and supplementing these estimates with experimental data from his colleague William Stoney, he considered a series of shapes. The least blunt was a cone with a rounded tip that faced the airflow. It had the highest heat input and the highest peak heating rate. A sphere gave better results in both areas, while the best estimates came with a gently rounded surface that faced the flow. It had only two-thirds the total heat input of the rounded cone—and less than one-third the peak heating rate. It also was the bluntest shape of those considered, and it was selected.46

With a candidate heat-shield shape in hand, he turned his attention to the complete manned capsule. An initial concept had the shape of a squat dome that was recessed slightly from the edge of the shield, like a circular Bundt cake that does not quite extend to the rim of its plate. The lip of this heat shield was supposed to produce separated flow over the afterbody to reduce its heating. When tested in a wind tunnel, however, it proved to be unstable at subsonic speeds.

Faget’s group eliminated the open lip and exchanged the domed afterbody for a tall cone with a rounded tip that was to re-enter with its base end forward. It proved to be stable in this attitude, but tests in the 11-inch Langley hypersonic wind tunnel showed that it transferred too much heat to the afterbody. Moreover, its forward tip did not give enough room for its parachutes. This brought a return to the domed afterbody, which now was somewhat longer and had a cylinder on top to stow the chutes. Further work evolved the domed shape into a funnel, a conic frustum that retained the cylinder. This configuration provided a basis for design of the Mercury and later of the Gemini capsules, both of which were built by the firm of McDonnell Aircraft.47

Choice of thermal protection quickly emerged as a critical issue. Fortunately, the thermal environment of a re-entering satellite proved to be markedly less demanding than that of an ICBM. The two vehicles were similar in speed and kinetic energy, but an ICBM was to slam back into the atmosphere at a steep angle, decelerating rapidly due to drag and encountering heating that was brief but very severe. Re-entry from orbit was far easier, taking place over a number of minutes. Indeed, experimental work showed that little if any ablation was to be expected under the relatively mild conditions of satellite entry.

But satellite entry involved high total heat input, while its prolonged duration imposed a new requirement for good materials properties as insulators. They also had to stay cool through radiation. It thus became possible to critique the usefulness of ICBM nose-cone ablators for the prospective new role of satellite reentry.48

Heat of ablation, in BTU per pound, had been a standard figure of merit. For satellite entry, however, with little energy being carried away by ablation, it could be irrelevant. Phenolic glass, a fine ICBM material with a measured heat of 9,600 BTU per pound, was unusable for a satellite because it had an unacceptably high thermal conductivity. This meant that the prolonged thermal soak of re-entry could have time enough to fry a spacecraft. Teflon, by contrast, had a measured heat only one-third as large. It nevertheless made a superb candidate because of its excellent properties as an insulator.49

Such results showed that it was not necessary to reopen the problem of thermal protection for satellite entry. With appropriate caveats, the experience and research techniques of the ICBM problem could carry over to this new realm. This background made it possible for the Central Intelligence Agency to build operational orbital re-entry vehicles at a time when nose cones for Atlas were still in flight test.

This happened beginning in 1958, when Richard Bissell, a senior manager within the CIA, launched a highly classified reconnaissance program called Corona. General Electric, which was building nose cones for Atlas, won a contract to build the film-return capsule. The company selected ablation as the thermal-protection method, with phenolic nylon as the ablative material.50

The second Corona launch, in April 1959, flew successfully and became the world’s first craft to return safely from orbit. It was supposed to come down near Hawaii, and a ground controller transmitted a command to have the capsule begin re-entry at a particular time. However, he forgot to press a certain button. The director of the recovery effort, Lieutenant Colonel Charles “Moose” Mathison, then learned that it would actually come down near the Norwegian island of Spitzbergen.

Mathison telephoned a friend in Norway’s air force, Major General Tufte Johnsen, and told him to watch for a small spacecraft that was likely to be descending by parachute. Johnsen then phoned a mining company executive on the island and had him send out ski patrols. A three-man patrol soon returned with news: They had seen the orange parachute as the capsule drifted downward near the village of Barentsburg. That was not good because its residents were expatriate Russians. General Nathan Twining, Chairman of the Joint Chiefs, summarized the craft’s fate in a memo: “From concentric circular tracks found in the snow at the suspected impact point and leading to one of the Soviet mining concessions on the island, we strongly suspect that the Soviets are in possession of the capsule.”51

Meanwhile, NASA’s Maxime Faget was making decisions concerning thermal protection for his own program, which now had the name Project Mercury. He was well aware of ablation but preferred heat sink. It was heavier, but he doubted that industrial contractors could fabricate an ablative heat shield that had adequate reliability.52

The suitability of ablation could not be tested by flying a subscale heat shield atop a high-speed rocket. Nothing less would do than to conduct a full-scale test using an Atlas ICBM as a booster. This missile was still in development, but in December 1958 the Air Force Ballistic Missile Division agreed to provide one Atlas C within six months, along with eight Atlas Ds over the next several years. This made it possible to test an ablative heat shield for Mercury as early as September 1959.53

The contractor for this shield was General Electric. The ablative material, phenolic-fiberglass, lacked the excellent insulating properties of Teflon or phenolic-nylon. Still, it had flown successfully as a ballistic-missile nose cone. The project engineer Aleck Bond adds that “there was more knowledge and experience with fiberglass-phenolic than with other materials. A great deal of ground-test information was available…. There was considerable background and experience in the fabrication, curing, and machining of assemblies made of Fiberglass.” These could be laid up and cured in an autoclave.54

The flight test was called Big Joe, and it showed conservatism. The shield was heavy, with a density of 108 pounds per cubic foot, but designers added a large safety factor by specifying that it was to be twice as thick as calculations showed to be necessary. The flight was to be suborbital, with a range of 1,800 miles, but was to simulate a re-entry from orbit that was relatively steep and therefore demanding, producing higher temperatures on the face of the shield and on the afterbody.55

Liftoff came after 3 a.m., a time chosen to coincide with dawn in the landing area so as to give ample daylight for search and recovery. “The night sky lit up and the beach trembled with the roar of the Rocketdyne engines,” notes NASA’s history of Project Mercury. Two of those engines were to fall away during ascent, but they remained as part of the Atlas, increasing its weight and reducing its peak velocity by some 3,000 feet per second. What was more, the capsule failed to separate. It had an onboard attitude-control system that was to use spurts of compressed nitrogen gas to turn it around, to enter the atmosphere blunt end first. But this system used up all its nitrogen trying fruitlessly to swing the big Atlas that remained attached. Separation finally occurred at an altitude of 345,000 feet, while people waited to learn what would happen.56

The capsule performed better than planned. Even without effective attitude con­trol, its shape and distribution of weights gave it enough inherent stability to turn itself around entirely through atmospheric drag. Its reduced speed at re-entry meant that its heat load was only 42 percent of the planned value of 7,100 BTU per square foot. But a particularly steep flight-path angle gave a peak heating rate of 77 percent of the intended value, thereby subjecting the heat shield to a usefully severe test. The capsule came down safely in the Atlantic, some 500 miles short of the planned impact area, but the destroyer USS Strong was not far away and picked it up a few hours later.

Subsequent examination showed that the heating had been uniform over the face of the heat shield. This shield had been built as an ablating laminate with a thickness of 1.075 inches, supported by a structural laminate half as thick. However, charred regions extended only to a depth of 0.20 inch, with further discoloration reaching to 0.35 inch. Weight loss due to ablation came to only six pounds, in line with experimental findings that had shown that little ablation indeed would occur.57

The heat shield not only showed fine thermal performance, it also sustained no damage on striking the water. This validated the manufacturing techniques used in its construction. The overall results from this flight test were sufficiently satisfactory to justify the choice of ablation for Mercury. This made it possible to drop heat sink from consideration and to go over completely to ablation, not only for Mercury but for Gemini, which followed.58

The X-33 and X-34

During the early 1990s, as NASP passed its peak of funding and began to falter, two new initiatives showed that there still was much continuing promise in rockets. The startup firm of Orbital Sciences Corporation had set out to become the first company to develop a launch vehicle as a commercial venture, and this rocket, called Pegasus, gained success on its first attempt. This occurred in April 1990, as NASA’s B-52 took off from Edwards AFB and dropped it into flight. Its first stage mounted wings and tail surfaces. Its third stage carried a small satellite and placed it in orbit.4

In a separate effort, the Strategic Defense Initiative Office funded the DC-X project of McDonnell Douglas. This single-stage vehicle weighed some 40,000 pounds when fueled and flew with four RL10 rocket engines from Pratt & Whitney. It took off and landed vertically, like Flash Gordon’s rocket ship, using rocket thrust during the descent and avoiding the need for a parachute. It went forward as an exercise in rapid prototyping, with the contract being awarded in August 1991 and the DC-X being rolled out in April 1993. It demonstrated both reusability and low cost, flying with a ground crew of only 15 people along with three more in its control center. It flew no higher than a few thousand feet, but it became the first rocket in history to abort a flight and execute a normal landing.5

The Clinton Administration came to Washington in January 1993. Dan Goldin, the NASA Administrator, soon chartered a major new study of launch options called Access to Space. Arnold Aldrich, Associate Administrator for Space Systems Development, served as its director. With NASP virtually on its deathbed, the work comprised three specific investigations. Each addressed a particular path toward a new generation of launch vehicles, which could include a new shuttle.

Managers at NASA Headquarters and at NASA-Johnson considered how upgrades to current expendables, and to the existing shuttle, might maintain them in service through the year 2030. At NASA-Marshall, a second group looked at prospects for new expendables that could replace existing rockets, including the shuttle, beginning in 2005. A collaboration between Headquarters and Marshall also considered a third approach: development of an entirely new reusable launch vehicle, to replace the shuttle and current expendables beginning in 2008.6

Engineers in industry were ready with ideas of their own. At Lockheed’s famous Skunk Works, manager David Urie already had a concept for a fully-reusable single-stage vehicle that was to fly to orbit. It used a lifting-body configuration that drew on an in-house study of a vehicle to rescue crews from the space station. Urie’s design was to be built as a hot structure with metal external panels for thermal protection and was to use high-performing rocket engines from Rocketdyne that would burn liquid hydrogen and liquid oxygen. This concept led to the X-33.7

Orbital Sciences was also stirring the pot. During the spring of 1993, this com­pany conducted an internal study that examined prospects for a Pegasus follow-on. Pegasus used solid propellant in all three of its stages, but the new effort specifically considered the use of liquid propellants for higher performance. Its concept took shape as an air-launched two-stage vehicle, with the first stage being winged and fully reusable while the second stage, carried internally, was to fly to orbit without being recovered. Later that year executives of Orbital Sciences approached officials of NASA-Marshall to ask whether they might be interested, for this concept might complement that of Lockheed by lifting payloads of much lesser weight. This initia­tive led in time to the X-34.8

NASA’s Access to Space report was in print in January 1994. Managers of the three option investigations had sought to make as persuasive a case as possible for their respective alternatives, and the view prevailed that technology soon would be in hand to adopt Lockheed’s approach. In the words of the report summary,

The study concluded that the most beneficial option is to develop and deploy a fully reusable single-stage-to-orbit (SSTO) pure-rocket launch vehicle fleet incorporating advanced technologies, and to phase out current systems beginning in the 2008 time period….

The study determined that while the goal of achieving SSTO fully reusable rocket launch vehicles had existed for a long time, recent advances in technology made such a vehicle feasible and practical in the near term provided that necessary technologies were matured and demonstrated prior to start of vehicle development.9

Within weeks NASA followed with a new effort, the Advanced Launch Technol­ogy Program. It sought to lay technical groundwork for a next-generation shuttle, as it solicited initiatives from industry that were to pursue advances in structures, thermal protection, and propulsion.10

The Air Force had its own needs for access to space and had generally been more conservative than NASA. During the late 1970s, while that agency had been building the shuttle, the Air Force had pursued the Titan 34D as a new version of its Titan 3. More recently that service had gone forward with its upgraded Titan 4.11 In May 1994 Lieutenant General Thomas Moorman, Vice Commander of the Air Force’s Space Command, released his own study that was known as the Space Launch Modernization Plan. It considered a range of options that paralleled NASA’s, including development of “a new reusable launch system.” However, whereas NASA had embraced SSTO as its preferred direction, the Air Force study did not even mention this as a serious prospect. Nor did it recommend a selected choice of launch system. In a cover letter to the Deputy Secretary of Defense, John Deutch, Moorman wrote that “this study does not recommend a specific program approach” but was intended to “provide the Department of Defense a range of choices.” Still, the report made a number of recommendations, one of which proved to carry particular weight: “Assign DOD the lead role in expendable launch vehicles and NASA the lead in reusables.”12

The NASA and Air Force studies both went to the White House, where in August the Office of Science and Technology Policy issued a new National Space Transportation Policy. It divided the responsibilities for new launch systems in the manner that the Air Force had recommended and gave NASA the opportunity to pursue its own wishes as well:

The Department of Defense (DoD) will be the lead agency for improvement and evolution of the current U.S. expendable launch vehicle (ELV) fleet, including appropriate technology development.

The National Aeronautics and Space Administration (NASA) will provide for the improvement of the Space Shuttle system, focusing on reliability, safety, and cost-effectiveness.

The National Aeronautics and Space Administration will be the lead agency for technology development and demonstration for next generation reusable space transportation systems, such as the single-stage-to-orbit concept.13

The Pentagon’s assignment led to the Evolved Expendable Launch Vehicle Pro­gram, which brought development of the Delta 4 family and of new versions of the Atlas.14

The new policy broke with past procurement practices, whereby NASA had paid the full cost of the necessary research and development and had purchased flight vehicles under contract. Instead, the White House took the view that the private sector could cover these costs, developing the next space shuttle as if it were a new commercial airliner. NASA’s role still was critical, but this was to be the longstand­ing role of building experimental flight craft to demonstrate pertinent technologies. The policy document made this clear:

The objective of NASA’s technology development and demonstration effort is to support government and private sector decisions by the end of this decade on development of an operational next generation reusable launch system.

Research shall be focused on technologies to support a decision no later than December 1996 to proceed with a sub-scale flight demonstration which would prove the concept of single-stage-to-orbit….

It is envisioned that the private sector could have a significant role in managing the development and operation of a new reusable space transportation system. In anticipation of this role, NASA shall actively involve the private sector in planning and evaluating its launch technology activities.15

This flight demonstrator became the X-33, with the smaller X-34 being part of the program as well. In mid-October NASA issued Cooperative Agreement Notices, which resembled requests for proposals, for the two projects. At a briefing to indus­try representatives held at NASA-Marshall on 19 October 1994, agency officials presented year-by-year projections of their spending plans. The X-33 was to receive $660 million in federal funds—later raised to $941 million—while the X-34 was slated for $70 million. Contractors were to add substantial amounts of their own and to cover the cost of overruns. Orbital Sciences was a potential bidder and held no contract, but its president, David Thompson, was well aware that he needed deeper pockets. He turned to Rockwell International and set up a partnership.16

The X-34 was the first to go to contract, as NASA selected the Orbital Sciences proposal in March 1995. Matching NASA’s $70 million, this company and Rockwell each agreed to put up $60 million, which meant that the two corporations together were to provide more than 60 percent of the funding. Their partnership, called American Space Lines, anticipated developing an operational vehicle, the X-34B, that would carry 2,500 pounds to orbit. Weighing 108,500 pounds when fully fueled, it was to fly from NASA’s Boeing 747 that served as the shuttle’s carrier aircraft. Its length of 88 feet compared with 122 feet for the space shuttle orbiter.17

Very quickly an imbroglio developed over the choice of rocket engine for NASA’s test craft. The contract called for use of a Russian engine, the Energomash RD-120 that was being marketed by Pratt & Whitney. Rockwell, which owned Rocketdyne, soon began demanding that its less powerful RS-27 engine be used instead. “The bottom line is Rockwell came in two weeks ago and said ‘Use our engine or we’ll walk,’” a knowledgeable industry observer told Aviation Week.19

As the issue remained unresolved, Orbital Sciences missed program milestone dates for airframe design and for selecting between configurations. Early in November NASA responded by handing Orbital a 14-day suspension notice. This led to further discussions, but even the personal involvement of Dan Goldin failed to resolve the matter. In addition, the X-34B concept had grown to as much as 140,000 pounds. Within the program, strong private-sector involvement meant that private-sector criteria of profitability were important, and Orbital determined that the new and heavy configuration carried substantial risk of financial loss. Early in 1996 company officials called for a complete redesign of NASA’s X-34 that would substantially reduce its size. The agency responded by issuing a stop-work order. Rockwell then made its move by bailing out as well. With this, the X-34 appeared dead.

But it soon returned to life, as NASA prepared to launch it anew. It now was necessary to go back to square one and again ask for bids and proposals, and again Orbital Sciences was in the running, this time without a partner. The old X-34 had amounted to a prototype of the operational X-34B, approaching it in size and weight while also calling for use of NASA’s Boeing 747. The company’s new concept was only 58 feet long compared with 83; its gross weight was to be 45,000 pounds rather than 120,000. It was not to launch payloads into orbit but was to serve as a technology demonstrator for an eventual (and larger) first stage by flying to Mach 8. In June 1996 NASA selected Orbital again as the winner, choosing its proposal over competing concepts from such major players as McDonnell Douglas, Northrop Grumman, Rockwell, and the Lockheed Martin Skunk Works.19

Preparations for the X-33 had meanwhile been going forward as well. Design studies had been under way, with Lockheed Martin, Rockwell, and McDonnell Douglas as the competitors. In July 1996 Vice President Albert Gore announced that Lockheed had won the prize. This company envisioned a commercial SSTO craft named VentureStar as its eventual goal. It was to carry a payload of 59,000 pounds to low Earth orbit, topping the 51,000 pounds of the shuttle. Lockheed’s X-33 amounted to a version of this vehicle built at 53 percent scale. It was to fly to Mach 15, well short of orbital velocity, but would subject its thermal protection to a demanding test.20

No rocket craft of any type had ever flown to orbit as a single stage. NASA hoped that vehicles such as VentureStar not only would do this but would achieve low cost, cutting the cost of a pound in orbit from the $10,000 of the space shuttle to as little as $1,000.21 The X-33 was to demonstrate the pertinent technology, which was being pursued under NASA’s Advanced Launch Technology Program of 1994. Developments based on this program were to support the X-34 as well.

Lightweight structures were essential, particularly for the X-33. Accordingly, there was strong interest in graphite-composite tanks and primary structure. This represented a continuation of NASP activity, which had anticipated a main hydrogen tank of graphite-epoxy. The DC-X supported the new work, as NASA took it over and renamed it the DC-XA. Its oxygen tank had been aluminum; a new one, built in Russia, used an aluminum-lithium alloy. Its hydrogen tank, also of aluminum, gave way to one of graphite-epoxy with lightweight foam for internal insulation. This material also served for an intertank structure and a feedline and valve assembly.22

Rapid turnaround offered a particularly promising road to low launch costs, and the revamped DC-XA gave support in this area as well. Two launches, conducted in June 1996, demonstrated turnaround and reflight in only 26 hours, again with its ground crew of only 15.23

Thermal protection raised additional issues. The X-34 was to fly only to Mach 8 and drew on space shuttle technology. Its surface was to be protected with insulation blankets that resembled those in use on the shuttle orbiter. These included the High Heat Blanket for the X-34 undersurface, rated for 2,000°F, with a Nextel 440 fabric and Saffil batting. The nose cap as well as the wing and rudder leading edges were protected with Fibrous Refractory Composite Insulation, which formed the black silica tiles of the shuttle orbiter. For the X-34, these tiles were to be impregnated with silicone to make them water resistant, impermeable to flows of hot gas, and easier to repair.24

VentureStar faced the demands of entry from orbit, but its re-entry environment was to be more benign than that of the shuttle. The shuttle orbiter was compact in size and relatively heavy and lost little of its orbital energy until well into the atmosphere. By contrast, VentureStar would resemble a big lightweight balloon when it re-entered after expending its propellants. The VentureStar thermal protection system was to be tested in flight on the X-33. It had the form of a hot structure, with radiative surface panels of carbon-carbon, Inconel 617 nickel alloy, and titanium, depending on the temperature.25

In an effort separate from that of the X-33, elements of this thermal protection were given a workout by being mounted to the space shuttle Endeavour and tested during re-entry. Thoughts of such tests dated to 1981 and finally were realized during Mission STS-77 in May 1996. Panels of Inconel 617 and of Ti-1100 titanium, measuring 7 by 10 inches, were mounted in recessed areas of the fuselage that lay near the vertical tail and which were heated only to approximately 1,000°F during re-entry. Both materials were rated for considerably higher temperatures, but this successful demonstration put one more arrow in NASA’s quiver.26

For both VentureStar and its supporting X-33, light weight was critical. The X-30 of NASP had been designed for SSTO operation, with a structural mass frac­tion—the ratio of unfueled weight to fully fueled weight—of 25 percent.27 This requirement was difficult to achieve because most of the fuel was slush hydrogen, which has a very low density. This ballooned the size of the X-30 and increased the surface area that needed structural support and thermal protection. VentureStar was to use rockets, which had less performance than scramjets. It therefore needed more fuel, and its structural mass fraction, including payload, engines, and thermal pro­tection, was less than 12 percent. However, this fuel included a great deal of liquid oxygen, which was denser than water and drove up the weight of the propellant. This low structural mass fraction therefore appeared within reach, and for the X-33, the required value was considerably less stringent. Its design called for an empty weight of 63,000 pounds and a loaded weight of 273,000, for a structural mass fraction of 23 percent.28
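The mass-fraction figures quoted above follow directly from the definition given in the text. A minimal check (a sketch in Python, using only the X-33 design weights cited here):

```python
def structural_mass_fraction(empty_weight_lb: float, loaded_weight_lb: float) -> float:
    """Structural mass fraction: unfueled (dry) weight divided by fully fueled weight."""
    return empty_weight_lb / loaded_weight_lb

# X-33 design values from the text: 63,000 lb empty, 273,000 lb loaded.
print(f"{structural_mass_fraction(63_000, 273_000):.1%}")  # about 23%, as cited
```

The same arithmetic shows why the 25 percent target for the slush-hydrogen X-30 was harder than the 12 percent goal for the oxygen-heavy VentureStar: dense liquid oxygen inflates the loaded weight in the denominator.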

Even this design goal imposed demands, for while liquid oxygen was dense and compact, liquid hydrogen still was bulky and again enlarged the surface area. Designers thus made extensive use of lightweight composites, specifying graphite-epoxy for the hydrogen tanks. A similar material, graphite-bismaleimide, was to serve for load-bearing trusses as well as for the outer shell that was to support the thermal protection. This represented the X-30’s road not taken, for the NASP thermal environment during ascent had been so severe that its design had demanded a primary structure of titanium-matrix composite, which was heavier. The lessened requirements of VentureStar’s thermal protection meant that Lockheed could propose to reach orbit using materials that were considerably less heavy—that indeed were lighter than aluminum. The X-33 design saved additional weight because it was to be unpiloted, needing no flight deck and no life-support system for a crew.29

But aircraft often gain weight during development, and the X-33 was no exception. Starting in mid-1996 with a dry weight of 63,000 pounds, it was at 80,000 a year later, although a weight-reduction exercise trimmed this to 73,000.30 Managers responded by cutting the planned top speed from Mach 15 or more to Mach 13.8. Jerry Rising, vice president at the Skunk Works that was the X-33’s home, explained that such a top speed still would permit validation of the thermal protection in flight test. The craft would lift off from Edwards AFB and follow a boost-glide trajectory, reaching a peak altitude of 300,000 feet. The vehicle then would be lower in the atmosphere than previously planned, and the heating rate would consequently be higher to properly exercise the thermal protection. The X-33 then was to glide onward to a landing at Malmstrom AFB in northern Montana, 950 miles from Edwards.31
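The tradeoff described here, in which a lower trajectory raises the heating rate, can be illustrated with a rough scaling sketch. It assumes an exponential atmosphere and the textbook square-root dependence of convective heating on air density at fixed speed; the scale height and the 280,000-foot comparison altitude are illustrative assumptions, not figures from the program:

```python
import math

SCALE_HEIGHT_FT = 23_600  # approximate density scale height of the upper atmosphere (~7.2 km)

def heating_rate_ratio(altitude_ft: float, reference_ft: float) -> float:
    """Relative convective heating at fixed velocity: heating scales roughly as
    the square root of air density, which falls off exponentially with altitude."""
    density_ratio = math.exp((reference_ft - altitude_ft) / SCALE_HEIGHT_FT)
    return math.sqrt(density_ratio)

# Flying 20,000 feet below a 300,000-foot peak at the same speed:
print(f"{heating_rate_ratio(280_000, 300_000):.2f}x")  # roughly 1.5x the heating rate
```

Even this crude model shows why a slower, lower trajectory could still give the thermal protection a meaningful workout.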

The original program plan called for rollout of a complete flight vehicle on 1 November 1998. When that date arrived, though, the effort faced a five-month schedule slip. This resulted from difficulties with the rocket engines.32 Then in December, two days before Christmas, the program received a highly unwelcome present. A hydrogen fuel tank, under construction at a Lockheed Martin facility in Sunnyvale, California, sustained major damage within an autoclave. An inner wall of the tank showed delamination over 90 percent of its area, while another wall sprang loose from its frame. The tank had been inspected using ultrasound, but this failed to disclose the incipient problem, which raised questions as to the adequacy of inspection procedures as well as of the tank design itself. Another delay was at hand of up to seven months.

By May 1999 the weight at main engine cutoff was up to 83,000 pounds, including unburned residual propellant. Cleon Lacefield, the Lockheed Martin program manager, continued to insist bravely that the vehicle would reach at least Mach 13, but working engineers told Aviation Week that the top speed had been Mach 10 for quite some time and that “the only way it’s getting to Malmstrom is on the back of a truck.”33 The commercial VentureStar concept threatened to be far more demanding, and during that month Peter Teets, president and CEO of Lockheed Martin, told the U.S. Senate Commerce and Science Committee that he could not expect to attract the necessary private-sector financing. “Wall Street has spoken,” he declared. “They have picked the status quo; they will finance systems with existing technology. They will not finance VentureStar.”34

By then the VentureStar design had gone over to aluminum tanks. These were heavier than tanks of graphite-epoxy, but the latter brought unacceptable technical risks because no autoclave existed that was big enough to fabricate such tankage. Lockheed Martin designers reshaped VentureStar and accepted a weight increase from 2.6 million pounds to 3.3 million. (It had been 2.2 million in 1996.) The use of graphite-epoxy in the X-33 tank now no longer was relevant to VentureStar, but this was what the program held in hand, and a change to aluminum would have added still more weight to the X-33.

During 1999 a second graphite-epoxy hydrogen tank was successfully assembled at Lockheed Martin and then was shipped to NASA-Marshall for structural tests. Early in November it experienced its own failure, showing delamination and a ripped outer skin along with several fractures or breaks in the skin. Engineers had been concerned for months about structural weakness, with one knowledgeable specialist telling Aviation Week, “That tank belonged in a junkyard, not a test stand.” The program now was well on its way to becoming an orphan. It was not beloved by NASA, which refused to increase its share of funding above $941 million, while the in-house cost at Lockheed Martin was mounting steadily.35

The X-33 effort nevertheless lingered through the year 2000. This was an election year, not a good time to cancel a billion-dollar federal program, and Al Gore was running for president. He had announced the contract award in 1996, and in the words of a congressional staffer, “I think NASA will have a hard time walking away from the X-33 until after the election. For better or worse, Al Gore now has ownership of it. They can’t admit it’s a failure.”36

The X-34 was still in the picture, as a substantial effort in its own right. Its loaded weight of 47,000 pounds approached the 56,000 of the X-15 with external tanks, built more than 30 years earlier.37 Yet despite this reduced weight, the X-34 was to reach Mach 8, substantially exceeding the Mach 6.7 of the X-15. This reflected the use of advanced materials, for whereas the X-15 had been built of heavy Inconel X, the X-34 design specified lightweight composites for the primary structure and fuel tank, along with aluminum for the liquid-oxygen tank.38

Its construction went forward without major mishaps because it was much smaller than the X-33. The first of them reached completion in February 1999, but during the next two years it never came close to powered flight. The reason was that the X-34 program called for use of an entirely new engine, the 60,000-pound-thrust Fastrac of NASA-Marshall that burned liquid oxygen and kerosene. This engine encountered development problems, and because it was not ready, the X-34 could not fly under power.39

Early in March 2001, with George W. Bush in the White House, NASA pulled the plug. Arthur Stephenson, director of NASA-Marshall, canceled the X-34. This reflected the influence of the Strategic Defense Initiative Office, which had maintained a continuing interest in low-cost access to orbit and had determined that the X-34’s costs outweighed the benefits. Stephenson also announced that the cooperative agreement between NASA and Lockheed Martin, which had supported the X-33, would expire at the end of the month. He then pronounced an epitaph on both programs: “One of the things we have learned is that our technology has not yet advanced to the point that we can successfully develop a new reusable launch vehicle that substantially improves safety, reliability, and affordability.”40

One could say that the X-30 effort went farther than the X-33, for the former successfully exercised a complete hydrogen tank within its NIFTA project, whereas the latter did not. But the NIFTA tank was subscale, whereas those of the X-33 were full-size units intended for flight. The reason that NIFTA appears to have done better is that NASP never got far enough to build and test a full-size tank for its hydrogen slush. Because that tank also was to have been of graphite-epoxy, as with the X-33, it is highly plausible that the X-30 would have run aground on the same shoal of composite-tank structural failure that sank Lockheed Martin’s rocket craft.41

Ablation

In 1953, on the eve of the Atlas go-ahead, investigators were prepared to con­sider several methods for thermal protection of its nose cone. The simplest was the heat sink, with a heat shield of thick copper absorbing the heat of re-entry. An alternative approach, the hot structure, called for an outer covering of heat-resistant shingles that were to radiate away the heat. A layer of insulation, inside the shingles, was to protect the primary structure. The shingles, in turn, overlapped and could expand freely.

A third approach, transpiration cooling, sought to take advantage of the light weight and high heat capacity of boiling water. The nose cone was to be filled with this liquid; strong g-forces during deceleration in the atmosphere were to press the water against the hot inner skin. The skin was to be porous, with internal steam pressure forcing the fluid through the pores and into the boundary layer. Once injected, steam was to carry away heat. It would also thicken the boundary layer, reducing its temperature gradient and hence its rate of heat transfer. In effect, the nose cone was to stay cool by sweating.41

Still, each of these approaches held difficulties. Though potentially valuable, transpiration cooling was poorly understood as a topic for design. The hot-structure concept raised questions of suitably refractory metals along with the prospect of losing the entire nose cone if a shingle came off. The heat-sink approach was likely to lead to high weight. Even so, it seemed to be the most feasible way to proceed, and early Atlas designs specified use of a heat-sink nose cone.42

The Army had its own activities. Its missile program was separate from that of the Air Force and was centered in Huntsville, Alabama, with the redoubtable Wernher von Braun as its chief. He and his colleagues came to Huntsville in 1950 and developed the Redstone missile as an uprated V-2. It did not need thermal protection, but the next missile would have longer range and would certainly need it.43

Von Braun was an engineer. He did not set up a counterpart of Avco Research Laboratory, but his colleagues nevertheless proceeded to invent their way toward a nose cone. Their concern lay at the tip of a rocket, but their point of departure came at the other end. They were accustomed to steering their missiles by using jet vanes, large tabs of heat-resistant material that dipped into the exhaust. These vanes then deflected the exhaust, changing the direction of flight. Von Braun’s associates thus had long experience in testing materials by placing them within the blast of a rocket engine. This practice carried over to their early nose-cone work.44

The V-2 had used vanes of graphite. In November 1952, these experimenters began testing new materials, including ceramics. They began working with nose-cone models late in 1953. In July 1954 they tested their first material of a new type: a reinforced plastic, initially a hard melamine resin strengthened with glass fiber. New test facilities entered service in June 1955, including a rocket engine with thrust of 20,000 pounds and a jet diameter of 14.5 inches.45

The pace accelerated after November of that year, as Von Braun won approval from Defense Secretary Charles Wilson to proceed with development of his next missile. This was Jupiter, with a range of 1,500 nautical miles.46 It thus was mark­edly less demanding than Atlas in its thermal-protection requirements, for it was to re-enter the atmosphere at Mach 15 rather than Mach 20 and higher. Even so, the Huntsville group stepped up its work by introducing new facilities. These included a rocket engine of 135,000 pounds of thrust for use in nose-cone studies.

The effort covered a full range of thermal-protection possibilities. Transpira­tion cooling, for one, raised unpleasant new issues. Convair fabricated test nose cones with water tanks that had porous front walls. The pressure in a tank could be adjusted to deliver the largest flow of steam when the heat flux was greatest. But this technique led to hot spots, where inadequate flow brought excessive temperatures. Transpiration thus fell by the wayside.

Heat sink drew attention, with graphite holding promise for a time. It was light in weight and could withstand high temperatures. But it also was a good heat conductor, which raised problems in attaching it to a substructure. Blocks of graphite also contained voids and other defects, which made them unusable.

By contrast, hot structures held promise. Researchers crafted lightweight shingles of tungsten and molybdenum backed by layers of polished corrugated steel and aluminum, to provide thermal insulation along with structural support. When the shingles topped 3,250°F, the innermost layer stayed cool and remained below 200°F. Clearly, hot structures had a future.

The initial work with a reinforced plastic, in 1954, led to many more tests of similar materials. Engineers tested such resins as silicones, phenolics, melamines, Teflon, epoxies, polyesters, and synthetic rubbers. Filler materials included soft glass, fibers of silicon dioxide and aluminum silicate, mica, quartz, asbestos, nylon, graphite, beryllium, beryllium oxide, and cotton.

Ablation

Jupiter missile with ablative nose cone. (U. S. Army)

Fiber-reinforced polymers proved to hold particular merit. The studies focused on plastics reinforced with glass fiber, with a commercially-available material, Micarta 259-2, demonstrating noteworthy promise. The Army stayed with this choice as it moved toward flight test of subscale nose cones in 1957. The first one used Micarta 259-2 for the plastic, with a glass cloth as the filler.47

In this fashion the Army ran well ahead of the Air Force. Yet the Huntsville work did not influence the Atlas effort, and the reasons ran deeper than interservice rivalry. The relevance of that work was open to question because Atlas faced a far more demanding re-entry environment. In addition, Jupiter faced competition from Thor, an Air Force missile of similar range. It was highly likely that only one would enter production, so Air Force designers could not merely become apt pupils of the Army. They had to do their own work, seeking independent approaches and trying to do better than Von Braun.

Amid this independence, George Sutton came to the re-entry problem. He had received his Ph.D. at Caltech in 1955 at age 27, jointly in mechanical engineering and physics. His only experience within the aerospace industry had been a summer job at the Jet Propulsion Laboratory, but he jumped into re-entry with both feet after taking his degree. He joined Lockheed and became closely involved in studying materials suitable for thermal protection. Then he was recruited by General Electric, leaving sunny California and arriving in snowy Schenectady, New York, early in 1956.

Heat sinks for Atlas were ascendant at that time, with Lester Lees’s heat-transfer theory appearing to give an adequate account of the thermal environment. Sutton was aware of the issues and wrote a paper on heat-sink nose cones, but his work soon led him in a different direction. There was interest in storing data within a small capsule that would ride with a test nose cone and that might survive re-entry if the main cone were to be lost. This capsule needed its own thermal protection, and it was important to achieve light weight. Hence it could not use a heat sink. Sutton’s management gave him a budget of $75,000 to try to find something more suitable.48

This led him to re-examine the candidate materials that he had studied at Lockheed. He also learned that other GE engineers were working on a related problem. They had built liquid propellant rocket engines for the Army’s Hermes program, with these missiles being steered by jet vanes in the fashion of the V-2 and Redstone. The vanes were made from alternating layers of glass cloth and thermosetting resins. They had become standard equipment on the Hermes A-3, but some of them failed due to delamination. Sutton considered how to avoid this:

“I theorized that heating would char the resin into a carbonaceous mass of relatively low strength. The role of the fibers should be to hold the carbonaceous char to virgin, unheated substrate. Here, low thermal conductivity was essential to minimize the distance from the hot, exposed surface to the cool substrate, to minimize the mass of material that had to be held by the fibers as well as the degradation of the fibers. The char itself would eventually either be vaporized or be oxidized either by boundary layer oxygen or by CO2 in the boundary layer. The fibers would either melt or also vaporize. The question was how to fabricate the material so that the fibers interlocked the resin, which was the opposite design philosophy to existing laminates in which the resin interlocks the fibers. I believed that a solution might be the use of short fibers, randomly oriented in a soup of resin, which was then molded into the desired shape. I then began to plan the experiments to test this hypothesis.”49

Sutton had no pipeline to Huntsville, but his plan echoed that of Von Braun. He proceeded to fabricate small model nose cones from candidate fiber-reinforced plastics, planning to test them by immersion in the exhaust of a rocket engine. GE was developing an engine for the first stage of the Vanguard program; prototypes were at hand, along with test stands. Sutton arranged for an engine to produce an exhaust that contained free oxygen to achieve oxidation of the carbon-rich char.

He used two resins along with five types of fiber reinforcement. The best performance came with the use of Refrasil reinforcement, a silicon-dioxide fiber. Both resins yielded composites with a heat capacity of 6,300 BTU per pound or greater. This was astonishing. The materials had a density of 1.6 times that of water. Yet they absorbed more than six times as much heat, pound for pound, as boiling water!50
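Sutton's comparison can be checked with a quick calculation. This sketch assumes the standard latent heat of vaporization of water, roughly 970 BTU per pound at one atmosphere, and a water density of 62.4 pounds per cubic foot; neither value appears in the text.

```python
# Hedged sanity check of the ablative-composite figures quoted above.
# Assumed values (not from the text): latent heat of boiling water and
# the density of water in English units.
composite_heat_capacity = 6300.0   # BTU/lb, measured for the Refrasil composites
water_latent_heat = 970.0          # BTU/lb, heat absorbed by water in boiling (assumed)

ratio = composite_heat_capacity / water_latent_heat
print(f"absorbed heat ratio: {ratio:.1f}x")   # roughly 6.5x, matching "more than six times"

# "A density of 1.6 times that of water" in English units:
composite_density = 1.6 * 62.4     # lb/ft^3, water taken as 62.4 lb/ft^3 (assumed)
print(f"composite density: {composite_density:.0f} lb/ft^3")
```

The arithmetic bears out the text's claim: pound for pound, the charring composite absorbed about six and a half times the heat of boiling water.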

Here was a new form of thermal protection: ablation. An ablative heat shield could absorb energy through latent heat, when melting or evaporating, and through sensible heat, with its temperature rise. In addition, an outward flow of ablating volatiles thickened the boundary layer, which diminished the heat flux. Ablation promised all the advantages of transpiration cooling, within a system that could be considerably lighter and yet more capable.51

Sutton presented his experimental results in June 1957 at a technical conference held at the firm of Ramo-Wooldridge in Los Angeles. This company was providing technical support to the Air Force’s Atlas program management. Following this talk, George Solomon, one of that firm’s leading scientists, rose to his feet and stated that ablation was the solution to the problem of thermal protection.

The Army thought so too. It had invented ablation on its own, considerably earlier and amid far deeper investigation. Indeed, at the moment when Sutton gave his talk, Von Braun was only two months away from a successful flight test of a subscale nose cone. People might argue whether the Soviets were ahead of the United States in missiles, but there was no doubt that the Army was ahead of the Air Force in nose cones. Jupiter was already slated for an ablative cone, but Thor was to use heat sink, as was the intercontinental Atlas.

Already, though, new information was available concerning transition from laminar to turbulent flow over a nose cone. Turbulent heating would be far more severe, and these findings showed that copper, the best heat-sink material, was inadequate for an ICBM. Materials testing now came to the forefront, and this work needed new facilities. A rocket-engine exhaust could reproduce the rate of heat transfer, but in Kantrowitz’s words, “a rocket is not hot enough.”52 It could not duplicate the temperatures of re-entry.

A shock tube indeed gave a suitably hot flow, but its duration of less than a millisecond was hopelessly inadequate for testing ablative materials. Investigators needed a new type of wind tunnel that could produce a continuous flow, but at temperatures far greater than were available. Fortunately, such an installation did not have to reproduce the hypersonic Mach numbers of re-entry; it sufficed to duplicate the necessary temperatures within the flow. The instrument that did this was the arc tunnel.

It heated the air with an electric arc, which amounted to a man-made stroke of lightning. Such arcs were in routine use in welding; Avco’s Thomas Brogan noted that they reached 6500 K, “a temperature which would exist at the [tip] of a blunt body flying at 22,000 feet per second.” In seeking to develop an arc-heated wind tunnel, a point of departure lay in West Germany, where researchers had built a “plasma jet.”53

This device swirled water around a long carbon rod that served as the cathode. The motion of the water helped to keep the arc focused on the anode, which was also of carbon and which held a small nozzle. The arc produced its plasma as a mix of very hot steam and carbon vapor, which was ejected through the nozzle. This invention achieved pressures of 50 atmospheres, with the plasma temperature at the nozzle exit being measured at 8000 K. The carbon cathode eroded relatively slowly, while the water supply was easily refilled. The plasma jet therefore could operate for fairly long times.54

At NACA-Langley, an experimental arc tunnel went into operation in May 1957. It differed from the German plasma jet by using an electric arc to heat a flow of air, nitrogen, or helium. With a test section measuring only seven millimeters square, it was a proof-of-principle instrument rather than a working facility. Still, its plasma temperatures ranged from 5800 to 7000 K, which was well beyond the reach of a conventional hypersonic wind tunnel.55

At Avco, Kantrowitz paid attention when he heard the word “plasma.” He had been studying such ionized gases ever since he had tried to invent controlled fusion. His first arc tunnel was rated only at 130 kilowatts, a limited power level that restricted the simulated altitude to between 165,000 and 210,000 feet. Its hot plasma flowed from its nozzle at Mach 3.4, but when this flow came to a stop when impinging on samples of quartz, the temperature corresponded to flight velocities as high as 21,000 feet per second. Tests showed good agreement between theory and experiment, with measured surface temperatures of 2700 K falling within three percent of calculated values. The investigators concluded that opaque quartz “will effectively absorb about 4000 BTU per pound for ICBM and [intermediate-range] trajectories.”56

In Huntsville, Von Braun’s colleagues found their way as well to the arc tunnel. They also learned of the initial work in Germany. In addition, the small California firm of Plasmadyne acquired such a device and then performed experiments under contract to the Army. In 1958 Rolf Buhler, a company scientist, discovered that when he placed a blunt rod of graphite in the flow, the rod became pointed. Other investigators attributed this result to the presence of a cool core in the arc-heated jet, but Sutton succeeded in deriving this observed shape from theory.

This immediately raised the prospect of nose cones that after all might be sharply pointed rather than blunt. Such re-entry bodies would not slow down in the upper atmosphere, where they might have made tempting targets for antiballistic missiles, but would continue to fall rapidly. Graphite still had the inconvenient features noted previously, but a new material, pyrolytic graphite, promised to ease the problem of its high thermal conductivity.

Pyrolytic graphite was made by chemical vapor deposition. One placed a temperature-resistant form in an atmosphere of gaseous hydrocarbons. The hot surface broke up the gas molecules, a process known as pyrolysis, and left carbon on the surface. The thermal conductivity then was considerably lower in a direction normal to the surface than when parallel to it. The low value of this conductivity, in the normal direction, made such graphite attractive.57

Its appetite whetted by the 130-kilowatt facility, Avco went on to build one that was two orders of magnitude more powerful. It used a 15-megawatt power supply and obtained this from a bank of 2,000 twelve-volt truck batteries, with motor-generators to charge them. They provided direct current for run times of up to a minute and could be recharged in an hour.58

With this, Avco added the high-power arc tunnel to the existing array of hypersonic flow facilities. These included aerodynamic wind tunnels such as Becker’s, along with plasma jets and shock tubes. And while the array of ground installations proliferated, the ICBM program was moving toward a different kind of test: full-scale flight.

Gemini and Apollo

An Apollo spacecraft, returning from the Moon, had twice the kinetic energy of a flight in low orbit and an aerodynamic environment that was nearly three times as severe. Its trajectory also had to thread a needle in its accuracy. Too steep a return would subject its astronauts to excessive g-forces. Too shallow a re-entry meant that it would show insufficient loss of speed within the upper atmosphere and would fly back into space, to make a final entry and then land at an unplanned location. For a simple ballistic trajectory, this “corridor” was as little as seven miles wide, from top to bottom.59
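The "twice the kinetic energy" figure follows from the velocity-squared scaling of kinetic energy. A rough check, assuming representative entry speeds of about 36,000 feet per second for lunar return and about 25,600 feet per second for low Earth orbit (values chosen for illustration, not taken from the text):

```python
# Kinetic energy per unit mass scales as v^2, so the energy ratio is
# simply the square of the velocity ratio. Speeds below are assumed
# representative values, not figures from the text.
v_lunar_return = 36_000.0   # ft/s, approximate lunar-return entry speed (assumed)
v_low_orbit = 25_600.0      # ft/s, approximate low-orbit re-entry speed (assumed)

energy_ratio = (v_lunar_return / v_low_orbit) ** 2
print(f"kinetic energy ratio: {energy_ratio:.2f}")  # close to 2, as the text states
```

A modest difference in speed thus translates into nearly a doubling of the energy the heat shield had to dissipate.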

At the outset, these issues raised two problems that were to be addressed in flight test. The heat shield had to be qualified, in tests that resembled those of the X-17 but took place at much higher velocity. In addition, it was necessary to show that a re-entering spacecraft could maneuver with some precision. It was vital to broaden the corridor, and the only way to do this was to use lift. This meant demonstrating successful maneuvers that had to be planned in advance, using data from tests in ground facilities at near-orbital speeds, when such facilities were most prone to error.

Apollo’s Command Module, which was to execute the re-entry, lacked wings. Still, spacecraft of this general type could show lift-to-drag ratios of 0.1 or 0.2 by flying at a nonzero angle of attack, thereby tilting the heat shield and turning it into a lifting surface. Such values were far below those achievable with wings, but they brought useful flexibility during re-entry by permitting maneuver, thereby achieving a more accurate splashdown.

As early as 1958, Faget and his colleagues had noted three methods for trimming a capsule to a nonzero angle. Continuous thrust from a reaction-control system could do this, tilting the craft from its equilibrium attitude. A drag flap could do it as well by producing a modest amount of additional air resistance on one side of the vehicle. The simplest method required no onboard mechanism that might fail in flight and that expended no reaction-control propellant. It called for nothing more than a nonsymmetrical distribution of weight within the spacecraft, creating an offset in the location of the center of gravity. During re-entry, this offset would trim the craft to a tilted attitude, again automatically, due to the extra weight on one side. An astronaut could steer his capsule by using attitude control to roll it about its long axis, thereby controlling the orientation of the lift vector.60

This center-of-gravity offset went into the Gemini capsules that followed those of Project Mercury. The first manned Gemini flight carried the astronauts Virgil “Gus” Grissom and John Young on a three-orbit mission in March 1965. Following re-entry, they splashed down 60 miles short of the carrier USS Intrepid, which was on the aim point. This raised questions as to the adequacy of the preflight hypersonic wind-tunnel tests that had provided estimates of the spacecraft L/D used in mission planning.

The pertinent data had come from only two facilities. The Langley 11-inch tunnel had given points near Mach 7, while an industrial hotshot installation covered Mach 15 to 22, which was close to orbital speed. The latter facility lacked instruments of adequate precision and had produced data points that showed a large scatter. Researchers had averaged and curve-fit the measurements, but it was clear that this work had introduced inaccuracies.61

During that year flight data became available from the Grissom-Young mission and from three others, yielding direct measurements of flight angle of attack and L/D. To resolve the discrepancies, investigators at the Air Force’s Arnold Engineering Development Center undertook further studies using two additional facilities. Tunnel F, a hotshot, had a 100-inch-diameter test section and reached Mach 20, heating nitrogen with an electric arc and achieving run times of 0.05 to 0.1 seconds. Tunnel L was a low-density, continuous-flow installation that also used arc-heated nitrogen. The Langley 11-inch data was viewed as valid and was retained in the reanalysis.

This work gave an opportunity to benchmark data from continuous-flow and hotshot tunnels against flight data, at very high Mach numbers. Size did not matter, for the big Tunnel F accommodated a model at one-fifteenth scale that incorporated much detail, whereas Tunnel L used models at scales of 1/120 and 1/180, the latter being nearly small enough to fit on a tie tack. Even so, the flight data points gave a good fit to curves derived using both tunnels. Billy Griffith, supervising the tests, concluded: “Generally, excellent agreement exists” between data from these sources.

The preflight data had brought estimated values of L/D that were too high by 60 percent. This led to a specification for the re-entry trim angle that proved to be off by 4.7 degrees, which produced the miss at splashdown. Julius Lukasiewicz, longtime head of the Von Karman Gas Dynamics Facility at AEDC, later added that if AEDC data had been available prior to the Grissom-Young flight, “the impact point would have been predicted to within ±10 miles.”62

The same need for good data reappeared during Apollo. The first of its orbital missions took place during 1966, flying atop the Saturn I-B. The initial launch, designated AS-201, flew suborbitally and covered 5,000 miles. A failure in the reaction controls produced uncontrolled lift during entry, but the craft splashed down 38 miles from its recovery ship. AS-202, six months later, was also suborbital. It executed a proper lifting entry—and undershot its designated aim point by 205 miles. This showed that its L/D had also been mispredicted.63

Estimates of the Apollo L/D had relied largely on experimental data taken during 1962 at Cornell Aeronautical Laboratory at Mach 15.8, and at AEDC at Mach 18.7. Again these measurements lacked accuracy, and once more Billy Griffith of AEDC stepped forward to direct a comprehensive set of new measurements. In addition to Tunnels F and L, used previously, the new work used Tunnels A, B, and C, which with the other facilities covered a range from Mach 3 to 20. To account for effects due to model supports in the wind tunnels, investigators also used a gun range that fired small models as free-flight projectiles, at Mach 6.0 to 8.5.

The 1962 estimates of Apollo L/D proved to be off by 20 percent, with the trim angle being in error by 3 degrees.64 As with the Gemini data, these results showed anew that one could not obtain reliable data by working with a limited range of facilities. But when investigators broadened their reach to use more facilities, and sought accuracy through such methods as elimination of model-support errors, they indeed obtained results that matched flight test. This happened twice, with both Gemini and Apollo, with researchers finally getting the accurate estimates they needed.

These studies dealt with aerodynamic data at hypervelocity. In a separate series, other flights sought data on the re-entry environment that could narrow the range of acceptable theories of hypervelocity heating. Two such launches constituted Project Fire, which flew spacecraft that were approximately two feet across and had the general shape of Apollo’s Command Module. Three layers of beryllium served as calorimeters, with measured temperature rises corresponding to total absorbed heat. Three layers of phenolic-asbestos alternated with those layers to provide thermal protection. Windows of fused quartz, which is both heat-resistant and transparent over a broad range of optical wavelengths, permitted radiometers to directly observe the heat flux due to radiation, at selected locations. These included the nose, where heating was most intense.

The Fire spacecraft rode atop Atlas boosters, with flights taking place in April 1964 and May 1965. Following cutoff of the Atlas, an Antares solid-fuel booster, modified from the standard third stage of the Scout booster, gave the craft an additional 17,000 feet per second and propelled it into the atmosphere at an angle of nearly 15 degrees, considerably steeper than the range of angles that were acceptable for an Apollo re-entry. This increased the rate of heating and enhanced the contribution from radiation. Each beryllium calorimeter gave useful data until its outer surface began to melt, which took only 2.5 seconds as the heating approached its maximum. When decelerations due to drag reached specified levels, an onboard controller ejected the remnants of each calorimeter in turn, along with its underlying layer of phenolic-asbestos. Because these layers served as insulation, each ejection exposed a cool beryllium surface as well as a clean set of quartz windows.

Fire 1 entered the atmosphere at 38,000 feet per second, markedly faster than the 35,000 feet per second of operational Apollo missions. Existing theories gave a range in estimates of total peak heating rate from 790 to 1,200 BTU per square foot-second. The returned data fell neatly in the middle of this range. Fire 2 did much the same, re-entering at 37,250 feet per second and giving a measured peak heating rate of just over 1,000 BTU per square foot-second. Radiative heating indeed was significant, amounting to some 40 percent of this total. But the measured values, obtained by radiometer, were at or below the minimum estimates obtained using existing theories.65

Earlier work had also shown that radiative heating was no source of concern. The new work also validated the estimates of total heating that had been used in designing the Apollo heat shield. A separate flight test, in August 1964, placed a small vehicle—the R-4—atop a five-stage version of the Scout. As with the X-17, this fifth stage ignited relatively late in the flight, accelerating the test vehicle to its peak speed when it was deep in the upper atmosphere. This speed, 28,000 feet per second, was considerably below that of an Apollo entry. But the increased air density subjected this craft to a particularly high heating rate.66

This was a materials-testing flight. The firm of Avco had been developing ablators of lower and lower weight and had come up with its 5026-39 series. They used epoxy-novolac as the resin, with phenolic microballoons added to the silica-fiber filler of an earlier series. Used with a structural honeycomb made of phenolic reinforced with fiberglass, it cut the density to 35 pounds per cubic foot and, with subsequent improvements, to as little as 31 pounds per cubic foot. This was less than three-tenths the density of the ancestral phenolic-fiberglass of Mercury—which merely orbited the Earth and did not fly back from the Moon.67
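The "three-tenths" comparison can be checked if one assumes a density for the ancestral phenolic-fiberglass, which the text does not give. Taking roughly 110 pounds per cubic foot, a typical handbook figure for that class of material rather than a value from the text:

```python
# Hedged check of the density comparison. The phenolic-fiberglass
# density is an assumed handbook value, not a figure from the text.
avcoat_density = 31.0            # lb/ft^3, best of the Avcoat 5026-39 series
phenolic_fiberglass = 110.0      # lb/ft^3, assumed density of the Mercury-era ablator

fraction = avcoat_density / phenolic_fiberglass
print(f"density fraction: {fraction:.2f}")  # under three-tenths, as claimed
```

Under that assumption, the improved ablator comes in at about 28 percent of the older material's density, consistent with "less than three-tenths."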

The new material had the designation Avcoat 5026-39G. The new flight sought to qualify it under its most severe design conditions, corresponding to re-entry at the bottom of the corridor with deceleration of 20 g. The peak aerodynamic load occurred at Mach 16.4 and 102,000 feet. Observed ablation rates proved to be much higher than expected. In fact, the ablative heat shield eroded away completely! This caused serious concern, for if that were to happen during a manned mission, the spacecraft would burn up in the atmosphere and would kill its astronauts.68

The relatively high air pressure had subjected the heat shield to dynamic pressures three times higher than those of an Apollo re-entry. Those intense dynamic pressures corresponded to a hypersonic wind that had blown away the ablative char soon after it had formed. This char was important; it protected the underlying virgin ablator, and when it was severely thinned or removed, the erosion rate on the test heat shield increased markedly.

Much the same happened in October 1965, when another subscale heat shield underwent flight test atop another multistage solid rocket, the Pacemaker, that accelerated its test vehicle to Mach 10.6 at 67,500 feet. These results showed that failure to duplicate the true re-entry environment in flight test could introduce unwarranted concern, causing what analysts James Pavlosky and Leslie St. Leger described as “unnecessary anxiety and work.”69

An additional Project Fire flight could indeed have qualified the heat shield under fully realistic re-entry conditions, but NASA officials had gained confidence through their ability to understand the quasi-failure of the R-4. Rather than conduct further ad hoc heat-shield flight tests, they chose to merge its qualification with unmanned flights of complete Apollo spacecraft. Following three shots aboard the Saturn I-B that went no further than earth orbit, and which included AS-201 and -202, the next flight lifted off in November 1967. It used a Saturn V to simulate a true lunar return.

No larger rocket had ever flown. This one was immense, standing 36 stories tall. The anchorman Walter Cronkite gave commentary from a nearby CBS News studio, and as this behemoth thundered upward atop a dazzling pillar of yellow-white flame, Cronkite shouted, “Oh, my God, our building is shaking! Part of the roof has come in here!” The roar was as loud as a major volcanic eruption. People saw the ascent in Jacksonville, 150 miles away.70

Heat-shield qualification stood as a major goal. The upper stages operated in sequence, thrusting the spacecraft to an apogee of 11,242 miles. It spent several hours coasting, oriented with the heat shield in the cold soak of shadow to achieve the largest possible thermal gradient around the shield. Re-ignition of the main engine pushed the spacecraft into re-entry at 35,220 feet per second relative to the atmosphere of the rotating Earth. Flying with an effective L/D of 0.365, it came down 10 miles from the aim point and only six miles from the recovery ship, close enough for news photos that showed a capsule in the water with one of its chutes still billowing.

The heat shield now was ready for the Moon, for it had survived a peak heating rate of 425 BTU per square foot-second and a total heat load of 37,522 BTU per pound. Operational lunar flights imposed loads and heating rates that were markedly less demanding. In the words of Pavlosky and St. Leger, “the thermal protection subsystem was overdesigned.”71

A 1968 review took something of an offhand view of what once had been seen as an extraordinarily difficult problem. This report stated that thermal performance of ablative material “is one of the lesser criteria in developing a TPS.” Significant changes had been made to enhance access for inspection, relief of thermal stress, manufacturability, performance near windows and other penetrations, and control of the center of gravity to achieve design values of L/D, “but never to obtain better thermal performance of the basic ablator.”72

Thus, on the eve of the first lunar landing, specialists in hypersonics could look at a technology of re-entry whose prospects had widened significantly. A suite of materials now existed that were suitable for re-entry from orbit, having high emissivity to keep the temperature down, along with low thermal conductivity to prevent overheating during the prolonged heat soak. Experience had shown how careful research in ground facilities could produce reliable results and could permit maneuvering entry with accuracy in aim. This had been proven to be feasible for missions as demanding as lunar return.

Dyna-Soar had not flown, but it introduced metallic hot structures that brought the prospect of reusability. It also introduced wings for high L/D and particular freedom during maneuver. Indeed, by 1970 there was only one major frontier in re-entry: the development of a lightweight heat shield that was simpler than the hot structure of Dyna-Soar and was reusable. This topic was held over for the following decade, amid the development of the space shuttle.

Scramjets Take Flight

On 28 November 1991 a Soviet engine flew atop an SA-5 surface-to-air missile in an attempt to demonstrate supersonic combustion. The flight was launched from the Baikonur center in Kazakhstan and proceeded ballistically, covering some 112 miles. The engine did not produce propulsive thrust but rode the missile while mounted to its nose. The design had an axisymmetric configuration, resembling that of NASA’s Hypersonic Research Engine, and the hardware had been built at Moscow’s Central Institute of Aviation Motors (CIAM).

As described by Donat Ogorodnikov, the center director, the engine performed two preprogrammed burns during the flight. The first sought to demonstrate the important function of transition from subsonic to supersonic combustion. It was initiated at 59,000 feet and Mach 3.5, as the rocket continued to accelerate. Ogorodnikov asserted that after fifteen seconds, near Mach 5, the engine went over to supersonic combustion and operated in this mode for five seconds, while the rocket accelerated to Mach 6 at 92,000 feet. Within the combustor, internal flow reached a measured speed of Mach 3. Pressures within the combustor were one to two atmospheres.

The second engine burn lasted ten seconds. This one had the purpose of verifying the design of the engine’s ignition system. It took place on the downward leg of the trajectory, as the vehicle descended from 72,000 feet and Mach 4.5 to 59,000 feet and Mach 3.5. This burn involved only subsonic combustion. Vyacheslav Vinogradov, chief of engine gasdynamics at CIAM, described the engine as mounting three rows of fuel injectors. Choice of an injector row, out of the three available, was to help in changing the combustion mode.

The engine diameter at the inlet was 9.1 inches; its length was 4.2 feet. The spike, inlet, and combustor were of stainless steel, with the spike tip and cowl leading edge being fabricated using powder metallurgy. The fuel was liquid hydrogen, and the system used no turbopump. Pressure, within a fuel tank that also was stainless steel, forced the hydrogen to flow. The combustor was regeneratively cooled; this vaporized the hydrogen, which flowed through a regulator at rates that varied from 0.33 pounds per second in low-Mach flight to 0.11 at high Mach.42

The Russians made these extensive disclosures because they hoped for financial support from the West. They obtained initial assistance from France and conducted a second flight test a year later. The engine was slightly smaller and the trajectory was flatter, reaching 85,000 feet. It ignited near Mach 3.5 and sustained subsonic combustion for several seconds while the rocket accelerated to Mach 5. The engine then transitioned to supersonic combustion and remained in this mode for some fifteen seconds, while acceleration continued to Mach 5.5. Burning then terminated due to exhaustion of the fuel.43

On its face, this program had built a flightworthy scramjet, had achieved a supersonic internal airflow, and had burned hydrogen within this flow. Even so, this was not necessarily the same as accomplishing supersonic combustion. The alleged transition occurred near Mach 5, which definitely was at the low end for a scramjet.44 In addition, there are a number of ways whereby pockets of subsonic flow might have existed within an internal airstream that was supersonic overall. These could have served as flameholders, localized regions where conditions for combustion were particularly favorable.45

In 1994 CIAM received a contract from NASA, with NASA-Langley providing technical support. The goal now was Mach 6.5, at which supersonic combustion appeared to hold a particularly strong prospect. The original Russian designs had been rated for Mach 6 and were modified to accommodate the higher heat loads at this higher speed. The flight took place in February 1998 and reached Mach 6.4 at 70,000 feet, with the engine operating for 77 seconds.46

It began operation near Mach 3.5. Almost immediately the inlet unstarted due to excessive fuel injection. An onboard control system detected the unstart and reduced the fuel flow, which enabled the inlet to start and to remain started. However, the onboard control failed to detect this restart and failed to permit fuel to flow through the first of the three rows of fuel injectors. Moreover, the inlet performance fell short of predictions due to problems in fabrication.

At Mach 5.5 and higher, airflow entered the fuel-air mixing zone within the combustor at speeds near Mach 2. However, only the two rear rows of injectors were active, and burning of their fuel forced the internal Mach number to subsonic values. The flow reaccelerated to sonic velocity at the combustor exit. The combination of degraded inlet performance and use of only the rear fuel injectors ensured that even at the highest flight speeds, the engine operated primarily in a subsonic-combustion mode and showed little if any supersonic combustion.47

It nevertheless was clear that with better quality control in manufacturing and with better fault tolerance in the onboard control laws, full success might readily be achieved. However, the CIAM design was axisymmetric and hence was of a type that NASA had abandoned during the early 1970s. Such scramjets had played no role in NASP, which from the start had focused on airframe-integrated configurations. The CIAM project had represented an existing effort that was in a position to benefit from even the most modest of allocations; the 1992 flight, for instance, received as little as $200,000 from France.48 But NASA had its eye on a completely American scramjet project that could build on the work of NASP. It took the name Hyper-X and later X-43A.

Its background lay in a 1995 study conducted by McDonnell Douglas, with Pratt & Whitney providing concepts for propulsion. This effort, the Dual-Fuel Airbreathing Hypersonic Vehicle Study, gave conceptual designs for vehicles that could perform two significant missions: weapons delivery and reconnaissance, and operation as the airbreathing first stage of a two-stage-to-orbit launch system. This work drew interest at NASA Headquarters and led the Hypersonic Vehicles Office at NASA-Langley to commission the conceptual design of an experimental airplane that could demonstrate critical technologies required for the mission vehicles.

The Hyper-X design grew out of a concept for a Mach 10 cruise aircraft with length of 200 feet and range of 8,500 nautical miles. It broke with the NASP approach of seeking a highly integrated propulsion package that used an ejector ramjet/LACE as a low-speed system. Instead it returned to the more conservative path of installing separate types of engine. Hydrocarbon-fueled turboramjets were to serve for takeoff, acceleration to Mach 4, and subsonic cruise and landing. Hydrogen-burning scramjets were to take the vehicle to Mach 10. The shape of this vehicle defined that of Hyper-X, which was designed as a detailed scale model that was 12 feet long rather than 200.49

Like the Russian engines, Hyper-X was to fly to its test Mach using a rocket booster. But Hyper-X was to advance beyond the Russian accomplishments by separating from this booster to execute free flight. This separation maneuver proved to be trickier than it looked. Subsonic bombers had been dropping rocket planes into flight since the heyday of Chuck Yeager, and rocket stages had separated in near-vacuum at the high velocities of a lunar mission. However, Hyper-X was to separate at speeds as high as Mach 10 and at 100,000 feet, which imposed strong forces from the airflow. As the project manager David Reubush wrote in 1999, “To the program’s knowledge there has never been a successful separation of two vehicles (let alone a separation of two non-axisymmetric vehicles) at these conditions. Therefore, it soon became obvious that the greatest challenge for the Hyper-X program was, not the design of an efficient scramjet engine, but the development of a separation scenario and the mechanism to achieve it.”50

Engineers at Sandia National Laboratory addressed this issue. They initially envisioned that the rocket might boost Hyper-X to high altitude, with the sepa­ration taking place in near-vacuum. The vehicle then could re-enter and light its scramjet. This approach fell by the wayside when the heat load at Mach 10 proved to exceed the capabilities of the thermal protection system. The next concept called for Hyper-X to ride the underside of its rocket and to be ejected downward as if it were a bomb. But this vehicle then would pass through the bow shock of the rocket and would face destabilizing forces that its control system could not counter.

Sandia’s third suggestion called for holding the vehicle at the front of the rocket using a hinged adapter resembling a clamshell or a pair of alligator jaws. Pyrotech­nics would blow the jaws open, releasing the craft into flight. The open jaws then were to serve as drag brakes, slowing the empty rocket casing while the flight vehicle sailed onward. The main problem was that if the vehicle rolled during separation, one of its wings might strike this adapter as it opened. Designers then turned to an adapter that would swing down as a single piece. This came to be known as the “drop-jaw,” and it served as the baseline approach for a time.51

NASA announced the Hyper-X Program in October 1996, citing a budget of $170 million. In February 1997 Orbital Sciences won a contract to provide the rocket, which again was to be a Pegasus. A month later the firm of Micro Craft Inc. won the contract for the Hyper-X vehicle, with GASL building the engine. Work at GASL went forward rapidly, with that company delivering a scramjet to NASA-Langley in August 1998. NASA officials marked the occasion by changing the name of the flight aircraft to X-43A.52

The issue of separation in flight proved not to be settled, however, and develop­ments early in 1999 led to abandonment of the drop-jaw. This adapter extended forward of the end of the vehicle, and there was concern that while opening it would form shock waves that would produce increased pressures on the rear underside of the flight craft, which again could overtax its control system. Wind-tunnel tests showed that this indeed was the case, and a new separation mechanism again was necessary. This arrangement called for holding the X-43A in position with explo­sive bolts. When they were fired, separate pyrotechnics were to actuate pistons that would push this craft forward, giving it a relative speed of at least 13 feet per second. Further studies and experiments showed that this concept indeed was suitable.53

The minimal size of the X-43A meant that there was little need to keep its weight down, and it came in at 2,800 pounds. This included 900 pounds of tungsten at the nose to provide ballast for stability in flight while also serving as a heat sink. High stiffness of the vehicle was essential to prevent oscillations of the structure that could interfere with the Pegasus flight control system. The X-43A thus was built with steel longerons and with steel skins having thickness of one-fourth inch. The wings were stubby and resembled horizontal stabilizers; they did not mount ailerons but moved as a whole to provide sufficient control authority. The wings and tail surfaces were constructed of temperature-resistant Haynes 230 alloy. Leading edges of the nose, vertical fins, and wings used carbon-carbon. For thermal protection, the vehicle was covered with Alumina Enhanced Thermal Barrier tiles, which resembled the tiles of the space shuttle.54

Additional weight came from the scramjet. It was fabricated of a copper alloy called Glidcop, which was strengthened with very fine particles of aluminum oxide dispersed within. This increased its strength at high temperatures, while retaining the excellent thermal conductivity of copper. This alloy formed the external surface, sidewalls, cowl, and fuel injectors. Some internal surfaces were coated with zirconia to form a thermal barrier that protected the Glidcop in areas of high heating. The engine did not use its hydrogen fuel as a coolant but relied on water cooling for the sidewalls and cowl leading edge. Internal engine seals used braided ceramic rope.55

Because the X-43A was small, its engine tests were particularly realistic. This vehicle amounted to a scale model of a much larger operational craft of the future, but the engine testing involved ground-test models that were full size for the X-43A. Most of the testing took place at NASA-Langley, where the two initial series were conducted at the Arc-Heated Scramjet Test Facility. This wind tunnel was described in 1998 as “the primary Mach 7 scramjet test facility at Langley.”56

Development tests began at the very outset of the Hyper-X Program. The first test article was the Dual-Fuel Experiment (DFX), with a name that reflected links to the original McDonnell Douglas study. The DFX was built in 1996 by modifying existing NASP engine hardware. It provided a test scramjet that could be modified rapidly and inexpensively for evaluation of changes to the flowpath. It was fabricated primarily of copper and used no active cooling, relying on heat sink. This ruled out tests at the full air density of a flight at Mach 7, which would have overheated this engine too quickly for it to give useful data. Even so, tests at reduced air densities gave valuable guidance in designing the flight engine.

The DFX reproduced the full-scale height and length of the Hyper-X engine, correctly replicating details of the forebody, cowl, and sidewall leading edge. The forebody and afterbody were truncated, and the engine width was reduced to 44 percent of the true value so that this test engine could fit with adequate clearances in the test facility. This effort conducted more than 250 tests of the DFX, in four dif­ferent configurations. They verified predicted engine forces and moments as well as inlet and combustor component performances. Other results gave data on ignition requirements, flameholding, and combustor-inlet interactions.

Within that same facility, subsequent tests used the Hyper-X Engine Module (HXEM). It resembled the DFX, including the truncations fore and aft, and it too was of reduced width. But it replicated the design of the flight engine, thereby overcoming limitations of the DFX. The HXEM incorporated the active cooling of the flight version, which opened the door to tests at Mach 7 and at full air density. These took place within the large Eight-Foot High Temperature Tunnel (HTT).

The HTT had a test section that was long enough to accommodate the full 12-foot length of the X-43A underside, which provided major elements of the inlet and nozzle with its airframe-integrated forebody and afterbody. This replica of the underside initially was tested with the HXEM, thereby giving insight into the aerodynamic effects of the truncations. Subsequent work continued to use the HTT and replaced the HXEM with the full-width Hyper-X Flight Engine (HXFE). This was a flight-spare Mach 7 scramjet that had been assigned for use in ground testing.

Mounted on its undersurface, this configuration gave a geometrically accurate nose-to-tail X-43A propulsion flowpath at full scale. NASA-Langley had conducted previous tests of airframe-integrated scramjets, but this was the first to replicate the size and specific details of the propulsion system of a flight vehicle. The HTT heated its air by burning methane, which added large quantities of carbon dioxide and water vapor to the test gas. But it reproduced the Mach, air density, pressure, and temperature of flight at altitude, while gaseous oxygen, added to the airflow, enabled the engine to burn hydrogen fuel. Never before had so realistic a test series been accomplished.57

The thrust of the engine was classified, but as early as 1997 Vince Rausch, the Hyper-X manager at NASA-Langley, declared that it was the best-performing scram­jet that had been tested at his center. Its design called for use of a cowl door that was to protect the engine by remaining closed during the rocket-powered ascent, with this door opening to start the inlet. The high fidelity of the HXFE, and of the test conditions, gave confidence that its mechanism would work in flight. The tests in the HTT included 14 unfueled runs and 40 with fuel. This cowl door was actu­ated 52 times under the Mach 7 test conditions, and it worked successfully every time.58

Aerodynamic wind-tunnel investigations complemented the propulsion tests and addressed a number of issues. The overall program covered all phases of the flight trajectory, using 15 models in nine wind tunnels. Configuration development alone demanded more than 5,800 wind-tunnel runs. The Pegasus rocket called for evaluation of its own aerodynamic characteristics when mated with the X-43A, and these had to be assessed from the moment of being dropped from the B-52 to sepa­ration of the flight vehicle. These used the Lockheed Martin Vought High Speed Wind Tunnel in Grand Prairie, Texas, along with facilities at NASA-Langley that operated at transonic as well as hypersonic speeds.59

Much work involved evaluating stability, control, and performance characteristics of the basic X-43A airframe. This effort used wind tunnels of McDonnell Douglas and Rockwell, with the latter being subsonic. At NASA-Langley, activity focused on that center’s 20-inch Mach 6 and 31-inch Mach 10 facilities. The test models were only one foot in length, but they incorporated movable rudders and wings. Eighteen-inch models followed, which were as large as these tunnels could accommodate, and gave finer increments of the control-surface deflections. Thirty-inch models brought additional realism and underwent supersonic and transonic tests in the Unitary Plan Wind Tunnel and the 16-Foot Transonic Tunnel.60

Similar studies evaluated the methods proposed for separation of the X-43A from its Pegasus booster. Initial tests used Langley’s Mach 6 and Mach 10 tunnels. These were blowdown facilities that did not give long run times, while their test sections were too small to permit complete representations of vehicle maneuvers during separation. But after the drop-jaw concept had been selected, testing moved to tunnel B of the Von Karman Facility at the Arnold Engineering Development Center. This wind tunnel operated with continuous flow, in contrast to the blowdown installations of Langley, and provided a 50-inch-diameter test section for use at Mach 6. It was costly to test in that tunnel but highly productive, and it accommodated models that demonstrated a full range of relative orientations of Pegasus and the X-43A during separation.61

This wind-tunnel work also contributed to inlet development. To enhance overall engine performance, it was necessary for the boundary layer upstream of this inlet to be turbulent. Natural transition to turbulence could not be counted on, which meant that an aerodynamic device of some type was needed to trip the boundary layer into turbulence. The resulting investigations ran from 1997 into 1999 and used both the Mach 6 and Mach 10 Langley wind tunnels, executing more than 300 runs. Hypulse, a shock tunnel at GASL, conducted more than two dozen additional tests.62

Computational fluid dynamics was used extensively. The wind-tunnel tests that supported studies of X-43A separation all were steady-flow experiments, which failed to address issues such as unsteady flow in the gap between the two vehicles as they moved apart. CFD dealt with this topic. Other CFD analyses examined relative orientations of the separating vehicles that were not studied at AEDC. To scale wind-tunnel results for use with flight vehicles, CFD solutions were generated both for the small models under wind-tunnel conditions and for full-size vehicles in flight.63

Flight testing was to be conducted at NASA-Dryden. The first X-43A flight vehi­cle arrived there in October 1999, with its Pegasus booster following in December. Tests of this Pegasus were completed in May 2000, with the flight being attempted a year later. The plan called for acceleration to Mach 7 at 95,000 feet, followed by 10 seconds of powered scramjet operation. This brief time reflected the fact that the engine was uncooled and relied on copper heat sink, but it was long enough to take data and transmit them to the ground. In the words of NASA manager Lawrence Huebner, “we have ground data, we have ground CFD, we have flight CFD—all we need is the flight data.”64

Launch finally occurred in June 2001. Ordinarily, when flying to orbit, Pega­sus was air-dropped at 38,000 feet, and its first stage flew to 207,000 feet prior to second-stage ignition. It used solid propellant and its performance could not readily be altered; therefore, to reduce its peak altitude to the 95,000 feet of the X-43A, it was to be air-dropped at 24,000 feet, even though this lower altitude imposed greater loads.

The B-52 took off from Edwards AFB and headed over the Pacific. The Pega­sus fell away; its first stage ignited five seconds later and it flew normally for some eight seconds that followed. During those seconds, it initiated a pullout to begin its climb. Then one of its elevons came off, followed almost immediately by another. As additional parts fell away, this booster went out of control. It fell tumbling toward the ocean, its rocket motor still firing, and a safety officer sent a destruct signal. The X-43A never had a chance to fly, for it never came close to launch conditions.65

A year later, while NASA was trying to recoup, a small group in Australia beat the Yankees to the punch by becoming the first in the world to fly a scramjet and achieve supersonic combustion. Their project, called HyShot, cost under $2 mil­lion, compared with $185 million for the X-43A program. Yet it had plenty of technical sophistication, including tests in a shock tunnel and CFD simulations using a supercomputer.

Allan Paull, a University of Queensland researcher, was the man who put it together. He took a graduate degree in applied mathematics in 1985 and began working at that university with Ray Stalker, an engineer who had won a global reputation by building a succession of shock tunnels. A few years later Stalker suffered a stroke, and Paull found himself in charge of the program. Then opportunity came knocking, in the form of a Florida-based company called Astrotech Space Operations. That firm was building sounding rockets and wanted to expand its activities into the Asia and Pacific regions.

In 1998 the two parties signed an agreement. Astrotech would provide two Terrier-Orion sounding rockets; Paull and his colleagues would construct experimental scramjets that would ride those rockets. The eventual scramjet design was not airframe-integrated, like that of the X-43A. It was a podded axisymmetric configuration. But it was built in two halves, with one part being fueled with hydrogen while the other part ran unfueled for comparison.66

Paull put together a team of four people—and found that the worst of his problems was what he called an “amazing legal nightmare” that ate up half his time. In the words of the magazine Air & Space, “the team had to secure authorizations from various state government agencies, coordinate with aviation bodies and insurance companies in both Australia and the United States (because of the involvement of U. S. funding), perform environmental assessments, and ensure their launch debris would steer clear of land claimed by Aboriginal tribes…. All told, the preparations took three and a half years.”67

The flight plan called for each Terrier-Orion to accelerate its scramjet onto a ballistic trajectory that was to reach an altitude exceeding 300 kilometers. Near the peak of this flight path, an attitude-control system was to point the rocket down­ward. Once it re-entered the atmosphere, below 40 kilometers, its speed would fall off and the scramjet would ignite. This engine was to operate while continuing to plunge downward, covering distance into an increasingly dense atmosphere, until it lost speed in the lower atmosphere and crashed into the outback.

The flights took place at Woomera Instrumented Range, north of Adelaide. The first launch attempt came at the end of October 2001. It flopped; the first stage performed well, but the second stage went off course. But nine months later, on 30 July 2002, the second shot gained full success. The rocket was canted slightly away from the vertical as it leaped into the air, accelerating at 22 g as it reached Mach 3.6 in only six seconds.

This left it still at low altitude while topping the speed of the SR-71, so after the second stage with payload separated, it coasted for 16 seconds while continuing to ascend. The second stage then ignited, and this time its course was true. It reached a peak speed of Mach 7.7. The scramjet went over the top; it pointed its nose down­ward, and at an altitude of 36 kilometers with its speed approaching Mach 7.8, gaseous hydrogen caused it to begin producing thrust. This continued until HyShot reached 25 kilometers, when it shut down.

It fired for only five seconds. But it returned data over 40 channels, most of which gave pressure readings. NASA itself provided support, with Lawrence Huebner, the X-43A manager, declaring, “We’re very hungry for flight data.” For the moment, at least, the Aussies were in the lead.68

But the firm of Micro Craft had built two more X-43As, and the second flight took place in March 2004. This time the Pegasus first stage had been modified by having part of its propellant removed, to reduce its performance, and the drop alti­tude was considerably higher.69 In the words of Aviation Week,

The B-52B released the 37,500-lb. stack at 40,000 ft. and the Pegasus booster ignited 5 sec. later…. After a few seconds it pulled up and reached a maximum dynamic pressure of 1,650 psf. at Mach 3.5 climbing through 47,000 ft. Above 65,000 ft. it started to push over to a negative angle of attack to kill the climb rate and gain more speed. Burnout was 84 sec. after drop, and at 95 sec. a pair of pistons pushed the X-43A away from the booster at a target condition of Mach 7 and 95,000 ft. and a dynamic pressure of 1,060 psf. in a slight climb before the top of a ballistic arc.

After a brief period of stabilization, the X-43A inlet door was opened to let air in through the engine…. The X-43A stabilized again because the engine airflow changed the trim…. Then silane, a chemical that burns upon contact with air, was injected for 3 sec. to establish flame to ignite the hydrogen. Injection of the gaseous hydrogen fuel ramped up as the silane ramped down, lasting 8 sec. The hydrogen flow rate increased through and beyond a stoichiometric mixture ratio, and then ramped down to a very lean ratio that continued to burn until the fuel was shut off…. The hydrogen was stored in 8,000-psi bottles.

Accelerometers showed the X-43A gained speed while fuel was on…. Data was gathered all the way to the splashdown 450 naut. mi. offshore at about 11 min. after drop.

X-43A mission to Mach 7. (NASA)

Aviation Week added that the vehicle accelerated “while in a slight climb at Mach 7 and 100,000 ft. altitude. The scramjet field is sufficiently challenging that produc­ing thrust greater than drag on an integrated airframe/engine is considered a major accomplishment.”70
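The dynamic-pressure figures quoted above (1,650 psf at Mach 3.5 climbing through 47,000 ft, and 1,060 psf at the Mach 7, 95,000-ft separation point) can be roughly cross-checked from q = ½ρV². The sketch below is illustrative only: the single-scale-height exponential atmosphere and the fixed stratospheric speed of sound are simplifying assumptions of mine, not values from the source, and are good only to perhaps 15-20 percent.

```python
import math

def dynamic_pressure_psf(mach, altitude_ft):
    """Rough dynamic pressure q = 0.5 * rho * V^2, in pounds per square foot.

    Assumes a crude exponential atmosphere (one scale height) and a
    constant stratospheric speed of sound; ballpark accuracy only.
    """
    rho0 = 0.002377   # sea-level air density, slug/ft^3
    H = 23800.0       # assumed density scale height, ft
    a = 968.0         # assumed speed of sound in the stratosphere, ft/s
    rho = rho0 * math.exp(-altitude_ft / H)
    v = mach * a
    return 0.5 * rho * v * v

# Separation condition quoted above: Mach 7 at 95,000 ft, about 1,060 psf.
print(round(dynamic_pressure_psf(7.0, 95000)))   # lands near 1,000 psf
```

Even this crude model reproduces the quoted values to within a few hundred psf, which is why flight planners could trade drop altitude against loads as described in the surrounding text.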

In this fashion, NASA executed its first successful flight of a scramjet. The overall accomplishment was not nearly as ambitious as that planned for the Incremental Flight Test Vehicle of the 1960s, for which the velocity increase was to have been much greater. Nor did NASA have a follow-on program in view that could draw on the results of the X-43A. Still, the agency now could add the scramjet to its list of flight engines that had been successfully demonstrated.

The program still had one unexpended X-43A vehicle that was ready to fly, and it flew successfully as well, in November. The goal now was Mach 10. This called for beefing up the thermal structure by adding leading edges of solid carbon-carbon to the vertical tails along with a coating of hafnium carbide and by making the nose blunter to increase the detachment of the bow shock. These changes indeed were necessary. Nose temperatures reached 3,600°F, compared with 2,600°F on the Mach 7 flight, and heating rates were twice as high.

The Pegasus rocket, with the X-43A at its front, fell away from its B-52 carrier aircraft at 40,000 feet. Its solid rocket took the combination to Mach 10 at 110,000 feet. Several seconds after burnout, pistons pushed the X-43A away at Mach 9.8. Then, 2.5 seconds after separation, the engine inlet door opened and the engine began firing at Mach 9.65. It ran initially with silane to ensure ignition; then the engine continued to operate with silane off, for comparison. It fired for a total of 10 to 12 seconds and then continued to operate with the fuel off. Twenty-one seconds after separation, the inlet door closed and the vehicle entered a hypersonic glide. This continued for 14 minutes, with the craft returning data by telemetry until it struck the Pacific Ocean and sank.

This flight gave a rare look at data taken under conditions that could not be duplicated on the ground using continuous-flow wind tunnels. The X-43A had indeed been studied in 0.005-second runs within shock tunnels, and Aviation Week noted that Robert Bakos, vice president of GASL, described such tests as having done “a very good job of predicting the flight.” Dynamic pressure during the flight was 1,050 pounds per square foot, and the thrust approximately equaled the drag. In addition, the engine achieved true supersonic combustion, without internal pockets of subsonic flow. This meant that the observations could be scaled to still higher Mach values.71

Flight Test

The first important step in this direction came in January 1955, when the Air Force issued a letter contract to Lockheed that authorized them to proceed with the X-17. It took shape as a three-stage missile, with all three stages using solid-propel­lant rocket motors from Thiokol. It was to reach Mach 15, and it used a new flight mode called “over the top.”

The X-17 was not to fire all three stages to achieve a very high ballistic trajec­tory. Instead it started with only its first stage, climbing to an altitude of 65 to 100 miles. Descent from such an altitude imposed no serious thermal problems. As it re-entered the atmosphere, large fins took hold and pointed it downward. Below 100,000 feet, the two upper stages fired, again while pointing downward. These stages accelerated a test nose cone to maximum speed, deep within the atmosphere. This technique prevented the nose cone from decelerating at high altitude, which would have happened with a very high ballistic flight path. Over-the-top also gave good control of both the peak Mach and of its altitude of attainment.

The accompanying table summarizes the results. Following a succession of sub­scale and developmental flights that ran from 1955 into 1956, the program con­ducted two dozen test firings in only eight months. The start was somewhat shaky as no more than two of the first six X-17s gained full success, but the program soon settled down to routine achievement. The simplicity of solid-propellant rock­etry enabled the flights to proceed with turnaround times of as little as four days. Launches required no more than 40 active personnel, with as many as five such flights taking place within the single month of October 1956. All of them flew from a single facility: Pad 3 at Cape Canaveral.59

X-17 FLIGHT TESTS

Date          Nose-Cone Shape     Results
17 Jul 1956   Hemisphere          Mach 12.4 at 40,000 feet.
27 Jul 1956   Cubic Paraboloid    Third stage failed to ignite.
18 Aug 1956   Hemisphere          Missile exploded 18 sec. after launch.
23 Aug 1956   Blunt               Mach 12.4 at 38,500 feet.
28 Aug 1956   Blunt               Telemetry lost prior to apogee.
8 Sep 1956    Cubic Paraboloid    Upper stages ignited while ascending.
1 Oct 1956    Hemisphere          Mach 12.1 at 36,500 feet.
5 Oct 1956    Hemisphere          Mach 13.7 at 54,000 feet.
13 Oct 1956   Cubic Paraboloid    Mach 13.8 at 58,500 feet.
18 Oct 1956   Hemisphere          Mach 12.6 at 37,000 feet.
25 Oct 1956   Blunt               Mach 14.2 at 59,000 feet.
5 Nov 1956    Blunt (Avco)        Mach 12.6 at 41,100 feet.
16 Nov 1956   Blunt (Avco)        Mach 13.8 at 57,000 feet.
23 Nov 1956   Blunt (Avco)        Mach 11.3 at 34,100 feet.
3 Dec 1956    Blunt (Avco)        Mach 13.8 at 47,700 feet.
11 Dec 1956   Blunt Cone (GE)     Mach 11.4 at 34,000 feet.
8 Jan 1957    Blunt Cone (GE)     Mach 11.5 at 34,600 feet.
15 Jan 1957   Blunt Cone (GE)     Upper stages failed to ignite.
29 Jan 1957   Blunt Cone (GE)     Missile destroyed by Range Safety.
7 Feb 1957    Blunt Cone (GE)     Mach 14.4 at 57,000 feet.
14 Feb 1957   Hemisphere          Mach 12.1 at 35,000 feet.
1 Mar 1957    Blunt Cone (GE)     Mach 11.4 at 35,600 feet.
11 Mar 1957   Blunt (Avco)        Mach 11.3 at 35,500 feet.
21 Mar 1957   Blunt (Avco)        Mach 13.2 at 54,500 feet.

Source: “Re-Entry Test Vehicle X-17,” pp. 30, 32.

Many nose cones approached or topped Mach 12 at altitudes below 40,000 feet. This was half the speed of a satellite, at altitudes where airliners fly today. One places this in perspective by noting that the SR-71 cruised above Mach 3, one-fourth this speed, and at 85,000 feet, which was more than twice as high. Thermal problems dominated its design, with this spy plane being built as a titanium hot structure. The X-15 reached Mach 6.7 in 1967, half the speed of an X-17 nose cone, and at 102,000 feet. Its structure was Inconel X heat sink, and it had further protection from a spray-on ablative. Yet it sustained significant physical damage due to high temperatures and never again approached that mark.60

Another noteworthy flight involved a five-stage NACA rocket that was to accomplish its own over-the-top mission. It was climbing gently at 96,000 feet when the third stage ignited. Telemetry continued for an additional 8.2 seconds and then suddenly cut off, with the fifth stage still having half a second to burn. The speed was Mach 15.5 at 98,500 feet. The temperature on the inner surface of the skin was 2,500°F, close to the melting point, with this temperature rising at nearly 5,300°F per second.61

How then did X-17 nose cones survive flight at nearly this speed, but at little more than one-third the altitude? They did not. They burned up in the atmosphere. They lacked thermal protection, whether heat sink or ablative (which the Air Force, the X-17’s sponsor, had not invented yet), and no attempt was made to recover them. The second and third stages ignited and burned to depletion in only 3.7 seconds, with the thrust of these stages being 102,000 and 36,000 pounds, respectively.62 Acceleration therefore was extremely rapid; exposure to conditions of very high Mach was correspondingly brief. The X-17 thus amounted to a flying shock tube. Its nose cones lived only long enough to return data; then they vanished into thin air.

Yet these data were priceless. They included measurements of boundary-layer transition, heat transfer, and pressure distributions, covering a broad range of peak Mach values, altitudes, and nose-cone shapes. The information from this program complemented the data from Avco Research Laboratory, contributing materially to Air Force decisions that selected ablation for Atlas (and for Titan, a second ICBM), while retaining heat sink for Thor.63

As the X-17 went forward during 1956 and 1957, the Army weighed in with its own flight-test effort. Here were no over-the-top heroics, no ultrashort moments at high Mach with nose cones built to do their duty and die. The Army wanted nothing less than complete tests of true ablating nose cones, initially at subscale and later at full scale, along realistic ballistic trajectories. The nose cones were to survive re-entry. If possible, they were to be recovered from the sea.

The launch vehicle was the Jupiter-C, another product of Von Braun. It was based on the liquid-fueled Redstone missile, which was fitted with longer propellant tanks to extend the burning time. Atop that missile rode two additional stages, both of which were built as clusters of small solid-fuel rockets.

The first flight took place from Cape Canaveral in September 1956. It carried no nose cone; this launch had the purpose of verifying the three-stage design, particularly its methods for stage separation and ignition. A dummy solid rocket rode atop this stack as a payload. All three stages fired successfully, and the flight broke all performance records. The payload reached a peak altitude of 682 miles and attained an estimated range of 3,335 miles.64

Flight Test

Thor missile with heat-sink nose cone. (U.S. Air Force)

Nose-cone tests followed during 1957. Each cone largely duplicated that of the Jupiter missile but was less than one-third the size, having a length of 29 inches and maximum diameter of 20 inches. The weight was 314 pounds, of which 83 pounds constituted the mix of glass cloth and Micarta plastic that formed the ablative material. To aid in recovery in the ocean, each nose cone came equipped with a balloon for flotation, two small bombs to indicate position for sonar, a dye marker, a beacon light, a radio transmitter—and shark repellant, to protect the balloon from attack.65

Jupiter nose cone. (U.S. Army)

The first nose-cone flight took place in May. Telemetry showed that the re-entry vehicle came through the atmosphere successfully and that the ablative thermal pro­tection indeed had worked. However, a faulty trajectory caused this nose cone to fall 480 miles short of the planned impact point, and this payload was not recovered.

Full success came with the next launch, in August. All three stages again fired, pushing the nose cone to a range of 1,343 statute miles. This was shorter than the planned range of Jupiter, 1,725 miles, but still this payload experienced 95 percent of the total heat transfer that it would have received at the tip for a full-range flight. The nose cone also was recovered, giving scientists their first close look at one that had actually survived.66

In November President Eisenhower personally displayed it to the nation. The Soviets had stirred considerable concern by placing two Sputnik satellites in orbit, thus showing that they already had an ICBM. Speaking on nationwide radio and television, Ike sought to reassure the public. He spoke of American long-range bombers and then presented his jewel: “One difficult obstacle on the way to pro­ducing a useful long-range weapon is that of bringing a missile back from outer space without its burning up like a meteor. This object here in my office is the nose cone of an experimental missile. It has been hundreds of miles into outer space and back. Here it is, completely intact.”67

Jupiter then was in flight test and became the first missile to carry a full-size nose cone to full range.68 But the range of Jupiter was far shorter than that of Atlas. The Army had taken an initial lead in nose-cone testing by taking advantage of its early start, but by the time of that flight—May 1958—all eyes were on the Air Force and on flight to intercontinental range.

Atlas also was in flight test during 1958, extending its range in small steps, but it still was far from ready to serve as a test vehicle for nose cones. To attain 5,000-mile range, Air Force officials added an upper stage to the Thor. The resulting rocket, the Thor-Able, indeed had the job of testing nose cones. An early model, from General Electric, weighed more than 600 pounds and carried 700 pounds of instruments.69

Two successful flights, both to full range, took place during July 1958. The first one reached a peak altitude of 1,400 miles and flew 5,500 miles to the South Atlantic. Telemetered data showed that its re-entry vehicle survived the fiery passage through the atmosphere, while withstanding four times the heat load of a Thor heat-sink nose cone. This flight carried a passenger, a mouse named Laska in honor of what soon became the 49th state. Little Laska lived through decelerations during re-entry that reached 60 g, due to the steepness of the trajectory, but the nose cone was not recovered and sank into the sea. Much the same happened two weeks later, with the mouse being named Wickie. Again the re-entry vehicle came through the atmosphere successfully, but Wickie died for his country as well, for this nose cone also sank without being recovered.70

A new series of tests went forward during 1959, as General Electric introduced the RVX-1 vehicle. Weighing 645 pounds, 67 inches long with a diameter at the base of 28 inches, it was a cylinder with a very blunt nose and a conical afterbody for stability.71 A flight in March used phenolic nylon as the ablator. This was a phenolic resin containing randomly oriented one-inch-square pieces of nylon cloth. Light weight was its strong suit; with a density as low as 72 pounds per cubic foot, it was only slightly denser than water. It also was highly effective as insulation. Following flight to full range, telemetered data showed that a layer only a quarter-inch thick could limit the temperature rise on the aft body, which was strongly heated, to less than 200°F. This was well within the permissible range for aluminum, the most familiar of aerospace materials. For the nose cap, where the heating was strongest, GE installed a thick coating of molded phenolic nylon.72

Within this new series of flights, new guidance promised enhanced accuracy and a better chance of retrieval. Still, that March flight was not recovered, with another shot also flying successfully but again sinking beneath the waves. When the first recovery indeed took place, it resulted largely from luck.

Early in April an RVX-1 made a flawless flight, soaring to 764 miles in altitude and sailing downrange to 4,944 miles. Peak speed during re-entry was Mach 20, or 21,400 feet per second. Peak heating occurred at Mach 16, or 15,000 feet per second, and at 60,000 feet. The nose cone took this in stride, but searchers failed to detect its radio signals. An Avco man in one of the search planes saved the situation by spotting its dye marker. Aircraft then orbited the position for three hours until a recovery vessel arrived and picked it up.73

It was the first vehicle to fly to intercontinental range and return for inspection. Avco had specified its design, using an ablative heat shield of fused opaque quartz. Inspection of the ablated surface permitted comparison with theory, and the results were described as giving “excellent agreement.” The observed value of maximum ablated thickness was 9 percent higher than the theoretical value. The weight loss of ablated material agreed within 20 percent, while the fraction of ablated material that vaporized during re-entry was only 3 percent higher than the theoretical value. Most of the differences could be explained by the effect of impurities on the viscos­ity of opaque quartz.74

A second complete success was achieved six weeks later, again with a range of 5,000 miles. Observers aboard a C-54 search aircraft witnessed the re-entry, acquired the radio beacon, and then guided a recovery ship to the site.75 This time the nose-cone design came from GE. That company’s project engineer, Walter Schafer, wanted to try several materials and to instrument them with breakwire sensors. These were wires, buried at various depths within the ablative material, that would break as it eroded away and thus disclose the rate of ablation. GE followed a suggestion from George Sutton and installed each material as a 60-degree segment around the cylinder and afterbody, with the same material being repeated every 180 degrees for symmetry.76
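The breakwire technique lends itself to a simple illustration: each wire, buried at a known depth, breaks when the receding surface reaches it, and a straight-line fit of depth against break time yields the mean recession rate. A hypothetical sketch with invented numbers (nothing below is from the flight data):

```python
# Illustrative sketch (invented numbers): inferring an ablation rate from
# breakwire sensors. Each wire breaks when the ablating surface reaches its
# burial depth; a least-squares fit of depth vs. break time gives the rate.
break_times_s = [1.0, 2.0, 3.0, 4.0]        # assumed times of wire breakage, seconds
wire_depths_in = [0.05, 0.11, 0.16, 0.22]   # assumed burial depths, inches

n = len(break_times_s)
t_mean = sum(break_times_s) / n
d_mean = sum(wire_depths_in) / n

# Least-squares slope: rate = cov(t, d) / var(t)
rate = (sum((t - t_mean) * (d - d_mean)
            for t, d in zip(break_times_s, wire_depths_in))
        / sum((t - t_mean) ** 2 for t in break_times_s))

print(f"Mean ablation rate: {rate:.3f} inches per second")
```

The fit smooths over scatter in individual wire breaks, which is why several wires per material segment were worth carrying.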

Within the fast-paced world of nose-cone studies, each year had brought at least one new flight vehicle. The X-17 had flown during 1956. For the Jupiter-C, success had come in 1957. The year 1958 brought both Jupiter and the Thor-Able. Now, in 1959, the nose-cone program was to gain final success by flying full-size re-entry vehicles to full range aboard Atlas.

The program had laid important groundwork in November 1958, when this missile first flew to intercontinental distance. The test conductor, with the hopeful name of Bob Shotwell, pushed the button and the rocket leaped into the night. It traced an arc above the Moon as it flew across the starry sky. It dropped its twin booster engines; then, continuing to accelerate, the brilliant light of its main engine faded. Now it seemed to hang in the darkness like a new star, just below Orion. Shotwell and his crew contained their enthusiasm for a full seven minutes; then they erupted in shouts. They had it; the missile was soaring at 16,000 miles per hour, bound for a spot near the island of St. Helena in the South Atlantic, a full 6,300 miles from the Cape. In Shotwell’s words, “We knew we had done it. It was going like a bullet; nothing could stop it.”77

Atlas could carry far heavier loads than Thor-Able, and its first nose cone reflected this. It was the RVX-2, again from General Electric, which had the shape of a long cone with a round tip. With a length of 147 inches and …, it flew to a range of 5,047 miles in July 1959 and was recovered. It thereby became the largest object to have been brought back following re-entry.78

Nose cones used in flight test. Top, RVX-1; bottom, RVX-2. (U.S. Air Force)

Attention now turned to developmental tests of a nose cone for the operational Atlas. This was the Mark 3, also from GE. Its design returned to the basic RVX-1 configuration, with a longer conical afterbody. It was slightly smaller than the RVX-2, with a length of 115 inches, diameter at the cylinder of 21 inches, and diameter at the base of 36 inches. Phenolic nylon was specified throughout for thermal protection, being molded under high pressure for the nose cap and tape-wound on the cylinder and afterbody. The Mark 3 weighed 2,140 pounds, making it somewhat lighter than the RVX-2. The low density of phenolic nylon showed itself anew, for of this total … were full-range, with one of them flying 5,000 miles to Ascension Island and another going 6,300 miles. Re-entry speeds went as high as 22,500 feet per second. Peak heat transfer occurred near Mach 14 and 40,000 feet in altitude, approximating the conditions of the X-17 tests. The air at that height was too thin to breathe, but the nose cone set up a shock wave that compressed the incoming flow, producing a wind resistance with dynamic pressure of more than 30 atmospheres. Temperatures at the nose reached 6,500°F.81

Each re-entry vehicle was extensively instrumented, mounting nearly two dozen breakwire ablation sensors along with pressure and temperature sensors. The latter were resistance thermometers employing 0.0003-inch tungsten wire, reporting temperatures to 2,000°F with an accuracy of 25 to 50°F. The phenolic nylon showed anew that it had the right stuff, for it absorbed heat at the rate of 3,000 BTU per pound, making it three times as effective as boiling water. A report from GE noted, “all temperature sensors located on the cylindrical section were at locations too far below the initial surface to register a temperature rise.”82
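The comparison with boiling water can be checked with one line of arithmetic. A sketch; the phenolic-nylon figure is from the text, while the latent heat of vaporization of water is a standard handbook value, not given in the source:

```python
# One-line check of the "three times as effective as boiling water" claim.
# Phenolic nylon's heat absorption is from the text; water's latent heat of
# vaporization (~970 BTU/lb at atmospheric pressure) is a handbook assumption.
phenolic_nylon_btu_per_lb = 3_000
water_latent_heat_btu_per_lb = 970

ratio = phenolic_nylon_btu_per_lb / water_latent_heat_btu_per_lb
print(f"Phenolic nylon absorbs about {ratio:.1f} times the heat of boiling water")
```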

With this, the main effort in re-entry reached completion, and its solution— ablation—had proved to be relatively simple. The process resembled the charring of wood. Indeed, Kantrowitz recalls Von Braun suggesting that it was possible to build a nose cone of lightweight balsa soaked in water and frozen. In Kantrowitz’s words, “That might be a very reasonable ablator.”83

Experience with ablation also contrasted in welcome fashion with a strong ten­dency of advanced technologies to rely on highly specialized materials. Nuclear energy used uranium-235, which called for the enormous difficulty of isotope separation, along with plutonium, which had to be produced in a nuclear reac­tor and then be extracted from highly radioactive spent fuel. Solid-state electronics depended on silicon or germanium, but while silicon was common, either element demanded refinement to exquisite levels of purity.

Ablation was different. Although wood proved inappropriate, once the basic concept was in hand the problem became one of choosing the best candidate from a surprisingly wide variety of possibilities. These generally were commercial plas­tics that served as binders, with the main heat resistance being provided by glass or silica. Quartz also worked well, particularly after being rendered opaque, while pyrolytic graphite exemplified a new material with novel properties.

The physicist Steven Weinberg, winner of a Nobel Prize, stated that a researcher never knows how difficult a problem is until the solution is in hand. In 1956 Theo­dore von Karman had described re-entry as “perhaps one of the most difficult prob­lems one can imagine. It is certainly a problem that constitutes a challenge to the best brains working in these domains of modern aerophysics.”84 Yet in the end, amid all the ingenuity of shock tubes and arc tunnels, the fundamental insights derived from nothing deeper than testing an assortment of candidate materials in the blast of rocket engines.

Hypersonics and the Space Shuttle

During the mid-1960s, two advanced flight projects sought to lay technical groundwork for an eventual reusable space shuttle. ASSET, which flew first, pro­gressed beyond Dyna-Soar by operating as a flight vehicle that used a hot structure, placing particular emphasis on studies of aerodynamic flutter. PRIME, which fol­lowed, had a wingless and teardrop-shaped configuration known as a lifting body. Its flight tests exercised this craft in maneuvering entries. Separate flights, using piloted lifting bodies, were conducted for landings and to give insight into their handling qualities.

From the perspective of ASSET and PRIME then, one would have readily con­cluded that the eventual shuttle would be built as a hot structure and would have the aerodynamic configuration of a lifting body. Indeed, initial shuttle design stud­ies, late in the 1960s, followed these choices. However, they were not adopted in the final design.

The advent of a highly innovative type of thermal protection, Lockheed’s reus­able “tiles,” completely changed the game in both the design and the thermal areas. Now, instead of building the shuttle with the complexities of a hot structure, it could be assembled as an aluminum airplane of conventional type, protected by the tiles. Lifting bodies also fell by the wayside, with the shuttle having wings. The Air Force insisted that these be delta wings that would allow the shuttle to fly long distances to the side of a trajectory. While NASA at first preferred simple straight wings, in time it agreed.

The shuttle relied on carbon-carbon for thermal protection in the hottest areas. It was structurally weak, but this caused no problem for more than 100 missions. Then in 2003, damage to a wing leading edge led to the loss of Columbia. It was the first space disaster to bring the death of astronauts due to failure of a thermal protection system.

Recent Advances in Fluid Mechanics

The methods of this field include ground test, flight test, and CFD. Ground-test facilities continue to show their limitations, with no improvements presently in view that would advance the realism of tests beyond Mach 10. A recently announced Air Force project, Mariah, merely underscores this point. This installation, to be built at AEDC, is to produce flows up to Mach 15 that are to run for as long as 10 seconds, in contrast to the milliseconds of shock tunnels. Mariah calls for a powerful electron beam to create an electrically charged airflow that can be accelerated with magnets. But this installation will require an e-beam of 200 megawatts. This is well beyond the state of the art, and even with support from a planned research program, Mariah is not expected to enter service until 2015.72

Similar slow progress is evident in CFD, for which the flow codes of recent projects have amounted merely to updates of those used in NASP. In designing the X-43A, the most important such code was the General Aerodynamic Simulation Program (GASP). NASP had used version 2.0; the X-43A used 3.0. The latter continued to incorporate turbulence models. Results from the codes often showed good agreement with test, but this was because the codes had been benchmarked extensively with wind-tunnel data; the agreement did not reflect reliance on first principles at higher Mach.

Engine studies for the X-43A used their own codes, which again amounted to those of NASP. GASP 3.0 had the relatively recent date of 1996, but other pertinent litera­ture showed nothing more recent than 1993, with some papers dating to the 1970s.73

The 2002 design of ISTAR, a rocket-based combined-cycle engine, showed that specialists were using codes that were considerably more current. Studies of the forebody and inlet used OVERFLOW, from 1999, while analysis of the combustor used VULCAN version 4.3, with a users’ manual published in March 2002. OVERFLOW used equilibrium chemistry while VULCAN included finite-rate chemistry, but both solved the Navier-Stokes equations by using a two-equation turbulence model. This was no more than had been done during NASP, more than a decade earlier.74

The reason for this lack of progress can be understood with reference to Karl Marx, who wrote that people’s thoughts are constrained by their tools of produc­tion. The tools of CFD have been supercomputers, and during the NASP era the best of them had been rated in gigaflops, billions of floating-point operations per second.75 Such computations required the use of turbulence models. But recent years have seen the advent of teraflop machines. A list of the world’s 500 most pow­erful is available on the Internet, with the accompanying table giving specifics for the top 10 of November 2004, along with number 500.

One should not view this list as having any staying power. Rather, it gives a snap­shot of a technology that is advancing with extraordinary rapidity. Thus, in 1980 NASA was hoping to build the Numerical Aerodynamic Simulator, and to have it online in 1986. It was to be the world’s fastest supercomputer, with a speed of one gigaflop (0.001 teraflop), but it would have fallen below number 500 as early as 1994. Number 500 of 2004, rated at 850 gigaflops, would have been number one as recently as 1996. In 2002 Japan’s Earth Simulator was five times faster than its nearest rivals. In 2004 it had fallen to third place.76

Today’s advances in speed are being accomplished both by increasing the number of processors and by multiplying the speed of each such unit. The ancestral Illiac-4, for instance, had 64 processors and was rated at 35 megaflops.77 In 2004 IBM’s BlueGene was two million times more powerful. This happened both because it had 512 times more processors—32,768 rather than 64—and because each individual processor had 4,000 times more power. Put another way, a single BlueGene processor could do the work of two Numerical Aerodynamic Simulator concepts of 1980.
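The back-of-envelope comparison in this paragraph can be reproduced directly from the figures given. A sketch; the one-gigaflop rating of the 1980 Numerical Aerodynamic Simulator concept is taken from the preceding discussion:

```python
# Reproducing the processor arithmetic in the text (all figures from the text,
# with the 1980 Numerical Aerodynamic Simulator concept rated at one gigaflop,
# i.e. 1,000 megaflops, per the surrounding discussion).
illiac4_processors = 64
illiac4_megaflops = 35
bluegene_processors = 32_768
overall_speedup = 2_000_000  # "two million times more powerful"

processor_count_ratio = bluegene_processors // illiac4_processors  # 512
per_processor_speedup = overall_speedup / processor_count_ratio    # ~3,900, i.e. "4,000 times"

# One BlueGene processor versus the planned 1980 one-gigaflop machine:
per_processor_megaflops = (illiac4_megaflops / illiac4_processors) * per_processor_speedup
nas_equivalents = per_processor_megaflops / 1_000  # ~2 Numerical Aerodynamic Simulators

print(processor_count_ratio, round(per_processor_speedup), round(nas_equivalents, 1))
```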

Analysts are using this power. The NASA-Ames aerodynamicist Christian Stemmer, who has worked with a four-teraflop machine, notes that it achieved this speed by using vectors, strings of 256 numbers, but that much of its capability went unused when his vector held only five numbers, representing five chemical species. The computation also slowed when finding the value of a single constant or when taking square roots, which is essential when calculating the speed of sound. Still, he adds, “people are happy if they get 50 percent” of a computer’s rated performance. “I do get 50 percent, so I’m happy.”78

THE WORLD’S FASTEST SUPERCOMPUTERS (Nov. 2004; updated annually)

Rank | Name | Manufacturer | Location | Year | Rated speed (gigaflops) | Processors
1 | BlueGene | IBM | Rochester, NY | 2004 | 70,720 | 32,768
2 | Numerical Aerodynamic Simulator | Silicon Graphics | NASA-Ames | 2004 | 51,870 | 10,160
3 | Earth Simulator | Nippon Electric | Yokohama, Japan | 2002 | 35,860 | 5,120
4 | Mare Nostrum | IBM | Barcelona, Spain | 2004 | 20,530 | 3,564
5 | Thunder | California Digital Corporation | Lawrence Livermore National Laboratory | 2004 | 19,940 | 4,096
6 | ASCI Q | Hewlett-Packard | Los Alamos National Laboratory | 2002 | 13,880 | 8,192
7 | System X | Self-made | Virginia Tech | 2004 | 12,250 | 2,200
8 | BlueGene (prototype) | IBM, Livermore | Rochester, NY | 2004 | 11,680 | 8,192
9 | eServer p Series 655 | IBM | Naval Oceanographic Office | 2004 | 10,310 | 2,944
10 | Tungsten | Dell | National Center for Supercomputer Applications | 2003 | 9,819 | 2,500
500 | Superdome 875 | Hewlett-Packard | SBC Service, Inc. | 2004 | 850.6 | 416

Source: http://www.top500.org/list/2004/11

Teraflop ratings, representing a thousand-fold advance over the gigaflops of NASP and subsequent projects, are required because the most demanding problems in CFD are four-dimensional, including three physical dimensions as well as time. William Cabot, who uses the big Livermore machines, notes that “to get an increase in resolution by a factor of two, you need 16” as the increase in computational speed, because the time step must also be reduced. “When someone says, ‘I have a new computer that’s an order of magnitude better,’” Cabot continues, “that’s about a factor of 1.8. That doesn’t impress people who do turbulence.”79
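Cabot’s scaling argument can be sketched in two lines, assuming (as the text implies) that computational cost grows as the fourth power of resolution:

```python
# A sketch of Cabot's argument: CFD problems are four-dimensional (three space
# axes plus time), so halving the grid spacing also halves the time step.
# Assumption: computational cost scales as resolution to the fourth power.
cost_factor_for_double_resolution = 2 ** 4       # doubling resolution needs 16x the compute
resolution_gain_per_10x_compute = 10 ** (1 / 4)  # an "order of magnitude" buys only ~1.8x resolution

print(cost_factor_for_double_resolution, round(resolution_gain_per_10x_compute, 2))
```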

But the new teraflop machines increase the resolution by a factor of 10. This opens the door to two new topics in CFD: Large-Eddy Simulation (LES) and Direct Numerical Simulation (DNS).

One approaches the pertinent issues by examining the structure of turbulence within a flow. The overall flowfield has a mean velocity at every point. Within it, there are turbulent eddies that span a very broad range of stress. The largest carry most of the turbulent energy and accomplish most of the turbulent mixing, as in a combustor. The smaller eddies form a cascade, in which those of different sizes are intermingled. Energy flows down this cascade, from the larger to the smaller ones, and while turbulence is often treated as a phenomenon that involves viscosity, the transfer of energy along the cascade takes place through inviscid processes. However, viscosity becomes important at the level of the smallest eddies, which were studied by Andrei Kolmogorov in the Soviet Union and hence define what is called the Kolmogorov scale of turbulence. At this scale, viscosity, which is an intermolecular effect, dissipates the energy from the cascade into heat. The British meteorologist Lewis Richardson, who introduced the concept of the cascade in 1922, summarized the matter in a memorable sendup of a poem by England’s Jonathan Swift:

Big whorls have little whorls
Which feed on their velocity;
And little whorls have lesser whorls,
And so on to viscosity.80
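The smallest, viscosity-dominated eddies in this cascade are set by the Kolmogorov length scale, a standard result of turbulence theory that the paragraph above alludes to. A hedged sketch; the viscosity and dissipation rate below are illustrative assumptions, not figures from the text:

```python
# The Kolmogorov length scale eta = (nu**3 / epsilon) ** 0.25 marks the eddy
# size at which viscosity dissipates the cascade's energy into heat. Standard
# turbulence theory; the numbers below are illustrative assumptions.
nu = 1.5e-5    # kinematic viscosity of air near room conditions, m^2/s
epsilon = 1.0  # assumed turbulent dissipation rate, m^2/s^3

eta = (nu ** 3 / epsilon) ** 0.25
print(f"Kolmogorov scale: about {eta * 1e6:.0f} micrometres")
```

Because the energy-bearing eddies can be many orders of magnitude larger than this scale, resolving the full cascade on one grid is what makes DNS so computationally demanding.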

In studying a turbulent flow, DNS computes activity at the Kolmogorov scale and may proceed into the lower levels of the cascade. It cannot go far because the sizes of the turbulent eddies span several orders of magnitude, which cannot be captured using computational grids of realistic size. Still, DNS is the method of choice for studies of transition to turbulence, which may predict its onset. Such simulations directly reproduce the small disturbances within a laminar flow that grow to produce turbulence. They do this when they first appear, making it possible to observe their growth. DNS is very computationally intensive and remains far from ready for use with engineering problems. Even so, it stands today as an active topic for research.

LES is farther along in development. It directly simulates the large energy-bearing eddies and goes onward into the upper levels of the cascade. Because its computations do not capture the complete physics of turbulence, LES continues to rely on turbulence models to treat the energy flow in the cascade along with the Kolmogorov-scale dissipation. But in contrast to the turbulence models of present-day codes, those of LES have a simple character that applies widely across a broad range of flows. In addition, their errors have limited consequence for a flow as a whole, in an inlet or combustor under study, because LES accurately captures the physics of the large eddies and therefore removes errors in their modeling at the outset.81

The first LES computations were published in 1970 by James Deardorff of the National Center for Atmospheric Research.82 Dean Chapman, Director of Astro­nautics at NASA-Ames, gave a detailed review of CFD in the 1979 AIAA Dryden Lectureship in Research, taking note of the accomplishments and prospects of LES.83 However, the limits of computers restricted the development of this field. More than a decade later Luigi Martinelli of Princeton University, a colleague of Antony Jameson who had established himself as a leading writer of flow codes, declared that “it would be very nice if we could run a large-eddy simulation on a full three-dimensional configuration, even a wing.” Large eddies were being simulated only for simple cases such as flow in channels and over flat plates, and even then the computations were taking as long as 100 hours on a Cray supercomputer.84

Since 1995, however, the Center for Turbulence Research has come to the forefront as a major institution where LES is being developed for use as an engineering tool. It is part of Stanford University and maintains close ties both with NASA-Ames and with Lawrence Livermore National Laboratory. At this center, Kenneth Jansen published LES studies of flow over a wing in 1995 and 1996, treating a NACA 4412 airfoil at maximum lift.85 More recent work has used LES in studies of reacting flows within a combustor of an existing jet engine of Pratt & Whitney’s PW6000 series. The LES computation found a mean pressure drop across the injector of 4,588 pascals, which differs by only two percent from the observed value of 4,500 pascals. This compares with a value of 5,660 pascals calculated using a Reynolds-averaged Navier-Stokes code, which thus showed an error of 26 percent, an order of magnitude higher.86
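The error percentages quoted for the PW6000 combustor study follow directly from the three pressure values given in the text; a small check:

```python
# Checking the quoted error percentages for the PW6000 combustor study
# (all three pressure values are taken from the text).
observed_pa = 4_500
les_pa = 4_588
rans_pa = 5_660

les_error_pct = abs(les_pa - observed_pa) / observed_pa * 100    # ~2 percent
rans_error_pct = abs(rans_pa - observed_pa) / observed_pa * 100  # ~26 percent

print(f"LES error: {les_error_pct:.1f}%  RANS error: {rans_error_pct:.1f}%")
```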

Because LES computes turbulence from first principles, by solving the Navier-Stokes equations on a very fine computational grid, it holds high promise as a means for overcoming the limits of ground testing in shock tunnels at high Mach. The advent of LES suggests that it indeed may become possible to compute one’s way to orbit, obtaining accurate results even for such demanding problems as flow in a scramjet that is flying at Mach 17.

Parviz Moin, director of the Stanford center, cautions that such flows introduce shock waves, which do not appear in subsonic engines such as the PW6000 series, and are difficult to treat using currently available methods of LES. But his colleague Heinz Pitsch anticipates rapid progress. He predicted in 2003 that LES will first be applied to scramjets in university research, perhaps as early as 2005. He adds that by 2010 “LES will become the state of the art and will become the method of choice” for engineering problems, as it emerges from universities and begins to enter the mainstream of CFD.87

The X-15

Across almost half a century, the X-15 program stands out to this day not only for its achievements but for its audacity. At a time when the speed record stood right at Mach 2, the creators of the X-15 aimed for Mach 7—and nearly did it.[1] More­over, the accomplishments of the X-15 contrast with the history of an X-planes program that saw the X-1A and X-2 fall out of the sky due to flight instabilities, and in which the X-3 fell short in speed because it was underpowered.1

The X-15 is all the more remarkable because its only significant source of aero­dynamic data was Becker’s 11-inch hypersonic wind tunnel. Based on that instru­ment alone, the Air Force and NACA set out to challenge the potential difficulties of hypersonic piloted flight. They succeeded, with this aircraft setting speed and altitude marks that were not surpassed until the advent of the space shuttle.

It is true that these agencies worked at a time of rapid advance, when perfor­mance was leaping forward at rates never approached either before or since. Yet there was more to this craft than a can-do spirit. Its designers faced specific technical issues and overcame them well before the first metal was cut.

The X-3 had failed because it proved infeasible to fit it with the powerful tur­bojet engines that it needed. The X-15 was conceived from the start as relying on rocket power, which gave it a very ample reserve.

Flight instability was already recognized as a serious concern. Using Becker’s hypersonic tunnel, the aerodynamicist Charles McLellan showed that the effective­ness of tail surfaces could be greatly increased by designing them with wedge-shaped profiles.2

The X-15 was built particularly to study problems of heating in high-speed flight, and there was the question of whether it might overheat when re-entering the atmosphere following a climb to altitude. Calculations showed that the heating would remain within acceptable bounds if the airplane re-entered with its nose high. This would present its broad underbelly to the oncoming airflow. Here was a new application of the Allen-Eggers blunt-body principle, for an airplane with its nose up effectively became blunt.

The plane’s designers also benefited from a stroke of serendipity. Like any airplane, the X-15 was to reduce its weight by using stressed-skin construction; its outer skin was to share structural loads with internal bracing. Knowing the stresses this craft would encounter, the designers produced straightforward calculations to give the requisite skin gauges. A separate set of calculations gave the skin thicknesses that were required for the craft to absorb its heat of re-entry without weakening. The two sets of skin gauges were nearly the same! This meant that the skin could do double duty, bearing stress while absorbing heat. It would not have to thicken excessively, thus adding weight, to cope with the heat.

Yet for all the ingenuity that went into this preliminary design, NACA was a very small tail on a very large dog in those days, and the dog was the Air Force. NACA alone lacked the clout to build anything, which is why one sees military insignia on photos of the X-planes of that era. Fortuitously, two new inventions—the twin-spool and the variable-stator turbojet—were bringing the Air Force face to face with a new era in flight speed. Ramjet engines also were in development, promising still higher speed. The X-15 thus stood to provide flight-test data of the highest importance—and the Air Force grabbed the concept and turned it into reality.

Preludes: Asset and Lifting Bodies

At the end of the 1950s, ablatives stood out both for the ICBM and for return from space. Insulated hot structures, as on Dyna-Soar, promised reusability and lighter weight but were less developed.

As early as August 1959, the Flight Dynamics Laboratory at Wright-Patterson Air Force Base launched an in-house study of a small recoverable boost-glide vehicle that was to test hot structures during re-entry. From the outset there was strong interest in problems of aerodynamic flutter. This was reflected in the concept name: ASSET, or Aerothermodynamic/elastic Structural Systems Environmental Tests.

ASSET won approval as a program late in January 1961. In April of that year the firm of McDonnell Aircraft, which was already building Mercury capsules, won a contract to develop the ASSET flight vehicles. Initial thought had called for use of the solid-fuel Scout as the booster. Soon, however, it became clear that the program could use the Thor for greater power. The Air Force had deployed these missiles in England. When they came home, during 1963, they became available for use as launch vehicles.

ASSET, showing peak temperatures. (U.S. Air Force)

ASSET took shape as a flat-bottomed wing-body craft that used the low-wing configuration recommended by NASA-Langley. It had a length of 59 inches and a span of 55 inches. Its bill of materials closely resembled that of Dyna-Soar, for it used TZM to withstand 3,000°F on the forward lower heat shield, graphite for similar temperatures on the leading edges, and zirconia rods for the nose cap, which was rated at 4,000°F. But ASSET avoided the use of Rene 41, with cobalt and columbium alloys being employed instead.1

ASSET was built in two varieties: the Aerothermodynamic Structural Vehicle (ASV), weighing 1,130 pounds, and the Aerothermodynamic Elastic Vehicle (AEV), at 1,225 pounds. The AEVs were to study panel flutter along with the behavior of a trailing-edge flap, which represented an aerodynamic control surface in hypersonic flight. These vehicles did not demand the highest possible flight speeds and hence flew with single-stage Thors as the boosters. But the ASVs were built to study mate­rials and structures in the re-entry environment, while taking data on temperatures, pressures, and heat fluxes. Such missions demanded higher speeds. These boost – glide craft therefore used the two-stage Thor-Delta launch vehicle, which resembled
the Thor-Able that had conducted nose-cone tests at intercontinental range as early as 1958.2

The program conducted six flights, which had the following planned values of range and of altitude and velocity at release:

ASSET Flight Tests

Date                Vehicle   Booster      Velocity,      Altitude,   Range,
                                           feet/second    feet        nautical miles
18 September 1963   ASV-1     Thor         16,000         205,000       987
24 March 1964       ASV-2     Thor-Delta   18,000         195,000     1,800
22 July 1964        ASV-3     Thor-Delta   19,500         225,000     1,830
27 October 1964     AEV-1     Thor         13,000         168,000       830
8 December 1964     AEV-2     Thor         13,000         187,000       620
23 February 1965    ASV-4     Thor-Delta   19,500         206,000     2,300

Source: Hallion, Hypersonic, pp. 505, 510-519.

Several of these craft were to be recovered. Following standard practice, their launches were scheduled for the early morning, to give downrange recovery crews the maximum hours of daylight. This did not help ASV-1, the first flight in the program, which sank into the sea. Still, it flew successfully and returned good data. In addition, this flight set a milestone. In the words of historian Richard Hallion, “for the first time in aerospace history, a lifting reentry spacecraft had successfully returned from space.”3

ASV-2 followed, using the two-stage Thor-Delta, but it failed when the second stage did not ignite. The next one carried ASV-3, with this mission scoring a double achievement. It not only made a good flight downrange but was successfully recov­ered. It carried a liquid-cooled double-wall test panel from Bell Aircraft, along with a molybdenum heat-shield panel from Boeing, home of Dyna-Soar. ASV-3 also had a new nose cap. The standard ASSET type used zirconia dowels, 1.5 inches long by 0.5 inch in diameter, that were bonded together with a zirconia cement. The new cap, from International Harvester, had a tungsten base covered with thorium oxide and was reinforced with tungsten.

A company advertisement stated that it withstood re-entry so well that it “could have been used again,” and this was true for the craft as a whole. Hallion writes that “overall, it was in excellent condition. Water damage…caused some problems, but not so serious that McDonnell could not have refurbished and reflown the vehicle.” The Boeing and Bell panels came through re-entry without damage, and the importance of physical recovery was emphasized when columbium aft leading edges showed significant deterioration. They were redesigned, with the new versions going into subsequent ASV and AEV spacecraft.4

The next two flights were AEVs, each of which carried a flutter test panel and a test flap. AEV-1 returned only one high-Mach data point, at Mach 11.88, but this sufficed to indicate that its panel was probably too stiff to undergo flutter. Engi­neers made it thinner and flew a new one on AEV-2, where it returned good data until it failed at Mach 10. The flap experiment also showed value. It had an elec­tric motor that deflected it into the airstream, with potentiometers measuring the force required to move it, and it enabled aerodynamicists to critique their theories. Thus, one treatment gave pressures that were in good agreement with observations, whereas another did not.

ASV-4, the final flight, returned "the highest quality data of the ASSET program," according to the flight test report. The peak speed of 19,400 feet per second, Mach 18.4, was the highest in the series and was well above the design speed of 18,000 feet per second. The long hypersonic glide covered 2,300 nautical miles and prolonged the data return, which presented pressures at 29 locations on the vehicle and temperatures at 39. An onboard system transferred mercury ballast to trim the angle of attack, increasing L/D from its average of 1.2 to 1.4 and extending the trajectory. The only important problem came when the recovery parachute failed to deploy properly and ripped away, dooming ASV-4 to follow ASV-1 into the depths of the Atlantic.5
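The effect of that trim maneuver can be illustrated with the classical equilibrium-glide range equation of Sänger and of Eggers, Allen, and Neice, in which glide range grows in direct proportion to L/D. The sketch below is only an order-of-magnitude illustration under textbook assumptions (equilibrium glide over a round, non-rotating Earth); the parameter values for circular orbital velocity and Earth radius are conventional figures, not numbers from the ASSET flight reports.

```python
import math

def glide_range_nm(l_over_d, v_fps, v_circ_fps=25_900.0, r_earth_nm=3_440.0):
    """Classical equilibrium-glide range estimate:
    R = (L/D) * (R_e / 2) * ln(1 / (1 - (V / V_c)^2)),
    with release velocity V, circular orbital velocity V_c,
    and Earth radius R_e (here in nautical miles)."""
    ratio_sq = (v_fps / v_circ_fps) ** 2
    return l_over_d * (r_earth_nm / 2.0) * math.log(1.0 / (1.0 - ratio_sq))

# ASV-4's release speed of about 19,400 ft/s at the two trim settings:
r_untrimmed = glide_range_nm(1.2, 19_400)  # roughly 1,700 nautical miles
r_trimmed = glide_range_nm(1.4, 19_400)    # roughly 2,000 nautical miles
```

Even this crude estimate shows why shifting ballast to raise trimmed L/D from 1.2 to 1.4 was worth doing: range scales linearly with L/D, so the change bought on the order of an extra 300 nautical miles of glide.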

On the whole, ASSET nevertheless scored a host of successes. It showed that insulated hot structures could be built and flown without producing unpleasant surprises, at speeds up to three-fourths of orbital velocity. It dealt with such practical issues of design as fabrication, fasteners, and coatings. In hypersonic aerodynamics, ASSET contributed to understanding of flutter and of the use of movable con­trol surfaces. The program also developed and successfully used a reaction control system built for a lifting re-entry vehicle. Only one flight vehicle was recovered in four attempts, but it complemented the returned data by permitting a close look at a hot structure that had survived its trial by fire.

A separate prelude to the space shuttle took form during the 1960s as NASA and the Air Force pursued a burgeoning interest in lifting bodies. The initial concept represented one more legacy of the blunt-body principle of H. Julian Allen and Alfred Eggers at NACA's Ames Aeronautical Laboratory. After developing this principle, they considered that a re-entering body, while remaining blunt to reduce its heat load, might produce lift and thus gain the ability to maneuver at hypersonic speeds. An early configuration, the M-1 of 1957, featured a blunt-nosed cone with a flattened top. It showed some capacity for hypersonic maneuver but could not glide subsonically or land on a runway. A new shape, the M-2, appeared as a slender half-cone with its flat side up. Its hypersonic L/D of 1.4 was nearly triple that of the M-1. Fitted with two large vertical fins for stability, it emerged as a basic configuration that was suitable for further research.6

Dale Reed, an engineer at NASA’s Flight Research Center, developed a strong interest in the bathtub-like shape of the M-2. He was a sailplane enthusiast and a builder of radio-controlled model aircraft. With support from the local community of airplane model builders, he proceeded to craft the M-2 as a piloted glider. Desig­nating it as the M2-F1, he built it of plywood over a tubular steel frame. Completed early in 1963, it was 20 feet long and 13 feet across.

It needed a vehicle that could tow it into the air for initial tests. However, it produced too much drag for NASA's usual vans and trucks, and Reed needed a tow car with more power. He and his friends bought a stripped-down Pontiac with a big engine and a four-barrel carburetor that reached speeds of 110 miles per hour. They took it to a funny-car shop in Long Beach for modification. Like any other flightline vehicle, it was painted yellow with "National Aeronautics and Space Administration" on its side. Early tow tests showed enough success to allow the project to use a C-47, called the Gooney Bird, for true aerial flights. During these tests the Gooney Bird towed the M2-F1 above 10,000 feet and then set it loose to glide to an Edwards AFB lakebed. Beginning in August 1963, the test pilot Milt Thompson did this repeatedly. Reed thus showed that although the basic M-2 shape had been crafted for hypersonic re-entry, it could glide to a safe landing.

As he pursued this work, he won support from Paul Bikle, the director of NASA Flight Research Center. As early as April 1963, Bikle alerted NASA Headquarters that “the lifting-body concept looks even better to us as we get more into it.” The success of the M2-F1 sparked interest within the Air Force as well. Some of its offi­cials, along with their NASA counterparts, went on to pursue lifting-body programs that called for more than plywood and funny cars. An initial effort went beyond the M2-F1 by broadening the range of lifting-body shapes while working to develop satisfactory landing qualities.7

NASA contracted with the firm of Northrop to build two such aircraft: the M2-F2 and HL-10. The M2-F2 amounted to an M2-F1 built to NASA standards; the HL-10 drew on an alternate lifting-body design by Eugene Love of NASA-Langley. This meant that both Langley and Ames now had a project. The Air Force effort, the X-24A, went to the Martin Company. It used a design of Frederick Raymes at the Aerospace Corporation that resembled a teardrop fitted with two large fins.

All three flew initially as gliders, with a B-52 rather than a C-47 as the mother ship. The lifting bodies mounted small rocket engines for acceleration to supersonic speeds, thereby enabling tests of stability and handling qualities in transonic flight. The HL-10 set records for lifting bodies by making safe approaches and landings at Edwards from speeds up to Mach 1.86 and altitudes of 90,000 feet.8

Acceptable handling qualities were not easy to achieve. Under the best of circumstances, a lifting body flew like a brick at low speeds. Lowering the landing gear made the problem worse by adding drag, and test pilots delayed this deployment as long as possible. In May 1967 the pilot Bruce Peterson, flying the M2-F2, failed to get his gear down in time. The aircraft hit the lakebed at more than 250 mph, rolled over six times, and then came to rest on its back minus its cockpit canopy, main landing gear, and right vertical fin. Peterson, who might have died in the crash, got away with a skull fracture, a mangled face, and the loss of an eye. While surgeons reconstructed his face and returned him to active duty, the M2-F2 underwent surgery as well. Back at Northrop, engineers installed a center fin and a roll-control system that used reaction jets, while redistributing the internal weights. Jerauld Gentry, an Air Force test pilot, said that these changes turned "something I really did not enjoy flying at all into something that was quite pleasant to fly."9

The manned lifting-body program sought to turn these hypersonic shapes into aircraft that could land on runways, but the Air Force was not about to overlook the need for tests of their hypersonic performance during re-entry. The program that addressed this issue took shape with the name PRIME, Precision Recovery Including Maneuvering Entry. Martin Marietta, builder of the X-24A, also developed the PRIME flight vehicle, the SV-5D that later was referred to as the X-23. Although it was only seven feet in length, it faithfully duplicated the shape of the X-24A, even including a small bubble-like protrusion near the front that represented the cockpit canopy.

PRIME complemented ASSET, with both programs conducting flight tests of boost-glide vehicles. However, while ASSET pushed the state of the art in materials and hot structures, PRIME used ablative thermal protection for a more straightforward design and emphasized flight performance. Accelerated to near-orbital velocities by Atlas launch vehicles, the PRIME missions called for boost-glide flight from Vandenberg AFB to locations in the western Pacific near Kwajalein Atoll. The SV-5D had higher L/D than Gemini or Apollo, and as with those NASA programs, it was to demonstrate precision re-entry. The plans called for crossrange, with the vehicle flying up to 710 nautical miles to the side of a ballistic trajectory and then arriving within 10 miles of its recovery point.10

The X-24A was built of aluminum. The SV-5D used this material as well, for both the skin and primary structure. It mounted both aerodynamic and reaction controls, with the former taking shape as right and left body-mounted flaps set well aft. Used together, they controlled pitch; used individually, they produced yaw and roll. These flaps were beryllium plates that served as thermal heat sinks. The fins were of steel honeycomb with surfaces of beryllium sheet.

Lifting bodies. Left to right: the X-24A, the M2-F3 (modified from the M2-F2), and the HL-10. (NASA)

Landing a lifting body. The wingless X-24B required a particularly high angle of attack. (NASA)

Martin SV-5D, which became the X-23. (U.S. Air Force)

Mission of the SV-5D. (U.S. Air Force)

Trajectory of the SV-5D, showing crossrange. (U.S. Air Force)

Most of the vehicle surface obtained thermal protection from ESA 3560 HF, a flexible ablative blanket of phenolic fiberglass honeycomb that used a silicone elastomer as the filler, with fibers of nylon and silica holding the ablative char in place during re-entry. ESA 5500 HF, a high-density form of this ablator, gave added protection in hotter areas. The nose cap and the beryllium flaps used a different material: a carbon-phenolic composite. At the nose, its thickness reached 3.5 inches.11

The PRIME program made three flights, which took place between December 1966 and April 1967. All returned data successfully, with the third flight vehicle also being recovered. The first mission reached 25,300 feet per second and flew 4,300 miles downrange, missing its target by only 900 feet. The vehicle executed pitch maneuvers but made no attempt at crossrange. The next two flights indeed achieved crossrange, of 500 and 800 nautical miles, and the precision again was impressive. Flight 2 missed its aim point by less than two miles. Flight 3 missed by more than four miles, but this still was within the allowed limit. Moreover, the terminal guid­ance radar had been inoperative, which probably contributed to the lack of absolute accuracy.12

By demonstrating both crossrange and high accuracy during maneuvering entry, PRIME broadened the range of hypersonic aircraft configurations and completed a line of development that dated to 1953. In December of that year the test pilot Chuck Yeager had nearly been killed when his X-1A fell out of the sky at Mach 2.44 because it lacked tail surfaces that could produce aerodynamic stability. The X-15 was to fly to Mach 6, and Charles McLellan of NACA-Langley showed that it could use vertical fins of reasonable size if they were wedge-shaped in cross section. Meanwhile, Allen and Eggers were introducing their blunt-body principle. This led to missile nose cones with rounded tips, designed both as cones and as blunted cylinders that had stabilizing afterbodies in the shape of conic frustums.

For manned flight, Langley's Maxime Faget introduced the general shape of a cone with its base forward, protected by an ablative heat shield. Langley's John Becker entered the realm of winged re-entry configurations with his low-wing flat-bottom shapes that showed advantage over the high-wing flat-top concepts of NACA-Ames. The advent of the lifting body then raised the prospect of a structurally efficient shape that dispensed with wings, which demanded thermal protection and added weight, and yet could land on a runway. Faget's designs had found application in Mercury, Gemini, and Apollo, while Becker's winged vehicle had provided a basis for Dyna-Soar. As NASA looked to the future, both winged designs and lifting bodies were in the forefront.13