Facing the Heat Barrier: A History of Hypersonics

The Advent of NASP

With test engines well on their way in development, there was the prospect of experimental aircraft that might exercise them in flight test. Such a vehicle might come forth as a successor to Number 66671, the X-15 that had been slated to fly the HRE. An aircraft of this type indeed took shape before long, with the designation X-30. However, it did not originate purely as a technical exercise. Its background lay in presidential politics.

The 1980 election took place less than a year after the Soviets invaded Afghanistan. President Jimmy Carter had placed strong hope in arms control and had negotiated a major treaty with his Soviet counterpart, Leonid Brezhnev. But the incursion into Afghanistan took Carter by surprise and destroyed the climate of international trust that was essential for Senate ratification of this treaty. Reagan thus came to the White House with arms-control prospects on hold and with the Cold War once more in a deep freeze. He responded by launching an arms buildup that particularly included new missiles for Europe.29

Peace activist Randall Forsberg replied by taking the lead in calling for a nuclear freeze, urging the superpowers to halt the “testing, production and deployment of nuclear weapons” as an important step toward “lessening the risk of nuclear war.” His arguments touched a nerve within the general public, for within two years, support for a freeze topped 70 percent. Congressman Edward Markey introduced a nuclear-freeze resolution in the House of Representatives. It failed by a margin of only one vote, with Democratic gains in the 1982 mid-term elections making passage a near certainty. By the end of that year half the states in the Union adopted their own freeze resolutions, as did more than 800 cities, counties, and towns.30

To Reagan, a freeze was anathema. He declared that it “would be largely unverifiable…. It would reward the Soviets for their massive military buildup while preventing us from modernizing our aging and increasingly vulnerable forces.” He asserted that Moscow held a “present margin of superiority” and that a freeze would leave America “prohibited from catching up.”31

With the freeze ascendant, Admiral James Watkins, the Chief of Naval Operations, took a central role in seeking an approach that might counter its political appeal. Exchanges with Robert McFarlane and John Poindexter, deputies within the National Security Council, drew his thoughts toward missile defense. Then in January 1983 he learned that the Joint Chiefs were to meet with Reagan on 11 February. As preparation, he met with a group of advisors that included the physicist Edward Teller.

Trembling with passion, Teller declared that there was enormous promise in a new concept: the x-ray laser. This was a nuclear bomb that was to produce intense beams of x-rays that might be aimed to destroy enemy missiles. Watkins agreed that the broad concept of missile defense indeed was attractive. It could introduce a new prospect: that America might counter the Soviet buildup, not with a buildup of its own but by turning to its strength in advanced technology.

Watkins succeeded in winning support from his fellow Joint Chiefs, including the chairman, General John Vessey. Vessey then gave Reagan a half-hour briefing at the 11 February meeting, as he drew extensively on the views of Watkins. Reagan showed strong interest and told the Chiefs that he wanted a written proposal. Robert McFarlane, Deputy to the National Security Advisor, already had begun to explore concepts for missile defense. During the next several weeks his associates took the lead in developing plans for a program and budget.32

On 23 March 1983 Reagan spoke to the nation in a televised address. He dealt broadly with issues of nuclear weaponry. Toward the end of the speech, he offered new thoughts:

“Let me share with you a vision of the future which offers hope. It is that we embark on a program to counter the awesome Soviet missile threat with measures that are defensive. Let us turn to the very strengths in technology that spawned our great industrial base and that have given us the quality of life we enjoy today.

What if free people could live secure in the knowledge that their security did not rest upon the threat of instant U.S. retaliation to deter a Soviet attack, that we could intercept and destroy strategic ballistic missiles before they reached our own soil or that of our allies?…

I call upon the scientific community in our country, those who gave us nuclear weapons, to turn their great talents now to the cause of mankind and world peace, to give us the means of rendering these nuclear weapons impotent and obsolete.”33

The ensuing Strategic Defense Initiative never deployed weapons that could shoot down a missile. Yet from the outset it proved highly effective in shooting down the nuclear freeze. That movement reached its high-water mark in May 1983, as a strengthened Democratic majority in the House indeed passed Markey’s resolution. But the Senate was still held by Republicans, and the freeze went no further. The SDI gave everyone something new to talk about. Reagan’s speech helped him to regain the initiative, and in 1984 he swept to re-election with an overwhelming majority.34

The SDI brought the prospect of a major upsurge in traffic to orbit, raising the prospect of a flood of new military payloads. SDI supporters asserted that some one hundred orbiting satellites could provide an effective strategic defense, although the Union of Concerned Scientists, a center of criticism, declared that the number would be as large as 2,400. Certainly, though, an operational missile defense was likely to place new and extensive demands on means for access to space.

Within the Air Force Systems Command, there already was interest in a next-generation single-stage-to-orbit launch vehicle that was to use the existing Space Shuttle Main Engine. Lieutenant General Lawrence Skantze, Commander of the Air Force Systems Command’s Aeronautical Systems Division (ASD), launched work in this area early in 1982 by directing the ASD planning staff to conduct an in-house study of post-shuttle launch vehicles. It then went forward under the leadership of Stanley Tremaine, the ASD’s Deputy for Development Planning, who christened these craft as Transatmospheric Vehicles. In December 1984 Tremaine set up a TAV Program Office, directed by Lieutenant Colonel Vince Rausch.35

Transatmospheric Vehicle concepts, 1984. (U.S. Air Force)

Moreover, General Skantze was advancing into high-level realms of command, where he could make his voice heard. In August 1982 he went to Air Force Headquarters, where he took the post of Deputy Chief of Staff for Research, Development, and Acquisition. This gave him responsibility for all Air Force programs in these areas. In October 1983 he pinned on his fourth star as he took an appointment as Air Force Vice Chief of Staff. In August 1984 he became Commander of the Air Force Systems Command.36

He accepted these Washington positions amid growing military disenchantment with the space shuttle. Experience was showing that it was costly and required a long time to prepare for launch. There also was increasing concern for its safety, with a 1982 Rand Corporation study flatly predicting that as many as three shuttle orbiters would be lost to accidents during the life of the program. The Air Force was unwilling to place all its eggs in such a basket. In February 1984 Defense Secretary Caspar Weinberger approved a document stating that total reliance on the shuttle “represents an unacceptable national security risk.” Air Force Secretary Edward Aldridge responded by announcing that he would remove 10 payloads from the shuttle beginning in 1988 and would fly them on expendables.37

Just then the Defense Advanced Research Projects Agency was coming to the forefront as an important new center for studies of TAV-like vehicles. DARPA was already reviving the field of flight research with its X-29, which featured a forward-swept wing along with an innovative array of control systems and advanced materials. Robert Cooper, DARPA’s director, held a strong interest in such projects and saw them as a way to widen his agency’s portfolio. He found encouragement during 1982 as a group of ramjet specialists met with Richard De Lauer, the Undersecretary of Defense Research and Engineering. They urged him to keep the field alive with enough new funds to prevent them from having to break up their groups. De Lauer responded with letters that he sent to the Navy, Air Force, and DARPA, asking them to help.38

This provided an opening for Tony duPont, who had designed the HRE. He had taken a strong interest in combined-cycle concepts and decided that the scramLACE was the one he preferred. It was to eliminate the big booster that every ramjet needed, by using an ejector, but experimental versions weren’t very powerful. DuPont thought he could do better by using the HRE as a point of departure, as he added an auxiliary inlet for LACE and a set of ejector nozzles upstream of the combustor. He filed for a patent on his engine in 1970 and won it two years later.39

In 1982 he still believed in it, and he learned that Anthony Tether was the DARPA man who had been attending TAV meetings. The two men met several times, with Tether finally sending him up to talk with Cooper. Cooper listened to duPont and sent him over to Robert Williams, one of DARPA’s best aerodynamicists. Cooper declares that Williams “was the right guy; he knew the most in this area. This wasn’t his specialty, but he was an imaginative fellow.”40

Williams had come up within the Navy, working at its David Taylor research center. His specialty was helicopters; he had initiated studies of the X-wing, which was to stop its rotor in midair and fly as a fixed-wing aircraft. He also was interested in high-speed flight. He had studied a missile that was to fight what the Navy called the “outer air battle,” which might use a scramjet. This had brought him into discussions with Fred Billig, who also worked for the Navy and helped him to learn his hypersonic propulsion. He came to DARPA in 1981 and joined its Tactical Technologies Office, where he became known as the man to see if anyone was interested in scramjets.41

Williams now phoned duPont and gave him a test: “I’ve got a very ambitious problem for you. If you think the airplane can do this, perhaps we can promote a program. Cooper has asked me to check you out.” The problem was to achieve single-stage-to-orbit flight with a scramjet and a suite of heat-resistant materials, and duPont recalls his response: “I stayed up all night; I was more and more intrigued with this. Finally I called him back: ‘Okay, Bob, it’s not impossible. Now what?’”42

DuPont had been using a desktop computer, and Williams and Tether responded to his impromptu calculations by giving him $30,000 to prepare a report. Soon Williams was broadening his circle of scramjet specialists by talking with old-timers such as Arthur Thomas, who had been conducting similar studies a quarter-century earlier, and who quickly became skeptical. DuPont had patented his propulsion concept, but Thomas saw it differently: “I recognized it as a Marquardt engine. Tony called it the duPont cycle, which threw me off, but I recognized it as our engine. He claimed he’d improved it.” In fact, “he’d made a mistake in calculating the heat capacity of air. So his engine looked so much better than ours.”

Thomas nevertheless signed on to contribute to the missionary work, joining Williams and duPont in giving presentations to other conceptual-design groups. At Lockheed and Boeing, they found themselves talking to other people who knew scramjets. As Thomas recalls, “The people were amazed at the component efficiencies that had been assumed in the study. They got me aside and asked if I really believed it. Were these things achievable? Tony was optimistic everywhere: on mass fraction, on air drag of the vehicle, on inlet performance, on nozzle performance, on combustor performance. The whole thing, across the board. But what salved our conscience was that even if these weren’t all achieved, we still could have something worthwhile. Whatever we got would still be exciting.”43

Williams recalls that in April 1984, “I put together a presentation for Cooper called ‘Resurrection of the Aerospaceplane.’ He had one hour; I had 150 slides. He came in, sat down, and said Go. We blasted through those slides. Then there was silence. Cooper said, ‘I want to spend a day on this.’” After hearing additional briefings, he approved a $5.5-million effort known as Copper Canyon, which brought an expanded program of studies and analyses.44

Copper Canyon represented an attempt to show how the SDI could achieve its access to space, and a number of high-level people responded favorably when Cooper asked to give a briefing. He and Williams made a presentation to George Keyworth, Reagan’s science advisor. They then briefed the White House Science Council. Keyworth recalls that “here were people who normally would ask questions for hours. But after only about a half-hour, David Packard said, ‘What’s keeping us? Let’s do it!’” Packard was Deputy Secretary of Defense.45

During 1985, as Copper Canyon neared conclusion, the question arose of expanding the effort with support from NASA and the Air Force. Cooper attended a classified review and as he recalls, “I went into that meeting with a high degree of skepticism.” But technical presentations brought him around: “For each major problem, there were three or four plausible ways to deal with it. That’s extraordinary. Usually it’s—‘Well, we don’t know exactly how we’ll do it, but we’ll do it.’ Or, ‘We have a way to do it, which may work.’ It was really a surprise to me; I couldn’t pick any obvious holes in what they had done. I could find no reason why they couldn’t go forward.”46

Further briefings followed. Williams gave one to Admiral Watkins, whom Cooper describes as “very supportive, said he would commit the Navy to support of the program.” Then in July, Cooper accompanied Williams as they gave a presentation to General Skantze.

They displayed their viewgraphs and in Cooper’s words, “He took one look at our concept and said, ‘Yeah, that’s what I meant. I invented that idea.’” Not even the stars on his shoulders could give him that achievement, but his endorsement reflected the fact that he was dissatisfied with the TAV studies. He had come away appreciating that he needed something better than rocket engines—and here it was. “His enthusiasm came from the fact that this was all he had anticipated,” Cooper continues. “He felt as if he owned it.”

Skantze wanted more than viewgraphs. He wanted to see duPont’s engine in operation. A small version was under test at GASL, without LACE but definitely with its ejector, and one technician had said, “This engine really does put out static thrust, which isn’t obvious for a ramjet.” Skantze saw the demonstration and came away impressed. Then, Williams adds, “the Air Force system began to move with the speed of a spaceplane. In literally a week and a half, the entire Air Force senior command was briefed.”

Initial version of the duPont engine under test at GASL. (GASL)

Later that year the Secretary of Defense, Caspar Weinberger, granted a briefing. With him were members of his staff, along with senior people from NASA and the military services. After giving the presentation, Williams recalls that “there was silence in the room. The Secretary said, ‘Interesting,’ and turned to his staff. Of course, all the groundwork had been laid. All of the people there had been briefed, and we could go for a yes-or-no decision. We had essentially total unanimity around the table, and he decided that the program would proceed as a major Defense Department initiative. With this, we moved immediately to issue requests for proposal to industry.”47

In January 1986 the TAV effort was formally terminated. At Wright-Patterson AFB, the staff of its program office went over to a new Joint Program Office that now supported what was called the National Aerospace Plane. It brought together representatives from the Air Force, Navy, and NASA. Program management remained at DARPA, where Williams retained his post as the overall manager.48

In this fashion, NASP became a significant federal initiative. It benefited from a rare alignment of the political stars, for Reagan’s SDI cried out for better launch vehicles and Skantze was ready to offer them. Nor did funding appear to be a problem, at least initially. Reagan had shown favor to aerospace through such acts as approving NASA’s space station in 1984. Pentagon spending had surged, and DARPA’s Cooper was asserting that an X-30 might be built for an affordable cost.

Yet NASP was a leap into the unknown. Its scramjets now were in the forefront but not because the Langley research had shown that they were ready. Instead they were a focus of hope because Reagan wanted SDI, SDI needed better access to space, and Skantze wanted something better than rockets.

The people who were making Air Force decisions, such as Skantze, did not know much about these engines. The people who did know them, such as Thomas, were well aware of duPont’s optimism. There thus was abundant opportunity for high hope to give way to hard experience.

Origins of the Scramjet

The airflow within a ramjet was subsonic. This resulted from its passage through one or more shocks, which slowed, compressed, and heated the flow. This was true even at high speed, with the Mach 4.31 flight of the X-7 also using a subsonic-combustion ramjet. Moreover, because shocks become stronger with increasing Mach, ramjets could achieve greater internal compression of the flow at higher speeds. This increase in compression improved the engine’s efficiency.
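The trend is easy to tabulate with the standard normal-shock relation for an ideal gas. The following is a minimal sketch, assuming gamma = 1.4; the Mach values are illustrative, with 4.31 echoing the X-7 flight noted above.

```python
# Minimal sketch: static-pressure ratio across a normal shock (ideal gas,
# gamma = 1.4), showing how ram compression strengthens with Mach number.
GAMMA = 1.4

def normal_shock_pressure_ratio(mach: float) -> float:
    """Return p2/p1 across a normal shock with upstream Mach number > 1."""
    return 1.0 + 2.0 * GAMMA / (GAMMA + 1.0) * (mach ** 2 - 1.0)

for m in (1.5, 2.0, 3.0, 4.31):   # 4.31 echoes the X-7 flight cited above
    print(f"Mach {m:4.2f}: p2/p1 = {normal_shock_pressure_ratio(m):5.1f}")
```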


Comparative performance of scramjets and other engines. Airbreathers have very high performance because they are “energy machines,” which burn fuel to heat air. Rockets have much lower performance because they are “momentum machines,” which physically expel flows of mass. (Courtesy of William Escher)

Still, there were limits to a ramjet’s effectiveness. Above Mach 5, designers faced increasingly difficult demands for thermal protection of an airframe and for cooling of the ramjet duct. With the internal flow being very hot, it became more difficult to add still more heat by burning fuel, without overtaxing the materials or the cooling arrangements. If the engine were to run lean to limit the temperature rise in the combustor, its thrust would fall off. At still higher Mach levels, the issue of heat addition through combustion threatened to become moot. With high internal temperatures promoting dissociation of molecules of air, combustion reactions would not go to completion and hence would cease to add heat.

A promising way around this problem involved doing away with a requirement for subsonic internal flow. Instead this airflow was to be supersonic and was to sustain combustion. Right at the outset, this approach reduced the need for internal cooling, for this airflow would not heat up excessively if it was fast enough. This relatively cool internal airflow also could continue to gain heat through combustion. It would avoid problems due to dissociation of air or failure of chemical reactions in combustion to go to completion. On paper, there now was no clear upper limit to speed. Such a vehicle might even fly to orbit.

Yet while a supersonic-combustion ramjet offered tantalizing possibilities, right at the start it posed a fundamental issue: was it feasible to burn fuel in the duct of such an engine without producing shock waves? Such shocks could produce severe internal heating, destroying the benefits of supersonic combustion by slowing the flow to subsonic speeds. Rather than seeking to achieve shock-free supersonic combustion in a duct, researchers initially bypassed this difficulty by addressing a simpler problem: demonstration of combustion in a supersonic free-stream flow.

The earliest pertinent research appears to have been performed at the Applied Physics Laboratory (APL), during or shortly after World War II. Machine gunners in aircraft were accustomed to making their streams of bullets visible by making every twentieth round a tracer, which used a pyrotechnic. They hoped that a gunner could walk his bullets into a target by watching the glow of the tracers, but experience showed that the pyrotechnic action gave these bullets trajectories of their own. The Navy then engaged two research centers to look into this. In Aberdeen, Maryland, Ballistic Research Laboratories studied the deflection of the tracer rounds themselves. Near Washington, DC, APL treated the issue as a new effect in aerodynamics and sought to make use of it.

Investigators conducted tests in a Mach 1.5 wind tunnel, burning hydrogen at the base of a shell. A round in flight experienced considerable drag at its base, but the experiments showed that this combustion set up a zone of higher pressure that canceled the drag. This work did not demonstrate supersonic combustion, for while the wind-tunnel flow was supersonic, the flow near the base was subsonic. Still, this work introduced APL to topics that later proved pertinent to supersonic-combustion ramjets (which became known as scramjets).17

NACA’s Lewis Flight Propulsion Laboratory, the agency’s center for studies of engines, emerged as an early nucleus of interest in this topic. Initial work involved theoretical studies of heat addition to a supersonic flow. As early as 1950, the Lewis investigators Irving Pinkel and John Serafini treated this problem in a two-dimensional case, as in flow over a wing or past an axisymmetric body. In 1952 they specifically treated heat addition under a supersonic wing. They suggested that this might produce more lift than could be obtained by burning the same amount of fuel in a turbojet to power an airplane.18

This conclusion immediately raised the question of whether it was possible to demonstrate supersonic combustion in a wind tunnel. Supersonic tunnels produced airflows having very low pressure, which added to the experimental difficulties. However, researchers at Lewis had shown that aluminum borohydride could promote the ignition of pentane fuel at air pressures as low as 0.03 atmosphere. In 1953 Robert Dorsch and Edward Fletcher launched a research program that sought to ignite pure borohydride within a supersonic flow. Two years later they declared that they had succeeded. Subsequent work showed that at Mach 3, combustion of this fuel under a wing more than doubled the lift.19

Also at Lewis, the aerodynamicists Richard Weber and John MacKay published the first important open-literature study of theoretical scramjet performance in 1958. Because they were working entirely with equations, they too bypassed the problem of attaining shock-free flow in a supersonic duct by simply positing that it was feasible. They treated the problem using one-dimensional gas dynamics, corresponding to flow in a duct with properties at any location being uniform across the diameter. They restricted their treatment to flow velocities from Mach 4 to 7.

They discussed the issue of maximizing the thrust and the overall engine efficiency. They also considered the merits of various types of inlet, showing that a suitable choice could give a scramjet an advantage over a conventional ramjet. Supersonic combustion failed to give substantial performance improvements or to lead to an engine of lower weight. Even so, they wrote that “the trends developed herein indicate that the [scramjet] will offer superior performance at higher hypersonic flight speeds.”20

An independent effort proceeded along similar lines at Marquardt, where investigators again studied scramjet performance by treating the flow within an engine duct using one-dimensional gasdynamic theory. In addition, Marquardt researchers carried out their own successful demonstration of supersonic combustion in 1957. They injected hydrogen into a supersonic airflow, with the hydrogen and the air having the same velocity. This work overcame objections from skeptics, who had argued that the work at NACA-Lewis had not truly demonstrated supersonic combustion. The Marquardt experimental arrangement was simpler, and its results were less equivocal.21

The Navy’s Applied Physics Laboratory, home of Talos, also emerged as an early center of interest in scramjets. As had been true at NACA-Lewis and at Marquardt, this group came to the concept by way of external burning under a supersonic wing. William Avery, the leader, developed an initial interest in supersonic combustion around 1955, for he saw the conventional ramjet facing increasingly stiff competition from both liquid rockets and afterburning turbojets. (Two years later such competition killed Navaho.) Avery believed that he could use supersonic combustion to extend the performance of ramjets.

His initial opportunity came early in 1956, when the Navy’s Bureau of Ordnance set out to examine the technological prospects for the next 20 years. Avery took on the task of assembling APL’s contribution. He picked scramjets as a topic to study, but he was well aware of an objection. In addition to questioning the fundamental feasibility of shock-free supersonic combustion in a duct, skeptics considered that a hypersonic inlet might produce large pressure losses in the flow, with consequent negation of an engine’s thrust.

Avery sent this problem through Talos management to a young engineer, James Keirsey, who had helped with Talos engine tests. Keirsey knew that if a hypersonic ramjet was to produce useful thrust, it would appear as a small difference between two large quantities: gross thrust and total drag. In view of uncertainties in both these numbers, he was unable to state with confidence that such an engine would work. Still he did not rule it out, and his “maybe” gave Avery reason to pursue the topic further.

Avery decided to set up a scramjet group and to try to build an engine for test in a wind tunnel. He hired Gordon Dugger, who had worked at NACA-Lewis. Dugger’s first task was to decide which of several engine layouts, both ducted and unducted, was worth pursuing. He and Avery selected an external-burning configuration with the shape of a broad upside-down triangle. The forward slope, angled downward, was to compress the incoming airflow. Fuel could be injected at the apex, with the upward slope at the rear allowing the exhaust to expand. This approach again bypassed the problem of producing shock-free flow in a duct. The use of external burning meant that this concept could produce lift as well as thrust.

Dugger soon became concerned that this layout might be too simple to be effective. Keirsey suggested placing a very short cowl at the apex, thereby easing problems of ignition and combustion. This new design lent itself to incorporation within the wings of a large aircraft of reasonably conventional configuration. At low speeds the wide triangle could retract until it was flat and flush with the wing undersurface, leaving the cowl to extend into the free stream. Following acceleration to supersonic speed, the two shapes would extend and assume their triangular shape, then function as an engine for further acceleration.

Wind-tunnel work also proceeded at APL. During 1958 this center had a Mach 5 facility under construction, and Dugger brought in a young experimentalist named Frederick Billig to work with it. His first task was to show that he too could demonstrate supersonic combustion, which he tried to achieve using hydrogen as his fuel. He tried electric ignition; an APL history states that he “generated gigantic arcs,” but “to no avail.” Like the NACA-Lewis investigators, he turned to fuels that ignited particularly readily. His choice, triethyl aluminum, reacts spontaneously, and violently, on contact with air.

“The results of the tests on 5 March 1959 were dramatic,” the APL history continues. “A vigorous white flame erupted over the rear of [the wind-tunnel model] the instant the triethyl aluminum fuel entered the tunnel, jolting the model against its support. The pressures measured on the rear surface jumped upward.” The device produced less than a pound of thrust. But it generated considerable lift, supporting calculations that had shown that external burning could increase lift. Later tests showed that much of the combustion indeed occurred within supersonic regions of the flow.22

By the late 1950s small scramjet groups were active at NACA-Lewis, Marquardt, and APL. There also were individual investigators, such as James Nicholls of the University of Michigan. Still it is no small thing to invent a new engine, even as an extension of an existing type such as the ramjet. The scramjet needed a really high-level advocate, to draw attention within the larger realms of aerodynamics and propulsion. The man who took on this role was Antonio Ferri.

He had headed the supersonic wind tunnel in Guidonia, Italy. Then in 1943 the Nazis took control of that country and Ferri left his research to command a band of partisans who fought the Nazis with considerable effectiveness. This made him a marked man, and it was not only Germans who wanted him. An American agent, Moe Berg, was also on his trail. Berg found him and persuaded him to come to the States. The war was still on and immigration was nearly impossible, but Berg persuaded William Donovan, the head of his agency, to seek support from President Franklin Roosevelt himself. Berg had been famous as a baseball catcher in civilian life, and when Roosevelt learned that Ferri now was in the hands of his agent, he remarked, “I see Berg is still catching pretty well.”23

At NACA-Langley after the war, he rose in management and became director of the Gas Dynamics Branch in 1949. He wrote an important textbook, Elements of Aerodynamics of Supersonic Flows (Macmillan, 1949). Holding a strong fondness for the academic world, he took a professorship at Brooklyn Polytechnic Institute in 1951, where in time he became chairman of his department. He built up an aerodynamics laboratory at Brooklyn Poly and launched a new activity as a consultant. Soon he was working for major companies, drawing so many contracts that his graduate students could not keep up with them. He responded in 1956 by founding a company, General Applied Science Laboratories (GASL). With financial backing from the Rockefellers, GASL grew into a significant center for research in high-speed flight.24

He was a formidable man. Robert Sanator, a former student, recalls that “you had to really want to be in that course, to learn from him. He was very fast. His mind was constantly moving, redefining the problem, and you had to be fast to keep up with him. He expected people to perform quickly, rapidly.” John Erdos, another ex-student, adds that “if you had been a student of his and later worked for him, you could never separate the professor-student relationship from your normal working relationship.” He remained Dr. Ferri to these people, never Tony, even when they rose to leadership within their companies.25

He came early to the scramjet. Taking this engine as his own, he faced its technical difficulties squarely and asserted that they could be addressed, giving examples of approaches that held promise. He repeatedly emphasized that scramjets could offer performance far higher than that of rockets. He presented papers at international conferences, bringing these ideas to a wider audience. In turn, his strong professional reputation ensured that he was taken seriously. He also performed experiments as he sought to validate his claims. More than anyone else, Ferri turned the scramjet from an idea into an invention, which might be developed and made practical.

His path to the scramjet began during the 1950s, when his work as a consultant brought him into a friendship with Alexander Kartveli at Republic Aviation. Louis Nucci, Ferri’s longtime colleague, recalls that the two men “made good sparks. They were both Europeans and learned men; they liked opera and history.” They also complemented each other professionally, as Kartveli focused on airplane design while Ferri addressed difficult problems in aerodynamics and propulsion. The two men worked together on the XF-103 and fed off each other, each encouraging the other to think bolder thoughts. Among the boldest was a view that there were no natural limits to aircraft speed or performance. Ferri put forth this idea initially; Kartveli then supported it with more detailed studies.26

The key concept, again, was the scramjet. Holding a strong penchant for experimentation, Ferri conducted research at Brooklyn Poly. In September 1958, at a conference in Madrid, he declared that steady combustion, without strong shocks, had been accomplished in a supersonic airstream at Mach 3.0. This placed him midway in time between the supersonic-combustion demonstrations at Marquardt and at APL.27

Shock-free flow in a duct continued to loom as a major problem. The Lewis, Marquardt, and APL investigators had all bypassed this issue by treating external combustion in the supersonic flow past a wing, but Ferri did not flinch. He took the problem of shock-free flow as a point of departure, thereby turning the ducted scramjet from a wish into a serious topic for investigation.

In supersonic wind tunnels, shock-free flow was an everyday affair. However, the flow in such tunnels achieved its supersonic Mach values by expanding through a nozzle. By contrast, flow within a scramjet was to pass through a supersonic inlet and then be strongly heated within a combustor. The inlet actually had the purpose of producing a shock, an oblique one that was to slow and compress the flow while allowing it to remain supersonic. However, the combustion process was only too likely to produce unwanted shocks, which would limit an engine’s thrust and performance.

Nicholls, at Michigan, proposed to make a virtue of necessity by turning a combustor shock to advantage. Such a shock would produce very strong heating of the flow. If the fuel and air had been mixed upstream, then this combustor shock could produce ignition. Ferri would have none of this. He asserted that “by using a suitable design, formation of shocks in the burner can be avoided.”28

Specifically, he started with a statement by NACA’s Weber and MacKay on combustors. These researchers had already written that the combustor needed a diverging shape, like that of a rocket nozzle, to overcome potential limits on the airflow rate due to heat addition (“thermal choking”). Ferri proposed that within such a combustor, “fuel is injected parallel to the stream to eliminate formation of shocks…. The fuel gradually mixes with the air and burns…and the combustion process can take place without the formation of shocks.” Parallel injection might take place by building the combustor with a step or sudden widening. The flow could expand as it passed the step, thereby avoiding a shock, while the fuel could be injected at the step.29

Ferri also made an intriguing contribution in dealing with inlets, which are critical to the performance of scramjets. He did this by introducing a new concept called “thermal compression.” One approaches it by appreciating that a process of heat addition can play the role of a normal shock wave. When an airflow passes through such a shock, it slows in speed and therefore diminishes in Mach, while its temperature and pressure go up. The same consequences occur when a supersonic airflow is heated. It therefore follows that a process of heat addition can substitute for a normal shock.30
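The sense in which heating mimics a shock can be made concrete with the classical one-dimensional Rayleigh-flow relations for frictionless heat addition in a duct. The sketch below assumes an ideal gas with gamma = 1.4 and purely illustrative numbers; it shows a Mach 3 stream falling toward Mach 2 as stagnation temperature is added, with static pressure rising much as it would across a shock.

```python
# Sketch: Rayleigh flow (frictionless heat addition, ideal gas, gamma = 1.4).
# Heating a supersonic stream drives Mach down and static pressure up, which
# is the sense in which heat addition can substitute for a shock.
GAMMA = 1.4

def t0_ratio(m: float) -> float:
    """T0/T0*: stagnation temperature relative to its thermally choked value."""
    return ((GAMMA + 1.0) * m**2 * (2.0 + (GAMMA - 1.0) * m**2)
            / (1.0 + GAMMA * m**2) ** 2)

def p_ratio(m: float) -> float:
    """p/p*: static pressure relative to its thermally choked value."""
    return (1.0 + GAMMA) / (1.0 + GAMMA * m**2)

def mach_after_heating(m1: float, t0_gain: float) -> float:
    """Supersonic-branch Mach after multiplying T0/T0* by t0_gain (bisection)."""
    target = min(t0_ratio(m1) * t0_gain, 1.0)   # cannot exceed thermal choking
    lo, hi = 1.0, m1                            # heating pulls Mach toward 1
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if t0_ratio(mid) < target:   # T0/T0* falls as Mach rises above 1
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

m1 = 3.0
m2 = mach_after_heating(m1, 1.2)   # add 20 percent stagnation temperature
print(f"Mach {m1} -> Mach {m2:.2f}; static pressure up x{p_ratio(m2) / p_ratio(m1):.2f}")
```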

Practical inlets use oblique shocks, which are two-dimensional. Such shocks afford good control of the aerodynamics of an inlet. If heat addition is to substitute for an oblique shock, it too must be two-dimensional. Heat addition in a duct is one-dimensional, but Ferri proposed that numerous small burners, set within a flow, could achieve the desired two-dimensionality. By turning individual burners on or off, and by regulating the strength of each one’s heating, he could produce the desired pattern of heating that in fact would accomplish the substitution of heating for shock action.31

Why would one want to do this? The nose of a hypersonic aircraft produces a strong bow shock, an oblique shock that accomplishes initial compression of the airflow. The inlet rides well behind the nose and features an enclosing cowl. The cowl, in turn, has a lip or outer rim. For best effectiveness, the inlet should sustain a “shock-on-lip” condition. The shock should not impinge within the inlet, for only the lip is cooled in the face of shock-impingement heating. But the shock also should not ride outside the inlet, or the inlet will fail to capture all of the shock-compressed airflow.

To maintain the shock-on-lip condition across a wide Mach range, an inlet requires variable geometry. This is accomplished mechanically, using sliding seals that must not allow leakage of very hot boundary-layer air. Ferri’s principle of thermal compression raised the prospect that an inlet could use fixed geometry, which was far simpler. It would do this by modulating its burners rather than by physically moving inlet hardware.

Thermal compression brought an important prospect of flexibility. At a given value of Mach, there typically was only one arrangement of a variable-geometry inlet that would produce the desired shock that would compress the flow. By contrast, the thermal-compression process might be adjusted at will simply by controlling the heating. Ferri proposed to do this by controlling the velocity of injection of the fuel. He wrote that “the heat release is controlled by the mixing process, [which] depends on the difference of velocity of the air and of the injected gas.” Shock-free internal flow appeared feasible: “The fuel is injected parallel to the stream to eliminate formation of shocks [and] the combustion process can take place without the formation of shocks.” He added,

“The preliminary analysis of supersonic combustion ramjets…indicates that combustion can occur in a fixed-geometry burner-nozzle combination through a large range of Mach numbers of the air entering the combustion region. Because the Mach number entering the burner is permitted to vary with flight Mach number, the inlet and therefore the complete engine does not require variable geometry. Such an engine can operate over a large range of flight Mach numbers and, therefore, can be very attractive as an accelerating engine.”32

There was more. As noted, the inlet was to produce a bow shock of specified character, to slow and compress the incoming air. But if the inflow was too great, the inlet would disgorge its shock. This shock, now outside the inlet, would disrupt the flow within the inlet and hence in the engine, with the drag increasing and the thrust falling off sharply. This was known as an unstart.

Supersonic turbojets, such as the Pratt & Whitney J58 that powered the SR-71 to speeds beyond Mach 3, typically were fitted with an inlet that featured a conical spike at the front, a centerbody that was supposed to translate back and forth to adjust the shock to suit the flight Mach number. Early in the program, it often did not work.33 The test pilot James Eastham was one of the first to fly this spy plane, and he recalls what happened when one of his inlets unstarted.

“An unstart has your full and undivided attention, right then. The airplane gives a very pronounced yaw; then you are very preoccupied with getting the inlet started again. The speed falls off; you begin to lose altitude. You follow a procedure, putting the spikes forward and opening the bypass doors. Then you would go back to the automatic positioning of the spike—which many times would unstart it again. And when you unstarted on one side, sometimes the other side would also unstart. Then you really had to give it a good massage.”34

The SR-71 initially used a spike-positioning system from Hamilton Standard. It proved unreliable, and Eastham recalls that at one point, “unstarts were literally stopping the whole program.”35 This problem was eventually overcome through development of a more capable spike-positioning system, built by Honeywell.36 Still, throughout the development and subsequent flight career of the SR-71, the positioning of inlet spikes was always done mechanically. In turn, the movable spike represented a prime example of variable geometry.

Scramjets faced similar issues, particularly near Mach 4. Ferri’s thermal-compression principle applied here as well—and raised the prospect of an inlet that might fight against unstarts by using thermal rather than mechanical arrangements. An inlet with thermal compression then might use fixed geometry all the way to orbit, while avoiding unstarts in the bargain.

Ferri presented his thoughts publicly as early as 1960. He went on to give a far more detailed discussion in May 1964, at the Royal Aeronautical Society in London. This was the first extensive presentation on hypersonic propulsion for many in the audience, and attendees responded effusively.

One man declared that “this lecture opened up enormous possibilities. Where they had, for lack of information, been thinking of how high in flight speed they could stretch conventional subsonic burning engines, it was now clear that they should be thinking of how far down they could stretch supersonic burning engines.” A. D. Baxter, a Fellow of the Society, added that Ferri “had given them an insight into the prospects and possibilities of extending the speed range of the airbreathing engine far beyond what most of them had dreamed of; in fact, assailing the field which until recently was regarded as the undisputed regime of the rocket.”37

Not everyone embraced thermal compression. “The analytical basis was rather weak,” Marquardt’s Arthur Thomas commented. “It was something that he had in his head, mostly. There were those who thought it was a lot of baloney.” Nor did Ferri help his cause in 1968, when he published a Mach 6 inlet that offered “much better performance” at lower Mach “because it can handle much higher flow.” His paper contained not a single equation.38

But Fred Billig was one who accepted the merits of thermal compression and gave his own analyses. He proposed that at Mach 5, thermal compression could increase an engine’s specific impulse, an important measure of its performance, by 61 percent. Years later he recalled Ferri’s “great capability for visualizing, a strong physical feel. He presented a full plate of ideas, not all of which have been realized.”39

The Decline of NASP

NASP was one of Reagan’s programs, and for a time it seemed likely that it would not long survive the change in administrations after he left office in 1989. That fiscal year brought a high-water mark for the program, as its budget peaked at $320 million. During the spring of that year officials prepared budgets for FY 1991, which President George H. W. Bush would send to Congress early in 1990. Military spending was already trending downward, and within the Pentagon, analyst David Chu recommended canceling all Defense Department spending for NASP. The new Secretary of Defense, Richard Cheney, accepted this proposal. With this, NASP appeared dead.

NASP had a new program manager, Robert Barthelemy, who had replaced Williams. Working through channels, he found support in the White House from Vice President Dan Quayle. Quayle chaired the National Space Council, which had been created by law in 1958 and just then was active for the first time in a decade. He used it to rescue NASP. He led the Space Council to recommend proceeding with the program under a reduced but stable budget, and with a schedule slip. This plan won acceptance, giving the program leeway to face a new issue: excessive technical optimism.49

X-30 concept of 1985. (NASA)

During 1984, amid the Copper Canyon activities, Tony duPont devised a conceptual configuration that evolved into the program’s baseline. It had a gross weight of 52,650 pounds, which included a 2,500-pound payload that it was to carry to polar orbit. Its weight of fuel was 28,450 pounds. The propellant mass fraction, the ratio of these quantities, then was 0.54.50
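A one-line check of that ratio, using the weights just quoted:

```python
# The propellant mass fraction of duPont's 1984 baseline (weights in pounds).
gross_weight = 52_650
fuel_weight = 28_450

print(f"propellant mass fraction = {fuel_weight / gross_weight:.2f}")   # 0.54
```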

The fuel had low density and was bulky, demanding high weight for the tankage and airframe. To save weight, duPont’s concept had no landing gear. It lacked reserves of fuel; it was to reach orbit by burning its last drops. Once there it could not execute a controlled deorbit, for it lacked maneuvering rockets as well as fuel and oxidizer for them. DuPont also made no provision for a reserve of weight to accommodate normal increases during development.51

Williams’s colleagues addressed these deficiencies, although they continued to accept duPont’s optimism in the areas of vehicle drag and engine performance. The new concept had a gross weight of 80,000 pounds. Its engines gave a specific impulse of 1,400 seconds, averaged over the trajectory, which corresponded to a mean exhaust velocity of 45,000 feet per second. (That of the SSME was 453.5 seconds in vacuum, or 14,590 feet per second.) The effective velocity increase for the X-30 was calculated at 47,000 feet per second, with orbital velocity being 25,000 feet per second; the difference represented loss due to drag. This version of the X-30 was designated the “government baseline” and went to the contractors for further study.52
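Simple arithmetic ties these figures together: specific impulse in seconds times standard gravity gives effective exhaust velocity, and the gap between the effective velocity increase and orbital velocity is the quoted drag loss. A minimal sketch with the numbers above:

```python
# Bookkeeping behind the quoted performance figures (ft/s and seconds).
G0 = 32.174   # standard gravity, ft/s^2

def exhaust_velocity(isp_seconds: float) -> float:
    """Effective exhaust velocity corresponding to a specific impulse."""
    return isp_seconds * G0

print(f"X-30 engines: {exhaust_velocity(1_400):,.0f} ft/s")   # ~45,000
print(f"SSME, vacuum: {exhaust_velocity(453.5):,.0f} ft/s")   # ~14,590

effective_delta_v = 47_000   # ft/s, calculated for the X-30 ascent
orbital_velocity = 25_000    # ft/s
print(f"loss to drag: {effective_delta_v - orbital_velocity:,} ft/s")
```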

The initial round of contract awards was announced in April 1986. Five airframe firms developed new conceptual designs, introducing their own estimates of drag and engine performance along with their own choices of materials. They gave the following weight estimates for the X-30:

Rockwell International
McDonnell Douglas
General Dynamics
Boeing
Lockheed

A subsequent downselection, in October 1987, eliminated the two heaviest concepts while retaining Rockwell, McDonnell Douglas, and General Dynamics for further work.53

What brought these weight increases? Much of the reason lay in a falloff in estimated engine performance, which fell as low as 1,070 seconds of averaged specific impulse. New estimates of drag pushed the required effective velocity increase during ascent to as much as 52,000 feet per second.

A 1989 technical review, sponsored by the National Research Council, showed what this meant. The chairman, Jack Kerrebrock, was an experienced propulsion specialist from MIT. His panel included other men of similar background: Seymour Bogdonoff of Princeton, Artur Mager of Marquardt, Frank Marble from Caltech. Their report stated that for the X-30 to reach orbit as a single stage, “a fuel fraction of approximately 0.75 is required.”54

X-30 concept of 1990, which had grown considerably. (U.S. Air Force)

One gains insight by considering three hydrogen-fueled rocket stages of NASA and calculating their values of propellant mass fraction if both their hydrogen and oxygen tanks were filled with NASP fuel. This was slush hydrogen, a slurry of the solid and liquid. The stages are the S-II and S-IVB of Apollo and the space shuttle’s external tank. Liquid hydrogen has 1/16 the density of liquid oxygen. With NASP slush having 1.16 times the density of liquid hydrogen,55 the propellant mass fractions are as follows:56

S-IVB, third stage of the Saturn V: 0.722
S-II, second stage of the Saturn V: 0.753
External Tank: 0.868

The S-II, which comes close to Kerrebrock’s value of 0.75, was an insulated shell that mounted five rocket engines. It withstood compressive loads along its length that resulted from the weight of the S-IVB and the Apollo moonship but did not require reinforcement to cope with major bending loads. It was constructed of aluminum alloy and lacked landing gear, thermal protection, wings, and a flight deck.
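The calculation behind the table is straightforward: fill both of a stage’s tanks with slush hydrogen at 1.16 times the density of liquid hydrogen, then divide propellant weight by loaded weight. The sketch below applies it to the external tank; the tank volumes and dry weight are rough public figures assumed for illustration, not values from this text.

```python
# Sketch: propellant mass fraction with both tanks filled with slush hydrogen.
# Densities in lb/ft^3; volumes and dry weight are assumed, illustrative
# values approximating the shuttle External Tank.
LIQUID_H2_DENSITY = 4.43
SLUSH_DENSITY = 1.16 * LIQUID_H2_DENSITY   # NASP slush hydrogen

lox_tank_volume = 19_500   # ft^3 (assumed)
lh2_tank_volume = 53_500   # ft^3 (assumed)
dry_weight = 58_500        # lb   (assumed)

propellant = SLUSH_DENSITY * (lox_tank_volume + lh2_tank_volume)
print(f"mass fraction = {propellant / (propellant + dry_weight):.3f}")   # ~0.87
```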

How then did NASP offer an X-30 concept that constituted a true hypersonic airplane rather than a mere rocket stage? The answer lay in adding weight to the fuel, which boosted the propellant mass fraction. The vehicle was not to reach orbit entirely on slush-fueled scramjets but was to use a rocket for final ascent. It used tanked oxygen, with nearly 14 times the density of slush hydrogen. In addition, design requirements specified a tripropellant system that was to burn liquid methane during the early part of the flight. This fuel had less energy than hydrogen, but it too added weight because it was relatively dense. The recommended mix called for 69 percent hydrogen, 20 percent oxygen, and 11 percent methane.57

Evolution of the X-30. The government baseline of 1986 had Isp of 1,400 seconds, delta-V to reach orbit of 47,000 feet per second, and propellant mass fraction of 0.54. Its 1992 counterpart had less Isp, more drag, propellant mass fraction of 0.75, and could not reach orbit. (NASP National Program Office)

In 1984, with optimism at its height, Cooper had asserted that the X-30 would be the size of an SR-71 and could be ready in three years. DuPont argued that his concept could lead to a “5-5-50” program by building a 50,000-pound vehicle in five years for $5 billion.58 Eight years later, in October 1990, the program had a new chosen configuration. It was rectangular in cross section, with flat sides. Three scramjet engines were to provide propulsion. Two small vertical stabilizers were at the rear, giving better stability than a single large one. A single rocket engine of approximately 60,000 pounds of thrust, integrated into the airframe, completed the layout. Other decisions selected the hot structure as the basic approach to thermal protection. The primary structure was to be of titanium-matrix composite, with insulated panels of carbon to radiate away the heat.59

This 1990 baseline design showed little resemblance to its 1984 ancestor. As revised in 1992, it no longer was to fly to a polar orbit but would take off on a due-east launch from Kennedy Space Center, thereby gaining some 1,340 feet per second of launch velocity. Its gross weight was quoted at 400,000 pounds, some 40 percent heavier than the General Dynamics weight that had been the heaviest acceptable in the 1987 downselect. Yet even then the 1992 concept was expected to fall short of orbit by some 3,000 feet per second. An uprated version, with a gross weight of at least 450,000 pounds, appeared necessary to reach orbital velocity. The prospective program budget came to $15 billion or more, with the time to first flight being eight to ten years.60
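The 1,340 feet per second credited to a due-east launch is simply the eastward speed of Earth’s surface at the latitude of Kennedy Space Center, as a quick sketch confirms; the radius, day length, and latitude used below are standard values, not figures from this text.

```python
# Sketch: eastward surface speed at Kennedy Space Center's latitude.
import math

EARTH_RADIUS_FT = 20_925_646   # equatorial radius (standard value)
SIDEREAL_DAY_S = 86_164        # one rotation of Earth, in seconds
KSC_LATITUDE_DEG = 28.5

surface_speed = 2 * math.pi * EARTH_RADIUS_FT / SIDEREAL_DAY_S
eastward_gain = surface_speed * math.cos(math.radians(KSC_LATITUDE_DEG))
print(f"{eastward_gain:,.0f} ft/s")   # ~1,340 ft/s
```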

During 1992 both the Defense Science Board (DSB) and Congress’s General Accounting Office (GAO) conducted major program reviews. The immediate issue was whether to proceed as planned by making a commitment that would actually build and fly the X-30. Such a decision would take the program from its ongoing phase of research and study into a new phase of mainstream engineering development.

Both reviews focused on technology, but international issues were in the background, for the Cold War had just ended. The Soviet Union had collapsed in 1991, with communists falling from power while that nation dissolved into 15 constituent states. Germany had already reunified; the Berlin Wall had fallen, and the whole of Eastern Europe had won independence from Moscow. The western border of Russia now approximated that of 1648, at the end of the Thirty Years’ War. Two complete tiers of nominally independent nations now stood between Russia and the West.

These developments greatly diminished the military urgency of NASP, while the reviews’ conclusions gave further reason to reduce its priority. The GAO noted that program managers had established 38 technical milestones that were to be satisfied before proceeding to mainstream development. These covered the specific topics of X-30 design, propulsion, structures and materials, and use of slush hydrogen as a fuel. According to the contractors themselves, only 17 of those milestones—fewer than half—were to be achieved by September 1993. The situation was particularly worrisome in the critical area of structures and materials, for which only six of 19 milestones were slated for completion. The GAO therefore recommended delaying a commitment to mainstream development “until critical technologies are developed and demonstrated.”61

The DSB concurred, highlighting specific technical deficiencies. The most important involved the prediction of scramjet performance and of boundary-layer transition. In the latter, an initially laminar or smoothly flowing boundary layer becomes turbulent. This brings large increases in heat transfer and skin friction, a major source of drag. The locations of transition thus had to be known.

The scramjet-performance problem arose because of basic limitations in the capabilities of ground-test facilities. The best of them could accommodate a complete engine, with inlet, combustor, and nozzle, but could conduct tests only below Mach 8. “Even at Mach 8,” the DSB declared, “the scramjet cycle is just beginning to be established and consequently, there is uncertainty associated with extrapolating the results into the higher Mach regime. At speeds above Mach 8, only small components of the scramjet can be tested.” This brought further uncertainty when predicting the performance of complete engines.

Boundary-layer transition to turbulence also demanded attention: “It is essential to understand the boundary-layer behavior at hypersonic speeds in order to ensure thermal survival of the airplane structure as designed, as well as to accurately predict the propulsion system performance and airplane drag. Excessive conservatism in boundary-layer predictions will lead to an overweight design incapable of achieving [single stage to orbit], while excessive optimism will lead to an airplane unable to survive in the hypersonic flight environment.”

The DSB also showed strong concern over issues of control in flight of the X-30 and its engines. These were not simple matters of using ailerons or pushing throttles. The report stated that “controllability issues for NASP are so complex, so widely ranging in dynamics and frequency, and so interactive between technical disciplines as to have no parallels in aeronautical history…the most fundamental initial requirements for elementary aircraft control are not yet fully comprehended.” An onboard computer was to manage the vehicle and its engines in flight, but an understanding of the pertinent forces and moments “is still in an embryonic state.” Active cooling of the vehicle demanded a close understanding of boundary-layer transition. Active cooling of the engine called for resolution of “major uncertainties…connected with supersonic burning.” In approaching these issues, “very great uncertainties exist at a fundamental level.”

The DSB echoed the GAO in calling for extensive additional research before proceeding into mainstream development of the X-30:

We have concluded [that] fundamental uncertainties will continue to exist in at least four critical areas: boundary-layer transition; stability and controllability; propulsion performance; and structural and subsystem weight. Boundary-layer transition and scramjet performance cannot be validated in existing ground-test facilities, and the weight estimates have insufficient reserves for the inevitable growth attendant to material allowables, fastening and joining, and detailed configuration issues….

Using optimistic assumptions on transition and scramjet performance, and the present weight estimates on material performance and active cooling, the vehicle design does not yet close; the velocity achieved is short of orbital requirements.62

Faced with the prospect that the flight trajectory of the X-30 would merely amount to a parabola, budget makers turned the curve of program funding into a parabola as well. The total budget had held at close to $250 million during FY 1990 and 1991, falling to $205 million in 1992. But in 1993 it took a sharp dip to $140 million. The NASP National Program Office tried to rescue the situation by proposing a six-year program with a budget of $2 billion, called Hyflite, that was to conduct a series of unmanned flight tests. The Air Force responded with a new technical group, the Independent Review Team, that turned thumbs down on Hyflite and called instead for a “minimum” flight test program. Such an effort was to address the key problem of reducing uncertainties in scramjet performance at high Mach.

The National Program Office came back with a proposal for a new program called HySTP. Its budget request came to $400 million over five years, which would have continued the NASP effort at a level only slightly higher than its allocation of $60 million for FY 1994. Yet even this minimal program budget proved to be unavailable. In January 1995 the Air Force declined to approve the HySTP budget and initiated the formal termination of the NASP program.63

In this fashion, NASP lived and died. Like SDI and the space station, one could view it as another in a series of exercises in Reaganesque optimism that fell short. Yet from the outset, supporters of NASP had emphasized that it was to make important contributions in such areas as propulsion, hypersonic aerodynamics, computational fluid dynamics, and materials. The program indeed did these things and thereby laid groundwork for further developments.

Combined-Cycle Propulsion Systems

The scramjet used a single set of hardware and operated in two modes, sustaining supersonic combustion as well as subsonic combustion. The transition involved a process called “swallowing the shock.” In the subsonic mode, the engine held a train of oblique shocks located downstream of the inlet and forward of the combustor. When the engine went over to the supersonic-combustion mode, these shocks passed through the duct and were lost. This happened automatically, when the flight vehicle topped a critical speed, and the engine continued to burn with no diminution of thrust.

Ejector ramjet. Primary flow from a ducted rocket entrains a substantial secondary flow of external air. (U.S. Air Force)

The turboramjet arrangement of the XF-103 also operated in two modes, serving both as a turbojet and as a ramjet. Here, however, the engine employed two sets of hardware, which were physically separate. They shared a common inlet and nozzle, while the ramjet also served as the turbojet’s afterburner. But only one set of equipment operated at any given time. Moreover, they were mounted separately and were not extensively integrated.40

System integration was the key concept within a third class of prime mover: the combined-cycle engine, which sought to integrate two separate thrust-producing cycles within a single set of hardware. In contrast to the turboramjet of the XF-103, engines of this type merged their equipment rather than keeping them separate. Two important concepts that did this were the ejector ramjet, which gave thrust even when standing still, and the Liquid Air Cycle Engine (LACE), which was an airbreathing rocket.

The ejector ramjet amounted to a combined-cycle system derived from a conventional ramjet. It used the ejector principle, whereby the exhaust of a rocket motor, placed within a duct, entrains a flow of air through the duct’s inlet. This increases the thrust by converting thermal energy, within the hot exhaust, to mechanical energy. The entrained air slows and cools the exhaust. The effect is much the same as when designers improve the performance of a turbojet engine by installing a fan.
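How much an ideal ejector could gain is easy to bound. In the loss-free limit, with a primary mass flow ṁp at jet velocity Vp entraining a secondary flow ṁs from rest, and with the jet’s kinetic energy transferred to the fully mixed stream without loss, one has (a sketch of the ideal static case, not of any particular design):

\[
F = (\dot{m}_p + \dot{m}_s)\,V_{mix}, \qquad
\tfrac{1}{2}(\dot{m}_p + \dot{m}_s)\,V_{mix}^{2} = \tfrac{1}{2}\,\dot{m}_p V_p^{2}
\quad\Longrightarrow\quad
\frac{F}{\dot{m}_p V_p} = \sqrt{1 + \frac{\dot{m}_s}{\dot{m}_p}}.
\]

Entraining three pounds of air per pound of rocket exhaust could thus at most double the static thrust; real ejectors, with their mixing losses, fell well short of this bound.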

Ejectors date back to the nineteenth century. Horatio Phillips, a pioneer in aeronautical research, used a steam ejector after 1870 to build a wind tunnel. His ejector was a ring of pipe pierced with holes and set within a duct with the holes facing downstream. Pressurized steam, expanding through the holes, entrained an airflow within the duct that reached speeds of 40 mph, which served his studies.41 Nearly a century later ejectors were used to evacuate chambers that conducted rocket-engine tests in near-vacuum, with the ejectors rapidly pumping out the rocket exhaust. Ejectors also flew, being used with both the F-104 and the SR-71. Significantly, the value of an ejector could increase with Mach. On the SR-71, for instance, it contributed only 14 percent of the total thrust at Mach 2.2 but accounted for 28 percent at Mach 3.2.42

Performance of an ejector. Even with minimal flow of entrained air, the pressure ratio is much lower than that of a turbojet. A pressure ratio of 1.5 implied low efficiency. (Courtesy of William Escher)

Jack Charshafian of Curtiss-Wright, director of development of ramjets for Navaho, filed a patent disclosure for an ejector rocket as early as 1947. By entraining outside air, it might run fuel-rich and still burn all its fuel. Ejector concepts also proved attractive to other aerospace pioneers, with patents being awarded to Alfred Africano, a founder of the American Rocket Society; to Hans von Ohain, an inventor of the turbojet; and to Helmut Schelp, who stirred early interest in military turbojets within the Luftwaffe.43

A conventional ramjet needed a boost to reach speeds at which its air-ramming effect would come into play, and the hardware requirements approached those of a complete and separate flight vehicle. The turbojet of the XF-103 exemplified what was necessary, as did the large rocket-powered booster of Navaho. But after 1960 the ejector ramjet brought the prospect of a ramjet that could produce thrust even when on the runway. By placing small rocket engines in a step surrounding a duct, a designer could leave the duct free of hardware. It might even sustain supersonic combustion, with the engine converting to a scramjet.

The ejector ramjet also promised to increase the propulsive efficiency by improving the match between flight speed and exhaust speed. A large mismatch greatly reduces the effectiveness of a propulsive jet. There would be little effective thrust, for instance, if one had a jet velocity of 10,000 feet per second while flying in the atmosphere at 400 feet per second. The ejector promised to avoid this by slowing the overall flow.
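The standard Froude relation for propulsive efficiency makes the point quantitatively. With flight speed V0 and jet velocity Ve, and using the velocities just cited:

\[
\eta_p = \frac{2\,V_0}{V_0 + V_e} = \frac{2 \times 400}{400 + 10{,}000} \approx 0.077,
\]

so less than 8 percent of the jet’s kinetic energy would do useful propulsive work. Entraining air lowers Ve toward V0 and drives the efficiency toward unity.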

The ejector ramjet thus offered the enticing concept of a unified engine that could propel a single-stage vehicle from a runway to orbit. It would take off with ejector-boosted thrust from its rocket, accelerate through the atmosphere by using the combination as an ejector-boosted ramjet and scramjet, and then go over completely to rocket propulsion for the final boost to orbit.

Yet even with help from an ejector, a rocket still had a disadvantage. A ramjet or scramjet could use air as its oxidizer, but a rocket had to carry heavy liquid oxygen in an onboard tank. Hence, there also was strong interest in airbreathing rockets. Still, it was not possible to build such a rocket through a simple extension of principles applicable to the turbojet, for there was a serious mismatch between pressures available through turbocompression and those of a rocket’s thrust chamber.

In the SR-71, for instance, a combination of inlet compression and turbocompression yielded an internal pressure of approximately 20 pounds per square inch (psi) at Mach 3 and 80,000 feet. By contrast, internal pressures of rocket engines started in the high hundreds of psi and rapidly ascended into the thousands for high performance. Unless one could boost the pressure of ram air to that level, no airbreathing rocket would ever fly.44

The concept that overcame this difficulty was LACE. It dated to 1954, and Randolph Rae of the Garrett Corporation was the inventor. LACE used liquid hydrogen both as fuel and as a refrigerant, to liquefy air. The temperature of the liquid hydrogen was only 21 K, far below that at which air liquefies. LACE thus called for incoming ram air to pass through a heat exchanger that used liquid hydrogen as the coolant. The air would liquefy, and then a pump could raise its pressure to whatever value was desired. In this fashion, LACE bypassed the restrictions on turbocompression of gaseous air. In turn, the warmed hydrogen flowed to the combustion chamber to burn in the liquefied air.45

At the outset, LACE brought a problem. The limited thermal capacity of liquid hydrogen brought another mismatch, for the system needed eight times more liquid hydrogen to liquefy a given mass of air than could burn in that mass. The resulting hydrogen-rich exhaust still had a sufficiently high velocity to give LACE a prospective advantage over a hydrogen-fueled rocket using tanked oxygen. Even so, there was interest in “derichening” the fuel-air mix, by making use of some of this extra hydrogen. An ejector promised to address this issue by drawing in more air to burn the hydrogen. Such an engine was called a ramLACE or scramLACE.46
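The scale of the mismatch is easy to check from stoichiometry alone. The short sketch below, with illustrative round numbers rather than figures from any LACE design document, computes the hydrogen that a kilogram of air can burn and the coolant flow that the factor of eight implies:

```python
# Back-of-envelope illustration of the LACE hydrogen mismatch.
O2_MASS_FRACTION = 0.23      # oxygen is roughly 23 percent of air by mass
H2_PER_KG_O2 = 4.0 / 32.0    # 2 H2 + O2 -> 2 H2O: 4 kg of H2 burns 32 kg of O2

h2_burnable = O2_MASS_FRACTION * H2_PER_KG_O2  # H2 that 1 kg of air can burn
h2_coolant = 8.0 * h2_burnable                 # the text's factor of eight

print(f"H2 burnable per kg of air: {h2_burnable:.3f} kg")  # about 0.029 kg
print(f"H2 needed to liquefy it:   {h2_coolant:.3f} kg")   # about 0.23 kg
```

The excess, roughly 0.2 kilogram of hydrogen per kilogram of air, was what the ramLACE and scramLACE schemes sought to burn by entraining still more air.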

A complementary strategy called for removal of nitrogen from the liquefied air, yielding nearly pure liquid oxygen as the product. Nitrogen does not support combustion, constitutes some three-fourths of air by weight, and lacks the redeeming quality of low molecular weight that could increase the exhaust velocity. Moreover, a hydrogen-fueled rocket could give much better performance when using oxygen rather than air. With oxygen liquefying at 90 K while nitrogen becomes a liquid at 77 K, at atmospheric pressure, the prospect existed of using this temperature difference to leave the nitrogen unliquefied. Nor would it be useless; it could flow within a precooler, an initial heat exchanger that could chill the inflowing air while reserving the much colder liquid hydrogen for the main cooler.

It did not appear feasible in practice to operate a high-capacity LACE air liquefier with the precision in temperature that could achieve this. However, a promising approach called for use of fractional distillation of liquid air, as a variant of the process used in oil refineries to obtain gasoline from petroleum. The distillation process promised fine control, allowing the nitrogen to boil off while keeping the oxygen liquid. To increase the throughput, the distillation was to take place within a rotary apparatus that could impose high g-loads, greatly enhancing the buoyancy of the gaseous nitrogen. A LACE with such an air separator was called ACES, Air Collection and Enrichment System.47

When liquid hydrogen chilled and liquefied the nitrogen in air, that hydrogen went only partially to waste. In effect, it transferred its coldness to the nitrogen, which used it to advantage in the precooler. Still, there was a clear prospect of greater efficiency in the heat-transfer process if one could remove the nitrogen directly from the ram air. A variant of ACES promised to do precisely this, using chemical separation of oxygen. The process relied on the existence of metal oxides that could take up additional oxygen when heated by the hot ram air and then release this oxygen when placed under reduced pressure. Only the oxygen then was liquefied. This brought the increased efficiency, for the amount of liquid hydrogen used as a coolant was reduced. This enhanced efficiency also translated into conceptual designs for chemical-separation ACES units that could be lighter in weight and smaller in size than rotary-distillation counterparts.48

Turboramjets, ramjets, scramjets, LACE, ramLACE and scramLACE, ACES: with all these in prospect, designers of paper engines beheld a plenitude of possibilities. They also carried a strong mutual synergism. A scramjet might use a type of turboramjet for takeoff, again with the scramjet duct also functioning as an afterburner. Alternately, it might install an internal rocket and become a scramLACE. It could use ACES for better performance, while adopting the chemical-separation process to derichen the use of hydrogen.

It did not take long before engineers rose to their new opportunities by conceiving of new types of vehicles that were to use these engines, perhaps to fly to orbit as a single stage. Everyone in aerospace was well aware that it had taken only 30 years to progress from Lindbergh in Paris to satellites in space. The studies that explored the new possibilities amounted to an assertion that this pace of technical advance was likely to continue.

Why NASP Fell Short

NASP was founded on optimism, but it involved a good deal more than blind faith. Key technical areas had not been properly explored and offered significant prospects of advance. These included new forms of titanium, along with the use of an ejector to eliminate the need for an auxiliary engine as a separate installation for initial boost of a scramjet. There also was the highly promising field of computational fluid dynamics (CFD), which held the prospect of supplementing flight test and work in wind tunnels with sophisticated mathematical simulation.

Still NASP fell short, and there were reasons. CFD proved not to be an exact science, particularly at high Mach. Investigators worked with the complete equations of fluid mechanics, which were exact, but were unable to give precise treatments in such crucial areas as transition to turbulence and the simulation or modeling of turbulence. Their methods introduced approximations that took away the accuracy and left NASP with more drag and less engine performance than people had sought.

In the field of propulsion, ejectors had not been well studied and stood as a topic that was ripe for deeper investigation. Even so, the ejectors offered poor performance at the outset, and subsequent studies did not bring substantial improvements. This was unfortunate, for use of a highly capable ejector was a key feature of Anthony duPont’s patented engine cycle, which had provided the technical basis for NASP.

With drag increasing and engine performance falling off, metallurgists might have saved the day by offering new materials. They indeed introduced Beta-21S titanium, which approached the heat resistance of Rene 41, the primary structural material of Dyna-Soar, but had only half the density. Yet even this achievement was not enough. Structural designers needed still more weight saving, and while they experimented with new types of beryllium and carbon-carbon, they came up with no significant contributions to the state of the art.

Aerospaceplane

“I remember when Sputnik was launched,” says Arthur Thomas, a leader in early work on scramjets at Marquardt. The date was 4 October 1957. “I was doing analysis of scramjet boosters to go into orbit. We were claiming back in those days that we could get the cost down to a hundred dollars per pound by using airbreathers.” He adds that “our job was to push the frontiers. We were extremely excited and optimistic that we were really on the leading edge of something that was going to be big.”49

At APL, other investigators proposed what may have been the first concept for a hypersonic airplane that merited consideration. In an era when the earliest jet airliners were only beginning to enter service, William Avery leaped beyond the supersonic transport to the hypersonic transport, at least in his thoughts. His colleague Eugene Pietrangeli developed a concept for a large aircraft with a wingspan of 102 feet and length of 175 feet, fitted with turbojets and with the Dugger-Keirsey external-burning scramjet, with its short cowl, under each wing. It was to accelerate to Mach 3.6 using the turbojets, then go over to scramjet propulsion and cruise at Mach 7. Carrying 130 passengers, it was to cross the country in half an hour and achieve a range of 7,000 miles. Its weight of 600,000 pounds was nearly twice that of the Boeing 707 Intercontinental, largest of that family of jetliners.50

Within the Air Force, an important prelude to similar concepts came in 1957 with Study Requirement 89774. It invited builders of large missiles to consider what modifications might make them reusable. It was not hard to envision that they might return to a landing on a runway by fitting them with wings and jet engines, but most such rocket stages were built of aluminum, which raised serious issues of thermal protection. Still, Convair at least had a useful point of departure. Its Atlas used stainless steel, which had considerably better heat resistance.51

The Convair concept envisioned a new version of this missile, fitted out as a reusable first stage for a launch vehicle. Its wings were to use the X-15’s structure. A crew compartment, set atop a rounded nose, recalled that company’s B-36 heavy bomber. To ease the thermal problem, designers were aware that this stage, having burned its propellants, would be light in weight. It therefore could execute a hypersonic glide while high in the atmosphere, losing speed slowly and diminishing the rate of heating.52

It did not take long before Convair officials began to view this reusable Atlas as merely a first step into space, for the prospect of LACE opened new vistas. Beginning late in 1957, using a combination of Air Force and in-house funding, the company launched paper studies of a new concept called Space Plane. It took shape as a large single-stage vehicle with highly swept delta wings and a length of 235 feet. Propulsion was to feature a combination of ramjets and LACE with ACES, installed as separate engines, with the ACES being of the distillation type. The gross weight at takeoff, 450,000 pounds, was to include 270,000 pounds of liquid hydrogen.


Convair’s Space Plane concept. (Art by Dennis Jenkins)

Space Plane was to take off from a runway, using LACE and ACES while pumping the oxygen-rich condensate directly to the LACE combustion chambers. It would climb to 40,000 feet and Mach 3, cut off the rocket, and continue to fly using hydrogen-fueled ramjets. It was to use ACES for air collection while cruising at Mach 5.5 and 66,000 feet, trading liquid hydrogen for oxygen-rich liquid air while taking on more than 600,000 pounds of this oxidizer. Now weighing more than a million pounds, Space Plane would reach Mach 7 on its ramjets, then shut them down and go over completely to rocket power. Drawing on its stored oxidizer, it could fly to orbit while carrying a payload of 38,000 pounds.

The concept was born in exuberance. Its planners drew on estimates “that by 1970 the orbital payload accumulated annually would be somewhere between two million and 20 million pounds.” Most payloads were to run near 10,000 pounds, thereby calling for a schedule of three flights per day. Still the concept lacked an important element, for if scramjets were nowhere near the state of the art, at Convair they were not even the state of the imagination.53 Space Plane, as noted, used ramjets with subsonic combustion, installing them in pods like turbojets on a B-52. Scramjets lay beyond the thoughts of other companies as well. Thus, Northrop expected to use LACE with its Propulsive Fluid Accumulator (PROFAC) concept, which also was to cruise in the atmosphere while building up a supply of liquefied air. Like Space Plane, PROFAC also specified conventional ramjets.54

But Republic Aviation was home to the highly imaginative Kartveli, with Ferri being just a phone call away. Here the scramjet was very much a part of people’s thinking. Like the Convair designers, Kartveli looked ahead to flight to orbit with a single stage. He also expected that this goal was too demanding to achieve in a single jump, and he anticipated that intermediate projects would lay groundwork. He presented his thoughts in August 1960 at a national meeting of the Institute of Aeronautical Sciences.55

The XF-103 had been dead and buried for three years, but Kartveli had crafted the F-105, which topped Mach 2 as early as 1956 and went forward into production. He now expected to continue with a Mach 2.3 fighter-bomber with enough power to lift off vertically as if levitating and to cruise at 75,000 feet. Next on the agenda was a strategic bomber powered by nuclear ramjets, which would use atomic power to heat internal airflow, with no need to burn fuel. It would match the peak speed of the X-7 by cruising at Mach 4.25, or 2,800 mph, and at 85,000 feet.56

Kartveli set Mach 7, or 5,000 mph, as the next goal. He anticipated achieving this speed with another bomber that was to cruise at 120,000 feet. Propulsion was to come from two turbojets and two ramjets, with this concept pressing the limits of subsonic combustion. Then for flight to orbit, his masterpiece was slated for Mach 25. It was to mount four J58 turbojets, modified to burn hydrogen, along with four scramjets. Ferri had convinced him that such engines could accelerate this craft all the way to orbit, with much of the gain in speed taking place while flying at 200,000 feet. A small rocket engine might provide a final boost into space, but Kartveli placed his trust in Ferri’s scramjets, planning to use neither LACE nor ACES.57

These concepts drew attention, and funding, from the Aero Propulsion Laboratory at Wright-Patterson Air Force Base. Its technical director, Weldon Worth, had been closely involved with ramjets since the 1940s. Within a world that the turbojet had taken by storm, he headed a Nonrotating Engine Branch that focused on ramjets and liquid-fuel rockets. Indeed, he regarded the ramjet as holding the greater promise, taking this topic as his own while leaving the rockets to his deputy, Lieutenant Colonel Edward Hall. He launched the first Air Force studies of hypersonic propulsion as early as 1957. In October 1959 he chaired a session on scramjets at the Second USAF Symposium on Advanced Propulsion Concepts.

In the wake of this meeting, he built on the earlier SR-89774 efforts and launched a new series of studies called Aerospaceplane. It did not aim at anything so specific as a real airplane that could fly to orbit. Rather, it supported design studies and conducted basic research in advanced propulsion, seeking to develop a base for the evolution of such craft in the distant future. Marquardt and GASL became heavily involved, as did Convair, Republic, North American, GE, Lockheed, Northrop, and Douglas Aircraft.58

The new effort broadened the scope of the initial studies, while encouraging companies to pursue their concepts to greater depth. Convair, for one, had issued single-volume reports on Space Plane in October 1959, April 1960, and December 1960. In February 1961 it released an 11-volume set of studies, with each of them addressing a specific topic such as Aerodynamic Heating, Propulsion, Air Enrichment Systems, Structural Analysis, and Materials.59

Aerospaceplane proved too hot to keep under wraps, as a steady stream of disclosures presented concept summaries to the professional community and the general public. Aviation Week, hardly shy in these matters, ran a full-page article in October 1960:

USAF PLANS RADICAL SPACE PLANE

Studies costing $20 million sought in next budget; Earth-to-orbit vehicle would need no large booster.60

At the Los Angeles Times, the aerospace editor Marvin Miles published headlined stories of his own. The first appeared in November:

LOCKHEED WORKING ON PLANE ABLE TO GO INTO ORBIT ALONE

Air Force Interested in Project61

Two months later another of his articles ran as a front-page headline:

HUGE BOOSTER NOT NEEDED BY AIR FORCE SPACE PLANE

Proposed Wing Vehicle Would Take Off, Return Like Conventional Craft

It particularly cited Convair’s Space Plane, with a Times artist presenting a view of this craft in flight.62

Participants in the new studies took to the work with enthusiasm matching that of Arthur Thomas at Marquardt. Robert Sanator, a colleague of Kartveli at Republic, recalls the excitement: “This one had everything. There wasn’t a single thing in it that was off-the-shelf. Whatever problem there was in aerospace—propulsion, materials, cooling, aerodynamics—Aerospaceplane had it. It was a lifetime work and it had it all. I naturally jumped right in.”63

Aerospaceplane also drew attention from the Air Force’s Scientific Advisory Board, which set up an ad hoc committee to review its prospects. Its chairman, Alexander Flax, was the Air Force’s chief scientist. Members specializing in propulsion included Ferri, along with Seymour Bogdonoff of Princeton University, a leading experimentalist; Perry Pratt of Pratt & Whitney, who had invented the twin-spool turbojet; NASA’s Alfred Eggers; and the rocket specialist George P. Sutton. There also were hands-on program managers: Robert Widmer of Convair, builder of the Mach 2 B-58 bomber, and Harrison Storms of North American, who had shaped the X-15 and the Mach 3 XB-70 bomber.64

This all-star group came away deeply skeptical of the prospects for Aerospaceplane. Its report, issued in December 1960, addressed a number of points and gave an overall assessment:

The proposed designs for Aerospace Plane…appear to violate no physical principles, but the attractive performance depends on an estimated combination of optimistic assumptions for the performance of components and subsystems. There are practically no experimental data which support these assumptions.

Aerodynamics

In March 1984, with the Copper Canyon studies showing promise, a classified program review was held near San Diego. In the words of George Baum, a close associate of Robert Williams, “We had to put together all the technology pieces to make it credible to the DARPA management, to get them to come out to a meeting in La Jolla and be willing to sit down for three full days. It wasn’t hard to get people out to the West Coast in March; the problem was to get them off the beach.”

One of the attendees, Robert Whitehead of the Office of Naval Research, gave a talk on CFD. Was the mathematics ready; were computers at hand? Williams recalls that “he explained, in about 15 minutes, the equations of fluid mechanics, in a memorable way. With a few simple slides, he could describe their nature in almost an offhand manner, laying out these equations so the computer could solve them, then showing that the computer technology was also there. We realized that we could compute our way to Mach 25, with high confidence. That was a high point of the presentations.”1

Development of CFD prior to NASP. In addition to vast improvement in computers, there also was similar advance in the performance of codes. (NASA)

Whitehead’s point of departure lay in the fundamental equations of fluid flow: the Navier-Stokes equations, named for the nineteenth-century physicists Claude-Louis-Marie Navier and Sir George Stokes. They form a set of nonlinear partial differential equations that contain 60 partial derivative terms. Their physical content is simple, comprising the basic laws of conservation of mass, momentum, and energy, along with an equation of state. Yet their solutions, when available, cover the entire realm of fluid mechanics.2
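In one conventional compact notation, with τ the viscous stress tensor, q the heat flux, and e the internal energy, these conservation laws read

\[
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = 0, \qquad
\rho\,\frac{D\mathbf{u}}{Dt} = -\nabla p + \nabla\cdot\boldsymbol{\tau}, \qquad
\rho\,\frac{De}{Dt} = -p\,\nabla\cdot\mathbf{u} + \boldsymbol{\tau}:\nabla\mathbf{u} - \nabla\cdot\mathbf{q},
\]

closed by an equation of state such as p = ρRT. Written out in full for three-dimensional flow, the convective and viscous terms expand into the dozens of partial-derivative terms noted above.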

An example of an important development, contemporaneous with Whitehead’s presentation, was a 1985 treatment of flow over a complete X-24C vehicle at Mach 5.95. The authors, Joseph Shang and S. J. Scheer, were at the Air Force’s Wright Aeronautical Laboratories. They used a Cray X-MP supercomputer and gave the following comparison between computation and experiment:

                      cD        cL        L/D
Experimental data     0.03676   0.03173   1.158
Numerical results     0.03503   0.02960   1.183
Percent error         4.71      6.71      2.16

(Source: AIAA Paper 85-1509)

Availability of test facilities. Continuous-flow wind tunnels are far below the requirements of realistic simulation of full-size aircraft in flight. Impulse facilities, such as shock tunnels, come close to the requirements but are limited by their very short run times. (NASA)

In that year the state of the art permitted extensive treatments of scramjets. Complete three-dimensional simulations of inlets were available, along with two-dimensional discussions of scramjet flow fields that covered the inlet, combustor, and nozzle. In 1984 Fred Billig noted that simulation of flow through an inlet using complete Navier-Stokes equations typically demanded a grid of 80,000 points and up to 12,000 time steps, with each run demanding four hours on a Control Data Cyber 203 supercomputer. A code adapted for supersonic flow was up to a hundred times faster. This made it useful for rapid surveys of a number of candidate inlets, with full Navier-Stokes treatments being reserved for a few selected choices.4
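Billig’s figures give a sense of the machine time involved. Taking one grid point advanced through one time step as the unit of work, a rough throughput estimate (nothing more than arithmetic on the numbers just cited) is

\[
\frac{80{,}000 \ \text{points} \times 12{,}000 \ \text{steps}}{4 \times 3{,}600 \ \text{seconds}} \approx 6.7\times 10^{4} \ \text{point-steps per second},
\]

which makes clear why a code a hundred times faster, even a specialized one, transformed the pace of design surveys.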

CFD held particular promise because it had the potential of overcoming the limitations of available facilities. These limits remained in place all through the NASP era. A 1993 review found “adequate” test capability only for classical aerodynamic experiments in a perfect gas, namely helium, which could support such work to Mach 20. Between Mach 13 and 17 there was “limited” ability to conduct tests that exhibited real-gas effects, such as molecular excitation and dissociation. Still, available facilities were too small to capture effects associated with vehicle size, such as determining the location of boundary-layer transition to turbulence.

For scramjet studies, the situation was even worse. There was “limited” ability to test combustors out to Mach 7, but at higher Mach the capabilities were “inadequate.” Shock tunnels supported studies of flows in rarefied air from Mach 16 upward, but the whole of the nation’s capacity for such tests was “inadequate.” Some facilities existed that could study complete engines, either by themselves or in airframe-integrated configurations, but again the whole of this capability was “inadequate.”5

Yet it was an exaggeration in 1984, and remains one to this day, to propose that CFD could remedy these deficiencies by computing one’s way to orbital speeds “with high confidence.” Experience has shown that CFD falls short in two areas: prediction of transition to turbulence, which sharply increases drag due to skin friction, and in the simulation of turbulence itself.

For NASP, it was vital not only to predict transition but to understand the properties of turbulence after it appeared. One could see this by noting that hypersonic propulsion differs substantially from propulsion of supersonic aircraft. In the latter, the art of engine design allows engineers to ensure that there is enough margin of thrust over drag to permit the vehicle to accelerate. A typical concept for a Mach 3 supersonic airliner, for instance, calls for gross thrust from the engines of 123,000 pounds, with ram drag at the inlets of 54,500. The difference, nearly 70,000 pounds of thrust, is available to overcome skin-friction drag during cruise, or to accelerate.

At Mach 6, a representative hypersonic-transport design shows gross thrust of 330,000 pounds and ram drag of 220,000. Again there is plenty of margin for what, after all, is to be a cruise vehicle. But in hypersonic cruise at Mach 12, the numbers typically are 2.1 million pounds for gross thrust—and 1.95 million for ram drag! Here the margin comes to only 150,000 pounds of thrust, which is narrow indeed. It could vanish if skin-friction drag proves to be higher than estimated, perhaps because of a poor forecast of the location of transition. The margin also could vanish if the thrust is low, due to the use of optimistic turbulence models.6
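A few lines of arithmetic show how quickly that Mach 12 margin could evaporate. The sketch below simply perturbs the cited ram drag by a few percent; the percentages are illustrative, not taken from the design studies:

```python
# Sensitivity of the Mach 12 cruise margin quoted in the text.
gross_thrust = 2.10e6   # pounds of gross thrust at Mach 12
ram_drag = 1.95e6       # pounds of ram drag at Mach 12

for error in (0.00, 0.02, 0.05, 0.08):  # fractional underprediction of drag
    margin = gross_thrust - ram_drag * (1.0 + error)
    print(f"drag {error:4.0%} high -> margin {margin/1e3:7.1f} thousand lb")

# 0% -> 150.0, 2% -> 111.0, 5% -> 52.5, 8% -> -6.0: the margin vanishes
```

An eight-percent error in drag, well within the scatter of the turbulence predictions discussed below, would leave the vehicle unable to accelerate.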

Any high-Mach scramjet-powered craft must not only cruise but accelerate. In turn, the thrust driving this acceleration appears as a small difference between two quantities: total drag and net thrust, the latter being net of losses within the engines. Accordingly, valid predictions concerning transition and turbulence are matters of the first importance.

NASP-era analysts fell back on the “eN method,” which gave a greatly simplified summary of the pertinent physics but still gave results that were often viewed as useful. It used the Navier-Stokes equations to solve for the overall flow in the laminar boundary layer, upstream of transition. This method then introduced new and simple equations derived from the original Navier-Stokes. These were linear and traced the growth of a small disturbance as one followed the flow downstream. When it had grown by a factor of 22,000—e^10, with N = 10—the analyst accepted that transition to turbulence had occurred.7

Experimentally determined locations of the onset of transition to turbulent flow. The strong scatter of the data points defeats attempts to find a predictive rule. (NASA)
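In schematic form, the eN method integrates the local amplification rate −αi of the most unstable disturbance, obtained from the linearized equations, along the flow:

\[
N(x) = \int_{x_0}^{x} \left(-\alpha_i\right)\,dx', \qquad \frac{A}{A_0} = e^{N}, \qquad e^{10} \approx 22{,}000,
\]

with transition assumed once N reaches a chosen threshold, commonly near 10. The threshold itself was empirical, which is where the difficulties described below began.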

One can obtain a solution in this fashion, but transition results from local roughnesses along a surface, and these can lead to results that vary dramatically. Thus, the repeated re-entries of the space shuttle, during dozens of missions, might have given numerous nearly identical data sets. In fact, transition has occurred at Mach numbers from 6 to 19! A 1990 summary presented data from wind tunnels, ballistic ranges, and tests of re-entry vehicles in free flight. There was a spread of as much as 30 to one in the measured locations of transition, with the free-flight data showing transition positions that typically were five times farther back from a nose or leading edge than positions observed using other methods. At Mach 7, observed locations covered a range of 20 to one.8

One may ask whether transition can be predicted accurately even in principle, because it involves minute surface roughnesses whose details are not known a priori and may even change in the course of a re-entry. More broadly, the state of transition was summarized in a 1987 review of problems in NASP hypersonics that was written by three NASA leaders in CFD:

Almost nothing is known about the effects of heat transfer, pressure gradient, three-dimensionality, chemical reactions, shock waves, and other influences on hypersonic transition. This is caused by the difficulty of conducting meaningful hypersonic transition experiments in noisy ground-based facilities and the expense and difficulty of carrying out detailed and carefully controlled experiments in flight where it is quiet. Without an adequate, detailed database, development of effective transition models will be impossible.9

Matters did not improve in subsequent years. In 1990 Mujeeb Malik, a leader in studies of transition, noted “the long-held view that conventional, noisy ground facilities are simply not suitable for simulation of flight transition behavior.” A subsequent critique added that “we easily recognize that there is today no reasonably reliable predictive capability for engineering applications” and commented that “the reader…is left with some feeling of helplessness and discouragement.”10 A contemporary review from the Defense Science Board pulled no punches: “Boundary layer transition…cannot be validated in existing ground test facilities.”11

There was more. If transition could not be predicted, it also was not generally possible to obtain a valid simulation, from first principles, of a flow that was known to be turbulent. The Navier-Stokes equations carried the physics of turbulence at all scales. The problem was that in flows of practical interest, the largest turbulent eddies were up to 100,000 times bigger than the smallest ones of concern. This meant that complete numerical simulations were out of the question.

Late in the nineteenth century the physicist Osborne Reynolds tried to bypass this difficulty by rederiving these equations in averaged form. He considered the flow velocity at any point as comprising two elements: a steady-flow part and a turbulent part that contained all the motion due to the eddies. Using the Navier-Stokes equations, he obtained equations for the averaged quantities, which now contained new terms built from the turbulent velocities.
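In present-day notation, and writing the incompressible case for simplicity, each velocity component splits into a mean and a fluctuation, and averaging the momentum equation leaves products of the fluctuations behind:

\[
u_i = \bar{u}_i + u_i', \qquad
\rho\left(\frac{\partial \bar{u}_i}{\partial t} + \bar{u}_j\,\frac{\partial \bar{u}_i}{\partial x_j}\right)
= -\frac{\partial \bar{p}}{\partial x_i}
+ \frac{\partial}{\partial x_j}\left(\mu\,\frac{\partial \bar{u}_i}{\partial x_j} - \rho\,\overline{u_i' u_j'}\right).
\]

The averaged products \(\overline{u_i' u_j'}\) are the Reynolds stresses, and nothing within the averaged equations determines them.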

He found, though, that the new equations introduced additional unknowns. Other investigators, pursuing this approach, succeeded in deriving additional equations for these extra unknowns—only to find that these introduced still more unknowns. Reynolds’s averaging procedure thus led to an infinite regress, in which at every stage there were more unknown variables describing the turbulence than there were equations with which to solve for them. This contrasted with the Navier-Stokes equations themselves, which in principle could be solved because the number of these equations and the number of their variables was equal.

This infinite regress demonstrated that it was not sufficient to work from the Navier-Stokes equations alone—something more was needed. This situation arose because the averaging process did not preserve the complete physical content of the Navier-Stokes formulation. Information had been lost in the averaging. The problem of turbulence thus called for additional physics that could replace the lost information, end the regress, and give a set of equations for turbulent flow in which the number of equations again would match the number of unknowns.12

The standard means to address this issue has been a turbulence model. This takes the form of one or more auxiliary equations, either algebraic or partial-differential, which are solved simultaneously with the Navier-Stokes equations in Reynolds-averaged form. In turn, the turbulence model attempts to derive one or more quantities that describe the turbulence and to do so in a way that ends the regress.

Viscosity, a physical property of every liquid and gas, provides a widely used point of departure. It arises at the molecular level, and the physics of its origin is well understood. In a turbulent flow, one may speak of an “eddy viscosity” that arises by analogy, with the turbulent eddies playing the role of molecules. This quantity describes how rapidly an ink drop will mix into a stream—or a parcel of hydrogen into the turbulent flow of a scramjet combustor.13
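The usual statement of the analogy, often credited to Boussinesq, simply models the unknown Reynolds stress on the viscous stress, with an eddy viscosity μt in place of the molecular value (written here for a thin shear layer):

\[
-\rho\,\overline{u'v'} = \mu_t\,\frac{\partial \bar{u}}{\partial y},
\]

where μt is a property of the flow rather than of the fluid, and must be supplied by a model such as those discussed next.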

Like the eN method in studies of transition, eddy viscosity presents a view of turbulence that is useful and can often be made to work, at least in well-studied cases. The widely used Baldwin-Lomax model is of this type, and it uses constants derived from experiment. Antony Jameson of Princeton University, a leading writer of flow codes, described it in 1990 as “the most popular turbulence model in the industry, primarily because it’s easy to program.”14

This approach indeed gives a set of equations that are solvable and avoid the regress, but the analyst pays a price: eddy viscosity lacks standing as a concept supported by fundamental physics. Peter Bradshaw of Stanford University virtually rejects it out of hand, declaring, “Eddy viscosity does not even deserve to be described as a ‘theory’ of turbulence!” He adds more broadly, “The present state is that even the most sophisticated turbulence models are based on brutal simplification of the N-S equations and hence cannot be relied on to predict a large range of flows with a fixed set of empirical coefficients.”15

Other specialists gave similar comments throughout the NASP era. Thomas Coakley of NASA-Ames wrote in 1983 that “turbulence models that are now used for complex, compressible flows are not well advanced, being essentially the same models that were developed for incompressible attached boundary layers and shear flows. As a consequence, when applied to compressible flows they yield results that vary widely in terms of their agreement with experimental measurements.”16

A detailed critique of existing models, given in 1985 by Budugur Lakshminarayana of Pennsylvania State University, gave pointed comments on algebraic models, which included Baldwin-Lomax. This approach “provides poor predictions” for flows with “memory effects,” in which the physical character of the turbulence does not respond instantly to a change in flow conditions but continues to show the influence of upstream effects. Such a turbulence model “is not suitable for flows with curvature, rotation, and separation. The model is of little value in three-dimensional complex flows and in situations where turbulence transport effects are important.”

“Two-equation models,” which used two partial differential equations to give more detail, had their own faults. In the view of Lakshminarayana, they “fail to capture many of the features associated with complex flows.” This class of models “fails for flows with rotation, curvature, strong swirling flows, three-dimensional flows, shock-induced separation, etc.”17

Rather than work with eddy viscosity, some investigators used “Reynolds stress” models. Reynolds stresses were not true stresses, which contributed to drag. Rather, they were terms that appeared in the Reynolds-averaged Navier-Stokes equations alongside other terms that indeed represented stress. Models of this type offered greater physical realism, but again this came at the price of severe computational difficulty.18

A group at NASA-Langley, headed by Thomas Gatski, offered words of caution in 1990: “…even in the low-speed incompressible regime, it has not been possible to construct a turbulence closure model which can be applied over a wide class of flows…. In general, Reynolds stress closure models have not been very successful in handling the effects of rotation or three-dimensionality even in the incompressible regime; therefore, it is not likely that these effects can be treated successfully in the compressible regime with existing models.”19

Anatol Roshko of Caltech, widely viewed as a dean of aeronautics, has his own view: “History proves that each time you get into a new area, the existing models are found to be inadequate.” Such inadequacies have been seen even in simple flows, such as flow over a flat plate. The resulting skin friction is known to an accuracy of around one percent. Yet values calculated from turbulence models can be in error by up to 10 percent. “You can always take one of these models and fix it so it gives the right answer for a particular case,” says Bradshaw. “Most of us choose the flat plate. So if you can’t get the flat plate right, your case is indeed piteous.”20

Another simple case is flow within a channel that suddenly widens. Downstream of the point of widening, the flow shows a zone of strongly whirling circulation. It narrows until the main flow reattaches, flowing in a single zone all the way to the now wider wall. Can one predict the location of this reattachment point? “This is a very severe test,” says John Lumley of Cornell University. “Most of the simple models have trouble getting reattachment within a factor of two.” So-called “k-epsilon models,” he says, are off by that much. Even so, NASA’s Tom Coakley describes them as “the most popular two-equation model,” whereas Princeton University’s Jameson speaks of them as “probably the best engineering choice around” for such problems as…flow within a channel.21

Turbulence models have a strongly empirical character and therefore often fail to predict the existence of new physics within a flow. This has been seen to cause difficulties even in the elementary case of steady flow past a cylinder at rest, a case so simple that it is presented in undergraduate courses. Nor do turbulence models cope with another feature of some flows: their strong sensitivity to slight changes in conditions. A simple example is the growth of a mixing layer.

In this scenario, two flows that have different velocities proceed along opposite sides of a thin plate, which terminates within a channel. The mixing layer then forms and grows at the interface between these streams. In Roshko’s words, “a one-percent periodic disturbance in the free stream completely changes the mixing layer growth.” This has been seen in experiments and in highly detailed solutions of the Navier-Stokes equations that solve the complete equations using a very fine grid. It has not been seen in solutions of Reynolds-averaged equations that use turbulence models.22

And if simple flows of this type bring such difficulties, what can be said of hypersonics? Even in the free stream that lies at some distance from a vehicle, one finds strong aerodynamic heating along with shock waves and the dissociation, recombination, and chemical reaction of air molecules. Flow along the aircraft surface adds a viscous boundary layer that undergoes shock impingement, while flow within the engine adds the mixing and combustion of fuel.

As William Dannevik of Lawrence Livermore National Laboratory describes it, “There’s a fully nonlinear interaction among several fields: an entropy field, an acoustic field, a vortical field.” By contrast, in low-speed aerodynamics, “you can often reduce it down to one field interacting with itself.” Hypersonic turbulence also brings several channels for the flow and exchange of energy: internal energy, density, and vorticity. The experimental difficulties can be correspondingly severe.23

Roshko sees some similarity between turbulence modeling and the astronomy of Ptolemy, who flourished when the Roman Empire was at its height. Ptolemy represented the motions of the planets using epicycles and deferents in a purely empirical fashion and with no basis in physical theory. “Many of us have used that example,” Roshko declares. “It’s a good analogy. People were able to continually keep on fixing up their epicyclic theory, to keep on accounting for new observations, and they were completely wrong in knowing what was going on. I don’t think we’re that badly off, but it’s illustrative of another thing that bothers some people. Every time some new thing comes around, you’ve got to scurry and try to figure out how you’re going to incorporate it.”24

A 1987 review concluded, “In general, the state of turbulence modeling for supersonic, and by extension, hypersonic, flows involving complex physics is poor.” Five years later, late in the NASP era, little had changed, for a Defense Science Board program review pointed to scramjet development as the single most important issue that lay beyond the state of the art.25

Within NASP, these difficulties meant that there was no prospect of computing one’s way to orbit, or of using CFD to make valid forecasts of high-Mach engine performance. In turn, these deficiencies forced the program to fall back on its test facilities, which had their own limitations.

NACA-Langley and John Becker

During the war the Germans failed to match the Allies in production of airplanes, but they were well ahead in technical design. This was particularly true in the important area of jet propulsion. They fielded an operational jet fighter, the Me-262, and while the Yankees were well along in developing the Lockheed P-80 as a riposte, the war ended before any of those jets could see combat. Nor was the Me-262 a last-minute work of desperation. It was a true air weapon that showed better speed and acceleration than the improved P-80A in flight test, while demonstrating an equal rate of climb.28 Albert Speer, Hitler’s minister of armaments, asserted in his autobiographical Inside the Third Reich (1970) that by emphasizing production of such fighters and by deploying the Wasserfall antiaircraft missile that was in development, the Nazis “would have beaten back the Western Allies’ air offensive against our industry from the spring of 1944 on.”29 The Germans thus might have prolonged the war until the advent of nuclear weapons.

Wartime America never built anything resembling the big Mach 4.4 wind tunnels at Peenemunde, but its researchers at least constructed facilities that could compare with the one at Aachen. The American installations did not achieve speeds to match Aachen’s Mach 3.3, but they had larger test sections. Arthur Kantrowitz, a young physicist from Columbia University who was working at Langley, built a nine-inch tunnel that reached Mach 2.5 when it entered operation in 1942. (Aachen’s had been four inches.) Across the country, at NACA’s Ames Aeronautical Laboratory, two other wind tunnels entered service during 1945. Their test sections measured one by three feet, and their flow speeds reached Mach 2.2.30

The Navy also was active. It provided $4.5 million for the nation’s first really large supersonic tunnel, with a test section six feet square. Built at NACA-Ames, operating at Mach 1.3 to 1.8, this installation used 60,000 horsepower and entered service soon after the war.31 The Navy also set up its Ordnance Aerophysics Laboratory in Daingerfield, Texas, adjacent to the Lone Star Steel Company, which had air compressors that this firm made available. The supersonic tunnel that resulted covered a range of Mach 1.25 to 2.75, with a test section of 19 by 27.5 inches. It became operational in June 1946, alongside a similar installation that served for high-speed engine tests.32

Theorists complemented the wind-tunnel builders. In April 1947 Theodore von Karman, a professor at Caltech who was widely viewed as the dean of American aerodynamicists, gave a review and survey of supersonic flow theory in an address to the Institute of Aeronautical Sciences. His lecture, published three months later in the Journal of the Aeronautical Sciences, emphasized that supersonic flow theory now was mature and ready for general use. Von Karman pointed to a plethora of available methods and solutions that not only gave means to attack a number of important design problems but also gave independent approaches that could permit cross-checks on proposed solutions.

John Stack, a leading Langley aerodynamicist, noted that Prandtl had given a similarly broad overview of subsonic aerodynamics a quarter-century earlier. Stack declared, “Just as Prandtl’s famous paper outlined the direction for the engineer in the development of subsonic aircraft, Dr. von Karman’s lecture outlines the direction for the engineer in the development of supersonic aircraft.”33

Yet the United States had no facility, and certainly no large one, that could reach Mach 4.4. As a stopgap, the nation got what it wanted by seizing German wind tunnels. A Mach 4.4 tunnel was shipped to the Naval Ordnance Laboratory in White Oak, Maryland. Its investigators had fabricated a Mach 5.18 nozzle and had conducted initial tests in January 1945. In 1948, in Maryland, this capability became routine.34 Still, if the U.S. was to advance beyond the Germans and develop the true hypersonic capability that Germany had failed to achieve, the nation would have to rely on independent research.

The man who pursued this research, and who built America’s first hypersonic tunnel, was Langley’s John Becker. He had been at that center since 1936; during the latter part of the war he was assistant chief of Stack’s Compressibility Research Division. He specifically was in charge of Langley’s 16-Foot High-Speed Tunnel, which had fought its war by investigating cooling problems in aircraft motors as well as the design of propellers. This facility contributed particularly to tests of the B-50 bomber and to the aerodynamic shapes of the first atomic bombs. It also assisted development of the Pratt & Whitney R-2800 Double Wasp, a widely used piston engine that powered several important wartime fighter planes, along with the DC-6 airliner and the C-69 transport, the military version of Lockheed’s Constellation.35

It was quite a jump from piston-powered warbirds to hypersonics, but Becker willingly made the leap. The V-2, flying at Mach 5, gave him his justification. In a memo to Langley’s chief of research, dated 3 August 1945, Becker noted that planned facilities were to reach no higher than Mach 3. He declared that this was inadequate: “When it is considered that all of these tunnels will be used, to a large extent, to develop supersonic missiles and projectiles of types which have already been operated at Mach numbers as high as 5.0, it appears that there is a definite need for equipment capable of higher test Mach numbers.”

Within this memo, he outlined a design concept for “a supersonic tunnel having a test section four-foot square and a maximum test Mach number of 7.0.” It was to achieve continuous flow, being operated by a commercially-available compressor of 2,400 horsepower. To start the flow, the facility was to hold air within a tank that was compressed to seven atmospheres. This air was to pass through the wind tunnel before exhausting into a vacuum tank. With pressure upstream pushing the flow and with the evacuated tank pulling it, airspeeds within the test section would be high indeed. Once the flow was started, the compressor would maintain it.
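The isentropic relations show why such a facility had to both push and pull. For air with γ = 1.4, the ratio of total to static pressure at Mach 7 is

\[
\frac{p_0}{p} = \left(1 + \frac{\gamma - 1}{2}\,M^2\right)^{\gamma/(\gamma - 1)} = (10.8)^{3.5} \approx 4{,}100,
\]

far more than the factor of seven available from the pressure tank alone. A diffuser downstream would recover part of this ratio in practice, but the evacuated tank supplied much of the difference.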

A preliminary estimate indicated that this facility would cost $350,000. This was no mean sum, and Becker’s memo proposed to lay groundwork by first building a model of the big tunnel, with a test section only one foot square. He recommended that this subscale facility should “be constructed and tested before proceeding with a four-foot-square tunnel.” He gave an itemized cost estimate that came to $39,550, including $10,000 for installation and $6,000 for contingency.

Becker’s memo ended in formal fashion: “Approval is requested to proceed with the design and construction of a model supersonic tunnel having a one-foot-square test section at Mach number 7.0. If successful, this model tunnel would not only provide data for the design of economical high Mach number supersonic wind tunnels, but would itself be a very useful research tool.”36

On 6 August, three days after Becker wrote this memo, the potential usefulness of this tool increased enormously. On that day, an atomic bomb destroyed Hiroshima. With this, it now took only modest imagination to envision nuclear-tipped V-2s as weapons of the future. The standard V-2 had carried only a one-ton conventional warhead and lacked both range and accuracy. It nevertheless had been technically impressive, particularly since there was no way to shoot it down. But an advanced version with an atomic warhead would be far more formidable.

John Stack strongly supported Becker’s proposal, which soon reached the desk of George Lewis, NACA’s Director of Aeronautical Research. Lewis worked at NACA’s Washington Headquarters but made frequent visits to Langley. Stack discussed the proposal with Lewis in the course of such a visit, and Lewis said, “Let’s do it.”

Just then, though, there was little money for new projects. NACA faced a postwar budget cut, which took its total appropriation from $40.9 million in FY 1945 to $24 million in FY 1946. Lewis therefore said to Stack, “John, you know I’m a sucker for a new idea, but don’t call it a wind tunnel because I’ll be in trouble with having to raise money in a formal way. That will necessitate Congressional review and approval. Call it a research project.” Lewis designated it as Project 506 and obtained approval from NACA’s Washington office on 18 December.37

A month later, in January 1946, Becker raised new issues in a memo to Stack. He was quite concerned that the high Mach would lead to so low a temperature that air in the flow would liquefy. To prevent this, he called for heating the air, declaring that “a temperature of 600°F in the pressure tank is essential.” He expected to achieve this by using “a small electrical heater.”
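The concern followed directly from the isentropic temperature ratio. At Mach 7, with γ = 1.4,

\[
\frac{T_0}{T} = 1 + \frac{\gamma - 1}{2}\,M^2 = 1 + 0.2 \times 49 = 10.8,
\]

so air expanding from room temperature, near 290 K, would reach a static temperature of about 27 K in the test section, far below the roughly 80 K at which air liquefies at atmospheric pressure. Heating the supply raised the static temperature in proportion.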

The pressure in that tank was to be considerably higher than in his plans of August. The tank would hold a pressure of 100 atmospheres. Instead of merely starting the flow, with a powered compressor sustaining it in continuous operation, this pressure tank now was to hold enough air for operating times of 40 seconds. This would resolve uncertainties in the technical requirements for continuous operation. Continuous flows were still on the agenda but not for the immediate future. Instead, this wind tunnel was to operate as a blowdown facility.

Here, in outline, was a description of the installation as finally built. Its test section was 11 inches square. Its pressure tank held 50 atmospheres. It never received a compressor system for continuous flow, operating throughout its life entirely as a blowdown wind tunnel. But by heating its air, it indeed operated routinely at speeds close to Mach 7.38

Taking the name of 11-Inch Hypersonic Tunnel, it operated successfully for the first time on 26 November 1947. It did not heat its compressed air directly within the pressure tank, relying instead on an electric resistance heater as a separate component. This heater raised the air to temperatures as high as 900°F, eliminating air liquefaction in the test section with enough margin for Mach 8. Specialized experiments showed clearly that condensation took place when the initial temperature was not high enough to prevent it. Small particles promoted condensation by serving as nuclei for the formation of droplets. Becker suggested that such particles could have formed through the freezing of CO2, which is naturally present in air. Subsequent research confirmed this conjecture.39


The facility showed initial early problems as well as a long-term problem. The early difficulties centered on the air heater, which showed poor internal heat conduction, requiring as much as five hours to reach a suitably uniform temperature distribution. In addition, copper tubes within the heater produced minute particles of copper oxide, due to oxidation of this metal at high temperature. These particles, blown within the hypersonic airstream, damaged test models and instruments. Becker attacked the problem of slow warmup by circulating hot air through the heater. To eliminate the problem of oxidation, he filled the heater with nitrogen while it was warming up.40

A more recalcitrant difficulty arose because the hot airflow, entering the nozzle, heated it and caused it to undergo thermal expansion. The change in its dimensions was not large, but the nozzle design was highly sensitive to small changes, with this expansion causing the dynamic pressure in the airflow to vary by up to 13 percent in the course of a run. Run times were as long as 90 seconds, and because of this, data taken at the beginning of a test did not agree with similar data recorded a minute later. Becker addressed this by fixing the angle of attack of each test model. He did not permit the angle to vary during a run, even though variation of this angle would have yielded more data. He also made measurements at a fixed time during each run.41

The wind tunnel itself represented an important object for research. No similar facility had ever been built in America, and it was necessary to learn how to use it most effectively. Nozzle design represented an early topic for experimental study. At Mach 7, according to standard tables, the nozzle had to expand by a ratio of 104.1 to 1. This nozzle resembled that of a rocket engine. With an axisymmetric design, a throat of one-inch diameter would have opened into a channel having a diameter slightly greater than 10 inches. However, nozzles for Becker’s facility proved difficult to develop.
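
The expansion ratio of 104.1 to 1 follows from the one-dimensional isentropic area-ratio relation for air. The Python fragment below is purely illustrative, assuming a ratio of specific heats γ = 1.4; it reproduces the tabulated value, and the square root converts the area ratio into the channel diameter quoted above.

```python
# Illustrative: isentropic area ratio A/A* for air (gamma = 1.4) at Mach 7.
import math

def area_ratio(mach, gamma=1.4):
    """One-dimensional isentropic area ratio A/A* at the given Mach number."""
    term = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * mach**2)
    return term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / mach

ratio = area_ratio(7.0)
print(f"A/A* at Mach 7: {ratio:.1f}")          # 104.1, matching the standard tables
print(f"1-inch throat -> exit diameter {math.sqrt(ratio):.1f} inches")   # ~10.2
```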

Conventional practice, carried over from supersonic wind tunnels, called for a two-dimensional nozzle. It featured a throat in the form of a narrow slit, having the full width of the main channel and opening onto that channel. However, for flow at Mach 7, this slit was to be only about 0.1 inch high. Hence, there was considerable interest in nozzles that might be less sensitive to small errors in fabrication.42

Initial work focused on a two-step nozzle. The first step was flat and constant in height, allowing the flow to expand to 10 inches wide in the horizontal plane and to reach Mach 4.36. The second step maintained this width while allowing the flow to expand to 10.5 inches in height, thus achieving Mach 7. But this nozzle performed poorly, with investigators describing its flow as “entirely unsatisfactory for use in a wind tunnel.” The Mach number reached 6.5, but the flow in the test section was “not sufficiently uniform for quantitative wind-tunnel test purposes.” This was due to “a thick boundary layer which developed in the first step” along the flat parallel walls set closely together at the top and bottom.43

A two-dimensional, single-step nozzle gave much better results. Its narrow slitlike throat indeed proved sensitive; this was the nozzle that gave the variation with time of the dynamic pressure. Still, except for this thermal-expansion effect, this nozzle proved “far superior in all respects” when compared with the two-step nozzle. In turn, the thermal expansion in time proved amenable to correction. This expansion occurred because the nozzle was made of steel. The commercially available alloy Invar had a far lower coefficient of thermal expansion. A new nozzle, fabricated from this material, entered service in 1954 and greatly reduced problems due to expansion of the nozzle throat.44

Another topic of research addressed the usefulness of the optical techniques used for flow visualization. The test gas, after all, was simply air. Even when it formed shock waves near a model under test, the shocks could not be seen with the unaided eye. Therefore, investigators were accustomed to using optical instruments when studying a flow. Three methods were in use: interferometry, schlieren, and shadowgraph. These respectively observed changes in air density, density gradient, and the rate of change of the gradient.

Such instruments had been in use for decades. Ernst Mach, of the eponymous Mach number, had used a shadowgraph as early as 1887 to photograph shock waves produced by a speeding bullet. Theodor Meyer, a student of Prandtl, used schlieren to visualize supersonic flow in a nozzle in 1908. Interferometry gave the most detailed photos and the most information, but an interferometer was costly and difficult to operate. Shadowgraphs gave the least information but were the least costly and easiest to use. Schlieren apparatus was intermediate in both respects and was employed often.45

Still, all these techniques depended on the flow having a minimum density. One could not visualize shock waves in a vacuum because they did not exist. Highly rarefied flows gave similar difficulties, and hypersonic flows indeed were rarefied. At Mach 7, a flow of air fell in pressure to less than one part in 4000 of its initial value, reducing an initial pressure of 40 atmospheres to less than one-hundredth of an atmosphere.46 Higher test-section pressures would have required correspondingly higher pressures in the tank and upstream of the nozzle. But low test-section pressures were desirable because they were physically realistic. They corresponded to conditions in the upper atmosphere, where hypersonic missiles were to fly.

Becker reported in 1950 that the limit of usefulness of the schlieren method “is reached at a pressure of about 1 mm of mercury for slender test models at M = 7.0.”47 This corresponded to the pressure in the atmosphere at 150,000 feet, and there was interest in reaching the equivalent of higher altitudes still. A consultant, Joseph Kaplan, recommended using nitrogen as a test gas and making use of an afterglow that persists momentarily within this gas when it has been excited by an electrical discharge. With the nitrogen literally glowing in the dark, it became much easier to see shock waves and other features of the flow field at very low pressures.

“The nitrogen afterglow appears to be usable at static pressures as low as 100 microns and perhaps lower,” Becker wrote.48 This corresponded to pressures of barely a ten-thousandth of an atmosphere, which exist near 230,000 feet. It also corresponded to the pressure in the test section of a blowdown wind tunnel with air in the tank at 50 atmospheres and the flow at Mach 13.8.49 Clearly, flow visualization would not be a problem.
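
Both figures can be checked against the isentropic total-to-static pressure ratio p0/p = (1 + ((γ − 1)/2)M²)^(γ/(γ−1)). The lines below, again purely illustrative and assuming γ = 1.4, recover the “one part in 4000” at Mach 7 and the 100-micron static pressure at Mach 13.8.

```python
# Illustrative: isentropic total-to-static pressure ratio, gamma = 1.4.
def pressure_ratio(mach, gamma=1.4):
    """p0/p for isentropic expansion to the given Mach number."""
    return (1.0 + 0.5 * (gamma - 1.0) * mach**2) ** (gamma / (gamma - 1.0))

r7 = pressure_ratio(7.0)
print(f"Mach 7: p0/p = {r7:.0f}")             # ~4,140: less than 1 part in 4000
print(f"40 atm tank -> {40.0 / r7:.4f} atm")  # just under 0.01 atm in the test section

r138 = pressure_ratio(13.8)
microns = 50.0 / r138 * 760.0e3               # 1 atm = 760,000 microns of mercury
print(f"Mach 13.8 from 50 atm -> {microns:.0f} microns of mercury")   # ~100
```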

Condensation, nozzle design, and flow visualization were important topics in their own right. Nor were they merely preliminaries. They addressed an important reason for building this tunnel: to learn how to design and use subsequent hypersonic facilities. In addition, although this 11-inch tunnel was small, there was much interest in using it for studies in hypersonic aerodynamics.

This early work had a somewhat elementary character, like the hypersonic experiments of Erdmann at Peenemunde. When university students take initial courses in aerodynamics, their textbooks and lab exercises deal with simple cases such as flow over a flat plate. The same was true of the first aerodynamic experiments with the 11-inch tunnel. The literature held a variety of theories for calculating lift, drag, and pressure distributions at hypersonic speeds. The experiments produced data that permitted comparison with theory—to check their accuracy and to determine circumstances under which they would fail to hold.

One set of tests dealt with cone-cylinder configurations at Mach 6.86. These amounted to small and simplified representations of a missile and its nose cone. The test models included cones, cylinders with flat ends, and cones with cylindrical afterbodies, studied at various angles of attack. For flow over a cone, the British researchers Geoffrey I. Taylor and J. W. Maccoll published a treatment in 1933. This quantitative discussion was a cornerstone of supersonic theory and showed its merits anew at this high Mach number. An investigation showed that it held “with a high degree of accuracy.”

The method of characteristics, devised by Prandtl and Busemann in 1929, was a standard analytical method for designing surfaces for supersonic flow, including wings and nozzles. It was simple enough to lend itself to hand computation, and it gave useful results at lower supersonic speeds. Tests in the 11-inch facility showed that it continued to give good accuracy in hypersonic flow. For flow with angle of attack, a theory put forth by Antonio Ferri, a leading Italian aerodynamicist, produced “very good results.” Still, not all preexisting theories proved to be accurate. One treatment gave good results for drag but overestimated some pressures and values of lift.50

Boundary-layer effects proved to be important, particularly in dealing with hypersonic wings. Tests examined a triangular delta wing and a square wing, the latter having several airfoil sections. Existing theories gave good results for lift and drag at modest angles of attack. However, predicted pressure distributions were often in error. This resulted from flow separation at high angles of attack—and from the presence of thick laminar boundary layers, even at zero angle of attack. These findings held high significance, for the very purpose of a hypersonic wing was to generate a pressure distribution that would produce lift, without making the vehicle unstable and prone to go out of control while in flight.

The aerodynamicist Charles McLellan, who had worked with Becker in designing the 11-inch tunnel and who had become its director, summarized the work within the Journal of the Aeronautical Sciences. He concluded that near Mach 7, the aerodynamic characteristics of wings and bodies “can be predicted by available theoretical methods with the same order of accuracy usually obtainable at lower speeds, at least for cases in which the boundary layer is laminar.”51

At hypersonic speeds, boundary layers become thick because they sustain large temperature changes between the wall and the free stream. Mitchel Bertram, a colleague of McLellan, gave an approximate theory for the laminar hypersonic boundary layer on a flat plate. Using the 11-inch tunnel, he showed good agreement between his theory and experiment in several significant cases. He noted that boundary-layer effects could increase drag coefficients at least threefold, when compared with values using theories that include only free-stream flow and ignore the boundary layer. This emphasized anew the importance of the boundary layer in producing hypersonic skin friction.52

These results were fundamental, both for aerodynamics and for wind-tunnel design. With them, the 11-inch tunnel entered into a brilliant career. It had been built as a pilot facility, to lay groundwork for a much larger hypersonic tunnel that could sustain continuous flows. This installation, the Continuous Flow Hypersonic Tunnel (CFHT), indeed was built. Entering service in 1962, it had a 31-inch test section and produced flows at Mach 10.53

Still, it took a long time for this big tunnel to come on line, and all through the 1950s the 11-inch facility continued to grow in importance. At its peak, in 1961, it conducted more than 2,500 test runs, for an average of 10 per working day. It remained in use until 1972.54 It set the pace with its use of the blowdown principle, which eliminated the need for costly continuous-flow compressors. Its run times proved to be adequate, and the CFHT found itself hard-pressed to offer much that was new. It had been built for continuous operation but found itself used in a blowdown mode most of the time. Becker wrote that his 11-inch installation “far exceeded” the CFHT “in both the importance and quality of its research output.” He described it as “the only ‘pilot tunnel’ in NACA history to become a major research facility in its own right.”55

Yet while the work of this wind tunnel was fundamental to the development of hypersonics, in 1950 the field of hypersonics was not fundamental to anything in particular. Plenty of people expected that America in time would build missiles and aircraft for flight at such speeds, but in that year no one was doing so. This soon changed, and the key year was 1954. In that year the Air Force embraced the X-15, a hypersonic airplane for which studies in the 11-inch tunnel proved to be essential. Also in that year, advances in the apparently unrelated field of nuclear weaponry brought swift and emphatic approval for the development of the ICBM. With this, hypersonics vaulted to the forefront of national priority.

On LACE and ACES

We consider the estimated LACE-ACES performance very optimistic. In several cases complete failure of the project would result from any significant performance degradation from the present estimates…. Obviously the advantages claimed for the system will not be available unless air can be condensed and purified very rapidly during flight. The figures reported indicate that about 0.8 ton of air per second would have to be processed.

In conventional, i.e., ordinary commercial equipment, this would require a distillation column having a cross section on the order of 500 square feet…. It is proposed to increase the capacity of equipment of otherwise conventional design by using centrifugal force. This may be possible, but as far as the Committee knows this has never been accomplished.

On other propulsion systems:

When reduced to a common basis and compared with the best of current technology, all assumed large advances in the state-of-the-art…. On the basis of the best of current technology, none of the schemes could deliver useful payloads into orbits.

On vehicle design:

We are gravely concerned that too much emphasis may be placed on the more glamorous aspects of the Aerospace Plane resulting in neglect of what appear to be more conventional problems. The achievement of low structural weight is equally important… as is the development of a highly successful propulsion system.

Regarding scramjets, the panel was not impressed with claims that supersonic combustion had been achieved in existing experiments:

These engine ideas are based essentially upon the feasibility of diffusion deflagration flames in supersonic flows. Research should be immediately initiated using existing facilities… to substantiate the feasibility of this type of combustion.

The panelists nevertheless gave thumbs-up to the Aerospaceplane effort as a continuing program of research. Their report urged a broadening of topics, placing greater emphasis on scramjets, structures and materials, and two-stage-to-orbit configurations. The proposed engines were “all sufficiently interesting so that research on all of them should be continued and emphasized.”65

As the studies went forward in the wake of this review, new propulsion concepts continued to flourish. Lockheed was in the forefront. This firm had initiated company-funded work during the spring of 1959 and had a well-considered single-stage concept two years later. An artist’s rendering showed nine separate rocket nozzles at its tail. The vehicle also mounted four ramjets, set in pods beneath the wings.

Convair’s Space Plane had used separated nitrogen as a propellant, heating it in the LACE precooler and allowing it to expand through a nozzle to produce thrust. Lockheed’s Aerospace Plane turned this nitrogen into an important system element, with specialized nitrogen rockets delivering 125,000 pounds of thrust. This thrust certainly did not overcome the drag produced by air collection; had it done so, the vehicle would have amounted to a perpetual motion machine. However, the nitrogen rockets made a valuable contribution.66

Lockheed’s Aerospaceplane concept. The alternate hypersonic in-flight refueling system approach called for propellant transfer at Mach 6. (Art by Dennis Jenkins)

Republic’s Aerospaceplane concept showed extensive engine-airframe integration. (Republic Aviation)

For takeoff, Lockheed expected to use Turbo-LACE. This was a LACE variant that sought again to reduce the inherently hydrogen-rich operation of the basic system. Rather than cool the air until it was liquid, Turbo-LACE chilled it deeply but allowed it to remain gaseous. Being very dense, it could pass through a turbocompressor and reach pressures in the hundreds of psi. This saved hydrogen because less was needed to accomplish this cooling. The Turbo-LACE engines were to operate at chamber pressures of 200 to 250 psi, well below the internal pressure of standard rockets but high enough to produce 300,000 pounds of thrust by using turbocompressed oxygen.67

Republic Aviation continued to emphasize the scramjet. A new configuration broke with the practice of mounting these engines within pods, as if they were turbojets. Instead, this design introduced the important topic of engine-airframe integration by setting forth a concept that amounted to a single enormous scramjet fitted with wings and a tail. A conical forward fuselage served as an inlet spike. The inlets themselves formed a ring encircling much of the vehicle. Fuel tankage filled most of its capacious internal volume.

This design study took two views regarding the potential performance of its engines. One concept avoided the use of LACE or ACES, assuming again that this craft could scram all the way to orbit. Still, it needed engines for takeoff, so turboramjets were installed, with both Pratt & Whitney and General Electric providing candidate concepts. Republic thus was optimistic at high Mach but conservative at low speed.

The other design introduced LACE and ACES both for takeoff and for final ascent to orbit and made use of yet another approach to derichening the hydrogen. This was SuperLACE, a concept from Marquardt that placed slush hydrogen rather than standard liquid hydrogen in the main tank. The slush consisted of liquid that contained a considerable amount of solidified hydrogen. It therefore stood at the freezing point of hydrogen, 14 K, which was markedly lower than the 21 K of liquid hydrogen at the boiling point.68

SuperLACE reduced its use of hydrogen by shunting part of the flow, warmed in the LACE heat exchanger, into the tank. There it mixed with the slush, chilling again to liquid while melting some of the hydrogen ice. Careful control of this flow ensured that while the slush in the tank gradually turned to liquid and rose toward the 21 K boiling point, it did not get there until the air-collection phase of a flight was finished. As an added bonus, the slush was noticeably denser than the liquid, enabling the tank to hold more fuel.69

LACE and ACES remained in the forefront, but there also was much interest in conventional rocket engines. Within the Aerospaceplane effort, this approach took the name POBATO, Propellants On Board At Takeoff. These rocket-powered vehicles gave points of comparison for the more exotic types that used LACE and scramjets, but here too people used their imaginations. Some POBATO vehicles ascended vertically in a classic liftoff, but others rode rocket sleds along a track while angling sharply upward within a cradle.70

In Denver, the Martin Company took rocket-powered craft as its own, for this firm expected that a next-generation launch vehicle of this type could be ready far sooner than one based on advanced airbreathing engines. Its concepts used vertical liftoff, while giving an opening for the ejector rocket. Martin introduced a concept of its own, RENE, the Rocket Engine Nozzle Ejector, and conducted experiments at the Arnold Engineering Development Center. These tests went forward during 1961, using a liquid rocket engine with a nozzle of 5-inch diameter set within a shroud of 17-inch width. Test conditions corresponded to flight at Mach 2 and 40,000 feet, with the shrouds or surrounding ducts having various lengths to achieve increasingly thorough mixing. The longest duct gave the best performance, increasing the rated 2,000-pound thrust of the rocket to as much as 3,100 pounds.71

A complementary effort at Marquardt sought to demonstrate the feasibility of LACE. The work started with tests of heat exchangers built by Garrett AiResearch that used liquid hydrogen as the working fluid. A company-made film showed dark liquid air coming down in a torrent, as seen through a porthole. Further tests used this liquefied air in a small thrust chamber. The arrangement made no attempt to derichen the hydrogen flow; even though it ran very fuel-rich, it delivered up to 275 pounds of thrust. As a final touch, Marquardt crafted a thrust chamber of 18-inch diameter and simulated LACE operation by feeding it with liquid air and gaseous hydrogen from tanks. It showed stable combustion, delivering thrust as high as 5,700 pounds.72

Within the Air Force, the SAB’s Ad Hoc Committee on Aerospaceplane continued to provide guidance along with encouraging words. A review of July 1962 was less skeptical in tone than the one of 18 months earlier, citing “several attractive arguments for a continuation of this program at a significant level of funding”:

It will have the military advantages that accrue from rapid response times and considerable versatility in choice of landing area. It will have many of the advantages that have been demonstrated in the X-15 program, namely, a real pay-off in rapidly developing reliability and operational pace that comes from continuous re-use of the same hardware again and again. It may turn out in the long run to have a cost effectiveness attractiveness… the cost per pound may eventually be brought to low levels. Finally, the Aerospaceplane program will develop the capability for flights in the atmosphere at hypersonic speeds, a capability that may be of future use to the Defense Department and possibly to the airlines.73

Single-stage-to-orbit (SSTO) was on the agenda, a topic that merits separate comment. The space shuttle is a stage-and-a-half system; it uses solid boosters plus a main stage, with all engines burning at liftoff. It is a measure of progress, or its lack, in astronautics that the Soviet R-7 rocket that launched the first Sputniks was also stage-and-a-half.74 The concept of SSTO has tantalized designers for decades, with these specialists being highly ingenious and ready to show a can-do spirit in the face of challenges.

This approach certainly is elegant. It also avoids the need to launch two rockets to do the work of one, and if the Earth’s gravity field resembled that of Mars, SSTO would be the obvious way to proceed. Unfortunately, the Earth’s field is considerably stronger. No SSTO has ever reached orbit, either under rocket power or by using scramjets or other airbreathers. The technical requirements have been too severe.

The SAB panel members attended three days of contractor briefings and reached a firm conclusion: “It was quite evident to the Committee from the presentation of nearly all the contractors that a single stage to orbit Aerospaceplane remains a highly speculative effort.” Reaffirming a recommendation from its 1960 review, the group urged new emphasis on two-stage designs. It recommended attention to “development of hydrogen fueled turbo ramjet power plants capable of accelerating the first stage to Mach 6.0 to 10.0…. Research directed toward the second stage which will ultimately achieve orbit should be concentrated in the fields of high pressure hydrogen rockets and supersonic burning ramjets and air collection and enrichment systems.”75

Convair, home of Space Plane, had offered single-stage configurations as early as 1960. By 1962 its managers concluded that technical requirements placed such a vehicle out of reach for at least the next 20 years. The effort shifted toward a two-stage concept that took form as the 1964 Point Design Vehicle. With a gross takeoff weight of 700,000 pounds, the baseline approach used turboramjets to reach Mach 5. It cruised at that speed while using ACES to collect liquid oxygen, then accelerated anew using ramjets and rockets. Stage separation occurred at Mach 8.6 and 176,000 feet, with the second stage reaching orbit on rocket power. The payload was 23,000 pounds with turboramjets in the first stage, increasing to 35,000 pounds with the more speculative SuperLACE.

The documentation of this 1964 Point Design, filling 16 volumes, was issued during 1963. An important advantage of the two-stage approach proved to lie in the opportunity to optimize the design of each stage for its task. The first stage was a Mach 8 aircraft that did not have to fly to orbit and that carried its heavy wings, structure, and ACES equipment only to staging velocity. The second-stage design showed strong emphasis on re-entry; it had a blunted shape along with only modest requirements for aerodynamic performance. Even so, this Point Design pushed the state of the art in materials. The first stage specified superalloys for the hot underside along with titanium for the upper surface. The second stage called for coated refractory metals on its underside, with superalloys and titanium on its upper surfaces.76

Although more attainable than its single-stage predecessors, the Point Design still relied on untested technologies such as ACES, while anticipating use in aircraft structures of exotic metals that had been studied merely as turbine blades, if indeed they had gone beyond the status of laboratory samples. The opportunity nevertheless existed for still greater conservatism in an airbreathing design, and the man who pursued it was Ernst Steinhoff. He had been present at the creation, having worked with Wernher von Braun on Germany’s wartime V-2, where he headed up the development of that missile’s guidance. After 1960 he was at the Rand Corporation, where he examined Aerospaceplane concepts and became convinced that single-stage versions would never be built. He turned to two-stage configurations and came up with an outline of a new one: ROLS, the Recoverable Orbital Launch System. During 1963 he took the post of chief scientist at Holloman Air Force Base and proceeded to direct a formal set of studies.77

The name of ROLS had been seen as early as 1959, in one of the studies that had grown out of SR-89774, but this concept was new. Steinhoff considered that the staging velocity could be as low as Mach 3. At once this raised the prospect that the first stage might take shape as a modest technical extension of the XB-70, a large bomber designed for flight at that speed, which at the time was being readied for flight test. ROLS was to carry a second stage, dropping it from the belly like a bomb, with that stage flying on to orbit. An ACES installation would provide the liquid oxidizer prior to separation, but to reach from Mach 3 to orbital speed, the second stage had to be simple indeed. Steinhoff envisioned a long vehicle resembling a torpedo, powered by hydrogen-burning rockets but lacking wings and thermal protection. It was not reusable and would not reenter, but it would be piloted. A project report stated, “Crew recovery is accomplished by means of a reentry capsule of the Gemini-Apollo class. The capsule forms the nose section of the vehicle and serves as the crew compartment for the entire vehicle.”78

ROLS appears in retrospect as a mirror image of NASA’s eventual space shuttle, which adopted a technically simple booster—a pair of large solid-propellant rockets—while packaging the main engines and most other costly systems within a fully recoverable orbiter. By contrast, ROLS used a simple second stage and a highly intricate first stage, in the form of a large delta-wing airplane that mounted eight turbojet engines. Its length of 335 feet was more than twice that of a B-52. Weighing 825,000 pounds at takeoff, ROLS was to deliver a payload of 30,000 pounds to orbit.79

Such two-stage concepts continued to emphasize ACES, while still offering a role for LACE. Experimental test and development of these concepts therefore remained on the agenda, with Marquardt pursuing further work on LACE. The earlier tests, during 1960 and 1961, had featured an off-the-shelf thrust chamber that had seen use in previous projects. The new work involved a small LACE engine, the MA117, that was designed from the start as an integrated system.

LACE had a strong suit in its potential for a very high specific impulse, Isp. This is the ratio of thrust to propellant flow rate and has dimensions of seconds. It is a key measure of performance, is equivalent to exhaust velocity, and expresses the engine’s fuel economy. Pratt & Whitney’s RL10, for instance, burned hydrogen and oxygen to give thrust of 15,000 pounds with an Isp of 433 seconds.80 LACE was an airbreather, and its Isp could be enormously higher because it took its oxidizer from the atmosphere rather than carrying it in an onboard tank. The term “propellant flow rate” referred to tanked propellants, not to oxidizer taken from the air. For LACE this meant fuel only.

The basic LACE concept produced a very fuel-rich exhaust, but approaches such as RENE and SuperLACE promised to reduce the hydrogen flow substantially. Indeed, such concepts raised the prospect that a LACE system might use an optimized mixture ratio of hydrogen and oxidizer, with this ratio being selected to give the highest Isp. The MA117 achieved this performance artificially by using a large flow of liquid hydrogen to liquefy air and a much smaller flow for the thrust chamber. Hot-fire tests took place during December 1962, and a company report stated that “the system produced 83% of the idealized theoretical air flow and 81% of the idealized thrust. These deviations are compatible with the simplifications of the idealized analysis.”81

The best performance run delivered 0.783 pounds per second of liquid air, which burned with a flow of 0.0196 pounds per second of hydrogen. Thrust was 73 pounds; Isp reached 3,717 seconds, more than eight times that of the RL10. Tests of the MA117 continued during 1963, with the best measured values of Isp topping 4,500 seconds.82
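
The arithmetic behind these figures is direct, because for LACE only the tanked hydrogen counts as propellant. A brief check, illustrative only, using the numbers just quoted:

```python
# Illustrative: specific impulse counts only tanked propellant.
def isp_seconds(thrust_lbf, tanked_flow_lb_per_s):
    """Specific impulse in seconds; for LACE, the liquefied air costs nothing from the tanks."""
    return thrust_lbf / tanked_flow_lb_per_s

# RL10: hydrogen and oxygen both come from tanks.
rl10_flow = 15000.0 / 433.0                        # total propellant flow, lb/s
print(f"RL10: {isp_seconds(15000.0, rl10_flow):.0f} seconds")

# MA117 best run: 73 pounds of thrust on 0.0196 lb/s of hydrogen.
print(f"MA117: {isp_seconds(73.0, 0.0196):.0f} seconds")   # ~3,720
```

The small difference from the reported 3,717 seconds presumably reflects rounding in the published flow rate.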

In a separate effort, the Marquardt manager Richard Knox directed the preliminary design of a much larger LACE unit, the MA116, with a planned thrust of 10,000 pounds. On paper, it achieved substantial derichening by liquefying only one-fifth of the airflow and using this liquid air in precooling, while deeply cooling the rest of the airflow without liquefaction. A turbocompressor then was to pump this chilled air into the thrust chamber. A flow of less than four pounds per second of liquid hydrogen was to serve both as fuel and as primary coolant, with the anticipated Isp exceeding 3,000 seconds.83

New work on RENE also flourished. The Air Force had a cooperative agreement with NASA’s Marshall Space Flight Center, where Fritz Pauli had developed a subscale rocket engine that burned kerosene with liquid oxygen for a thrust of 450 pounds. Twelve of these small units, mounted to form a ring, gave a basis for this new effort. The earlier work had placed the rocket motor squarely along the centerline of the duct. In the new design, the rocket units surrounded the duct, leaving it unobstructed and potentially capable of use as an ejector ramjet. The cluster was tested successfully at Marshall in September 1963 and then went to the Air Force’s AEDC. As in the RENE tests of 1961, the new configuration gave a thrust increase of as much as 52 percent.84

While work on LACE and ejector rockets went forward, ACES stood as a particularly critical action item. Operable ACES systems were essential for the practical success of LACE. Moreover, ACES had importance distinctly its own, for it could provide oxidizer to conventional hydrogen-burning rocket engines, such as those of ROLS. As noted earlier, there were two techniques for air separation: by chemical methods and through use of a rotating fractional distillation apparatus. Both approaches went forward, each with its own contractor.

In Cambridge, Massachusetts, the small firm of Dynatech took up the challenge of chemical separation, launching its effort in May 1961. Several chemical reactions appeared plausible as candidates, with barium and cobalt offering particular promise:

2BaO2 ⇄ 2BaO + O2

2Co3O4 ⇄ 6CoO + O2

The double arrows indicate reversibility. The oxidation reactions were exothermic, occurring at approximately 1,600°F for barium and 1,800°F for cobalt. The reduction reactions, which released the oxygen, were endothermic, allowing the oxides to cool as they yielded this gas.

Dynatech’s separator unit consisted of a long rotating drum with its interior divided into four zones using fixed partitions. A pebble bed of oxide-coated particles lined the drum interior; containment screens held the particles in place while allowing the drum to rotate past the partitions with minimal leakage. The zones exposed the oxide alternately to high-pressure ram air for oxidation and to low pressure for reduction. The separation was to take place in flight, at speeds of Mach 4 to Mach 5, but an inlet could slow the internal airflow to as little as 50 feet per second, increasing the residence time of air within a unit. The company proposed that an array of such separators weighing just under 10 tons could handle 2,000 pounds per second of airflow while producing liquid oxygen of 65 percent purity.85

Ten tons of equipment certainly counts within a launch vehicle, even though it included the weight of the oxygen liquefaction apparatus. Still it was vastly lighter than the alternative: the rotating distillation system. The Linde Division of Union Carbide pursued this approach. Its design called for a cylindrical tank containing the distillation apparatus, measuring nine feet long by nine feet in diameter and rotating at 570 revolutions per minute. With a weight of 9,000 pounds, it was to process 100 pounds per second of liquefied air—which made it 10 times as heavy as the Dynatech system, per pound of product. The Linde concept promised liquid oxygen of 90 percent purity, substantially better than the chemical system could offer, but the cited 9,000-pound weight left out additional weight for the LACE equipment that provided this separator with its liquefied air.86
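
The tenfold comparison is simple arithmetic on the quoted weights and flow rates; the following check is merely illustrative:

```python
# Illustrative: separator weight per pound-per-second of air processed.
dynatech = 20000.0 / 2000.0   # just under 10 tons handling 2,000 lb/s -> ~10 lb per lb/s
linde = 9000.0 / 100.0        # 9,000 lb handling 100 lb/s -> 90 lb per lb/s
print(dynatech, linde, linde / dynatech)   # 10.0, 90.0: roughly a factor of 10
```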

A study at Convair, released in October 1963, gave a clear preference to the Dynatech concept. Returning to the single-stage Space Plane of prior years, Convair engineers considered a version with a takeoff weight of 600,000 pounds, using either the chemical or the distillation ACES. The effort concluded that the Dynatech separator offered a payload to orbit of 35,800 pounds using barium and 27,800 pounds with cobalt. The Linde separator reduced this payload to 9,500 pounds. Moreover, because it had less efficiency, it demanded an additional 31,000 pounds of hydrogen fuel, along with an increase in vehicle volume of 10,000 cubic feet.87

The turn toward feasible concepts such as ROLS, along with the new emphasis on engineering design and test, promised a bright future for Aerospaceplane studies. However, a commitment to serious research and development was another matter. Advanced test facilities were critical to such an effort, but in August 1963 the Air Force canceled plans for a large Mach 14 wind tunnel at AEDC. This decision gave a clear indication of what lay ahead.88

A year earlier Aerospaceplane had received a favorable review from the SAB Ad Hoc Committee. The program nevertheless had its critics, who existed particularly within the SAB’s Aerospace Vehicles and Propulsion panels. In October 1963 they issued a report that dealt with proposed new bombers and vertical-takeoff-and-landing craft, as well as with Aerospaceplane, but their view was unmistakable on that topic:

The difficulties the Air Force has encountered over the past three years in identifying an Aerospaceplane program have sprung from the facts that the requirement for a fully recoverable space launcher is at present only vaguely defined, that today’s state-of-the-art is inadequate to support any real hardware development, and the cost of any such undertaking will be extremely large…. [T]he so-called Aerospaceplane program has had such an erratic history, has involved so many clearly infeasible factors, and has been subject to so much ridicule that from now on this name should be dropped. It is also recommended that the Air Force increase the vigilance that no new program achieves such a difficult position.89

Aerospaceplane lost still more of its rationale in December, as Defense Secretary Robert McNamara canceled Dyna-Soar. This program was building a mini-space shuttle that was to fly to orbit atop a Titan III launch vehicle. This craft was well along in development at Boeing, but program reviews within the Pentagon had failed to find a compelling purpose. McNamara thus disposed of it.90

Prior to this action, it had been possible to view Dyna-Soar as a prelude to operational vehicles of that general type, which might take shape as Aerospaceplanes. The cancellation of Dyna-Soar turned the Aerospaceplane concept into an orphan, a long-term effort with no clear relation to anything currently under way. In the wake of McNamara’s decision, Congress deleted funds for further Aerospaceplane studies, and Defense Department officials declined to press for its restoration within the FY 1964 budget, which was under consideration at that time. The Air Force carried forward with new conceptual studies of vehicles for both launch and hypersonic cruise, but these lacked the focus on advanced airbreathing propulsion that had characterized Aerospaceplane.91

There nevertheless was real merit to some of the new work, for this more realistic and conservative direction pointed out a path that led in time toward NASA’s space shuttle. The Martin Company made a particular contribution. It had designed no Aerospaceplanes; rather, using company funding, its technical staff had examined concepts called Astrorockets, with the name indicating the propulsion mode. Scramjets and LACE won little attention at Martin, but all-rocket vehicles were another matter. A concept of 1964 had a planned liftoff weight of 1,250 tons, making it intermediate in size between the Saturn I-B and Saturn V. It was a two-stage fully reusable configuration, with both stages having delta wings and flat undersides. These undersides fitted together at liftoff, belly to belly.

Martin’s Astrorocket. (U. S. Air Force)

The design concepts of that era were meant to offer glimpses of possible futures, but for this Astrorocket, the future was only seven years off. It clearly foreshadowed a class of two-stage fully reusable space shuttles, fitted with delta wings, that came to the forefront in NASA-sponsored studies of 1971. The designers at Martin were not clairvoyant; they drew on the background of Dyna-Soar and on studies at NASA-Ames of winged re-entry vehicles. Still, this concept demonstrated that some design exercises were returning to the mainstream.92

Further work on ACES also proceeded, amid unfortunate results at Dynatech. That company’s chemical separation processes had depended for success on having a very large area of reacting surface within the pebble-bed air separators. This appeared achievable through such means as using finely divided oxide powders or porous particles impregnated with oxide. But the research of several years showed that the oxide tended to sinter at high temperatures, markedly diminishing the reacting surface area. This did not make chemical separation impossible, but it sharply increased the size and weight of the equipment, which robbed this approach of its initially strong advantage over the Linde distillation system. This led to abandonment of Dynatech’s approach.93

Linde’s system was heavy and drastically less elegant than Dynatech’s alternative, but it amounted largely to a new exercise in mechanical engineering and went forward to successful completion. A prototype operated in test during 1966, and while limits to the company’s installed power capacity prevented the device from processing the rated flow of 100 pounds of air per second, it handled 77 pounds per second, yielding a product stream of oxygen that was up to 94 percent pure. Studies of lighter-weight designs also proceeded. In 1969 Linde proposed to build a distillation air separator, rated again at 100 pounds per second, weighing 4,360 pounds. This was only half the weight allowance of the earlier configuration.94

In the end, though, Aerospaceplane failed to identify new propulsion concepts that held promise and that could be marked for mainstream development. The program’s initial burst of enthusiasm had drawn on a view that the means were in hand, or soon would be, to leap beyond the liquid-fuel rocket as the standard launch vehicle and to pursue access to orbit using methods that were far more advanced. The advent of the turbojet, which had swiftly eclipsed the piston engine, was on everyone’s mind. Yet for all the ingenuity behind the new engine concepts, they failed to deliver. What was worse, serious technical review gave no reason to believe that they could deliver.

In time it would become clear that hypersonics faced a technical wall. Only limited gains were achievable in airbreathing propulsion, with single-stage-to-orbit remaining out of reach and no easy way at hand to break through to the really advanced performance for which people hoped.

Propulsion

In the spring of 1992 the NASP Joint Program Office presented a final engine design called the E22A. It had a length of 60 feet and included an inlet ramp, cowled inlet, combustor, and nozzle. An isolator, located between the inlet and combustor, sought to prevent unstarts by processing flow from the inlet through a series of oblique shocks, enabling it to withstand the increased backpressure from the combustor.

Program officials then constructed two accurately scaled test models. The Subscale Parametric Engine (SXPE) was built to one-eighth scale and had a length of eight feet. It was tested from April 1993 to March 1994. The Concept Demonstrator Engine (CDE), which followed, was built to a scale of 30 percent. Its length topped 16 feet, and it was described as “the largest airframe-integrated scramjet engine ever tested.”26

In working with the SXPE, researchers had an important goal in achieving combustion of hydrogen within its limited length. To promote rapid ignition, the engine used a continuous flow of a silane-hydrogen mixture as a pilot, with the silane igniting spontaneously on exposure to air. In addition, to promote mixing, the model incorporated an accurate replication of the spacing between the fuel-injecting struts and ramps, with this spacing being preserved at the model’s one-eighth scale. The combustor length required to achieve the desired level of mixing then scaled in this fashion as well.

The larger CDE was tested within the Eight-Foot High-Temperature Tunnel, which was Langley’s biggest hypersonic facility. The tests mapped the flowfield entering the engine, determined the performance of the inlet, and explored the potential performance of the design. Investigators varied the fuel flow rate, using the combustors to vary its distribution within the engine.

Boundary-layer effects are important in scramjets, and the tests might have replicated the boundary layers of a full-scale engine by operating at correspondingly higher flow densities. For the CDE, at 30 percent scale, the appropriate density would have been 1/0.3 or 3.3 times the atmospheric density at flight altitude. For the SXPE, at one-eighth scale, the test density would have shown an eightfold increase over atmospheric. However, the SXPE used an arc-heated test facility that was limited in the power that drove its arc, and it provided its engine with air at only one-fiftieth of that density. The High Temperature Tunnel faced limits on its flow rate and delivered its test gas at only one-sixth of the appropriate density.

Engineers sought to compensate by using analytical methods to determine the drag in a full-scale engine. Still, this inability to replicate boundary-layer effects meant that the wind-tunnel tests gave poor simulations of internal drag within the test engines. This could have led to erroneous estimates of true thrust, net of drag. In turn, this showed that even when working with large test models and with test facilities of impressive size, true simulations of the boundary layer were ruled out from the start.27
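
The densities quoted above follow from Reynolds-number matching. With Re = ρVL/μ, and with velocity and temperature held at flight values, shrinking the engine length by a given scale factor requires the test density to rise by the inverse of that factor. A minimal sketch of this simplified scaling argument:

```python
# Illustrative: Reynolds-number matching for subscale engine tests.
def density_factor(scale):
    """With Re = rho*V*L/mu and V, T fixed at flight values, reducing the
    length L by 'scale' requires the density rho to rise by 1/scale."""
    return 1.0 / scale

print(f"CDE at 30 percent scale: {density_factor(0.30):.1f}x flight density")    # 3.3x
print(f"SXPE at one-eighth scale: {density_factor(0.125):.0f}x flight density")  # 8x
```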

For takeoff from a runway, the X-30 was to use a Low-Speed System (LSS). It comprised two principal elements: the Special System, an ejector ramjet; and the Low Speed Oxidizer System, which used LACE.28 The two were highly synergistic. The ejector used a rocket, which might have been suitable for the final ascent to orbit, with ejector action increasing its thrust during takeoff and acceleration. By giving an exhaust velocity that was closer to the vehicle velocity, the ejector also increased the fuel economy.

The LACE faced the standard problem of requiring far more hydrogen than could be burned in the air it liquefied. The ejector accomplished some derichening by providing a substantial flow of entrained air that burned some of the excess. Additional hydrogen, warmed in the LACE heat exchanger, went into the fuel tanks, which were full of slush hydrogen. By melting the slush into conventional liquid hydrogen (LH2), some LACE coolant was recycled to stretch the vehicle’s fuel supply.29

There was good news in at least one area of LACE research: deicing. LACE systems have long been notorious for their tendency to clog with frozen moisture within the air that they liquefy. “The largest LACE ever built made around half a pound per second of liquid air,” Paul Czysz of McDonnell Douglas stated in 1986. “It froze up at six percent relative humidity in the Arizona desert, in 38 seconds.” Investigators went on to invent more than a dozen methods for water alleviation. The most feasible approach called for injecting antifreeze into the system, to enable the moisture to condense out as liquid water without freezing. A rotary separator eliminated the water, with the dehumidified air being so cold as to contain very little residual water vapor.30

The NASP program was not run by shrinking violets, and its managers stated that its LACE was not merely to operate during hot days in the desert near Phoenix. It was to function even on rainy days, for the X-30 was to be capable of flight from anywhere in the world. At NASA-Lewis, James Van Fossen built a water-alleviation system that used ethylene glycol as the antifreeze, spraying it directly onto the cold tubes of a heat exchanger. Water, condensing on those tubes, dissolved some of the glycol and remained liquid as it swept downstream with the flow. He reported that this arrangement protected the system against freezing at temperatures as low as -55°F, with the moisture content of the chilled air being reduced to 0.00018 pounds in each pound of this air. This represented removal of at least 99 percent of the humidity initially present in the airflow.31

Pratt & Whitney conducted tests of a LACE precooler that used this arrangement. A company propulsion manager, Walt Lambdin, addressed a NASP technical review meeting in 1991 and reported that it completely eliminated problems of reduced performance of the precooler due to formation of ice. With this, the problem of ice in a LACE system appeared amenable to control.32

It was also possible to gain insight into the LACE state of the art by considering contemporary work that was under way in Japan. The point of departure in that country was the H-2 launch vehicle, which first flew to orbit in February 1994. It was a two-stage expendable rocket, with a liquid-fueled core flanked by two solid boosters. LACE was pertinent because a long-range plan called for upgrades that could replace the solid strap-ons with new versions using LACE engines.33

Mitsubishi Heavy Industries was developing the H-2’s second-stage engine, designated LE-5. It burned hydrogen and oxygen to produce 22,000 pounds of thrust. As an initial step toward LACE, this company built heat exchangers to liquefy air for this engine. In tests conducted during 1987 and 1988, the Mitsubishi heat exchanger demonstrated liquefaction of more than three pounds of air for every pound of LH2. This was close to four to one, the theoretical limit based on the thermal properties of LH2 and of air. Still, it takes 34.6 pounds of air to burn a pound of hydrogen, and an all-LACE LE-5 was to run so fuel-rich that its thrust was to be only 6,000 pounds.
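
A short check using only the figures just quoted shows why an all-LACE LE-5 had to run so fuel-rich; the Python lines are illustrative:

```python
# Illustrative: why an all-LACE LE-5 ran extremely fuel-rich.
liquefied_air_per_lb_h2 = 4.0    # near the theoretical liquefaction limit
stoich_air_per_lb_h2 = 34.6      # air required to burn one pound of hydrogen

burnable_fraction = liquefied_air_per_lb_h2 / stoich_air_per_lb_h2
print(f"Hydrogen that finds air to burn: {burnable_fraction:.0%}")   # about 12%
```

With nearly nine-tenths of the hydrogen passing through unburned, such an engine could extract only a fraction of its fuel’s energy, which helps explain the drop in thrust from 22,000 to 6,000 pounds.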

But the Mitsubishi group found their own path to prevention of ice buildup. They used a freeze-thaw process, melting accumulated ice by periodically switching the cooler from LH2 to ambient air after its tubes had become clogged. The design also provided spaces between the tubes and allowed a high-speed airflow to blow ice from them.34

LACE nevertheless remained controversial, and even with the moisture problem solved, there remained the problem of weight. Czysz noted that an engine with 100,000 pounds of thrust would need 600 pounds per second of liquid air: “The largest liquid-air plant in the world today is the AiResearch plant in Los Angeles, at 150 pounds per second. It covers seven acres. It contains 288,000 tubes welded to headers and 59 miles of 3/32-inch tubing.”35

Still, no law required the use of so much tubing, and advocates of LACE have long been inventive. A 1963 Marquardt concept called for an engine with 10,000 pounds of thrust, which might have been further increased by using an ejector. This appeared feasible because LACE used LH2 as the refrigerant. This gave far greater effectiveness than the AiResearch plant, which produced its refrigerant on the spot by chilling air through successive stages.36

For LACE heat exchangers, thin-walled tubing was essential. The Japanese model, which was sized to accommodate the liquid-hydrogen flow rate of the LE-5, used 5,400 tubes and weighed 304 pounds, which is certainly noticeable when the engine is to put out no more than 6,000 pounds of thrust. During the mid-1960s investigators at Marquardt and AiResearch fabricated tubes with wall thicknesses as low as 0.001 inch, or one mil. Such tubes had not been used in any heat exchanger subassemblies, but 2-mil tubes of stainless steel had been crafted into a heat exchanger core module with a length of 18 inches.37

Even so, this remained beyond the state of the art for NASP, a quarter-century later. Weight estimates for the X-30 LACE heat exchanger were based on the assumed use of 3-mil Weldalite tubing, but a 1992 Lockheed review stated, “At present, only small quantities of suitable, leak free, 3-mil tubing have been fabricated.” The plans of that year called for construction of test prototypes using 6-mil Weldalite tubing, for which “suppliers have been able to provide significant quantities.” Still, a doubled thickness of the tubing wall was not the way to achieve low weight.38

Other weight problems arose in seeking to apply an ingenious technique for derichening the product stream by increasing the heat capacity of the LH2 coolant. Molecular hydrogen, H2, has two atoms in its molecule and exists in two forms: para and ortho, which differ in the relative orientation of their nuclear spins. The ortho form has parallel spin vectors, while the para form has spin vectors that are oppositely aligned. The ortho molecule amounts to a higher-energy form and loses energy as heat when it transforms into the para state. The reaction therefore is exothermic.

The two forms exist in different equilibrium concentrations, depending on the temperature of the bulk hydrogen. At room temperature the gas is about 25 percent para and 75 percent ortho. When liquefied, the equilibrium state is 100 percent para. Hence it is not feasible to prepare LH2 simply by liquefying the room-temperature gas. The large component of ortho will relax to para over several hours, producing heat and causing the liquid to boil away. The gas thus must be exposed to a catalyst to convert it to the para form before it is liquefied.

These aspects of fundamental chemistry also open the door to a molecular shift that is endothermic and that absorbs heat. One achieves this again by using a catalyst, this time to convert the LH2 from para to ortho. This reaction requires heat, which is obtained from the liquefying airflow within the LACE. As a consequence, the air chills more readily when using a given flow of hydrogen refrigerant. This effect is sufficiently strong to increase the heat-sink capacity of the hydrogen by as much as 25 percent.39

This concept also dates to the 1960s. Experiments showed that ruthenium metal deposited on aluminum oxide provided a suitable catalyst. For 90 percent para-to-ortho conversion, the LACE required a “beta,” a ratio of mass to flow rate, of five to seven pounds of this material for each pound per second of hydrogen flow. Data published in 1988 showed that a beta of five pounds could achieve 85 percent conversion, with this value showing improvement during 1992. However, X-30 weight estimates assumed a beta of two pounds, and this performance remained out of reach.40
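
Beta translates directly into catalyst weight once a hydrogen flow rate is specified. In the sketch below the flow rate is a hypothetical figure chosen purely for illustration, not a number from the NASP documents:

```python
# Illustrative: catalyst weight implied by a given beta.
def catalyst_weight_lb(beta, h2_flow_lb_per_s):
    """Beta is pounds of catalyst per pound-per-second of hydrogen flow."""
    return beta * h2_flow_lb_per_s

h2_flow = 100.0   # hypothetical hydrogen flow, lb/s (assumed for illustration)
print(catalyst_weight_lb(5.0, h2_flow))   # demonstrated beta of 5: 500 lb of catalyst
print(catalyst_weight_lb(2.0, h2_flow))   # beta of 2 assumed in X-30 estimates: 200 lb
```

At any substantial flow rate, the gap between the demonstrated beta and the assumed one thus translated into hundreds of pounds of unbudgeted weight.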

During takeoff, the X-30 was to be capable of operating from existing runways and of becoming airborne at speeds similar to those of existing aircraft. The low-speed system, along with its accompanying LACE and ejector systems, therefore needed substantial levels of thrust. The ejector, again, called for a rocket exhaust to serve as a primary flow within a duct, entraining an airstream as the secondary flow. Ejectors gave good performance across a broad range of flight speeds, showing an effectiveness that increased with Mach. In the SR-71 at Mach 2.2, they accounted for 14 percent of the thrust in afterburner; at Mach 3.2 this was 28.4 percent. Nor did the SR-71 ejectors burn fuel. They functioned entirely as aerodynamic devices.41

It was easy to argue during the 1980s that their usefulness might be increased still further. The most important unclassified data had been published during the 1950s. A good engine needed a high pressure increase, but during the mid-1960s studies at Marquardt recommended a pressure rise by a factor of only about 1.5, when turbojets were showing increases that were an order of magnitude higher.42 The best theoretical treatment of ejector action dated to 1974. Its author, NASA’s B. H. Anderson, also wrote a computer program called REJECT that predicted the performance of supersonic ejectors. However, he had done this work long before the tools of CFD were in hand. A 1989 review noted that since then “little attention has been directed toward a better understanding of the details of the flow mechanism and behavior.”43

Within the NASP program, then, the ejector ramjet stood as a classic example of a problem that was well suited to new research. Ejectors were known to have good effectiveness, which might be increased still further, and they stood as a good topic for current research techniques. CFD offered an obvious approach, and NASP activities supplemented computational work with an extensive program of experiment.44

The effort began at GASL, where Tony duPont’s ejector ramjet went on a static test stand during 1985 and impressed General Skantze. DuPont’s engine design soon took the title of the Government Baseline Engine and remained a topic of active experimentation during 1986 and 1987. Some work went forward at NASA-Langley, where the Combustion Heated Scramjet Test Facility exercised ejectors over the range of Mach 1.2 to 3.5. NASA-Lewis hosted further tests, at Mach 0.06 and from Mach 2 to 3.5, within its 10- by 10-foot Supersonic Wind Tunnel.

The Lewis engine was built to accommodate growth of boundary layers and placed a 17-degree wedge ramp upstream of the inlet. Three flowpaths were mounted side by side, but only the center duct was fueled; the others were “dummies” that gave data on unfueled operation for comparison. The primary flow had a pressure of 1,000 pounds per square inch and a temperature of 1,340°F, which simulated a fuel-rich rocket exhaust. The experiments studied the impact of fuel-to-air ratio on performance, although the emphasis was on development of controls.

Even so, the performance left much to be desired. Values of fuel-to-air ratio greater than 0.52, with unity representing complete combustion, at times brought “buzz” or unwanted vibration of the inlet structure. Even with no primary flow, the inlet failed to start. The main burner never achieved thermal choking, where the flow rate would rise to the maximum permitted by heat from burning fuel. Ingestion of the boundary layer significantly degraded engine performance. Thrust measurements were described as “no good” due to nonuniform thermal expansion across a break between zones of measurement. As a contrast to this litany of woe, operation of the primary gave a welcome improvement in the isolation of the inlet from the combustor.

Also at GASL, again during 1987, an ejector from Boeing underwent static test. It used a markedly different configuration that featured an axisymmetric duct and a fuel-air mixer. The primary flow was fuel-rich, with temperatures and pressures similar to those of NASA-Lewis. On the whole, the results of the Boeing tests were encouraging. Combustion efficiencies appeared to exceed 95 percent, while measured values of thrust, entrained airflow, and pressures were consistent with company predictions. However, the mixer performance was no more than marginal, and a longer mixer promised improvement.45

In 1989 Pratt & Whitney emerged as a major player, beginning with a subscale ejector that used a flow of helium as the primary. It underwent tests at company facilities within the United Technologies Research Center. These tests addressed the basic issue of attempting to increase the entrainment of secondary flow, for which non-combustible helium was useful. Then, between 1990 and 1992, Pratt built three versions of its Low Speed Component Integration Rig (LSCIR), all of which were tested at Marquardt facilities.

LSCIR-1 used a design that incorporated a half-scale X-30 flowpath, with an inlet, front and main combustors, and a nozzle; the inlet cowl featured fixed geometry. The tests operated using ambient as well as heated air, with and without fuel in the main combustor, and the engine ran as a pure ramjet for several runs. Thermal choking was achieved, with measured combustion efficiencies lying within 2 percent of values suitable for the X-30. But the inlet was unstarted for nearly all the runs, which showed that it needed variable geometry. This refinement was added to LSCIR-2, which was put through its paces in July 1991 at Mach 2.7. The test sequence would have lasted longer but was terminated prematurely by a burnthrough of the front combustor, which had been operating at 1,740°F. Thrust measurements showed only limited accuracy due to flow separation in the nozzle.

LSCIR-3 followed within months. The front combustor was rebuilt with a larger throat area to accommodate increased flow and received a new ignition system that used silane, a gas that ignites spontaneously on contact with air. In tests, leaks developed between the main combustor, which was actively cooled, and the uncooled nozzle; a redesigned seal eliminated the leakage. The work also validated a method for calculating heat flux to the wall due to impingement of flow from the primaries.

Other results were less successful. Ignition proceeded well enough using pure silane, but a mix of silane and hydrogen failed as an ignitant. Problems continued to recur due to inlet unstarts and nozzle flow separation. The system produced 10,000 pounds of thrust at Mach 0.8 and 47,000 pounds at Mach 2.7, but this performance still was rated as low.

Within the overall LSS program, a Modified Government Baseline Engine went under test at NASA-Lewis during 1990, at Mach 3.5. The system now included hydraulically operated cowl and nozzle flaps that provided variable geometry, along with an isolator whose flow channels amounted to a bypass around the combustor. This helped to prevent inlet unstarts.

Once more the emphasis was on development of controls, with many tests operating the system as a pure ramjet. Only limited data were taken with the primaries on. Ingestion of the boundary layer gave significant degradation in engine performance, but in other respects most of the work went well. The ramjet operations were successful. The use of variable geometry provided reliable starting of the inlet, while operation in the ejector mode, with primaries on, again improved the inlet isolation by diminishing the effect of disturbances propagating upstream from the combustor.46

Despite these achievements, a 1993 review at Rocketdyne gave a blunt conclusion: “The demonstrated performance of the X-30 special system is lower than the performance level used in the cycle deck…the performance shortfall is primarily associated with restrictions on the amount of secondary flow.” (The secondary flow is the airstream entrained by the ejector’s primary.) The experimental program had taught much concerning the prevention of inlet unstarts and the enhancement of inlet-combustor isolation, but the main goal—enhanced performance of the ejector ramjet—still lay out of reach.

Simple enlargement of a basic design offered little promise; Pratt & Whitney had tried that in LSCIR-3 and had found that it brought inlet flow separation along with reduced inlet efficiency. Then in March 1993, further work on the LSS was canceled due to budget cuts. NASP program managers took the view that they could accelerate an X-30 using rockets for takeoff, as an interim measure, with the LSS to be added at a later date. Thus, although the LSS was initially the critical item in duPont’s design, in time it was put on hold, deferred for another day.47