
Good Stewards: NASA’s Role in Alternative Energy

Bruce I. Larrimer

Consistent with its responsibilities to exploit aeronautics technology for the benefit of the American people, NASA has pioneered the development and application of alternative energy sources. Its work is arguably most evident in wind energy and solar power for high-altitude remotely piloted vehicles. Here, NASA's work in aerodynamics, solar power, lightweight structural design, and electronic flight controls has proven crucial to the evolution of novel aerospace craft.

THIS CASE STUDY REVIEWS two separate National Aeronautics and Space Administration (NASA) programs that each involved research and development (R&D) in the use of alternative energy. The first part of the case study covers NASA's participation in the Federal Wind Energy Program from 1974 through 1988. NASA's work in the wind energy area included design and fabrication of large horizontal-axis wind turbine (HAWT) generators and the conduct of supporting research and technology projects. The second part of the case study reviews NASA's development and testing of high-altitude, long-endurance solar-powered unmanned aerial vehicles (UAVs). This program, which ran from 1994 through 2003, was part of the Agency's Environmental Research Aircraft and Sensor Technology (ERAST) Program.

Solar Cells and Fuel Cells for Solar-Powered ERAST Vehicles

NASA had first acquired solar cells from Spectrolab but chose cells from SunPower Corporation of Sunnyvale, CA, for the ERAST UAVs. These photovoltaic cells converted sunlight directly into electricity and were lighter and more efficient than other commercially available solar cells at that time. Indeed, after NASA flew Helios, SunPower was selected to furnish high-efficiency solar concentrator cells for a NASA Dryden ground solar cell test installation, spring-boarding, as John Del Frate recalled subsequently, "from the technology developed on the PF+ and Helios solar cells."[1546] The Dryden solar cell configuration consisted of two fixed-angle solar arrays and one sun-tracking array that together generated up to 5 kilowatts of direct current. Field-testing at the Dryden site helped SunPower lower production costs of its solar cells and identify uses and performance of its cells that enabled the company to develop large-scale commercial applications, resulting in the mass-produced SunPower A-300 series solar cells.[1547] SunPower's solar cells were selected for use on the Pathfinders, Centurion, and Helios Prototype UAVs because of their high-efficiency power recovery (more than 50-percent higher than other commercially available cells) and because of their light weight. The solar cells designed for the last generation of ERAST UAVs could convert about 19 percent of the solar energy received into electricity, yielding up to 35 kilowatts of electrical power at high noon on a summer day. The solar cells on the ERAST vehicles, specially developed for use on the aircraft, were bifacial, meaning that they could absorb sunlight on both sides of the cells, thus enabling the UAVs to catch sunrays reflected upward when flying above cloud cover.
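As a rough cross-check of the figures above, the stated 19-percent conversion efficiency and 35-kilowatt peak output imply the scale of the active cell area involved. The sketch below assumes a peak solar flux of roughly 1 kW/m² at the array (an assumption for illustration, not a figure from the program records):

```python
# Back-of-envelope check of the ERAST solar array figures quoted above.
# The 19% efficiency and 35 kW output are from the text; the insolation
# value is an assumed round number for illustration only.

PEAK_INSOLATION_KW_PER_M2 = 1.0   # assumed solar flux at the array
CELL_EFFICIENCY = 0.19            # stated conversion efficiency
PEAK_OUTPUT_KW = 35.0             # stated peak electrical output

# Implied active cell area needed to produce 35 kW at 19% efficiency:
implied_area_m2 = PEAK_OUTPUT_KW / (CELL_EFFICIENCY * PEAK_INSOLATION_KW_PER_M2)
print(f"Implied active cell area: {implied_area_m2:.0f} m^2")  # ~184 m^2
```

Under these assumptions the arithmetic suggests an active area on the order of 180-190 square meters, consistent with a flying wing whose entire upper surface doubles as a solar collector.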

While solar cell technology satisfied the propulsion problem during daylight hours, a critical problem relating to long-endurance backup systems remained to be solved for flying during periods of darkness. Without solving this problem, solar UAV flight would be limited to approximately 14 hours in the summer (much less, of course, in the dark of winter), plus whatever additional time could be provided by the limited backup batteries (up to 5 hours for the Pathfinder). Although significant improvements had been made, batteries failed to satisfy both the weight limitation and the long-duration power generation requirements for the solar-powered UAVs.

Подпись: 13As an alternative to batteries, the ERAST alliance tested a number of different fuel cells and fuel cell power systems. An initial problem to overcome was how to develop lightweight fuel cells because only 440 pounds of Helios’s takeoff weight of 1,600 pounds were originally planned to be allocated to a backup fuel cell power system. Helios required approximately 120 kilowatthours of energy to power the craft for up to 12 hours of flight during darkness, and, fortunately, the state of fuel cell technology had advanced far enough to permit attaining this; ear­lier efforts back to the early 1980s had been frustrated because fuel cell technology was not sufficiently developed at that time. The NASA – industry team later determined, as part of the ERAST program, that a hydrogen-oxygen regenerative fuel cell system (RFCS or regen system) was the hoped for solution to the problem, and substantial resources were committed to the project.
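The stated figures make the sizing problem concrete: 120 kilowatt-hours spread over 12 hours of darkness implies a sustained average output, and the 440-pound allocation is a sizable fraction of takeoff weight. A quick check of that arithmetic:

```python
# Average power the backup system had to deliver, from the figures above.
energy_required_kwh = 120.0   # stated energy for overnight flight
darkness_hours = 12.0         # stated maximum duration of darkness
avg_power_kw = energy_required_kwh / darkness_hours
print(f"Average overnight power: {avg_power_kw:.1f} kW")  # 10.0 kW

# Fraction of takeoff weight originally budgeted to the fuel cell system:
fuel_cell_weight_lb = 440.0
takeoff_weight_lb = 1600.0
weight_fraction = fuel_cell_weight_lb / takeoff_weight_lb
print(f"Weight fraction: {weight_fraction:.1%}")  # 27.5%
```

In other words, the backup system had to deliver a steady 10 kilowatts all night while claiming barely more than a quarter of the vehicle's takeoff weight.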

RFCSs are closed systems whereby some of the electrical power produced by the UAV's solar array during daylight hours is sent to an electrolyzer that takes onboard water and dissociates the water into hydrogen gas and oxygen gas, both of which are stored in tanks aboard the vehicle. During periods of darkness, the stored gases are recombined in the fuel cell, which results in the production of electrical power and water. The power is used to maintain systems and altitude. The water is then stored for reuse the following day. This cycle theoretically would repeat on a 24-hour basis for an indefinite time period. NASA and AeroVironment also considered, but did not use, a reversible regen system that, instead of having a separate electrolyzer and fuel cell, used only a reversible fuel cell to do the work of both components.[1548]
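The daily cycle described above can be sketched as a simple energy balance. The component efficiencies below are illustrative assumptions only (the source gives no such figures); they show why a regen system must bank a daytime surplus considerably larger than the energy it returns at night:

```python
# Illustrative energy balance for one regenerative fuel cell (RFCS) cycle.
# Both efficiencies are assumed values for illustration, not ERAST data.

ELECTROLYZER_EFF = 0.80   # assumed: fraction of input energy stored as H2/O2
FUEL_CELL_EFF = 0.55      # assumed: fraction of stored energy recovered

def overnight_energy_out(daytime_surplus_kwh: float) -> float:
    """Energy available at night after a full charge/discharge cycle."""
    stored_kwh = daytime_surplus_kwh * ELECTROLYZER_EFF   # electrolysis losses
    return stored_kwh * FUEL_CELL_EFF                     # fuel cell losses

round_trip = ELECTROLYZER_EFF * FUEL_CELL_EFF
# To deliver the ~120 kWh Helios needed overnight, the array would have to
# bank a daytime surplus of roughly:
needed_surplus_kwh = 120.0 / round_trip
print(f"Round-trip efficiency: {round_trip:.0%}")          # 44%
print(f"Daytime surplus required: {needed_surplus_kwh:.0f} kWh")  # ~273 kWh
```

Under these assumed efficiencies, every kilowatt-hour delivered at night costs more than two kilowatt-hours of daytime surplus, which is one reason the solar array and the storage system had to be sized together.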

As originally planned, Helios was to carry two separate regen fuel cell systems contained in two of four landing gear pods. This not only dispersed the weight over the flying wing, but also was in keeping with the plan for redundant systems. If one of the two fuel cells failed, Helios could still stay aloft for several days, albeit at a lower altitude. Contracts to make the fuel cell and electrolyzer were given to two companies—Giner of Waltham, MA, and Lynntech, Inc., of College Station, TX. Each of the two systems was planned to weigh 200 pounds, including 27 pounds for the fuel cell, 18 pounds for the electrolyzer, 40 pounds for oxygen and hydrogen tanks, and 45 pounds for water. The remaining 70 pounds consisted of plumbing, controls, and ancillary equipment.[1549]
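The stated weight budget can be tallied directly; the component figures sum exactly to the 200-pound per-system allocation:

```python
# Sanity check of the per-system weight budget quoted above (all figures
# from the text; the 40 lb of margin is simple arithmetic, not a source figure).
budget_lb = {
    "fuel cell": 27,
    "electrolyzer": 18,
    "oxygen and hydrogen tanks": 40,
    "water": 45,
    "plumbing, controls, ancillary": 70,
}
per_system_lb = sum(budget_lb.values())
print(per_system_lb)  # 200, matching the stated per-system allocation

# Two such systems account for 400 lb of the 440 lb originally allocated,
# leaving 40 lb of margin.
print(440 - 2 * per_system_lb)  # 40
```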

While the NASA-AeroVironment team made a substantial investment in the RFCS and successfully demonstrated a nearly closed system in ground tests, it decided that the system was not yet ready to satisfy the planned flight schedule. Because of these technical difficulties and time and budget deadlines, NASA and AeroVironment agreed in 2001 to switch to a consumable hydrogen-air primary fuel cell system for the Helios Prototype's long-endurance ERAST mission. Fuel cells of this type were already in development for the automotive industry. The hydrogen-air fuel cell system required Helios to carry its own supply of hydrogen. In periods of darkness, power for the UAV would be produced by combining gaseous hydrogen and air from the atmosphere in a fuel cell. Because of the low air density at high altitudes, a compressor needed to be added to the system. This system would operate only until the hydrogen fuel was consumed, but the team thought that it could still provide multiple days of operation and that an advanced version might be able to stay aloft for up to 14 days. The installation plan was likewise changed: the fuel cell was now placed in one pod, with the hydrogen tanks attached to the lower surface of the wing near each wingtip. This modification, of course, dramatically changed Helios's structural loadings, transforming it from a span-loaded flying wing to a point-loaded vehicle.[1550]

Swing Wing: The Path to Variable Geometry

The notion of variable wing-sweeping dates to the earliest days of aviation and, in many respects, represents an expression of the "bird imitative" philosophy of flight that gave the ornithopter and other flexible wing concepts to aviation. Varying the sweep of a wing was first conceptualized as a means of adjusting longitudinal trim. Subsequently,


A time-lapse photograph of the Bell X-5, showing the range of its wing sweep. Note how the wing roots translated fore and aft to accommodate changes in center of lift with varying sweep angles. NASA.

variable-geometry advocates postulated possible use of asymmetric sweeping as a means of roll control. Alexander Lippisch, pioneer of tailless and delta design, likewise filed a patent in 1942 for a scheme of wing sweeping, but it was another German, Waldemar Voigt (the chief of advanced design for the Messerschmitt firm), who triggered the path to modern variable wing-sweeping. Ironically, at the time he did so, he had no plan to make use of such a scheme himself. Rather, he designed a graceful midwing turbojet swept wing fighter, the P 1101. The German air ministry rejected its development based upon assessments of its likely utility. Voigt decided to continue its development, planning to use the airplane as an in-house swept wing research aircraft, fitted with wings of varying sweep and ballasted to accommodate changes in center of lift.[110]

By war's end, when the Oberammergau plant was overrun by American forces, the P 1101 was over 80-percent complete. A technical team led by Robert J. Woods, a member of the NACA Aerodynamics Committee, moved in to assess the plant and its projects. Woods immediately recognized the value of the P 1101 program, but with a twist: he proposed to Voigt that the plane be finished with a wing that could be variably swept in flight, rather than with multiple wings that could be installed and removed on the ground. Woods's advocacy, and the results of NACA variable-sweep tests by Charles Donlan of a modified XS-1 model in the Langley 7-foot by 10-foot wind tunnel, convinced the NACA to support development of such an aircraft. In May 1949, the Air Force Air Materiel Command issued a contract covering development of two Bell variable-sweep airplanes, to be designated X-5. They were effectively American-built versions of the P 1101, but with American, not German, propulsion, larger cockpit canopies for greater pilot visibility, and, of course, variable-sweep wings that could range from 20 to 60 degrees.[111]

The first X-5 flew in June 1951 and within 5 weeks had demonstrated variable in-flight wing sweep to its maximum 60-degree aft position. Slightly over a year later, Grumman flew a prototype variable wing-sweep naval fighter, the XF10F-1 Jaguar. Neither aircraft represented a mature application of variable-sweep design. The mechanism in each was heavy and complex and shifted the wing roots back and forth down the centerline of the aircraft to accommodate center of lift changes as the wing was swept and unswept. Each of the two had poor flying qualities unrelated to the variable-sweep concept, reflecting badly on their design. The XF10F-1 was merely unpleasant (its test pilot, the colorful Corwin "Corky" Meyer, tellingly recollected later, "I had never attended a test pilots' school, but, for me, the F10F provided the complete curriculum"), but the X-5 was lethal.[112] It had a vicious pitch-up at higher sweep angles, and its aerodynamic design ensured that it would have very great difficulty recovering when it departed into a spin. The combination of the two led to the death of Air Force test pilot Raymond Popson in the crash of the second X-5 in 1953. More fortunately, NACA pilots completed 133 research flights in the first X-5 before retiring it in 1955.

The X-5 experience demonstrated that variable geometry worked, and the potential of combining good low-speed performance with high-speed supersonic dash intrigued military authorities looking at future interceptor and long-range strike aircraft concepts. Coincidentally, in the late 1950s, Langley developed increasingly close ties with the British aeronautical community, largely a result of the personal influence of John Stack of Langley Research Center, who, in characteristic fashion, used his forceful personality to secure a strong transatlantic partnership. This partnership, best known for its influence upon Anglo-American V/STOL research leading to the Harrier strike fighter, influenced as well the course of variable-geometry research. Barnes Wallis of Vickers had conceptualized a sharply swept variable-geometry tailless design, the Swallow, but was not satisfied with the degree of support he was receiving for the idea within British aeronautical and governmental circles. Accordingly, he turned to the United States. Over November 13-18, 1958, Stack sponsored an Anglo-American meeting at Langley to craft a joint research program, in which Wallis and his senior staff briefed the Swallow design.[113] As revealed by subsequent Langley tunnel tests over the next 6 months, Wallis's Swallow had many stability and control deficiencies but one significant attribute: its outboard wing-pivot design. Unlike the X-5, the Jaguar, and other early symmetrical-sweep v-g concepts, the wing did not adjust for changing center of lift position by translating fore and aft along the fuselage centerline using a track-type approach and a single pivot point. Rather, slightly outboard of the fuselage centerline, each wing panel had its own independent pivot point.
This permitted elimination of the complex track and allowed use of a sharply swept forebody to address at least some of the changes in center-of-lift location as the wings moved aft and forward. The remainder could be accommodated by control surface deflection and shifting fuel. Studies in Langley's 7-foot by 10-foot tunnel led to refinement of the outboard pivot concept and, eventually, a patent to William J. Alford and E. C. Polhamus for the concept, awarded in September 1962. Wallis's inspiration, joined with insightful research by Alford and Polhamus and followed by adaptation of a conventional "tailed" configuration (a critical necessity in the pre-fly-by-wire computer-controlled era), made variable wing sweep a practical reality.[114] (Understandably, after returning to Britain, Wallis had mixed feelings about the NASA involvement. On one hand, he had sought it after what he perceived as a "go slow" approach to his idea in Britain. On the other, following enunciation of outboard wing sweep, he believed—as his biographer subsequently wrote—"The Americans stole his ideas.")[115]

Thus, by the early 1960s, multiple developments—swept wings, high-performance afterburning turbofans, area ruling, the outboard wing pivot, the low horizontal tail, advanced stability augmentation systems, to select just a few—made possible the design of variable-geometry combat aircraft. The first of these was the General Dynamics Tactical Fighter Experimental (TFX), which became the F-111. It was a troubled program, though, like most of the Century series that had preceded it (the F-102 in particular), this had essentially nothing to do with the adaptation of a variably swept wing. Instead, a poorly written specification emphasizing joint service over practical, attainable military utility resulted in development of a compromised design. The result was a decade of lost fighter time for the U.S. Navy, which never did receive the aircraft it sought, and a constrained Air Force program that resulted in the eventual development of a satisfactory strike aircraft—the F-111F—but years late and at tremendous cost. Throughout the evolution of the F-111, NASA research proved of crucial importance to saving the program. NASA Langley, Ames, and Lewis researchers invested over 30,000 hours of wind tunnel test time in the F-111 (over 22,000 at Langley alone), addressing various shortcomings in its design, including excessive drag, lack of transonic and supersonic maneuverability, deficient directional stability, and inlet distortion that plagued its engine performance. As a result, the Air Force F-111 became a reliable weapon system, as evidenced by its performance in Desert Storm, where it flew long-range strike missions, performed electronic jamming, and proved the war's single most successful "tank plinker," on occasion destroying upward of 150 tanks per night and 1,500 over the length of the 43-day conflict.[116]

From the experience gained with the F-111 program sprang the Grumman F-14 Tomcat naval fighter and the Rockwell B-1 bomber, both of which experienced fewer development problems, benefitting greatly from NASA tunnel and other analytical research.[117] Emulating American variable-geometry development, Britain, France, and the Soviet Union undertook their own development efforts, spawning the experimental Dassault Mirage G (test-flown, though never placed in service), the multipartner NATO Tornado interceptor and strike fighter program, and a range of Soviet fighter and bomber aircraft, including the MiG-23/27 Flogger, the Sukhoi Su-17/22 Fitter, the Su-24 Fencer, the Tupolev Tu-22M Backfire, and the Tu-160 Blackjack.[118]

Variable geometry has had a mixed history since. In the heyday of the space program, many proposals existed for tailored lifting body shapes deploying "switchblade" wings, and the variable-sweep wing was a prominent feature of the Boeing SST concept before its subsequent rejection. The tailored aerodynamics and power available with modern aircraft have rendered variable-geometry approaches less attractive than they once were, particularly because, no matter how well thought out, they invariably involve greater cost, weight, and structural complexity.

The Grumman F-14A Tomcat naval fighter marked the maturation of the variable wing-sweep concept. This one was assigned to Dryden for high angle of attack and departure flight-testing. NASA.

In 1945-1946, John Campbell and Hubert Drake undertook tests in the Langley Free Flight Tunnel of a simple model with a single pivot, so that its wing could be skewed over a range of sweep angles. This concept, which German aerodynamicists had proposed earlier in the Second World War, demonstrated "that an airplane wing can be skewed as a unit to angles as great as 40° without encountering serious stability and control difficulties."[119] The scheme, the simplest of all variable-geometry approaches, returned to the fore in the late 1970s, thanks to the work of Robert T. Jones, who adopted and expanded upon it to generate the so-called "oblique wing" design concept. Jones conceptualized the oblique wing as a means of producing a transonic transport that would have minimal drag and a minimal sonic boom; he even foresaw possible twin-fuselage transports with a skewed wing shifting their relative position back and forth. Tests with a subscale turbojet demonstrator, the AD-1 (for Ames-Dryden), at the Dryden Flight Research Center confirmed what Campbell and Drake had discovered
nearly four decades previously, namely that at moderate sweep angles the oblique wing possessed few vices. But at higher sweep angles near 60 degrees, its deficits became more pronounced, calling into question whether its promise could ever actually be achieved.[120] On the whole, the variable-geometry wing has not enjoyed the kind of widespread success that its adherents hoped. While it may be expected that, from time to time, variable-sweep aircraft will be designed and flown for particular purposes, overall the fixed conventional planform, outfitted with all manner of flaps and slats and blowing, sucking, and perhaps even warping technology, continues to prevail.

NASA 1990-2007: Coping with Institutional and Resource Challenges

Over the next decade and a half, the NASA rotary wing program's available organizational and financial resources were significantly impacted by NASA and supporting Agency organizational, mission, and budget management decisions. These decisions were driven by changes in program priorities in the face of severe budget pressures and reorganization mandates seeking to improve operational efficiency. NASA leaders were being tasked with more ambitious space missions and with recovering from two Shuttle losses. In the face of these challenges, the rotary wing program, among others, was adjusted in the effort to continue to make notable research contributions. Examples of the array of real impacts on the rotary wing program over this period were: (1) termination of the NASA-DARPA RSRA-X-Wing program; (2) stopping the NASA-Army flight operations of the only XV-15 TRRA aircraft and the two RSRA vehicles; (3) transfer of all active NASA research aircraft to Dryden Flight Research Center, which essentially closed NASA rotary wing flight operations; (4) elimination of vehicle program offices at NASA Headquarters; (5) closing the National Full-Scale Aerodynamic Complex wind tunnel at Ames in 2003 (reopened under a lease to the United States Air Force in 2007); (6) converting to full-cost accounting, which represented a new burden on vehicle research funding allocations; and (7) the imposition of a steady and severe decline in aeronautics budget requests, starting in the late 1990s. Overshadowing this retrenching activity in the 1990s was the total reorientation, and hence complete transformation, of the Ames Research Center from an Aeronautics Research Mission Center to a Science Mission Center with the new lead in information technology (IT).[313] Responsibility for Ames's aerodynamics and wind tunnel management was assigned to Langley Research Center. The persistent turbulence in the NASA rotary wing research community presented a growing challenge to the ability to generate research contributions. Here is where the established partnership with the United States Army and co-located laboratories at Ames, Langley, and Glenn Research Centers made it possible to maximize effectiveness by strengthening the combined efforts. In the case of Ames, this was done by creating a new combined Army-NASA Rotorcraft Division. The center of gravity of NASA rotary wing research thus gradually shifted to the Army.

The decision to ground and place in storage the only remaining XV-15 TRRA in 1994 was fortunately turned from a real setback into an unplanned contribution. Bell Helicopter, having lost the other XV-15, N702NA, in an accident in 1992, requested bailment of the Ames aircraft, N703NA, in 1994 to continue its own tilt rotor research, demonstrations, and applications evaluations in support of the ongoing (and troubled) V-22 Osprey program. NASA and Army management agreed. As part of the extended use, on April 21, 1995, the XV-15 became the first tilt rotor to land at the world's first operational civil vertiport at the Dallas Convention Center Heliport/Vertiport. After its long and successful operation and its retirement in 2003, this aircraft went on permanent display at the Smithsonian Institution's Udvar-Hazy Center at Washington Dulles International Airport, Chantilly, VA.

With the military application of proven tilt rotor technology well underway with the procurement of the V-22 Osprey by the Marine Corps and Air Force, the potential for parallel application of tilt rotor technology to civil transportation was also addressed by NASA. Early studies, funded by the FAA and NASA, indicated that the concept had potential for worldwide application and could be economically viable.[314] In late 1992, Congress directed the Secretary of Transportation to establish a Civil Tilt Rotor Development Advisory Committee (CTRDAC) to examine the technical, operational, and economic issues associated with integrating the civil tilt rotor (CTR) into the Nation's transportation system. The Committee was also charged with determining the required additional research and development, the regulatory changes required, and the estimated cost of the aircraft and related infrastructure development. In 1995, the Committee issued its findings. The CTR was determined to be technically feasible and could be developed by United States industry. It appeared that the CTR could be economically viable in heavily traveled corridors. Additional research and development and infrastructure planning were needed before industry could make a production decision. In response to these findings, elements of work suggested by the CTRDAC were included in the NASA rotorcraft program plans.

Significant advances in several technological areas would be required to enable the tilt rotor concept to be introduced into the transportation system. In 1994, researchers at Ames, Langley, and Glenn Research Centers launched the Advanced Tiltrotor Transport Technology (ATTT) program to develop the new technologies. Because of existing funding limitations, initial research activity was focused on the primary concerns of noise and safety. The noise research activity included the development of refined acoustic analyses, the acquisition of wind tunnel prop-rotor noise data to validate the analytical method, and flight tests to determine the effect of different landing approach profiles on terminal area and community noise. The safety effort was related to the need to execute approaches and departures at confined urban vertiports. For these situations, the capability to operate safely with one engine inoperative (OEI) in adverse weather conditions was required. This area was addressed by conducting engine design studies to enable generating high levels of emergency power in OEI situations without adversely impacting weight, reliability, maintenance, or normal fuel economy. Additional operational safety investigations were carried out on the Ames Vertical Motion Simulator to assess crew station issues, control law variations, and advanced configurations such as the variable diameter tilt rotor. The principal American rotary wing airframe and engine manufacturers participated in the noise and safety investigations, which assured that proper attention was given to the practical application of the new technology.[315] An initial step in civil tilt rotor aircraft development was taken by Bell Helicopter in September 1998, by teaming with Agusta Helicopter Company of Italy to design, manufacture, and certify a commercial version of the XV-15 aircraft design, designated the BA 609.

Despite the institutional and resource turbulence overshadowing rotary wing activity, the NASA and Army researchers persisted in conducting base research. They continued to make contributions to advance the state of rotary wing technology applicable to civil and military needs, a typical example being the analysis of the influence of the vortex ring state (VRS) in rapid, steep descents, brought to the forefront by initial operating problems experienced by the V-22 Osprey.[316] The current NASA Technical Report Server (NTRS) Web site has posted over 2,200 NASA rotary wing technical reports. Of these, approximately 800 entries have been posted since 1991—the peak year, with 143 entries. These postings facilitate public access to the formal documentation of NASA contributions to rotary wing technology. The annual postings gradually declined after 1991. In what may be a mirror image of the state of NASA's realigned rotary wing program, since 2001 the annual totals of posted rotary wing reports have been in the range of 20 to 40, with an increasing percentage reflecting contributions by Army coauthors.

As the Army and NASA rotary wing research became increasingly linked in mutually supporting roles at the co-located centers, outsourcing, cooperation, and partnerships with industry and academia also grew. In 1995, the Army and NASA agreed to form the National Rotorcraft Technology Center (NRTC), occupying a dedicated facility at Ames Research Center. This jointly funded and managed organization was created to provide central coordination of the rotary wing research activities of the Government, academia, and industry. Government participation included the Army, NASA, the Navy, and the FAA. The academic laboratories' participation was accomplished by NRTC having acquired the responsibility to manage the Rotorcraft Centers of Excellence (RCOE) program that had been in existence since 1982 under the Army Research Office. In 1996, the periodic national competition resulted in establishing Georgia Institute of Technology, the University of Maryland at College Park, and Pennsylvania State University as the three RCOE sites.

The Rotorcraft Industry Technology Association (RITA), Inc., was also established in 1996. Principal members of RITA included the United States helicopter manufacturers Bell Helicopter Textron, the Boeing Company, Sikorsky Aircraft Corporation, and Kaman Aerospace Corporation. Supporting members included rotorcraft subsystem manufacturers and other industry entities. Associate members included a growing number of American universities and nonprofit organizations. RITA was governed by a Board of Directors supported by a Technical Advisory Committee that guided and coordinated the performance of the research projects. This industry-led organization and NRTC signed a unique agreement to be partners in rotary wing research. The Government would share the cost of annual research projects proposed by RITA and approved by NRTC evaluation teams. NASA and the Army each contributed funds for 25 percent of the cost of each project—together they matched the industry-member share of 50 percent. Over the first 5 years of the Government-industry agreement, the total annual investment averaged $20 million. The RITA projects favored mid- and near-term research efforts that complemented the mid- and long-term research missions of the Army and NASA. Originally, there was concern that the research staffs of industry competitors would be reluctant to share project proposal information and pool results under the RITA banner. This concern quickly turned out to be unfounded as the research teams embarked on work addressing common technical problems faced by all participants.
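The cost-sharing formula described above is easy to verify against the averaged $20 million annual total:

```python
# Cost-sharing arithmetic for the NRTC-RITA agreement, using the shares
# and the averaged $20 million annual total stated in the text.
annual_total_m = 20.0                   # averaged annual investment, $M
nasa_share = 0.25 * annual_total_m      # NASA: 25 percent
army_share = 0.25 * annual_total_m      # Army: 25 percent
industry_share = 0.50 * annual_total_m  # industry members: 50 percent

# The Government contribution exactly matches the industry share:
assert nasa_share + army_share == industry_share
print(nasa_share, army_share, industry_share)  # 5.0 5.0 10.0
```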

NRTC was not immune to the challenges posed by limited NASA budgets, which eventually caused some cutbacks in NRTC support of RITA and the RCOE program. In 2005, the name of the RITA enterprise was changed to the Center for Rotorcraft Innovation (CRI), and the principal office was relocated from Connecticut to the Philadelphia area.[317] Accomplishments posted by RITA-CRI include cost-effective integrated helicopter design tools and improved design and manufacturing practices for increased damage tolerance. In the area of rotorcraft operations, accomplishments included incorporating developments in synthetic vision and cognitive decision-making systems to enhance the routine performance of critical piloting tasks and enabling changes in the air traffic management system that will help rotorcraft become a more significant participant in the civil transportation system. The American Helicopter Society International recognized RITA for one of its principal areas of research effort by awarding the Health and Usage Monitoring Project Team the AHS 1998 Grover E. Bell Award for "fostering and encouraging research and experimentation in the important field of helicopters."

As previously noted, in the mid-1990s, NASA Ames's entire aircraft fleet was transferred some 300 miles south to Dryden Flight Research Center at Edwards Air Force Base, CA. This inventory included a number of NASA rotary wing research aircraft that had been actively engaged since the 1970s.[318] However, the U.S. Army Aeroflightdynamics Directorate, co-located at Ames since 1970, chose to retain its research aircraft. In 1997, after several years of negotiation, NASA Headquarters signed a directive that Ames would continue to support the Army's rotorcraft airworthiness research using three military helicopters outfitted for special flight research investigations. The AH-1 Cobra had been configured as the Flying Laboratory for Integrated Test and Evaluation (FLITE). One UH-60 Blackhawk was configured as the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL); it remained the focus for advanced controls and was utilized by the NASA-Army Rotorcraft Division to develop programmable, fly-by-wire controls for nap-of-the-Earth maneuvering studies. This aircraft was also used for investigating noise-abatement segmented approaches using local differential Global Positioning System (GPS) guidance. The third aircraft, another UH-60 Blackhawk, had been extensively instrumented for the conduct of the UH-60 Airloads Program. The principal focus of the program was the acquisition of detailed rotor-blade pressure distributions in a wide array of flight conditions to improve and validate advanced analytical methodology. The last NACA-NASA rotor airloads flight program of this nature had been conducted over three decades earlier, before the advent of the modern digital data acquisition and processing revolution.[319] Again, the persistence of the NASA-Army researchers met the institutional and resource challenges and pressed on with fundamental research to advance rotary wing technology.

On December 20, 2006, the White House issued Executive Order 13419 establishing the first National Aeronautics Research and Development Policy. The Executive order was accompanied by a policy statement prepared by the National Science and Technology Council’s Committee on Technology. This 13-page document included recommendations to clarify, focus, and coordinate Federal Government aeronautics R&D activities. Of particular note for NASA’s rotary wing community was Section V of the policy statement: "Stable and Long-Term Foundational Research Guidelines.” The roles and responsibilities of the executive departments and agencies were addressed, noting that several executive organizations should take responsibility for specific parts of the national foundational (i. e., fundamental) aeronautical research program. Specifically, "NASA should maintain a broad foundational research effort aimed at preserving the intellectual stewardship and mastery of aeronautics core competencies.” In addition, "NASA should conduct research in key areas related to the development of advanced aircraft technologies and systems that support DOD, FAA, the Joint Planning and Development Office (JPDO) and other executive departments and agencies.[320] NASA may also conduct such research to benefit the broad aeronautics community in its pursuit of advanced aircraft technologies and systems. . . . ” In supporting research benefiting the broad aeronautics community, care is to be taken "to ensure that the government is not stepping beyond its legitimate purpose by competing with or unfairly subsidizing commercial ventures.” There is a strong implication that the new policy may return NASA’s aeronautics role to the more modest, but successful, ways of NASA’s predecessor, the National Advisory Committee for Aeronautics: a primary focus on fundamental research, conducted with the participation of academia, together with cooperative research support for the systems technology and experimental aircraft program investments of the DOD, the FAA, and industry. In the case of rotary wing research, NASA management decisions since the 1990s had already moved the residual effort in this direction under the pressure of limited resources.

As charged, 1 year after the Executive order and policy statement were issued, the National Science and Technology Council issued the "National Plan for Aeronautics Research and Development and Related Infrastructure.” Rotary wing R&D is specifically identified as being among the aviation elements vital to national security and homeland defense, with a goal of "Developing improved lift, range, and mission capability for rotorcraft.” Future NASA rotary wing foundational research may also contribute to other goals and objectives of the plan. For example, under Energy Efficiency and Environmental Protection fall Goal 2, to advance development of technologies and operations that enable significant increases in the energy efficiency of the aviation system, and Goal 3, to advance development of technologies and operational procedures that decrease the significant environmental impacts of the aviation system.

Perhaps the most important long-term challenge for the rotary wing segment of aviation is the need for focused attention on improved safety. In this regard, Goal 2 under the plan section titled "Aviation Safety is Paramount” appears to embrace the rotary wing need in calling for developing technologies to reduce accidents and incidents through enhanced aerospace vehicle operations on the ground and in the air. The opportunity for making significant contributions in this arena may exist through enhanced teaming of NASA and the rotary wing community under the International Helicopter Safety Team (IHST).[321] The ambitious goal of the IHST is to reduce helicopter accident rates by 80 percent in 10 years. The participating members of the organization include technical societies, helicopter and engine manufacturers, commercial operator and public service organizations, the FAA, and NASA. Past performance suggests that the timely application of NASA rotary wing fundamental research expertise and unique facilities to this international endeavor would spawn significant contributions and accomplishments.

Transitioning from the Supersonic to the Hypersonic: X-7 to X-15

During the 1950s and early 1960s, aviation advanced from flight at high altitude and Mach 1 to flight in orbit at Mach 25. Within the atmosphere, a number of these advances stemmed from the use of the ramjet, at a time when turbojets could barely pass Mach 1 but ramjets could aim at Mach 3 and above. Ramjets needed an auxiliary rocket stage as a booster, a complication that brought their general demise after high-performance afterburning turbojets succeeded in catching up. But in the heady days of the 1950s, the ramjet stood on the threshold of becoming a mainstream engine, and many plans and proposals existed to take advantage of its power for a variety of aircraft and missile applications.

The burgeoning ramjet industry included Marquardt and Wright Aeronautical, though other firms such as Bendix developed them as well. There were also numerous hardware projects. One was the Air Force-Lockheed X-7, an air-launched high-speed propulsion, aerodynamic, and structures testbed. Two were surface-to-air ramjet-powered missiles: the Navy’s ship-based Mach 2.5+ Talos and the Air Force’s Mach 3+ Bomarc. Both went on to years of service, with the Talos flying "in anger” as a MiG-killer and antiradiation SAM-killer in Vietnam. The Air Force also was developing a 6,300-mile-range Mach 3+ cruise missile—the North American SM-64 Navaho—and a Mach 3+ interceptor fighter—the Republic XF-103. Neither entered the operational inventory. The Air Force canceled the troublesome Navaho in July 1957, weeks after the first flight of its rival, Atlas, but some flight hardware remained, and Navaho flew in test as far as 1,237 miles, though this was a rare success. The XF-103 was to fly at Mach 3.7 using a combined turbojet-ramjet engine. It was to be built largely of titanium, at a time when this metal was little understood; it thus lived for 6 years without approaching flight test. Still, its engine was built and underwent test in December 1956.[564]

The steel-structured X-7 proved surprisingly and consistently productive. The initial concept of the X-7 dated to December 1946 and constituted a three-stage vehicle: a B-29 (later a B-50) served as a "first stage” launch aircraft, while a solid rocket booster functioned as a "second stage,” accelerating the vehicle to Mach 2, at which point the ramjet took over. First flying in April 1951, the X-7 family completed 100 missions between 1955 and program termination in 1960. After achieving its Mach 3 design goal, the program kept going. In August 1957, an X-7 reached Mach 3.95 with a 28-inch diameter Marquardt ramjet. The following April, the X-7 attained Mach 4.31—2,881 mph—with a more-powerful 36-inch Marquardt ramjet. This established an air-breathing propulsion record that remains unsurpassed for a conventional subsonic combustion ramjet.[565]

At the same time that the X-7 was edging toward the hypersonic frontier, the NACA, Air Force, Navy, and North American Aviation had a far more ambitious project underway: the hypersonic X-15. This was Round Two, following the earlier Round One research airplanes that had taken flight faster than sound. The concept of the X-15 was first proposed by Robert Woods, a cofounder and chief engineer of Bell Aircraft (manufacturer of the X-1 and X-2), at three successive meetings of the NACA’s influential Committee on Aerodynamics between October 1951 and June 1952. It was a time when speed was king, when ambitious technology-pushing projects were flying off the drawing board. These included the Navaho, X-2, and XF-103, and the first supersonic operational fighters—the Century series of the F-100, F-101, F-102, F-104, and F-105.[566]

Some contemplated even faster speeds. Walter Dornberger, former commander of the Nazi research center at Peenemunde turned senior Bell Aircraft Corporation executive, was advocating BoMi, a proposed skip-gliding "Bomber-Missile” intended for Mach 12. Dornberger supported Woods in his recommendations, which were adopted by the NACA’s Executive Committee in July 1952. This gave them the status of policy, while the Air Force added its own support. This was significant because its budget was 300 times larger than that of the NACA.[567] The NACA alone lacked funds to build the X-15, but the Air Force could do this easily. It also covered the program’s massive cost overruns. These took the airframe from $38.7 million to $74.5 million and the large engine from $10 million to $68.4 million, which was nearly as much as the airframe.[568]

The Air Force had its own test equipment at its Arnold Engineering Development Center (AEDC) at Tullahoma, TN, an outgrowth of the Theodore von Karman technical intelligence mission that Army Air Forces Gen. Henry H. "Hap” Arnold had sent into Germany at the end of the Second World War. The AEDC, with brand-new ground test and research facilities, took care to complement, not duplicate, the NACA’s research facilities. It specialized in air-breathing and rocket-engine testing. Its largest installation accommodated full-size engines and provided continuous flow at Mach 4.75. But the X-15 was to fly well above this, to over Mach 6, highlighting the national shortfall in hypersonic test capabilities existing at the time of its creation.[569]

While the Air Force had the deep pockets, the NACA—specifically Langley—conducted the research that furnished the basis for a design. This took the form of a 1954 feasibility study conducted by John Becker, assisted by structures expert Norris Dow, rocket expert Maxime Faget, configuration and controls specialist Thomas Toll, and test pilot James Whitten. They began by considering a reentry in which the vehicle pointed its nose in the direction of flight. This proved impossible, for the heating was too high. Becker then considered whether the vehicle might alleviate the problem by using lift, obtained by raising the nose, and found that the thermal environment became far more manageable. He concluded that the craft should enter with its nose high, presenting its flat undersurface to the atmosphere. The Allen-Eggers blunt-body paper was already in print, and Becker later wrote: "it was obvious to us that what we were seeing here was a new manifestation of H. J. Allen’s ‘blunt-body’ principle.”[570]

To address the rigors of the daunting aerothermodynamic environment, Norris Dow selected Inconel X (a nickel alloy from International Nickel) as the temperature-resistant superalloy that was to serve for the aircraft structure. Dow began by ignoring heating and calculated the skin gauges needed only from considerations of strength and stiffness. Then he determined the thicknesses needed to serve as a heat sink. He found that the thicknesses that would suffice for the latter were nearly the same as those that would serve merely for structural strength. This meant that he could design his airplane and include heat sink as a bonus, with little or no additional weight. Inconel X was a wise choice; with a density of 0.30 pounds per cubic inch, a tensile strength of over 200,000 pounds per square inch (psi), and yield strength of 160,000 psi, it was robust, and its melting temperature of over 2,500 °F ensured that the rigors of the anticipated 1,200 °F surface temperatures would not weaken it.[571]
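Dow’s heat-sink logic can be suggested with back-of-the-envelope arithmetic. In the sketch below, only the 0.30-pound-per-cubic-inch density comes from the text; the specific heat, skin gauge, and absorbed heat load are assumed, illustrative values, not X-15 design data.

```python
# Back-of-the-envelope heat-sink check for an Inconel X skin panel.
# Only the density (0.30 lb/in^3) comes from the text; every other
# number is an assumed, illustrative value, not X-15 design data.
rho = 0.30      # density of Inconel X, lb/in^3 (from the text)
c_p = 0.11      # specific heat, Btu/(lb*degF) -- assumed
t_skin = 0.06   # skin gauge sized for strength alone, inches -- assumed
q_abs = 2.0     # heat absorbed per unit area during the pulse, Btu/in^2 -- assumed

# Temperature rise if the strength-sized skin must also soak up the heat:
delta_T = q_abs / (rho * c_p * t_skin)
peak_T = 70.0 + delta_T   # starting from a ~70 degF airframe
print(round(peak_T))      # prints 1080
```

With these assumed numbers, the gauge chosen for strength alone already keeps the panel below the 1,200 °F surface temperature the text cites, which is the essence of Dow’s "heat sink as a bonus” finding.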

Work at Langley also addressed the important issue of stability. Just then, in 1954, this topic was in the forefront because it had nearly cost the life of the test pilot Chuck Yeager. On the previous December 12, he had flown the X-1A to Mach 2.44 (approximately 1,650 mph). This exceeded the plane’s stability limits; it went out of control and plunged out of the sky. Only Yeager’s skill as a pilot had saved him and his airplane. The problem of stability would be far more severe at higher speeds.[572]

Analysis, confirmed by experiments in the 11-inch wind tunnel, had shown that most of the stability imparted by an aircraft’s tail surfaces was produced by its wedge-shaped forward portion. The aft portion contributed little because it experienced lower air pressure. Charles McLellan, another Langley aerodynamicist, now proposed to address the problem of hypersonic stability by using tail surfaces that would be wedge-shaped along their entire length. Subsequent tests in the 11-inch tunnel, as mentioned previously, confirmed that this solution worked. As a consequence, the tail surfaces shrank from being almost as large as the wings to a more nearly conventional size.[573]

A schematic drawing of the X-15’s internal layout, showing the liquid oxygen (oxidizer) tank, hydrogen peroxide and helium tanks, attitude rockets, and ejection seat. NASA.

This study made it possible to proceed toward program approval and the award of contracts both for the X-15 airframe and its powerplant, a 57,000-pound-thrust rocket engine burning a mix of liquid oxygen and anhydrous ammonia. But while the X-15 promised to advance the research airplane concept to over Mach 6, it demanded something more than the conventional aluminum and stainless steel structures of earlier craft such as the X-1 and X-2. Titanium was only beginning to enter use, primarily for reducing heating effects around jet engine exhausts and afterburners. Magnesium, which Douglas favored for its own high-speed designs, was flammable and lost strength at temperatures higher than 600 °F. Inconel X was heat-resistant, reasonably well known, and relatively easily worked. Accordingly, it was swiftly selected as the structural material of choice when Becker’s Langley team assessed the possibility of designing and fabricating a rocket-boosted, air-launched hypersonic research airplane. The Becker study, completed in April 1954, chose Mach 6 as the goal and proposed to fly to altitudes as great as 350,000 feet. Both marks proved remarkably prescient: the X-15 eventually flew to 354,200 feet in 1963 and Mach 6.70 in 1967. That altitude was above 100 kilometers and well above the sensible atmosphere. Hence, at that early date, more than 3 years before Sputnik, Becker and his colleagues already were contemplating piloted flight into space.[574]

The X-15: Pioneering Piloted Hypersonics

North American Aviation won the contract to build the X-15. It first flew under power in September 1959, by which time an Atlas had hurled an RVX-2 nose cone to its fullest range. However, as a hypersonic experiment, the X-15 was a complete airplane. It thus was far more complex than a simple reentry body, and it took several years of cautious flight-testing before it reached its peak speed of above Mach 6, and its peak altitude as well.

The North American X-15 at NASA’s Flight Research Center (now the Dryden Flight Research Center) in 1961. NASA.

Testing began with two so-called "Little Engines,” a pair of vintage Reaction Motors XLR11s that had earlier served in the X-1 series and the Douglas D-558-2 Skyrocket. Using these, the X-15 topped the records of the earlier X-2, reaching Mach 3.50 and 136,500 feet. Starting in 1961, using the "Big Engine”—the Thiokol XLR99 with its 57,000 pounds of thrust—the X-15 flew to its Mach 6 design speed and 50+ mile design altitude, with test pilot Maj. Robert White reaching Mach 6.04 and NASA pilot Joseph Walker an altitude of 354,200 feet. After a landing accident, the second X-15 was modified with external tanks and an ablative coating, with Air Force Maj. William "Pete” Knight subsequently flying this variant to Mach 6.70 (4,520 mph) in 1967. However, it sustained severe thermal damage, partly as a result of inadequate understanding of the interactions of impinging hypersonic shock-on-shock flows. It never flew again.[575]

The X-15’s cautious buildup proved a wise approach, for this gave leeway when problems arose. Unexpected thermal expansion leading to localized buckling and deformation showed up during early high-Mach flights. The skin behind the wing leading edge exhibited localized buckling after the first flight to Mach 5.3, but modifications to the wings eliminated hot spots and prevented subsequent problems, enabling the airplane to reach beyond Mach 6. In addition, a flight to Mach 6.04 caused a windshield to crack because of thermal expansion. This forced redesign of its frame to incorporate titanium, which has a much lower coefficient of expansion. The problem—a rare case in which Inconel caused rather than resolved a heating problem—was fixed by this simple substitution.[576]

Altitude flights brought their own problems, involving potentially dangerous auxiliary power unit (APU) failures. These issues arose in 1962 as flights began to reach well above 100,000 feet; the APUs began to experience gear failure after lubricating oil foamed and lost its lubricating properties. A different oil had much less tendency to foam; it now became standard. Designers also enclosed the APU gearbox within a pressurized enclosure. The gear failures ceased.[577]

The X-15 substantially expanded the use of flight simulators. Simulators had been in use since the famed Link Trainer of the Second World War, and analog computers had been used in flight simulation since 1949; with the X-15, however, they took on a new role, supporting the development of control systems and flight equipment. Still, in 1955, when the X-15 program began, it was not at all customary to use flight simulators to support aircraft design and development. But program managers turned to them because they offered effective means to study new issues in cockpit displays, control systems, and aircraft handling qualities. A 1956 paper stated that simulation had "heretofore been considered somewhat of a luxury for high-speed aircraft,” but now "has been demonstrated as almost a necessity,” in all three axes, "to insure [sic] consistent and successful entries into the atmosphere.” Indeed, pilots spent much more time practicing in simulators than they did in actual flight, as much as an hour per minute of actual flying time.[578]

The most important flight simulator was built by North American. Originally located in Los Angeles, it was moved to NASA’s Flight Research Center in 1961 by the Center’s Director, Paul Bikle. It replicated the X-15 cockpit and included actual hydraulic and control-system hardware. Three analog computers implemented the equations of motion governing translation and rotation of the X-15 about all three axes, transforming pilot inputs into instrument displays.[579]
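The simulator’s essential job—integrating rigid-body equations of motion from pilot inputs through to displayed attitudes—can be suggested in miniature. The single-axis sketch below uses arbitrary illustrative coefficients (not X-15 aerodynamic data), with a crude digital Euler integration standing in for the analog computers’ continuous solution.

```python
import math

def simulate_pitch(elevator_deg, q_bar, steps=200, dt=0.01):
    """Toy single-axis (pitch) rigid-body integration of the kind the
    simulator's analog computers solved continuously in all three axes.
    All coefficients are assumed, illustrative values, not X-15 data."""
    I_yy = 80000.0     # pitch moment of inertia, slug-ft^2 (assumed)
    M_delta = 4.0      # pitching moment per deg elevator per psf (assumed)
    M_damp = -2000.0   # pitch-rate damping, ft-lb per rad/s (assumed)
    theta, q_rate = 0.0, 0.0   # pitch attitude (rad), pitch rate (rad/s)
    for _ in range(steps):
        moment = M_delta * elevator_deg * q_bar + M_damp * q_rate
        q_rate += (moment / I_yy) * dt   # angular acceleration -> rate
        theta += q_rate * dt             # rate -> displayed attitude
    return math.degrees(theta)
```

The real installation solved coupled equations in all three axes with actual hydraulic hardware in the loop; this fragment only illustrates the input-to-display chain for one axis.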

The North American simulator became critical in training X-15 pilots as they prepared to execute specific planned flights. A particular mission might take little more than 10 minutes, from ignition of the main engine to touchdown on the lakebed, but a test pilot could easily spend 10 hours making practice runs in this facility. Training began with repeated trials of the normal flight profile with the pilot in the simulator cockpit and a ground controller close at hand. The pilot was welcome to recommend changes, which often went into the flight plan. Next came rehearsals of off-design missions: too much thrust from the main engine, too high a pitch angle when leaving the stratosphere.

Much time was spent practicing for emergencies. The X-15 had an inertial reference unit that used analog circuitry to display attitude, altitude, velocity, and rate of climb. Pilots dealt with simulated failures in this unit as they worked to complete the normal mission or, at least, to execute a safe return. Similar exercises addressed failures in the stability augmentation system. When the flight plan raised issues of possible flight instability, tests in the simulator used highly pessimistic assumptions concerning stability of the vehicle. Other simulations introduced in-flight failures of the radio or Q-ball multifunction sensor. Premature engine shutdown imposed a requirement for safe landing on an alternate lakebed that was available for emergency use.[580]

The simulations indeed had realistic cockpit displays, but they left out an essential feature: the g-loads, produced both by rocket thrust and by deceleration during reentry. In addition, a failure of the stability augmentation system during reentry could allow the airplane to oscillate in pitch and yaw. This changed the drag characteristics and imposed a substantial cyclical force.

To address such issues, investigators installed a flight simulator within the gondola of an existing centrifuge at the Naval Air Development Center in Johnsville, PA. The gondola could rotate on two axes while the centrifuge as a whole was turning. It not only produced g-forces; its g-forces increased during the simulated rocket burn. The centrifuge imposed such forces anew during reentry while adding a cyclical component to give the effect of an oscillation in yaw or pitch.[581]

There also were advances in pressure suits, which had been under development since the 1930s. An early pressure suit had already saved the life of Maj. Frank K. Everest during a high-altitude flight in the X-1, when the aircraft suffered cabin decompression from a cracked canopy. Marine test pilot Lt. Col. Marion Carl had worn another during a flight to 83,235 feet in the D-558-2 Skyrocket in 1953, as had Capt. Iven Kincheloe during his record flight to 126,200 feet in the Bell X-2 in 1956. But these early suits, while effective in protecting pilots, were almost rigid when inflated, nearly immobilizing them. In contrast, the David G. Clark Company, a girdle manufacturer, introduced a fabric that contracted in circumference while it stretched in length. The two effects balanced to maintain a nearly constant volume, preserving a pilot’s freedom of movement. The result was the Clark MC-2 suit, which, in addition to serving the X-15, formed the basis for American spacesuit development from Project Mercury forward. Refined as the A/P22S-2, the X-15’s suit became the standard high-altitude pressure suit for NASA and the Air Force. It formed the basis for the Gemini suit and, after 1972, was adopted by the U. S. Navy as well, subsequently being employed by pilots and aircrew in the SR-71, U-2, and Space Shuttle.[582]

The X-15 also accelerated development of specialized instrumentation, including a unique gimbaled nose sensor developed by Northrop. It furnished precise speed and positioning data by evaluation of dynamic pressure ("q” in aero engineering shorthand), and thus was known as the Q-ball. The Q-ball took the form of a movable sphere set in the nose of the craft, giving it the appearance of the enlarged tip of a ballpoint pen. "The Q-ball is a go-no go item,” NASA test pilot Joseph Walker told Time magazine reporters in 1961, adding: "Only if she checks okay do we go.”[583] The X-15 also incorporated "cold jet” hydrogen peroxide reaction controls for maintaining vehicle attitude in the tenuous upper atmosphere, where dynamic air pressure alone would be insufficient to permit adequate flight control functionality. When Iven Kincheloe reached 126,200 feet, his X-2 was essentially a free ballistic object, uncontrollable in pitch, roll, and yaw as it reached peak altitude and then began its descent. This situation made reaction controls imperative for the new research airplane, and the NACA (later NASA) had evaluated them on a so-called "Iron Cross” simulator on the ground and then in flight on the Bell X-1B and on a modified Lockheed F-104 Starfighter. They then proved their worth on the X-15 and, as with the Clark pressure suit, were incorporated on Mercury and subsequent American spacecraft.

The X-15 introduced a side stick flight controller that the pilot would utilize during acceleration (when under loads of approximately 3 g’s), relying on a fighter-type conventional control column for approach and landing. The third X-15 had a very different flight control system from the other two, differing greatly as well from the now-standard stability-augmented hydromechanical system carried by operational military and civilian aircraft. The third aircraft introduced a so-called "adaptive” flight control system, the MH-96. Built by Minneapolis Honeywell, the MH-96 relied on rate gyros, which sensed rates of motion in pitch, roll, and yaw. It also incorporated "gain,” defined as the proportion between sensed rates of angular motion and a deflection of the ailerons or other controls. This variable gain, which changed automatically in response to flight conditions, functioned to maintain desired handling qualities across the spectrum of X-15 performance. This arrangement made it possible to blend reaction and aerodynamic controls on the same stick, with the blending occurring automatically in response to the values determined for gain as the X-15 flew out of the atmosphere and back again. Experience, alas, would reveal the MH-96 as an immature, troublesome system, one that, for all its ambition, posed significant headaches. It played an ultimately fatal role in the loss of X-15 pilot Maj. Michael Adams in 1967.[584]
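The idea of blending aerodynamic and reaction controls on one stick can be suggested with a much-simplified sketch. The real MH-96 adjusted its gain adaptively from sensed rate response; the toy version below instead keys the split directly to dynamic pressure, and its thresholds are assumed, illustrative values, not Honeywell’s design numbers.

```python
def blend_controls(stick_cmd, q_bar, q_full=500.0, q_min=10.0):
    """Split one stick command between aerodynamic surfaces and reaction
    jets as dynamic pressure q_bar (psf) falls off with altitude.
    A simplified illustration, not the MH-96 algorithm: the MH-96
    varied gain from sensed rate response, not from a q_bar schedule."""
    # Fraction of full aerodynamic authority available at this q_bar.
    aero_frac = max(0.0, min(1.0, (q_bar - q_min) / (q_full - q_min)))
    surface_cmd = stick_cmd * aero_frac       # to aerodynamic surfaces
    jet_cmd = stick_cmd * (1.0 - aero_frac)   # to peroxide thrusters
    return surface_cmd, jet_cmd

# In dense air the surfaces do all the work; out of the atmosphere,
# the same stick motion fires the reaction jets instead.
print(blend_controls(1.0, 500.0))   # (1.0, 0.0)
print(blend_controls(1.0, 0.0))     # (0.0, 1.0)
```

The design point the sketch captures is the pilot-facing one: a single controller whose commands are routed automatically, so the pilot never has to switch control modes while flying out of the atmosphere and back.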

The three X-15s accumulated a total of 199 flights from 1959 through 1968. As airborne instruments of hypersonic research, they accumulated nearly 9 hours above Mach 3, close to 6 hours above Mach 4, and 87 minutes above Mach 5. Many concepts existed for X-15 derivatives and spinoffs, including using it as a second stage to launch small satellite-lofting boosters, modifying it with a delta wing and scramjet, and even making it the basis for some sort of orbital spacecraft; for a variety of reasons, NASA did not proceed with any of these. More significant, however, was the strong influence the X-15 exerted upon subsequent hypersonic projects, particularly the National Hypersonic Flight Research Facility (NHFRF, pronounced "nerf”), intended to reach Mach 8.

A derivative of the Air Force Flight Dynamics Laboratory’s X-24C study effort, NHFRF was also to cruise at Mach 6 for 40 seconds. A joint Air Force-NASA committee approved a proposal in July 1976 with an estimated program cost of $200 million, and NHFRF had strong support from NASA’s hypersonic partisans in the Langley and Dryden Centers. Unfortunately, its rising costs, at a time when the Shuttle demanded an ever-increasing proportion of the Agency’s budget and effort, doomed it, and it was canceled in September 1977. Overall, the X-15 set speed and altitude records that were not surpassed until the advent of the Space Shuttle.[585]

The X-20 Dyna-Soar

During the 1950s, as the X-15 was taking shape, a parallel set of initiatives sought to define a follow-on hypersonic program that could actually achieve orbit. They were inspired in large measure by the 1938-1944 Silbervogel ("Silver Bird”) proposal of Austrian space flight advocate Eugen Sanger and his wife, mathematician Irene Sanger-Bredt, which greatly influenced postwar Soviet, American, and European thinking about hypersonics and long-range "antipodal” flight. Influenced by Sanger’s work and urged onward by the advocacy of Walter Dornberger, Bell Aircraft Corporation in 1952 proposed the BoMi, intended to fly 3,500 miles. Bell officials gained funding from the Air Force’s Wright Air Development Center (WADC) to study longer-range 4,000-mile and 6,000-mile systems under the aegis of Air Force project MX-2276.

Support took a giant step forward in February 1956, when Gen. Thomas Power, Chief of the Air Research and Development Command (ARDC, predecessor of Air Force Systems Command) and a future Air Force Chief of Staff, stated that the service should stop merely considering such radical craft and instead start building them. With this level of interest, events naturally moved rapidly. A month later, Bell received a study contract for Brass Bell, a follow-on Mach 15 rocket-lofted boost-glider for strategic reconnaissance. Power preferred another orbital glider concept, RoBo (for Rocket Bomber), which was to serve as a global strike system. To accelerate transition of hypersonics from the research to the operational community, the ARDC proposed its own concept, the Hypersonic Weapons Research and Development Supporting System (HYWARDS). With so many cooks in the kitchen, the Air Force needed a coordinated plan. An initial step came in December 1956, as Bell raised the velocity of Brass Bell to Mach 18. A month later, a group headed by John Becker at Langley recommended the same design goal for HYWARDS. RoBo still remained separate, but it emerged as a long-term project that could be operational by the mid-1970s.[586]

NACA researchers split along Center lines over the issue of what kind of wing design to employ for HYWARDS. At NACA Ames, Alfred Eggers and Clarence Syvertson emphasized achieving maximum lift. They proposed a high-wing configuration with a flat top, calculating its hypersonic lift-to-drag ratio (L/D) as 6.85 and measuring a value of 6.65 during hypersonic tunnel tests. Langley researchers John Becker and Peter Korycinski argued that Ames had the configuration "upside down.” Emphasizing lighter weight, they showed that a flat-bottom Mach 18 shape gave a weight of 21,400 pounds, which rose only modestly at higher speeds. By contrast, the Ames "flat-top” weight was 27,600 pounds and rising steeply. NACA officials diplomatically described the Ames and Langley HYWARDS concepts respectively as "high L/D” and "low heating,” but while the imbroglio persisted, there still was no acceptable design. It fell to Becker and Korycinski to break the impasse in August 1957, and they did so by considering heating. It was generally expected that such craft required active cooling. But Becker and his Langley colleagues found that a glider of global range achieved peak uncooled skin temperatures of 2,000 °F, which was survivable by using improved materials. Accordingly, the flat-bottom design needed no coolant, dramatically reducing both its weight and complexity.[587]

This was a seminal conclusion that reshaped hypersonic thinking and influenced all future development down to the Space Shuttle. In October 1957, coincident with the Soviet success with Sputnik, the ARDC issued a coordinated plan that anticipated building HYWARDS for research at 18,000 feet per second, following it with Brass Bell for reconnaissance at the same speed and then RoBo, which was to carry nuclear bombs into orbit. HYWARDS now took on the new name of Dyna-Soar, for "Dynamic Soaring,” an allusion to the Sanger-legacy skip-gliding hypersonic reentry. (It was later designated X-20.) To the NACA, it constituted a Round Three following the Round One X-1, X-2, and Skyrocket, and the Round Two X-15.

The flat-bottom configuration quickly showed that it was robust enough to accommodate flight at much higher speeds. In 1959, Herbert York, the Defense Director of Research and Engineering, stated that Dyna-Soar was to fly at 15,000 mph, lofted by the Martin Company’s Titan I missile, though this was significantly below orbital speed. But

Transitioning from the Supersonic to the Hypersonic: X-7 to X-15

This 1957 Langley trade study shows the weight advantage of flat-bottom reentry vehicles at higher Mach numbers. This led to abandonment of high-wing designs in favor of flat-bottom ones such as the X-20 Dyna-Soar and the Space Shuttle. NASA.

during subsequent years it changed to the more-capable Titan II and then to the powerful Titan III-C. With two solid-fuel boosters augmenting its liquid hypergolic main stage, it could easily boost Dyna-Soar to the 18,000 mph necessary for it to achieve orbit. A new plan of December 1961 dropped suborbital missions and called for "the early attainment of orbital flight."[588]

By then, though, Dyna-Soar was in deep political trouble. It had been conceived initially as a prelude to the boost-glider Brass Bell for


This full-size mockup of the X-20 gives an indication of its small, compact design. USAF.

reconnaissance and to the orbital RoBo for bombardment. But Brass Bell gave way to a purpose-built concept for a small piloted station, the Manned Orbiting Laboratory (MOL), which could carry more sophisticated reconnaissance equipment. (Ironically, though a team of MOL astronauts was selected, MOL itself likewise was eventually canceled.) RoBo, a strategic weapon, fell out of the picture completely, for the success of the solid-propellant Minuteman ICBM established the silo-launched ICBM as the Nation's prime strategic force, augmented by the Navy's fleet of Polaris-launching ballistic missile submarines.[589]

In mid-1961, Secretary of Defense Robert S. McNamara directed the Air Force to justify Dyna-Soar on military grounds. Service advocates responded by proposing a host of applications, including orbital reconnaissance, rescue, inspection of Soviet spacecraft, orbital bombardment,

and use of the craft as a ferry vehicle. McNamara found these rationalizations unconvincing but was willing to allow the program to proceed as a research effort, at least for the time being. In an October 1961 memo to President John F. Kennedy, he proposed to "re-orient the program to solve the difficult technical problems involved in boosting a body of high lift into orbit, sustaining man in it and recovering the vehicle at a designated place."[590] This reorientation gave the project 2 more years of life.

Then in 1963, he asked what the Air Force intended to do with it after using it to demonstrate maneuvering entry. He insisted he could not justify continuing the program if it was a dead-end effort with no ultimate purpose. But it had little potential utility, for it was not a cargo rocket, nor could it carry substantial payloads, nor could it conduct long-duration missions. And so, in December McNamara canceled it, after 6 years of development time, a Government contract investment of $410 million, the expenditure of 16 million man-hours by nearly 8,000 contractor personnel, 14,000 hours of wind tunnel testing, 9,000 hours of simulator runs, and the preparation of 3,035 detailed technical reports.[591]

Ironically, by the time of its cancellation, the X-20 was so far advanced that the Air Force had already set aside a block of serial numbers for the 10 production aircraft. Its construction was well underway, Boeing having completed an estimated 42 percent of design and fabrication tasks.[592] Though the X-20 never flew, portions of its principal purposes were fulfilled by other programs. Even before cancellation, the Air Force launched the first of several McDonnell Aerothermodynamic/elastic Structural Systems Environmental Test (ASSET) hot-structure radiative-cooled flat-bottom cone-cylinder shapes sharing important configuration similarities to the Dyna-Soar vehicle. Slightly later, its Project PRIME demonstrated cross-range maneuver after atmospheric entry. This used the Martin SV-5D lifting body, a vehicle differing significantly from the X-20 but which complemented it nonetheless. In this fashion, the Air Force succeeded at least partially in obtaining lifting reentry data from winged vehicles and lifting bodies that widened the future prospects for reentry.

Hot Structures and Return from Space: X-20’s Legacy and ASSET

Dyna-Soar never flew, but it sharply extended both the technology and the temperature limits of hot structures and associated aircraft elements, at a time when the American space program was in its infancy.[593] The United States successfully returned a satellite from orbit in April 1959, while ICBM nose cones were still under test, when the Discoverer II test vehicle supporting development of the National Reconnaissance Office's secret Corona spy satellite returned from orbit. Unfortunately, it came down in Russian-occupied territory far removed from its intended recovery area near Hawaii. Still, it offered proof that practical hypersonic reentry and recovery were at hand.

An ICBM nose cone quickly transited the atmosphere, whereas recoverable satellite reentry took place over a number of minutes. Hence a satellite encountered milder aerothermodynamic conditions that imposed strong heat but brought little or no ablation. For a satellite, the heat of ablation, measured in British thermal units (BTU) per pound of protective material, was usually irrelevant. Instead, insulative properties were more significant: Teflon, for example, had poor ablative properties but was an excellent insulator.[594]

Production Dyna-Soar vehicles would have had a four-flight service life before retirement or scrapping, depending upon a hot structure comprised of various materials, each with different but complementary properties. A hot structure typically used a strong material capable of withstanding intermediate temperatures to bear flight loads. Set off from it were outer panels of a temperature-resistant material that did not have to support loads but that could withstand greatly elevated temperatures as high as 3,000 °F. In between was a lightweight insulator (in Dyna-Soar's case, Q-felt, a silica fiber from the firm of Johns Manville). It had a tendency to shrink, thus risking dangerous gaps where high heat could bypass it. But it exhibited little shrinkage above 2,000 °F

and could withstand 3,000 °F. By "preshrinking" this material, it qualified for operational use.[595]

For its primary structure, Dyna-Soar used Rene 41, a nickel alloy that included chromium, cobalt, and molybdenum. Its use was pioneered by General Electric for hot-section applications in its jet engines. The alloy had room temperature yield strength of 130,000 psi, declining slightly at 1,200 °F, and was still strong at 1,800 °F. Some of the X-20’s panels were molybdenum alloy, which offered clear advantages for such hot areas as the wing leading edges. D-36 columbium alloy covered most other areas of the vehicle, including the flat underside of the wings.

These panels had to resist flutter, which brought a risk of cracking because of fatigue, as well as permitting the entry of superheated hypersonic flows that could destroy the internal structure within seconds. Because of the risks to wind tunnels from hasty and ill-considered flutter testing (where a test model, for example, can disintegrate, damaging the interior of the tunnel), X-20 flutter testing consumed 18 months of Boeing's time. Its people started testing at modest stress levels and reached levels that exceeded the vehicle's anticipated design requirements.[596]

The X-20's nose cap had to function in a thermal and dynamic pressure environment more extreme even than that experienced by the X-15's Q-ball. It was a critical item that faced temperatures of 3,680 °F, accompanied by a daunting peak heat flux of 143 BTU per square foot per second. Both Boeing and its subcontractor Chance Vought pursued independent approaches to development, resulting in two different designs. Vought built its cap of siliconized graphite with an insulating layer of temperature-resistant zirconium oxide ceramic tiles. Their melting point was above 4,500 °F, and they covered its forward area, being held in place by thick zirconium oxide pins. The Boeing design was simpler, using a solid zirconium oxide nose cap reinforced against cracking with two screens of platinum-rhodium wire. Like the airframe, the nose caps were rated through four orbital flights and reentries.[597]

Generally, the design of the X-20 reflected the thinking of Langley's John Becker and Peter Korycinski. It relied on insulation and radiation of the accumulated thermal load for primary thermal protection. But portions of the vehicle demanded other approaches, with specialized areas and equipment demanding specialized solutions. Ball bearings, facing a 1,600 °F thermal environment, were fabricated as small spheres of Rene 41 nickel alloy covered with gold. Antifriction bearings used titanium carbide with nickel as a binder. Antenna windows had to survive hot hypersonic flows yet be transparent to radio waves. A mix of oxides of cobalt, aluminum, and nickel gave a coating that showed a suitable emittance while furnishing requisite temperature protection.

The pilot looked through five clear panes: three that faced forward and two on the sides. The three forward panes were protected by a jettisonable protective shield and could only be used below Mach 5 after reentry, but the side ones faced a less severe aerothermodynamic environment and were left unshielded. But could the X-20 be landed if the protective shield failed to jettison after reentry? NASA test pilot Neil Armstrong, later the first human to set foot upon the Moon, flew approaches using a modified Douglas F5D Skylancer. He showed it was possible to land the Dyna-Soar using only visual cues obtained through the side windows.

The cockpit, equipment bay, and a power bay were thermally isolated and cooled via a "water wall" using lightweight panels filled with a jelled water mix. The hydraulic system was cooled as well. To avoid overheating and bursting problems with conventional inflated rubber tires, Boeing designed the X-20 to incorporate tricycle-landing skids with wire brush landing pads.[598] Dyna-Soar, then, despite never having flown, significantly advanced the technology of hypersonic aerospace vehicle design. Its contributions were many and can be illustrated by examining the confidence with which engineers could approach the design of critical technical elements of a hypersonic craft, in 1958 (the year North American began fabricating the X-15) and 1963 (the year Boeing began fabricating the X-20):[599]

TABLE 1
INDUSTRY HYPERSONIC "DESIGN CONFIDENCE"
AS MEASURED BY ACHIEVABLE DESIGN TEMPERATURE CRITERIA, °F

ELEMENT              X-15     X-20
Nose cap             3,200    4,300
Surface panels       1,200    2,750
Primary structure    1,200    1,800
Leading edges        1,200    3,000
Control surfaces     1,200    1,800
Bearings             1,200    1,800

In short, within the 5 years that took the X-20 from a paper study to a project well underway, the "art of the possible" in hypersonics witnessed a one-third increase in possible nose cap temperatures, a more than twofold increase in the acceptable temperatures of surface panels and leading edges, and a 50-percent increase in the acceptable temperatures of primary structures, control surfaces, and bearings.

The winddown and cancellation of Dyna-Soar coincided with the first flight tests of the much smaller but nevertheless still very technically ambitious McDonnell ASSET hypersonic lifting reentry test vehicle. Lofted down the Atlantic Test Range on modified Thor and Thor-Delta boosters, they demonstrated reentry at over Mach 18. ASSET dated to 1959, when Air Force hypersonic advocates advanced it as a means of assessing the accuracy of existing hypersonic theory and predictive techniques. In 1961, McDonnell Aircraft, a manufacturer of fighter aircraft and also the Project Mercury spacecraft, began design and fabrication of ASSET's small sharply swept delta wing flat-bottom boost-gliders. They had a length of 69 inches and a span of 55 inches.

Though in many respects they resembled the soon-to-be-canceled X-20, unlike that larger, crewed transatmospheric vehicle, the ASSET gliders were more akin to lifting nose cone shapes. Instead of the X-20's primary reliance upon Rene 41, the ASSET gliders largely used columbium alloys, with molybdenum alloy on their forward lower heat shield, graphite wing leading edges, various insulative materials, and columbium, molybdenum, and graphite coatings as needed. There were also three nose caps: one fabricated from zirconium oxide rods, another from tungsten coated with thorium, and a third of siliconized graphite coated with zirconium oxide. Though all six ASSETs looked alike, they were built in two differing variants: four Aerothermodynamic Structural Vehicles (ASV) and two Aerothermodynamic Elastic Vehicles (AEV). The former reentered from higher velocities (between 16,000 and 19,500 feet

per second) and altitudes (from 202,000 to 212,000 feet), necessitating use of two-stage Thor-Delta boosters. The latter (only one of which flew successfully) used a single-stage Thor booster and reentered at 13,000 feet per second from an altitude of 173,000 feet. It was a hypersonic flutter research vehicle, analyzing as well the behavior of a trailing-edge flap representing a hypersonic control surface. Both the ASV and AEV flew with a variety of experimental panels installed at various locations and fabricated by Boeing, Bell, and Martin.[600] The ASSET program conducted six flights between September 1963 and February 1965, all successful save for one ASV launch in March 1964. Though intended for recovery from the Atlantic, only one survived the rigors of parachute deployment, descent, and being plunged into the ocean. But that survivor, the ASV-3, proved to be in excellent condition, with the builder, International Harvester, rightly concluding it "could have been used again."[601] ASV-4, the best flight flown, was also the last one, with the final flight-test report declaring that it returned "the highest quality data of the ASSET program." It flew at a peak speed of Mach 18.4, including a hypersonic glide that covered 2,300 nautical miles.[602]

Overall, the ASSET program scored a host of successes. It was all the more impressive for the modest investment made in its development: just $21.2 million. It furnished the first proof of the magnitude and serious­ness of upper-surface leeside heating and the dangers of hypersonic flow impingement into interior structures. It dealt with practical issues of fab­rication, including fasteners and coatings. It contributed to understand­ing of hypersonic flutter and of the use of movable control surfaces. It also demonstrated successful use of an attitude-adjusting reaction con­trol system, in near vacuum and at speeds much higher than those of the X-15. It complemented Dyna-Soar and left the aerospace industry believ­ing that hot structure design technology would be the normative tech­nical approach taken on future launch vehicles and orbital spacecraft.[603]

TABLE 2
MCDONNELL ASSET FLIGHT TEST PROGRAM

DATE             VEHICLE   BOOSTER      VELOCITY        ALTITUDE   RANGE
                                        (FEET/SECOND)   (FEET)     (NAUTICAL MILES)
Sept. 18, 1963   ASV-1     Thor         16,000          205,000    987
Mar. 24, 1964    ASV-2     Thor-Delta   18,000          195,000    1,800
July 22, 1964    ASV-3     Thor-Delta   19,500          225,000    1,830
Oct. 27, 1964    AEV-1     Thor         13,000          168,000    830
Dec. 8, 1964     AEV-2     Thor         13,000          187,000    620
Feb. 23, 1965    ASV-4     Thor-Delta   19,500          206,000    2,300

Hypersonic Aerothermodynamic Protection and the Space Shuttle

Certainly over much of the Shuttle's early conceptual period, advocates thought such logistical transatmospheric aerospace craft would employ hot structure thermal protection. But undertaking such structures on large airliner-size vehicles proved troublesome and thus premature. Then, as though given a gift, NASA learned that Lockheed had built a pilot plant and could mass-produce silica "tiles" that could be attached to a conventional aluminum structure, an approach far more appealing than designing a hot structure. Accordingly, when the Agency undertook development of the Space Shuttle in the 1970s, it selected this approach, meaning that the new Shuttle was, in effect, a simple aluminum airplane. Not surprisingly, Lockheed received a NASA subcontract in 1973 for the Shuttle's thermal-protection system.

Lockheed had begun its work more than a decade earlier, when investigators at Lockheed Missiles and Space began studying ceramic fiber mats, filing a patent on the technology in December 1960. Key people included R. M. Beasley, Ronald Banas, Douglas Izu, and Wilson Schramm. By 1965, subsequent Lockheed work had led to LI-1500, a material that was 89 percent porous and weighed 15 pounds per cubic foot (lb/ft3). Thicknesses of no more than an inch protected test surfaces during simulations of reentry heating. LI-1500 used methyl methacrylate (Plexiglas), which volatilized when hot, producing an outward

flow of cool gas that protected the heat shield, though also compromising its reusability.[604]

Lockheed's work coincided with NASA plans in 1965 to build a space station as its main post-Apollo venture and, consequently, the first great wave of interest in designing practical logistical Shuttle-like spacecraft to fly between Earth and the orbital stations. These typically were conceived as large winged two-stage-to-orbit systems with fly-back boosters and orbital spaceplanes. Lockheed's Maxwell Hunter devised an influential design, the Star Clipper, with two expendable propellant tanks and LI-1500 thermal protection.[605] The Star Clipper also was large enough to benefit from the Allen-Eggers blunt-body principle, which lowered its temperatures and heating rates during reentry. This made it possible to dispense with the outgassing impregnant, permitting use—and, more importantly, reuse—of unfilled LI-1500. Lockheed also introduced LI-900, a variant of LI-1500 with a porosity of 93 percent and a weight of only 9 pounds per cubic foot. As insulation, both LI-900 and LI-1500 were astonishing. Laboratory personnel found that they could heat a tile in a furnace until it was white hot, remove it, allow its surface to cool for a couple of minutes, and pick it up at its edges with their fingers, with its interior still glowing at white heat.[606]

Previous company work had amounted to general materials research. But Lockheed now understood in 1971 that NASA wished to build the Shuttle without simultaneously proceeding with the station, opening a strong possibility that the company could participate. The program had started with a Phase A preliminary study effort, advancing then to Phase B, which was much more detailed. Hot structures were initially ascendant but posed serious challenges, as NASA Langley researchers found when they tried to build a columbium heat shield suitable for the Shuttle. The exercise showed that despite the promise of reusability and long life, coatings were fragile and damaged easily, leading to rapid oxygen-induced embrittlement at high temperatures. Unprotected columbium oxidized particularly readily and, when hot, could burst into flame. Other refractory metals were available, but they were little understood because they had been used mostly in turbine blades.

Even titanium amounted literally to a black art. Only one firm, Lockheed, had significant experience with a titanium hot structure. That experience came from the Central Intelligence Agency-sponsored Blackbird strategic reconnaissance program, so most of the pertinent shop-floor experience was classified. The aerospace community knew that Lockheed had experienced serious difficulties in learning how to work with titanium, which for the Shuttle amounted to an open invitation to difficulties, delays, and cost overruns.

The complexity of a hot structure—with large numbers of clips, brackets, standoffs, frames, beams, and fasteners—also militated against its use. Each of the many panel geometries needed its own structural analysis that was to show with confidence that the panel could withstand creep, buckling, flutter, or stress under load, and in the early computer era, this posed daunting analytical challenges. Hot structures were also known generally to have little tolerance for "overtemps," in which temperatures exceeded the structure's design point.[607]

Thus, having taken a long look at hot structures, NASA embraced the new Lockheed pilot plant and gave close examination to Shuttle designs that used tiles, which were formally called Reusable Surface Insulation (RSI). Again, the choice of hot structures versus RSI reflected the deep pockets of the Air Force, for hot structures were

costly and complex. But RSI was inexpensive, flexible, and simple. It suited NASA’s budget while hot structures did not, so the Agency chose it.

In January 1972, President Richard M. Nixon approved the Shuttle as a program, thereby raising it to the level of a Presidential initiative. Within days, Dale Myers, a senior official, announced that NASA had made the basic decision to use RSI. The North American Rockwell concept that won the $2.6 billion prime contract in July therefore specified RSI as well—but not Lockheed's. North American Rockwell's version came from General Electric and was made from mullite.[608]

Which was better, the version from GE or the one from Lockheed? Only tests would tell—and exposure to temperature cycles of 2,300 °F gave Lockheed a clear advantage. NASA then added acoustic tests that simulated the loud roars of rocket flight. This led to a "sudden-death shootout,” in which competing tiles went into single arrays at NASA Johnson. After 20 cycles, only Lockheed’s entrants remained intact. In separate tests, Lockheed’s LI-1500 withstood 100 cycles to 2,500 °F and survived a thermal overshoot to 3,000 °F as well as an acoustic overshoot to 174 decibels (dB).

Lockheed won the thermal-protection subcontract in 1973, with NASA specifying LI-900 as the baseline RSI. The firm responded by preparing to move beyond the pilot-plant level and to construct a full-scale production facility in Sunnyvale, CA. With this, tiles entered the mainstream of thermal protection systems available for spacecraft design, in much the same way that blunt bodies and ablative approaches had before them, first flying into space aboard the Space Shuttle Columbia in April 1981. But getting them operational and into space was far from easy.[609]

Structures and their Aeroelastic Manifestations

Though an airplane looks rigidly solid, in fact it is a surprisingly flexible machine. The loadings it experiences in flight can manifest themselves in a variety of ways that affect and "move" the structure, and, as discussed previously, the flight control system itself can adversely affect the structure. The convoluted field in which aerodynamics and structures collide both statically and dynamically has led to some of the most complex and challenging problems that engineers, researchers, and designers have faced in the history of aeronautics.

The safety factor for a railroad bridge is usually "10," meaning that the structural members are sized to carry 10 times the design load without failing. Since weight is so crucial to the performance of an airplane, however, its structural safety factor is typically "1.5," that is, the structure can fail if the loads are only 50 percent higher than the design value. As a result of the low aircraft design safety factor, aircraft structures receive far more attention during the design than do bridge structures and are subject to much larger deformations when loaded. This structural deformation can also interact with the aerodynamics of an airplane, both dynamically and statically, independently from the control system interaction mentioned earlier.
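The two safety factors above reduce to a one-line calculation. The following is an illustrative sketch, not drawn from the text: the function name and the load values are hypothetical, chosen only to contrast the two design philosophies.

```python
# Illustrative comparison (values hypothetical) of the two safety factors
# named above: a structure is sized so that failure occurs at the design
# limit load multiplied by the safety factor.

def ultimate_load(design_limit_load, safety_factor):
    """Load at which the structure is sized to fail."""
    return design_limit_load * safety_factor

limit = 100_000.0                              # lb, a hypothetical wing limit load
aircraft_ultimate = ultimate_load(limit, 1.5)  # aircraft practice: fails at 150,000 lb
bridge_ultimate = ultimate_load(limit, 10.0)   # bridge practice: fails at 1,000,000 lb
```

The narrow 1.5 margin is why aircraft structural analysis demands so much more precision than bridge analysis: a 50-percent load exceedance is the failure boundary, not a comfortable reserve.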

Hot Structure Approaches

Another option for thermal protection during entry was the use of exotic, high-temperature materials for the external surface that could re-radiate the heat back into space. This concept was proposed for the X-20 Dyna-Soar program, and the vehicle was well under construction at the time of cancellation.[755] In parallel with the X-20 program, the Air Force Flight Dynamics Laboratory developed a small radiative-cooled hot structure vehicle (essentially the first 4 feet of the X-20 Dyna-Soar's nose), called the McDonnell Aerothermodynamic/elastic Structural Systems Environmental Tests (ASSET). The ASSET design used the same materials and thermal protection concepts as the X-20 and first flew in September 1963, 3 months before cancellation of the Dyna-Soar. The fourth ASSET vehicle successfully completed a Mach 18.4 entry from 202,000 feet in 1965. Postflight examination indicated

it survived the entry well, although the operational problems and manufacturing methods for these exotic materials were expensive and time-consuming. Since that time, joint NASA-Air Force-Navy-industry developmental programs such as the X-30 National Aero-Space Plane (NASP) effort of the late 1980s to early 1990s have advanced materials and fabrication technologies that, in due course, may be applied to future hypersonic systems.[756]

Structural Analysis Prior to Computers

Basic principles of structural analysis—static equilibrium, trusses, and beam theory—were known long before computers, or airplanes, existed. Bridges, towers and other buildings, and ships were designed by a combination of experience and some amount of analysis—more so as designs became larger and more ambitious during and after the Industrial Revolution.

With airplanes came much greater emphasis on weight minimization. Massive overdesign was no longer an acceptable means to achieve structural integrity. More rigorous analysis and structural sizing were required. Simplifications allowed the analysis of primary members under simple loading conditions:

• Slender beams: axial load, shear, bending, torsion.

• Trusses: members carry axial load only, joined to other such members at ends.

• Simple shells: pressure loading.

• Semi-monocoque (skin and stringer) structures: shear flow, etc.

• Superposition of loading conditions.
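Two of these simplifications can be made concrete with a minimal sketch: classical slender-beam formulas, combined by superposition. All dimensions, loads, and function names below are hypothetical, chosen only to illustrate the hand-calculation era; the closed-form deflection expressions are the standard cantilever results.

```python
# Illustrative hand-method calculation (values hypothetical): a slender
# cantilever beam carrying a tip point load plus a uniform running load,
# analyzed case by case and combined by superposition.

def tip_deflection_point_load(P, L, E, I):
    """Cantilever tip deflection under a tip load P: P*L^3 / (3*E*I)."""
    return P * L**3 / (3 * E * I)

def tip_deflection_uniform_load(w, L, E, I):
    """Cantilever tip deflection under a uniform load w: w*L^4 / (8*E*I)."""
    return w * L**4 / (8 * E * I)

E = 10.0e6   # modulus of elasticity, psi (aluminum-like, illustrative)
I = 40.0     # section moment of inertia, in^4
L = 100.0    # beam length, in

# Superposition: the combined deflection is the sum of the separate cases.
total = (tip_deflection_point_load(500.0, L, E, I)     # 500 lb tip load
         + tip_deflection_uniform_load(2.0, L, E, I))  # 2 lb/in running load
```

Each load case is trivial on its own; superposition lets the analyst stack as many such cases as the flight condition requires without re-deriving anything.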

With these simplifications, primary structural members could be sized appropriately to the expected loads. In the days of wood, wire, and fabric, many aircraft structures could be analyzed as trusses: externally braced biplane wings; fuselage structures consisting of longerons, uprights, and cross braces, with diagonal braces or wires carrying torsion; landing gears; and engine mounts. As early as the First World War and in the 1920s, researchers were working to cover every required aspect of the problem: general analysis methods, analysis of wings, horizontal and vertical tails, gust loads, test methods, etc. The National Advisory Committee for Aeronautics (NACA) contributed significantly to the building of this early body of methodology.[787]

Structures with redundancy—multiple structural members capable of sharing one or more loading components—may be desirable for safety, but they posed new problems for analysis. Redundant structures cannot be analyzed by force equilibrium alone. A conservative simplification, often practiced in the early days of aviation, was to analyze the structure with redundant members missing. A more precise solution would require the consideration of displacements and "compatibility" conditions: members that are connected to one another must deform in such a manner that they move together at the point of connection. Analysis was feasible but time-consuming. Large-scale solutions to redundant ("statically indeterminate") structure problems would become practical with the aid of computers. Until then, more simplifications were made, and specific types of solutions—very useful ones—were developed.
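The compatibility idea can be seen in the simplest redundant structure: three members in parallel sharing one load. This is an illustrative sketch (names and values assumed, not from the text) of the displacement-based reasoning that computers later generalized into matrix and finite element methods.

```python
# Why redundancy defeats equilibrium alone: three parallel members share one
# load, so equilibrium gives a single equation (F1 + F2 + F3 = F) for three
# unknowns. The compatibility condition -- every member stretches by the same
# amount -- closes the system.

def redundant_member_forces(stiffnesses, applied_load):
    """Forces in parallel members sharing one load and one displacement."""
    k_total = sum(stiffnesses)
    displacement = applied_load / k_total       # compatibility + equilibrium
    return [k * displacement for k in stiffnesses]

forces = redundant_member_forces([100.0, 300.0, 600.0], 1000.0)
# Stiffer members attract more load: forces come out [100.0, 300.0, 600.0]
assert abs(sum(forces) - 1000.0) < 1e-9         # equilibrium is recovered
```

Note that the load splits in proportion to stiffness, which is exactly the information lost when a redundant member is simply assumed missing.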

While these analysis methods were being developed, there was a lot of airplane building going on without very much analysis at all. In the "golden age of aviation," many airplanes were built in garages or at small companies that lacked the resources for extensive analysis. "In many cases people who flew the airplanes were the same people who carried out the analysis and design. They also owned the company. There was very little of what we now call structural analysis. Engineers were brought in and paid—not to design the aircraft—but to certify that the aircraft met certain safety requirements."[788]

Through the 1930s, as aircraft structures began to be formed out of aluminum, the semi-monocoque or skin-and-stringer structure became prevalent, and analysis methods were developed to suit. "In the 1930s, '40s, and '50s, techniques were being developed to analyze specific structural components, such as wing boxes and shear panels, with combined bending, torsion, and shear loads and with stiffeners on the skins."[789] A number of exact solutions to the differential equations for stress and strain in a structural member were known, but these generally exist only for very simple geometric shapes and very limited sets of loading conditions and boundary conditions. Exact solutions were of little practical value to the aircraft designer or stress analyst. Instead, "free body diagrams" were used to analyze structures at selected locations, or "stations." The structure was considered to be cut by a theoretical plane at the station of interest. All loads, applied and inertial, on the portion of the aircraft outboard of the cut had to be borne (reacted) by the structure at the cut.

In principle, this allowed the stress at any point in the structure to be analyzed—given the time to make an arbitrarily large number of these theoretical cuts through the aircraft. In practice, free body diagrams were used to analyze the structure at key locations—selected fuselage stations, the root, and selected stations of wings and tail surfaces. Structural members were left constant, or tapered appropriately, according to experience and judgment, between the analyzed sections. For major projects such as airliners or bombers, the analysis would be more thorough, and consequently, major design organizations had rooms full of people whose jobs were to perform the required calculations.
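In its simplest form, the station-cut procedure reduces to summing the loads acting outboard of the cut. The sketch below uses a uniform lift distribution and hypothetical numbers (the function name and values are assumed, not from the text) to show what one such free-body calculation produced.

```python
# Illustrative free-body-diagram calculation (values hypothetical): the wing
# outboard of a chosen "station" is cut free, and all lift acting outboard of
# the cut must be reacted as shear and bending moment at the cut.

def shear_and_moment_at_cut(station, semispan, lift_per_unit_span):
    """Uniform lift distribution: react everything outboard of `station`."""
    outboard = semispan - station
    shear = lift_per_unit_span * outboard             # total outboard lift
    moment = lift_per_unit_span * outboard**2 / 2.0   # lift times its moment arm
    return shear, moment

# At the root (station 0) of a 200-inch semispan carrying 50 lb/in of lift:
V, M = shear_and_moment_at_cut(0.0, 200.0, 50.0)
# V = 10,000 lb of shear and M = 1,000,000 in-lb of bending at the root
```

Repeating this at a handful of stations, then tapering the structure between them by judgment, is precisely the procedure those rooms full of human computers executed by hand.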

The NACA also utilized this brute-force approach to large calculations, and the people who performed the calculations—overwhelmingly women—were called "computers." Annie J. Easley, who worked at the NASA Lewis (now Glenn) Research Center starting in 1955, recalls:

. . . we were called computers until we started to get the machines, and then we were changed over to either math technicians or mathematicians. . . . The engineers and the scientists are working away in their labs and their test cells, and they come up with problems that need mathematical computation. At that time, they would bring that portion to the computers, and our equipment then were the huge calculators, where you'd put in some numbers and it would clonk, clonk, clonk out some answers, and you would record them by hand. Could add, subtract, multiply, and divide. That was pretty much what those big machines, those big desktop machines, could do. If we needed to find a logarithm or an exponential, we then pulled out the tables.[790]

After World War II, with jet engines pushing aircraft into ever more demanding flight regimes, the analytical community sought to keep up. The NACA continued to improve the methodologies for calculating loads on various parts of an aircraft, and some of the reports generated during that time are still used by industry practitioners today. NACA Technical Report (TR) 1007, for horizontal tail loads in pitch maneuvers, is a good example, although it does not cover all of the conditions required by recent airworthiness regulations.[791]

For structural analysis, energy methods and matrix methods began to receive more attention. Energy methods work as follows: one first expresses the deflection of a member as a set of assumed shape functions, each multiplied by an (initially unknown) coefficient; expresses the total potential energy (the strain energy less the work done by the applied loads) in terms of these unknown coefficients; and finally, finds the values of the coefficients that minimize that potential energy. If the shape functions, from which the solution is built, satisfy the boundary conditions of the problem, then so does the final solution.

Energy methods were not new. The concept of energy minimization was introduced by Lord Rayleigh in the late 19th century and extended by Walter Ritz in two papers of 1908 and 1909.[792] Rayleigh and Ritz were particularly concerned with vibrations. Carlo Alberto Castigliano, an Italian engineer, published a dissertation in 1873 that included two important theorems for applying energy principles to forces and static displacements in structures.[793] However, in the early works, the shape functions were continuous over the domain of interest. The idea of breaking up (discretizing) a complex structure into many simple elements for numerical solution would lead to the concept of finite elements, but for this to be useful, computing technology needed to mature.

Applying Computational Structural Analysis to Flight Research

We now turn to an area of activity that provides, for aviation, the ultimate proof of design techniques and predictive capabilities: flight-testing. While there are many fascinating projects that could be discussed, we will consider only five that had particular relevance to the subject at hand, either because they collected data specifically intended to validate computational predictions of structural behavior, or because they demonstrated unique structural design approaches.

Two of these are the YF-12 Thermal Loads project and the Rotor Aerodynamic Limits survey, both of which collected data for validating and improving predictive methods. The remaining three are the Highly Maneuverable Aircraft Technology (HiMAT) digital fly-by-wire (DFBW) enhanced agility composite-structured canard demonstrator, the AD-1 oblique wing demonstrator, and the Grumman X-29 forward-swept wing (FSW) research aircraft. These three projects exercised, in progressively more challenging ways, the concept of aeroelastic tailoring: that is, predicting airframe flexibility and having enough confidence in those predictions to design an airplane that takes advantage of elastic deformation, rather than just trying to minimize it. In all of these, NASA-rooted computational structural prediction proved of great, and occasionally critical, significance.

The investigation of aircraft structural mechanics or, indeed, of almost any discipline, can be considered to include the following activities: investigation by basic theory, computational analysis or simulation, laboratory test, and flight test (or, more generally, any test of the final product in its actual operating environment). Many arguments have been had over which is the most valuable. This author is of the opinion—based on his experience in the practice of engineering, on a certain amount of historical research, and on the teaching and example of mentors and peers—that theory, computation, laboratory test, and flight test all constitute imperfect but complementary views of reality. Thus, until someone comes up with a way to know the exact state of stress and deflection in every part of a vehicle under actual operating conditions, we must form our understanding of reality as a composite image, using what information we can gain from each available source:

• Flight test, obviously, is the best representation we have of an aircraft in actual operational conditions. However, it is also the activity in which our ability to interrogate the system is most severely compromised. Many data parameters are available only if special instrumentation is installed, and some not even then, and this is the most difficult environment in which to obtain stable, high-quality data.

• Laboratory test offers better visibility into the operation of specific parts of the system and better control of experimental parameters, at the price of some separation from true operational conditions.

• Computation offers even greater opportunity to interrogate the value of any data parameter at any time and to simulate conditions that might be impossible, difficult, or dangerous to test. Computation also eliminates all physical complications of running the experiment and all physical sources of noise and uncertainty. But in stepping out of the physical world and into the analytical world, the researcher becomes subject to the limited fidelity of the computational method: what effects are and are not included in the computation, and how well the computation represents physical reality.

• Theory is sometimes the best source of insight and of understanding what parameters might be changed to obtain some desired effect, but it does not provide the detailed quantitative data necessary to implement the solution.

In this light, the following flight programs are discussed. Much more could be said about each of them. The present discussion is necessarily confined to their significance to the development or validation of loads and structural computation methods.

Structural Analysis and Loads Prediction Facilities

Test facilities have an important role in verifying and improving analysis methods. A few facilities that figured prominently in the development and validation of structural analysis methods are described below. In addition to these, other "landmark” test facilities include the large-scale launch vehicle structural test facilities at the Johnson and Marshall Space Centers, and the crash dynamics test facility at Langley Research Center.

Structural Dynamics Laboratory (Ames Research Center, 1965)

During the 1960s, Ames and Langley collaborated on some of the structural dynamics and buffet problems of spacecraft during ascent. (This collaboration occurred through some of the same meetings at NASA Headquarters that led to the development of NASTRAN.) To help assess the structural dynamic characteristics of boosters, and to build confidence in predictive methods, a large structural dynamics test facility was built at Ames (completed in 1965). This facility was large enough to hold a full-size Atlas or Titan II, had provisions for exciting the structural modes of the test article, and could be evacuated to test the structural damping characteristics in zero or reduced ambient air density.[1005] The facility was also used for research on buffet during reentry and on landing impacts.[1006] Much of the structural dynamics research at Ames was discontinued or relocated during the early 1970s. The laboratory has long since been deactivated, but the large, pentagonal tower still stands, housing a machine shop and a wind tunnel that can simulate Mars’s atmosphere by evacuating the chamber and then filling to low pressure with CO2.[1007]

Thermal Loads Laboratory (Dryden Flight Research Center, 1960s)

A 1973 accounting of NASA research facilities listed only one major ground laboratory at Dryden: the High Temperature Loads Calibration Laboratory.[1008] High supersonic and hypersonic flight research created a need (1) to test airframes on the ground under simultaneous thermal and structural loading conditions and (2) to calibrate loads instrumentation at elevated temperatures, so that the data obtained in flight could be reliably interpreted. These needs " . . . led to the construction of a laboratory for calibrating strain-gage installations to measure loads in an elevated temperature environment. The problems involved in measuring loads with strain gages. . . require the capability to heat and load aircraft under simulated flight conditions. . . . The laboratory has the capability of testing structural components and complete vehicles under the combined effects of loads and temperatures, and calibrating and evaluating flight loads instrumentation under [thermal] conditions expected in flight.”[1009]

The laboratory is housed in a hangarlike building with attached shop, offices, and control room. Capabilities included:

• Hangar-door opening 40 feet high by 136 feet wide.

• Unobstructed test area 150 by 120 by 40 feet, allowing the testing of aircraft as large as a YF-12 or SR-71.

• Ten megawatts of electrical heating power via quartz lamps and reflectors.

• Temperatures up to 3,000 °F.

• Hydraulic power of 4.5 gallons/minute at 3,000 pounds per square inch (psi) to apply loads.

• Fourteen channels of closed-loop load or position control, commanding up to 34 separate actuators.

• Sensors including strain gages, thermocouples, load cells, and position transducers.

Slots in the floor provided flexible locations for tiedown points, as well as routing for hydraulic and electrical power, instrumentation wiring, compressed air, or water (presumably for cooling). Closed-loop analog control of both mechanical load and heating was provided, to any desired preprogrammed time history.

The facility was used in the YF-12 thermal loads project (discussed elsewhere in this paper), in Space Shuttle structural verification at high temperatures, and for a variety of other studies.[1010] The loads laboratory contributed to the validation of computational methods by providing the opportunity to compare computational predictions with test data obtained under known, controlled, thermal and structural loading conditions, applied together or independently as required. At the time of this writing, the facility is still in use.[1011]