
Early Aircraft Fly-By-Wire Applications

By the 1950s, fully boosted flight controls were common, and the potential benefits of fly-by-wire were becoming increasingly apparent. Beginning during the Second World War and continuing postwar, fly-by-wire and power-by-wire flight control systems had been fielded in various target drones and early guided missiles.[1114] However, most aircraft designers were reluctant to completely abandon mechanical linkages to flight control surfaces in piloted aircraft, an attitude that would undergo an evolutionary change over the next two decades as a result of a broad range of NACA-NASA, Air Force, and foreign research efforts.

Beginning in 1952, the NACA Langley Aeronautical Laboratory began an effort oriented to exploring various aspects of fly-by-wire, including the use of a side stick controller.[1115] By 1954, flight-testing began with what was perhaps the first jet-powered fly-by-wire research aircraft, a modified former U.S. Navy Grumman F9F-2 Panther carrier-based jet fighter used as an NACA research aircraft. The primary objective of the NACA effort was to evaluate various automatic flight control systems, including those based on rate and normal acceleration feedback. A secondary objective was to evaluate the use of fly-by-wire with a side stick controller for pilot inputs. The existing F9F-2 hydraulic flight control system, with its mechanical linkages, was retained, with the NACA designing an auxiliary flight control system based on a fly-by-wire analog concept. A small, 4-inch-tall side stick controller was mounted at the end of the right ejection seat armrest. The controller was pivoted at the bottom and was used for both lateral (roll) and longitudinal (pitch) control. Only 4 pounds of force were required for full stick deflection. The control friction normally present in a hydromechanical system was completely eliminated by the electrically powered system. Additionally, the aircraft's fuel system was modified to enable fuel to be pumped aft to destabilize the aircraft by moving the center of gravity rearward. Another modification was the addition of a steel container mounted on the lower aft fuselage. This carried 250 pounds of lead shot to further destabilize the aircraft. In an emergency, the shot could be rapidly jettisoned to restabilize the aircraft. Fourteen pilots flew the modified F9F-2, including NACA test pilots William Alford[1116] and Donald L. Mallick.[1117] Using only the side stick controller, the pilots conducted takeoffs, stall approaches, acrobatics, and rapid precision maneuvers that included air-to-air target tracking, ground strafing runs, and precision approaches and landings. The test pilots quickly became used to flying with the side stick and found it comfortable and natural to use.[1118]

In mid-1956, after interviewing aircraft flight control experts from the Air Force Wright Air Development Center’s Flight Control Laboratory, Aviation Week magazine concluded:

The time may not be far away when the complex mechanical linkage between the pilot's control stick and the airplane's control surface (or booster valve system) is replaced with an electrical servo system. It has long been recognized that this "fly-by-wire" approach offered attractive possibilities for reducing weight and complexity. However, airplane designers and pilots have been reluctant to entrust such a vital function to electronics whose reliability record leaves much to be desired.[1119]

Even as the Aviation Week article was published, several noteworthy aircraft were under development that would incorporate various fly-by-wire approaches in their flight control systems. In 1956, the British Avro Vulcan B.2 bomber flew with a partial fly-by-wire system that operated in conjunction with hydraulically boosted, mechanically activated flight controls. The supersonic North American A-5 Vigilante Navy carrier-based attack bomber flew in 1958 with a pseudo-fly-by-wire flight control system. The Vigilante served the fleet for many years, but its highly complex design proved very difficult to maintain and operate in an aircraft carrier environment. By the mid-1960s, the General Dynamics F-111 was flying with triple-redundant, large-authority stability and command augmentation systems and fly-by-wire-controlled wing-mounted spoilers.[1120]

On the basic research side, the delta-winged British Short S.C.1, first flown in 1957, was a very small, single-seat Vertical Take-Off and Landing (VTOL) aircraft. It incorporated a triply redundant fly-by-wire flight control system with a mechanical backup capability. The outputs from the three independent fly-by-wire channels were compared, and a failure in a single channel was overridden by the other two. A single channel failure was relayed to the pilot as a warning, enabling him to switch to the direct (mechanical) control system. The S.C.1 had three flight control modes, as described below, with only the first two being selectable prior to takeoff.[1121]

• Full fly-by-wire mode with aerodynamic surfaces and nozzles controlled electrically via three independent servo motors with triplex fail-safe operation in conjunction with three analog autostabilizer control systems.

• A hybrid mode in which the reaction nozzles were servo/autostabilizer (fly-by-wire) controlled and the aerodynamic surfaces were linked directly to the pilot's manual controls.

• A direct mode in which all controls were mechanically linked to the pilot's control stick.
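
The failure-masking arrangement just described is classic triplex voting: compare the three channel outputs, carry the value the majority agrees on, and warn the pilot about the dissenter. The sketch below is a minimal, hypothetical software illustration of that logic (the S.C.1's actual implementation was analog, and none of the names or values here come from it):

```python
def triplex_vote(ch_a, ch_b, ch_c, tolerance=0.1):
    """Mid-value selection across three redundant channel outputs.

    Returns the voted command plus the index of a single disagreeing
    channel (or None), mimicking how a triplex system masks one
    failure while still warning the pilot. Illustrative only.
    """
    channels = [ch_a, ch_b, ch_c]
    voted = sorted(channels)[1]  # the median masks one bad channel
    # Any channel deviating from the voted value by more than the
    # tolerance is overridden by the two agreeing channels.
    failed = [i for i, c in enumerate(channels)
              if abs(c - voted) > tolerance]
    return voted, (failed[0] if len(failed) == 1 else None)

# Example: channel B has drifted; the vote masks it and reports it.
command, bad_channel = triplex_vote(0.52, 0.91, 0.50)
print(command, bad_channel)  # 0.52 1
```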

The S.C.1 weighed about 8,000 pounds and was powered by four vertically mounted Rolls-Royce RB.108 lift engines, providing a total vertical thrust of 8,600 pounds. One RB.108 engine mounted horizontally in the rear fuselage provided thrust for forward flight. The lift engines were mounted vertically in side-by-side pairs in a central engine bay and could be swiveled to produce vectored thrust (up to 23 degrees forward for acceleration or -12 degrees for deceleration). Variable thrust nose, tail, and wingtip jet nozzles (powered by bleed air from the four lift engines) provided pitch, roll, and yaw control in hover and at low speeds during which the conventional aerodynamic controls were ineffective. The S.C.1 made its first flight (a conventional takeoff and landing) on April 2, 1957. It demonstrated tethered vertical flight on May 26, 1958, and free vertical flight on October 25, 1958. The first transition from vertical flight to conventional flight was made April 6, 1960.[1122]

During 10 years of flight-testing, the two S.C.1 aircraft made hundreds of flights and were flown by British, French, and NASA test pilots. A Royal Aircraft Establishment (RAE) report summarizing flight-test experience with the S.C.1 noted: "Of the visiting pilots, those from NASA [Langley's John P. "Jack" Reeder and Fred Drinkwater from Ames] flew the aircraft 6 or 7 times each. They were pilots of very wide experience, including flight in other VTOL aircraft and variable stability helicopters, which was of obvious assistance to them in assessing the S.C.1."[1123] On October 2, 1963, while hovering at an altitude of 30 feet, a gyro input malfunction in the flight control system produced uncontrollable pitch and roll oscillations that caused the second S.C.1 test aircraft (XG 905) to roll inverted and crash, killing Shorts test pilot J. R. Green. The aircraft was then rebuilt for additional flight-testing. The first S.C.1 (XG 900) was used for VTOL research until 1971 and is now part of the Science Museum aircraft collection at South Kensington, London. The second S.C.1 (XG 905) is in the Flight Experience exhibit at the Ulster Folk and Transport Museum in Northern Ireland, near where the aircraft was originally built by Short Brothers.

The Canadian Avro CF-105 Arrow supersonic interceptor flew for the first time in 1958. Revolutionary in many ways, it featured a dual channel, three-axis fly-by-wire flight control system designed without any mechanical backup flight control capability. In the CF-105, the pilot's control inputs were detected by pressure-sensitive transducers mounted in the pilot's control column. Electrical signals were sent from the transducers to an electronic control servo that operated the valves in the hydraulic system to move the various flight control surfaces. The CF-105 also incorporated artificial feel and stability augmentation systems.[1124] In a highly controversial decision, the Canadian government canceled the Arrow program in 1959 after five aircraft had been built and flown. Although only about 50 flight test hours had been accumulated, the Arrow had reached Mach 2.0 at an altitude of 50,000 feet. During its development, NACA Langley Aeronautical Laboratory assisted the CF-105 design team in a number of areas, including aerodynamics, performance, stability, and control. After the program was terminated, many Avro Canada engineers accepted jobs with NASA and British or American aircraft companies.[1125] Although it never entered production and details of its pioneering flight control system design were reportedly little known at the time, the CF-105 presaged later fly-by-wire applications.

NACA test data derived from the F9F-2 fly-by-wire experiment were used in development of the side stick controllers in the North American X-15 rocket research plane, with its adaptive flight control system.[1126] First flown in 1959, the X-15 eventually achieved a speed of Mach 6.7 and reached a peak altitude of 354,200 feet. One of the two side stick controllers in the X-15 cockpit (on the left console) operated the reaction thruster control system, critical to maintaining proper attitude control at high Mach numbers and extreme altitudes during descent back into the higher-density lower atmosphere. The other controller (on the right cockpit console) operated the conventional aerodynamic flight control surfaces. A Calspan NT-33 variable stability test aircraft equipped with a side stick controller and an NACA-operated North American F-107A (ex-USAF serial No. 55-5120), modified by NACA engineers with a side stick flight control system, were flown by X-15 test pilots during 1958-1959 to gain side stick control experience prior to flying the X-15.[1127]

Interestingly, the British VC10 jet transport, which first flew in 1962, has a quad channel flight control system that transmits electrical signals directly from the pilot's flight controls or the aircraft's autopilot via electrical wiring to self-contained electrohydraulic Powered Flight Control Units (PFCUs) in the wings and tail of the aircraft, adjacent to the flight control surfaces. Each VC10 PFCU consists of an individual small self-contained hydraulic system with an electrical pump and small reservoir. The PFCUs move the control surfaces based on electrical signals provided to the servo valves that are electrically connected to the cockpit flying controls.[1128] There are no mechanical linkages or hydraulic lines between the pilot and the PFCUs. The PFCUs drive the primary flight control surfaces that consist of split rudders, ailerons, and elevators on separate electrical circuits. Thus, the VC10 has many of the attributes of fly-by-wire and power-by-wire flight control systems. It also features a backup capability that allows it to be flown using the hydraulically boosted variable incidence tailplane and differential spoilers that are operated via conventional mechanical linkages and separate hydraulic systems.[1129] The VC10K air refueling tanker was still in Royal Air Force (RAF) service as of 2009, and the latest Airbus airliner, the A380, uses the PFCU concept in its fly-by-wire flight control system.

The Anglo-French Concorde supersonic transport first flew in 1969 and was capable of sustained transatlantic supercruise speeds of Mach 2.0 at cruising altitudes well above 50,000 feet. In support of the Concorde development effort, a two-seat Avro 707C delta-winged flight research aircraft was modified as a fly-by-wire technology testbed with a side stick controller. It flew 200 hours of fly-by-wire flight trials in the U.K. at Farnborough until September 1966.[1130] Concorde had a dual channel analog fly-by-wire flight control system with a backup mechanical capability. The mechanical system served in a follower role unless problems developed with the fly-by-wire control elements of the system, in which case it was automatically connected. Pilot movements of the cockpit controls operated signal transducers that generated commands to the flight control system. These commands were processed by an analog electrical controller that included the aircraft autopilot. Mechanically operated servo valves were replaced by electrically controlled ones. Much as with the CF-105, artificial feel forces were electrically provided to the Concorde pilots based on information generated by the electronic controller.[1131]

AFTI Phase I Testing

Phase I flight-testing was conducted by the AFTI/F-16 Joint Test Force from the NASA Dryden Flight Research Facility at Edwards AFB, CA, from July 10, 1982, through July 30, 1983. During this phase, five test pilots from NASA, the Air Force, and the U.S. Navy flew the aircraft. Initial flights checked out the aircraft's stability and control systems. Handling qualities were assessed in air-to-air and air-to-ground scenarios, as well as in formation flight and during approach and landing. The Voice Command System allowed the pilot to change switch positions, display formats, and modes simply by saying the correct word. Initial tests were of the system's ability to recognize words, with later testing conducted under increasing levels of noise, vibrations, and g-forces. Five pilots flew a total of 87 test sorties with the Voice Command System, with a general success rate approaching 90 percent. A prototype helmet-mounted sight was also evaluated. On July 30, 1983, the AFTI/F-16 aircraft was flown back to the General Dynamics facility at Fort Worth, TX, for modification for Phase II. During the Phase I test effort, 118 flight-test sorties were flown, totaling about 177 flight hours. In addition to evaluating the DFCS, the potential operational utility of task-tailored flight modes (that included decoupling of aircraft attitude and flight path) was also assessed. During these unconventional maneuvers, the AFTI/F-16 demonstrated that it could alter its nose position without changing flight path and change its flight path without changing aircraft attitude. The aircraft also performed coordinated horizontal turns without banking or sideslip.[1183] NASA test pilot Bill Dana recounted: "In Phase I we evaluated non-classic flight control modes. By deflecting the elevators and flaps in various relationships, it was possible to translate the aircraft vertically without changing pitch attitude or to pitch-point the airplane without changing your altitude. You could also translate laterally without using bank and yaw-point without translating the aircraft, by using rudder and canard inputs programmed together in the flight control computer."[1184]
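
The decoupled maneuvering Dana describes rests on control allocation: elevator and flap each produce a different mix of pitching moment and lift, so commanding them together can cancel one effect while producing the other. A minimal sketch with entirely assumed effectiveness numbers (these are not AFTI/F-16 data):

```python
import numpy as np

# Hypothetical control-effectiveness matrix (illustration only).
# Rows are the responses [pitching moment, lift]; columns are the
# surfaces [elevator, flaperon], per degree of deflection.
effect = np.array([[-1.20, -0.10],   # pitching moment
                   [ 0.15,  0.90]])  # lift

def surface_commands(moment_cmd, lift_cmd):
    """Deflections producing the commanded moment and lift independently."""
    return np.linalg.solve(effect, np.array([moment_cmd, lift_cmd]))

# Pure vertical translation: zero net pitching moment, unit lift demand.
elevator, flaperon = surface_commands(0.0, 1.0)
print(f"elevator {elevator:+.2f} deg, flaperon {flaperon:+.2f} deg")
# The surfaces move together so lift appears without rotation, which is
# how the AFTI/F-16 could rise or sink while holding a constant attitude.
```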

Highly Integrated Digital Electronic Control

The Highly Integrated Digital Electronic Control (HIDEC) evolved from the earlier DEEC research effort. Major elements of the HIDEC were a Digital Electronic Flight Control System (DEFCS), engine-mounted DEECs, an onboard general-purpose computer, and an integrated architecture that provided connectivity between components. The HIDEC F-15A (USAF serial No. 71-0287) was modified to incorporate DEEC-equipped F100 engine model derivative (EMD) engines. A dual channel Digital Electronic Flight Control System augmented the standard hydromechanical flight control system in the F-15A and replaced its analog control augmentation system. The DEFCS was linked to the aircraft data buses to tie together all other electronic systems, including the aircraft's variable geometry engine inlet control system.[1261] Over a span of about 15 years, the HIDEC F-15 would be used to develop several modes of integrated propulsion and flight control systems. These integrated modes were the Adaptive Engine Control System, Performance Seeking Control, the Self-Repairing Flight Control System, and the Propulsion-Only Flight Control System. They are discussed separately in the following sections.[1262]

Advanced Turboprop Project-Yesterday and Today

The third engine-related effort to design a more fuel-efficient powerplant during this era did not focus on another idea for a turbojet configuration. Instead, engineers chose to study the feasibility of reintroducing a jet-powered propeller to commercial airliners. An initial run of the numbers suggested that such an advanced turboprop promised the largest reduction in fuel cost, perhaps by as much as 20 to 30 percent over turbofan engines powering aircraft with similar performance. This compared with the goal of a 5-percent increase in fuel efficiency for the Engine Component Improvement program and a 10- to 15-percent increase in fuel efficiency for the E Cubed program.[1316]

But the implementation of an advanced turboprop was one of NASA's more challenging projects, both in terms of its engineering and in securing public acceptance. For years, the flying public had been conditioned to see the fanjet engine as the epitome of aeronautical advancement. Now they had to be "retrained" to accept the notion that a turbopropeller engine could be every bit as advanced, indeed, even more advanced, than the conventional fanjet engine. The idea was to have a jet engine firing as usual, with air being compressed and ignited with fuel and the exhaust expelled after first passing through a turbine. But instead of the turbine spinning a shaft that turned a fan at the front of the engine, the turbines would be spinning a shaft that fed into a gearbox that turned another shaft that spun a series of unusually shaped propeller blades exterior to the engine casing.[1317]

Begun in 1976, the project soon grew into one of the larger NASA aeronautics endeavors in the history of the Agency to that point, eventually involving 4 NASA Field Centers, 15 university grants, and more than 40 industrial contracts.[1318]

Early on in the program, it was recognized that the major areas of concern were going to be the efficiency of the propeller at cruise speeds, noise both on the ground and within the passenger cabin, the effect of the engine on the aerodynamics of the aircraft, and maintenance costs. Meeting those challenges was helped once again by the computer-aided, three-dimensional design programs created by the Lewis Research Center. An original look for an aircraft propeller was devised that changed the blade's sweep, twist, and thickness, giving the propellers the look of a series of scimitar-shaped swords sticking out of the jet engine. After much development and testing, the NASA-led team eventually found a solution to the design challenge and came up with a propeller shape and engine configuration that was promising in terms of meeting the fuel-efficiency goals and reduced noise by as much as 65 decibels.[1319]

In fact, by 1987, the new design was awarded a patent, and the NASA-industry group was awarded the coveted Collier Trophy for creating a new fuel-efficient turboprop propulsion system. Unfortunately, two unexpected variables came into play that stymied efforts to put the design into production.[1320]

The first had to do with the public’s resistance to the idea of flying in an airliner powered by propellers—even though the blades were still

A General Electric design for an Unducted Fan engine is tested during the early 1980s. General Electric.

being turned by a jet engine. It didn't matter that a standard turbofan jet also derived most of its thrust from a series of blades—which did, in fact, look more like a fan than a series of propellers. Surveys showed passengers had safety concerns about an exposed blade letting go and sending shrapnel into the cabin, right where they were sitting. Many passengers also believed an airliner equipped with an advanced turboprop was not as modern or reliable as one with pure turbojet engines. Jets were in; propellers were old-fashioned. The second thing that happened was that world fuel prices dropped back to the lower levels that had preceded the oil embargo, taking with them the very rationale for developing the new turboprop in the first place. While fuel-efficient jet engines were still needed, the "extra mile" in fuel efficiency the advanced turboprop provided was no longer required. As a result, NASA and its partners shelved the technology and waited to use the archived files another day.[1321]

The story of the Advanced Turboprop project had one more twist to it. While NASA and its team of contractor engineers were working on their new turboprop design, engineers at GE were quietly working on their own design, initially without NASA's knowledge. NASA's engine was distinguished by the fact that it had one row of blades, while GE's version featured two rows of counter-rotating blades. GE's design, which became known as the Unducted Fan (UDF), was unveiled in 1983 and demonstrated at the 1985 Paris Air Show. A summary of the UDF's technical features is described in a GE-produced report about the program:

The engine system consists of a modified F404 gas generator engine and counterrotating propulsor system, mechanically decoupled, and aerodynamically integrated through a mixing frame structure. Utilization of the existing F404 engine minimized engine hardware, cost, and timing requirements and provided an engine within the desired thrust class. The power turbine provides direct conversion of the gas generator horsepower into propulsive thrust without the requirement for a gearbox and associated hardware. Counterrotation utilizes the full propulsive efficiency by recovering the exit swirl between blade stages and converting it into thrust.[1322]

Although shelved during the late 1980s, the Advanced Turboprop and UDF technology and concepts are being explored again as part of programs such as the Ultra-High Bypass Turbofan and Pratt & Whitney's Geared Turbofan. Neither engine is routinely flying yet on commercial airliners. But both concepts promise further reductions in noise, increases in fuel efficiency, and lower operating costs for the airlines—goals the aerospace community is constantly working to improve upon.

Several concepts are under study for an Ultra-High Bypass Turbofan, including a modernized version of the Advanced Turboprop that takes advantage of lessons learned from GE's UDF effort. NASA has teamed with GE to start testing an open-rotor engine. For the NASA tests at Glenn Research Center, GE will run two rows of counter-rotating fan blades, with 12 blades in the front row and 10 blades in the back row. The composite fan blades are one-fifth subscale in size. Tests in a low-speed wind tunnel will simulate low-altitude aircraft speeds for acoustic evaluation, while tests in a high-speed wind tunnel will simulate high-altitude cruise conditions in order to evaluate blade efficiency and performance.[1323]

"The tests mark a new journey for GE and NASA in the world of open rotor technology. These tests will help to tell us how confident we are in meeting the technical challenges of an open-rotor architecture. It’s a journey driven by a need to sharply reduce fuel consumption in future aircraft,” David Joyce, president of GE Aviation, said in a statement.[1324]

In an Ultra-High Bypass Turbofan, the amount of air going through the engine casing but not through the core compressor and combustion chamber is at least 10 times greater than the air going through the core. Such engines promise to be quieter, but there can be tradeoffs. For example, an Ultra-High Bypass Engine might have to operate at a reduced thrust or have its fan spin slower. While the engine would meet all the goals, the aircraft it powered would fly slower, thus making passengers endure longer trips.

In the case of Pratt & Whitney’s Geared Turbofan engine, the idea is to have an Ultra-High Bypass Ratio engine, yet spin the fan slower (to reduce noise and improve engine efficiency) than the core compressor blades and turbines, all of which traditionally spin at the same speed, as they are connected to the same central shaft. Pratt & Whitney designed a gearbox into the engine to allow for the central shaft to turn at one speed yet turn a second shaft connected to the fan at another speed.[1325]
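
The two ideas in the preceding paragraphs can be put in rough numbers. In the sketch below, every figure is assumed for illustration (none are published PW1000G parameters): bypass ratio is simply the ratio of fan-bypass to core mass flow, and a reduction gearbox lets the fan turn at a fraction of low-pressure-shaft speed:

```python
# Illustrative numbers only, not actual engine data.
bypass_flow_lb_s = 1100.0  # air routed around the core (assumed)
core_flow_lb_s = 100.0     # air through compressor and combustor (assumed)
bypass_ratio = bypass_flow_lb_s / core_flow_lb_s
print(f"bypass ratio {bypass_ratio:.0f}:1")  # 11:1, in ultra-high territory

shaft_rpm = 9000.0   # low-pressure shaft speed (assumed)
gear_ratio = 3.0     # gearbox reduction (assumed; real designs are near 3:1)
fan_rpm = shaft_rpm / gear_ratio
print(f"fan: {fan_rpm:.0f} rpm vs. shaft: {shaft_rpm:.0f} rpm")
```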

Alan H. Epstein, a Pratt & Whitney vice president, testifying before the House Subcommittee on Transportation and Infrastructure in 2007, explained the potential benefits the company’s Geared Turbofan might bring to the aviation industry:

The Geared Turbofan engine promises a new level of very low noise while offering the airlines superior economics and environmental performance. For aircraft of 70 to 150 passenger size, the Geared Turbofan engine reduces the fuel burned, and thus the CO2 produced, by more than 12% compared to today's aircraft, while reducing cumulative noise levels about 20 dB below the current Stage 4 regulations. This noise level, which is about half the level of today's engines, is the equivalent difference between standing near a garbage disposal running and listening to the sound of my voice right now.[1326]

Pratt & Whitney's PW1000G engine incorporating a geared turbofan was selected for use on the Bombardier CSeries and Mitsubishi Regional Jet airliners beginning in 2013. The engine was first flight-tested in 2008, using an Airbus A340-600 airliner out of Toulouse, France.[1327]

Good Stewards: NASA’s Role in Alternative Energy

Bruce I. Larrimer

Consistent with its responsibilities to exploit aeronautics technology for the benefit of the American people, NASA has pioneered the development and application of alternative energy sources. Its work is arguably most evident in wind energy and solar power for high-altitude remotely piloted vehicles. Here, NASA's work in aerodynamics, solar power, lightweight structural design, and electronic flight controls has proven crucial to the evolution of novel aerospace craft.

THIS CASE STUDY REVIEWS two separate National Aeronautics and Space Administration (NASA) programs that each involved research and development (R&D) in the use of alternative energy. The first part of the case study covers NASA's participation in the Federal Wind Energy Program from 1974 through 1988. NASA's work in the wind energy area included design and fabrication of large horizontal-axis wind turbine (HAWT) generators and the conduct of supporting research and technology projects. The second part of the case study reviews NASA's development and testing of high-altitude, long-endurance solar-powered unmanned aerial vehicles (UAVs). This program, which ran from 1994 through 2003, was part of the Agency's Environmental Research Aircraft and Sensor Technology (ERAST) Program.

Solar Cells and Fuel Cells for Solar-Powered ERAST Vehicles

NASA had first acquired solar cells from Spectrolab but chose cells from SunPower Corporation of Sunnyvale, CA, for the ERAST UAVs. These photovoltaic cells converted sunlight directly into electricity and were lighter and more efficient than other commercially available solar cells at that time. Indeed, after NASA flew Helios, SunPower was selected to furnish high-efficiency solar concentrator cells for a NASA Dryden ground solar cell test installation, spring-boarding, as John Del Frate recalled subsequently, "from the technology developed on the PF+ and Helios solar cells."[1546] The Dryden solar cell configuration consisted of two fixed-angle solar arrays and one sun-tracking array that together generated up to 5 kilowatts of direct current. Field-testing at the Dryden site helped SunPower lower production costs of its solar cells and identify uses and performance of its cells that enabled the company to develop large-scale commercial applications, resulting in the mass-produced SunPower A-300 series solar cells.[1547] SunPower's solar cells were selected for use on the Pathfinders, Centurion, and Helios Prototype UAVs because of their high-efficiency power recovery (more than 50-percent higher than other commercially available cells) and because of their light weight. The solar cells designed for the last generation of ERAST UAVs could convert about 19 percent of the solar energy received into 35 kilowatts of electrical power at high noon on a summer day. The solar cells on the ERAST vehicles were bifacial, meaning that they could absorb sunlight on both sides of the cells, thus enabling the UAVs to catch sunrays reflected upward when flying above cloud covers, and were specially developed for use on the aircraft.
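
Those figures can be sanity-checked with simple arithmetic. Assuming a standard rule-of-thumb insolation of roughly 1,000 watts per square meter at high noon (an assumption, not a number from the text), a 19-percent-efficient array producing 35 kilowatts implies about 184 square meters of active cell area, consistent with a flying wing of Helios's size:

```python
# Back-of-envelope check of the solar-cell figures in the text.
insolation_w_m2 = 1000.0   # high-noon insolation (assumed rule of thumb)
efficiency = 0.19          # conversion efficiency (from the text)
array_output_w = 35_000.0  # peak electrical output (from the text)

required_area_m2 = array_output_w / (efficiency * insolation_w_m2)
print(f"implied active cell area: {required_area_m2:.0f} m^2")  # ~184 m^2

# Helios's wing area of roughly 1,976 ft^2 (an assumed figure, not from
# this passage) converts to about the same value, so the numbers cohere.
wing_area_m2 = 1976 * 0.0929
print(f"assumed wing area: {wing_area_m2:.0f} m^2")
```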

While solar cell technology satisfied the propulsion problem during daylight hours, a critical problem relating to long-endurance backup systems remained to be solved for flying during periods of darkness. Without solving this problem, solar UAV flight would be limited to approximately 14 hours in the summer (much less, of course, in the dark of winter), plus whatever additional time could be provided by the limited (up to 5 hours for the Pathfinder) backup batteries. Although significant improvements had been made, batteries failed to satisfy both the weight limitation and long-duration power generation requirements for the solar-powered UAVs.

As an alternative to batteries, the ERAST alliance tested a number of different fuel cells and fuel cell power systems. An initial problem to overcome was how to develop lightweight fuel cells, because only 440 pounds of Helios's takeoff weight of 1,600 pounds were originally planned to be allocated to a backup fuel cell power system. Helios required approximately 120 kilowatt-hours of energy to power the craft for up to 12 hours of flight during darkness, and, fortunately, the state of fuel cell technology had advanced far enough to permit attaining this; earlier efforts dating back to the early 1980s had been frustrated because fuel cell technology was not sufficiently developed at that time. The NASA-industry team later determined, as part of the ERAST program, that a hydrogen-oxygen regenerative fuel cell system (RFCS or regen system) was the hoped-for solution to the problem, and substantial resources were committed to the project.

RFCSs are closed systems whereby some of the electrical power produced by the UAV's solar array during daylight hours is sent to an electrolyzer that takes onboard water and dissociates the water into hydrogen gas and oxygen gas, both of which are stored in tanks aboard the vehicle. During periods of darkness, the stored gases are recombined in the fuel cell, which results in the production of electrical power and water. The power is used to maintain systems and altitude. The water is then stored for reuse the following day. This cycle theoretically would repeat on a 24-hour basis for an indefinite time period. NASA and AeroVironment also considered, but did not use, a reversible regen system that instead of having an electrolyzer and a fuel cell used only a reversible fuel cell to do the work of both components.[1548]
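
The cycle just described is, at bottom, an energy-balance problem. The sketch below runs the numbers under assumed component efficiencies; only the 120-kilowatt-hour, 12-hour overnight requirement comes from the text above:

```python
# Day/night energy balance of a regenerative fuel cell system (sketch).
night_energy_kwh = 120.0  # overnight requirement (from the text)
night_hours = 12.0        # duration of darkness (from the text)
fuel_cell_eff = 0.55      # H2/O2 gas -> electricity (assumed)
electrolyzer_eff = 0.80   # electricity -> H2/O2 gas (assumed)

# Chemical energy the electrolyzer must bank as gas during daylight,
# and the daytime solar surplus needed to produce it:
gas_energy_kwh = night_energy_kwh / fuel_cell_eff
daytime_surplus_kwh = gas_energy_kwh / electrolyzer_eff

print(f"average night load: {night_energy_kwh / night_hours:.0f} kW")   # 10 kW
print(f"solar energy diverted by day: {daytime_surplus_kwh:.0f} kWh")   # ~273
print(f"round-trip efficiency: {fuel_cell_eff * electrolyzer_eff:.0%}") # 44%
```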

As originally planned, Helios was to carry two separate regen fuel cell systems contained in two of four landing gear pods. This not only dispersed the weight over the flying wing, but also was in keeping with the plan for redundant systems. If one of the two fuel cells failed, Helios could still stay aloft for several days, albeit at a lower altitude. Contracts to make the fuel cell and electrolyzer were given to two companies—Giner of Waltham, MA, and Lynntech, Inc., of College Station, TX. Each of the two systems was planned to weigh 200 pounds, including 27 pounds for the fuel cell, 18 pounds for the electrolyzer, 40 pounds for oxygen and hydrogen tanks, and 45 pounds for water. The remaining 70 pounds consisted of plumbing, controls, and ancillary equipment.[1549]
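
As a quick consistency check, the published component weights do sum to the planned 200 pounds per system:

```python
# Per-system weight budget, figures taken directly from the text (lb).
budget_lb = {
    "fuel cell": 27,
    "electrolyzer": 18,
    "oxygen and hydrogen tanks": 40,
    "water": 45,
    "plumbing, controls, ancillary equipment": 70,
}
total = sum(budget_lb.values())
print(total)          # 200
assert total == 200   # matches the stated per-system allocation
```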

While the NASA-AeroVironment team made a substantial investment in the RFCS and successfully demonstrated a nearly closed system in ground tests, it decided that the system was not yet ready to satisfy the planned flight schedule. Because of these technical difficulties and time and budget deadlines, NASA and AeroVironment agreed in 2001 to switch to a consumable hydrogen-air primary fuel cell system for the Helios Prototype's long-endurance ERAST mission. The fuel cells were already in development for the automotive industry. The hydrogen-air fuel cell system required Helios to carry its own supply of hydrogen. In periods of darkness, power for the UAV would be produced by combining gaseous hydrogen and air from the atmosphere in a fuel cell. Because of the low air density at high altitudes, a compressor needed to be added to the system. This system, however, would operate only until the hydrogen fuel was consumed, but the team thought that the system could still provide multiple days of operation and that an advanced version might be able to stay aloft for up to 14 days. The installation plan was likewise changed. The fuel cell was now placed in one pod with the hydrogen tanks attached to the lower surface of the wing near each wingtip. This modification, of course, dramatically changed Helios's structural loadings, transforming it from a span-loaded flying wing to a point-loaded vehicle.[1550]

The Quest for Refinement

By the end of the 1960s, the "classic" era of aircraft design was arguably at an end. As exemplars of the highest state of aviation technology, the piston engine had given way to the gas turbine, the wood-and-fabric aircraft to the all-metal, and the straight wing to the swept and delta. Aircraft flight speeds had risen from a mere 40 mph at the time of the Wright brothers to over 100 times as fast, as the X-15A-2 demonstrated when it streaked to Mach 6.70 (4,520 mph) in October 1967, piloted by Maj. William J. Knight. Fighters, by that time, had been flying on a Mach 2 plateau for a decade and transports on a Mach 0.82 plateau for roughly the same amount of time. In space, Americans were basking in the glow of the recent Apollo triumph, where a team of astronauts, led by former NACA-NASA research pilot Neil Armstrong—a Round One and Round Two veteran whose experience included both the X-1 and the X-15—journeyed to the Moon, landed two of their number upon it, and then returned to Earth.

Such accomplishments hardly meant that the frontiers of the sky were closing, or that NASA had little to do. Indeed, in some respects, it was facing even greater challenges: conducting comprehensive aeronautical research at a time when, increasingly, more people identified it with space than aeronautics and when, in the aftermath of the Apollo success, monies were increasingly tight. Added to this was a dramatically transforming world situation: increasing tension in the Middle East, a growing Soviet threat, rising oil prices, open concern over environmental stewardship, and a national turning away from the reflexive perception that limitless technological progress was both a given and a good thing.

Within this framework, NASA work increasingly turned to achieving efficiencies: more fuel-efficient and energy-efficient civilian flight, and more efficient military systems. It was not NASA's business, per se, to design new aircraft, but, as NACA-NASA history amply demonstrated, the Agency's mark could be found on many aircraft and their innovations. Little things counted for much. When, for example, NACA High-Speed Flight Research Station pilots flew a Douglas D-558-1 Skystreak modified with a row of small vortex generators (little rectangular fins of 0.5-inch chord standing vertically like a row of razor blades) on its upper wing surface, they hardly expected that such a small energy-imparting modification would so dramatically improve its transonic handling qualities that rows of vortex generators would become a commonly recognized feature on many aircraft, including such "classics" as the B-52, the 707, and the A-4.[121] In the post-1970 period, NASA assiduously pursued three concepts related to swept wing and delta flight, in hopes that each would pay great dividends: the supercritical wing, the winglet, and the arrow wing.[122] All had roots embedded and nourished in the earliest days of the supersonic and swept/delta revolution. Each reflected Whitcomb's passion—indeed obsession, in its most positive sense—with minimizing interference effects and achieving the greatest possible aerodynamic efficiency without incurring performance-robbing complexity. Many had researched configurations approaching the purity of the arrow wing, but it was Whitcomb who first actually achieved such a configuration, as part of Langley's Supersonic Transport study effort.

Long a subject of individual research and thought, Langley's institutional SST studies had begun in 1958, when the ever-enthusiastic John Stack formed a Supersonic Transport Research Committee (STRC). It evaluated the maturity of various disciplines—particularly the "classics" of aerodynamics, structures, propulsion, and controls—and then forecast the overall feasibility of a Supersonic Transport. The Stack team presented the results of their studies to the head of the Federal Aviation Administration (FAA), Elwood Quesada, a retired Air Force general, in December 1959. Their report, issued the following year, concluded: "the state of the art appears sufficiently advanced to permit the design of an airplane at least marginally capable of performing the supersonic transport mission."[123] NASA swiftly ramped up to match the FAA's growing interest in such aircraft; within a decade, SST-focused research would constitute over a quarter of all NASA aeronautics research undertaken at the Langley, Ames, and Lewis Centers.[124]

Given that the British and French subsequently designed the Mach 2+ Concorde, and the Soviets the Tupolev Tu-144, NASA Langley's technological optimism in 1959-1960 was, within limits, technically well justified, and such optimism infused Washington's political community as well. In March 1966, President Lyndon Johnson announced that the first American SST, designed to cruise at Mach 2.7, would fly at decade's end and enter commercial service in 1974.[125] But such expectations would prove overly optimistic. As Mach number rose, so too did a number of daunting technical challenges encountered by the more ambitious aircraft American SST proponents favored. Assessing the technology alone did not address the serious questions—research and development investment, production costs, operating economics, and environmental concerns, for example—such aircraft would pose and would limit the airline acceptance (and, hence, market success) of even the "modest" Concorde and Tu-144. Air transport constitutes a system of systems, and excellence in some does not guarantee or imply excellence overall. Political support, strongly bipartisan over the Kennedy-Johnson era, withered in the Nixon years as technical and other challenges arose, and a reaction against the SST set in, fueled by questions over the value of high technology and reaction to the long and costly war in Southeast Asia.[126]

From the standpoint of aircraft design, from Langley's interest emerged a series of Supersonic Commercial Air Transport (SCAT) design studies, most of which incorporated variable-geometry planforms reflecting a growing popular wisdom that future military or civilian supersonic cruise designs would necessarily incorporate such wings. Whitcomb, focused on simplicity and efficiency, demurred, preferring instead a sharply swept arrow configuration, the SCAT-4, which he had derived. It drew upon a two-decade tradition of Langley swept and delta studies running through those of Clinton E. Brown and F. Edward McLean in the 1950s, back to the thin swept and delta research manifested in Robert T. Jones's original concepts in 1944-1945. Though he was not successful at the time at selling his vision of what such an aircraft should be (and, in fact, left the Stack SST study effort as a result), in time the fixed wing predominated. In 1964, a Langley team composed of Harry Carlson, Roy Harris, Ed McLean, Wilbur Middleton, and A. Warner Robins derived a fixed wing variant of the variable-sweep SCAT-15, generating an elegant slender arrow wing called the SCAT-15F. SCAT-15F had an incredible lift-to-drag ratio of 9.3 at Mach 2.6, well beyond what previous analysis and thought had deemed possible, though it also had serious low-speed pitch-up and deep-stall tendencies that triggered intensive investigations by researchers using the Langley Full-Scale Tunnel.[127] Out of this came a revised SCAT-15F configuration, with leading edge flaps, wing notches, area- and camber-increasing Fowler flaps, and a small horizontal tail, all of which worked to make it a much more acceptable planform. The development of the high supersonic L/D fixed wing eventually led Boeing (winner of the Government's SST design competition) to abandon variable-sweep in favor of a highly refined small-tailed delta for its final SST proposal, though congressional refusal to furnish needed developmental monies brought the American SST development effort to a sorry end.[128] It did not, however, end interest in similar configurations for a range of other missions. Today, in an era of vastly different technology, with much higher-performing engines, better structures, and better means of modeling and simulating the aerodynamic and propulsive performance of such designs, tailored fixed arrow wing configurations are commonplace for future advanced high-speed civil and military aircraft applications.

As the American SST program, plagued by controversy and numerous wounds (many self-inflicted), died amid performance and environmental concerns, Whitcomb increasingly turned his attention to the transonic, thereby giving to aviation one of its most compelling images, that of the graceful supercritical wing and, of less aesthetic appeal but no less significance, the wingtip winglet. Both, in various forms, became standard design elements of future civil and military transport design and are examined elsewhere (by historian Jeremy Kinney) in this work.

As for the arrow wing, military exigency and the Cold War combined to ensure that studies of this most promising configuration spawned the "cranked arrow wing" of the late 1970s. Following cancellation of the national SST effort, NASA researchers continued studying supersonic cruise for both military and civil applications, under the guise of a new study effort, the Advanced Supersonic Technology (AST) effort. AST was succeeded by another Langley-run cruise-focused effort, the Supersonic Cruise Aircraft Research (SCAR, later shortened to SCR) program. SCR lasted until 1982, when NASA terminated it to focus more attention and resources on the already troubled Shuttle program. But meantime, it had spawned the Supersonic Cruise and Maneuver Prototype (SCAMP), a derivative of the F-16 designed to cruise at supersonic speeds. Its "cranked arrow" wing, blending a 70-degree swept inboard leading edge and a 50-degree swept outboard leading edge, looked deceptively simple but embodied sophisticated shaping and camber (reflecting the long legacy of SCAT studies, particularly the refinement of the SCAT-15F), with leading edge vortex flaps to improve both transonic and low-speed performance. General Dynamics' F-16 designer Harry J. Hillaker adopted the planform for a proposed strike fighter version of the F-16 because it reduced supersonic wave drag, increasing the F-16's potential combat mission radius by as much as 65 percent and more than doubling its permissible angle-of-attack range as well. In the early 1980s, SCAMP, now designated the F-16XL, competed with the prototype F-15E Strike Eagle at Edwards Air Force Base for an Air Force deep-strike fighter contract. But the F-16XL was too small an airplane to win the competition; with greater internal fuel and volume, the larger Strike Eagle offered more growth potential and versatility. The two F-16XL aircraft, among the most beautiful ever flown, remained at Edwards, where they flew a variety of research missions at NASA Dryden, refining understanding of the complex flows around cranked arrow profiles and addressing such technical issues as the possibility of supersonic laminar flow control by using active suction. Interest in the cranked arrow has persisted, as it remains a most attractive design option for future supersonic cruise aircraft, whether piloted or not, both civil and military.[129]

By the end of the 1980s, for military aircraft, concern over aerodynamic shaping of aircraft was beginning to take second place behind concern over their electromagnetic signature. Where something such as the blended wing-body delta SR-71 possessed an innate purity and beauty of form, inherent when aerodynamics is given the position of primacy in aircraft design, something such as the swept wing, V-tail F-117 stealth fighter did not: all angles and panels, it hardly looked aerodynamic, and, indeed, it had numerous deficits cured only by its being birthed in the electronic fly-by-wire and composites era. But in other aspects it performed with equal brilliance: not the brilliance of Mach 3+, but the quiet brilliance of penetrating a high-threat integrated air defense network, attacking a key target, and escaping without detection.

For the future of the swept surface, one had to look elsewhere, back to the transonic, where it could be glimpsed in the boldly imaginative lines of the Blended Wing-Body (BWB) transport. Conceived by Robert H. Liebeck, a gifted Boeing designer who had begun his career at Douglas, where he worked with the legendary A.M.O. Smith, the BWB represented a conception of pure aerodynamic efficiency predating NASA, the NACA that had preceded it, and even, indeed, Jack Northrop and the Horten brothers. It hearkened back to the earliest concepts for Nurflügeln (flying wings) by Hugo Junkers before the First World War, the first designer to appreciate how one could insightfully incorporate the cantilever all-metal structure to achieve a pure lifting surface.[130] Conceived while Liebeck worked for McDonnell-Douglas in the latter years before its own merger with Boeing, the graceful BWB was not strictly a flying wing but, rather, a hybrid wing-body combination whose elegant high aspect ratio wing blended smoothly into a wide, flat-bottom fuselage, the wings sprouting tall winglets at their tips for lateral control, thus differing significantly from earlier concepts such as the Boeing "Spanloader" and the Horten, Armstrong-Whitworth, and Northrop flying wings. Early design conceptions envisioned upward of 800 passengers flying in a three-engine, double-deck, 823,000-pound, manta-shaped BWB (spanning 289 feet with a length of 161 feet), cruising across the globe at Mach 0.85. Subsequent analysis resulted in a smaller design sized for 450 passengers, the BWB-450, which served as the baseline for later research and evaluation, which concluded that the most suitable role for the BWB might be for a range of global heavy-lift multipurpose military missions rather than passenger-carrying.[131] Extensive studies by NASA Langley and Lewis researchers; McDonnell-Douglas (now Boeing) BWB team members; and academic researchers from Stanford University, the University of Southern California, Clark Atlanta University, and the University of Florida confirmed the aerodynamic and propulsive promise inherent in the BWB, particularly its potential to carry great loads at transonic speeds over global distances with unprecedented aerodynamic and energy efficiency, resulting in potentially 30-percent better fuel economy than that achievable by traditional "tube and wing" airlifters.[132]

These and many other studies, including tests by Boeing and the United States Air Force, encouraged the next logical step: developing a subscale unmanned aerial vehicle (UAV) to assess the low-speed flight-control characteristics of the BWB in actual flight. This became the X-48B, a 21-foot span, 8.5-percent scale UAV testbed of the BWB-450 configuration, powered by three 240-pound thrust Williams turbojets. Boeing had Cranfield Aerospace, Ltd., in Great Britain build two X-48Bs for the company's Phantom Works. The first X-48B completed 250 hours of tunnel tests in the Langley Full-Scale Tunnel (run by Old Dominion University) in May 2006. Readying the BWB for flight


The NASA F-16XL cranked-arrow research aircraft aloft over the Dryden Flight Research Center on December 16, 1997. NASA.

consumed another year until, on July 20, 2007, the second example took to the air at Dryden, becoming the first of the X-48B testbeds to fly. By the end of the year, it had completed five research flights. Subsequent testing explored its stability and control at increasing angles of attack (to as great as 16-degree AoA), pointing to possible ways of furnishing improved controllability at even higher angles of attack.[133] Time will tell if the world’s skies will fill with blended wing-body shapes. But to those who follow the technology of the sky, if seemingly fantastic, it is well within the realm of the possible, given the history of the swept and delta wings—and NACA-NASA’s role in furthering them.

In conclusion, the invention of the swept and delta wing blended creative and imaginative analysis and insight, great risk, and steadfast research. If in retrospect their story has a clarity and a cohesiveness that was not necessarily visible to those at the time, it is because time has stripped the story to its essence. It is unfortunate that the perception that America was "given" (or "took") the swept and delta wing in full-blown maturity from the laboratories of the Third Reich possesses such persistency, for it obscures the complex roots of the swept and delta wing in both Europe and America, the role of the NACA and NASA in maturing them, and, at heart, the accomplishments of successive generations of Americans within the NACA-NASA and elsewhere who worked to take what were, in most cases, very immature concepts and turn them into practical reality. Doing so required achieving many other things, among which were securing a practical means of effective longitudinal control at transonic speeds (the low, all-moving, and powered tail), reducing transonic drag rise, developing stability augmentation systems, and refining aircraft handling qualities. Defeating the transonic drag "hump"; reducing pitch-up to nuance, not nuisance; and overcoming the danger of inertial coupling were all crucial to ensuring that the swept and delta wing could fulfill their transforming promise. Once achieved, that gave to the world the means to fulfill the promise of the jet engine. As a result, international security and global transportation patterns were dramatically altered and a new transnational global consciousness born. It is something that workers of the NACA past, and NASA past, present, and future, can look back upon with a sense of both pride and accomplishment.

Softening the Sonic Boom: 50 Years of NASA Research

Lawrence R. Benson

The advent of practical supersonic flight brought with it the shattering shock of the sonic boom. From the onset of the supersonic age in 1947, NACA-NASA researchers recognized that the sonic boom would work against acceptance of routine overland supersonic aircraft operation. In concert with researchers from other Federal and military organizations, they developed flight-test programs and innovative design approaches to reshape aircraft to minimize boom effects while retaining desirable high-speed behavior and efficient flight performance.

AFTER ITS FORMATION IN 1958, the National Aeronautics and Space Administration (NASA) began devoting most of its resources to the Nation's new civilian space programs. Yet 1958 also marked the start of a program in the time-honored aviation mission that the Agency inherited from the National Advisory Committee for Aeronautics (NACA). This task was to help foster an advanced passenger plane that would fly at least twice the speed of sound.

Because of economic and political factors, developing such an aircraft became more than a purely technological challenge. One of the major barriers to producing a supersonic transport involved a phenomenon of atmospheric physics barely understood in the late 1950s: the shock waves generated by supersonic flight. Studying these "sonic booms" and learning how to control them became a specialized and enduring field of NASA research for the next five decades. During the first decade of the 21st century, all the study, testing, and experimentation of the past finally began to reap tangible benefits in the same California airspace where supersonic flight began.[322]

The Tiles Become Operational

Manufacture of the silica tiles was straightforward, at least in its basic steps. The raw material consisted of short lengths of silica fiber of 1.0-micron diameter. A measured quantity of fibers, mixed with water, formed a slurry. The water was drained away, and workers added a binder of colloidal silica, then pressed the material into rectangular blocks that were 10 to 20 inches across and more than 6 inches thick. These blocks were the crudest form of LI-900, the basic choice of RSI for the entire Shuttle. They sat for 3 hours to allow the binder to jell, then were dried thoroughly in a microwave oven. The blocks moved through sintering kilns that baked them at 2,375 °F for 2 hours, fusing binder and fibers together. Band saws trimmed distortions from the blocks, which were cut into cubes and then carved into individual tiles using milling machines driven by computer. The programs contained data from Rockwell International on the desired tile dimensions.

Next, the tiles were given a spray-on coating. After being oven-dried, they returned to the kilns for glazing at temperatures of 2,200 °F for 90 minutes. To verify that the tiles had received the proper amount of coating, technicians weighed samples before and after the coating and glazing. The glazed tiles then were made waterproof by vacuum deposition of a silicon compound from Dow Corning while being held in a furnace at 350 °F. These tiles were given finishing touches before being loaded into arrays for final milling.[610]

Although the basic LI-900 material showed its merits during 1972, it was another matter to produce it in quantity, to manufacture tiles that were suitable for operational use, and to provide effective coatings. To avoid having to purify raw fibers from Johns Manville, Lockheed asked that company to find a natural source of silica sand with the necessary purity. The amount needed was small, about 20 truckloads, and was not of great interest to quarry operators. Nevertheless, Johns Manville found a suitable source in Minnesota.

Problems arose when shaping the finished tiles. Initial plans called for a large number of identical flat tiles, varying only in thickness and trimmed to fit at the time of installation. But flat tiles on the curved surface of the Shuttle produced a faceted surface that promoted the onset of turbulence in the airflow, resulting in higher rates of heating. The tiles then would have had to be thicker, which threatened to add weight. The alternative was an external RSI contour closely matching that of the orbiter's outer surface. Lockheed expected to produce 34,000 tiles for each orbiter, grouping most of them in arrays of two dozen or so and machining their back faces, away from the glazed coating, to curves matching the contours of the Shuttle's aluminum skin. Each of the many thousands of tiles was to be individually numbered, and none had precisely the same dimensions. Instead, each was defined by its own set of dimensions. This cost money, but it saved weight.

Difficulties also arose in the development of coatings. The first good one, LI-0042, was a borosilicate glass that used silicon carbide to enhance its high-temperature thermal emissivity. It dated to the late 1960s; a variant, LI-0050, initially was the choice for operational use. This coating easily withstood the rated temperature of 2,300 °F, but in tests, it persistently developed hairline cracks after 20 to 60 thermal cycles. This was unacceptable; it had to stand up to 100 such cycles. The cracks were too small to see with the unaided eye and did not grow large or cause tile failure. But they would have allowed rainstorms to penetrate the tiles during the weeks that an orbiter was on the ground between missions, with the rain adding to the launch weight. Help came from NASA Ames, where researchers were close to Lockheed, both in their shared interests and in their facilities being only a few miles apart. Howard Goldstein at Ames, a colleague of the branch chief, Howard Larson, set up a task group and brought in a consultant from Stanford University, which also was just up the road. They spent less than $100,000 in direct costs and came up with a new and superior coating called reaction-cured glass. Like LI-0050, it was a borosilicate, consisting of more than 90 percent silica along with boria (boron oxide) and an emittance agent. The agent in LI-0050 had been silicon carbide; the new one was silicon tetraboride, SiB4. During glazing, it reacted with silica in a way that increased the level of boria, which played a critical role in controlling the coating's thermal expansion. This coating could be glazed at lower temperature than LI-0050 could, reducing the residual stress that led to the cracking. SiB4 oxidized during reentry, but in doing so, it produced boria and silica, the ingredients of the glass coating itself.[611]

The black and white tiles of the Shuttle's distinctive exterior were all standard LI-900 with its borosilicate coating; the black ones had SiB4 in the coating, and the white ones did not. Still, they all lacked structural strength and were brittle. They could not be bonded directly to the orbiter's aluminum skin, for they would fracture and break because of their inability to follow the flexing of this skin under its loads. Designers therefore placed an intermediate layer between tiles and skin, called a strain isolator pad (SIP). It was a felt made of Nomex nylon from DuPont, which would neither melt nor burn. It had useful elasticity and could stretch in response to Shuttle skin flexing without transmitting excessive strain to the tiles.[612]

Testing of tiles and other thermal-protection components continued through the 1970s, with NASA Ames being particularly active. An especially difficult challenge lay in creating turbulent flows, which demanded close study because they increased the heat-transfer rates many times over. During reentry, hypersonic flow over a wing is laminar near the leading edge, transitioning to turbulence at some distance to the rear. No hypersonic wind tunnel could accommodate anything resembling a full-scale wing, and it took considerable power as well as a strong airflow to produce turbulence in the available facilities. Ames had a 60-megawatt arc-jet, but even that facility could not accomplish this. Ames succeeded in producing such flows by using a 20-megawatt arc-jet that fed its flow into a duct 9 inches across and 2 inches deep. The narrow depth gave a compressed flow that readily produced turbulence, while the test chamber was large enough to accommodate panels measuring 8 by 20 inches. This facility supported the study of coatings that led to the use of reaction-cured glass. Tiles of LI-900, 6 inches square and treated with this coating, survived 100 simulated reentries at 2,300 °F in turbulent flow.[613]
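The stakes of transition can be seen in the standard flat-plate correlations of convective heat transfer. These are far simpler than true hypersonic relations, but the ratio they give conveys the scale of the penalty; the snippet below compares laminar and turbulent local Nusselt numbers at two representative Reynolds numbers, with all values being textbook illustrations rather than Shuttle data.

```python
# Rough comparison of laminar vs. turbulent heating on a flat plate, using
# standard low-speed correlations: Nu_x = 0.332 Re^0.5 Pr^(1/3) (laminar)
# and Nu_x = 0.0296 Re^0.8 Pr^(1/3) (turbulent). Hypersonic heating obeys
# more complex relations, but the ratio shows why transition mattered.
PR = 0.7  # Prandtl number of air

for re_x in (1e6, 1e7):
    nu_lam = 0.332 * re_x**0.5 * PR ** (1.0 / 3.0)
    nu_turb = 0.0296 * re_x**0.8 * PR ** (1.0 / 3.0)
    print(f"Re_x = {re_x:.0e}: turbulent/laminar heating ratio ~ "
          f"{nu_turb / nu_lam:.1f}")
```

At these conditions the turbulent heating rate comes out roughly 6 to 11 times the laminar value, which is why a faceted, transition-prone tile surface was unacceptable.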

The Ames 20-megawatt arc-jet facility made its own contribution in a separate program that improved the basic silica tile. Excessive temperatures caused these tiles to fail by shrinking and becoming denser. Investigators succeeded in reducing the shrinkage by raising the tile density and adding silicon carbide to the silica, rendering it opaque and reducing internal heat transfer. This led to a new grade of silica RSI with a density of 22 lb/ft3 that had greater strength as well as improved thermal performance.[614]

The Ames researchers carried through with this work during 1974 and 1975, with Lockheed taking this material and putting it into production as LI-2200. Its method of manufacture largely followed that of standard LI-900, but whereas that material relied on sintered colloidal silica to bind the fibers together, LI-2200 dispensed with this and depended entirely on fiber-to-fiber sintering. LI-2200 was adopted in 1977 for operational use on the Shuttle, where it found application in specialized areas. These included regions of highly concentrated heating near penetrations such as landing-gear doors, as well as near interfaces with the carbon-carbon nose cap, where surface temperatures could reach 2,600 °F.[615]

Testing proceeded in four overlapping phases. Material selection ran through 1973 and 1974 into 1975; the work that led to LI-2200 was an example. Material characterization proceeded concurrently and extended midway through 1976. Design development tests covered 1974 through 1977; design verification activity began in 1977 and ran through subsequent years. Materials characterization called for some 10,000 test specimens, with investigators using statistical methods to determine basic material properties. These were not the well-defined properties that engineers find listed in handbooks; they showed ranges of values that often formed a Gaussian distribution, with its bell-shaped curve. This activity addressed such issues as the lifetime of a given material, the effects of changes in processing, or the residual strength after a given number of flights. A related topic was simple but far-reaching: to be able to calculate the minimum tile thickness, at a given location, that would hold the skin temperature below the maximum allowable.[616]
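That last calculation reduces to a one-dimensional heat-conduction problem: given a hot-face temperature history and the insulator's properties, find the thinnest slab whose back face stays below the aluminum limit. The sketch below illustrates the idea with a simple explicit finite-difference march; the property values, the constant 2,300 °F surface pulse, the 350 °F skin allowable, and the pulse duration are all round-number assumptions for illustration, not Shuttle design data.

```python
# Minimum-thickness sizing sketch: explicit 1-D conduction through an RSI slab.
# Every number here is an illustrative assumption, not a Shuttle design value.
import numpy as np

K, RHO, CP = 0.05, 144.0, 1000.0   # W/(m*K), kg/m^3 (~9 lb/ft^3), J/(kg*K)
ALPHA = K / (RHO * CP)             # thermal diffusivity, m^2/s

T_HOT = 1533.0     # hot-face temperature, K (~2,300 F), held constant
T_INIT = 300.0     # initial soak temperature, K
T_LIMIT = 450.0    # allowable at the aluminum skin, K (~350 F)
PULSE = 1200.0     # assumed duration of the reentry heat pulse, s

def backface_temp(thickness, nodes=25):
    """March the heat equation; insulated back face as a conservative bound."""
    dx = thickness / (nodes - 1)
    dt = 0.4 * dx * dx / ALPHA     # within the explicit stability limit of 0.5
    lam = ALPHA * dt / (dx * dx)   # = 0.4 by construction
    T = np.full(nodes, T_INIT)
    T[0] = T_HOT                   # fixed hot-face boundary condition
    for _ in range(int(PULSE / dt)):
        interior = T[2:] - 2.0 * T[1:-1] + T[:-2]
        back = T[-2] - T[-1]
        T[1:-1] += lam * interior
        T[-1] += 2.0 * lam * back  # mirror-node update for the insulated face
    return T[-1]

thickness = 0.01                   # start at 1 cm and walk upward
while backface_temp(thickness) > T_LIMIT:
    thickness += 0.005
print(f"minimum tile thickness ~ {thickness * 100:.1f} cm")
```

A flight-worthy version of such a calculation would also draw its material properties from the statistical distributions just described, using conservative values from the unfavorable tail rather than the mean.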

Design development tests used only 350 articles but spanned 4 years, because each of them required close attention. An important goal involved validating the specific engineering solutions to a number of individual thermal-protection problems. Thus the nose cap and wing leading edges were made of carbon-carbon, in anticipation of their being subjected to the highest temperatures. Their attachments were exercised in structural tests that simulated flight loads up to design limits, with design temperature gradients.

Design development testing also addressed basic questions of the tiles themselves. There were narrow gaps between them, and while Rockwell had ways to fill them, these gap-fillers required their own trials by fire. A related question was frequently asked: What happens if a tile falls off? A test program addressed this and found that in some areas of intense heating, the aluminum skin indeed would burn through. The only way to prevent this was to be sure that the tiles were firmly bonded in place, above all those located in critical areas.[617]

Design verification tests used fewer than 50 articles, but these represented substantial portions of the vehicle. An important test article, evaluated at NASA Johnson, reproduced a wing leading edge and measured 5 by 8 feet. It had two leading-edge panels of carbon-carbon set side by side, a section of wing structure that included its principal spars, and aluminum skin covered with RSI. It could not have been fabricated earlier in the program, for its detailed design drew on lessons from previous tests. It withstood simulated air loads, launch acoustics, and mission-temperature-pressure environments, not once, but many times.[618]

The testing ranged beyond the principal concerns of aerodynamics, heating, and acoustics. There also was concern that meteoroids might not only put craters in the carbon-carbon but also cause it to crack. At NASA Langley, the researcher Donald Humes studied this by shooting small glass and nylon spheres at target samples using a light-gas gun driven by compressed helium. Helium is better than gunpowder, as it can expand at much higher velocities. Humes wrote that carbon-carbon "does not have the penetration resistance of the metals on a thickness basis, but on a weight basis, that is, mass per unit area required to stop projectiles, it is superior to steel."[619]

Yet amid the advanced technology of arc-jets, light-gas guns, and hypersonic wind tunnels, one of the most important tests was also one of the simplest. It involved nothing more than taking tiles that were bonded with adhesive to the SIP and the underlying aluminum skin and physically pulling them off.

It was no new thing for people to show concern that the tiles might not stick. In 1974, a researcher at Ames noted that aerodynamic noise was potentially destructive, telling a reporter for Aviation Week: "We'd hate to shake them all off when we're leaving." At NASA Johnson, a 10-megawatt arc-jet saw extensive use in lost-tile investigations. Tests indicated there was reason to believe that the forces acting to pull off a tile would be as low as 2 psi, just some 70 pounds for a 6-inch-square tile. This was low indeed; the adhesive, SIP, and RSI material all were considerably stronger. The thermal-protection testing therefore had given priority to thermal rather than to mechanical work, essentially taking it for granted that the tiles would stay on. Thus, attachment of the tiles to the Shuttle lacked adequate structural analysis, failing to take into account the peculiarities in the components. For example, the SIP had some fibers oriented perpendicular to the cemented tile undersurface, and these fibers concentrated the loads on the brittle ceramic fibers of the tile. This meant that the actual stresses the tiles faced were substantially greater than anticipated.[620]
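The 70-pound figure follows directly from the numbers quoted above, and it is worth making the arithmetic explicit, because so much later trouble traced back to this single low estimate.

```python
# Converting the assumed 2-psi airload into a per-tile pull-off force.
pressure_psi = 2.0            # early estimate of the tile-removal airload
tile_area_sq_in = 6.0 * 6.0   # a tile 6 inches on a side
force_lbf = pressure_psi * tile_area_sq_in
print(f"{force_lbf:.0f} lbf")  # 72 lbf, the "some 70 pounds" cited above
```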

Columbia, orbiter OV-102, was the first to receive working tiles. It was also slated to be first into space. It underwent final assembly at the Rockwell plant in Palmdale, CA, during 1978. Checkout of onboard systems began in September, and installation of tiles proceeded concurrently, with Columbia to be rolled out in February 1979. But mounting the tiles was not at all like laying bricks. Measured gaps were to separate them; near the front of the orbiter, they had to be positioned to within 0.17 inches of vertical tolerance to form a smooth surface that would not trip the airflow into turbulence. This would not have been difficult if the tiles had rested directly on the aluminum skin, but they were separated from that skin by the spongy SIP. The tiles were also fragile. An accidental tap with a wrench, a hard hat, even a key chain could crack the glassy coating. When that happened, the damaged tile had to be removed, and the process of installation had to start again with a new one.[621]

The tiles came in arrays, each array numbering about three dozen tiles. It took 1,092 arrays to cover this orbiter, and NASA reached a high mark when technicians installed 41 of them in a single week. But unfortunate news came midway through 1979, as detailed studies showed that in many areas the combined loads due to aerodynamic pressure, vibration, and acoustics would produce excessively large forces on the tiles. Work to date had treated a 2-psi level as part of normal testing, but now it was clear that only a small proportion of the tiles already installed faced stresses that low. Over 5,000 tiles faced force levels of 8.5 to 13 psi, with 3,000 being in the range of 2 to 6.5 psi. The usefulness of tiles as thermal protection was suddenly in doubt.[622]

What caused this? The fault lay in the nylon felt SIP, which had been modified by "needling" to increase its through-the-thickness tensile strength and elasticity. This was accomplished by punching a barbed needle through the felt fabric, some 1,000 times per square inch, which oriented fiber bundles transversely to the SIP pad. Tensile loads applied across the SIP pad, acting to pull off a tile, were transmitted into the SIP at discrete regions along these transverse fibers. This created localized stress concentrations, where the stresses approached twice the mean value. These local areas failed readily under load, causing the glued bond to break.[623]

There also was a clear need to increase the strength of the tiles' adhesive bonds. The solution came during October and involved modifying a thin layer at the bottom of each tile to make it denser. The process was called, quite logically, "densification." It used DuPont's Ludox with a silica "slip." Ludox was colloidal silica stirred into water and stabilized with ammonia; the slip had fine silica particles dispersed in water. The Ludox acted like cement; the slip provided reinforcement, in the manner of sand in concrete. It worked: the densification process clearly restored the lost strength.[624]

By then, Columbia had been moved to the Kennedy Space Center. The work nevertheless went badly during 1979, for as people continued to install new tiles, they found more and more that needed to be removed and replaced. Orderly installation procedures broke down. Rockwell had received the tiles from Lockheed in arrays and had attached them in well-defined sequences. Even so, that work had gone slowly, with 550 tiles in a week being a good job. But now Columbia showed a patchwork of good ones, bad ones, and open areas with no tiles. Each individual tile had been shaped to a predetermined pattern at Lockheed using that firm’s numerically controlled milling machines. But the haphazardness of the layout made it likely that any precut tile would fail to fit into its assigned cavity, leaving too wide a gap with the adjacent ones.

Many tiles therefore were installed one by one, in a time-consuming process that fitted two into place and then carefully measured the space for a third, which was then made to fill the gap between them. The measurements went to Sunnyvale, CA, where Lockheed carved that tile to its unique specification and shipped it to the Kennedy Space Center (KSC). At this pace, a single worker could take as long as 3 weeks to install just 4 tiles. Densification also took time; a tile removed from Columbia for rework needed 2 weeks until it was ready for reinstallation.[625]

How could these problems have been avoided? They all stemmed from the fact that the tile work was well advanced before NASA learned that the tile-SIP-adhesive bonds had less strength than the Agency needed. The analysis that disclosed the strength requirements was neither costly nor demanding; it might readily have been in hand during 1976 or 1977. Had this happened, Lockheed could have begun shipping densified tiles at an early date. Their development and installation would have occurred within the normal flow of the Shuttle program, with the change amounting perhaps to little more than an engineering detail.

The Tiles Become Operational

The Space Shuttle Columbia descends to land at Edwards following its hypersonic reentry from orbit in April 1981. NASA.

The reason this did not happen was far-reaching, for it stemmed from the basic nature of the program. The Shuttle effort followed "concurrent development," with design, manufacture, and testing proceeding in parallel rather than in sequence. This approach carried risk, but the Air Force had used it with success during the 1960s. It allowed new technologies to enter service at the earliest possible date. But within the Shuttle program, funds were tight. Managers had to allocate their budgets adroitly, setting priorities and deferring what they could. To do this properly was a high art, calling for much experience and judgment, for program executives had to be able to conclude that the low-priority action items would contain no unpleasant surprises. The calculation of tile strength requirements was low on the action list because it appeared unnecessary; there was good reason to believe that the tiles would face nothing worse than 2 psi. Had this been true, and had the main engines been ready, Columbia might have flown by mid-1980. It did not fly until April 1981, and, in this sense, tile problems brought a delay of close to 1 year.

The delay in carrying through the tile-strength computation was not mandatory. Had there been good reason to upgrade its priority, it could readily have been done earlier. The budget stringency that brought this deferral (along with many others) thus was false economy par excellence, for the program did not halt during that year of launch delay. It kept writing checks for its contractors and employees. The missing tile-strength analysis thus ramified in its consequences, contributing substantially to a cost overrun in the Shuttle program.[626]

During 1979, NASA gave the same intense level of attention to the tiles' mechanical problems that it had previously reserved for their thermal development. The effort nevertheless continued to follow the pattern of three steps forward and two steps back, and, for a while, more tiles were removed than were put on in a given week. Even so, by the fall of 1980, the end was in sight.[627]

During the spring of 1979, before the main tile problems had come to light, the schedule had called for the complete assembly of Columbia, with its external tank and solid boosters, to take place on November 24, 1979. Exactly 1 year later, a tow vehicle pulled Columbia into the Vehicle Assembly Building as a large crowd watched and cheered. Within 2 days, Columbia was mounted to its tank, forming a live Shuttle in flight configuration. Kenneth Kleinknecht, an X-series and space flight veteran and now Shuttle manager at NASA Johnson, put it succinctly: "The vehicle is ready to launch."[628]

Flutter: The Insidious Threat

The most dramatic interaction of airplane structure with aerodynamics is "flutter": a dynamic, high-frequency oscillation of some part of the structure. Aeroelastic flutter is a rapid, self-excited motion, potentially destructive to aircraft structures and control surfaces. It has been a particularly persistent problem since the invention of the cantilever monoplane at the end of the First World War. The monoplane lacked the "bridge truss" rigidity found in the redundant structure of the externally braced biplane and, as it consisted of a single surface unsupported except at the wing root, was prone to aerodynamically induced flutter. The simplest example of flutter is a free-floating, hinged control surface at the trailing edge of a wing, such as an aileron. The control surface will begin to oscillate (flap, like the trailing edge of a flag) as the speed increases. Eventually the motion will feed back through the hinge, into the structure, and the entire wing will vibrate and eventually self-destruct. A similar situation can develop on a single fixed aerodynamic surface, like a wing or tail surface. When aerodynamic forces and moments are applied to the surface, the structure will respond by twisting or bending about its elastic axis. Depending on the relationship between the elastic axis of the structure and the axis of the applied forces and moments, the motion can become self-energizing, and a divergent vibration, one increasing in both frequency and amplitude, can follow. The high frequency and very rapid divergence of flutter make it one of the most feared, and potentially catastrophic, events that can occur on an aircraft. Accordingly, extensive detailed flutter analyses are performed during the design of most modern aircraft using mathematical models of the structure and the aerodynamics. Flight tests are usually performed by temporarily fitting the aircraft with a flutter generator. This consists of an oscillating mass, or small vane, which can be controlled and driven at different frequencies and amplitudes to force an aerodynamic surface to vibrate. Instrumentation monitors and measures the natural damping characteristics of the structure when the flutter generator is suddenly turned off. In this way, the flutter mathematical model (frequency and damping) can be validated at flight conditions below the point of critical divergence.
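This ring-down procedure lends itself to a concrete illustration. Once the flutter generator stops, the structure's response decays as a damped oscillation, and the logarithmic decrement of successive response peaks yields the modal frequency and damping ratio against which the mathematical model is checked. The sketch below applies the method to a synthetic decay record; the 12-Hz mode and 2-percent damping are arbitrary stand-ins for real instrumentation data.

```python
# Estimating frequency and damping from a free-decay record via the
# logarithmic decrement. The synthetic signal stands in for flight telemetry.
import numpy as np

def freq_and_damping(y, dt):
    """Return (frequency in Hz, damping ratio) from successive positive peaks."""
    peaks = [i for i in range(1, len(y) - 1)
             if y[i] > y[i - 1] and y[i] > y[i + 1] and y[i] > 0.0]
    times = np.array(peaks) * dt
    amps = y[np.array(peaks)]
    freq = 1.0 / np.mean(np.diff(times))                 # damped frequency
    delta = np.log(amps[0] / amps[-1]) / (len(amps) - 1) # log decrement
    zeta = delta / np.sqrt(4.0 * np.pi**2 + delta**2)
    return freq, zeta

# Build a synthetic ring-down: a 12 Hz mode with 2-percent damping.
dt = 1.0e-3
t = np.arange(0.0, 2.0, dt)
f0, z0 = 12.0, 0.02
y = (np.exp(-z0 * 2 * np.pi * f0 * t)
     * np.sin(2 * np.pi * f0 * np.sqrt(1 - z0**2) * t))

f_est, z_est = freq_and_damping(y, dt)
print(f"frequency ~ {f_est:.1f} Hz, damping ratio ~ {z_est:.3f}")
```

Because damping estimated this way shrinks as the aircraft approaches the flutter boundary, a test team can extrapolate the trend and stop well short of the point of critical divergence.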

Traditionally, if flight tests show that flutter margins are insufficient, operational limits are imposed, or structural beef-ups might be accomplished for extreme cases. But as electronic flight control technology advances, the prospect exists for so-called "active" suppression of flutter by using rapid, computer-directed control surface deflections. In the 1970s, NASA Langley undertook the first tests of such a system, on a one-seventeenth scale model of a proposed Boeing Supersonic Transport (SST) design, in the Langley Transonic Dynamics Tunnel (TDT). Encouraged, Center researchers followed this with TDT tests of a stores flutter suppression system on a model of the Northrop YF-17, in concert with the Air Force Flight Dynamics Laboratory (AFFDL, now the Air Force Research Laboratory's Air Vehicles Directorate), later implementing a similar program on the General Dynamics YF-16. Then, NASA DFRC researchers modified a Ryan Firebee drone with such a system. This program, Drones for Aerodynamic and Structural Testing (DAST), used a Ryan BQM-34 Firebee II, an uncrewed aerial vehicle, rather than an inhabited system, because of the obvious risk to the pilot in such an experiment.

A Drones for Aerodynamic and Structural Testing (DAST) unpiloted structural test vehicle, derived from the Ryan Firebee, during a 1980 flight test. NASA.

The modified Firebee made two successful flights but then, in June 1980, crashed on its third flight. Postflight analysis showed that one of the software gains had been inadvertently set three times higher than planned, causing the airplane wing to flutter explosively right after launch from the B-52 mother ship. In spite of the accident, progress was made in the definition of various control laws that could be used in the future for control and suppression of flutter.[714] Overall, NASA research on active flutter suppression has been generally so encouraging that the fruits of it were applied to new aircraft designs, most notably in the "growth" version of the YF-17, the McDonnell-Douglas (now Boeing) F/A-18 Hornet strike fighter. It used an Active Oscillation Suppression (AOS) system to suppress flutter tendencies induced by its wing-mounted stores and wingtip Sidewinder missiles, inspired to a significant degree by earlier YF-17 and YF-16 Transonic Dynamics Tunnel testing.[715]
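The role of the mis-set gain invites a closing illustration. The sketch below is a purely notional model, not the DAST control laws: it wraps rate feedback around a lightly damped structural mode through a second-order actuator and examines the closed-loop eigenvalues at zero gain, at an assumed nominal gain, and at three times that value. Moderate gain deepens the damping, but tripling it drives the least-damped root across the imaginary axis, turning a stabilizing loop into a destructive one, which is qualitatively what happened to the DAST vehicle.

```python
# Notional gain-sensitivity study for an active flutter-suppression loop.
# A lightly damped 10 Hz structural mode is damped by rate feedback routed
# through a 15 Hz second-order actuator. The model and every number in it
# are illustrative assumptions, not the DAST control laws.
import numpy as np

W0, Z0 = 2 * np.pi * 10.0, 0.01   # structural mode: frequency, damping ratio
WA, ZA = 2 * np.pi * 15.0, 0.5    # actuator dynamics

def max_real_eig(gain):
    """Largest real part of the closed-loop eigenvalues for a given loop gain.
    States: [x, x_dot, delta, delta_dot]; surface command u = -gain * x_dot."""
    A = np.array([
        [0.0,             1.0,     0.0,           0.0],
        [-W0**2, -2 * Z0 * W0,     1.0,           0.0],  # unit force per delta
        [0.0,             0.0,     0.0,           1.0],
        [0.0,  -gain * WA**2,  -WA**2,  -2 * ZA * WA],
    ])
    return np.linalg.eigvals(A).real.max()

for g in (0.0, 20.0, 60.0):       # zero, nominal, and triple-nominal gain
    s = max_real_eig(g)
    print(f"gain {g:5.1f}: max Re(eig) = {s:8.3f}  "
          f"({'stable' if s < 0 else 'UNSTABLE'})")
```

In this toy system the instability at high gain arises because the feedback path adds phase lag: the same signal that damps the mode at nominal gain feeds energy into it once the loop is pushed too hard, a sensitivity that real flutter-suppression designs must verify carefully before flight.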