
Advanced Turboprop Project: Yesterday and Today

The third engine-related effort to design a more fuel-efficient powerplant during this era did not focus on another idea for a turbojet configuration. Instead, engineers chose to study the feasibility of reintroducing a jet-powered propeller to commercial airliners. An initial run of the numbers suggested that such an advanced turboprop promised the largest reduction in fuel cost, perhaps by as much as 20 to 30 percent over turbofan engines powering aircraft with similar performance. This compared with the goal of a 5-percent increase in fuel efficiency for the Engine Component Improvement program and a 10- to 15-percent increase in fuel efficiency for the E Cubed program.[1316]

But the implementation of an advanced turboprop was one of NASA’s more challenging projects, both in terms of its engineering and in securing public acceptance. For years, the flying public had been conditioned to see the fanjet engine as the epitome of aeronautical advancement. Now they had to be “retrained” to accept the notion that a turbopropeller engine could be every bit as advanced as, indeed even more advanced than, the conventional fanjet engine. The idea was to have a jet engine firing as usual, with air compressed, mixed with fuel, and ignited, and the exhaust expelled after first passing through a turbine. But instead of the turbine spinning a shaft that turned a fan at the front of the engine, the turbines would spin a shaft feeding into a gearbox, which turned another shaft that spun a series of unusually shaped propeller blades outside the engine casing.[1317]

Begun in 1976, the project soon grew into one of the larger NASA aeronautics endeavors in the history of the Agency to that point, eventually involving 4 NASA Field Centers, 15 university grants, and more than 40 industrial contracts.[1318]

Early on in the program, it was recognized that the major areas of concern were going to be the efficiency of the propeller at cruise speeds, noise both on the ground and within the passenger cabin, the effect of the engine on the aerodynamics of the aircraft, and maintenance costs. Meeting those challenges was helped once again by the computer-aided, three-dimensional design programs created by the Lewis Research Center. An original look for an aircraft propeller was devised that changed the blade’s sweep, twist, and thickness, giving the propellers the look of a series of scimitar-shaped swords sticking out of the jet engine. After much development and testing, the NASA-led team eventually found a solution to the design challenge and came up with a propeller shape and engine configuration that was promising in terms of meeting the fuel-efficiency goals and reduced noise by as much as 65 decibels.[1319]

In fact, by 1987, the new design had been awarded a patent, and the NASA-industry group received the coveted Collier Trophy for creating a new fuel-efficient turboprop propulsion system. Unfortunately, two unexpected variables came into play that stymied efforts to put the design into production.[1320]

A General Electric design for an Unducted Fan engine is tested during the early 1980s. General Electric.

The first had to do with the public’s resistance to the idea of flying in an airliner powered by propellers—even though the blades were still being turned by a jet engine. It didn’t matter that a standard turbofan jet also derived most of its thrust from a series of blades—which did, in fact, look more like a fan than a series of propellers. Surveys showed passengers had safety concerns about an exposed blade letting go and sending shrapnel into the cabin, right where they were sitting. Many passengers also believed an airliner equipped with an advanced turboprop was not as modern or reliable as one with pure turbojet engines. Jets were in; propellers were old fashioned. The second thing that happened was that world fuel prices dropped back to the lower levels that had preceded the oil embargo, undercutting the very rationale for developing the new turboprop in the first place. While fuel-efficient jet engines were still needed, the “extra mile” in fuel efficiency the advanced turboprop provided was no longer required. As a result, NASA and its partners shelved the technology and waited to use the archived files another day.[1321]

The story of the Advanced Turboprop project had one more twist to it. While NASA and its team of contractor engineers were working on their new turboprop design, engineers at GE were quietly working on their own design, initially without NASA’s knowledge. NASA’s engine was distinguished by the fact that it had one row of blades, while GE’s version featured two rows of counter-rotating blades. GE’s design, which became known as the Unducted Fan (UDF), was unveiled in 1983 and demonstrated at the 1985 Paris Air Show. The UDF’s technical features are summarized in a GE-produced report about the program:

The engine system consists of a modified F404 gas generator engine and counterrotating propulsor system, mechanically decoupled, and aerodynamically integrated through a mixing frame structure. Utilization of the existing F404 engine minimized engine hardware, cost, and timing requirements and provided an engine within the desired thrust class. The power turbine provides direct conversion of the gas generator horsepower into propulsive thrust without the requirement for a gearbox and associated hardware. Counterrotation utilizes the full propulsive efficiency by recovering the exit swirl between blade stages and converting it into thrust.[1322]

Although shelved during the late 1980s, the Advanced Turboprop and UDF technologies and concepts are being explored again as part of programs such as the Ultra-High Bypass Turbofan and Pratt & Whitney’s Geared Turbofan. Neither engine is routinely flying yet on commercial airliners. But both concepts promise further reductions in noise, increases in fuel efficiency, and lower operating costs for the airlines—goals the aerospace community is constantly working to improve upon.

Several concepts are under study for an Ultra-High Bypass Turbofan, including a modernized version of the Advanced Turboprop that takes advantage of lessons learned from GE’s UDF effort. NASA has teamed with GE to start testing an open-rotor engine. For the NASA tests at Glenn Research Center, GE will run two rows of counter-rotating fan blades, with 12 blades in the front row and 10 blades in the back row. The composite fan blades are one-fifth subscale in size. Tests in
a low-speed wind tunnel will simulate low-altitude aircraft speeds for acoustic evaluation, while tests in a high-speed wind tunnel will simulate high-altitude cruise conditions in order to evaluate blade efficiency and performance.[1323]

"The tests mark a new journey for GE and NASA in the world of open rotor technology. These tests will help to tell us how confident we are in meeting the technical challenges of an open-rotor architecture. It’s a journey driven by a need to sharply reduce fuel consumption in future aircraft,” David Joyce, president of GE Aviation, said in a statement.[1324]

In an Ultra-High Bypass Turbofan, the amount of air going through the engine casing but not through the core compressor and combustion chamber is at least 10 times greater than the air going through the core. Such engines promise to be quieter, but there can be tradeoffs. For example, an Ultra-High Bypass Engine might have to operate at a reduced thrust or have its fan spin slower. While such an engine would still meet its efficiency goals, the aircraft would fly slower, making passengers endure longer trips.
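
As a rough sketch of the bypass-ratio arithmetic described above, the following Python fragment computes the bypass ratio and the share of intake air that avoids the core. The mass-flow figures are invented round numbers for illustration, not data for any real engine.

```python
# Illustrative bypass-ratio arithmetic for an ultra-high-bypass turbofan.
# The mass-flow figures below are made-up round numbers, not engine data.

core_flow_kg_s = 50.0      # air passing through the compressor and combustor
bypass_flow_kg_s = 550.0   # air ducted around the core by the fan

bypass_ratio = bypass_flow_kg_s / core_flow_kg_s
bypass_share = bypass_flow_kg_s / (bypass_flow_kg_s + core_flow_kg_s)

print(f"Bypass ratio: {bypass_ratio:.0f}:1")         # 11:1 -> "ultra-high" (>= 10)
print(f"Air bypassing the core: {bypass_share:.0%}") # about 92%
```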

In the case of Pratt & Whitney’s Geared Turbofan engine, the idea is to have an Ultra-High Bypass Ratio engine yet spin the fan slower (to reduce noise and improve engine efficiency) than the core compressor blades and turbines, all of which traditionally spin at the same speed, as they are connected to the same central shaft. Pratt & Whitney designed a gearbox into the engine to allow the central shaft to turn at one speed while turning a second shaft, connected to the fan, at another speed.[1325]
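
A minimal sketch of the gearbox relationship just described: a reduction gear lets the fan turn much slower than the shaft that drives it. The shaft speed, gear ratio, and fan diameter below are hypothetical illustration values, not Pratt & Whitney design figures.

```python
import math

# Hypothetical numbers chosen only to illustrate the geared-turbofan idea;
# they are not PW1000G design values.
core_shaft_rpm = 9000.0   # speed of the turbine-driven shaft
gear_ratio = 3.0          # assumed reduction ratio between shaft and fan
fan_diameter_m = 2.0      # assumed fan diameter

fan_rpm = core_shaft_rpm / gear_ratio
tip_speed = math.pi * fan_diameter_m * fan_rpm / 60.0  # m/s

print(f"Fan speed: {fan_rpm:.0f} rpm")        # 3000 rpm
print(f"Fan tip speed: {tip_speed:.0f} m/s")  # ~314 m/s; slower fan tips are quieter
```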

Alan H. Epstein, a Pratt & Whitney vice president, testifying before the House Subcommittee on Transportation and Infrastructure in 2007, explained the potential benefits the company’s Geared Turbofan might bring to the aviation industry:

The Geared Turbofan engine promises a new level of very low noise while offering the airlines superior economics and environmental performance. For aircraft of 70 to 150 passenger size, the Geared Turbofan engine reduces the fuel burned, and thus the CO2 produced, by more than 12% compared to today’s aircraft, while reducing cumulative noise levels about 20 dB below the current Stage 4 regulations. This noise level, which is about half the level of today’s engines, is the equivalent difference between standing near a garbage disposal running and listening to the sound of my voice right now.[1326]

Pratt & Whitney’s PW1000G engine, incorporating a geared turbofan, was selected for use on the Bombardier CSeries and Mitsubishi Regional Jet airliners beginning in 2013. The engine was first flight-tested in 2008, using an Airbus A340-600 airliner out of Toulouse, France.[1327]

Good Stewards: NASA’s Role in Alternative Energy

Bruce I. Larrimer

Consistent with its responsibilities to exploit aeronautics technology for the benefit of the American people, NASA has pioneered the development and application of alternative energy sources. Its work is arguably most evident in wind energy and solar power for high-altitude remotely piloted vehicles. Here, NASA’s work in aerodynamics, solar power, lightweight structural design, and electronic flight controls has proven crucial to the evolution of novel aerospace craft.

THIS CASE STUDY REVIEWS two separate National Aeronautics and Space Administration (NASA) programs that each involved research and development (R&D) in the use of alternative energy. The first part of the case study covers NASA’s participation in the Federal Wind Energy Program from 1974 through 1988. NASA’s work in the wind energy area included design and fabrication of large horizontal-axis wind turbine (HAWT) generators, and the conduct of supporting research and technology projects. The second part of the case study reviews NASA’s development and testing of high-altitude, long-endurance solar-powered unmanned aerial vehicles (UAVs). This program, which ran from 1994 through 2003, was part of the Agency’s Environmental Research Aircraft and Sensor Technology Program.

Solar Cells and Fuel Cells for Solar-Powered ERAST Vehicles

NASA had first acquired solar cells from Spectrolab but chose cells from SunPower Corporation of Sunnyvale, CA, for the ERAST UAVs. These photovoltaic cells converted sunlight directly into electricity and were lighter and more efficient than other commercially available solar cells at that time. Indeed, after NASA flew Helios, SunPower was selected to furnish high-efficiency solar concentrator cells for a NASA Dryden ground solar cell test installation, spring-boarding, as John Del Frate recalled subsequently, “from the technology developed on the PF+ and Helios solar cells.”[1546] The Dryden solar cell configuration consisted of two fixed-angle solar arrays and one sun-tracking array that together generated up to 5 kilowatts of direct current. Field-testing at the Dryden site helped SunPower lower production costs of its solar cells and identify uses and performance of its cells that enabled the company to develop large-scale commercial applications, resulting in the mass-produced SunPower A-300 series solar cells.[1547] SunPower’s solar cells were selected for use on the Pathfinders, Centurion, and Helios Prototype UAVs because of their high-efficiency power recovery (more than 50 percent higher than other commercially available cells) and because of their light weight. The solar cells designed for the last generation of ERAST UAVs could convert about 19 percent of the solar energy received into 35 kilowatts of electrical power at high noon on a summer day. The solar cells on the ERAST vehicles were bifacial, meaning that they could absorb sunlight on both sides of the cells, thus enabling the UAVs to catch sunrays reflected upward when flying above cloud cover, and were specially developed for use on the aircraft.
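
A quick back-of-envelope check of the figures just quoted, in Python: a 19-percent-efficient array producing 35 kilowatts implies a certain collecting area. The peak solar irradiance is an assumed value (roughly 1 kW/m² in clear midday conditions), so the result is only an order-of-magnitude illustration.

```python
# Back-of-envelope check: what array area do the quoted numbers imply?
# The irradiance figure is an assumption, not a measured ERAST value.

power_out_w = 35_000.0     # electrical output at high noon (from the text)
efficiency = 0.19          # cell conversion efficiency (from the text)
irradiance_w_m2 = 1000.0   # assumed peak solar flux, ~1 kW/m^2

area_m2 = power_out_w / (efficiency * irradiance_w_m2)
print(f"Implied array area: {area_m2:.0f} m^2")  # ~184 m^2 of cells, a wing-sized area
```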

While solar cell technology satisfied the propulsion problem during daylight hours, a critical problem relating to long-endurance backup systems remained to be solved for flying during periods of darkness. Without solving this problem, solar UAV flight would be limited to approximately 14 hours in the summer (much less, of course, in the dark of winter), plus whatever additional time could be provided by the limited (up to 5 hours for the Pathfinder) backup batteries. Although significant improvements had been made, batteries failed to satisfy both the weight limitation and long-duration power generation requirements for the solar-powered UAVs.

As an alternative to batteries, the ERAST alliance tested a number of different fuel cells and fuel cell power systems. An initial problem to overcome was how to develop lightweight fuel cells, because only 440 pounds of Helios’s takeoff weight of 1,600 pounds were originally planned to be allocated to a backup fuel cell power system. Helios required approximately 120 kilowatt-hours of energy to power the craft for up to 12 hours of flight during darkness, and, fortunately, the state of fuel cell technology had advanced far enough to permit attaining this; earlier efforts, dating back to the early 1980s, had been frustrated because fuel cell technology was not sufficiently developed at that time. The NASA-industry team later determined, as part of the ERAST program, that a hydrogen-oxygen regenerative fuel cell system (RFCS, or regen system) was the hoped-for solution to the problem, and substantial resources were committed to the project.

RFCSs are closed systems whereby some of the electrical power produced by the UAV’s solar array during daylight hours is sent to an electrolyzer that takes onboard water and dissociates the water into hydrogen gas and oxygen gas, both of which are stored in tanks aboard the vehicle. During periods of darkness, the stored gases are recombined in the fuel cell, which results in the production of electrical power and water. The power is used to maintain systems and altitude. The water is then stored for reuse the following day. This cycle theoretically would repeat on a 24-hour basis for an indefinite time period. NASA and AeroVironment also considered, but did not use, a reversible regen system that, instead of having an electrolyzer and a fuel cell, used only a reversible fuel cell to do the work of both components.[1548]
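
A minimal sketch of the day-night energy balance such a regenerative system must close, using the 120-kilowatt-hour overnight figure from the text. The round-trip efficiency is an assumed, illustrative value, not a measured ERAST number.

```python
# Day-night energy balance for a regenerative fuel cell system (sketch).
# The night-time energy figure comes from the text; the round-trip
# efficiency is an assumption for illustration only.

night_energy_kwh = 120.0    # energy needed for up to 12 hours of darkness
night_hours = 12.0
round_trip_eff = 0.50       # assumed electrolyzer x storage x fuel-cell efficiency

avg_night_power_kw = night_energy_kwh / night_hours
day_surplus_kwh = night_energy_kwh / round_trip_eff

print(f"Average overnight load: {avg_night_power_kw:.0f} kW")      # 10 kW
print(f"Daytime surplus to store: {day_surplus_kwh:.0f} kWh")      # 240 kWh at 50%
```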

As originally planned, Helios was to carry two separate regen fuel cell systems contained in two of four landing gear pods. This not only dispersed the weight over the flying wing, but also was in keeping with the plan for redundant systems. If one of the two fuel cells failed, Helios could still stay aloft for several days, albeit at a lower altitude. Contracts to make the fuel cell and electrolyzer were given to two companies—Giner of Waltham, MA, and Lynntech, Inc., of College Station, TX. Each of the two systems was planned to weigh 200 pounds, including 27 pounds for the fuel cell, 18 pounds for the electrolyzer, 40 pounds for oxygen and hydrogen tanks, and 45 pounds for water. The remaining 70 pounds consisted of plumbing, controls, and ancillary equipment.[1549]

While the NASA-AeroVironment team made a substantial investment in the RFCS and successfully demonstrated a nearly closed system in ground tests, it decided that the system was not yet ready to satisfy the planned flight schedule. Because of these technical difficulties and time and budget deadlines, NASA and AeroVironment agreed in 2001 to switch to a consumable hydrogen-air primary fuel cell system for the Helios Prototype’s long-endurance ERAST mission. The fuel cells were already in development for the automotive industry. The hydrogen-air fuel cell system required Helios to carry its own supply of hydrogen. In periods of darkness, power for the UAV would be produced by combining gaseous hydrogen and air from the atmosphere in a fuel cell. Because of the low air density at high altitudes, a compressor needed to be added to the system. This system, however, would operate only until the hydrogen fuel was consumed, but the team thought that the system could still provide multiple days of operation and that an advanced version might be able to stay aloft for up to 14 days. The installation plan was likewise changed. The fuel cell was now placed in one pod with the hydrogen tanks attached to the lower surface of the wing near each wingtip. This modification, of course, dramatically changed Helios’s structural loadings, transforming it from a span-loaded flying wing to a point-loaded vehicle.[1550]

Swing Wing: The Path to Variable Geometry

The notion of variable wing-sweeping dates to the earliest days of aviation and, in many respects, represents an expression of the “bird imitative” philosophy of flight that gave the ornithopter and other flexible wing concepts to aviation. Varying the sweep of a wing was first conceptualized as a means of adjusting longitudinal trim. Subsequently, variable-geometry advocates postulated possible use of asymmetric sweeping as a means of roll control. Lippisch, pioneer of tailless and delta design, likewise filed a patent in 1942 for a scheme of wing sweeping, but it was another German, Waldemar Voigt (the chief of advanced design for the Messerschmitt firm), who triggered the path to modern variable wing-sweeping. Ironically, at the time he did so, he had no plan to make use of such a scheme himself. Rather, he designed a graceful midwing turbojet swept wing fighter, the P 1101. The German air ministry rejected its development based upon assessments of its likely utility. Voigt decided to continue its development, planning to use the airplane as an in-house swept wing research aircraft, fitted with wings of varying sweep and ballasted to accommodate changes in center of lift.[110]

A time-lapse photograph of the Bell X-5, showing the range of its wing sweep. Note how the wing roots translated fore and aft to accommodate changes in center of lift with varying sweep angles. NASA.

By war’s end, when the Oberammergau plant was overrun by American forces, the P 1101 was over 80-percent complete. A technical team led by Robert J. Woods, a member of the NACA Aerodynamics Committee, moved in to assess the plant and its projects. Woods immediately recognized the value of the P 1101 program, but with a twist: he proposed to Voigt that the plane be finished with a wing that could be variably swept in flight, rather than with multiple wings that could be installed and removed on the ground. Woods’s advocacy, and the results of NACA variable-sweep tests by Charles Donlan of a modified XS-1 model in the Langley 7-foot by 10-foot wind tunnel, convinced the NACA to support development of such an aircraft. In May 1949, the Air Force Air Materiel Command issued a contract covering development of two Bell variable sweep airplanes, to be designated X-5. They were effectively American-built versions of the P 1101, but with American, not German, propulsion, larger cockpit canopies for greater pilot visibility, and, of course, variable sweep wings that could range from 20 to 60 degrees.[111]

The first X-5 flew in June 1951 and within 5 weeks had demonstrated variable in-flight wing sweep to its maximum 60-degree aft position. Slightly over a year later, Grumman flew a prototype variable wing-sweep naval fighter, the XF10F-1 Jaguar. Neither aircraft represented a mature application of variable sweep design. The mechanism in each was heavy and complex and shifted the wing roots back and forth down the centerline of the aircraft to accommodate center of lift changes as the wing was swept and unswept. Each of the two had poor flying qualities unrelated to the variable-sweep concept, reflecting badly on their design. The XF10F-1 was merely unpleasant (its test pilot, the colorful Corwin “Corky” Meyer, tellingly recollected later, “I had never attended a test pilots’ school, but, for me, the F10F provided the complete curriculum”), but the X-5 was lethal.[112] It had a vicious pitch-up at higher-sweep angles, and its aerodynamic design ensured that it would have very great difficulty recovering when it departed into a spin. The combination of the two led to the death of Air Force test pilot Raymond Popson in the crash of the second X-5 in 1953. More fortunately, NACA pilots completed 133 research flights in the first X-5 before retiring it in 1955.

The X-5 experience demonstrated that variable geometry worked, and the potential of combining good low-speed performance with high-speed supersonic dash intrigued military authorities looking at future interceptor and long-range strike aircraft concepts. Coincidentally, in the late 1950s, Langley developed increasingly close ties with the British aeronautical community, largely a result of the personal influence of John Stack of Langley Research Center, who, in characteristic fashion, used his forceful personality to secure a strong transatlantic partnership. This partnership, best known for its influence upon Anglo-American V/STOL research leading to the Harrier strike fighter, influenced as well the course of variable-geometry research. Barnes Wallis of Vickers had conceptualized a sharply swept variable-geometry tailless design, the Swallow, but was not satisfied with the degree of support he was receiving for the idea within British aeronautical and governmental circles. Accordingly, he turned to the United States. Over November 13-18, 1958, Stack sponsored an Anglo-American meeting at Langley to craft a joint research program, in which Wallis and his senior staff briefed the Swallow design.[113] As revealed by subsequent Langley tunnel tests over the next 6 months, Wallis’s Swallow had many stability and control deficiencies but one significant attribute: its outboard wing-pivot design. Unlike the X-5 and Jaguar and other early symmetrical-sweep v-g concepts, the wing did not adjust for changing center of lift position by translating fore and aft along the fuselage centerline using a track-type approach and a single pivot point. Rather, slightly outboard of the fuselage centerline, each wing panel had its own independent pivot point. This permitted elimination of the complex track and allowed use of a sharply swept forebody to address at least some of the changes in center-of-lift location as the wings moved aft and forward. The remainder could be accommodated by control surface deflection and shifting fuel. Studies in Langley’s 7-foot by 10-foot tunnel led to refinement of the outboard pivot concept and, eventually, a patent to William J. Alford and E. C. Polhamus for its concept, awarded in September 1962. Wallis’s inspiration, joined with insightful research by Alford and Polhamus and followed by adaptation of a conventional “tailed” configuration (a critical necessity in the pre-fly-by-wire computer-controlled era), made variable wing sweep a practical reality.[114] (Understandably, after returning to Britain, Wallis had mixed feelings about the NASA involvement. On one hand, he had sought it after what he perceived as a “go slow” approach to his idea in Britain. On the other, following enunciation of outboard wing sweep, he believed—as his biographer subsequently wrote—“The Americans stole his ideas.”)[115]

Thus, by the early 1960s, multiple developments—swept wings, high-performance afterburning turbofans, area ruling, the outboard wing pivot, low horizontal tail, advanced stability augmentation systems, to select just a few—made possible the design of variable-geometry combat aircraft. The first of these was the General Dynamics Tactical Fighter Experimental (TFX), which became the F-111. It was a troubled program, though, like most of the Century series that had preceded it (the F-102 in particular), its troubles had essentially nothing to do with the adaptation of a variably swept wing. Instead, a poorly written specification emphasizing joint service over practical, attainable military utility resulted in development of a compromised design. The result was a decade of lost fighter time for the U.S. Navy, which never did receive the aircraft it sought, and a constrained Air Force program that resulted in the eventual development of a satisfactory strike aircraft—the F-111F—but years late and at tremendous cost. Throughout the evolution of the F-111, NASA research proved of crucial importance to saving the program. NASA Langley, Ames, and Lewis researchers invested over 30,000 hours of wind tunnel test time in the F-111 (over 22,000 at Langley alone), addressing various shortcomings in its design, including excessive drag, lack of transonic and supersonic maneuverability, deficient directional stability, and inlet distortion that plagued its engine performance. As a result, the Air Force F-111 became a reliable weapon system, evidenced by its performance in Desert Storm, where it flew long-range strike missions, performed electronic jamming, and proved the war’s single most successful “tank plinker,” on occasion destroying upward of 150 tanks per night and 1,500 over the length of the 43-day conflict.[116]

From the experience gained with the F-111 program sprang the Grumman F-14 Tomcat naval fighter and the Rockwell B-1 bomber, both of which experienced fewer development problems, benefitting greatly from NASA tunnel and other analytical research.[117] Emulating American variable-geometry development, Britain, France, and the Soviet Union undertook their own development efforts, spawning the experimental Dassault Mirage G (test-flown, though never placed in service), the multipartner NATO Tornado interceptor and strike fighter program, and a range of Soviet fighter and bomber aircraft, including the MiG-23/27 Flogger, the Sukhoi Su-17/22 Fitter, the Su-24 Fencer, the Tupolev Tu-22M Backfire, and the Tu-160 Blackjack.[118]

The Grumman F-14A Tomcat naval fighter marked the maturation of the variable wing-sweep concept. This one was assigned to Dryden for high angle of attack and departure flight-testing. NASA.

Variable geometry has had a mixed history since; in the heyday of the space program, many proposals existed for tailored lifting body shapes deploying “switchblade” wings, and the variable-sweep wing was a prominent feature of the Boeing SST concept before its subsequent rejection. The tailored aerodynamics and power available with modern aircraft have rendered variable-geometry approaches less attractive than they once were, particularly because, no matter how well thought out, they invariably involve greater cost, weight, and structural complexity. In 1945-1946, John Campbell and Hubert Drake undertook tests in the Langley Free Flight Tunnel of a simple model with a single pivot, so that its wing could be skewed over a range of sweep angles. This concept, which German aerodynamicists had earlier proposed during the Second World War, demonstrated “that an airplane wing can be skewed as a unit to angles as great as 40° without encountering serious stability and control difficulties.”[119] This concept, the simplest of all variable-geometry schemes, returned to the fore in the late 1970s, thanks to the work of Robert T. Jones, who adopted and expanded upon it to generate the so-called “oblique wing” design concept. Jones conceptualized the oblique wing as a means of producing a transonic transport that would have minimal drag and a minimal sonic boom; he even foresaw possible twin fuselage transports with a skewed wing shifting their relative position back and forth. Tests with a subscale turbojet demonstrator, the AD-1 (for Ames-Dryden), at the Dryden Flight Research Center confirmed what Campbell and Drake had discovered nearly four decades previously, namely that at moderate sweep angles the oblique wing possessed few vices. But at higher sweep angles near 60 degrees, its deficits became more pronounced, calling into question whether its promise could ever actually be achieved.[120] On the whole, the variable-geometry wing has not enjoyed the kind of widespread success that its adherents hoped. While it may be expected that, from time to time, variable sweep aircraft will be designed and flown for particular purposes, overall the fixed conventional planform, outfitted with all manner of flaps and slats and blowing, sucking, and perhaps even warping technology, continues to prevail.

Softening the Sonic Boom: 50 Years of NASA Research

Lawrence R. Benson

The advent of practical supersonic flight brought with it the shattering shock of the sonic boom. From the onset of the supersonic age in 1947, NACA-NASA researchers recognized that the sonic boom would work against acceptance of routine overland supersonic aircraft operation. In concert with researchers from other Federal and military organizations, they developed flight-test programs and innovative design approaches to reshape aircraft to minimize boom effects while retaining desirable high-speed behavior and efficient flight performance.

AFTER ITS FORMATION IN 1958, the National Aeronautics and Space Administration (NASA) began devoting most of its resources to the Nation’s new civilian space programs. Yet 1958 also marked the start of a program in the time-honored aviation mission that the Agency inherited from the National Advisory Committee for Aeronautics (NACA). This task was to help foster an advanced passenger plane that would fly at least twice the speed of sound.

Because of economic and political factors, developing such an aircraft became more than a purely technological challenge. One of the major barriers to producing a supersonic transport involved a phenomenon of atmospheric physics barely understood in the late 1950s: the shock waves generated by supersonic flight. Studying these “sonic booms” and learning how to control them became a specialized and enduring field of NASA research for the next five decades. During the first decade of the 21st century, all the study, testing, and experimentation of the past finally began to reap tangible benefits in the same California airspace where supersonic flight began.[322]

The Tiles Become Operational

Manufacture of the silica tiles was straightforward, at least in its basic steps. The raw material consisted of short lengths of silica fiber of 1.0-micron diameter. A measured quantity of fibers, mixed with water, formed a slurry. The water was drained away, and workers added a binder of colloidal silica, then pressed the material into rectangular blocks that were 10 to 20 inches across and more than 6 inches thick. These blocks were the crudest form of LI-900, the basic choice of RSI for the entire Shuttle. They sat for 3 hours to allow the binder to jell, then were dried thoroughly in a microwave oven. The blocks moved through sintering kilns that baked them at 2,375 °F for 2 hours, fusing binder and fibers together. Band saws trimmed distortions from the blocks, which were cut into cubes and then carved into individual tiles using milling machines driven by computer. The programs contained data from Rockwell International on the desired tile dimensions.

Next, the tiles were given a spray-on coating. After being oven-dried, they returned to the kilns for glazing at temperatures of 2,200 °F for 90 minutes. To verify that the tiles had received the proper amount of coating, technicians weighed samples before and after the coating and glazing. The glazed tiles then were made waterproof by vacuum deposition of a silicon compound from Dow Corning while being held in a furnace at 350 °F. These tiles were given finishing touches before being loaded into arrays for final milling.[610]

Although the basic LI-900 material showed its merits during 1972, it was another matter to produce it in quantity, to manufacture tiles that were suitable for operational use, and to provide effective coatings. To avoid having to purify raw fibers from Johns Manville, Lockheed asked that company to find a natural source of silica sand with the necessary purity. The amount needed was small, about 20 truckloads, and was not of great interest to quarry operators. Nevertheless, Johns Manville found a suitable source in Minnesota.

Problems arose when shaping the finished tiles. Initial plans called for a large number of identical flat tiles, varying only in thickness and trimmed to fit at the time of installation. But flat tiles on the curved surface of the Shuttle produced a faceted surface that promoted the onset of turbulence in the airflow, resulting in higher rates of heating. The tiles then would have had to be thicker, which threatened to add weight. The alternative was an external RSI contour closely matching that of the orbiter’s outer surface. Lockheed expected to produce 34,000 tiles for each orbiter, grouping most of them in arrays of two dozen or so and machining their back faces, away from the glazed coating, to curves matching the contours of the Shuttle’s aluminum skin. Each of the many thousands of tiles was to be individually numbered, and none had precisely the same dimensions. Instead, each was defined by its own set of dimensions. This cost money, but it saved weight.

Difficulties also arose in the development of coatings. The first good one, LI-0042, was a borosilicate glass that used silicon carbide to enhance its high-temperature thermal emissivity. It dated to the late 1960s; a variant, LI-0050, initially was the choice for operational use. This coating easily withstood the rated temperature of 2,300 °F, but in tests, it persistently developed hairline cracks after 20 to 60 thermal cycles. This was unacceptable; it had to stand up to 100 such cycles. The cracks were too small to see with the unaided eye and did not grow large or cause tile failure. But they would have allowed rainstorms to penetrate the tiles during the weeks that an orbiter was on the ground between missions, with the rain adding to the launch weight. Help came from NASA Ames, where researchers were close to Lockheed, both in their shared interests and in their facilities being only a few miles apart. Howard Goldstein at Ames, a colleague of the branch chief, Howard Larson, set up a task group and brought in a consultant from Stanford University, which also was just up the road. They spent less than $100,000 in direct costs and came up with a new and superior coating called reaction-cured glass. Like LI-0050, it was a borosilicate, consisting of more than 90 percent silica, with boria (boron oxide) and an emittance agent. The agent in LI-0050 had been silicon carbide; the new one was silicon tetraboride, SiB4. During glazing, it reacted with silica in a way that increased the level of boria, which played a critical role in controlling the coating’s thermal expansion. This coating could be glazed at a lower temperature than LI-0050 could, reducing the residual stress that led to the cracking. SiB4 oxidized during reentry, but in doing so, it produced boria and silica, the ingredients of the glass coating itself.[611]

The Shuttle’s distinctive mix of black-and-white tiles was all designed as standard LI-900 with its borosilicate coating, but the black ones had SiB4 and the white ones did not. Still, they all lacked structural strength and were brittle. They could not be bonded directly to the orbiter’s aluminum skin, for they would fracture and break because of their inability to follow the flexing of this skin under its loads. Designers therefore placed an intermediate layer between tiles and skin, called a strain isolator pad (SIP). It was a felt made of Nomex nylon from DuPont, which would neither melt nor burn. It had useful elasticity and could stretch in response to Shuttle skin flexing without transmitting excessive strain to the tiles.[612]

Testing of tiles and other thermal-protection components continued through the 1970s, with NASA Ames being particularly active. A particular challenge lay in creating turbulent flows, which demanded close study because they increased the heat-transfer rates many times over. During reentry, hypersonic flow over a wing is laminar near the leading edge, transitioning to turbulence at some distance to the rear. No hypersonic wind tunnel could accommodate anything resembling a full-scale wing, and it took considerable power as well as a strong airflow to produce turbulence in the available facilities. Ames had a 60-megawatt arc-jet, but even that facility could not accomplish this. Ames succeeded in producing such flows by using a 20-megawatt arc-jet that fed its flow into a duct that was 9 inches across and 2 inches deep. The narrow depth gave a compressed flow that readily produced turbulence, while the test chamber was large enough to accommodate panels measuring 8 by 20 inches. This facility supported the study of coatings that led to the use of reaction-cured glass. Tiles of LI-900, 6 inches square and treated with this coating, survived 100 simulated reentries at 2,300 °F in turbulent flow.[613]

The Ames 20-megawatt arc-jet facility made its own contribution in a separate program that improved the basic silica tile. Excessive temperatures caused these tiles to fail by shrinking and becoming denser. Investigators succeeded in reducing the shrinkage by raising the tile density and adding silicon carbide to the silica, rendering it opaque and reducing internal heat transfer. This led to a new grade of silica RSI with density of 22 lb/ft3 that had greater strength as well as improved thermal performance.[614]

The Ames researchers carried through with this work during 1974 and 1975, with Lockheed taking this material and putting it into production as LI-2200. Its method of manufacture largely followed that of standard LI-900, but whereas that material relied on sintered colloidal silica to bind the fibers together, LI-2200 dispensed with this and depended entirely on fiber-to-fiber sintering. LI-2200 was adopted in 1977 for operational use on the Shuttle, where it found application in specialized areas. These included regions of high concentrated heat near penetrations such as landing-gear doors as well as near interfaces with the carbon-carbon nose cap, where surface temperatures could reach 2,600 °F.[615]

Testing proceeded in four overlapping phases. Material selection ran through 1973 and 1974 into 1975; the work that led to LI-2200 was an example. Material characterization proceeded concurrently and extended midway through 1976. Design development tests covered 1974 through 1977; design verification activity began in 1977 and ran through subsequent years. Materials characterization called for some 10,000 test specimens, with investigators using statistical methods to determine basic material properties. These were not the well-defined properties that engineers find listed in handbooks; they showed ranges of values that often formed a Gaussian distribution, with its bell-shaped curve. This activity addressed such issues as the lifetime of a given material, the effects of changes in processing, or the residual strength after a given number of flights. A related topic was simple but far-reaching: to be able to calculate the minimum tile thickness, at a given location, that would hold the skin temperature below the maximum allowable.[616]
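
The statistical flavor of this work can be illustrated with a short sketch. A common way to turn scattered strength measurements into a single conservative design number is a one-sided lower allowable, the mean minus k standard deviations; the numbers below are invented for illustration and are not Shuttle data.

```python
# Turning a Gaussian scatter of measured properties into a conservative
# design allowable (sketch). All values are hypothetical, not Shuttle data.

mean_strength_psi = 24.0   # hypothetical mean tensile strength of a tile lot
sigma_psi = 3.0            # hypothetical scatter (standard deviation)
k = 3.0                    # coverage factor; larger k is more conservative

design_allowable = mean_strength_psi - k * sigma_psi
print(f"Design allowable: {design_allowable:.0f} psi")  # 15 psi for this lot
```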

Design development tests used only 350 articles but spanned 4 years, because each of them required close attention. An important goal involved validating the specific engineering solutions to a number of individual thermal-protection problems. Thus the nose cap and wing leading edges were made of carbon-carbon, in anticipation of their being subjected to the highest temperatures. Their attachments were exercised in structural tests that simulated flight loads up to design limits, with design temperature gradients.

Design development testing also addressed basic questions of the tiles themselves. There were narrow gaps between them, and while Rockwell had ways to fill them, these gap-fillers required their own trials by fire. A related question was frequently asked: What happens if a tile falls off? A test program addressed this and found that in some areas of intense heating, the aluminum skin indeed would burn through. The only way to prevent this was to be sure that the tiles were firmly bonded in place, and this meant all those located in critical areas.[617]

Design verification tests used fewer than 50 articles, but these represented substantial portions of the vehicle. An important test article, evaluated at NASA Johnson, reproduced a wing leading edge and measured 5 by 8 feet. It had two leading-edge panels of carbon-carbon set side by side, a section of wing structure that included its principal spars, and aluminum skin covered with RSI. It could not have been fabricated earlier in the program, for its detailed design drew on lessons from previous tests. It withstood simulated air loads, launch acoustics, and mission-temperature-pressure environments, not once, but many times.[618]

The testing ranged beyond the principal concerns of aerodynamics, heating, and acoustics. There also was concern that meteoroids might not only put craters in the carbon-carbon but also cause it to crack. At NASA Langley, the researcher Donald Humes studied this by shooting small glass and nylon spheres at target samples using a light-gas gun driven by compressed helium. Helium is better than gunpowder, as it can expand at much higher velocities. Humes wrote that carbon-carbon “does not have the penetration resistance of the metals on a thickness basis, but on a weight basis, that is, mass per unit area required to stop projectiles, it is superior to steel.”[619]

Yet amid the advanced technology of arc-jets, light-gas guns, and hypersonic wind tunnels, one of the most important tests was also one of the simplest. It involved nothing more than taking tiles that were bonded with adhesive to the SIP and the underlying aluminum skin and physically pulling them off.

It was no new thing for people to show concern that the tiles might not stick. In 1974, a researcher at Ames noted that aerodynamic noise was potentially destructive, telling a reporter for Aviation Week, “We’d hate to shake them all off when we’re leaving.” At NASA Johnson, a 10-megawatt arc-jet saw extensive use in lost-tile investigations. Tests indicated there was reason to believe that the forces acting to pull off a tile would be as low as 2 psi, just some 70 pounds for a tile measuring 6 by 6 inches. This was low indeed; the adhesive, SIP, and RSI material all were considerably stronger. The thermal-protection testing therefore had given priority to thermal rather than to mechanical work, essentially taking it for granted that the tiles would stay on. Thus, attachment of the tiles to the Shuttle lacked adequate structural analysis, failing to take into account the peculiarities in the components. For example, the SIP had some fibers oriented perpendicular to the cemented tile undersurface. The tile was made of ceramic fibers, with these fibers concentrating the loads. This meant that the actual stresses they faced were substantially greater than anticipated.[620]
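
The 70-pound figure follows directly from the quoted 2-psi stress acting over the tile face; a one-line check:

```python
# Quick check of the pull-off figure quoted above: a uniform 2-psi suction
# acting over a 6-inch-square tile face.

stress_psi = 2.0
tile_side_in = 6.0

force_lb = stress_psi * tile_side_in ** 2
print(f"Pull-off force: {force_lb:.0f} lb")  # 72 lb, i.e. "just some 70 pounds"
```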

Columbia, orbiter OV-102, was the first to receive working tiles. Columbia was also slated to be first into space. It underwent final assembly at the Rockwell plant in Palmdale, CA, during 1978. Checkout of onboard systems began in September, and installation of tiles proceeded concurrently, with Columbia to be rolled out in February 1979. But mounting the tiles was not at all like laying bricks. Measured gaps were to separate them; near the front of the orbiter, they had to be positioned to within 0.17 inches of vertical tolerance to form a smooth surface that would not trip the airflow into turbulence. This would not have been difficult if the tiles had rested directly on the aluminum skin, but they were separated from that skin by the spongy SIP. The tiles were also fragile. An accidental tap with a wrench, a hard hat, even a key chain could crack the glassy coating. When that happened, the damaged tile had to be removed and the process of installation had to start again with a new one.[621]

The tiles came in arrays, each array numbering about three-dozen tiles. It took 1,092 arrays to cover this orbiter, and NASA reached a high mark when technicians installed 41 of them in a single week. But unfortunate news came midway through 1979, as detailed studies showed that in many areas the combined loads due to aerodynamic pressure, vibration, and acoustics would produce excessively large forces on the tiles. Work to date had treated a 2-psi level as part of normal testing, but now it was clear that only a small proportion of the tiles already installed faced stresses that low. Over 5,000 tiles faced force levels of 8.5 to 13 psi, with 3,000 being in the range of 2 to 6.5 psi. The usefulness of tiles as thermal protection was suddenly in doubt.[622]

What caused this? The fault lay in the nylon felt SIP, which had been modified by “needling” to increase its through-the-thickness tensile strength and elasticity. This was accomplished by punching a barbed needle through the felt fabric, some 1,000 times per square inch, which oriented fiber bundles transversely to the SIP pad. Tensile loads applied across the SIP pad, acting to pull off a tile, were transmitted into the SIP at discrete regions along these transverse fibers. This created localized stress concentrations, where the stresses approached twice the mean value. These local areas failed readily under load, causing the glued bond to break.[623]

There also was a clear need to increase the strength of the tiles’ adhesive bonds. The solution came during October and involved modifying a thin layer at the bottom of each tile to make it denser. The process was called, quite logically, “densification.” It used DuPont’s Ludox with a silica “slip.” Ludox was colloidal silica stirred into water and stabilized with ammonia; the slip had fine silica particles dispersed in water. The Ludox acted like cement; the slip provided reinforcement, in the manner of sand in concrete. It worked: the densification process clearly restored the lost strength.[624]

By then, Columbia had been moved to the Kennedy Space Center. The work nevertheless went badly during 1979, for as people continued to install new tiles, they found more and more that needed to be removed and replaced. Orderly installation procedures broke down. Rockwell had received the tiles from Lockheed in arrays and had attached them in well-defined sequences. Even so, that work had gone slowly, with 550 tiles in a week being a good job. But now Columbia showed a patchwork of good ones, bad ones, and open areas with no tiles. Each individual tile had been shaped to a predetermined pattern at Lockheed using that firm’s numerically controlled milling machines. But the haphazardness of the layout made it likely that any precut tile would fail to fit into its assigned cavity, leaving too wide a gap with the adjacent ones.

Many tiles therefore were installed one by one, in a time-consuming process that fitted two into place and then carefully measured space for a third, designing it to fill the space between them. The measurements went to Sunnyvale, CA, where Lockheed carved that tile to its unique specification and shipped it to the Kennedy Space Center (KSC). Hence, each person took as long as 3 weeks to install just 4 tiles. Densification also took time; a tile removed from Columbia for rework needed 2 weeks until it was ready for reinstallation.[625]

How could these problems have been avoided? They all stemmed from the fact that the tile work was well advanced before NASA learned that the tile-SIP-adhesive bonds had less strength than the Agency needed. The analysis that disclosed the strength requirements was neither costly nor demanding; it might readily have been in hand during 1976 or 1977. Had this happened, Lockheed could have begun shipping densified tiles at an early date. Their development and installation would have occurred within the normal flow of the Shuttle program, with the change amounting perhaps to little more than an engineering detail.


The Space Shuttle Columbia descends to land at Edwards following its hypersonic reentry from orbit in April 1981. NASA.

The reason this did not happen was far-reaching, for it stemmed from the basic nature of the program. The Shuttle effort followed “concurrent development,” with design, manufacture, and testing proceeding in parallel rather than in sequence. This approach carried risk, but the Air Force had used it with success during the 1960s. It allowed new technologies to enter service at the earliest possible date. But within the Shuttle program, funds were tight. Managers had to allocate their budgets adroitly, setting priorities and deferring what they could put off. To do this properly was a high art, calling for much experience and judgment, for program executives had to be able to conclude that the low-priority action items would contain no unpleasant surprises. The calculation of tile strength requirements was low on the action list because it appeared unnecessary; there was good reason to believe that the tiles would face nothing worse than 2 psi. Had this been true, and had the main engines been ready, Columbia might have flown by mid-1980. It did not fly until April 1981, and, in this sense, tile problems brought a delay of close to 1 year.

The delay in carrying through the tile-strength computation was not mandatory. Had there been good reason to upgrade its priority, it could readily have been done earlier. The budget stringency that brought this deferral (along with many others) thus was false economy par excellence, for the program did not halt during that year of launch delay. It kept writing checks for its contractors and employees. The missing tile-strength analysis thus ramified in its consequences, contributing substantially to a cost overrun in the Shuttle program.[626]

During 1979, NASA gave the same intense level of attention to the tiles’ mechanical problems that it had previously reserved for their thermal development. The effort nevertheless continued to follow the pattern of three steps forward and two steps back, and, for a while, more tiles were removed than were put on in a given week. Even so, by the fall of 1980, the end was in sight.[627]

During the spring of 1979, before the main tile problems had come to light, the schedule had called for the complete assembly of Columbia, with its external tank and solid boosters, to take place on November 24, 1979. Exactly 1 year later, a tow vehicle pulled Columbia into the Vehicle Assembly Building as a large crowd watched and cheered. Within 2 days, Columbia was mounted to its tank, forming a live Shuttle in flight configuration. Kenneth Kleinknecht, an X-series and space flight veteran and now Shuttle manager at NASA Johnson, put it succinctly: “The vehicle is ready to launch.”[628]

Flutter: The Insidious Threat

The most dramatic interaction of airplane structure with aerodynamics is “flutter”: a dynamic, high-frequency oscillation of some part of the structure. Aeroelastic flutter is a rapid, self-excited motion, potentially destructive to aircraft structures and control surfaces. It has been a particularly persistent problem since invention of the cantilever monoplane at the end of the First World War. The monoplane lacked the “bridge truss” rigidity found in the redundant structure of the externally braced biplane and, as it consisted of a single surface unsupported except at the wing root, was prone to aerodynamically induced flutter. The simplest example of flutter is a free-floating, hinged control surface at the trailing edge of a wing, such as an aileron. The control surface will begin to oscillate (flap, like the trailing edge of a flag) as the speed increases. Eventually the motion will feed back through the hinge, into the structure, and the entire wing will vibrate and eventually self-destruct. A similar situation can develop on a single fixed aerodynamic surface, like a wing or tail surface. When aerodynamic forces and moments are applied to the surface, the structure will respond by twisting or bending about its elastic axis. Depending on the relationship between the elastic axis of the structure and the axis of the applied forces and moments, the motion can become self-energizing, and a divergent vibration—one increasing in both frequency and amplitude—can follow. The high frequency and very rapid divergence of flutter cause it to be one of the most feared, and potentially catastrophic, events that can occur on an aircraft. Accordingly, extensive detailed flutter analyses are performed during the design of most modern aircraft using mathematical models of the structure and the aerodynamics. Flight tests are usually performed by temporarily fitting the aircraft with a flutter generator. This consists of an oscillating mass, or small vane, which can be controlled and driven at different frequencies and amplitudes to force an aerodynamic surface to vibrate. Instrumentation monitors and measures the natural damping characteristics of the structure when the flutter generator is suddenly turned off. In this way, the flutter mathematical model (frequency and damping) can be validated at flight conditions below the point of critical divergence.
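
A minimal sketch of the damping-measurement idea just described, in Python: a synthetic decaying oscillation stands in for the structural response after the flutter generator is switched off, and the damping ratio is recovered from successive peak amplitudes via the logarithmic decrement. All signal parameters are invented for illustration.

```python
import numpy as np

# Synthetic post-excitation response: a single structural mode ringing down
# after the flutter generator stops. Frequency and damping are assumed values.
fs = 1000.0                          # sample rate, Hz
t = np.arange(0.0, 2.0, 1.0 / fs)
zeta_true, f_hz = 0.03, 12.0         # assumed damping ratio and modal frequency
wn = 2.0 * np.pi * f_hz
wd = wn * np.sqrt(1.0 - zeta_true**2)
y = np.exp(-zeta_true * wn * t) * np.cos(wd * t)

# Find successive positive peaks and form the logarithmic decrement.
peaks = [i for i in range(1, len(y) - 1) if y[i - 1] < y[i] > y[i + 1] and y[i] > 0]
delta = np.log(y[peaks[0]] / y[peaks[1]])
zeta_est = delta / np.sqrt(4.0 * np.pi**2 + delta**2)

print(f"Estimated damping ratio: {zeta_est:.3f}")  # ~0.030; positive damping = stable
```

In a flight test the same fit is repeated at successively higher speeds; a damping estimate trending toward zero warns of the approach to critical divergence.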

Traditionally, if flight tests show that flutter margins are insufficient, operational limits are imposed, or structural beef-ups might be accomplished for extreme cases. But as electronic flight control technology advances, the prospect exists for so-called “active” suppression of flutter by using rapid, computer-directed control surface deflections. In the 1970s, NASA Langley undertook the first tests of such a system, on a one-seventeenth scale model of a proposed Boeing Supersonic Transport (SST) design, in the Langley Transonic Dynamics Tunnel (TDT). Encouraged, Center researchers followed this with TDT tests of a stores flutter suppression system on the model of the Northrop YF-17, in concert with the Air Force Flight Dynamics Laboratory (AFFDL, now the Air Force Research Laboratory’s Air Vehicles Directorate), later implementing a similar program on the General Dynamics YF-16. Then, NASA DFRC researchers modified a Ryan Firebee drone with such a system. This program, Drones for Aerodynamic and Structural Testing (DAST), used a Ryan BQM-34 Firebee II, an uncrewed aerial vehicle, rather than an inhabited system, because of the obvious risk to the pilot for such an experiment.

The modified Firebee made two successful flights but then, in June 1980, crashed on its third flight. Postflight analysis showed that one of the software gains had been inadvertently set three times higher than planned, causing the airplane wing to flutter explosively right after launch from the B-52 mother ship. In spite of the accident, progress was made in the definition of various control laws that could be used in the future for control and suppression of flutter.[714] Overall, NASA research on active flutter suppression has been generally so encouraging that the fruits of it were applied to new aircraft designs, most notably in the “growth” version of the YF-17, the McDonnell-Douglas (now Boeing) F/A-18 Hornet strike fighter. It used an Active Oscillation Suppression (AOS) system to suppress flutter tendencies induced by its wing-mounted stores and wingtip Sidewinder missiles, inspired to a significant degree by earlier YF-17 and YF-16 Transonic Dynamics Tunnel testing.[715]

A Drones for Aerodynamic and Structural Testing (DAST) unpiloted structural test vehicle, derived from the Ryan Firebee, during a 1980 flight test. NASA.

Lightweight Ceramic Tiles

Ceramic tiles, of the kind used in a blast furnace or fireplace to insulate the surrounding structure from the extreme temperatures, were far too heavy to be considered for use on a flight vehicle. The concept of a lightweight ceramic tile for thermal protection was conceived by Lockheed and developed into operational use by NASA Ames Research Center, NASA Johnson Space Center, and Rockwell International for use on the Space Shuttle orbiter, first flown into orbit in April 1981. The resulting tiles and ceramic blankets provided exceptionally light and efficient thermal protection for the orbiter without altering the external shape. Although highly efficient for thermal protection, the tiles were—and are—quite fragile and time-consuming to repair and maintain. The Shuttle program experienced considerable delays prior to its first flight because of bonding, breaking, and other installation issues. (Unlike the X-15 gradual envelope expansion program, the Shuttle orbiter was exposed to its full operational flight envelope on its very first orbital flight and entry, thus introducing a great deal of analysis and caution during flight preparation.) Subsequent Shuttle history confirmed the high-maintenance nature of the tiles and their vulnerability to external damage such as ice or insulation shedding from the super-cold external propellant tank. Even with these limitations, however, they do constitute the most promising technology for future lifting entry vehicles.[757]

The Advent of Direct Analog Computers

The first computers were analog computers. Direct analog computers are networks of physical components (most commonly, electrical components: resistors, capacitors, inductances, and transformers) whose behavior is governed by the same equations as some system of interest that is being modeled. Direct analog computers were used in the 1950s and 1960s to solve problems in structural analysis, heat transfer, fluid flow, and other fields.

Representation of structural elements by analog circuits. NASA.

The method of analysis and the needs that were driving the move from classical idealizations such as slender-beam theory toward computational methods are well stated in the following passage, from an NACA-sponsored paper by Stanley Benscoter and Richard MacNeal (subsequently a cofounder of the MacNeal-Schwendler Corporation [MSC] and a member of the NASTRAN development team):

The theory is expressed entirely in terms of first-order difference equations in order that analogous electrical circuits can be readily designed and solutions obtained on the Caltech analog computer. . . . In the process of designing thin supersonic wings for minimum weight it is found that a convenient construction with aluminum alloy consists of a rather thick skin with closely spaced spars and no stringers. Such a wing deflects in the manner of a plate rather than as a beam. Internal stress distributions may be considerably different from those given by beam theory.[794]

Their implementation of analog circuitry for bending loads is illustrated here and serves as an example of the direct analog modeling of structures.[795]
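A compact way to see what "direct analog" meant in practice is the classic force-voltage analogy: a series RLC circuit and a mass-spring-damper obey the same second-order equation, so building the circuit and measuring charge is equivalent to solving for displacement. The sketch below, with arbitrary illustrative values, integrates both sets of equations digitally to confirm that the mapping m↔L, c↔R, k↔1/C makes the two responses coincide.

```python
import numpy as np

# The force-voltage analogy behind direct analog computing:
#   L q'' + R q' + (1/C) q = V(t)   <->   m x'' + c x' + k x = F(t)
# Measuring the charge q on the circuit "solves" for displacement x.
# Values are hypothetical; any consistent mapping works.

m, c, k = 2.0, 0.4, 50.0            # mechanical system
L, R, C = m, c, 1.0 / k             # electrical analog: L=m, R=c, C=1/k

def simulate(a, b, g, force, t):    # integrates a*y'' + b*y' + g*y = force(t)
    y, v = 0.0, 0.0
    dt = t[1] - t[0]
    out = np.empty_like(t)
    for i, ti in enumerate(t):      # simple semi-implicit Euler stepping
        v += dt * (force(ti) - b * v - g * y) / a
        y += dt * v
        out[i] = y
    return out

t = np.linspace(0.0, 10.0, 10001)
drive = lambda ti: 1.0 if ti < 0.1 else 0.0   # short impulse-like input
x = simulate(m, c, k, drive, t)               # mechanical response
q = simulate(L, R, 1.0 / C, drive, t)         # electrical response
print("max |x - q| =", np.abs(x - q).max())   # zero: the equations coincide
```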

Direct analog computing had its advocates well into the 1960s. "For complex problems [direct analog] computers are inherently faster than digital machines since they solve the equations for the several nodes simultaneously, while the digital machines solve them sequentially. Direct analogs have, moreover, the advantage of visualization; computer setups as well as programming are more closely related to the actual problem and are based primarily on physical insight rather than on numerical skills."[796]

The advantages came at a price, however. It could take weeks, in some cases, to set up an analog computer to solve a particular type of problem. And there was no way to store a problem to be revisited at a later date. These drawbacks may not have seemed so important when there was no other recourse available, but they became more and more apparent as the programmable digital computer began to mature.

Hybrid direct-analog/digital computers were hypothesized in the 1960s: essentially a direct analog computer controlled by a digital computer capable of storing and executing program instructions. This arrangement would have overcome some of the drawbacks of direct analog computers.[797] However, the possibility was most likely overtaken by the rapid progress of digital computers: at the same time these hybrid analog/digital machines were just being thought about, NASTRAN was already in development.

A different type of analog computer—the active-element, or indirect, analog—consisted of operational amplifiers that performed arithmetic operations. These solved programmed mathematical equations, rather than mimicking a physical system. Several NACA locations—including Langley, Ames, and the Flight Research Center (now Dryden Flight Research Center)—used analog computers of this type for flight simulation. Ames installed its first analog computer in 1947.[798] The Flight Research Center flight simulators used analog computers exclusively from 1955 to 1964 and in combination with digital computers until 1975.[799] This type of analog computer can be thought of as simply a less precise, less reliable, and less versatile predecessor to the digital computer.
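To make the contrast with the direct analog concrete: an indirect analog computer was "programmed" by patching integrators, summers, and gain blocks together so that their interconnection enforced the equation to be solved. A minimal digital emulation of such a patch, with hypothetical values, might look like the following sketch, which wires two integrators in a loop to solve a damped oscillator.

```python
# Digital emulation of the op-amp "patch" an indirect analog computer
# would use to solve x'' = -(c/m) x' - (k/m) x: two integrators in a
# loop, with summer and gain blocks closing the feedback paths.
# Names and values are illustrative, not from any NACA installation.

class Integrator:
    """One op-amp integrator: output is the running integral of its input."""
    def __init__(self, ic=0.0):
        self.y = ic
    def step(self, u, dt):
        self.y += u * dt
        return self.y

m, c, k = 1.0, 0.2, 25.0
acc_to_vel = Integrator()        # integrates acceleration -> velocity
vel_to_pos = Integrator(ic=1.0)  # integrates velocity -> position (x0 = 1)

dt, x, v = 1e-4, 1.0, 0.0
for _ in range(int(5.0 / dt)):   # 5 seconds of "machine time"
    a = -(c / m) * v - (k / m) * x    # the summer and gain blocks
    v = acc_to_vel.step(a, dt)
    x = vel_to_pos.step(v, dt)
print(f"x(5) = {x:+.4f}")        # a decaying oscillation, as expected
```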

YF-12 Thermal Loads and Structural Dynamics

NASA operated two Lockheed YF-12As and one "YF-12C" (actually an early nonstandard SR-71A, although the Air Force at that time could not acknowledge that it was allowing NASA to operate an SR-71) between 1969 and 1979.[936] These aircraft were used for a variety of research projects. In some projects, the YF-12s were the test articles, exploring their performance, handling qualities, and propulsion system characteristics in various baseline or modified configurations and modes of operation. In other projects, the YF-12s were used as "flying wind tunnels" to carry test models and other experiments into the Mach 3+ flight environment. Testing directly related to structural analysis methods and/or loads prediction included a series of thermal-structural load tests from 1969 to 1972 and smaller projects concerning ventral fin loads and structural dynamics.[937]

Temperature time histories from the YF-12 flight project: (a) surface temperature distribution (deg K) at cruise; (b) time history of typical wing spar temperatures; (c) distribution of typical temperatures in a wing spar. NASA.

The flight-testing was conducted at Dryden, which was also responsible for project management. Ames, Langley, and Lewis Research Centers were all involved in technical planning, analysis, and supporting research activities, coordinated through NASA Headquarters. The U.S. Air Force and Lockheed also provided support in various areas.[938] Gene Matranga of Dryden managed the program initially; Berwin Kock later assumed that role.[939]

The thermal-structural loads project involved modeling and testing in Dryden's unique thermal load facility. The purpose was to correlate in-flight and ground-test measurements and analytical predictions of temperatures, mechanical loads, strains, and deflections. "In all the X-15 work, flight conditions were always transient. The vehicle went to high speed in a matter of two to three minutes. It slowed down in a matter of three to five minutes. . . . The YF-12, on the other hand, could stay at Mach 3 for 15 minutes. We could get steady-state temperature data, which would augment the X-15 data immeasurably."[940] The YF-12 testing showed that it could take up to 15 minutes for absolute temperatures in the internal structure to approach steady state, and, even then, the gradients—which have a strong effect on stresses because of differential expansion—did not approach steady state until close to 30 minutes into the cruise.[941]
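The lag between absolute temperatures and gradients falls naturally out of even the simplest lumped-capacitance reasoning: the skin chases the recovery temperature with a short time constant, while the internal structure is heated through the skin with a longer one, so the skin-to-spar gradient persists after the skin itself has settled. The toy model below uses assumed time constants and temperatures, not measured YF-12 values, purely to illustrate the effect.

```python
# Toy two-node lumped-capacitance model: skin heats toward the recovery
# temperature quickly; the spar is heated by conduction from the skin
# more slowly; the skin-to-spar gradient (which drives thermal stress)
# therefore outlasts the rise in absolute temperature.
# All values are assumed for illustration, not YF-12 measurements.

T_aw = 560.0                     # assumed recovery temperature, K
T0 = 290.0                       # initial airframe temperature, K
tau_skin, tau_spar = 3.0, 8.0    # assumed time constants, minutes

dt = 0.01
T_skin, T_spar = T0, T0
for step in range(int(30.0 / dt) + 1):
    t = step * dt
    if abs(t - 15.0) < dt / 2 or abs(t - 30.0) < dt / 2:
        print(f"t = {t:4.1f} min  skin = {T_skin:5.1f} K  "
              f"spar = {T_spar:5.1f} K  gradient = {T_skin - T_spar:5.1f} K")
    T_skin += dt * (T_aw - T_skin) / tau_skin
    T_spar += dt * (T_skin - T_spar) / tau_spar
```

With these assumed numbers the skin sits within a few degrees of steady state at 15 minutes while a gradient of tens of degrees remains, shrinking to a few degrees only near 30 minutes, echoing the pattern the flight data showed.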

NASTRAN and FLEXSTAB (a code developed by Boeing on contract to NASA Ames to predict aeroelastic effects on stability) were used to model the YF-12A's aeroelastic and aerothermoelastic characteristics. Alan Carter and Perry Polentz of NASA oversaw the modeling effort, which was contracted to Lockheed and accomplished by Al Curtis. This effort produced what was claimed to be the most extensive full-vehicle NASTRAN model developed up to that time. The computational models were used to predict loads and deflections and also to identify appropriate locations for the strain gauges that would take measurements in ground- and flight-testing. The instrumentation included strain gauges, thermocouples, and a camera mounted on the fuselage to record airframe deflection in flight. Most of the flights, from Flight 11 in April 1970 through Flight 53 in February 1972, included data collection for this project, often mixed with other test objectives.[942] Subsequently, the aircraft ceased flying for more than a year to undergo ground tests in the high-temperature loads laboratory. The temperatures measured in flight were matched on the ground, using heated "blankets" placed over different parts of the airframe. Ground-testing with no aerodynamic load allowed the thermal effects to be isolated from the aerodynamic effects.[943]

There were also projects involving the measurement of aerodynamic loads on the ventral fin and the excitation of structural dynamic modes. The ventral fin project was conducted to provide improved understanding of the aerodynamics of low-aspect-ratio surfaces. FLEXSTAB was used in this effort, but only for linear aerodynamic predictions. Ground tests had shown the fin to be stiff enough to be treated as a rigid surface. Measured load data were compared to the linear theory predictions and to wind tunnel data.[944] For the structural dynamics tests, which occurred near the end of NASA's YF-12A program, "shaker vanes"—essentially oscillating canards—were installed to excite structural modes in flight. Six flights with shaker vanes between November 1978 and March 1979 "provided flight data on aeroelastic response, allowed comparison with calculated response data, and thereby validated analytical techniques."[945] Experiences from the program were communicated to industry and other interested organizations in a YF-12 Experiments Symposium held at Dryden in 1978, near the end of the 10-year effort.[946] There were also briefings to Boeing, specifically intended to provide information that would be useful on the Supersonic Transport (SST) program, which was canceled in 1971.[947] There have been other civil supersonic projects since then—the High-Speed Civil Transport (HSCT)/High-Speed Research (HSR) efforts in the 1990s and some efforts related to supersonic business jets since 2000—but none has yet led to an operational civil supersonic aircraft.
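The shaker-vane comparison of measured and calculated aeroelastic response implies a standard piece of data reduction: estimating a frequency response function (FRF) between the vane excitation and an accelerometer signal, then comparing its peaks with predicted modes. The sketch below shows that reduction on synthetic data, with an invented 2 Hz mode and made-up signal parameters; it is not YF-12 flight data, and it uses SciPy's standard Welch and cross-spectral estimators.

```python
import numpy as np
from scipy import signal

# Estimate an FRF from a broadband "shaker" input to a simulated
# structural response, then read off the modal frequency. The 2 Hz,
# 2-percent-damped mode and all values here are invented.

fs, T = 200.0, 120.0                     # sample rate (Hz), record length (s)
t = np.arange(0.0, T, 1.0 / fs)
rng = np.random.default_rng(0)
u = rng.standard_normal(t.size)          # broadband excitation (vane command)

wn, zeta = 2.0 * np.pi * 2.0, 0.02       # one structural mode: 2 Hz, 2% damping
mode = signal.lti([wn**2], [1.0, 2.0 * zeta * wn, wn**2])
_, y, _ = signal.lsim(mode, u, t)        # simulated accelerometer response
y += 0.05 * rng.standard_normal(t.size)  # measurement noise

# H1 FRF estimate: cross-spectrum over input auto-spectrum
f, Puy = signal.csd(u, y, fs=fs, nperseg=4096)
_, Puu = signal.welch(u, fs=fs, nperseg=4096)
H = Puy / Puu
print(f"estimated modal frequency ~ {f[np.argmax(np.abs(H))]:.2f} Hz")
```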