
The Tiles Become Operational

Manufacture of the silica tiles was straightforward, at least in its basic steps. The raw material consisted of short lengths of silica fiber of 1.0-micron diameter. A measured quantity of fibers, mixed with water, formed a slurry. The water was drained away, and workers added a binder of colloidal silica, then pressed the material into rectangular blocks, 10 to 20 inches across and more than 6 inches thick. These blocks were the crudest form of LI-900, the basic choice of RSI for the entire Shuttle. They sat for 3 hours to allow the binder to jell, then were dried thoroughly in a microwave oven. The blocks moved through sintering kilns that baked them at 2,375 °F for 2 hours, fusing binder and fibers together. Band saws trimmed distortions from the blocks, which were cut into cubes and then carved into individual tiles using milling machines driven by computer. The programs contained data from Rockwell International on the desired tile dimensions.

Next, the tiles were given a spray-on coating. After being oven-dried, they returned to the kilns for glazing at temperatures of 2,200 °F for 90 minutes. To verify that the tiles had received the proper amount of coating, technicians weighed samples before and after the coating and glazing. The glazed tiles then were made waterproof by vacuum deposition of a silicon compound from Dow Corning while being held in a furnace at 350 °F. These tiles were given finishing touches before being loaded into arrays for final milling.[610]

Although the basic LI-900 material showed its merits during 1972, it was another matter to produce it in quantity, to manufacture tiles that were suitable for operational use, and to provide effective coatings. To avoid having to purify raw fibers from Johns Manville, Lockheed asked that company to find a natural source of silica sand with the necessary purity. The amount needed was small, about 20 truckloads, and was not of great interest to quarry operators. Nevertheless, Johns Manville found a suitable source in Minnesota.

Problems arose when shaping the finished tiles. Initial plans called for a large number of identical flat tiles, varying only in thickness and trimmed to fit at the time of installation. But flat tiles on the curved surface of the Shuttle produced a faceted surface that promoted the onset of turbulence in the airflow, resulting in higher rates of heating. The tiles then would have had to be thicker, which threatened to add weight. The alternative was an external RSI contour closely matching that of the orbiter’s outer surface. Lockheed expected to produce 34,000 tiles for each orbiter, grouping most of them in arrays of two dozen or so and machining their back faces, away from the glazed coating, to curves matching the contours of the Shuttle’s aluminum skin. Each of the many thousands of tiles was to be individually numbered, and none had precisely the same dimensions. Instead, each was defined by its own set of dimensions. This cost money, but it saved weight.

Difficulties also arose in the development of coatings. The first good one, LI-0042, was a borosilicate glass that used silicon carbide to enhance its high-temperature thermal emissivity. It dated to the late 1960s; a variant, LI-0050, initially was the choice for operational use. This coating easily withstood the rated temperature of 2,300 °F, but in tests, it persistently developed hairline cracks after 20 to 60 thermal cycles. This was unacceptable; it had to stand up to 100 such cycles. The cracks were too small to see with the unaided eye and did not grow large or cause tile failure. But they would have allowed rainstorms to penetrate the tiles during the weeks that an orbiter was on the ground between missions, with the rain adding to the launch weight. Help came from NASA Ames, where researchers were close to Lockheed, both in their shared interests and in their facilities being only a few miles apart. Howard Goldstein at Ames, a colleague of the branch chief, Howard Larson, set up a task group and brought in a consultant from Stanford University, which also was just up the road. They spent less than $100,000 in direct costs and came up with a new and superior coating called reaction-cured glass. Like LI-0050, it was a borosilicate, consisting of more than 90 percent silica along with boria (boron oxide) and an emittance agent. The agent in LI-0050 had been silicon carbide; the new one was silicon tetraboride, SiB4. During glazing, it reacted with silica in a way that increased the level of boria, which played a critical role in controlling the coating’s thermal expansion. This coating could be glazed at lower temperature than LI-0050 could, reducing the residual stress that led to the cracking. SiB4 oxidized during reentry, but in doing so, it produced boria and silica, the ingredients of the glass coating itself.[611]

The Shuttle’s distinctive mix of black and white tiles all used standard LI-900 with its borosilicate coating; the black ones contained SiB4 and the white ones did not. Still, they all lacked structural strength and were brittle. They could not be bonded directly to the orbiter’s aluminum skin, for they would fracture and break because of their inability to follow the flexing of this skin under its loads. Designers therefore placed an intermediate layer between tiles and skin, called a strain isolator pad (SIP). It was a felt made of Nomex nylon from DuPont, which would neither melt nor burn. It had useful elasticity and could stretch in response to Shuttle skin flexing without transmitting excessive strain to the tiles.[612]

Testing of tiles and other thermal-protection components continued through the 1970s, with NASA Ames being particularly active. A particular challenge lay in creating turbulent flows, which demanded close study because they increased the heat-transfer rates many times over. During reentry, hypersonic flow over a wing is laminar near the leading edge, transitioning to turbulence at some distance to the rear. No hypersonic wind tunnel could accommodate anything resembling a full-scale wing, and it took considerable power as well as a strong airflow to produce turbulence in the available facilities. Ames had a 60-megawatt arc-jet, but even that facility could not accomplish this. Ames succeeded in producing such flows by using a 20-megawatt arc-jet that fed its flow into a duct that was 9 inches across and 2 inches deep. The narrow depth gave a compressed flow that readily produced turbulence, while the test chamber was large enough to accommodate panels measuring 8 by 20 inches. This facility supported the study of coatings that led to the use of reaction-cured glass. Tiles of LI-900, 6 inches square and treated with this coating, survived 100 simulated reentries at 2,300 °F in turbulent flow.[613]

The Ames 20-megawatt arc-jet facility made its own contribution in a separate program that improved the basic silica tile. Excessive temperatures caused these tiles to fail by shrinking and becoming denser. Investigators succeeded in reducing the shrinkage by raising the tile density and adding silicon carbide to the silica, rendering it opaque and reducing internal heat transfer. This led to a new grade of silica RSI with density of 22 lb/ft3 that had greater strength as well as improved thermal performance.[614]

The Ames researchers carried through with this work during 1974 and 1975, with Lockheed taking this material and putting it into production as LI-2200. Its method of manufacture largely followed that of standard LI-900, but whereas that material relied on sintered colloidal silica to bind the fibers together, LI-2200 dispensed with this and depended entirely on fiber-to-fiber sintering. LI-2200 was adopted in 1977 for operational use on the Shuttle, where it found application in specialized areas. These included regions of highly concentrated heating near penetrations such as landing-gear doors as well as near interfaces with the carbon-carbon nose cap, where surface temperatures could reach 2,600 °F.[615]

Testing proceeded in four overlapping phases. Material selection ran through 1973 and 1974 into 1975; the work that led to LI-2200 was an example. Material characterization proceeded concurrently and extended midway through 1976. Design development tests covered 1974 through 1977; design verification activity began in 1977 and ran through subsequent years. Materials characterization called for some 10,000 test specimens, with investigators using statistical methods to determine basic material properties. These were not the well-defined properties that engineers find listed in handbooks; they showed ranges of values that often formed a Gaussian distribution, with its bell-shaped curve. This activity addressed such issues as the lifetime of a given material, the effects of changes in processing, or the residual strength after a given number of flights. A related topic was simple but far-reaching: to be able to calculate the minimum tile thickness, at a given location, that would hold the skin temperature below the maximum allowable.
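The last of these tasks, sizing a tile for a temperature limit, lends itself to a simple illustration. The sketch below is a minimal, hypothetical one-dimensional transient-conduction calculation: it marches an explicit finite-difference model of an insulating tile through an assumed reentry heat pulse and searches for the smallest thickness that keeps the bonded face below an allowable temperature. The material properties, pulse shape, and temperature limit are illustrative placeholder values, not Shuttle design data.

```python
import numpy as np

def backface_peak_temp(thickness_m, pulse_s=1200.0, t_hot_c=1260.0, t0_c=20.0):
    """Peak back-face temperature (deg C) of an insulating tile whose outer
    surface is held at t_hot_c for pulse_s seconds and then cools linearly.
    One-dimensional explicit finite-difference conduction with illustrative
    LI-900-like properties; not flight-qualified data."""
    rho, cp, k = 144.0, 1000.0, 0.05       # kg/m^3, J/(kg*K), W/(m*K), assumed
    n = 50                                  # nodes through the thickness
    dx = thickness_m / (n - 1)
    alpha = k / (rho * cp)
    dt = 0.4 * dx**2 / alpha                # stable explicit time step
    temp = np.full(n, t0_c)
    peak_back, t = t0_c, 0.0
    while t < 3.0 * pulse_s:
        # Hot surface during the pulse, then a linear cool-down to ambient.
        temp[0] = t_hot_c if t < pulse_s else max(t0_c, t_hot_c * (2.0 - t / pulse_s))
        temp[1:-1] += alpha * dt / dx**2 * (temp[2:] - 2.0 * temp[1:-1] + temp[:-2])
        temp[-1] += alpha * dt / dx**2 * 2.0 * (temp[-2] - temp[-1])  # adiabatic back face
        peak_back = max(peak_back, temp[-1])
        t += dt
    return peak_back

# Increase thickness until the back face stays below an assumed allowable limit.
limit_c = 175.0   # roughly 350 deg F, used here only as a placeholder bondline limit
for thickness in np.arange(0.01, 0.12, 0.005):
    if backface_peak_temp(thickness) < limit_c:
        print(f"minimum thickness ~ {thickness * 100:.1f} cm "
              f"({thickness / 0.0254:.1f} in) for this assumed heat pulse")
        break
else:
    print("no thickness in the searched range meets the limit")
```

The real sizing work, of course, rested on measured material properties and trajectory heating rather than the invented numbers above; the sketch only shows the form of the calculation.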

Design development tests used only 350 articles but spanned 4 years, because each of them required close attention. An important goal involved validating the specific engineering solutions to a number of individual thermal-protection problems. Thus the nose cap and wing leading edges were made of carbon-carbon, in anticipation of their being subjected to the highest temperatures. Their attachments were exercised in structural tests that simulated flight loads up to design limits, with design temperature gradients.

Design development testing also addressed basic questions of the tiles themselves. There were narrow gaps between them, and while Rockwell had ways to fill them, these gap-fillers required their own trials by fire. A related question was frequently asked: What happens if a tile falls off? A test program addressed this and found that in some areas of intense heating, the aluminum skin indeed would burn through. The only way to prevent this was to be sure that the tiles were firmly bonded in place, and this meant every tile located in a critical area.[617]

Design verification tests used fewer than 50 articles, but these represented substantial portions of the vehicle. An important test article, evaluated at NASA Johnson, reproduced a wing leading edge and measured 5 by 8 feet. It had two leading-edge panels of carbon-carbon set side by side, a section of wing structure that included its principal spars, and aluminum skin covered with RSI. It could not have been fabricated earlier in the program, for its detailed design drew on lessons from previous tests. It withstood simulated air loads, launch acoustics, and mission-temperature-pressure environments, not once, but many times.[618]

The testing ranged beyond the principal concerns of aerodynamics, heating, and acoustics. There also was concern that meteoroids might not only put craters in the carbon-carbon but also cause it to crack. At NASA Langley, the researcher Donald Humes studied this by shooting small glass and nylon spheres at target samples using a light-gas gun driven by compressed helium. Helium is better than gunpowder, as it can expand at much higher velocities. Humes wrote that carbon-carbon "does not have the penetration resistance of the metals on a thickness basis, but on a weight basis, that is, mass per unit area required to stop projectiles, it is superior to steel.”[619]

Yet amid the advanced technology of arc-jets, light-gas guns, and hypersonic wind tunnels, one of the most important tests was also one of the simplest. It involved nothing more than taking tiles that were bonded with adhesive to the SIP and the underlying aluminum skin and physically pulling them off.

It was no new thing for people to show concern that the tiles might not stick. In 1974, a researcher at Ames noted that aerodynamic noise was potentially destructive, telling a reporter for Aviation Week: "We’d hate to shake them all off when we’re leaving.” At NASA Johnson, a 10-megawatt arc-jet saw extensive use in lost-tile investigations. Tests indicated there was reason to believe that the forces acting to pull off a tile would be no more than about 2 psi, just some 70 pounds for a tile measuring 6 by 6 inches. This was low indeed; the adhesive, SIP, and RSI material all were considerably stronger. The thermal-protection testing therefore had given priority to thermal rather than to mechanical work, essentially taking it for granted that the tiles would stay on. Thus, attachment of the tiles to the Shuttle lacked adequate structural analysis, failing to take into account the peculiarities in the components. For example, the SIP had some fibers oriented perpendicular to the cemented tile undersurface. The tile was made of ceramic fibers, and these fibers concentrated the loads. This meant that the actual stresses the tiles faced were substantially greater than anticipated.[620]
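The force figure quoted here is simple statics, stress times footprint area. The short fragment below reproduces the arithmetic and, anticipating the stress concentrations described later, applies a factor of roughly 2 as a purely illustrative placeholder.

```python
# Nominal pull-off force on a single tile: stress times footprint area.
tile_side_in = 6.0                    # 6- by 6-inch tile
assumed_stress_psi = 2.0              # pull-off stress assumed at the time
force_lb = assumed_stress_psi * tile_side_in ** 2
print(f"nominal pull-off force: {force_lb:.0f} lb")        # about 70 lb

# The needled SIP concentrated load along transverse fiber bundles, so local
# stresses approached roughly twice the mean value (see the account below).
local_stress_psi = 2.0 * assumed_stress_psi
print(f"local stress near a fiber bundle: roughly {local_stress_psi:.0f} psi")
```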

Columbia, orbiter OV-102, was the first to receive working tiles. Columbia was also slated to be first into space. It underwent final assembly at the Rockwell plant in Palmdale, CA, during 1978. Checkout of onboard systems began in September, and installation of tiles proceeded concurrently, with Columbia to be rolled out in February 1979. But mounting the tiles was not at all like laying bricks. Measured gaps were to separate them; near the front of the orbiter, they had to be positioned to within 0.17 inches of vertical tolerance to form a smooth surface that would not trip the airflow into turbulence. This would not have been difficult if the tiles had rested directly on the aluminum skin, but they were separated from that skin by the spongy SIP. The tiles were also fragile. An accidental tap with a wrench, a hard hat, even a key chain could crack the glassy coating. When that happened, the damaged tile had to be removed and the process of installation had to start again with a new one.[621]

The tiles came in arrays, each numbering about three dozen tiles. It took 1,092 arrays to cover this orbiter, and NASA reached a high mark when technicians installed 41 of them in a single week. But unfortunate news came midway through 1979 as detailed studies showed that in many areas the combined loads due to aerodynamic pressure, vibration, and acoustics would produce excessively large forces on the tiles. Work to date had treated a 2-psi level as part of normal testing, but now it was clear that only a small proportion of the tiles already installed faced stresses that low. Over 5,000 tiles faced force levels of 8.5 to 13 psi, with 3,000 being in the range of 2 to 6.5 psi. The usefulness of tiles as thermal protection was suddenly in doubt.[622]

What caused this? The fault lay in the nylon felt SIP, which had been modified by "needling” to increase its through-the-thickness tensile strength and elasticity. This was accomplished by punching a barbed needle through the felt fabric, some 1,000 times per square inch, which oriented fiber bundles transversely to the SIP pad. Tensile loads applied across the SIP pad, acting to pull off a tile, were transmitted into the SIP at discrete regions along these transverse fibers. This created localized stress concentrations, where the stresses approached twice the mean value. These local areas failed readily under load, causing the glued bond to break.[623]

There also was a clear need to increase the strength of the tiles’ adhesive bonds. The solution came during October and involved modifying a thin layer at the bottom of each tile to make it denser. The process was called, quite logically, "densification.” It used DuPont’s Ludox with a silica "slip.” Ludox was colloidal silica stirred into water and stabilized with ammonia; the slip had fine silica particles dispersed in water. The Ludox acted like cement; the slip provided reinforcement, in the manner of sand in concrete. It worked: the densification process clearly restored the lost strength.[624]

By then, Columbia had been moved to the Kennedy Space Center. The work nevertheless went badly during 1979, for as people continued to install new tiles, they found more and more that needed to be removed and replaced. Orderly installation procedures broke down. Rockwell had received the tiles from Lockheed in arrays and had attached them in well-defined sequences. Even so, that work had gone slowly, with 550 tiles in a week being a good job. But now Columbia showed a patchwork of good ones, bad ones, and open areas with no tiles. Each individual tile had been shaped to a predetermined pattern at Lockheed using that firm’s numerically controlled milling machines. But the haphazardness of the layout made it likely that any precut tile would fail to fit into its assigned cavity, leaving too wide a gap with the adjacent ones.

Many tiles therefore were installed one by one, in a time-consuming process that fitted two into place and then carefully measured space for a third, designing it to fill the space between them. The measurements went to Sunnyvale, CA, where Lockheed carved that tile to its unique specification and shipped it to the Kennedy Space Center (KSC). Hence, each person took as long as 3 weeks to install just 4 tiles. Densification also took time; a tile removed from Columbia for rework needed 2 weeks until it was ready for reinstallation.[625]

How could these problems have been avoided? They all stemmed from the fact that the tile work was well advanced before NASA learned that the tile-SIP-adhesive bonds had less strength than the Agency needed. The analysis that disclosed the strength requirements was neither costly nor demanding; it might readily have been in hand during 1976 or 1977. Had this happened, Lockheed could have begun shipping densified tiles at an early date. Their development and installation would have occurred within the normal flow of the Shuttle program, with the change amounting perhaps to little more than an engineering detail.


The Space Shuttle Columbia descends to land at Edwards following its hypersonic reentry from orbit in April 1981. NASA.

The reason this did not happen was far-reaching, for it stemmed from the basic nature of the program. The Shuttle effort followed "concurrent development,” with design, manufacture, and testing proceeding in parallel rather than in sequence. This approach carried risk, but the Air Force had used it with success during the 1960s. It allowed new technologies to enter service at the earliest possible date. But within the Shuttle program, funds were tight. Managers had to allocate their budgets adroitly, setting priorities and deferring what they could put off. To do this properly was a high art, calling for much experience and judgment, for program executives had to be able to conclude that the low-priority action items would contain no unpleasant surprises. The calculation of tile strength requirements was low on the action list because it appeared unnecessary; there was good reason to believe that the tiles would face nothing worse than 2 psi. Had this been true, and had the main engines been ready, Columbia might have flown by mid-1980. It did not fly until April 1981, and, in this sense, tile problems brought a delay of close to 1 year.

The delay in carrying through the tile-strength computation was not mandatory. Had there been good reason to upgrade its priority, it could readily have been done earlier. The budget stringency that brought this deferral (along with many others) thus was false economy par excellence, for the program did not halt during that year of launch delay. It kept writing checks for its contractors and employees. The missing tile-strength analysis thus ramified in its consequences, contributing substantially to a cost overrun in the Shuttle program.[626]

During 1979, NASA gave the same intense level of attention to the tiles’ mechanical problems that it had previously reserved for their thermal development. The effort nevertheless continued to follow the pattern of three steps forward and two steps back, and, for a while, more tiles were removed than were put on in a given week. Even so, by the fall of 1980, the end was in sight.[627]

During the spring of 1979, before the main tile problems had come to light, the schedule had called for the complete assembly of Columbia, with its external tank and solid boosters, to take place on November 24, 1979. Exactly 1 year later, a tow vehicle pulled Columbia into the Vehicle Assembly Building as a large crowd watched and cheered. Within 2 days, Columbia was mounted to its tank, forming a live Shuttle in flight configuration. Kenneth Kleinknecht, an X-series and space flight veteran and now Shuttle manager at NASA Johnson, put it succinctly: "The vehicle is ready to launch.”[628]

Flutter: The Insidious Threat

The most dramatic interaction of airplane structure with aerodynamics is "flutter”: a dynamic, high-frequency oscillation of some part of the structure. Aeroelastic flutter is a rapid, self-excited motion, potentially destructive to aircraft structures and control surfaces. It has been a particularly persistent problem since invention of the cantilever monoplane at the end of the First World War. The monoplane lacked the "bridge truss” rigidity found in the redundant structure of the externally braced biplane and, as it consisted of a single surface unsupported except at the wing root, was prone to aerodynamically induced flutter. The simplest example of flutter is a free-floating, hinged control surface at the trailing edge of a wing, such as an aileron. The control surface will begin to oscillate (flap, like the trailing edge of a flag) as the speed increases. Eventually the motion will feed back through the hinge, into the structure, and the entire wing will vibrate and eventually self-destruct. A similar situation can develop on a single fixed aerodynamic surface, like a wing or tail surface. When aerodynamic forces and moments are applied to the surface, the structure will respond by twisting or bending about its elastic axis. Depending on the relationship between the elastic axis of the structure and the axis of the applied forces and moments, the motion can become self-energizing, and a divergent vibration—one increasing in both frequency and amplitude—can follow. The high frequency and very rapid divergence of flutter causes it to be one of the most feared, and potentially catastrophic, events that can occur on an aircraft. Accordingly, extensive detailed flutter analyses are performed during the design of most modern aircraft using mathematical models of the structure and the aerodynamics. Flight tests are usually performed by temporarily fitting the aircraft with a flutter generator. This consists of an oscillating mass, or small vane, which can be controlled and driven at different frequencies and amplitudes to force an aerodynamic surface to vibrate. Instrumentation monitors and measures the natural damping characteristics of the structure when the flutter generator is suddenly turned off. In this way, the flutter mathematical model (frequency and damping) can be validated at flight conditions below the point of critical divergence.
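The frequency-and-damping measurement described above can be illustrated with a small numerical sketch. The fragment below is hypothetical: it synthesizes a decaying oscillation of the kind recorded after a flutter generator is switched off and recovers the modal frequency and damping ratio by the logarithmic-decrement method. The signal values are invented for illustration and are not drawn from any flight record.

```python
import numpy as np

# Synthesize a decay record of the kind captured after the flutter
# generator is switched off (illustrative values, not flight data).
fs = 500.0                                # sample rate, Hz
t = np.arange(0.0, 4.0, 1.0 / fs)
f_true, zeta_true = 9.0, 0.03             # modal frequency (Hz) and damping ratio
wn = 2.0 * np.pi * f_true
y = np.exp(-zeta_true * wn * t) * np.cos(wn * np.sqrt(1 - zeta_true**2) * t)

# Logarithmic decrement: locate successive positive peaks of the decaying record.
peaks = [i for i in range(1, len(y) - 1)
         if y[i] > y[i - 1] and y[i] > y[i + 1] and y[i] > 0]
periods = np.diff(t[peaks])
f_est = 1.0 / periods.mean()              # damped frequency estimate, Hz
delta = np.mean(np.log(y[peaks][:-1] / y[peaks][1:]))   # decrement per cycle
zeta_est = delta / np.sqrt(4.0 * np.pi**2 + delta**2)

print(f"estimated frequency: {f_est:.2f} Hz, damping ratio: {zeta_est:.3f}")
# Positive damping means the oscillation dies out; a trend toward zero damping
# as airspeed increases signals the approach to the critical flutter condition.
```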

Traditionally, if flight tests show that flutter margins are insufficient, operational limits are imposed, or structural beef-ups might be accomplished for extreme cases. But as electronic flight control technology advances, the prospect exists for so-called "active” suppression of flutter by using rapid, computer-directed control surface deflections. In the 1970s, NASA Langley undertook the first tests of such a system, on a one-seventeenth scale model of a proposed Boeing Supersonic Transport (SST) design, in the Langley Transonic Dynamics Tunnel (TDT). Encouraged, Center researchers followed this with TDT tests of a stores flutter suppression system on the model of the Northrop YF-17, in concert with the Air Force Flight Dynamics Laboratory (AFFDL, now the Air Force Research Laboratory’s Air Vehicles Directorate), later implementing a similar program on the General Dynamics YF-16. Then, NASA DFRC researchers modified a Ryan Firebee drone with such a system. This program, Drones for Aerodynamic and Structural Testing (DAST), used a Ryan BQM-34 Firebee II, an uncrewed aerial vehicle, rather than an inhabited system, because of the obvious risk to the pilot for such an experiment.

The modified Firebee made two successful flights but then, in June 1980, crashed on its third flight. Postflight analysis showed that one of the software gains had been inadvertently set three times higher than planned, causing the airplane wing to flutter explosively right after launch from the B-52 mother ship.


A Drones for Aerodynamic and Structural Testing (DAST) unpiloted structural test vehicle, derived from the Ryan Firebee, during a 1980 flight test. NASA.

In spite of the accident, progress was made in the definition of various control laws that could be used in the future for control and suppression of flutter.[714] Overall, NASA research on active flutter suppression has been sufficiently encouraging that its fruits were applied to new aircraft designs, most notably in the "growth” version of the YF-17, the McDonnell-Douglas (now Boeing) F/A-18 Hornet strike fighter. It used an Active Oscillation Suppression (AOS) system to suppress flutter tendencies induced by its wing-mounted stores and wingtip Sidewinder missiles, inspired to a significant degree by earlier YF-17 and YF-16 Transonic Dynamics Tunnel testing.[715]

Lightweight Ceramic Tiles

Ceramic tiles, of the kind used in a blast furnace or fireplace to insulate the surrounding structure from the extreme temperatures, were far too heavy to be considered for use on a flight vehicle. The concept of a lightweight ceramic tile for thermal protection was conceived by Lockheed and developed into operational use by NASA Ames Research Center, NASA Johnson Space Center, and Rockwell International for use on the Space Shuttle orbiter, first flown into orbit in April 1981. The resulting tiles and ceramic blankets provided exceptionally light and efficient thermal protection for the orbiter without altering the external shape. Although highly efficient for thermal protection, the tiles were—and are—quite fragile and time-consuming to repair and maintain. The Shuttle program experienced considerable delays prior to its first flight because of bonding, breaking, and other installation issues. (Unlike the X-15 gradual envelope expansion program, the Shuttle orbiter was exposed to its full operational flight envelope on its very first orbital flight and entry, thus introducing a great deal of analysis and caution during flight preparation.) Subsequent Shuttle history confirmed the high-maintenance nature of the tiles, and their vulnerability to external damage such as ice or insulation shedding from the super-cold external propellant tank. Even with these limitations, however, they do constitute the most promising technology for future lifting entry vehicles.[757]

The Advent of Direct Analog Computers

The first computers were analog computers. Direct analog computers are networks of physical components (most commonly, electrical components: resistors, capacitors, inductances, and transformers) whose behavior is governed by the same equations as some system of interest that is being modeled. Direct analog computers were used in the 1950s and 1960s to solve problems in structural analysis, heat transfer, fluid flow, and other fields.

The method of analysis and the needs that were driving the move from classical idealizations such as slender-beam theory toward computational methods are well stated in the following passage, from an NACA-sponsored paper by Stanley Benscoter and Richard MacNeal (subsequently a cofounder of the MacNeal Schwendler Corporation [MSC] and member of the NASTRAN development team):

The theory is expressed entirely in terms of first-order difference equations in order that analogous electrical circuits can be readily designed and solutions obtained on the Caltech analog computer. . . . In the process of designing thin supersonic wings for minimum weight it is found that a convenient construction with aluminum alloy consists of a rather thick skin with closely spaced spars and no stringers. Such a wing deflects in the manner of a plate rather than as a beam. Internal stress distributions may be considerably different from those given by beam theory.[794]

Their implementation of analog circuitry for bending loads is illustrated here and serves as an example of the direct analog modeling of structures.[795]

Representation of structural elements by analog circuits. NASA.
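The underlying idea, that two physical systems obeying the same difference equations can stand in for one another as computing devices, can be demonstrated numerically. The sketch below is a hypothetical illustration, not a reconstruction of the Benscoter-MacNeal circuits: it sets up the finite-difference equations for a taut string under lateral load and the node equations for a ladder of equal resistors with injected currents, and shows that suitably scaled node voltages reproduce the structural deflections.

```python
import numpy as np

# Finite-difference equations for a taut string under lateral load,
#   T * (w[i-1] - 2*w[i] + w[i+1]) / h**2 = -q[i],
# have the same form as Kirchhoff's node equations for a ladder of equal
# resistors with current injected at each node,
#   (V[i-1] - 2*V[i] + V[i+1]) / R = -I[i].
# Deflections therefore appear as node voltages and loads as injected currents.

n, length, tension = 9, 1.0, 100.0       # interior nodes, span (m), tension (N)
h = length / (n + 1)
load = np.full(n, 50.0)                  # uniform lateral load, N/m (assumed)

def tridiag_solve(scale, rhs):
    """Solve scale * A * x = rhs, where A is the (-1, 2, -1) tridiagonal matrix."""
    a = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1))
    return np.linalg.solve(scale * a, rhs)

# Structural problem: T/h^2 plays the role of the nodal "stiffness."
deflection = tridiag_solve(tension / h**2, load)

# Electrical analog: 1/R plays the same role; currents scaled to match units.
resistance = 10.0
current = load * (h**2 / (tension * resistance))
voltage = tridiag_solve(1.0 / resistance, current)

print("deflections (m):", np.round(deflection, 5))
print("analog voltages match deflections:", np.allclose(voltage, deflection))
```

On a physical direct analog, the second solution would be obtained not by arithmetic but by measuring node voltages on the wired-up network, which is precisely what made the approach attractive before digital machines matured.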

Direct analog computing had its advocates well into the 1960s. "For complex problems [direct analog] computers are inherently faster than digital machines since they solve the equations for the several nodes simultaneously, while the digital machines solve them sequentially. Direct analogs have, moreover, the advantage of visualization; computer setups as well as programming are more closely related to the actual problem and are based primarily on physical insight rather than on numerical skills.”[796]

The advantages came at a price, however. It could take weeks, in some cases, to set up an analog computer to solve a particular type of problem. And there was no way to store a problem to be revisited at a later date. These drawbacks may not have seemed so important when there was no other recourse available, but they became more and more apparent as the programmable digital computer began to mature.

Hybrid direct-analog/digital computers were hypothesized in the 1960s: essentially a direct analog computer controlled by a digital computer capable of storing and executing program instructions. This would have overcome some of the drawbacks of direct analog computers.[797] However, this possibility was most likely overtaken by the rapid progress of digital computers. At the same time these hybrid analog/digital computers were just being thought about, NASTRAN was already in development.

A different type of analog computer—the active-element, or indirect, analog—consisted of operational amplifiers that performed arithmetic operations. These solved programmed mathematical equations, rather than mimicking a physical system. Several NACA locations—including Langley, Ames, and the Flight Research Center (now Dryden Flight Research Center)—used analog computers of this type for flight simulation. Ames installed its first analog computer in 1947.[798] The Flight Research Center flight simulators used analog computers exclusively from 1955 to 1964 and in combination with digital computers until 1975.[799] This type of analog computer can be thought of as simply a less precise, less reliable, and less versatile predecessor to the digital computer.

YF-12 Thermal Loads and Structural Dynamics

NASA operated two Lockheed YF-12As and one "YF-12C” (actually an early nonstandard SR-71A, although the Air Force at that time could not acknowledge that it was allowing NASA to operate an SR-71) between 1969 and 1979.[936] These aircraft were used for a variety of research projects. In some projects, the YF-12s were the test articles, exploring their performance, handling qualities, and propulsion system characteristics in various baseline or modified configurations and modes of operation. In other projects, the YF-12s were used as "flying wind tunnels” to carry test models and other experiments into the Mach 3+ flight environment. Testing directly related to structural analysis methods and/or loads prediction included a series of thermal-structural load tests from 1969 to 1972 and smaller projects concerning ventral fin loads and structural dynamics.[937]

Temperature time histories from the YF-12 flight project: (a) surface temperature distribution (deg K) at cruise; (b) time history of typical wing spar temperatures; (c) distribution of typical temperatures in wing spar. NASA.

The flight-testing was conducted at Dryden, which was also responsible for project management. Ames, Langley, and Lewis Research Centers were all involved in technical planning, analysis, and supporting research activities, coordinated through NASA Headquarters. The U.S. Air Force and Lockheed also provided support in various areas.[938] Gene Matranga of Dryden was the manager of the program before Berwin Kock later assumed that role.[939]

The thermal-structural loads project involved modeling and testing in Dryden’s unique thermal load facility. The purpose was to correlate in-flight and ground-test measurements and analytical predictions of temperatures, mechanical loads, strains, and deflections. "In all the X-15 work, flight conditions were always transient. The vehicle went to high speed in a matter of two to three minutes. It slowed down in a matter of three to five minutes. . . . The YF-12, on the other hand, could stay at Mach 3 for 15 minutes. We could get steady-state temperature data, which would augment the X-15 data immeasurably.”[940] The YF-12 testing showed that it could take up to 15 minutes for absolute temperatures in the internal structure to approach steady state, and, even then, the gradients—which have a strong effect on stresses because of differential expansion—did not approach steady state until close to 30 minutes into the cruise.[941]

NASTRAN and FLEXSTAB (a code developed by Boeing on contract to NASA Ames to predict aeroelastic effects on stability) were used to model the YF-12A’s aeroelastic and aerothermoelastic characteristics. Alan Carter and Perry Polentz of NASA oversaw the modeling effort, which was contracted to Lockheed and accomplished by Al Curtis. This effort produced what was claimed to be the most extensive full-vehicle NASTRAN model developed up to that time. The computational models were used to predict loads and deflections, and also to identify appropriate locations for the strain gauges that would take measurements in ground- and flight-testing. The instrumentation included strain gauges, thermocouples, and a camera mounted on the fuselage to record airframe deflection in flight. Most of the flights, from Flight 11 in April 1970 through Flight 53 in February 1972, included data collection for this project, often mixed with other test objectives.[942] Subsequently, the aircraft ceased flying for more than a year to undergo ground tests in the high-temperature loads laboratory. The temperatures measured in flight were matched on the ground, using heated "blankets” placed over different parts of the airframe. Ground-testing with no aerodynamic load allowed the thermal effects to be isolated from the aerodynamic effects.[943]

There were also projects involving the measurement of aerodynamic loads on the ventral fin and the excitation of structural dynamic modes. The ventral fin project was conducted to provide improved understanding of the aerodynamics of low aspect ratio surfaces. FLEXSTAB was used in this effort but only for linear aerodynamic predictions. Ground tests had shown the fin to be stiff enough to be treated as a rigid surface. Measured load data were compared to the linear theory predictions and to wind tunnel data.[944] For the structural dynamics tests, which occurred near the end of NASA’s YF-12A program, "shaker vanes”—essentially oscillating canards—were installed to excite structural modes in flight. Six flights with shaker vanes between November 1978 and March 1979 "provided flight data on aeroelastic response, allowed comparison with calculated response data, and thereby validated analytical techniques.”[945] Experiences from the program were communicated to industry and other interested organizations in a YF-12 Experiments Symposium that was held at Dryden in 1978, near the end of the 10-year effort.[946] There were also briefings to Boeing, specifically intended to provide information that would be useful on the Supersonic Transport (SST) program, which was canceled in 1971.[947] There have been other civil supersonic projects since then—the High-Speed Civil Transport (HSCT)/High-Speed Research (HSR) efforts in the 1990s and some efforts related to supersonic business jets since 2000—but none have yet led to an operational civil supersonic aircraft.

Spin Rig (Glenn Research Center)

One particular facility of many, a spin test rig built at Lewis in 1983 is mentioned here because its stated purpose was not primarily the testing of engine parts to verify the parts but the testing of engine parts to verify analysis methods: "The Lewis Research Center spin rig was constructed to provide experimental evaluation of analysis methods developed under the NASA Engine Structural Dynamics Program. Rotors up to 51 cm (20 in.) in diameter can be spun to 16,000 rpm in vacuum by an air motor. Vibration forcing functions are provided by shakers that apply oscillatory axial forces or transverse moments to the shaft, by a natural whirling of the shaft, and by an air jet. Blade vibration is detected by strain gages and optical tip blade-motion sensors.”[1012]

Space Race and the War in Vietnam: Emphasis on FBW Accelerates

During the 1960s, two major events would unfold in the United States that strongly influenced the development and eventual introduction into operational service of advanced computer-controlled fly-by-wire flight control systems. Early in his administration, President John F. Kennedy had initiated the NASA Apollo program with the goal of placing a man on the Moon and safely bringing him back to Earth by the end of the decade. The space program, and Apollo in particular, would lead to major strides in the application of the digital computer to manage and control sensors, systems, and advanced fly-by-wire vehicles (eventually including piloted aircraft). During the same period, America became increasingly involved in the expanding conflict in South Vietnam, an involvement that rapidly escalated as the war expanded into a conventional conflict with dimensions far beyond what was originally foreseen.

As combat operations intensified in Southeast Asia, large-scale U.S. strike missions began to be flown against North Vietnam. In response, the Soviet Union equipped North Vietnamese forces with improved air defense weapons, including advanced fighters, air-to-air and surface-to-air missiles, and massive quantities of conventional antiaircraft weapons, ranging in caliber from 12.7 to 100 millimeters (mm). U.S. aircraft losses rose dramatically, and American warplane designs came under increasing scrutiny as the war escalated.[1132] Analyses of combat data revealed that many aircraft losses resulted from battle damage to hydromechanical flight control system components. Traditionally, primary and secondary hydraulic system lines had been routed in parallel through the aircraft structure to the flight control system actuators. In the Vietnam combat, experience revealed that loss of hydraulic fluid because of battle damage often led to catastrophic fires or total loss of aircraft control. Aircraft modification programs were developed to reroute and separate primary and secondary hydraulic lines to reduce the possibility of a total loss of fluid given a hit. Other changes to existing aircraft flight control systems improved survivability, such as a modification to the F-4 that froze the horizontal tail in the neutral position to prevent the aircraft from going out of control when hydraulic fluid was lost.[1133] However, there was a growing body of opinion that a new approach to flight control system design was both necessary and technically feasible.

Phase II Testing

From mid-1983 through mid-1984, components for the Automated Maneuvering Attack System and related avionics systems were installed into the AFTI/F-16 at GD in Fort Worth in preparation for the Phase II effort. Precision electrical-optical tracking pods were installed in the wing root area on both sides of the aircraft. First flight of the AFTI/F-16 in the AMAS configuration was on July 31, 1984, with Phase II flight-testing at Edwards beginning shortly after the aircraft returned to Dryden on August 6, 1984. Beginning in September 1984 and continuing through April 1987, improved sensors, integrated fire and flight control, and enhancements in pilot-vehicle interface were evaluated. During Phase II testing, the system demonstrated automatic gun tracking of airborne targets and accurate delivery of unguided bombs during 5-g curvilinear toss bomb maneuvers from altitudes as low as 200 feet. An all-attitude automatic ground collision avoidance capability was demonstrated,[1185] as was the Voice Command System (for interfacing with the avionics system), a helmet-mounted sight (used for high off-boresight target cueing), and a digital terrain system with color moving map.[1186] The sortie rate during Phase II was very high. From the start of the AMAS tests in August 1984 to the completion of Phase II in early 1987, 226 flights were accomplished, with 160 sorties being flown during 1986. To manage this high sortie rate, the ground maintenance crews worked a two-shift operation.

Follow-On AFTI/F-16 Testing

Following Phase II in 1987, the forward fuselage-mounted ventral fins were removed and the AFTI/F-16 was flown in support of other test efforts and new aircraft programs, such as evaluating strike technologies proposed for use in the next generation ground attack aircraft, which eventually evolved into the Joint Strike Fighter (JSF) program.

Adaptive Engine Control System

The Adaptive Engine Control System (ADECS) improved engine performance by exploiting the excess stall margin originally designed into the engines, using capabilities made possible by the integrated computerized flight and engine control systems.[1263] ADECS used airframe and engine data to allow the engine to operate at higher performance levels at times when inlet distortion was low and the full engine stall margin was not needed. Initial engineering work on ADECS began in 1983, with research flights beginning in 1986. Test results showed thrust improvements of between 8 and 10 percent, depending on altitude. Fuel flow reductions of between 7 and 17 percent at maximum afterburning thrust at an altitude of 30,000 feet were recorded. Rate of climb increased 14 percent at 40,000 feet. Time required to climb from 10,000 feet to 40,000 feet dropped 13 percent. Acceleration improved between 5 and 24 percent at intermediate and maximum power settings, depending on altitude. No unintentional engine stalls were encountered in the test program.

ADECS technology has been incorporated into the Pratt & Whitney F119 engine used on the Air Force F-22 Raptor.[1264]

Digital Electronic Engine Controls

As one set of NASA and contractor engineers worked on improving the design of the various types of jet engines, another set of researchers representing another science discipline were increasingly interested in marrying the computer’s capabilities to the operation of a jet engine, much in the same way that fly-by-wire systems already were in use with aircraft flight controls.

Beginning with that first Wright Flyer in 1903, flying an airplane meant moving levers and other mechanical contrivances that were directly connected by wires and cables to control the operation of the rudder, elevator, wing surfaces, instruments, and engine. When Chuck Yeager broke the sound barrier in 1947 in the X-1, if he wanted to go up, he pulled back on the yoke, and cables directly connecting the stick to the elevator made that aerosurface move to effect a change in the aircraft’s attitude. The rockets propelling the X-1 were activated with a switch throw that closed an electrical circuit whose wiring led directly from the cockpit to the engines. As planes grew bigger, so did their control surfaces. Aircraft such as the B-52 bomber had aerosurfaces as big as the entire wings of smaller airplanes—too bulky and heavy for a single pilot to move using a simple cable/pulley system. A hydraulic system was required and "inserted” between the pilot’s input on the yoke and the control surface needing to be moved. Meanwhile, engine operation remained more or less "old fashioned,” with all parameters such as fuel flow and engine temperatures reported on cockpit dials that the pilot could read and react to by adjusting the throttle or other engine controls.

With the introduction of digital computers and the miniaturization of their circuits—a necessity inspired, in part, by the reduced mass requirements of space flight—engineers began to consider how the quick-thinking electronic marvels might ease the workload for pilots flying increasingly complex aircraft designs. In fact, as the 1960s transitioned to the 1970s, engineers were already considering aircraft designs that could do remarkable maneuvers in the sky but were inherently unstable, requiring constant, subtle adjustments to the flight controls to keep the vehicle in the air. The solution—already demonstrated for spacecraft applications during Project Apollo—was to insert the power of the computer between the cockpit controls and the flight control surfaces—a concept known as fly-by-wire. A pilot using this system and wanting to turn left would move the control stick to the left, apply a little back pressure, and depress the left rudder pedal. Instead of a wire/cable system directly moving the related aerosurfaces, the movement of the controls would be sensed by a computer, which would send electronic impulses to the appropriate actuators, which in turn would deflect the ailerons, elevator, and rudder.[1328]
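The loop just described, pilot inputs and sensed motion in, surface commands out, can be sketched in a few lines. The fragment below is a notional illustration only: the gains, limits, and sign conventions are invented placeholders and do not represent any particular aircraft's control laws.

```python
def fly_by_wire_step(stick_x: float, stick_y: float, pedal: float,
                     roll_rate: float, pitch_rate: float, yaw_rate: float) -> dict:
    """One pass of a notional fly-by-wire loop: stick and pedal positions
    (-1..1) and sensed body rates (deg/s) in, surface deflections (deg) out."""
    # Interpret pilot inputs as commanded rates (placeholder scaling).
    cmd_roll, cmd_pitch, cmd_yaw = 60.0 * stick_x, 20.0 * stick_y, 10.0 * pedal
    # Simple proportional laws drive the error between commanded and sensed rates,
    # clamped to assumed surface travel limits.
    k = 0.5
    aileron = max(-20.0, min(20.0, k * (cmd_roll - roll_rate)))
    elevator = max(-25.0, min(25.0, k * (cmd_pitch - pitch_rate)))
    rudder = max(-30.0, min(30.0, k * (cmd_yaw - yaw_rate)))
    return {"aileron_deg": aileron, "elevator_deg": elevator, "rudder_deg": rudder}

# Left turn: stick left and slightly aft, a touch of left rudder, aircraft at rest.
print(fly_by_wire_step(-0.3, 0.2, -0.1, 0.0, 0.0, 0.0))
```

In an actual fly-by-wire system this computation runs many times per second on redundant computers, which is what allows an otherwise unstable airframe to be flown safely.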

Managed by NASA’s Dryden Flight Research Facility, the fly-by-wire system was first tested without a backup mechanical system in 1972, when a modified F-8C fighter took off from Edwards Air Force Base in California. Testing on this aircraft, whose aerodynamics were known and considered stable, proved that fly-by-wire could work and be reliable. In the years to follow, the system was used to allow pilots to safely fly unstable aircraft, including the B-2 bomber, the forward-swept winged X-29, the Space Shuttle orbiter, and commercial airliners such as the Airbus A320 and Boeing 777.[1329]

As experience was gained with the digital flight control system and computers shrank in size and grew in power, it didn’t take long for propulsion experts to start thinking about how computers could monitor engine performance and, by making many adjustments in every variable that affects the efficiency of a jet engine, improve the powerplant’s overall capabilities.

The first step toward enabling computer control of engine operations was taken by Dryden engineers in managing the Integrated Propulsion Control System (IPCS) program during the mid-1970s. A joint effort with the U.S. Air Force, the IPCS was installed on an F-111E long-range tactical fighter-bomber aircraft. The jet was powered by twin TF30 afterburning turbofan engines with variable-geometry external compression inlets. The IPCS effort installed a digital computer to control the variable inlet and realized significant performance improvements in stall-free operations, faster throttle response, increased thrust, and improved range flying at supersonic speeds. During this same period, results from the IPCS tests were applied to NASA’s YF-12C Blackbird, a civilian research version of the famous SR-71 Blackbird spy plane. A digital control system installed on the YF-12C successfully tested, monitored, and adjusted the engine inlet control, autothrottle, air data, and navigation functions for the Pratt & Whitney-built engines. The results gave the aircraft a 7-percent increase in range, improved handling characteristics, and lowered the frequency of inlet unstarts, which happen when an engine shock wave moves forward of the inlet and disrupts the flow of air into the engine, causing it to shut down. Seeing how well this computer-controlled engine worked, Pratt & Whitney and the U.S. Air Force in 1983 chose to incorporate the system into their SR-71 fleet.[1330]

The promising future for more efficient jet engines from developing digitally controlled integrated systems prompted Pratt & Whitney, the Air Force, and NASA (involving both Dryden and Lewis) to pursue a more robust system, which became the Digital Electronic Engine Control (DEEC) program.

Pratt & Whitney actually started what would become the DEEC program, using its own research and development funds to pay for configuration studies beginning during 1973. Then, in 1978, Lewis engineers tested a breadboard version of a computer-controlled system on an engine in an altitude chamber. By 1979, the Air Force had approached NASA and asked if Dryden could demonstrate and evaluate a DEEC system using an F100 engine installed in a NASA F-15, with flight tests beginning in 1981.

The Digital Electronic Engine Control system was tested on a Pratt & Whitney F100 turbofan, similar to the one shown here undergoing a hot fire on a test stand. Pratt & Whitney.

At every step in the test program, researchers took advantage of lessons learned not only from the IPCS exercise but also from a U.S. Navy-funded effort called the Full Authority Digital Engine Control program, which ran concurrently with the IPCS program during the mid-1970s.[1331]

A NASA Dryden fact sheet about the control system does a good job of explaining in a concise manner the hardware involved, what it monitored, and the resulting actions it was capable of performing:

The DEEC system tested on the NASA F-15 was an engine mounted, fuel-cooled, single-channel digital controller that received inputs from the airframe and engine to control a wide range of engine functions, such as inlet guide vanes, compressor stators, bleeds, main burner fuel flow, afterburner fuel flow and exhaust nozzle vanes.

Engine input measurements that led to these computer-controlled functions included static pressure at the compressor face, fan and core RPM, compressor face temperature, burner pressure, turbine inlet temperature, turbine discharge pressure, throttle position, afterburner fuel flow, fan and compressor speeds and an ultraviolet detector in the afterburner to check for flame presence.

Functions carried out after input data were processed by the DEEC computer included setting the variable vanes, positioning compressor start bleeds, controlling gas-generator and augmentation of fuel flows, adjusting the augmenter segment-sequence valve, and controlling the exhaust nozzle position.

These actions, and others, gave the engine—and the pilot—rapid and stable throttle response, protection from fan and compressor stalls, improved thrust, better performance at high altitudes, and they kept the engine operating within its limits over the full flight envelope.[1332]
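Stripped to its essentials, the fact sheet describes a loop that reads engine sensors, schedules actuator commands, and enforces protective limits on every pass. The sketch below is purely notional: the sensor and command names follow the fact sheet, but the schedules, gains, and limit values are invented placeholders rather than the actual DEEC control laws.

```python
from dataclasses import dataclass

@dataclass
class EngineSensors:
    """Inputs named in the DEEC fact sheet (values here are placeholders)."""
    compressor_face_pressure: float   # static pressure at the compressor face
    compressor_face_temp: float
    fan_rpm: float
    core_rpm: float
    burner_pressure: float
    turbine_discharge_pressure: float
    throttle_position: float          # 0.0 (idle) to 1.0 (maximum afterburner)
    afterburner_flame_detected: bool  # ultraviolet flame detector

@dataclass
class EngineCommands:
    """Outputs named in the fact sheet: vanes, bleeds, fuel flows, nozzle."""
    inlet_guide_vane_angle: float
    compressor_stator_angle: float
    start_bleed_open: bool
    main_burner_fuel_flow: float
    afterburner_fuel_flow: float
    exhaust_nozzle_area: float

def control_step(s: EngineSensors) -> EngineCommands:
    """One pass of a notional single-channel control loop (illustrative only)."""
    # Schedule fuel flow from throttle demand (normalized placeholder units).
    demanded_flow = 0.2 + 0.8 * s.throttle_position
    # Cut back fuel when the burner-to-inlet pressure ratio nears an assumed
    # boundary, a stand-in for the real stall-margin protection logic.
    pressure_ratio = s.burner_pressure / max(s.compressor_face_pressure, 1e-6)
    main_flow = demanded_flow if pressure_ratio < 20.0 else 0.8 * demanded_flow

    # Afterburner fuel only when commanded and a flame is confirmed present.
    ab_flow = 0.5 * s.throttle_position if s.afterburner_flame_detected else 0.0

    # Geometry schedules keyed to corrected fan speed (placeholder relations).
    corrected_speed = s.fan_rpm / max(s.compressor_face_temp, 1.0) ** 0.5
    return EngineCommands(
        inlet_guide_vane_angle=min(40.0, 0.01 * corrected_speed),
        compressor_stator_angle=min(35.0, 0.008 * corrected_speed),
        start_bleed_open=s.core_rpm < 6000.0,
        main_burner_fuel_flow=main_flow,
        afterburner_fuel_flow=ab_flow,
        exhaust_nozzle_area=1.0 + 0.6 * ab_flow,
    )
```

The point of the sketch is the structure, sense, schedule, limit, command, repeated continuously; the flight hardware executed this kind of cycle on an engine-mounted, fuel-cooled computer.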

When incorporated into the F100 engine, the DEEC provided improvements such as faster throttle responses, more reliable capability to restart an engine in flight, an increase of more than 10,000 feet in altitude when firing the afterburners, and the capability of providing stall-free operations. And with the engine running more efficiently thanks to the DEEC, overall engine and aircraft reliability and maintainability were improved as well.[1333]

So successful and promising was this program that even before testing was complete the Air Force approved widespread production of the F100 control units for its F-15 and F-16 fighter fleet. Almost at the same time, Pratt & Whitney added the digital control technology to its PW2037 turbofan engines for the then-new Boeing 757 airliner.[1334]

With the DEEC program fully opening the door to computer control of key engine functions, and with the continuing understanding of fly-by-wire systems for aircraft control—along with steady improvements in making computers faster, more capable, and smaller—the next logical step was to combine computer control of engines and flight controls. This was done initially with the Adaptive Engine Control System (ADECS) program, accomplished between 1985 and 1989, followed by the Performance Seeking Control (PSC) program, which performed 72 flight tests between 1990 and 1993. The PSC system was designed to handle multiple variables in performance, compared with the single-variable control allowed in ADECS. The PSC effort was designed to optimize the engine and flight controls in four modes: minimum fuel flow at constant thrust, minimum turbine temperature at constant thrust, maximum thrust, and minimum thrust.[1335]

The next evolution in the combining of computer-controlled flight and engine controls—a legacy of the original DEEC program—was inspired in large part by the 1989 crash in Sioux City, IA, of a DC-10 that had lost all three of its hydraulic systems when there was an uncontained failure of the aircraft’s No. 2 engine. With three pilots in the cockpit, no working flight controls, and only the thrust levels available for the two remaining working engines, the crew was able to steer the jet to the airport by using variable thrust. During the landing, the airliner broke apart, killing 111 of the 296 people on board.[1336]

Soon thereafter, Dryden managers established a program to thoroughly investigate the idea of a Propulsion Controlled Aircraft (PCA) using variable thrust between engines to maintain safe flight control. Once again, the NASA F-15 was pressed into service to demonstrate the concept. Beginning in 1991 with a general ability to steer, refinements in the procedures were made and tested, allowing for more precise maneuvering. Finally, on April 21, 1993, the flight tests of PCA concluded with a successful landing using only engine power to climb, descend, and maneuver. Research continued using an MD-11 airliner, which successfully demonstrated the technology in 1995.[1337]
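In outline, propulsion-controlled flight maps the two quantities still available to the crew, collective and differential thrust, onto flightpath and heading control. The fragment below is a schematic illustration with invented gains and sign conventions; it is not the control law flown on the F-15 or MD-11 demonstrations.

```python
def pca_throttle_command(pitch_error_deg: float, track_error_deg: float,
                         trim_throttle: float = 0.6) -> tuple[float, float]:
    """Map flightpath and track errors to left/right throttle settings (0..1).

    Raising both throttles pitches the nose up and makes the aircraft climb;
    splitting the throttles yaws the nose, which banks the aircraft into a
    gentle turn. Gains are invented placeholders for illustration.
    """
    k_pitch, k_yaw = 0.02, 0.03                  # assumed gains, 1/deg
    collective = k_pitch * pitch_error_deg       # raise both throttles to climb
    differential = k_yaw * track_error_deg       # to turn right, add thrust on the left
    left = min(1.0, max(0.0, trim_throttle + collective + differential))
    right = min(1.0, max(0.0, trim_throttle + collective - differential))
    return left, right

# Example: aircraft slightly below the glidepath and needing to turn right.
print(pca_throttle_command(pitch_error_deg=1.5, track_error_deg=2.0))
```

The engine response is slow compared with conventional control surfaces, which is why the flight experiments emphasized gradual refinements before attempting the 1993 landing on engine power alone.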