Category AERONAUTICS

Flight Control Systems and Their Design

During the Second World War, there were multiple documented incidents and several fatalities when fighter pilots dove their propeller-driven airplanes at speeds approaching the speed of sound. Pilots reported increasing levels of buffet and loss of control at these speeds. Wind tunnels at that time were incapable of producing reliable, meaningful data in the transonic speed range because the local shock waves were reflected off the wind tunnel walls, invalidating the measurements. The NACA and the Department of Defense (DOD) created a new research airplane program to obtain a better understanding of transonic phenomena through flight-testing. The first of the resulting aircraft was the Bell XS-1 (later X-1) rocket-powered research airplane.

On NACA advice, Bell had designed the X-1 with a horizontal tail consisting of an adjustable horizontal stabilizer with a hinged elevator at the rear for pitch control, at a time when a fixed horizontal tail with a hinged elevator was the standard pitch control configuration.[674] The X-1 incorporated this as an emergency means to increase its longitudinal (pitch) control authority at transonic speeds. It proved a wise precaution because, during the early buildup flights, the X-1 encountered buffet and loss of control similar to that reported by earlier fighters. Analysis showed that local shock waves were forming on the tail surface, eventually migrating to the elevator hinge line. When they reached the hinge line, the effectiveness of the elevator was significantly reduced, thus causing the loss of control. The X-1 NACA-U. S. Air Force (USAF) test team determined to go ahead, thanks to the X-1 having an adjustable horizontal tail. They subsequently validated that the airplane could be controlled in the transonic region by moving the horizontal stabilizer and the elevator together as a single unit. This discovery allowed Capt. Charles E. Yeager to exceed the speed of sound in controlled flight with the X-1 on October 14, 1947.[675]

An extensive program of transonic testing was undertaken at the NACA High-Speed Flight Station (HSFS; subsequently the Dryden Flight Research Center) to evaluate aircraft handling qualities using the conventional elevator and then the elevator with adjustable stabilizer.[676] As a result, subsequent transonic airplanes were all designed to use a one-piece, all-flying horizontal stabilizer, which solved the control problem and was incorporated on the prototypes of the first supersonic American jet fighters, the North American YF-100A and the Vought XF8U-1 Crusader, flown in 1953 and 1954. Today, the all-moving tail is a standard design element of virtually all high-speed aircraft developed around the globe.[677]

Variable Stability Airplanes

Although the centrifuge was effective in simulating relatively steady high-g accelerations, it lacked realism with respect to normal aircraft motions. There was even concern that some amount of negative training might be occurring in a centrifuge. One possible method of improving the fidelity of motion simulation was to install the entire simulation (computational mathematical model, cockpit displays, and controls) in an airplane and then force the airplane to reproduce the flight motions of the simulated airplane, thus exposing the simulator pilot to the correct motion environment. An airplane so equipped is usually referred to as a "variable stability aircraft.”

Since their invention, variable stability aircraft have played a signif­icant role in advancing flight technology. Beginning in 1948, the Cornell Aeronautical Laboratory (now Calspan) undertook pioneering work on variable stability using conventional aircraft modified in such a fashion that their dynamic characteristics reasonably approximated those of dif­ferent kinds of designs. Waldemar Breuhaus supervised modification of a Vought F4U-5 Corsair fighter as a variable stability testbed. From this sprang a wide range of subsequent "v-stab” testbeds. NACA Ames research­ers modified another Navy fighter, a Grumman F6F-5 Hellcat, so that it could fly as if its wing were set at a variety of dihedral angles; this research, and that of a later North American F-86 Sabre jet fighter likewise modified for v-stab research, was applied to design of early Century series fighters, among them the Lockheed F-104 Starfighter, a design with pronounced anhedral (negative wing dihedral).[731]

As the analog simulation capability was evolving, Cornell researchers developed a concept of installing a simulator in one cockpit of a two-seat Lockheed NT-33A Shooting Star aircraft. By carefully measuring the stability and controllability characteristics of the "T-Bird” and then subtracting those characteristics from the simulated mathematical model, the researchers could program the airplane with a completely different dataset that would effectively represent a different airplane.[732] Initially, the variable stability feature was used to perform general research tests by changing various controlled variables and evaluating their effect on pilot performance. Eventually, mathematical models were introduced that represented the complete predicted aerodynamic and control system characteristics of new designs. The NT-33A became the most-recognized variable-stability testbed in the world, having "modeled” aircraft as diverse as the X-15, the B-1 bomber, and the Rockwell Space Shuttle orbiter, and flying from the early 1950s until retirement after the end of the Cold War. Thanks to its contributions and those of other v-stab testbeds developed subsequently,[733] engineers and pilots have had a greater understanding of anticipated flying qualities and performance of new aircraft before the crucial first flight.[734] In particular, the variable stability aircraft did not exhibit the false rotations associated with the centrifuge simulation and were thus more realistic in simulating rapid aircraft-like maneuvers. Several YF-22 control law variations were tested using the Calspan NT-33 prior to that aircraft’s first flight, and before the first flight of the F-22, its control laws were tested on the Calspan VISTA. Today it is inconceivable that a new aircraft would fly before researchers had first evaluated its anticipated handling qualities via variable-stability research.
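The "subtraction" can be pictured with a simple linear sketch; this is an illustrative formulation only, as the actual NT-33A and VISTA response-feedback mechanizations were considerably more elaborate. If the host aircraft obeys ẋ = A_h x + B_h u and the airplane to be simulated obeys ẋ = A_m x + B_m u_p, where u_p is the evaluation pilot’s command, then driving the host’s control surfaces with

\[
u \;=\; B_h^{+}\left[\left(A_m - A_h\right)x + B_m\,u_p\right]
\]

(where B_h^{+} denotes a suitable inverse of the host’s control-effectiveness matrix) cancels the measured host dynamics and superimposes those of the model, so the instrumented airplane responds approximately as ẋ = A_m x + B_m u_p.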

The Concept of Finite Differences Enters the Mathematical Scene

The earliest concrete idea of how to simulate a partial derivative with an algebraic difference quotient was the brainchild of L. F. Richardson in
1910.[767] He was the first to introduce the numerical solution of partial differential equations by replacing each derivative in the equations with an algebraic expression involving the values of the unknown dependent variables in the immediate neighborhood of a point and then solving simultaneously the resulting massive system of algebraic equations at all grid points. Richardson named this approach a "finite-difference solution,” a name that has come down without change since 1910. Richardson did not attempt to solve the Navier-Stokes equations, however. He chose a problem reasonably described by a simpler partial differential equation, Laplace’s equation, which in mathematical terms is a linear partial differential equation and which mathematicians classify as an elliptic partial differential equation.[768] He set up a numerical approach that is still used today for the solution of elliptic partial differential equations, called a relaxation method, wherein a sweep is taken throughout the whole grid and new values of the dependent variables are calculated from the old values at neighboring grid points; the sweep is then repeated over and over until the new value at each grid point converges to the old value from the previous sweep, i.e., the numbers "relax” eventually to the correct solution.
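Richardson’s relaxation procedure maps naturally onto a few lines of modern code. The sketch below is a present-day illustration in Python, with a boundary condition chosen arbitrarily for demonstration; it is not a reconstruction of Richardson’s own hand computation. It applies the standard five-point average for Laplace’s equation and sweeps the grid until the values stop changing:

```python
import numpy as np

def relax_laplace(nx=50, ny=50, tol=1e-5, max_sweeps=20000):
    """Jacobi-style relaxation for Laplace's equation on a rectangular grid.

    Boundary values are held fixed; each sweep replaces every interior
    point with the average of its four neighbors, and sweeps repeat until
    the change between successive sweeps "relaxes" below the tolerance.
    """
    u = np.zeros((ny, nx))
    u[0, :] = 1.0  # illustrative boundary condition: hold the top edge at 1
    for sweep in range(1, max_sweeps + 1):
        u_new = u.copy()
        u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                    + u[1:-1, :-2] + u[1:-1, 2:])
        if np.max(np.abs(u_new - u)) < tol:
            return u_new, sweep      # converged
        u = u_new
    return u, max_sweeps             # sweep limit reached without converging
```

Each pass over the grid corresponds to one of Richardson’s sweeps; the solution is accepted once successive sweeps differ by less than the chosen tolerance.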

In 1928, Richard Courant, K. O. Friedrichs, and Hans Lewy published "On the Partial Difference Equations of Mathematical Physics,” a paper many consider as marking the real beginning of modern finite difference solutions; "Problems involving the classical linear partial differential equations of mathematical physics can be reduced to algebraic ones of a very much simpler structure,” they wrote, "by replacing the differentials by difference quotients on some (say rectilinear) mesh.”[769] Courant, Friedrichs, and Lewy introduced the idea of "marching solutions,” whereby a spatial marching solution starts at one end of the flow and literally marches the finite-difference solution step by step from one end to the other end of the flow. A time-marching solution starts with all the flow variables at each grid point at some instant in time and marches the finite-difference solution at all the grid points in steps of time to some later value of time. These marching solutions can only be carried out for parabolic or hyperbolic partial differential equations, not for elliptic equations.
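As a schematic illustration of the idea (a generic example, not one drawn from the 1928 paper itself), consider the one-dimensional model equation u_t + c u_x = 0, where c is a constant wave speed. Replacing each derivative with a difference quotient on a mesh of spacing Δx and time step Δt gives an explicit time-marching formula:

\[
\frac{u_i^{\,n+1}-u_i^{\,n}}{\Delta t} + c\,\frac{u_i^{\,n}-u_{i-1}^{\,n}}{\Delta x} = 0
\quad\Longrightarrow\quad
u_i^{\,n+1} = u_i^{\,n} - \frac{c\,\Delta t}{\Delta x}\left(u_i^{\,n}-u_{i-1}^{\,n}\right),
\]

where the subscript i counts grid points and the superscript n counts time levels: knowing every u_i at level n, the formula marches the entire field forward to level n + 1.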

Courant, Friedrichs, and Lewy highlighted another important aspect of numerical solutions of partial differential equations. Anyone attempting numerical solutions of this nature quickly finds out that the numbers being calculated begin to look funny, make no sense, oscillate wildly, and finally result in some impossible operation such as dividing by zero or taking the square root of a negative number. When this happens, the solution has blown up, i.e., it becomes no solution at all. This is not a ramification of the physics but rather a peculiarity of the numerical processes. Courant, Friedrichs, and Lewy studied the stability aspects of numerical solutions and discovered an essential criterion for maintaining stability in the numerical calculations. Today, this stability criterion is referred to as the "CFL criterion” in honor of the three who identified it. Without it, many attempted CFD solutions would end in frustration.
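For the simple marching scheme sketched above, the CFL criterion reduces to a single inequality (stated here as an illustration of the general idea rather than as the full analysis of the 1928 paper):

\[
C \;=\; \frac{|c|\,\Delta t}{\Delta x} \;\le\; 1,
\]

that is, the time step must be small enough that the scheme is not asked to propagate information more than one grid spacing per step. Choose Δt any larger and the computed numbers grow without bound, and the solution blows up.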

So by 1928, the academic foundations of finite difference solutions of partial differential equations were in place. The Navier-Stokes equa­tions finally stood on the edge of being solved, albeit numerically. But who had the time to carry out the literally millions of calculations that are required to step through the solution? For all practical purposes, it was an impossible task, one beyond human endurance. Then came the electronic revolution and, with it, the digital computer.

Moving to Diversify and Commercialize NASTRAN

All of the improvements described above took time to implement. However, many of the using organizations had their own priorities. Several organizations therefore developed their own versions of NASTRAN for internal use, including IBM, Rockwell, and the David Taylor Naval Ship Research & Development Center, not to mention the different NASA Centers. These organizations sometimes contracted with software development companies to make enhancements to their internal versions. Thus, there developed several centers of expertise forging the way forward on somewhat separate paths, but sharing experiences with each other at the Users’ Colloquia and other venues. The NSMO did not take responsibility for maintenance of these disparate versions but did consider capabilities developed elsewhere for inclusion in the standard NASTRAN, with appropriate review. This was possible because NASTRAN’s modular structure allowed new solutions or new elements to be accepted with little or no disruption to anything else, and it allowed the NSMO’s standard NASTRAN to keep up, somewhat, with developments being made by various users.

The first commercial version was announced by the MacNeal-Schwendler Corporation in 1971.[832] Others followed. SPERRY/NASTRAN was marketed by Sperry Support Services in Europe and by Nippon Univac Kaisha, Ltd. (NUK), in Japan from 1974 to at least 1986. (Sperry was also the parent company of UNIVAC, producer of one of the three computers that could run NASTRAN when it was first created.) SPERRY/NASTRAN was developed from COSMIC NASTRAN Level 15.5.[833]

At the 10th NASTRAN Users’ Colloquium in 1982, the following com­mercial versions were identified:[834]

• UAI/NASTRAN (Universal Analytics).

• UNIVAC NASTRAN (Sperry).

• DTNSRDC NASTRAN (David Taylor Naval Ship Research & Development Center).

• MARC NASTRAN (Marc Analysis & Research).

• NKF NASTRAN (NKF Engineering Associates).

In spite of this proliferation, common input and output formats were generally maintained. In 1982, Thomas Butler compared COSMIC NASTRAN with the "competition,” which included at that time, in addition to the commercial NASTRAN versions, the following programs: ASKA, STRUDL, DET NORSKE VERITAS, STARDYNE, MARC, SPAR, ANSYS, SAP, ATLAS, EASE, and SUPERB. He noted that: "during the period in which NASTRAN maintenance decisions emphasized the intensive debugging of existing capability in preference to adding new capabilities and conveniences, the competitive programs strove hard to excel in something that NASTRAN didn’t. They succeeded. They added some elastic elements, e.g.:

• Bending plate with offsets, tapered thickness, and a 10:1 aspect ratio.

• Pipe elbow element.

• Tapered beam element.

• Membrane without excessive stiffness.

• High-order hexagon.

"These new elements make it a bit easier to do some analyses in the category of mechanical structures, listed above.”

In addition to new elements, some of the commercial codes added such capabilities as dynamic reduction to assist in condensing a large-order model to a smaller-order one prior to conducting eigenvalue analysis, nonlinear analysis, and packaged tutorials on FEM analysis.

Viewed from the standpoint of the tools that an analyst needs. . . NASTRAN Level 17.7 can accommodate him with 95 per­cent of those tools. . . . The effect of delaying the addition of new capability during this ‘scrubbing up’ period is to temporar­ily lose the ability to serve the analyst in about 5 percent of his work with the tools that he needs. In the meantime NASTRAN has achieved a good state of health due to the caring efforts of P. R. Pamidi [of CSC] and Bob Brugh [of COSMIC].[835]

The commitment to maintaining the integrity of NASTRAN, rather than adding capability at an unsustainable pace, paid off in the long run.

Computerized Structural Analysis and Research (CSAR) intro­duced a version of NASTRAN in 1988.[836] However, the trend from the 1980s through the 1990s was toward consolidation of commercial sources for NASTRAN. In 1999, MSC acquired Universal Analytics and CSAR. The Federal Trade Commission (FTC) alleged that by these acqui­sitions, MSC had "eliminated its only advanced NASTRAN competitors.” In 2002, MSC agreed to divest at least one copy of its software, with source code, to restore competition.[837]

At the time of this writing, there are several versions of NASTRAN com­mercially available. Commercial versions have almost completely superseded NASA’s own version, although it is still available through the Open Channel Foundation (as discussed elsewhere in this paper). Even NASA now uses commercial versions of NASTRAN, in addition to other commercial and in-house structural analysis programs, when they meet a particular need.[838]

If one had to sum up the reasons for NASTRAN’s extraordinary his­tory, it might be: ripe opportunity, followed by excellent execution. Finite elements were on the cusp. The concepts, and the technology to carry them out, were just emerging. The 1960s were the decade in which the course of the technology would be determined—splintered, or integrated— not that every single activity could possibly be brought under one roof. But, if a single centerpiece of finite element analysis was going to emerge, to serve as a standard and reference point for everything else, it had to happen in that decade, before the technical community took off running in a myriad of different directions.

In execution, the project was characterized by focus, passion, establishment of rules, and adherence to those rules, all coming together under an organization that was dedicated to getting its product out rather than hoarding it. Even with these ingredients, successfully producing a general-purpose computer program, able to adapt through more than 40 years of changing hardware and software technology, was remarkable. Staying true to the guiding principles (general-purpose, modular, open-ended, etc.), even as difficult decisions had to be made and there was not time to develop every intended capability, was a crucial quality of the development team. In contrast, a team that got sloppy under time pressure would not have produced a program of such lasting value. NASTRAN may be one of the closest approaches ever achieved to 100-percent successful technology transition. Not every structural analyst uses NASTRAN, but certainly every modern structural analyst knows about it. Those who think they need it have access to copious information about it and multiple sources from which they can get it in various forms.

This state of affairs exists in part because of the remarkable nature of the product, and in part because of the priority that NASA places on the transition of its technology to other sectors. In preparation to address the other half of this paper—those many accomplishments that, though lesser than NASTRAN, also push the frontier forward in incremental steps—we now move to a discussion of those activities in which NASA engages for the specific purpose of accomplishing technology transition.

Dissemination and Distribution: NASA Establishes COSMIC

Transitioning technology to U. S. industry, universities, and other Government agencies is part of NASA’s charter under the 1958 Space Act. Some such transfer happens "naturally” through conferences, journal publi­cations, collaborative research, and other interactions among the technical community. However, NASA also has established specific, structured tech­nology transfer activities. The NASA Technology Utilization (TU) program was established in 1963. The names of the program’s components and activ­ities have changed over time but have generally included the following:[839]

• Publications.

• Tech briefs: notification of promising technologies.

• Technology Utilization compilations.

• Small-business announcements.

• Technical Support Packages: more detailed information about a specific technology, provided on request.

• Industrial Applications Centers: university-based services to assist potential technology users in searching NASA scientific and technical information.

• Technology Utilization Officers (TUOs) at the NASA Centers to assist interested parties in identifying and understanding opportunities for technology transfer.

• An Applications Engineering process for developing and commercializing specific technologies, once interest has been established.

• The Computer Software Management and Information Center—a university-based center making NASA soft­ware and documentation available to industrial clients.

To expedite and enhance its technology transfer mandate, NASA established a Computer Software Management and Information Center at the University of Georgia at Athens. Within the first few years of the Technology Utilization program, it became apparent that many of the "technology products” being generated by NASA were computer pro­grams. NASA therefore started COSMIC in 1966 to provide the services necessary to manage and distribute computer programs. These services included screening programs and documentation for quality and usabil­ity; announcing available programs to potential users; and storing, copy­ing, and distributing the programs and documentation. In addition, as the collection grew, it was necessary to ensure that each new program added capability and was not duplicative with programs already being offered.[840]

After the first year of operation, COSMIC published a catalog of 113 programs that were fully operational and documented. Another 11 pro­grams with incomplete documentation and 7 programs missing subrou­tines were also offered for customers who would find the data useful even in an incomplete state. Monthly supplements to the catalog added approx­imately 20 programs per month.[841] By 1971, COSMIC had distributed over 2,500 software packages and had approximately 900 computer programs available. New additions were published in a quarterly Computer Programs Abstracts Journal. The collection expanded to include software developed by other Government agencies besides NASA. The Department of Defense (DOD) joined the effort in 1968, adding DOD-funded computer programs that were suitable for public release to the collection.[842] In 1981, there were 1,600 programs available.[843] Programs were also withdrawn, because of obsolescence or other reasons. During the early 1990s, the collection con­sisted of slightly more than 1,000 programs.[844] NASTRAN, when released publicly in 1970, was distributed through COSMIC, as were most of the other computer programs mentioned throughout this paper.

Customers included U. S. industry, universities, and other Government agencies. Customers received source code and documentation, and unlim­ited rights to copy the programs for their own use. Initially, the cost to the user was just the cost of the media on which the software and documen­tation were delivered. Basic program cost in 1967 was $75, furnished on cards (2,000 or less) or tape. Card decks exceeding 2,000 cards were priced on a case-by-case basis. Documentation could also be purchased separately, to help the user determine if the software itself was applicable to his or her needs. Documentation prices ranged from $1.50 for 25 pages or less, to $15 for 300 pages or more.[845] Purchase terms eventually changed to a lease/license format, and prices were increased somewhat to help defray the costs of developing and maintaining the programs. Nevertheless, the pricing was still much less than that of commercial software. A cost-ben­efit study, conducted in 1977 and covering the period from 1971-1976, noted that the operation of COSMIC during that period had only cost $1.7 million, against an estimated $43.5 million in benefit provided to users. During that period, there were 21,000 requests for computer pro­gram documentation, leading to 1,200 computer programs delivered.[846]

COSMIC operations continued through the 1990s. In 2001, custody of the COSMIC collection was transferred to a new organization, Open Channel Software (OCS). OCS and a related nonprofit organization, the Open Channel Foundation, were started in 1999 at the University of Chicago. Originally established to provide dissemination of university-developed software, this effort, like COSMIC, soon grew to include software from a broader range of academic and research institutions. The agreement to transfer the COSMIC collection to OCS was made through the Robert C. Byrd National Technology Transfer Center (NTTC), which itself was established in 1989 and had been working with NASA and other Government agencies to facilitate beneficial technology transfer and partnerships.[847]

Although COSMIC is no longer active, NASA continues to make new technical software available to universities, other research centers, and U. S. companies that can benefit from its use. User agreements are made directly with the Centers where the software is developed. User inter­faces and documentation are typically not as polished as commercial software, but the level of technology is often ahead of anything com­mercially available, and excellent support is usually available from the research group that has developed the software. The monetary cost is minimal or, in many cases, zero.[848] Joint development agreements may be made if a potential user desires enhancements and is willing to partici­pate in their development. Whether through COSMIC or by other means, most of the computer programs discussed in the following sections have been made available to U. S. industry and other interested users.

Miscellaneous NASA Structural Analysis Programs

Note: Miscellaneous computer programs, and in some cases test facili­ties or other related projects, that have contributed to the advancement of the state of the art in various ways are described here. In some cases, there simply was not room to include them in the main body of the paper; in others, there was not enough information found, or not enough time to do further research, to adequately describe the programs and docu­ment their significance. Readers are advised that these are merely exam­ples; this is not an exhaustive list of all computer programs developed by NASA for structural analysis to the 2010 time period. Dates indicate introduction of capability. Many of the programs were subsequently enhanced. Some of the programs were eventually phased out.

Hot Structures: Dyna-Soar

Reentry of ICBM nose cones and of satellites takes place at nearly the same velocity. Reentry of spacecraft takes place at a standard velocity of Mach 25, but there are large differences in the technical means that have been studied for the thermal protection. During the 1960s, it was commonly expected that such craft would be built as hot structures. In fact, however, the thermal protection adopted for the Shuttle was the well-known "tiles,” a type of reusable insulation.

Schematic drawing of the Boeing X-20A Dyna-Soar. USAF.

The Dyna-Soar program, early in the 1960s, was the first to face this issue. Dyna-Soar used a radiatively cooled hot structure, with the primary or load-bearing structure being of Rene 41. Trusses formed the primary structure of the wings and fuselage, with many of their beams meeting at joints that were pinned rather than welded. Thermal gradients, imposing differential expansion on separate beams, caused these members to rotate at the pins. This accommodated the gradients without imposing thermal stresses. Rene 41 was selected as a commercially available superalloy that had the best available combination of oxidation resistance and high-temperature strength. Its yield strength, 130,000 pounds per square inch (psi) at room temperature, fell off only slightly at 1,200 °F and retained useful values at 1,800 °F. It could be processed as sheet, strip, wire, tubes, and forgings. Used as primary structure of Dyna-Soar, it supported a design specification that stated that the craft was to withstand at least four reentries under the most severe conditions permitted.

As an alloy, Rene 41 had a standard composition of 19 percent chromium, 11 percent cobalt, 10 percent molybdenum, 3 percent titanium, and 1.5 percent aluminum, along with 0.09 percent carbon and 0.006 percent boron, with the balance being nickel. It gained strength through age hardening, with the titanium and aluminum precipitating within the nickel as an intermetallic compound. Age-hardening weldments initially showed susceptibility to cracking, which occurred in parts that had been strained through welding or cold working. A new heat-treatment process
permitted full aging without cracking, with the fabricated assemblies showing no significant tendency to develop cracks.[1036]

As a structural material, the relatively mature state of Rene 41 reflected the fact that it had already seen use in jet engines. It nevertheless lacked the temperature resistance necessary for use in the metallic shingles or panels that were to form the outer skin of the vehicle, which were to reradiate the heat while withstanding temperatures as high as 3,000 °F. Here there was far less existing art, and investigators at Boeing had to find their way through a somewhat roundabout path. Four refractory or temperature-resistant metals initially stood out: tantalum, tungsten, molybdenum, and columbium. Tantalum was too heavy. Tungsten was not available commercially as sheet. Columbium also appeared to be ruled out, for it required an antioxidation coating, but vendors were unable to coat it without rendering it brittle. Molybdenum alloys also faced embrittlement because of recrystallization produced by a prolonged soak at high temperature in the course of coating formation. A promising alloy, Mo-0.5Ti, overcame this difficulty through addition of 0.07 percent zirconium. The alloy that resulted, Mo-0.5Ti-0.07Zr, was called TZM molybdenum. For a time it appeared as a highly promising candidate for all the other panels.[1037]

Wing design also promoted its use, for the craft mounted a delta wing with leading-edge sweep of 73 degrees. Though built for hyper­sonic entry from orbit, it resembled the supersonic delta wings of contemporary aircraft such as the B-58 bomber. But this wing was designed using H. Julian Allen’s blunt-body principle, with the leading edge being thickly rounded (that is, blunted) to reduce the rate of heating. The wing sweep then reduced equilibrium temperatures along the leading edge to levels compatible with the use of TZM.[1038]

Boeing’s metallurgists nevertheless held an ongoing interest in columbium, because in uncoated form it showed superior ease of fabrication and lack of brittleness. A new Boeing-developed coating method eliminated embrittlement, putting columbium back in the running. A survey of its alloys showed that they all lacked the hot strength of TZM. Columbium nevertheless retained its attractiveness because it promised less weight. Based on coatability, oxidation resistance, and thermal emissivity, the preferred alloy was Cb-10Ti-5Zr, called D-36. It replaced TZM in many areas of the vehicle but proved to lack strength against creep at the highest temperatures. Moreover, coated TZM gave more of a margin against oxidation than coated D-36 did, again at the most extreme temperatures. D-36 indeed was chosen to cover most of the vehicle, including the flat underside of the wing. But TZM retained its advantage for such hot areas as the wing leading edges.[1039]

The vehicle had some 140 running feet of leading edges and 140 square feet of associated area. This included leading edges of the verti­cal fins and elevons as well as of the wings. In general, D-36 served when temperatures during reentry did not exceed 2,700 °F, while TZM was used for temperatures between 2,700 and 3,000 °F. In accordance with the Stefan-Boltzmann law, all surfaces radiated heat at a rate proportional to the fourth power of the temperature. Hence for equal emissivities, a surface at 3,000 °F radiated 44 percent more heat than one at 2,700 °F.[1040]
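The 44 percent figure follows directly from the fourth-power law once the temperatures are converted to an absolute scale (using the approximate conversion °R = °F + 460); the arithmetic below is supplied purely as an illustrative check:

\[
\frac{q_{3000}}{q_{2700}} \;=\; \left(\frac{3000+460}{2700+460}\right)^{4} \;=\; \left(\frac{3460}{3160}\right)^{4} \;\approx\; 1.44 .
\]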

Panels of both TZM and D-36 demanded antioxidation coatings. These coatings were formed directly on the surfaces as metallic silicides (silicon compounds), using a two-step process that employed iodine as a chemical intermediary. Boeing introduced a fluidized-bed method for application of the coatings that cut the time for preparation while enhancing uniformity and reliability. In addition, a thin layer of silicon carbide, applied to the surface, gave the vehicle its distinctive black color. It enhanced the emissivity, lowering temperatures by as much as 200 °F.

It was necessary to show that complete panels could withstand aerodynamic flutter. A report of the Aerospace Vehicles Panel of the Air Force Scientific Advisory Board came out in April 1962 and singled out the problem of flutter, citing it as one that called for critical attention. The test program used two NASA wind tunnels: the 4-foot by 4-foot Unitary facility at Langley that covered a range of Mach 1.6 to 2.8 and the 11-foot by 11-foot Unitary installation at Ames for Mach 1.2 to 1.4. Heaters warmed test samples to 840 °F as investigators started with steel panels and progressed to versions fabricated from Rene nickel alloy.

"Flutter testing in wind tunnels is inherently dangerous,” a Boeing review declared. "To carry the test to the actual flutter point is to risk destruction of the test specimen. Under such circumstances, the safety of the wind tunnel itself is jeopardized.” Panels under test were as large as 24 by 45 inches; flutter could have brought failure through fatigue, with parts of a specimen being blown through the tunnel at supersonic speed. Thus, the work started at dynamic pressures of 400 and 500 pounds per square foot (psf) and advanced over a year and a half to exceed the design requirement of close to 1,400 psf. Tests were concluded in 1962.[1041]

Between the outer panels and the inner primary structure, a corrugated skin of Rene 41 served as the substructure. On the upper wing surface and upper fuselage, where the temperatures were no higher than 2,000 °F, the thermal-protection panels were also of Rene 41 rather than of a refrac­tory. Measuring 12 by 45 inches, these panels were spot-welded directly to the corrugations of the substructure. For the wing undersurface and for other areas that were hotter than 2,000 °F, designers specified an insulated structure. Standoff clips, each with four legs, were riveted to the underlying corrugations and supported the refractory panels, which also were 12 by 45 inches in size.

The space between the panels and the substructure was to be filled with insulation. A survey of candidate materials showed that most of them exhibited a strong tendency to shrink at high temperatures. This was undesirable; it increased the rate of heat transfer and could create uninsulated gaps at seams and corners. Q-felt, a silica fiber from Johns Manville, also showed shrinkage. However, nearly all of it occurred at 2,000 °F and below; above 2,000 °F, further shrinkage was negligible. This meant that Q-felt could be "pre-shrunk” through exposure to temperatures above 2,000 °F for several hours. The insulation that resulted had density no greater than 6.2 pounds per cubic foot, one-tenth that of water. In addition, it withstood temperatures as high as 3,000 °F.[1042]

TZM outer panels, insulated with Q-felt, proved suitable for wing leading edges. These were designed to withstand equilibrium tempera­tures of 2,825 °F and short-duration over-temperatures of 2,900 °F. But the nose cap faced temperatures of 3,680 °F along with a peak heat flux of 143 BTU/ft2/sec. This cap had a radius of curvature of 7.5 inches, making it far less blunt than the contemporary Project Mercury heat shield that had a radius of 120 inches.[1043] Its heating was correspondingly more severe. Reliable thermal protection of the nose was essential, so the program conducted two independent development efforts that used separate technical approaches. The firm of Chance Vought pursued the main line of activity, while Boeing also devised its own nose cap design.

The work at Vought began with a survey of materials that paralleled Boeing’s review of refractory metals for the thermal-protection panels. Molybdenum and columbium had no strength to speak of at the perti­nent temperatures, but tungsten retained useful strength even at 4,000 °F. But that metal could not be welded, while no coating could protect it against oxidation. Attention then turned to nonmetallic materials, including ceramics.

Ceramics of interest existed as oxides such as silica and magnesia, which meant that they could not undergo further oxidation. Magnesia proved to be unsuitable because it had low thermal emittance, while silica lacked strength. However, carbon in the form of graphite showed clear promise. It held considerable industrial experience; it was light in weight, while its strength actually increased with temperature. It oxi­dized readily but could be protected up to 3,000 °F by treating it with silicon, in vacuum and at high temperatures, to form a thin protective layer of silicon carbide. Near the stagnation point, the temperatures during reentry would exceed that level. This brought the concept of a nose cap with siliconized graphite as the primary material and with an insulated layer of a temperature-resistant ceramic covering its forward area. With graphite having good properties as a heat sink, it would rise in temperature uniformly and relatively slowly, while remaining below the 3,000 °F limit throughout the full time of the reentry.

Suitable grades of graphite proved to be available commercially from the firm of National Carbon. Candidate insulators included hafnia, thoria, magnesia, ceria, yttria, beryllia, and zirconia. Thoria was the most refractory but was very dense and showed poor resistance to thermal shock. Hafnia brought problems of availability and of reproducibility of properties. Zirconia stood out. Zirconium, its parent metal, had found use in nuclear reactors; the ceramic was available from the Zirconium Corporation of America. It had a melting point above 4,500 °F, was chemically stable and compatible with siliconized graphite, offered high emittance with low thermal conductivity, provided adequate resistance to thermal shock and thermal stress, and lent itself to fabrication.[1044]

For developmental testing, Vought used two in-house facilities that simulated the flight environment, particularly during reentry. A ramjet, fueled with JP-4 and running with air from a wind tunnel, produced an exhaust with velocity up to 4,500 ft/sec and temperature up to 3,500 °F. It also generated acoustic levels above 170 decibels (dB), reproducing the roar of a Titan III booster and showing that samples under test could withstand the resulting stresses without cracking. A separate installa­tion, built specifically for the Dyna-Soar program, used an array of pro­pane burners to test full-size nose caps.

The final Vought design used a monolithic shell of siliconized graphite that was covered over its full surface by zirconia tiles held in place by thick zirconia pins. This arrangement relieved thermal stresses by permitting mechanical movement of the tiles. A heat shield stood behind the graphite, fabricated as a thick disk-shaped container made of coated TZM sheet metal and filled with Q-felt. The nose cap was attached to the vehicle with a forged ring and clamp that also were of coated TZM. The cap as a whole relied on radiative cooling. It was designed to be reusable; like the primary structure, it was to withstand four reentries under the most severe conditions permitted.[1045]

The backup Boeing effort drew on that company’s own test equip­ment. Study of samples used the Plasma Jet Subsonic Splash Facility, which created a jet with temperature as high as 8,000 °F that splashed over the face of a test specimen. Full-scale nose caps went into the Rocket Test Chamber, which burned gasoline to produce a nozzle exit velocity of 5,800 ft/sec and an acoustic level of 154 dB. Both installations were capable of long-duration testing, reproducing conditions during reen­tries that could last for 30 minutes.[1046]

The Boeing concept used a monolithic zirconia nose cap that was reinforced against cracking with two screens of platinum-rhodium wire. The surface of the cap was grooved to relieve thermal stress. Like its counterpart from Vought, this design also installed a heat shield that used Q-felt insulation. However, there was no heat sink behind the zirco­nia cap. This cap alone provided thermal protection at the nose, through radiative cooling. Lacking pinned tiles and an inner shell, its design was simpler than that of Vought.[1047]

Its fabrication bore comparison to the age-old work of potters, who shape wet clay on a rotating wheel and fire the resulting form in a kiln. Instead of using a potter’s wheel, Boeing technicians worked with a steel die with an interior in the shape of a bowl. A paper honeycomb, reinforced with Elmer’s Glue and laid in place, defined the pattern of stress-relieving grooves within the nose cap surface. The working mate­rial was not moist clay but a mix of zirconia powder with binders, inter­nal lubricants, and wetting agents.

With the honeycomb in position against the inner face of the die, a specialist loaded the die by hand, filling the honeycomb with the damp mix and forming layers of mix that alternated with the wire screens. The finished layup, still in its die, went into a hydraulic press. A pressure of 27,000 psi compacted the form, reducing its porosity for greater strength and less susceptibility to cracks. The cap was dried at 200 °F, removed from its die, dried further, and then fired at 3,300 °F for 10 hours. The paper honeycomb burned out in the course of the firing. Following visual and x-ray inspection, the finished zirconia cap was ready for machining to shape in the attachment area, where the TZM ring-and-clamp arrangement was to anchor it to the fuselage.[1048]

The nose cap, outer panels, and primary structure all were built to limit their temperatures through passive methods: radiation and insulation. Active cooling also played a role, reducing temperatures within the pilot’s compartment and two equipment bays. These used a "water wall” that mounted absorbent material between sheet-metal panels to hold a mix of water and a gel. The gel retarded flow of this fluid, while the absorbent wicking kept it distributed uniformly to prevent hotspots.

During reentry, heat reached the water walls as it penetrated into the vehicle. Some of the moisture evaporated as steam, transferring heat to a set of redundant water-glycol loops that were cooled by liquid hydro­gen from an onboard supply. A catalytic bed combined the stream of warmed hydrogen with oxygen that again came from an onboard supply. This produced gas that drove the turbine of Dyna-Soar’s auxiliary power unit, which provided both hydraulic and electric power to the craft.

A cooled hydraulic system also was necessary, to move the con­trol surfaces as on a conventional airplane. The hydraulic fluid oper­ating temperature was limited to 400 °F by using the fluid itself as an initial heat-transfer medium. It flowed through an intermediate water – glycol loop that removed its heat by being cooled with hydrogen. Major hydraulic components, including pumps, were mounted within an actively cooled compartment. Control-surface actuators, along with associated valves and plumbing, were insulated using inch-thick blan­kets of Q-felt. Through this combination of passive and active cool­ing methods, the Dyna-Soar program avoided a need to attempt to develop truly high-temperature arrangements, remaining instead within the state of the art.[1049]

Specific vehicle parts and components brought their own thermal problems. Bearings, both ball and antifriction, needed strength to carry mechanical loads at high temperatures. For ball bearings, the cobalt-base superalloy Stellite 19 was known to be acceptable up to 1,200 °F. Investigation showed that it could perform under high load for short durations at 1,350 °F. Dyna-Soar nevertheless needed ball bearings qualified for 1,600 °F and obtained them as spheres of Rene 41 plated with gold. The vehicle also needed antifriction bearings as hinges for control surfaces, and here there was far less existing art. The best available bearings used stainless steel and were suitable only to 600 °F, whereas Dyna-Soar again faced a requirement of 1,600 °F. A survey of 35 candidate materials led to selection of titanium carbide with nickel as a binder.[1050]

Antenna windows demanded transparency to radio waves at sim­ilarly high temperatures. A separate program of materials evaluation led to selection of alumina, with the best grade being available from the Coors Porcelain Company.[1051]


NASA concepts for passive and actively cooled ablative heat shields, 1960. NASA.

The pilot needed his own windows. The three main ones, facing forward, were the largest yet planned for a piloted spacecraft. They had double panes of fused silica, with infrared-reflecting surfaces on all surfaces except the outermost. This inhibited the inward flow of heat by radiation, reducing the load on the active cooling of the pilot’s compartment. The window frames expanded when hot; to hold the panes in position, those frames were fitted with springs of Rene. The windows also needed thermal protection, so they were covered with a shield of D-36. It was supposed to be jettisoned following reentry, around Mach 5, but this raised a question: what if it remained attached? The cockpit had two other windows, one on each side, which faced a less severe environment and were to be left unshielded throughout a flight. Over a quarter century earlier, Charles Lindbergh had flown the Spirit of St. Louis across the North Atlantic from New York to Paris using just side vision and a crude periscope. But that was a far cry from a plummeting lifting reentry vehicle. Now, test pilot Neil Armstrong flew Dyna-Soar-like approaches and landings in a modified Douglas F5D-1 fighter with side vision only and showed it was still possible.[1052]

The vehicle was to touch down at 220 knots. It lacked wheeled landing gear, for inflated rubber tires would have demanded their own cooled compartments. For the same reason, it was not possible to use a conventional oil-filled strut as a shock absorber. The craft therefore deployed tricycle landing skids. The two main skids, from Goodyear, were of Waspalloy nickel steel and mounted wire bristles of Rene 41. These gave a high coefficient of friction, enabling the vehicle to skid to a stop in a planned length of 5,000 feet while accommodating run­way irregularities. In place of the usual oleo strut, a long rod of Inconel stretched at the moment of touchdown and took up the energy of impact, thereby serving as a shock absorber. The nose skid, from Bendix, was forged from Rene 41 and had an undercoat of tungsten carbide to resist wear. Fitted with its own energy-absorbing Inconel rod, the front skid had a reduced coefficient of friction, which helped to keep the craft pointing straight ahead during slide-out.[1053]

Through such means, the Dyna-Soar program took long strides toward establishing hot structures as a technology suitable for opera­tional use during reentry from orbit. The X-15 had introduced heat sink fabricated from Inconel X, a nickel steel. Dyna-Soar went considerably further, developing radiation-cooled insulated structures fabricated from Rene 41 and from refractory materials. A chart from Boeing made the point that in 1958, prior to Dyna-Soar, the state of the art for advanced aircraft structures involved titanium and stainless steel, with tempera­ture limits of 600 °F. The X-15 with its Inconel X could withstand tem­peratures above 1,200 °F. Against this background, Dyna-Soar brought substantial advances in the temperature limits of aircraft structures.[1054]

Understanding of FBW Benefits

By the early 1970s, the full range of benefits made possible by the use of fly-by-wire flight control had become ever more apparent to aircraft designers and pilots. Relevant technologies were rapidly maturing, and various forms of fly-by-wire flight control had successfully been implemented in missiles, aircraft, and spacecraft. Fly-by-wire had many advantages over more conventional flight control systems, in addition to those made possible by the elimination of mechanical linkages. A computer-controlled fly-by-wire flight control system could generate integrated pitch, yaw, and roll control instructions at very high rates to maintain the directed flight path. It would automatically provide artificial stability by constantly compensating for any flight path deviations. When the pilot moved his cockpit controls, commands would automatically be generated to modify the artificial stability enough to enable the desired maneuvers to be accomplished. It could also prevent the pilot from commanding maneuvers that would exceed established aircraft limits in either acceleration or angle of attack. Additionally, the flight control system could automatically extend high-lift devices, such as flaps, to improve maneuverability.

Conceptual design studies indicated that active fly-by-wire flight control systems could enable new aircraft to be developed that featured smaller aerodynamic control surfaces. This was possible by reducing the inherent static stability traditionally designed into conventional aircraft. The ability to relax stability while maintaining good handling qualities could also lead to improved agility. Agility is a measure of an aircraft’s ability to rapidly change its position. In the 1960s, a concept known as energy maneuverability was developed within the Air Force in an attempt to quantify agility. This concept states that the energy state of a maneuvering aircraft can be expressed as the sum of its kinetic energy and its potential energy. An aircraft that possesses higher overall energy inherently has higher agility than another aircraft with lower energy. The ability to retain a high-energy state while maneuvering requires high excess thrust and low drag at high-lift maneuvering conditions.[1148] Aircraft designers began synthesizing unique conceptual fighter designs using energy maneuver theory along with exploiting an aerodynamic phenomenon known as vortex lift.[1149] This approach, coupled with computer-controlled fly-by-wire flight control systems, was felt to be a key to unique new fighter aircraft with very high agility levels.
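In its usual form (written out here for illustration; the narrative above describes the concept only in words), energy maneuverability works with specific energy, the aircraft’s total mechanical energy per unit weight, and its time rate of change, the specific excess power:

\[
E_s \;=\; h + \frac{V^{2}}{2g}, \qquad
P_s \;=\; \frac{dE_s}{dt} \;=\; \frac{(T-D)\,V}{W},
\]

where h is altitude, V is true airspeed, g is the acceleration of gravity, T is thrust, D is drag, and W is aircraft weight. The expression for P_s makes explicit why high excess thrust (T − D) and low drag at high-lift conditions allow a fighter to retain a high-energy state while maneuvering.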

Neutrally stable or even unstable aircraft appeared to be within the realm of practical reality and were the subject of ever increasing interest and widespread study in NASA and the Air Force, as well as in foreign governments and the aerospace industry. Often referred to at the time as Control Configured Vehicles, such aircraft could be optimized for specific missions with fly-by-wire flight control system characteristics
designed to improve aerodynamic performance, maneuverability, and agility while reducing airframe weight. Other CCV possibilities included the ability to control structural loads while maneuvering (maneuver load control) and the potential for implementation of unconventional control modes. Maneuver load control could allow new designs to be optimized, for example, by using automated control surface deflections to actively modify the spanwise lift distribution to alleviate wing bending loads on larger aircraft. Unconventional or decoupled control modes would be possible by using various combinations of direct-force flight controls to change the aircraft flight path without changing its attitude or, alter­natively, to point the aircraft without changing the aircraft flight path. These unconventional flight control modes were felt at the time to pro­vide an improved ability to point and fire weapons during air combat.[1150]

In summary, the full range of benefits possible through the application of active fly-by-wire flight control in properly tailored aircraft design applications was understood to include:

• Enhanced performance and improved mission effective­ness made possible by the incorporation of relaxed static stability and automatically activated high-lift devices into mission-optimized aircraft designs to reduce drag, optimize lift, and improve agility and handling quali­ties throughout the flight and maneuvering envelope.

• New approaches to aircraft control, such as the use of automatically controlled thrust modulation and thrust vectoring fully integrated with the movement of the air­craft’s aerodynamic flight control surfaces and activa­tion of its high-lift devices.

• Increased safety provided by automatic angle-of-attack and angle-of-sideslip suppression as well as automatic limiting of normal acceleration and roll rates. These measures protect from stall and/or loss of control, pre­vent inadvertent overstressing of the airframe, and give the pilot maximum freedom to focus on effectively maneuvering the aircraft.

• Improved survivability made possible by the elimination of highly vulnerable hydraulic lines and incorporation of fault-tolerant flight control system designs and components.

• Greatly improved flight control system reliability and lower maintenance costs resulting from less mechani­cal complexity and automated built-in system test and diagnostic capabilities.

• Automatic flight control system reconfiguration to allow safe flight, recovery, and landing following battle damage or system failures.

Aircraft Certification Contributions

Certification of new aircraft with digital fly-by-wire flight control systems, especially for civilian airline service, requires software designs that provide highly reliable, predictable, and repeatable performance. For this reason, NASA experts concluded that a comprehensive understanding of all possible software system behaviors is essential, especially in the case of highly complex systems. This knowledge base must be formally documented and accurately communicated for both design and system certification purposes. This was highlighted in a 1993 research paper sponsored by NASA and the Federal Aviation Administration (FAA) that noted:

This formal documentation process would prove to be a tre­mendously difficult and challenging task. It was only feasible if the underlying software was rationally designed using prin­ciples of abstraction, layering, information-hiding, and any other technique that can advance the intellectual manage­ability of the task. This calls strongly for an architecture that promotes separation of concerns (whose lack seems to be the main weakness of asynchronous designs), and for a method of description that exposes the rationale for design decisions and that allows, in principle, the behavior of the system to be calculated (i. e., predicted or, in the limit, proved) . . . formal methods can make their strongest contribution to quality assur­ance for ultra-dependable systems: they address (as nothing else does) [NASA engineer Dale] Mackall’s plea for ‘a method to make system designs more understandable, more visible.'[1205]

Formal software development methodologies for critical aeronautical and space systems developments have been implemented within NASA and are contained in certification guidebooks and other documents for use by those involved in mission critical computer and software systems.[1206] Designed to help transition Formal Methods from experimental use into
practical application for critical software requirements and systems design within NASA, they discuss technical issues involved in applying Formal Methods techniques to aerospace and avionics software systems. Dryden’s flight-test experience and the observations obtained from flight-testing of such systems were exceptionally well-documented and would prove to be highly relevant to NASA, the FAA, and military service programs oriented to developing Formal Methods and structured approaches in the design, development, verification, validation, testing, and certification of aircraft with advanced digital flight control systems.[1207] The NASA DFBW F-8 and AFTI/F-16 experiences (among many others) were also used as background by Government and industry experts tasked with preparing the FAA Digital Systems Validation Handbook. Today, the FAA uses Formal Methods in the specification and verification of software and hardware requirements, designs, and implementations; in the identification of the benefits, weaknesses, and difficulties in applying these Formal Methods to digital systems used in critical applications; and in support of aircraft software systems certification.

NASA Advanced Control Technology for Integrated Vehicles

In 1994, after the conclusion of Air Force S/MTD testing, the aircraft was transferred to NASA Dryden for the NASA Advanced Control Technology for Integrated Vehicles (ACTIVE) research project. ACTIVE was oriented to determining if axisymmetric vectored thrust could contribute to drag reduction and increased fuel economy and range compared with conventional aerodynamic controls. The project was a collaborative effort between NASA, the Air Force Research Laboratory, Pratt & Whitney, and Boeing (formerly McDonnell Douglas). An advanced digital fly-by-wire flight control system was integrated into the NF-15B, which was given NASA tail No. 837. Higher-thrust versions of the Pratt & Whitney F100 engine with newly developed axisymmetric thrust-vectoring engine exhaust nozzles were installed. The nozzles could deflect engine exhaust up to 20 degrees off centerline. This allowed variable thrust control in both pitch and yaw, or combinations of the two axes. An integrated propulsion and flight control system controlled both aerodynamic flight control surfaces and the engines. New cockpit controls and electronics from an F-15E aircraft were also installed in the NF-15B. The first supersonic flight using yaw vectoring occurred in early 1996. Pitch and yaw thrust vectoring were demonstrated at speeds up to Mach 2.0, and yaw vectoring was used at angles of attack up to 30 degrees. An adaptive performance software program was developed and successfully tested in the NF-15B flight control computer. It automatically determined the optimal setting or trim for the thrust-vectoring nozzles and the aerodynamic control surfaces to minimize aircraft drag. An improvement of Mach 0.1 in level flight was achieved at Mach 1.3 at 30,000 feet with no increase in engine thrust. The ACTIVE NF-15B continued investigations of integrated flight and propulsion control with thrust-vectoring during 1997 and 1998, including an experiment that combined thrust vectoring with aerodynamic controls during simulated ground attack missions. Following completion of the ACTIVE project, the NF-15B was used as a testbed for several other NASA Dryden research experiments, which included the efforts described below.[1275]

Fuel Efficiency Takes Flight

Caitlin Harrington

Decades of NASA research have led to breakthroughs in understanding the physical processes of pollution and determining how to secure unprecedented levels of propulsion and aerodynamic efficiency to reduce emissions. Goaded by recurring fuel supply crises, NASA has responded with a series of research plans that have dramatically improved the efficiency of gas turbine propulsion systems and the lift-to-drag ratio of new aircraft designs, and addressed myriad other challenges.

Although NASA’s aeronautics budget has fallen dramatically in recent years,[1372] the Agency has nevertheless managed to spearhead some of America’s biggest breakthroughs in fuel-efficient and environmentally friendly aircraft technology. The National Aeronautics and Space Administration (NASA) has engaged in major programs to increase aircraft fuel efficiency that have laid the groundwork for engines, airframes, and new energy sources—such as alternative fuel and fuel cells—that are still in use today. NASA’s research on aircraft emissions in the 1970s also was groundbreaking, leading to a widely accepted view at the national—and later, global—level that pollution can damage the ozone layer and spawning a series of efforts inside and outside NASA to reduce aircraft emissions.[1373]

This case study will explore NASA’s efforts to improve the fuel efficiency of aircraft and also reduce emissions, with a heavy emphasis on the 1970s, when the energy crisis and environmental concerns created a national demand for "lean and green” airplanes.[1374] The launch of Sputnik in 1957 and the resulting space race with the Soviet Union spurred the National Advisory Committee for Aeronautics (NACA)—subsequently restructured within the new National Aeronautics and Space Administration—to shift its research heavily toward rocketry—at the expense of aeronautics—until the mid-1960s.[1375] But as commercial air travel grew in the 1960s, NASA began to embark on a series of ambitious programs that connected aeronautics, energy, and the environment. This case study will discuss some of NASA’s most important programs in this area.

Key propulsion initiatives to be discussed include the Energy Efficient Engine program—perhaps NASA’s greatest contribution to fuel-efficient flight—as well as later efforts to increase propulsion efficiency, including the Advanced Subsonic Technology (AST) initiative and the Ultra Efficient Engine Technology (UEET) program. Another propulsion effort that paved the way for the development of fuel-efficient engine technology was the Advanced Turboprop, which led to current NASA and industry attempts to develop fuel-efficient "open rotor” concepts.

In addition to propulsion research, this case study will also explore several NASA programs aimed at improving aircraft structures to pro­mote fuel efficiency, including initiatives to develop supercritical wings and winglets and efforts to employ laminar flow concepts. NASA has also sought to develop alternative fuels to improve performance, maximize efficiency, and minimize emissions; this case study will touch on liquid hydrogen research conducted by NASA’s predecessor—the NACA—as well as subsequent attempts to develop synthetic fuels to replace hydro­carbon-based jet fuel.