
Miscellaneous NASA Structural Analysis Programs

Note: Miscellaneous computer programs, and in some cases test facilities or other related projects, that have contributed to the advancement of the state of the art in various ways are described here. In some cases, there simply was not room to include them in the main body of the paper; in others, there was not enough information found, or not enough time to do further research, to adequately describe the programs and document their significance. Readers are advised that these are merely examples; this is not an exhaustive list of all computer programs developed by NASA for structural analysis to the 2010 time period. Dates indicate introduction of capability. Many of the programs were subsequently enhanced. Some of the programs were eventually phased out.

Hot Structures: Dyna-Soar

Reentry of ICBM nose cones and of satellites takes place at nearly the same velocity, with spacecraft returning from orbit at a standard velocity of Mach 25, but there are large differences in the technical means that have been studied for thermal protection. During the 1960s, it was commonly expected that such craft would be built as hot structures. In fact, however, the thermal protection adopted for the Shuttle was the well-known “tiles,” a type of reusable insulation.

The Dyna-Soar program, early in the ’60s, was first to face this issue. Dyna-Soar used a radiatively cooled hot structure, with the primary or load-bearing structure being of Rene 41. Trusses formed the primary structure of the wings and fuselage, with many of their beams meeting at joints that were pinned rather than welded.

Schematic drawing of the Boeing X-20A Dyna-Soar. USAF.

Thermal gradients, imposing differential expansion on separate beams, caused these members to rotate at the pins. This accommodated the gradients without imposing thermal stresses. Rene 41 was selected as a commercially available superalloy that had the best available combination of oxidation resistance and high-temperature strength. Its yield strength, 130,000 pounds per square inch (psi) at room temperature, fell off only slightly at 1,200 °F and retained useful values at 1,800 °F. It could be processed as sheet, strip, wire, tubes, and forgings. Used as primary structure of Dyna-Soar, it supported a design specification that stated that the craft was to withstand at least four reentries under the most severe conditions permitted.

As an alloy, Rene 41 had a standard composition of 19 percent chromium, 11 percent cobalt, 10 percent molybdenum, 3 percent titanium, and 1.5 percent aluminum, along with 0.09 percent carbon and 0.006 percent boron, with the balance being nickel. It gained strength through age hardening, with the titanium and aluminum precipitating within the nickel as an intermetallic compound. Age-hardening weldments initially showed susceptibility to cracking, which occurred in parts that had been strained through welding or cold working. A new heat-treatment process permitted full aging without cracking, with the fabricated assemblies showing no significant tendency to develop cracks.[1036]

As a structural material, the relatively mature state of Rene 41 reflected the fact that it had already seen use in jet engines. It nevertheless lacked the temperature resistance necessary for use in the metallic shingles or panels that were to form the outer skin of the vehicle, which were to reradiate the heat while withstanding temperatures as high as 3,000 °F. Here there was far less existing art, and investigators at Boeing had to find their way through a somewhat roundabout path. Four refractory or temperature-resistant metals initially stood out: tantalum, tungsten, molybdenum, and columbium. Tantalum was too heavy. Tungsten was not available commercially as sheet. Columbium also appeared to be ruled out, for it required an antioxidation coating, but vendors were unable to coat it without rendering it brittle. Molybdenum alloys also faced embrittlement because of recrystallization produced by a prolonged soak at high temperature in the course of coating formation. A promising alloy, Mo-0.5Ti, overcame this difficulty through addition of 0.07 percent zirconium. The alloy that resulted, Mo-0.5Ti-0.07Zr, was called TZM molybdenum. For a time it appeared as a highly promising candidate for all the outer panels.[1037]

Wing design also promoted its use, for the craft mounted a delta wing with leading-edge sweep of 73 degrees. Though built for hypersonic entry from orbit, it resembled the supersonic delta wings of contemporary aircraft such as the B-58 bomber. But this wing was designed using H. Julian Allen’s blunt-body principle, with the leading edge being thickly rounded (that is, blunted) to reduce the rate of heating. The wing sweep then reduced equilibrium temperatures along the leading edge to levels compatible with the use of TZM.[1038]

Boeing’s metallurgists nevertheless held an ongoing interest in columbium, because in uncoated form it showed superior ease of fabrication and lack of brittleness. A new Boeing-developed coating method eliminated embrittlement, putting columbium back in the running. A survey of its alloys showed that they all lacked the hot strength of TZM. Columbium nevertheless retained its attractiveness because it promised less weight. Based on coatability, oxidation resistance, and thermal emissivity, the preferred alloy was Cb-10Ti-5Zr, called D-36. It replaced TZM in many areas of the vehicle but proved to lack strength against creep at the highest temperatures. Moreover, coated TZM gave more of a margin against oxidation than coated D-36 did, again at the most extreme temperatures. D-36 indeed was chosen to cover most of the vehicle, including the flat underside of the wing. But TZM retained its advantage for such hot areas as the wing leading edges.[1039]

The vehicle had some 140 running feet of leading edges and 140 square feet of associated area. This included leading edges of the vertical fins and elevons as well as of the wings. In general, D-36 served when temperatures during reentry did not exceed 2,700 °F, while TZM was used for temperatures between 2,700 and 3,000 °F. In accordance with the Stefan-Boltzmann law, all surfaces radiated heat at a rate proportional to the fourth power of the temperature. Hence for equal emissivities, a surface at 3,000 °F radiated 44 percent more heat than one at 2,700 °F.[1040]
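
As a worked check of that figure, using the source’s own temperatures converted to absolute degrees Rankine (°F + 460):

$$\frac{q_{3000}}{q_{2700}}=\left(\frac{3000+460}{2700+460}\right)^{4}=\left(\frac{3460}{3160}\right)^{4}\approx 1.44$$

that is, roughly 44 percent more radiated heat from the hotter surface, for equal emissivities.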

Panels of both TZM and D-36 demanded antioxidation coatings. These coatings were formed directly on the surfaces as metallic silicides (silicon compounds), using a two-step process that employed iodine as a chemical intermediary. Boeing introduced a fluidized-bed method for application of the coatings that cut the time for preparation while enhancing uniformity and reliability. In addition, a thin layer of silicon carbide, applied to the surface, gave the vehicle its distinctive black color. It enhanced the emissivity, lowering temperatures by as much as 200 °F.

It was necessary to show that complete panels could withstand aerodynamic flutter. A report of the Aerospace Vehicles Panel of the Air Force Scientific Advisory Board came out in April 1962 and singled out the problem of flutter, citing it as one that called for critical attention. The test program used two NASA wind tunnels: the 4-foot by 4-foot Unitary facility at Langley that covered a range of Mach 1.6 to 2.8 and the 11-foot by 11-foot Unitary installation at Ames for Mach 1.2 to 1.4. Heaters warmed test samples to 840 °F as investigators started with steel panels and progressed to versions fabricated from Rene nickel alloy.

"Flutter testing in wind tunnels is inherently dangerous,” a Boeing review declared. "To carry the test to the actual flutter point is to risk destruction of the test specimen. Under such circumstances, the safety of the wind tunnel itself is jeopardized.” Panels under test were as large as 24 by 45 inches; flutter could have brought failure through fatigue, with parts of a specimen being blown through the tunnel at supersonic speed. Thus, the work started at dynamic pressures of 400 and 500 pounds per square foot (psf) and advanced over a year and a half to exceed the design requirement of close to 1,400 psf. Tests were concluded in 1962.[1041]

Between the outer panels and the inner primary structure, a corrugated skin of Rene 41 served as the substructure. On the upper wing surface and upper fuselage, where the temperatures were no higher than 2,000 °F, the thermal-protection panels were also of Rene 41 rather than of a refractory. Measuring 12 by 45 inches, these panels were spot-welded directly to the corrugations of the substructure. For the wing undersurface and for other areas that were hotter than 2,000 °F, designers specified an insulated structure. Standoff clips, each with four legs, were riveted to the underlying corrugations and supported the refractory panels, which also were 12 by 45 inches in size.

The space between the panels and the substructure was to be filled with insulation. A survey of candidate materials showed that most of them exhibited a strong tendency to shrink at high temperatures. This was undesirable; it increased the rate of heat transfer and could create uninsulated gaps at seams and corners. Q-felt, a silica fiber from Johns Manville, also showed shrinkage. However, nearly all of it occurred at 2,000 °F and below; above 2,000 °F, further shrinkage was negligible. This meant that Q-felt could be “pre-shrunk” through exposure to temperatures above 2,000 °F for several hours. The insulation that resulted had density no greater than 6.2 pounds per cubic foot, one-tenth that of water. In addition, it withstood temperatures as high as 3,000 °F.[1042]

TZM outer panels, insulated with Q-felt, proved suitable for wing leading edges. These were designed to withstand equilibrium temperatures of 2,825 °F and short-duration over-temperatures of 2,900 °F. But the nose cap faced temperatures of 3,680 °F along with a peak heat flux of 143 BTU/ft2/sec. This cap had a radius of curvature of 7.5 inches, making it far less blunt than the contemporary Project Mercury heat shield that had a radius of 120 inches.[1043] Its heating was correspondingly more severe. Reliable thermal protection of the nose was essential, so the program conducted two independent development efforts that used separate technical approaches. The firm of Chance Vought pursued the main line of activity, while Boeing also devised its own nose cap design.
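
That severity can be sketched with the classical stagnation-point approximation, in which, other conditions being equal, heating rate varies inversely with the square root of the nose radius $R_N$. Applied to the two radii quoted above it is only a rough scaling, but it conveys the magnitude:

$$\dot{q}\propto\frac{1}{\sqrt{R_{N}}}\qquad\Longrightarrow\qquad\frac{\dot{q}_{\,\text{Dyna-Soar}}}{\dot{q}_{\,\text{Mercury}}}\approx\sqrt{\frac{120}{7.5}}=4$$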

The work at Vought began with a survey of materials that paralleled Boeing’s review of refractory metals for the thermal-protection panels. Molybdenum and columbium had no strength to speak of at the pertinent temperatures, but tungsten retained useful strength even at 4,000 °F. But that metal could not be welded, while no coating could protect it against oxidation. Attention then turned to nonmetallic materials, including ceramics.

Ceramics of interest existed as oxides such as silica and magnesia, which meant that they could not undergo further oxidation. Magnesia proved to be unsuitable because it had low thermal emittance, while silica lacked strength. However, carbon in the form of graphite showed clear promise. It had considerable industrial experience behind it; it was light in weight, while its strength actually increased with temperature. It oxidized readily but could be protected up to 3,000 °F by treating it with silicon, in vacuum and at high temperatures, to form a thin protective layer of silicon carbide. Near the stagnation point, the temperatures during reentry would exceed that level. This brought the concept of a nose cap with siliconized graphite as the primary material and with an insulated layer of a temperature-resistant ceramic covering its forward area. With graphite having good properties as a heat sink, it would rise in temperature uniformly and relatively slowly, while remaining below the 3,000 °F limit throughout the full time of the reentry.

Suitable grades of graphite proved to be available commercially from the firm of National Carbon. Candidate insulators included hafnia, thoria, magnesia, ceria, yttria, beryllia, and zirconia. Thoria was the most refractory but was very dense and showed poor resistance to thermal shock. Hafnia brought problems of availability and of reproducibility of properties. Zirconia stood out. Zirconium, its parent metal, had found use in nuclear reactors; the ceramic was available from the Zirconium Corporation of America. It had a melting point above 4,500 °F, was chemically stable and compatible with siliconized graphite, offered high emittance with low thermal conductivity, provided adequate resistance to thermal shock and thermal stress, and lent itself to fabrication.[1044]

For developmental testing, Vought used two in-house facilities that simulated the flight environment, particularly during reentry. A ramjet, fueled with JP-4 and running with air from a wind tunnel, produced an exhaust with velocity up to 4,500 ft/sec and temperature up to 3,500 °F. It also generated acoustic levels above 170 decibels (dB), reproducing the roar of a Titan III booster and showing that samples under test could withstand the resulting stresses without cracking. A separate installation, built specifically for the Dyna-Soar program, used an array of propane burners to test full-size nose caps.

The final Vought design used a monolithic shell of siliconized graphite that was covered over its full surface by zirconia tiles held in place by thick zirconia pins. This arrangement relieved thermal stresses by permitting mechanical movement of the tiles. A heat shield stood behind the graphite, fabricated as a thick disk-shaped container made of coated TZM sheet metal and filled with Q-felt. The nose cap was attached to the vehicle with a forged ring and clamp that also were of coated TZM. The cap as a whole relied on radiative cooling. It was designed to be reusable; like the primary structure, it was to withstand four reentries under the most severe conditions permitted.[1045]

The backup Boeing effort drew on that company’s own test equipment. Study of samples used the Plasma Jet Subsonic Splash Facility, which created a jet with temperature as high as 8,000 °F that splashed over the face of a test specimen. Full-scale nose caps went into the Rocket Test Chamber, which burned gasoline to produce a nozzle exit velocity of 5,800 ft/sec and an acoustic level of 154 dB. Both installations were capable of long-duration testing, reproducing conditions during reentries that could last for 30 minutes.[1046]

The Boeing concept used a monolithic zirconia nose cap that was reinforced against cracking with two screens of platinum-rhodium wire. The surface of the cap was grooved to relieve thermal stress. Like its counterpart from Vought, this design also installed a heat shield that used Q-felt insulation. However, there was no heat sink behind the zirconia cap. This cap alone provided thermal protection at the nose, through radiative cooling. Lacking pinned tiles and an inner shell, its design was simpler than that of Vought.[1047]

Its fabrication bore comparison to the age-old work of potters, who shape wet clay on a rotating wheel and fire the resulting form in a kiln. Instead of using a potter’s wheel, Boeing technicians worked with a steel die with an interior in the shape of a bowl. A paper honeycomb, reinforced with Elmer’s Glue and laid in place, defined the pattern of stress-relieving grooves within the nose cap surface. The working material was not moist clay but a mix of zirconia powder with binders, internal lubricants, and wetting agents.

With the honeycomb in position against the inner face of the die, a specialist loaded the die by hand, filling the honeycomb with the damp mix and forming layers of mix that alternated with the wire screens. The finished layup, still in its die, went into a hydraulic press. A pressure of 27,000 psi compacted the form, reducing its porosity for greater strength and less susceptibility to cracks. The cap was dried at 200 °F, removed from its die, dried further, and then fired at 3,300 °F for 10 hours. The paper honeycomb burned out in the course of the firing. Following visual and x-ray inspection, the finished zirconia cap was ready for machining to shape in the attachment area, where the TZM ring-and-clamp arrangement was to anchor it to the fuselage.[1048]

The nose cap, outer panels, and primary structure all were built to limit their temperatures through passive methods: radiation and insulation. Active cooling also played a role, reducing temperatures within the pilot’s compartment and two equipment bays. These used a “water wall” that mounted absorbent material between sheet-metal panels to hold a mix of water and a gel. The gel retarded flow of this fluid, while the absorbent wicking kept it distributed uniformly to prevent hotspots.

During reentry, heat reached the water walls as it penetrated into the vehicle. Some of the moisture evaporated as steam, transferring heat to a set of redundant water-glycol loops that were cooled by liquid hydrogen from an onboard supply. A catalytic bed combined the stream of warmed hydrogen with oxygen that again came from an onboard supply. This produced gas that drove the turbine of Dyna-Soar’s auxiliary power unit, which provided both hydraulic and electric power to the craft.

A cooled hydraulic system also was necessary, to move the control surfaces as on a conventional airplane. The hydraulic fluid operating temperature was limited to 400 °F by using the fluid itself as an initial heat-transfer medium. It flowed through an intermediate water-glycol loop that removed its heat by being cooled with hydrogen. Major hydraulic components, including pumps, were mounted within an actively cooled compartment. Control-surface actuators, along with associated valves and plumbing, were insulated using inch-thick blankets of Q-felt. Through this combination of passive and active cooling methods, the Dyna-Soar program avoided a need to attempt to develop truly high-temperature arrangements, remaining instead within the state of the art.[1049]

Specific vehicle parts and components brought their own thermal problems. Bearings, both ball and antifriction, needed strength to carry mechanical loads at high temperatures. For ball bearings, the cobalt-base superalloy Stellite 19 was known to be acceptable up to 1,200 °F. Investigation showed that it could perform under high load for short durations at 1,350 °F. Dyna-Soar nevertheless needed ball bearings qualified for 1,600 °F and obtained them as spheres of Rene 41 plated with gold. The vehicle also needed antifriction bearings as hinges for control surfaces, and here there was far less existing art. The best available bearings used stainless steel and were suitable only to 600 °F, whereas Dyna-Soar again faced a requirement of 1,600 °F. A survey of 35 candidate materials led to selection of titanium carbide with nickel as a binder.[1050]

Antenna windows demanded transparency to radio waves at similarly high temperatures. A separate program of materials evaluation led to selection of alumina, with the best grade being available from the Coors Porcelain Company.[1051]

NASA concepts for passive and actively cooled ablative heat shields, 1960. NASA.

The pilot needed his own windows. The three main ones, facing forward, were the largest yet planned for a piloted spacecraft. They had double panes of fused silica, with infrared-reflecting coatings on all surfaces except the outermost. This inhibited the inward flow of heat by radiation, reducing the load on the active cooling of the pilot’s compartment. The window frames expanded when hot; to hold the panes in position, those frames were fitted with springs of Rene. The windows also needed thermal protection, so they were covered with a shield of D-36. It was supposed to be jettisoned following reentry, around Mach 5, but this raised a question: what if it remained attached? The cockpit had two other windows, one on each side, which faced a less severe environment and were to be left unshielded throughout a flight. Over a quarter century earlier, Charles Lindbergh had flown the Spirit of St. Louis across the North Atlantic from New York to Paris using just side vision and a crude periscope. But that was a far cry from a plummeting lifting reentry vehicle. Now, test pilot Neil Armstrong flew Dyna-Soar-like approaches and landings in a modified Douglas F5D-1 fighter with side vision only and showed it was still possible.[1052]

The vehicle was to touch down at 220 knots. It lacked wheeled landing gear, for inflated rubber tires would have demanded their own cooled compartments. For the same reason, it was not possible to use a conventional oil-filled strut as a shock absorber. The craft therefore deployed tricycle landing skids. The two main skids, from Goodyear, were of Waspaloy nickel steel and mounted wire bristles of Rene 41. These gave a high coefficient of friction, enabling the vehicle to skid to a stop in a planned length of 5,000 feet while accommodating runway irregularities. In place of the usual oleo strut, a long rod of Inconel stretched at the moment of touchdown and took up the energy of impact, thereby serving as a shock absorber. The nose skid, from Bendix, was forged from Rene 41 and had an undercoat of tungsten carbide to resist wear. Fitted with its own energy-absorbing Inconel rod, the front skid had a reduced coefficient of friction, which helped to keep the craft pointing straight ahead during slide-out.[1053]

Through such means, the Dyna-Soar program took long strides toward establishing hot structures as a technology suitable for operational use during reentry from orbit. The X-15 had introduced heat sink fabricated from Inconel X, a nickel steel. Dyna-Soar went considerably further, developing radiation-cooled insulated structures fabricated from Rene 41 and from refractory materials. A chart from Boeing made the point that in 1958, prior to Dyna-Soar, the state of the art for advanced aircraft structures involved titanium and stainless steel, with temperature limits of 600 °F. The X-15 with its Inconel X could withstand temperatures above 1,200 °F. Against this background, Dyna-Soar brought substantial advances in the temperature limits of aircraft structures.[1054]

Understanding of FBW Benefits

By the early 1970s, the full range of benefits made possible by the use of fly-by-wire flight control had become ever more apparent to aircraft designers and pilots. Relevant technologies were rapidly maturing, and various forms of fly-by-wire flight control had successfully been implemented in missiles, aircraft, and spacecraft. Fly-by-wire had many advantages over more conventional flight control systems, in addition to those made possible by the elimination of mechanical linkages. A computer-controlled fly-by-wire flight control system could generate integrated pitch, yaw, and roll control instructions at very high rates to maintain the directed flight path. It would automatically provide artificial stability by constantly compensating for any flight path deviations. When the pilot moved his cockpit controls, commands would automatically be generated to modify the artificial stability enough to enable the desired maneuvers to be accomplished. It could also prevent the pilot from commanding maneuvers that would exceed established aircraft limits in either acceleration or angle of attack. Additionally, the flight control system could automatically extend high-lift devices, such as flaps, to improve maneuverability.
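
The limiting logic just described can be illustrated with a short sketch. The code below is purely notional: the limit values, function name, and lookahead scheme are hypothetical illustrations, not drawn from any actual flight control software, but they show the basic idea of clipping pilot commands against acceleration and angle-of-attack limits before surface commands are generated:

```python
# Notional sketch of fly-by-wire command limiting (all values hypothetical).
# A FBW computer clips the pilot's commanded maneuver against preset aircraft
# limits before it generates control surface commands.

MAX_LOAD_FACTOR_G = 7.5    # hypothetical positive load-factor limit, in g
MIN_LOAD_FACTOR_G = -3.0   # hypothetical negative load-factor limit, in g
MAX_AOA_DEG = 25.0         # hypothetical angle-of-attack limit, in degrees

def limit_pitch_command(commanded_g: float, aoa_deg: float,
                        aoa_rate_deg_s: float, lookahead_s: float = 0.5) -> float:
    """Return a load-factor command that respects the g and AoA limits."""
    # Clamp the commanded load factor to the structural envelope.
    g_cmd = max(MIN_LOAD_FACTOR_G, min(commanded_g, MAX_LOAD_FACTOR_G))
    # Predict angle of attack a short time ahead; if the limit would be
    # exceeded, unload the command toward 1-g flight.
    predicted_aoa = aoa_deg + aoa_rate_deg_s * lookahead_s
    if predicted_aoa > MAX_AOA_DEG:
        g_cmd = min(g_cmd, 1.0)
    return g_cmd
```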

Conceptual design studies indicated that active fly-by-wire flight control systems could enable new aircraft to be developed that featured smaller aerodynamic control surfaces. This was possible by reducing the inherent static stability traditionally designed into conventional aircraft. The ability to relax stability while maintaining good handling qualities could also lead to improved agility. Agility is a measure of an aircraft’s ability to rapidly change its position. In the 1960s, a concept known as energy maneuverability was developed within the Air Force in an attempt to quantify agility. This concept states that the energy state of a maneuvering aircraft can be expressed as the sum of its kinetic energy and its potential energy. An aircraft that possesses higher overall energy inherently has higher agility than another aircraft with lower energy. The ability to retain a high-energy state while maneuvering requires high excess thrust and low drag at high-lift maneuvering conditions.[1148] Aircraft designers began synthesizing unique conceptual fighter designs using energy maneuver theory along with exploiting an aerodynamic phenomenon known as vortex lift.[1149] This approach, coupled with computer-controlled fly-by-wire flight control systems, was felt to be a key to unique new fighter aircraft with very high agility levels.
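
In its standard form (a summary of the familiar energy-maneuverability relations, not a quotation from the Air Force studies), the energy state per unit weight is an “energy height,” and its rate of change, the specific excess power, depends on exactly the excess-thrust and drag terms noted above:

$$E_{s}=h+\frac{V^{2}}{2g},\qquad P_{s}=\frac{dE_{s}}{dt}=\frac{(T-D)\,V}{W}$$

where $h$ is altitude, $V$ true airspeed, $g$ the gravitational acceleration, $T$ thrust, $D$ drag, and $W$ weight; high excess thrust $(T-D)$ and low drag at high lift keep $P_{s}$, and hence agility, high.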

Neutrally stable or even unstable aircraft appeared to be within the realm of practical reality and were the subject of ever increasing interest and widespread study in NASA and the Air Force, as well as in foreign governments and the aerospace industry. Often referred to at the time as Control Configured Vehicles, such aircraft could be optimized for specific missions with fly-by-wire flight control system characteristics designed to improve aerodynamic performance, maneuverability, and agility while reducing airframe weight. Other CCV possibilities included the ability to control structural loads while maneuvering (maneuver load control) and the potential for implementation of unconventional control modes. Maneuver load control could allow new designs to be optimized, for example, by using automated control surface deflections to actively modify the spanwise lift distribution to alleviate wing bending loads on larger aircraft. Unconventional or decoupled control modes would be possible by using various combinations of direct-force flight controls to change the aircraft flight path without changing its attitude or, alternatively, to point the aircraft without changing the aircraft flight path. These unconventional flight control modes were felt at the time to provide an improved ability to point and fire weapons during air combat.[1150]

In summary, the full range of benefits possible through the application of active fly-by-wire flight control in properly tailored aircraft design applications was understood to include:

• Enhanced performance and improved mission effectiveness made possible by the incorporation of relaxed static stability and automatically activated high-lift devices into mission-optimized aircraft designs to reduce drag, optimize lift, and improve agility and handling qualities throughout the flight and maneuvering envelope.

• New approaches to aircraft control, such as the use of automatically controlled thrust modulation and thrust vectoring fully integrated with the movement of the aircraft’s aerodynamic flight control surfaces and activation of its high-lift devices.

• Increased safety provided by automatic angle-of-attack and angle-of-sideslip suppression as well as automatic limiting of normal acceleration and roll rates. These measures protect from stall and/or loss of control, prevent inadvertent overstressing of the airframe, and give the pilot maximum freedom to focus on effectively maneuvering the aircraft.

• Improved survivability made possible by the elimination of highly vulnerable hydraulic lines and incorporation of fault tolerant flight control system designs and components.

• Greatly improved flight control system reliability and lower maintenance costs resulting from less mechanical complexity and automated built-in system test and diagnostic capabilities.

• Automatic flight control system reconfiguration to allow safe flight, recovery, and landing following battle damage or system failures.

Aircraft Certification Contributions

Certification of new aircraft with digital fly-by-wire flight control systems, especially for civilian airline service, requires software designs that provide highly reliable, predictable, and repeatable performance. For this reason, NASA experts concluded that a comprehensive understanding of all possible software system behaviors is essential, especially in the case of highly complex systems. This knowledge base must be formally documented and accurately communicated for both design and system certification purposes. This was highlighted in a 1993 research paper sponsored by NASA and the Federal Aviation Administration (FAA) that noted:

This formal documentation process would prove to be a tremendously difficult and challenging task. It was only feasible if the underlying software was rationally designed using principles of abstraction, layering, information-hiding, and any other technique that can advance the intellectual manageability of the task. This calls strongly for an architecture that promotes separation of concerns (whose lack seems to be the main weakness of asynchronous designs), and for a method of description that exposes the rationale for design decisions and that allows, in principle, the behavior of the system to be calculated (i.e., predicted or, in the limit, proved) . . . formal methods can make their strongest contribution to quality assurance for ultra-dependable systems: they address (as nothing else does) [NASA engineer Dale] Mackall’s plea for ‘a method to make system designs more understandable, more visible.’[1205]

Formal software development methodologies for critical aeronautical and space systems developments have been implemented within NASA and are contained in certification guidebooks and other documents for use by those involved in mission critical computer and software systems.[1206] Designed to help transition Formal Methods from experimental use into practical application for critical software requirements and systems design within NASA, they discuss technical issues involved in applying Formal Methods techniques to aerospace and avionics software systems. Dryden’s flight-test experience and the observations obtained from flight-testing of such systems were exceptionally well-documented and would prove to be highly relevant to NASA, the FAA, and military service programs oriented to developing Formal Methods and structured approaches in the design, development, verification, validation, testing, and certification of aircraft with advanced digital flight control systems.[1207] The NASA DFBW F-8 and AFTI/F-16 experiences (among many others) were also used as background by Government and industry experts tasked with preparing the FAA Digital Systems Validation Handbook. Today, the FAA uses Formal Methods in the specification and verification of software and hardware requirements, designs, and implementations; in the identification of the benefits, weaknesses, and difficulties in applying these Formal Methods to digital systems used in critical applications; and in support of aircraft software systems certification.

NASA Advanced Control Technology for Integrated Vehicles

In 1994, after the conclusion of Air Force S/MTD testing, the aircraft was transferred to NASA Dryden for the NASA Advanced Control Technology for Integrated Vehicles (ACTIVE) research project. ACTIVE was oriented to determining if axisymmetric vectored thrust could contribute to drag reduction and increased fuel economy and range compared with conventional aerodynamic controls. The project was a collaborative effort between NASA, the Air Force Research Laboratory, Pratt & Whitney, and Boeing (formerly McDonnell-Douglas). An advanced digital fly-by-wire flight control system was integrated into the NF-15B, which was given NASA tail No. 837. Higher-thrust versions of the Pratt & Whitney F100 engine with newly developed axisymmetric thrust-vectoring engine exhaust nozzles were installed. The nozzles could deflect engine exhaust up to 20 degrees off centerline. This allowed variable thrust control in both pitch and yaw, or combinations of the two axes. An integrated propulsion and flight control system controlled both aerodynamic flight control surfaces and the engines. New cockpit controls and electronics from an F-15E aircraft were also installed in the NF-15B. The first supersonic flight using yaw vectoring occurred in early 1996. Pitch and yaw thrust vectoring were demonstrated at speeds up to Mach 2.0, and yaw vectoring was used at angles of attack up to 30 degrees. An adaptive performance software program was developed and successfully tested in the NF-15B flight control computer. It automatically determined the optimal setting or trim for the thrust-vectoring nozzles and the aerodynamic control surfaces to minimize aircraft drag. An improvement of Mach 0.1 in level flight speed was achieved at Mach 1.3 at 30,000 feet with no increase in engine thrust. The ACTIVE NF-15B continued investigations of integrated flight and propulsion control with thrust vectoring during 1997 and 1998, including an experiment that combined thrust vectoring with aerodynamic controls during simulated ground attack missions. Following completion of the ACTIVE project, the NF-15B was used as a testbed for several other NASA Dryden research experiments, which included the efforts described below.[1275]

Fuel Efficiency Takes Flight

Caitlin Harrington

Decades of NASA research have led to breakthroughs in understanding the physical processes of pollution and determining how to secure unprecedented levels of propulsion and aerodynamic efficiency to reduce emissions. Goaded by recurring fuel supply crises, NASA has responded with a series of research plans that have dramatically improved the efficiency of gas turbine propulsion systems and the lift-to-drag ratio of new aircraft designs, while addressing myriad other challenges.

Although NASA’s aeronautics budget has fallen dramatically in recent years,[1372] the Agency has nevertheless managed to spearhead some of America’s biggest breakthroughs in fuel-efficient and environmentally friendly aircraft technology. The National Aeronautics and Space Administration (NASA) has engaged in major programs to increase aircraft fuel efficiency that have laid the groundwork for engines, airframes, and new energy sources—such as alternative fuel and fuel cells—that are still in use today. NASA’s research on aircraft emissions in the 1970s also was groundbreaking, leading to a widely accepted view at the national—and later, global—level that pollution can damage the ozone layer and spawning a series of efforts inside and outside NASA to reduce aircraft emissions.[1373]

This case study will explore NASA’s efforts to improve the fuel efficiency of aircraft and also reduce emissions, with a heavy emphasis on the 1970s, when the energy crisis and environmental concerns created a national demand for “lean and green” airplanes.[1374] The launch of Sputnik in 1957 and the resulting space race with the Soviet Union spurred the National Advisory Committee for Aeronautics (NACA)—subsequently restructured within the new National Aeronautics and Space Administration—to shift its research heavily toward rocketry—at the expense of aeronautics—until the mid-1960s.[1375] But as commercial air travel grew in the 1960s, NASA began to embark on a series of ambitious programs that connected aeronautics, energy, and the environment. This case study will discuss some of NASA’s most important programs in this area.

Key propulsion initiatives to be discussed include the Energy Efficient Engine program—perhaps NASA’s greatest contribution to fuel-efficient flight—as well as later efforts to increase propulsion efficiency, including the Advanced Subsonic Technology (AST) initiative and the Ultra Efficient Engine Technology (UEET) program. Another propulsion effort that paved the way for the development of fuel-efficient engine technology was the Advanced Turboprop, which led to current NASA and industry attempts to develop fuel-efficient “open rotor” concepts.

In addition to propulsion research, this case study will also explore several NASA programs aimed at improving aircraft structures to promote fuel efficiency, including initiatives to develop supercritical wings and winglets and efforts to employ laminar flow concepts. NASA has also sought to develop alternative fuels to improve performance, maximize efficiency, and minimize emissions; this case study will touch on liquid hydrogen research conducted by NASA’s predecessor—the NACA—as well as subsequent attempts to develop synthetic fuels to replace hydrocarbon-based jet fuel.

Second-Generation DOE-NASA Wind Turbine Systems (Mod-2)

While the primary objectives of the Mod-0, Mod-0A, and Mod-1 programs were research and development, the primary goal of the second-generation Mod-2 project was direct and efficient commercial application. The Mod-2 program was designed to determine the potential cost-effectiveness of megawatt-sized remote site operation wind turbines when located in areas of moderate (14 mph) winds. Significant changes from the Mod-0 and Mod-1 included use of a soft-shell-type tower, an epicyclic gearbox, a quill shaft to attenuate torque and power oscillations, and a rotor designed primarily to commercial steel fabrication standards. Other significant changes were the switch from a fixed to a teetered (pivot connection) hub rotor, which reduced rotor fatigue, weight, and cost; use of tip control rather than full span control; and orienting the rotor upwind rather than downwind, which reduced rotor fatigue and resulted in a 2.5-percent increase in power produced by the system. Each of these changes resulted in a favorable decrease in the cost of electricity. One of the more important changes, as noted in a Boeing conference presentation, was the switch from the stiff truss type tower to a soft shell tower that weighed less, was much cheaper to fabricate, and enabled the use of heavy but economical and reliable rotor designs.[1505]

DOE-NASA Mod-2 megawatt wind turbine cluster, Goldendale, WA. NASA.

Four primary Mod-2 wind turbine units were designed, built, and operated under the second-generation phase of the DOE-NASA program. The first three machines were built as a cluster at Goldendale, WA, where the Department of Energy selected the Bonneville Power Administration as the participating utility. The operation of several wind turbines at one site afforded NASA the opportunity to study the effects of single and multiple wind turbines operating together while feeding into a power network. The Goldendale project demonstrated the successful operation of a cluster of large NASA Mod-2 horizontal-axis wind turbines operating in an unattended mode within a power grid. For construction of these machines, DOE-NASA awarded a competitively bid contract in 1977 to Boeing. The first of the three wind turbines started operation in November 1980, and the two additional machines went into service between March and May 1981. As of January 1985, the three-turbine cluster had generated over 5,100 megawatt-hours of electricity while synchronized to the power grid for over 4,100 hours. The Mod-2 machines had a rated power of 2.5 megawatts, a rotor-blade diameter of 300 feet, and a hub height (distance of the center of blade rotation to the ground) of 200 feet. Boeing evaluated a number of design options and tradeoffs, including upwind or downwind rotors, two- or three-bladed rotors, teetered or rigid hubs, soft or rigid towers, and a number of different drive train and power generation configurations. A fourth 2.5-megawatt Mod-2 wind turbine was purchased by the Department of the Interior, Bureau of Reclamation, for installation near Medicine Bow, WY, and a fifth turbine unit was purchased by Pacific Gas and Electric for operation in Solano County, CA.[1506]

Inventing the Supercritical Wing

Whitcomb was hardly an individual content to rest on his laurels or bask in the glow of previous successes, and after his success with area ruling, he wasted no time in moving further into the transonic and supersonic research regime. In the late 1950s, the introduction of practical subsonic commercial jetliners led many in the aeronautical community to place a new emphasis on what would be considered the next logical step: a Supersonic Transport (SST). John Stack recognized the importance of the SST to the aeronautics program in NASA in 1958. As NASA placed its primary emphasis on space, he and his researchers would work on the next plateau in commercial aviation. Through the Supersonic Transport Research Committee, Stack and his successor, Laurence K. Loftin, Jr., oversaw work on the design of a Supersonic Commercial Air Transport (SCAT). The goal was to create an airliner capable of outperforming the cruise performance of the Mach 3 North American XB-70 Valkyrie bomber. Whitcomb developed a six-engine, arrowlike, highly swept wing SST configuration, called SCAT 4, that stood out as possessing the best lift-to-drag (L/D) ratio among the Langley designs.[194]

Manufacturers’ analyses indicated that Whitcomb’s SCAT 4 exhibited the lowest range and highest weight among a group of designs that would generate high operating and fuel costs and was too heavy when compared with subsonic transports. Despite President John F. Kennedy’s June 1963 commitment to the development of “a commercially successful supersonic transport superior to that being built in any other country in the world,” Whitcomb saw the writing on the wall and quickly disassociated himself from the American supersonic transport program in 1963.[195] Always keeping in mind his priorities based on practicality and what he could do to improve the airplane, Whitcomb said: “I’m going back where I know I can make things pay off.”[196] For Whitcomb, practicality outweighed the lure of speed equated with technological progress.

Whitcomb decided to turn his attention back toward improving subsonic aircraft, specifically a totally new airfoil shape. Airfoils and wings had been evolving over the course of the 20th century. They reflected the ever-changing knowledge and requirements for increased aircraft performance and efficiency. They also represented the bright minds that developed them. The thin cambered airfoil of the Wright brothers, the thick airfoils of the Germans in World War I, the industry-standard Clark Y of the 1920s, and the NACA four- and five-digit series airfoils innovated by Eastman Jacobs exemplified advances in and general approaches toward airfoil design and theory.[197]

Despite these advances and others, subsonic aircraft flew at 85-percent efficiency.[198] The problem was that, as subsonic airplanes moved toward their maximum speed of 660 mph, increased drag and instability developed. Air moving over the upper surface of wings reached supersonic speeds, while the rest of the airplane traveled at a slower rate. As a result, the plane had to fly at slower speeds, at decreased performance and efficiency.[199]

When Whitcomb returned to transonic research in 1964, he specifically wanted to develop an airfoil for commercial aircraft that delayed the onset of high transonic drag near Mach 1 by reducing air friction and turbulence across an aircraft’s major aerodynamic surface, the wing. Whitcomb went intuitively against conventional airfoil design, in which the upper surface curved downward on the leading and trailing edges to create lift. He envisioned a smoother flow of air by turning a conventional airfoil upside down. Whitcomb’s airfoil was flat on top with a downward curved rear section.[200] The shape delayed the formation of shock waves and moved them further toward the rear of the wing to increase total wing efficiency. The rear lower surface formed into a deeper, more concave curve to compensate for the lift lost along the flattened wing top. The blunt leading edge facilitated better takeoff, landing, and maneuvering performance. Overall, Whitcomb’s airfoil slowed airflow, which lessened drag and buffeting, and improved stability.[201]

Whitcomb inspecting a supercritical wing model in the 8-Foot TPT. NASA.

With the wing captured in his mind’s eye, Whitcomb turned it into mathematical calculations and transformed his findings into a wind tunnel model created by his own hands. He spent days at a time in the 8-foot Transonic Pressure Tunnel (TPT), sleeping on a nearby cot when needed, as he took advantage of the 24-hour schedule to confirm his findings.[202]

Just as if he were still in his boyhood laboratory, Whitcomb stated: “When I’ve got an idea, I’m up in the tunnel. The 8-foot runs on two shifts, so you have to stay with the job 16 hours a day. I didn’t want to drive back and forth just to sleep, so I ended up bringing a cot out here.”[203]

Whitcomb and researcher Larry L. Clark published their wind tunnel findings in “An Airfoil Shape for Efficient Flight at Supercritical Mach Numbers,” which summarized much of the early work at Langley. Their investigation compared a supercritical airfoil with a NACA airfoil. They concluded that the former developed more abrupt drag rise than the latter.[204] Whitcomb presented those initial findings at an aircraft aerodynamics conference held at Langley in May 1966.[205] He called his new innovation a “supercritical wing” by combining “super” (meaning “beyond”) with “critical” Mach number, which is the speed at which supersonic flow reveals itself above the wing. Unlike a conventional wing, where a strong shock wave and boundary layer separation occurred in the transonic regime, a supercritical wing had both a weaker shock wave and less developed boundary layer separation. Whitcomb’s tests revealed that a supercritical wing with 35-degree sweep produced 5 percent less drag, improved stability, and encountered less buffeting than a conventional wing at speeds up to Mach 0.90.[206]

Langley Director of Aeronautics Laurence K. Loftin believed that Whitcomb’s new supercritical airfoil would reduce transonic drag and result in improved fuel economy. He also knew that wind tunnel data alone would not convince aircraft manufacturers to adopt the new airfoil. Loftin first endorsed the independent analyses of Whitcomb’s idea at the Courant Institute at New York University, which proved the viability of the concept. More importantly, NASA had to prove the value of the new technology to industry by actually building, installing, and flying the wing on an aircraft.[207]

The major players met in March 1967 to discuss turning Whitcomb’s concept into a reality. The practicalities of manufacturing, flight characteristics, structural integrity, and safety required a flight research program. The group selected the Navy Chance Vought F-8A fighter as the flight platform. The F-8A possessed specific attributes that made it ideal for the program. While not an airliner, the F-8A had an easily removable modular wing readymade for replacement, fuselage-mounted landing gear that did not interfere with the wing, engine thrust capable of operation in the transonic regime, and lower operating costs than a multi-engine airliner. Langley contracted Vought to design a supercritical wing for the F-8 and collaborated with Whitcomb during wind tunnel testing beginning during the summer of 1967. Unfortunately for the program, NASA Headquarters suspended all ongoing contracts in January 1968 and Vought withdrew from the program.[213]

SST Reincarnated: Birth of the High-Speed Civil Transport

For much of the next decade, the most active sonic boom research took place as part of the Air Force’s Noise and Sonic Boom Impact Technology (NSBIT) program. This was a comprehensive effort started in 1981 to study the noises resulting from military training and operations, especially those involving environmental impact statements and similar assessments. Although NASA was not intimately involved with NSBIT, Domenic Maglieri (just before his retirement from the Langley Center) and the recently retired Harvey Hubbard compiled a comprehensive annotated bibliography of sonic boom research, organized into 10 major areas, to help inform NSBIT participants of the most relevant sources of information.[460]

One of the noteworthy achievements of the NSBIT program was to continue building a detailed sonic boom database (known as Boomfile) on all U.S. supersonic aircraft by flying them over a large array of newly developed sensors at Edwards AFB in the summer of 1987. Called the Boom Event Analyzer Recorder (BEAR), these unmanned devices recorded the full sonic boom waveform in digital format.[461] Other contributions of NSBIT were long-term sonic boom monitoring of combat training areas, continued assessment of structures exposed to sonic booms, studies of the effects of sonic booms on livestock and wildlife, and intensified research on focused booms (long an issue with maneuvering fighter aircraft). The latter included a specialized computer program (derived from that originated by NASA’s Thomas) called PCBoom to predict these events.[462] In a separate project, fighter pilots were successfully trained to lay down super booms at specified locations (an idea first broached in the early 1950s).[463]

By the mid-1980s, the growing economic importance of nations in Asia was drawing attention to the long flight times required to cross the Pacific Ocean or the ability to reach most of Asia from Europe. The White House Office of Science and Technology (OST), reversing the administration’s initial opposition to civilian aeronautical research, took various steps to gain support for such activities. In March 1985, the OST released a report, “National Aeronautical R&D Goals: Technology for America’s Future,” which included a long-range supersonic transport.[464] Then, in his State of the Union Address in January 1986, President Reagan ignited interest in the possibility of a hypersonic transport—the National Aero-Space Plane (NASP)—dubbed the “Orient Express.” The Battelle Memorial Institute, which established the Center for High-Speed Commercial Flight in April 1986, became a focal point and influential advocate for these proposals.[465]

NASA had been working with the Defense Advanced Research Projects Agency (DARPA) on hypersonic technology for what became the NASP since the early 1980s. In February 1987, the OST issued an updated National Aeronautical R&D Goals, subtitled “Agenda for Achievement.” It called for both aggressively pursuing the NASP and developing the “fundamental technology, design, and business foundation for a long-range supersonic transport.”[466] In response, NASA accelerated its hypersonic research and began a new quest to develop commercially viable supersonic technology. This started with contracts to Boeing and Douglas aircraft companies in October 1986 for market and feasibility studies on what was now named the High-Speed Civil Transport (HSCT), accompanied by several internal NASA assessments. These studies soon ruled out hypersonic speeds (above Mach 5) as being impractical for passenger service. Eventually, NASA and its industry partners settled on a cruise speed of Mach 2.4.[467] Although only marginally faster than the Concorde, the HSCT was expected to double its range and carry three times as many passengers. Meanwhile, the NASP survived as a NASA-DOD program (the X-30) until 1994, with its sonic boom potential studied by current and former NASA specialists.[468]

The contractual studies on the HSCT emphasized the need to resolve environmental issues, including the restrictions on cruising over land because of sonic booms, before it could meet the goal of efficient long-distance supersonic flight. On January 19-20, 1988, the Langley Center hosted a workshop on the status of sonic boom methodology and understanding. Sixty representatives from Government, academia, and industry attended—including many of those involved in the SST and SCR efforts and several from the Air Force’s NSBIT program. Working groups on sonic boom theory, minimization, atmospheric effects, and human response determined that the following areas most needed more research: boom carpets, focused booms, high-Mach predictions, atmospheric effects, acceptability metrics, signature prediction, and low-boom airframe designs.

The report from this workshop served as a baseline on the latest knowledge about sonic booms and some of the challenges that lay ahead. One of these was the disconnect between aerodynamic efficiency and lowering shock strength that had long plagued efforts at boom minimization. Simply stated, near-field shock waves from a streamlined airframe coalesce more readily into strong front and tail shocks, while the near-field shock waves from a higher-drag airframe are less likely to join together, thus allowing a more relaxed N-wave signature. This paradox (illustrated by Figure 6) would have to be solved before a low-boom supersonic transport would be both permissible and practical.[469]

Resolving the Challenge of Aerodynamic Damping

Researchers in the early supersonic era also faced the challenges posed by the lack of aerodynamic damping. Aerodynamic damping is the natural resistance of an airplane to rotational movement about its center of gravity while flying in the atmosphere. In its simplest form, it consists of forces created on aerodynamic surfaces that are some distance from the center of gravity (cg). For example, when an airplane rotates about the cg in the pitch axis, the horizontal tail, being some distance aft of the cg, will translate up or down. This translational motion produces a vertical lift force on the tail surface and a moment (force times distance) that tends to resist the rotational motion. This lift force opposes the rotation regardless of the direction of the motion. The resisting force will be proportional to the rate of rotation, or pitch rate. The faster the rotational rate, the larger will be the resisting force. The magnitude of the resisting tail lift force is dependent on the change in angle of attack created by the rotation. This change in angle of attack is the vector sum of the rotational velocity and the forward velocity of the airplane. For low forward velocities, the angle of attack change is quite large and the natural damping is also large. The high aerodynamic damping associated with the low speeds of the Wright brothers’ flights contributed a great deal to the brothers’ ability to control the static longitudinal instability of their early vehicles.

At very high forward speed, the same pitch rate will produce a much smaller change in angle of attack and thus lower damping. For practical purposes, all aerodynamic damping can be considered to be inversely proportional to true velocity. The significance of this is that an airplane's natural resistance to oscillatory motion, in all axes, disappears as the true speed increases. At hypersonic speeds (above Mach 5), any rotational disturbance will create an oscillation that will essentially not damp out by itself.
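
The geometry behind this falloff can be made concrete with a short numerical sketch. The example below is illustrative only; the 20-foot tail arm, the 5-degree-per-second pitch rate, and the two speeds are assumed values, not figures from the source.

```python
import math

# For a pitch rate q and a tail arm l_t behind the cg, the tail's
# vertical velocity is q * l_t, so its angle-of-attack change is
#   delta_alpha = atan((q * l_t) / V)  ~  q * l_t / V   for small angles,
# which is why the damping moment falls off inversely with true speed V.

def tail_alpha_change(pitch_rate_rad_s: float, tail_arm_ft: float,
                      true_speed_ft_s: float) -> float:
    """Change in tail angle of attack (radians) due to a pitch rate."""
    return math.atan2(pitch_rate_rad_s * tail_arm_ft, true_speed_ft_s)

q = math.radians(5.0)          # the same 5 deg/s pitch rate at both speeds
for v in (200.0, 2000.0):      # ft/s: roughly low-speed vs. supersonic flight
    da = math.degrees(tail_alpha_change(q, 20.0, v))
    print(f"V = {v:6.0f} ft/s -> tail delta-alpha = {da:.3f} deg")
# A tenfold speed increase yields roughly one-tenth the angle-of-attack
# change, and hence roughly one-tenth the natural damping moment.
```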

As airplanes flew ever faster, this lightly damped, oscillatory tendency became more obvious and was a hindrance to accurate weapons delivery for military aircraft and to pilot and passenger comfort for commercial aircraft. Evaluating the seriousness of the damping challenge in an era when aircraft design was changing markedly (from the straight-wing, propeller-driven airplane to the swept and delta wing jet and beyond) occupied a great amount of attention from the NACA and early NASA researchers, who recognized that it would pose a continuing hindrance to the exploitation of the transonic and supersonic regions, and the hypersonic beyond.[678]

In general, aerodynamic damping has a positive influence on handling qualities, because it tends to suppress the oscillatory tendencies of a naturally stable airplane. Unfortunately, it gradually disappears as speed increases, indicating the need for some artificial method of suppressing these oscillations during high-speed flight. In the pre-electronic flight control era, the solution was the modification of flight control systems to incorporate electronic damper systems, often referred to as Stability Augmentation Systems (SAS). A damper system for one axis consisted of a rate gyro measuring rotational rate in that axis, a gain-changing circuit that adjusted the size of the needed control command, and a servo mechanism that added control surface commands to those from the pilot's stick. Control surface commands were generated that were proportional to the measured rotational rate (feedback) but opposite in sign, thus driving the rotational rate toward zero.
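
A minimal sketch of that single-axis damper law follows; the function names and gain value are illustrative assumptions, not drawn from the source. The "gain-changing circuit" of the text corresponds to scheduling the gain with flight condition.

```python
# Rate-feedback damper: servo command proportional to measured rate,
# opposite in sign, added to the pilot's stick command.

def damper_command(pitch_rate_deg_s: float, gain: float = 0.2) -> float:
    """Surface deflection (deg) opposing the measured rotation."""
    return -gain * pitch_rate_deg_s

def servo_command(stick_cmd_deg: float, pitch_rate_deg_s: float) -> float:
    """Pilot command plus the damper's contribution, summed at the servo."""
    return stick_cmd_deg + damper_command(pitch_rate_deg_s)
```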

Damper systems were installed in at least one axis of all of the Century-series fighters (F-100 through F-107), and all were successful in stabilizing the aircraft in high-speed flight.[679] Development of stability augmentation systems, and their refinement through contractor, Air Force-Navy, and NACA-NASA testing, was crucial to meeting the challenge of developing Cold War airpower forces, made yet more demanding because the United States and the larger NATO alliance consciously chose a strategy of using advanced technology to generate high-leverage aircraft systems that could offset larger numbers of less individually capable Soviet-bloc designs.[680]

Early, simple damper systems were so-called single-string systems and were designed to be "fail-safe." A single gyro, servo, and wiring system was installed for each axis. The feedback gains were quite low, tailored to the damping requirements at high speed, at which very little control surface travel was necessary. The servo travel was limited to a very small value, usually less than 2 degrees of control surface movement. A failure in the system could drive the servo to its maximum travel, but the resulting transient motion was small and easily compensated for by the pilot. Loss of a damper at high speed thus reduced the comfort level or weapons delivery accuracy but was tolerable, and at the lower speeds associated with takeoff and landing, the natural aerodynamic damping was adequate.
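
The fail-safe property amounts to clamping the damper's authority, as the sketch below suggests (the gain is hypothetical; the 2-degree limit is the figure cited above). Even a hard-over failure can then command only a small, pilot-recoverable transient.

```python
SERVO_LIMIT_DEG = 2.0  # small authority limit, per the text

def limited_damper_command(pitch_rate_deg_s: float, gain: float = 0.2) -> float:
    """Rate feedback clamped to the servo's small authority limit."""
    raw = -gain * pitch_rate_deg_s
    return max(-SERVO_LIMIT_DEG, min(SERVO_LIMIT_DEG, raw))
```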

One of the first airplanes to utilize electronic redundancy in the design of its flight control system was the X-15 rocket-powered research airplane, which, at the time of its design, faced numerous unknowns. Because of the extreme flight conditions (Mach 6 and 250,000-foot altitude), the servo travel needed for damping was quite large, and the pilot could not have compensated had the servo received a hard-over signal.

The solution was the incorporation of an independent but identical feedback "monitoring" channel in addition to the "working" channel in each axis. The servo commands from the monitor and working channels were continuously compared, and when a disagreement was detected, the system was automatically disengaged and the servo centered. This provided a level of protection equivalent to that of the limited-authority fail-safe damper systems incorporated in the Century-series fighters. Two of the three X-15s retained this fail-safe damper system throughout the 9-year NASA-Air Force-Navy test program, although a backup roll rate gyro was added to provide fail-operational, fail-safe capability in the roll axis.[681] Refining the X-15's SAS required a great amount of analysis and simulator work before the pilots deemed it acceptable, particularly because the X-15's stability deteriorated markedly at higher angles of attack above Mach 2. Indeed, one of the major aspects of the X-15's research program was refining understanding of the complexities of hypersonic stability and control, particularly during reentry at high angles of attack.[682]
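
The working/monitor comparison reduces to a simple rule, sketched below with an assumed disagreement threshold: any sustained mismatch between the two identical channels disengages the system and centers the servo rather than risk a hard-over.

```python
DISAGREE_LIMIT_DEG = 0.5  # hypothetical comparison threshold

def monitored_command(working_deg: float, monitor_deg: float) -> tuple[float, bool]:
    """Compare working and monitor channels; return (servo command, engaged)."""
    if abs(working_deg - monitor_deg) > DISAGREE_LIMIT_DEG:
        return 0.0, False  # disagreement: disengage and center the servo
    return working_deg, True
```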

The electronic revolution dramatically reshaped design approaches to damping and stability. Once it was recognized that electronic assistance was beneficial to a pilot's ability to control an airplane, the concept evolved rapidly. By adding a third independent channel and some electronic voting logic, a failed channel could be identified and its signal "voted out" while the remaining two channels were kept active. If a second failure occurred (that is, if the two remaining channels did not agree), the system would be disconnected and the damper would become inoperable. Damper systems of this type were referred to as fail-operational, fail-safe (FOFS) systems. Further enhancement was provided by comparing the pilot's stick commands with the measured airplane response and using analog computer circuits to tailor servo commands so that the airplane response was nearly the same for all flight conditions. These systems were referred to as Command Augmentation Systems (CAS). The next step in the evolution was the incorporation of a mathematical model of the desired aircraft response into the analog computer circuitry. An error signal was generated by comparing the instantaneous measured aircraft response with the desired mathematical-model response, and the servo commands forced the airplane to fly per the mathematical model, regardless of the airplane's inherent aerodynamic tendencies. These systems were called "model-following."
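
The model-following principle can be sketched in a few lines; the first-order "desired response" model and all gains below are assumptions for illustration. The computer compares the measured response with the response of a mathematical model driven by the pilot's stick, and the servo command works to null the difference.

```python
def model_pitch_rate(stick_cmd: float, model_gain: float = 2.0) -> float:
    """Hypothetical model: desired pitch rate proportional to stick input."""
    return model_gain * stick_cmd

def model_following_command(stick_cmd: float, measured_rate: float,
                            error_gain: float = 1.5) -> float:
    """Servo command proportional to (desired - measured) response error."""
    error = model_pitch_rate(stick_cmd) - measured_rate
    return error_gain * error
```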

Even higher levels of redundancy were necessary for safe operation of these advanced control concepts after multiple failures, and the failure logic became increasingly complex. Establishing the proper "trip" levels, at which an erroneous comparison would result in the exclusion of one channel, was an especially challenging task. If the trip levels were too tight, a small difference between the outputs of two perfectly good gyros would result in nuisance trips, while trip levels that were too loose could result in a failed gyro not being recognized in a timely manner. Trip levels were usually adjusted during flight test to arrive at the safest settings.
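
A single-pass sketch of triple-channel voting with a trip level follows; the threshold is hypothetical (real values, as noted above, were tuned in flight test), and a real system would also carry state across time so the two surviving channels could be compared after a first failure.

```python
TRIP_LEVEL = 0.5  # hypothetical trip threshold

def vote_three(a: float, b: float, c: float) -> tuple[float, bool]:
    """Mid-value voting: return (servo command, still engaged)."""
    mid = sorted((a, b, c))[1]
    # A channel straying from the mid value by more than the trip level
    # is "voted out"; the mid value itself always survives.
    survivors = [ch for ch in (a, b, c) if abs(ch - mid) <= TRIP_LEVEL]
    if len(survivors) >= 2:
        return sum(survivors) / len(survivors), True
    return 0.0, False  # second failure: center the servo, disconnect
```

Tightening TRIP_LEVEL in this sketch produces the nuisance trips described above; loosening it delays recognition of a genuinely failed channel.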

NASA’s Space Shuttle orbiter utilized five independent control system computers. Four had identical software. This provided fail-operational, fail-operational, fail-safe (FOFOFS) capability. The fifth computer used a different software program with a "get-me-home” capability as a last resort (often referred to as the "freeze-dried” control system computer).