
Reusable Surface Insulation

Early in the 1960s, researchers at Lockheed introduced an entirely different approach to thermal protection, which in time became the standard. Ablatives were unrivalled for once-only use, but during that decade the hot structure continued to stand out as the preferred approach for reusable craft such as Dyna-Soar. As noted, it used an insulated primary or load-bearing structure with a skin of outer panels. These emitted heat by radiation, maintaining a temperature that was high but steady.

[Figure: Strength versus temperature for various superalloys, including Rene 41, the primary structural material used on the X-20 Dyna-Soar. The chart plots ultimate tensile strength against maximum temperature for materials including 2124-T851 and 7075-T7651 aluminum, Inconel 718, Rene 41, Lockalloy, Ti-6Al-4V, Haynes 188, and Inco 625. NASA.]

Metal fittings supported these panels, and while the insulation could be high in quality, these fittings unavoidably leaked heat to the underlying structure. This leakage raised difficulties in crafting the underlying structure from aluminum, or even from titanium, which had greater heat resistance. On Dyna-Soar, only Rene 41 would do.[1062]

Ablatives avoided such heat leaks while being sufficiently capable as insulators to permit the use of aluminum. In principle, a third approach combined the best features of hot structures and ablatives. It called for the use of temperature-resistant tiles, made perhaps of ceramic, which could cover the vehicle skin. Like hot-structure panels, they would radiate heat, while remaining cool enough to avoid thermal damage. In addition, they were to be reusable. They also were to offer the excellent insulating properties of good ablators, preventing heat from reaching the underlying structure—which once more might be of aluminum. This concept became known as reusable surface insulation (RSI). In time, it gave rise to the thermal protection of the Shuttle.

RSI grew out of ongoing work with ceramics for thermal protection. Ceramics had excellent temperature resistance, light weight, and good insulating properties. But they were brittle and cracked rather than stretched in response to the flexing under load of an underlying metal primary structure. Ceramics also were sensitive to thermal shock, as when heated glass breaks when plunged into cold water. In flight, such thermal shock resulted from rapid temperature changes during reentry.[1063]

Monolithic blocks of the ceramic zirconia had been specified for the nose cap of Dyna-Soar, but a different point of departure used mats of solid fiber in lieu of the solid blocks. The background to the Shuttle’s tiles lay in work with such mats that took place early in the 1960s at Lockheed Missiles and Space Company. Key people included R. M. Beasley, Ronald Banas, Douglas Izu, and Wilson Schramm. A Lockheed patent disclosure of December 1960 gave the first presentation of a reusable insulation made of ceramic fibers for use as a heat shield. Initial research dealt with casting fibrous layers from a slurry and bonding the fibers together.

Related work involved filament-wound structures that used long continuous strands. Silica fibers showed promise and led to an early success: a conical radome of 32-inch diameter built for Apollo in 1962. Designed for reentry, it had a filament-wound external shell and a lightweight layer of internal insulation cast from short fibers of silica. The two sections were densified with a colloid of silica particles and sintered into a composite. This gave a nonablative structure of silica composite reinforced with fiber. It never flew, as design requirements changed during the development of Apollo. Even so, it introduced silica fiber into the realm of reentry design.

Another early research effort, Lockheat, fabricated test versions of fibrous mats that had controlled porosity and microstructure. These were impregnated with organic fillers such as Plexiglas (methyl methacrylate). These composites resembled ablative materials, though the filler did not char. Instead it evaporated or volatilized, producing an outward flow of cool gas that protected the heat shield at high heat-transfer rates. The Lockheat studies investigated a range of fibers that included silica, alumina, and boria. Researchers constructed multilayer composite structures of filament-wound and short-fiber materials that resembled the Apollo radome. Impregnated densities were 40 to 60 lb/ft3, the higher density being close to that of water. Thicknesses of no more than an inch gave acceptably low back-face temperatures during simulations of reentry.

This work with silica-fiber ceramics was well underway during 1962. Three years later, a specific formulation of bonded silica fibers was ready for further development. Known as LI-1500, it was 89 percent porous and had a density of 15 lb/ft3, one-fourth that of water. Its external surface was impregnated with filler to a predetermined depth, again to provide additional protection during the most severe reentry heating. By the time this filler was depleted, the heat shield was to have entered a zone of more moderate heating, where the fibrous insulation alone could provide protection.

Initial versions of LI-1500, with impregnant, were intended for use with small space vehicles similar to Dyna-Soar that had high heating rates. Space Shuttle concepts were already attracting attention—the January 1964 issue of Astronautics & Aeronautics, the journal of the American Institute of Aeronautics and Astronautics, presents the thinking of the day—and in 1965 a Lockheed specialist, Maxwell Hunter, introduced an influential configuration called Star Clipper. His design employed LI-1500 for thermal protection.

Like other Shuttle concepts, Star Clipper was to fly repeatedly, but the need for an impregnant in LI-1500 compromised its reusability. In contrast to earlier entry vehicle concepts, however, Star Clipper was large, offering exposed surfaces that were sufficiently blunt to benefit from H. Julian Allen’s blunt-body principle. They had lower temperatures and heating rates, which made it possible to dispense with the impregnant. An unfilled version of LI-1500, which was inherently reusable, now could serve.

Here was the first concept of a flight vehicle with reusable insulation, bonded to the skin, which could reradiate heat in the fashion of a hot structure. However, the matted silica by itself was white and had low thermal emissivity, making it a poor radiator of heat. This brought excessive surface temperatures that called for thick layers of the silica insulation, adding weight. To reduce the temperatures and the thickness, the silica needed a coating that could turn it black for high emissivity. It then would radiate well and remain cooler.
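The underlying physics can be stated compactly. A reradiative surface in steady heating settles at the temperature where it radiates heat away as fast as it arrives; by the Stefan-Boltzmann law (the numbers below are illustrative, not drawn from the Lockheed work):

$$ q = \varepsilon \sigma T^4 \quad\Longrightarrow\quad T = \left(\frac{q}{\varepsilon \sigma}\right)^{1/4} $$

Because the equilibrium temperature scales as $\varepsilon^{-1/4}$, raising the emissivity from, say, 0.3 to 0.9 cuts the surface temperature by a factor of $(1/3)^{1/4} \approx 0.76$ at the same heating rate, which is why a blackened tile runs cooler than a white one.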

The selected coating was a borosilicate glass, initially with an admixture of Cr2O3 and later with silicon carbide, which further raised the emissivity. The glass coating and the silica substrate were both silicon dioxide; this assured a match of their coefficients of thermal expansion, to prevent the coating from developing cracks under the temperature changes of reentry. The glass coating could soften at very high temperatures to heal minor nicks or scratches. It also offered true reusability, surviving repeated cycles to 2,500 °F. A flight test came in 1968 as NASA Langley investigators mounted a panel of LI-1500 to a Pacemaker reentry test vehicle along with several candidate ablators. This vehicle carried instruments and was recovered. Its trajectory reproduced the peak heating rates and temperatures of a reentering Star Clipper. The LI-1500 test panel reached 2,300 °F and did not crack, melt, or shrink. This proof-of-concept test gave further support to the concept of high-emittance reradiative tiles of coated silica for thermal protection.[1064]

Lockheed conducted further studies at its Palo Alto Research Center. Investigators cut the weight of RSI by raising its porosity from the 89 percent of LI-1500 to 93 percent. The material that resulted, LI-900, weighed only 9 pounds per cubic foot, one-seventh the density of water.[1065] There also was much fundamental work on materials. Silica exists in three crystalline forms: quartz, cristobalite, and tridymite. These not only have high coefficients of thermal expansion but also show sudden expansion or contraction with temperature because of solid-state phase changes. Cristobalite is particularly noteworthy; above 400 °F, it expands by more than 1 percent as it transforms from one phase to another. Silica fibers for RSI were to be glass, an amorphous rather than a crystalline state with a very low coefficient of thermal expansion and an absence of phase changes. The glassy form thus offered superb resistance to thermal stress and thermal shock, which would recur repeatedly during each return from orbit.[1066]
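The quoted densities follow directly from the porosity figures. Taking the handbook density of amorphous fused silica, roughly 2.2 g/cm³ or about 137 lb/ft³, as the solid-phase value (an assumed reference number, not from the source):

$$ \rho_{\mathrm{RSI}} = (1-\phi)\,\rho_{\mathrm{silica}}: \qquad (1-0.89)(137) \approx 15\ \mathrm{lb/ft^3}, \qquad (1-0.93)(137) \approx 9.6\ \mathrm{lb/ft^3} $$

values consistent with the published figures for LI-1500 and LI-900.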

The raw silica fiber came from Johns Manville, which produced it from high-purity sand. At elevated temperatures, it tended to undergo "devitrification," transforming from a glass into a crystalline state. Then, when cooling, it passed through phase-change temperatures and the fiber suddenly shrank, producing large internal tensile stresses. Some fibers broke, giving rise to internal cracking within the RSI and degradation of its properties. These problems threatened to grow worse during subsequent cycles of reentry heating.

To prevent devitrification, Lockheed worked to remove impurities from the raw fiber. Company specialists raised the purity of the silica to 99.9 percent while reducing contaminating alkalis to as low as 6 parts per million. Lockheed proceeded to do these things not only in the laboratory but also in a pilot plant. This plant took the silica from raw material to finished tile, applying 140 process controls along the way. Established in 1970, the pilot plant was expanded in 1971 to attain a true manufacturing capability. Within this facility, Lockheed produced tiles of LI-1500 and LI-900 for use in extensive programs of test and evaluation. In turn, the increasing availability of these tiles encouraged their selection for Shuttle protection in lieu of a hot-structure approach.[1067]

General Electric (GE) also became actively involved, studying types of RSI made from zirconia and from mullite, 3Al2O3+2SiO2, as well as from silica. The raw fibers were commercial grade, with the zirconia coming from Union Carbide and the mullite from Babcock and Wilcox. Devitrification was a problem, but whereas Lockheed had addressed it by purifying its fiber, GE took the raw silica from Johns Manville and tried to use it with little change. The basic fiber, the Q-felt of Dyna-Soar, also had served as insulation on the X-15. It contained 19 different elements as impurities. Some were present at a few parts per million, but others—aluminum, calcium, copper, lead, magnesium, potassium, sodium—ran from 100 to 1,000 parts per million. In total, up to 0.3 percent was impurity.

General Electric treated this fiber with a silicone resin that served as a binder, pyrolyzing the resin and causing it to break down at high temperatures. This transformed the fiber into a composite, sheathing each strand with a layer of amorphous silica that had a purity of 99.98 percent or more. This high purity resulted from that of the resin. The amorphous silica bound the fibers together while inhibiting their devitrification. General Electric’s RSI had a density of 11.5 lb/ft3, midway between that of LI-900 and LI-1500.[1068]

Many Shuttle managers had supported hot structures, but by mid-1971 they were in trouble. In Washington, the Office of Management and Budget (OMB) now was making it clear that it expected to impose stringent limits on funding for the Shuttle, which brought a demand for new configurations that could cut the cost of development. Within weeks, the contractors did a major turnabout. They abandoned hot structures and embraced RSI. Managers were aware that it might take time to develop for operational use, but they were prepared to use ablatives for interim thermal protection and to switch to RSI once it was ready.[1069]

What brought this dramatic change? The advent of RSI production at Lockheed was critical. This drew attention from Max Faget, a longtime NACA-NASA leader who had kept his hand in the field of Shuttle design, offering a succession of conceptual design configurations that had helped to guide the work of the contractors. His most important concept, designated MSC-040, came out in September 1971 and served as a point of reference. It used RSI and proposed to build the Shuttle of aluminum rather than Rene 41 or anything similar.[1070]

Why aluminum? "My history has always been to take the most conservative approach," Faget explained subsequently. Everyone knew how to work with aluminum, for it was the most familiar of materials, but everything else carried large question marks. Titanium, for one, was literally a black art. Much of the pertinent shop-floor experience had been gained within the SR-71 program and was classified. Few machine shops had pertinent background, for only Lockheed had constructed an airplane—the SR-71—that used titanium hot structure. The situation was worse for columbium and the superalloys, for these metals had been used mostly in turbine blades. Lockheed had encountered serious difficulties as its machinists and metallurgists wrestled with titanium. With the Shuttle facing the OMB’s cost constraints, no one cared to risk an overrun while machinists struggled with the problems of other new materials.[1071]

NASA Langley had worked to build a columbium heat shield for the Shuttle and had gained a particularly clear view of its difficulties. It was heavier than RSI but offered no advantage in temperature resistance.

In addition, coatings posed serious problems. Silicides showed promise of reusability and long life, but they were fragile and easily damaged. A localized loss of coating could result in rapid oxygen embrittlement at high temperatures. Unprotected columbium oxidized readily, and above the melting point of its oxide, 2,730 °F, it could burst into flame.[1072] "The least little scratch in the coating, the shingle would be destroyed during reentry," Faget said. Charles Donlan, the Shuttle Program Manager at NASA Headquarters, placed this in a broader perspective in 1983:

Phase B was the first really extensive effort to put together studies related to the completely reusable vehicle. As we went along, it became increasingly evident that there were some problems. And then as we looked at the development problems, they became pretty expensive. We learned also that the metallic heat shield, of which the wings were to be made, was by no means ready for use. The slightest scratch and you are in trouble.[1073]

Other refractory metals offered alternatives to columbium, but even with them, the complexity of a hot structure militated against its selection. As a mechanical installation, it called for large numbers of clips, brackets, standoffs, frames, beams, and fasteners. Structural analysis loomed as a formidable task. Each of many panel geometries needed its own analysis, to show with confidence that the panels would not fail through creep, buckling, flutter, or stress under load. Yet this confidence might be fragile, for hot structures had limited ability to resist over-temperatures. They also faced the continuing issue of sealing panel edges against ingestion of hot gas during reentry.[1074]

In this fashion, having taken a long look at hot structures, NASA did an about-face as it turned toward the RSI that Lockheed’s Max Hunter had recommended as early as 1965. Then, in January 1972, President Richard Nixon gave his approval to the Space Shuttle program, thereby raising it to the level of a Presidential initiative. Within days, NASA’s Dale Myers spoke to a conference in Houston and stated that the Agency had made the basic decision to use RSI. Requests for proposal soon went out, inviting leading aerospace corporations to bid for the prime contract on the Shuttle orbiter, and North American won this $2.6-billion prize in July. However, the RSI wasn’t Lockheed’s. The proposal specified mullite RSI for the undersurface and forward fuselage, a design feature that had been held over from the company’s studies of a fully reusable orbiter during the previous year.[1075]

Still, was mullite RSI truly the one to choose? It came from General Electric and had lower emissivity than the silica RSI of Lockheed but could withstand higher temperatures. Yet the true basis for selection lay in the ability to withstand 100 reentries as simulated in ground test. NASA conducted these tests during the last 5 months of 1972, using facilities at its Ames, Johnson, and Kennedy Centers, with support from Battelle Memorial Institute.

The main series of tests ran from August to November and gave a clear advantage to Lockheed. That firm’s LI-900 and LI-1500 went through 100 cycles to 2,300 °F and met specified requirements for maintenance of low back-face temperature and minimal thermal conductivity. The mullite showed excessive back-face temperatures and higher thermal conductivity, particularly at elevated temperatures. As test conditions increased in severity, the mullite also developed coating cracks and gave indications of substrate failure.

The tests then introduced acoustic loads, with each cycle of the simulation now subjecting the RSI to loud roars of rocket flight along with the heating of reentry. LI-1500 continued to show promise. By mid-November, it demonstrated the equivalent of 20 cycles to 160 decibels, the acoustic level of a large launch vehicle, and 2,300 °F. A month later, NASA conducted what Lockheed describes as a "sudden death shootout": a new series of thermal-acoustic tests, in which the contending materials went into a single large 24-tile array at NASA Johnson. After 20 cycles, only Lockheed’s LI-900 and LI-1500 remained intact. In separate tests, LI-1500 withstood 100 cycles to 2,500 °F and survived a thermal overshoot to 3,000 °F, as well as an acoustic overshoot to 174 dB. Clearly, this was the material NASA wanted.[1076]

[Figure: Thermal protection system for the proposed National Hypersonic Flight Research Facility, 1978. NASA.]

As insulation, the tiles were astonishing. A researcher could heat one in a furnace until it was white hot, remove it, allow its surface to cool for a couple of minutes, and pick it up at its edges using his or her fingers, with its interior still at white heat. Lockheed won the thermal-protection subcontract in 1973, with NASA specifying LI-900 as the baseline RSI. The firm responded with preparations for a full-scale production facility in Sunnyvale, CA. With this, tiles entered the mainstream of thermal protection.

The NASA Digital Fly-By-Wire F-8 Program

A former Navy F-8C Crusader fighter was chosen for modification, with the goal being to both validate the benefits of a digital fly-by-wire aircraft flight control system and provide additional confidence in its use. Mel Burke had worked with the Navy to arrange for the transfer of four LTV F-8C Crusader supersonic fighters to the Flight Research Center. One would be modified for the F-8 Super Cruise Wing project, one was converted into the F-8 DFBW Iron Bird ground simulator, another was modified as the DFBW F-8, and one was retained in its basic service configuration and used for pilot familiarization training and general proficiency flying. When Burke left for a job at NASA Headquarters, Cal Jarvis, a highly experienced engineer who had worked on fly-by-wire systems on the X-15 and LLRV programs, took over as program manager. In March 1971, modifications began to create the F-8 DFBW Iron Bird simulator. The Iron Bird effort was planned to ensure that development of the ground simulator always kept ahead of conversion efforts on the DFBW flight-test aircraft. This, the very first F-8C built for the Navy in 1958 (bureau No. 145546), carried the NASA tail No. 802 along with a "DIGITAL FLY-BY-WIRE" logo painted in blue on its fuselage sides.

Highly Maneuverable Aircraft Technology

The Highly Maneuverable Aircraft Technology (HiMAT) program provides an interesting perspective on the use of unmanned research aircraft equipped with digital fly-by-wire flight control systems, one that is perhaps most relevant to today’s rapidly expanding fleet of unpiloted aircraft whose use has proliferated throughout the military services over the past decade. HiMAT research was conducted jointly by NASA and the Air Force Flight Dynamics Laboratory at NASA Dryden between 1979 and 1983. The project began in 1973, and, in August 1975, Rockwell International was awarded a contract to construct two HiMAT vehicles based on the use of advanced technologies applicable to future highly maneuverable fighter aircraft. Designed to provide a level of maneuverability that would enable a sustained 8-g turn at Mach 0.9 at an altitude of 25,000 feet, the HiMAT vehicles were approximately half the size of an F-16. Wingspan was about 16 feet, and length was 23.5 feet. A GE J85 turbojet that produced 5,000 pounds of static thrust at sea level powered the vehicle, which could attain about Mach 1.4.

Launched from the NASA B-52 carrier aircraft, the HiMAT weighed about 4,000 pounds, including 660 pounds of fuel. About 30 percent of the airframe consisted of experimental composite materials, mainly fiberglass and graphite epoxy. Rear-mounted swept wings, a digital flight control system, and controllable forward canards enabled exceptional maneuverability, with a turn radius about half that of a conventional piloted fighter. For example, at Mach 0.9 at 25,000 feet, the HiMAT could sustain an 8-g turn, while F-16 capability under the same conditions is about 4.5 g.[1211]

[Figure: Research on the HiMAT remotely piloted test vehicle was conducted by NASA and the Air Force Flight Dynamics Laboratory between 1979 and 1983. NASA.]
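The claimed halving of turn radius follows from the standard level-turn relation. Using standard-atmosphere values (illustrative, not from the source), the speed of sound at 25,000 feet is about 1,016 ft/s, so Mach 0.9 corresponds to roughly 915 ft/s:

$$ r = \frac{V^2}{g\sqrt{n^2-1}}: \qquad r_{n=8} \approx \frac{915^2}{32.2\sqrt{63}} \approx 3{,}300\ \mathrm{ft}, \qquad r_{n=4.5} \approx \frac{915^2}{32.2\sqrt{19.25}} \approx 5{,}900\ \mathrm{ft} $$

a ratio of a little over one-half, consistent with the comparison in the text.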

Ground-based digital fly-by-wire control systems, developed at Dryden on programs such as the DFBW F-8, were vital to the success of the HiMAT remotely piloted research vehicle approach. NASA Ames Research Center and Dryden worked closely with Rockwell International in design and development of the two HiMAT vehicles and their ground control system, rapidly bringing the test vehicles to flight status. Many tests that would have been required for a more conventional piloted research aircraft were eliminated, an approach largely made possible by extensive use of computational aerodynamic design tools developed at Ames. This resulted in drastic reductions in wind tunnel testing but made it necessary to devote several HiMAT flights to obtaining stability and control data needed for refinements to the digital flight control system.[1212]

The HiMAT flight-test maneuver autopilot was based on a design developed by Teledyne Ryan Aeronautical, then a well-known manufacturer of target drones and remotely piloted aircraft. Teledyne also developed the backup flight control system.[1213] Refining the vehicle control laws was an extremely challenging task. Dryden engineers and test pilots evaluated the contractor-developed flight control laws in a ground simulation facility and then tested them in flight, making adjustments until the flight control system performed properly. The HiMAT flight-test maneuver autopilot provided precise, repeatable control, enabling large quantities of reliable test data to be quickly gathered. It proved to be a broadly applicable technique for use in future flight research programs.[1214]

Launched from the NASA B-52 at 45,000 feet at Mach 0.68, the HiMAT vehicle was remotely controlled by a NASA research pilot in a ground station at Dryden, using control techniques similar to those in conventional aircraft. The flight control system used a ground-based computer interlinked with the HiMAT vehicle through an uplink and downlink telemetry system. The pilot used proportional stick and rudder inputs to command the computer in the primary flight control system. A television camera mounted in the cockpit provided visual cues to the pilot. A two-seat Lockheed TF-104G aircraft was used to chase each HiMAT mission. The TF-104G was equipped with remote control capability, and it could take control of the HiMAT vehicle if problems developed at the ground control site. A set of retractable skids was deployed for landing, which was accomplished on the dry lakebed adjacent to Dryden. Stopping distance was about 4,500 feet. During one of the HiMAT flight tests, a problem was encountered that resulted in a landing with the skids retracted. A timing change had been made in the ground-based HiMAT control system and in the onboard software that used the uplinked landing gear deployment command to extend the skids. Additionally, an onboard failure of one uplink receiver contributed to the anomaly. The timing change had been thoroughly tested with the onboard flight software. However, subsequent testing determined that the flight software operated differently when an uplink failure was present.[1215]

HiMAT research also brought about advances in digital flight control systems used to monitor and automatically reconfigure aircraft flight control surfaces to compensate for in-flight failures. HiMAT provided valuable information on a number of other advanced design features. These included integrated computerized flight control systems, aeroelastic tailoring, close-coupled canards and winglets, new composite airframe materials, and a digital integrated propulsion control system. Most importantly, the complex interactions of this set of then-new technologies to enhance overall vehicle performance were closely evaluated. The first HiMAT flight occurred July 27, 1979. The research program ended in January 1983, with the two vehicles completing a total of 26 flights, during which 11 hours of flying time were recorded.[1216] The two HiMAT research vehicles are today on exhibit at the NASA Ames Research Center and the Smithsonian Institution National Air and Space Museum.

Intelligent Flight Control System

Beginning in 1999, the NF-15B supported the Intelligent Flight Control System (IFCS) neural network project. This was oriented to developing a flight control system that could identify aircraft characteristics through the use of neural network technology in order to optimize performance and compensate for in-flight failures by automatically reconfiguring the flight control system. IFCS is an extension of the digital fly-by-wire flight control system and is intended to maintain positive aircraft control under certain failure conditions that would normally lead to loss of control. IFCS would automatically vary engine thrust and reconfigure flight control surfaces to compensate for in-flight failures. This is accomplished through the use of upgrades to the digital flight control system software that incorporate self-learning neural network technology. A neural network that could train itself to analyze flight properties of an aircraft was developed, integrated into the NASA NF-15B, and evaluated in flight testing. The neural network "learns" aircraft flight characteristics in real time, using inputs from the aircraft sensors and from error corrections provided by the primary flight control computer. It uses this information to create different aircraft flight characteristic models. The neural network learns to recognize when the aircraft is in a stable flight condition. If one of the flight control surfaces becomes damaged or nonresponsive, the IFCS detects this fault and changes the flight characteristic model for the aircraft. The neural network then drives the error between the reference model and the actual aircraft state to zero. Dryden test pilot Jim Smolka flew the first IFCS test mission on March 19, 1999, with test engineer Gerard Schkolnik in the rear cockpit.[1278]
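The error-nulling loop described here is, in outline, model-reference adaptive control. The following sketch is a minimal illustration of that idea, using first-order dynamics and the classic MIT adaptation rule; it is not the IFCS neural-network implementation, and every constant in it is invented for illustration.

```python
# Minimal model-reference adaptive control (MRAC) sketch: drive the error
# between a reference model and the actual plant ("aircraft") state to zero
# by adjusting a control gain online. First-order dynamics with the classic
# MIT rule; illustrative only, not NASA's IFCS neural-network controller.

dt, t_end = 0.01, 20.0
a, b = -2.0, 4.0         # plant: x' = a*x + b*u  (b unknown to the controller)
a_m, b_m = -2.0, 2.0     # reference model: x_m' = a_m*x_m + b_m*r
gamma = 0.5              # adaptation rate
x = x_m = theta = 0.0

for k in range(int(t_end / dt)):
    t = k * dt
    r = 1.0 if (t % 10.0) < 5.0 else -1.0   # square-wave pilot command
    u = theta * r                           # adaptive feedforward control
    e = x - x_m                             # model-following error
    theta += -gamma * e * x_m * dt          # MIT rule: adjust gain to null e
    x += (a * x + b * u) * dt               # integrate plant (forward Euler)
    x_m += (a_m * x_m + b_m * r) * dt       # integrate reference model

print(f"final error {e:+.4f}; adapted gain {theta:.3f} (ideal {b_m / b:.3f})")
```

Run long enough, the adapted gain approaches its ideal value and the model-following error collapses toward zero, the behavior the IFCS pursued with a far richer neural-network model in place of the single gain.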

The NF-15B IFCS test program provided the opportunity for a limited flight evaluation of a direct adaptive neural network-based flight control system.[1279] This effort was led by the Dryden Flight Research Center, with collaboration from the Ames Research Center, Boeing, the Institute for Scientific Research at West Virginia University, and the Georgia Institute of Technology.[1280] John Bosworth was the NASA Dryden IFCS chief engineer. Flight-testing of the direct adaptive neural network-based flight control system began in 2003 and evaluated the outputs of the neural network. The neural network had been pretrained using flight characteristics obtained for the F-15 S/MTD aircraft from wind tunnel testing. During this phase of testing, the neural network did not actually provide any flight control inputs in-flight. The outputs of the neural network were run directly to instrumentation for data collection purposes only.

In 2005, a fully integrated direct adaptive neural-network-based flight control system demonstrated that it could continuously provide error corrections and measure the effects of these corrections in order to learn new flight models or adjust existing ones. To measure the aircraft state, the neural network took a large number of inputs from the roll, pitch, and yaw axes and the aircraft’s control surfaces. If differences were detected between the measured aircraft state and the flight model, the neural network adjusted the outputs from the primary flight computer
to bring the differences to zero before they were sent to the actuator control electronics that moved the control surfaces.[1281] IFCS software evaluations with the NF-15B included aircraft handling qualities maneuvers, envelope boundary maneuvers, control surface excitations for real-time parameter identification that included pitch, roll, and yaw doublets, and neural network performance assessments.[1282] During NF-15B flight-testing, a simulated failure was introduced into the right horizontal stabilizer that simulated a frozen pitch control surface. Handling qualities were evaluated with and without neural network adaptation. The performance of the adaptation system was assessed in terms of its ability to decouple roll and pitch response and reestablish good onboard model tracking. Flight-testing with the simulated stabilator failure and the adaptive neural network flight control system adaptation showed general improvement in pitch response. However, a tendency for pilot-induced roll oscillations was encountered.[1283]

Concurrent with NF-15B IFCS flight-testing, NASA Ames conducted a similar neural network flight research program using a remotely controlled Experimental Air Vehicle (EAV) equipped with an Intelligent Flight Controller (IFC). Aerodynamically, the EAV was a one-quarter-scale model of the widely used Cessna 182 Skylane general aviation aircraft. The EAV was equipped with two electrical power supplies, one for the digital flight control system that incorporated the neural-network IFC capability and one for the avionics installation that included three video cameras to assist the pilots with situation awareness. Several pilots flew the EAV during the test program. Differences in individual pilot control techniques were found to have a noticeable effect on the performance of the Intelligent Flight Controller. Interestingly, IFCS flight-testing with the NF-15B aircraft uncovered many of the same issues related to the controller that the EAV program found. IFCS was determined to provide increased stability margins in the presence of large destabilizing failures. The adaptive system provided better closed-loop behavior with improved matching of the onboard reference model. However, the convergent properties of the controller were found to require improvement because continued maneuvering caused continued adaptation change. During ground simulator evaluation of the IFCS, a trained light-plane pilot was able to successfully land a heavily damaged large jet airliner despite the fact that he had no experience with such an aircraft. Test data from the IFCS program provided a basis for analysis and understanding of neural network-based adaptive flight control system technology as an option for implementation into future aircraft.[1284]

After a 35-year career, during which it had flown with McDonnell-Douglas, the Air Force, and NASA, the NF-15B was retired following its final flight, on January 30, 2009. During its 14 years at NASA Dryden, the aircraft had flown 251 times. The NF-15B will be on permanent display with a group of other retired NASA research aircraft at Dryden.[1285]

Lean and Clean Propulsion Systems

NASA’s efforts to improve engine design stand out as the Agency’s greatest breakthroughs in "lean and green" engine development because of their continuing relevance today. Engineers are constantly seeking to increase efficiency to make their engines more attractive to commercial airlines: with increased efficiency comes reduced fuel costs and increased performance in terms of speed, range, or payload.[1396] Emissions have also remained a concern for commercial aviation. The International Civil Aviation Organization (ICAO) has released increasingly strict standards for NOx emissions since 1981.[1397] The Environmental Protection Agency has adopted emissions standards to match those of ICAO and also has issued emissions standards for aircraft and aircraft engines under the Clean Air Act.[1398]

NASA’s most important contribution to fuel-efficient aircraft technology to date has arguably been E Cubed, a program focused on improving propulsion systems mainly to increase fuel efficiency. The end goal was not to produce a production-ready fuel-efficient engine, but rather to develop technologies that could—and did—result in propulsion efficiency breakthroughs at major U.S. engine companies. These breakthroughs included advances in thermal and propulsive efficiency, as well as improvements in the design of component engine parts. Today, General Electric and Pratt & Whitney (P&W) continue to produce engines and evaluate propulsion system designs based on research conducted under the E Cubed program.

The U.S. Government’s high expectations for E Cubed were reflected in the program’s budget, which stood at about $250 million in 1979 dollars.[1399] The money was divided between P&W and GE, which each used the funding to sweep its most cutting-edge technology into a demonstrator engine that would showcase the latest technology for conserving fuel, reducing emissions, and mitigating noise. Lawmakers funded E Cubed with the expectation that it would lead to a dramatic 12-percent reduction in specific fuel consumption (SFC), a term describing the mass of fuel needed to provide a certain amount of thrust for a given period.[1400] Other E Cubed goals included a 5-percent reduction in direct operating costs, a 50-percent reduction in the rate of performance deterioration, and further reductions in noise and emissions levels compared to other turbofan engines at the time.[1401]
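Specific fuel consumption has a simple definition; for a turbofan it is usually quoted as thrust-specific:

$$ \mathrm{TSFC} = \frac{\dot m_{\mathrm{fuel}}}{F} $$

the fuel mass flow per unit of thrust. A 12-percent reduction therefore means the same thrust on 12 percent less fuel flow, and cruise fuel burn falls nearly in proportion.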

The investment paid off in spades. What began as a proposal on Capitol Hill in 1975 to improve aircraft engine efficiency ended in 1983[1402] with GE and P&W testing engine demonstrators that improved SFC between 14 and 15 percent, exceeding the 12-percent goal. The demonstrators were also able to achieve a reduction in emissions. A NASA report from 1984 hailed E Cubed for helping to "keep American engine technology at the forefront of the world market."[1403] Engineers involved in E Cubed at both GE and P&W said the technology advances were game changing for the aircraft propulsion industry.

"The E Cubed program is probably the single biggest impact that NASA has ever had on aircraft propulsion," GE’s John Baughman said. "The improvements in fuel efficiency and noise and emissions that have evolved from the E Cubed program are going to be with us for years to come."[1404] Ed Crow, former Senior Vice President of Engineering at P&W, agreed that E Cubed marked the pinnacle of NASA’s involvement in improving aircraft fuel efficiency. "This was a huge program," he said. "It was NASA and the Government’s attempt to make a huge step forward."[1405]

E Cubed spurred propulsion research that led to improved fuel efficiency in three fundamental ways:

First, E Cubed allowed both GE and P&W to improve the thermal efficiency of their engine designs. Company engineers were able to significantly increase the engine-pressure ratio, which means the pressure inside the combustor becomes much higher than atmospheric pressure. They were able to achieve the higher pressure ratio by improving the efficiency of the engine’s compressor, which compresses air and forces it into the combustor.
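The leverage of pressure ratio on thermal efficiency is visible in the ideal Brayton-cycle relation (an idealization; real engines fall well short of it):

$$ \eta_{\mathrm{th}} = 1 - r^{-(\gamma-1)/\gamma} $$

With $\gamma \approx 1.4$ for air, raising the pressure ratio $r$ from 14 to 23 lifts the ideal-cycle efficiency from about 53 percent to about 59 percent, which is the lever the E Cubed compressors were pulling.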

In fact, one of the most significant outcomes of the E Cubed program was GE’s development of a new "E Cubed compressor" that dramatically increased the pressure ratio while significantly reducing the number of compression stages. If there are too many stages, the engine can become big, heavy, and long; what is gained in fuel efficiency may be lost in the weight and cost of the engine. GE’s answer to that problem was to develop a compressor that had only 10 stages and produced a pressure ratio of about 23 to 1, compared to the company’s previous compressors, which had 14 stages and produced a pressure ratio of 14 to 1.[1406] That compressor is still in use today in GE’s latest engines, including the GE90.[1407]
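The gain is easier to appreciate per stage. If the overall ratio is the product of N roughly equal stage ratios, the average stage pressure ratio is the Nth root of the overall value (a back-of-envelope estimate, not a figure from the source):

$$ r_{\mathrm{stage}} = r^{1/N}: \qquad 23^{1/10} \approx 1.37 \quad\text{versus}\quad 14^{1/14} \approx 1.21 $$

so each stage of the new compressor did roughly two-thirds more compression work, in logarithmic terms, than a stage of its predecessor.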

P&W’s E Cubed demonstrator had a bigger, 14-stage compressor, but the company was able to increase the pressure ratio by modifying the compressor blades to allow for increased loading per stage. P&W’s engines prior to E Cubed had pressure ratios around 20 to 1; P&W’s E Cubed demonstrator took pressure ratios to about 33 to 1, according to Crow.[1408]

The second major improvement enabled by E Cubed research was a substantial increase in propulsive efficiency. Air moves most efficiently through an engine when its velocity doesn’t change much. The way to ensure that the velocity remains relatively constant is to maximize the engine’s bypass ratio: in other words, a relatively large mass of air must bypass the engine core—where air is mixed with fuel—and go straight out the back of the engine at a relatively low exhaust speed. Both GE and P&W employed more efficient turbines and improved aerodynamics on the fan blades to increase the bypass ratio to about 7 to 1 (compared with about 4 to 1 on P&W’s older engines).[1409]
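This is the classical Froude (propulsive) efficiency argument. For a simple jet,

$$ \eta_p = \frac{2}{1 + V_j/V_0} $$

where $V_j$ is the exhaust velocity and $V_0$ the flight speed. A higher bypass ratio lets the engine accelerate more air by less, pulling $V_j/V_0$ toward 1; dropping it from 2.5 to 1.5, for example, raises $\eta_p$ from about 0.57 to 0.80 (illustrative values, not measurements from the program).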

Finally, E Cubed enabled major improvements in engine component parts. This was critical, because other efficiencies can’t be maximized unless the engine parts are lightweight, durable, and aerodynamic. Increasing the pressure ratio, for example, leads to very high temperatures that can stress the engine. Both P&W and GE developed materials and cooling systems to ensure that engine components did not become too hot.

In addition to efforts to improve fuel efficiency, E Cubed gave both GE and P&W opportunities to build combustors that would reduce emissions. E Cubed emissions goals were based on the Environmental Protection Agency’s 1981 guidelines and called for reductions in carbon monoxide, hydrocarbons, NOx, and smoke. Both companies developed their emissions-curbing combustor technology under NASA’s Experimental Clean Combustor program, which ran from 1972 to 1976. Their main efforts were focused on controlling where and in what proportions air and fuel were mixed inside the combustor. Managing the fuel/air mix inside the combustor is critical to maximize combustion efficiency (and reduce carbon dioxide emissions as a natural byproduct) and to ensure that temperatures do not get so high that NOx is generated. GE tackled the mixing issue by developing a dual annular combustor, while P&W went with a two-stage combustor that had two in-line combustor zones to control emissions.[1410]

Ultimately, E Cubed provided the financial backing required for both GE and P&W to pursue propulsion technology that has fed into their biggest engine lines. GE’s E Cubed compressor technology is used to power three types of GE engines, including the GE90-115B, which powers the Boeing 777-300ER and holds the world record for thrust.[1411] Other GE engines incorporating the E Cubed compressor include the GP7200, which recently went into service on the Airbus A380, and the GEnx, which is about to enter service on the Boeing 787.[1412] P&W also got some mileage out of the technologies developed under E Cubed. The company’s E Cubed demonstrator engine served as the inspiration for the PW2037, which fed into other engine designs that today power the Boeing 757 commercial airliner (the engine is designated PW2000) and the U.S. military’s C-17 cargo aircraft (the engine is designated F117).[1413]

NASA’s Wind Turbine Supporting Research and Technology Contributions

A very significant NASA Lewis contribution to wind turbine development involved the Center’s Supporting Research and Technology (SR&T) program. The primary objectives of this component of NASA’s overall wind energy program were to gather and report new experimental data on various aspects of wind turbine operation and to provide more accurate analytical methods for predicting wind turbine operation and performance. The research and technology activity covered the four following areas: (1) aerodynamics, (2) structural dynamics and aeroelasticity, (3) composite materials, and (4) multiple wind turbine system interaction. In the area of aerodynamics, NASA testing indicated that rounded blade tips improved rotor performance as compared with square rotor tips, resulting in an increase in peak rotor efficiency of approximately 10 percent. Also in the aerodynamics area, significant improvements were made in the design and fabrication of the rotor blades. Early NASA rotor blades used standard airfoil shapes from the aircraft industry, but wind turbine rotors operated over a significantly wider range of angles of attack (angles between the centerline of the blade and the incoming airstream). The rotor blades also needed to be designed to last 20 to 30 years, which represented a challenging problem because of the extremely high number of cyclic loads involved in operating wind turbines. To help solve these problems, NASA awarded development grants to the Ohio State University to design and wind tunnel test various blade models, and to the University of Wichita to wind tunnel test a rotor airfoil with ailerons.[1516]

In the structural dynamics area, NASA was presented with problems related to wind loading conditions, including wind shear (variation of wind velocity with altitude), nonuniform wind gusts over the swept rotor area, and directional changes in the wind velocity vector field. NASA addressed these problems by developing a variable-speed generator system that permitted the rotor speed to vary with the wind condition, thus producing constant power.

Development work on the blade component of the wind turbine systems, including selecting the material for fabrication of the blades, represents another example of supporting technology. As noted above, NASA Lewis brought considerable structural design expertise in this area to the wind energy program as a result of previous work on helicopter rotor blades. Early in the program, NASA tested blades made of steel, aluminum, and wood. For the 2-megawatt Mod-1 phase of the program, however, NASA Lewis decided to contract with the Kaman Aerospace Corporation for the design, manufacture, and ground-testing of two 100-foot fiberglass composite blades. NASA provided the general design parameters, as well as the static and fatigue load information, required for Kaman to complete the structural design of the blades. As noted in Kaman’s report on the project, the use of fiberglass, which later became the preferred material for most wind turbine blades, had a number of advantages, including nearly unlimited design flexibility in adopting optimum planform tapers, wall thickness taper, twist, and natural frequency control; resistance to corrosion and other environmental effects; low notch sensitivity with slow failure propagation rate; low television interference; and low cost potential because of adaptability to highly automated production methods.[1517]

The above efforts resulted in a significant number of technical reports, analytical tests and studies, and computer models based upon the contributions of a number of NASA, university, and industry engineers and technicians. Many of the findings grew out of tests conducted on the Mod-0 testbed wind turbine at Plum Brook Station. One example is the aerodynamics work done by Larry A. Viterna, a senior NASA Lewis engineer on the wind energy project. In studying wind turbine performance at high angles of attack, he developed a method (often referred to as the Viterna method or model) that is widely used throughout the wind turbine industry and is integrated into design codes that are available from the Department of Energy. The codes have been approved for worldwide certification of wind turbines. Tests with the Mod-0 and Gedser wind turbines formed the basis for his work on this analytical model, which, while not widely accepted at the time, later gained wide acceptance. Twenty-five years later, in 2006, NASA recognized Larry Viterna and Bob Corrigan, who assisted Viterna on data testing, with the Agency’s Space Act Award from the Inventions and Contributions Board.[1518]
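The Viterna model extrapolates measured airfoil data into the deep-stall range. In the form commonly cited in the wind-turbine literature (quoted here from that literature rather than from the NASA reports above), for angles of attack $\alpha$ between the stall angle $\alpha_s$ and 90 degrees:

$$ C_L = \frac{C_{D,\max}}{2}\sin 2\alpha + A_2\,\frac{\cos^2\alpha}{\sin\alpha}, \qquad C_D = C_{D,\max}\sin^2\alpha + B_2\cos\alpha $$

$$ A_2 = \bigl(C_{L,s} - C_{D,\max}\sin\alpha_s\cos\alpha_s\bigr)\frac{\sin\alpha_s}{\cos^2\alpha_s}, \qquad B_2 = \frac{C_{D,s} - C_{D,\max}\sin^2\alpha_s}{\cos\alpha_s}, \qquad C_{D,\max} \approx 1.11 + 0.018\,AR $$

where the subscript $s$ denotes values at stall and $AR$ is the blade aspect ratio.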

Winglets—Yet Another Whitcomb Innovation

Whitcomb continued to search for ways to improve the subsonic airplane beyond his work on supercritical airfoils. The Organization of the Petroleum Exporting Countries (OPEC) oil embargo of 1973-1974 dramatically affected the cost of airline operations with high fuel prices.[232] NASA implemented the Aircraft Energy Efficiency (ACEE) program as part of the national energy conservation effort in the 1970s. At this time, Science magazine featured an article discussing how soaring birds used their tip feathers to control flight characteristics. Whitcomb immediately shifted focus toward the wingtips of an aircraft—specifically flow phenomena related to induced drag—for his next challenge.[233]

Two types of drag affect the aerodynamic efficiency of a wing: profile drag and induced drag. Profile drag is a two-dimensional phenomenon, clearly represented by the iconic image of airflow streaming past an airfoil. Induced drag results from three-dimensional airflow near the wingtips. That airflow rolls up over the tip and produces vortexes trailing behind the wing. The energy exhausted in the wingtip vortex creates induced drag. Wings operating in high-lift, low-speed performance regimes can generate large amounts of induced drag. For subsonic transports, induced drag amounts to as much as 50 percent of the total drag of the airplane.[234]
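In coefficient form, the induced drag of a finite wing is given by the standard relation

$$ C_{D,i} = \frac{C_L^2}{\pi e\,AR} $$

where $e$ is the span-efficiency factor and $AR$ the aspect ratio. The quadratic dependence on lift coefficient explains why high-lift, low-speed flight is so costly in induced drag, and the $1/AR$ term is the handle that both winglets and span extensions attack.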

As part of the program, Whitcomb chose to address the wingtip vortex, the turbulent air found at the end of an airplane wing. These vortexes resulted from differences in air pressure generated on the upper and lower surfaces of the wing. As the higher-pressure air forms along the lower surface of the wing, it creates its own airflow along the length of the wing. At the wingtip, the airflow curls upward and forms an energy-robbing vortex that trails behind. Moreover, wingtip vortexes create enough turbulent air to endanger other aircraft that venture into their wake.

Whitcomb sought a way to control the wingtip vortex with a new aeronautical structure called the winglet. Winglets are vertical wing-like surfaces that extend above and sometimes below the tip of each wing. A winglet designer can balance the relationship between cant, the angle the winglet bends from the vertical, and toe, the angle the winglet deviates from airflow, to produce a lift force that, when placed forward of the airfoils, generates thrust from the turbulent wingtip vortexes. This phenomenon is akin to a sailboat tacking upwind while, in the words of aviation observer George Larson: "the keel squeezes the boat forward like a pinched watermelon seed."[235]

There were precedents for the use of what Whitcomb would call a "nonplanar," or nonhorizontal, lifting system. It was known in the burgeoning aeronautical community of the late 1800s that the induced drag of wingtip vortexes degraded aerodynamic efficiency. Aeronautical pioneer Frederick W. Lanchester patented vertical surfaces, or "endplates," to be mounted at an airplane’s wingtips, in 1897. His research revealed that vertical structures reduced drag at high lift. Theoretical studies conducted by the Army Air Service Engineering Division in 1924 and the NACA in 1938 in the United States and by the British Aeronautical Research Committee in 1956 investigated various nonplanar lifting systems, including vertical wingtip surfaces.[236] They argued that, theoretically, these structures would provide significant aerodynamic improvements for aircraft. Experimentation revealed that while there was the potential of reducing induced drag, the use of simple endplates produced too much profile drag to justify their use.[237]

Whitcomb and his research team investigated the drag-reducing properties of winglets for a first-generation, narrow-body subsonic jet transport in the 8-foot TPT from 1974 to 1976. They used a semispan model, meaning it was cut in half and mounted on the tunnel wall to enable the use of a larger test object that would facilitate a higher Reynolds number and the use of specific test equipment. He compared a wing with a winglet and the same wing with a straight extension to increase its span. The constant was that both the winglet and extension exerted the same structural load on the wing. Whitcomb found that winglets reduced drag by approximately 20 percent and doubled the improvement in the lift-to-drag ratio to 9 percent compared with the straight wing extension. Whitcomb published his findings in "A Design Approach and Selected Wind-Tunnel Results at High Subsonic Speeds for Wing-Tip Mounted Winglets."[238] It was obvious that the reduction in drag generated by a pair of winglets boosted performance by enabling higher cruise speeds.

With the results, Whitcomb provided a general design approach for the basic design of winglets based on theoretical calculations, physical flow considerations, and emulation of his overall approach to aerodynamics, primarily "extensive exploratory experiments." What made a winglet rather than a simple vertical surface attached to the end of a wing was the designer’s ability to use well-known wing design principles to incorporate side forces that reduce lift-induced inflow above the wingtip and outflow below the tip, creating a vortex diffuser. The placement and optimum height of the winglet reflected both aerodynamic and structural considerations, in which the designer had to take into account the efficiency of the winglet as well as its weight. For practical operational purposes, the lower portion of the winglet could not hang down far below the wingtip for fear of damage on the ground. The fact that the ideal airfoil shape for a winglet was NASA’s general aviation airfoil made it even easier to incorporate winglets into an aircraft design.[239] Whitcomb’s basic rules provided that foundation.

Experimental wind tunnel studies of winglets in the 8-foot TPT continued through the 1970s. Whitcomb and his colleagues Stuart G. Flechner and Peter F. Jacobs concentrated next on the effects of winglets on a representative second-generation jet transport—the semispan model vaguely resembled a Douglas DC-10—at high subsonic speeds, specifically Mach 0.7 to 0.83. They concluded that winglets significantly reduced the induced drag coefficient while lowering overall drag. The smoothing out of the vortex behind the wingtip by the winglet accounted for the reduction in induced drag. As in the previous study, they saw that winglets generated a small increase in lift. The researchers calculated that winglets reduced drag better than simple wingtip extensions did, despite a minor increase in structural bending moments.[240]

Another benefit derived from winglets was an increase in the aspect ratio of a wing without compromising its structural integrity. The aspect ratio of a wing is the relationship between span—the distance from tip to tip—and chord—the distance between the leading and trailing edges. A long, thin wing has a high aspect ratio, which produces longer range at a given cruise speed because it does not suffer from wingtip vortexes and the corresponding energy losses as badly as a short, wide-chord, low-aspect-ratio wing. The drawback to a high-aspect-ratio wing is that its long, thin structure flexes easily under aerodynamic loads. Making this type of wing structurally stable required strengthening that added weight. Winglets offered increased aspect ratio with no increase in wingspan. For every 1-foot increase in wingspan, meaning aspect ratio, there was an increase in wing-bending force. Wings structurally strong enough to support a 2-foot span increase would also support 3-foot winglets while producing the same gain in aspect ratio.[241]
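For a wing of span $b$ and planform area $S$, the standard definition is

$$ AR = \frac{b^2}{S} $$

which reduces to span divided by chord for a rectangular wing. Combined with the induced-drag relation given earlier, a higher effective aspect ratio directly cuts $C_{D,i}$.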

NASA made sure the American aviation industry was aware of the results of Whitcomb’s winglet studies and its part in the ACEE program. Langley organized a meeting focusing on advanced technologies developed by NASA for Conventional Take-Off and Landing (CTOL) aircraft, primarily airliners, business jets, and personal aircraft, from February 28 to March 3, 1978. During the session dedicated to advanced aerodynamic controls, Flechner and Jacobs summarized the results of wind tunnel tests on winglets applied to a Boeing KC-135 aerial tanker, Lockheed L-1011 and McDonnell-Douglas DC-10 airliners, and a generic model with high-aspect-ratio wings.[242] Presentations from McDonnell-Douglas and Boeing representatives revealed ongoing industry work done under contract with NASA. Interest in winglets was widespread at the conference and after, as manufacturers across the United States began to consider their use in current and future designs.[243]

Whitcomb’s winglets first found use on general aviation aircraft at the same time he and his colleagues at Langley began testing them on air transport models, and a good 4 years before the pivotal CTOL conference. Another visionary aeronautical engineer, Burt Rutan, adopted them for his revolutionary designs. The homebuilt Vari-Eze of 1974 incorporated winglets combined with vertical control surfaces. The airplane was an overall innovative aerodynamic configuration with its forward canard, high-aspect-ratio wings, low-weight composite materials, a lightweight engine, and pusher propeller. Whitcomb’s winglets on Rutan’s Vari-Eze offered private pilots a stunning alternative to conventional airplanes. His nonstop world-circling Voyager and the Beechcraft Starship of 1986 also featured winglets.[244]

The business jet community was the first to embrace winglets and incorporate them into production aircraft. The first jet-powered airplane to enter production with winglets was the Learjet Model 28 in 1977. Learjet was in the process of developing a new business jet, the Model 55, and built the Model 28 as a testbed to evaluate its new proprietary high-aspect-ratio wing and winglet system, called the Longhorn. The manufacturer developed the system on its own initiative without assistance from Whitcomb or NASA, but it was clear where the winglets came from. Comparison flight tests of the Model 28 with and without winglets showed that winglets increased its range by 6.5 percent. An additional benefit was improved directional stability. Learjet exhibited the Model 28 at the National Business Aircraft Association convention, put it into production because of its impressive performance, and included winglets on its successive business jets.[245] Learjet’s competitor, Gulfstream, also investigated the value of winglets for its aircraft in the late 1970s. The Gulfstream III, IV, and V aircraft included winglets in their designs. The Gulfstream V, able to cruise at Mach 0.8 for a distance of 6,500 nautical miles, captured over 70 national and world flight records and received the 1997 Collier Trophy. Records aside, the ability to fly business travelers nonstop from New York to Tokyo was unprecedented after the introduction of the Gulfstream V in 1995.[246]

The KC-135 winglet test vehicle in flight over Dryden. NASA.

Acceptance by the airline industry was initially mixed. Boeing, Lockheed, and Douglas each investigated the possibility of incorporating winglets into current aircraft as part of the ACEE program. Winglets were a fundamental design technology, and each manufacturer had to design them for the specific airframe. NASA awarded contracts to manufacturers to experiment with incorporating them into existing and new designs. Boeing concluded in May 1977 that the economic benefits of winglets did not justify the cost of fabrication for the 747. Lockheed chose to extend the wingtips of the L-1011 and install flight controls to alleviate the increased structural loads. McDonnell-Douglas immediately embraced winglets as an alternative to increasing the span of a wing and modified a DC-10 for flight tests.[247]

The next steps for Whitcomb and NASA were flight tests to demonstrate the viability of winglets on first- and second-generation transports and airliners. Whitcomb and his team chose the Air Force’s Boeing KC-135 aerial tanker as the first test airframe. The KC-135 shared with its civilian counterpart, the pioneering 707, and other early airliners and transports an outer wing that exhibited elliptical span loading with high loading at the outer panels. This wingtip loading was ideal for winglets. Additionally, the Air Force wanted to improve the performance and fuel efficiency of the aging aerial tanker. Whitcomb and his team designed the winglet, and Boeing handled the structural design and fabrication of winglets for an Air Force KC-135. NASA and the Air Force performed the flight tests at Dryden Flight Research Center in 1979 and 1980. The tests revealed a 20-percent reduction in drag due to lift, with a 7-percent gain in the lift-to-drag ratio at cruise, which confirmed Whitcomb’s findings at Langley.[248]

McDonnell-Douglas conducted a winglet flight evaluation program with a DC-10 airliner as part of NASA’s Energy Efficient Transport (EET) program within the larger ACEE program in 1981. The DC-10 represented a second-generation airliner with a wing designed to produce nonelliptic loading to avoid wingtip pitch-up characteristics. As a result, its wing bending moments and structural requirements were not as dramatic as those found on a first-generation airliner such as the 707. Whitcomb and his team conducted a preliminary wind tunnel examination of a DC-10 model in the 8-foot TPT. McDonnell-Douglas engineers designed the aerodynamic and structural shape of the winglets, and manufacturing personnel fabricated them. The company performed flight tests over 16 months, which included 61 comparison flights with a DC-10 leased from Continental Airlines. These industry flight tests revealed that the addition of winglets to a DC-10, combined with a drooping of the outboard ailerons, produced a 3-percent reduction in fuel consumption at passenger-carrying distances, which met the bottom line for airline operators.[249]

The DC-10 did not receive winglets because of the prohibitive cost of Federal Aviation Administration (FAA) recertification. Nevertheless, McDonnell-Douglas was a zealous convert and used the experience and design data for the advanced derivative of the DC-10, the MD-11, when that program began in 1986. The first flight in January 1990 and the grueling 10-month FAA certification process that followed validated the use of winglets on the MD-11. The extended range version could carry almost 300 passengers at distances over 8,200 miles, which made it one of the farthest-flying aircraft in history and ideal for expanding Pacific air routes.[250]

Despite its initial reluctance, Boeing justified the incorporation of winglets into the new 747-400 in 1985, making it the first large U.S. commercial transport to incorporate winglets. The technology increased the new airplane’s range by 3 percent, enabling it to fly farther and with more passengers or cargo. The Boeing winglet differed from the McDonnell-Douglas design in that it did not have a smaller fin below the wingtip. Boeing engineers felt the low orientation of the 747 wing, combined with the practical presence of airport ground-handling equipment, made the deletion necessary.[251]

It was clear that Boeing included winglets on the 747-400 for improved performance. Boeing also offered winglets as a customer option for its 737 series aircraft and, in the early 1990s, adopted the blended winglets provided by Aviation Partners, Inc., of Seattle for the 737 and the 737-derivative Boeing Business Jet. The specialty manufacturer had introduced its proprietary "blended winglet" technology—in which the winglet is joined to the wing via a characteristic curve—and started by retrofitting them to Gulfstream II business jets. The performance accessory increased fuel efficiency by 7 percent. That work led to commercial airliner accounts. Winglets for the 737 offered fuel savings and reduced noise pollution. The relationship with Boeing led to a joint venture called Aviation Partners Boeing, which now produces winglets for the 757 and 767 airliners. By 2003, there were over 2,500 Boeing jets flying with blended winglets. The going rate for a set of the 8-foot winglets in 2006 was $600,000.[252]

Whitcomb’s winglets found use in transport, airliner, and business jet applications in the United States and Europe. Airbus installed them on production A319, A320, A330, and A340 airliners. Regardless of national origin, airlines chose winglets for their aircraft because they offered a savings of 5 percent in fuel costs. Rather than fly at the higher speeds made possible by winglets, most airline operators simply cruised at their pre-winglet speeds to save fuel.[253]

Whitcomb’s aerodynamic winglets also found a place outside aeronautics, as they met the hydrodynamic needs of the international yacht racing community. In preparation for the America’s Cup yacht race in 1983, Australian entrepreneur Alan Bond embraced Whitcomb’s work on spiraling vortex drag and believed it could be applied to racing yachts. He assembled an international team that designed a winged keel, essentially a winglet tacked onto the bottom of the keel, for Australia II. Stunned by Australia II’s upset of the Americans’ 130-year winning streak, the international yachting community heralded the innovation as the key to winning the race. Bond argued that the 1983 America’s Cup race was instrumental to the airline industry’s adoption of the winglet and erroneously believed that McDonnell-Douglas engineers began experimenting with winglets during the summer of 1984.[254]

Of the three triumphant innovations pioneered by Whitcomb, the area rule fuselage, the supercritical wing, and the winglet, perhaps it is the last that is the most easily recognizable for everyday air travelers and aviation observers. Engineer and historian Joseph R. Chambers remarked that "no single NASA concept has seen such widespread use on an international level as Whitcomb’s winglets." The application to commercial, military, and general aviation aircraft continues.[255]

Proof at Last: The Shaped Sonic Boom Demonstration

After the HSR program dropped plans for an overland supersonic airliner, Domenic Maglieri compiled a NASA study of all known proposals for smaller supersonic aircraft intended for business customers.[501] In 1998, one year after the drafting of this report, Richard Seebass (by then with the University of Colorado) gave a series of lectures at NATO’s von Karman Institute in Belgium. He reflected on NASA’s conclusion that a practical, commercial-sized supersonic transport would produce a sonic boom unacceptable to too many people. On the other hand, he believed the recent high-speed research "leads us to conclude that a small, appropriately designed supersonic business jet’s sonic boom may be nearly inaudible outdoors and hardly discernible indoors." Such an airplane, he stated, "appears to have a significant market. . . if. . . certifiable over most land areas."[502]

At the start of the new century, the prospects for a small supersonic aircraft received a shot in the arm from the Defense Advanced Research Projects Agency, well known for encouraging innovative technologies. DARPA received $7 million in funding starting in FY 2001 to explore design concepts for a Quiet Supersonic Platform (QSP)—an airplane that could have both military and civilian potential. Richard W. Wlezien, a NASA official on loan to DARPA as QSP program manager, wanted ideas that might lead to a Mach 2.4, 100,000-pound aircraft that "won’t rattle your windows or shake the china in your cabinet." It was hoped that a shaped sonic boom signature of no more than 0.3 psf would allow unrestricted operations over land. By the end of 2000, 16 companies and laboratories had been selected to participate in the QSP project, with the University of Colorado and Stanford University to work on sonic boom propagation and minimization.[503] Support from NASA would include modeling expertise, wind tunnel facilities, and flight-test operations.

Although the later phase of the QSP program emphasized military requirements, its most publicized achievement was the Shaped Sonic Boom Demonstration (SSBD). This was not one of its original components.

In 1995, the Dryden Flight Research Center used an F-16XL to make detailed in-flight supersonic shock wave measurements as near as 80 feet from an SR-71. NASA.

Resurrecting an idea from the HSR program, Domenic Maglieri and colleagues at Eagle Aeronautics recommended that DARPA include a flight-test program using the BQM-34E Firebee II as a proof-of-concept for the QSP’s sonic boom objectives. Liking this idea, Northrop Grumman Corporation (NGC) wasted no time in acquiring the last remaining Firebee IIs from the Naval Air Weapons Station at Point Mugu, CA, but later determined that they were now too old for test purposes. As an alternative, NGC aerodynamicist David Graham recommended using different versions of the Northrop F-5 (which had been modified into larger training and reconnaissance models) for sonic boom comparisons. Maglieri then suggested modifications to an F-5E that could flatten its sonic boom signature. Based largely on NGC’s proposal for an F-5E Shaped Sonic Boom Demonstration, DARPA in July 2001 selected it over QSP proposals from the other two system integrators, Boeing Phantom Works and Lockheed Martin’s Skunk Works.[504]

In designing the modifications, a Northrop Grumman team in El Segundo, CA, led by David Graham, benefited from its partnership with a multitalented working group. This team included Kenneth Plotkin of Wyle Laboratories, Domenic Maglieri and Percy Bobbitt of Eagle Aeronautics, Peter G. Coen and colleagues at the Langley Center, John Morgenstern of Lockheed Martin, and other experts from Boeing, Gulfstream, and Raytheon. They applied knowledge gained from the HSR program and the latest in CFD technology to design a nose extension and other modifications to reshape the F-5E’s sonic boom. The moderate size and flexibility of the basic F-5E design, which had allowed different configurations in the past, made it the perfect choice for the SSBD. The shaped-signature modifications (which harked back to the stillborn SR-71 proposal of the HSR program) were tested in a supersonic wind tunnel at NASA’s Glenn Research Center with favorable results.[505]

In further preparation for the SSBD, the Dryden Center conducted the Inlet Spillage Shock Measurement (ISSM) experiment in February 2002. One of its F-15Bs, equipped with an instrumented nose boom, gathered pressure data from a standard F-5E flying at about Mach 1.4 and 32,000 feet. The F-15B made these probes at separation distances ranging from 60 to 1,355 feet. In addition to serving as a helpful "dry run" for the planned demonstration, the ISSM experiment proved to be of great value in validating and refining Northrop Grumman’s proprietary GCNSfv CFD code (based on the Ames Center’s ARC3D code), which was being used to design the SSBD configuration. Application of the flight-test measurements nearly doubled the size of the CFD grid, to approximately 14 million points.[506]

For use in the Shaped Sonic Boom Demonstration, the Navy loaned Northrop Grumman one of its standard F-5Es, which the company began to modify at its depot facility in St. Augustine, FL, in January 2003. Under the supervision of the company’s QSP program manager, Charles Boccadoro, NGC technicians installed a nose glove and a 35-foot fairing under the fuselage (resulting in a "pelican-shaped" profile). The modifications, which extended the plane’s length from 46 to approximately 50 feet, were designed to strengthen the bow shock but weaken and stretch out the shock waves from the cockpit, inlets, and wings—keeping them from coalescing to form the sharp initial peak of the N-wave signature.[507] After checkout flights in Florida starting on July 25, 2003, the modified F-5E, now called the SSBD F-5E, arrived in early August at Palmdale, CA, for more functional check flights.

On August 27, 2003, on repeated runs through an Edwards supersonic corridor, the SSBD F-5E, piloted by NGC’s Roy Martin, proved for the first time that—as theorized since the 1960s—a shaped sonic boom signature from a supersonic aircraft could persist through the real atmosphere to the ground. Flying at Mach 1.36 and 32,000 feet on an early-morning run, the SSBD F-5E was followed 45 seconds later by an unmodified F-5E from the Navy’s aggressor training squadron at Fallon, NV. They flew over a high-tech ground array of various sensors manned by personnel from Dryden, Langley, and almost all the organizations in the SSBD working group. Figure 9 shows the subtle but significant difference between the flattened waveform from the SSBD F-5E (blue) and the peaked N-wave from its unmodified counterpart (red) as recorded by a Boom Amplitude and Direction Sensor (BADS) on this historic occasion. As a bonus, the initial rise in pressure of the shaped signature was only about 0.83 psf, as compared with the 1.2 psf from the standard F-5E—resulting in a much quieter sonic boom.[508]
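For intuition only, the two waveform families can be sketched with assumed shapes (an illustrative Python sketch, not the recorded BADS data; only the quoted overpressures come from the text above):

```python
import numpy as np

# Purely illustrative signature shapes over a normalized boom duration.
t = np.linspace(0.0, 1.0, 500)

# Classic N-wave: an abrupt rise to ~1.2 psf, a linear expansion to the
# negative peak, and a sharp recovery shock at the tail.
n_wave = 1.2 * (1.0 - 2.0 * t)

# Flat-topped shaped signature: a weaker ~0.83 psf bow shock that holds a
# plateau before the expansion begins -- the form the SSBD F-5E produced.
shaped = np.where(t < 0.4, 0.83, 0.83 * (1.0 - 2.0 * (t - 0.4) / 0.6))

print(f"N-wave initial rise:  {n_wave[0]:.2f} psf")
print(f"shaped initial rise:  {shaped[0]:.2f} psf")
```

The loudness difference lies almost entirely in that initial pressure jump: the ear responds to the steep rise of the bow shock, so flattening and weakening it yields a markedly quieter boom even though the total impulse is similar.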

During the last week of August, the two F-5Es flew three missions to provide many more comparative sonic boom recordings. On two other missions, using the technique developed for the SR-71 during HSR, a Dryden F-15B with a specially instrumented nose boom followed the SSBD-modified F-5E to gather near-field measurements. The data from the F-15B probing missions showed how the F-5E’s modifications changed its normal shock wave signature, while data from the ground sensors confirmed that the shaped signature persisted down through the atmosphere to consistently produce the quieter flat-topped sonic booms. The SSBD met expectations, but unusually high temperatures (even for the Antelope Valley in August) limited the top speed and endurance of the F-5Es. Because of this and a desire to gather more data on maneuvers and different atmospheric conditions, Peter Coen, Langley’s manager for supersonic vehicles technology, and researchers at Dryden led by SSBD project manager David Richwine and principal investigator Ed Haering began planning a NASA-funded Shaped Sonic Boom Experiment (SSBE) to follow up on the SSBD.[509]

NASA successfully conducted the SSBE with 21 more flights during 11 days in January 2004. These met or exceeded all test objectives. Eight of these flights were again accompanied by an unmodified Navy F-5E from Fallon, while Dryden’s F-15B flew four more probing flights to acquire additional near-field measurements. An instrumented L-23 sailplane from the USAF Test Pilot School obtained boom measurements from 8,000 feet (well above the ground turbulence layer) on 13 flights. All events were precisely tracked by differential GPS receivers and Edwards AFB’s extensive telemetry system. In all, the SSBE yielded over 1,300 sonic boom signature recordings and 45 probe datasets—obtaining more information about the effects of turbulence, helping to confirm CFD predictions and wind tunnel validations, and bequeathing a wealth of data for future engineers and designers.[510] In addition to a series of scientific papers, the SSBD-SSBE accomplishments were the subject of numerous articles in the trade and popular press, and participants presented well-received briefings at various aeronautics and aviation venues.

Flight Control Systems and Pilot-Induced Oscillations

Pilot-induced oscillations (PIO) occur when the pilot commands the control surfaces to move at a frequency and/or magnitude beyond the capability of the surface actuators. When a hydraulic actuator is commanded to move beyond its design rate limit, it will lag behind the commanded deflection. If the command is oscillatory in nature, then the resulting surface movement will be smaller, and at a lower rate, than commanded. The pilot senses a lack of responsiveness and commands even larger surface deflections. This is the same instability that can be generated by a high-gain limit cycle, except that the feedback path is through the pilot’s stick rather than through a sensor and an electronic servo. The instability will continue until the pilot reduces his gain (ceases to command large, rapid surface movements), thus allowing the actuator to return to its normal operating range.
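The lag behind an oscillatory command can be simulated directly. The following is a minimal Python sketch with assumed numbers (time step, rate limit, and command are illustrative, not any aircraft's actual control law):

```python
import numpy as np

dt = 0.001               # integration time step, s (assumed)
rate_limit = 60.0        # maximum surface rate, deg/s (assumed)
t = np.arange(0.0, 4.0, dt)
command = 20.0 * np.sin(2.0 * np.pi * 1.5 * t)   # 1.5 Hz, +/-20 deg pilot command

surface = np.zeros_like(command)
for i in range(1, len(t)):
    # The actuator drives toward the command but can never exceed its rate limit.
    desired_rate = (command[i] - surface[i - 1]) / dt
    actual_rate = np.clip(desired_rate, -rate_limit, rate_limit)
    surface[i] = surface[i - 1] + actual_rate * dt

# The command asks for a peak rate of amplitude x frequency ~= 188 deg/s,
# far beyond the 60 deg/s limit, so the surface lags in both amplitude and
# phase -- that added phase lag is what lets ever-larger pilot corrections
# feed the oscillation.
print(f"peak commanded rate: {20.0 * 2.0 * np.pi * 1.5:.0f} deg/s")
print(f"peak surface deflection: {surface.max():.1f} deg of 20.0 commanded")
```

Running the sketch shows the surface settling into a triangle-wave motion of roughly half the commanded amplitude, trailing the command in phase, which is exactly the "lack of responsiveness" that tempts the pilot into larger inputs.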

The prototype General Dynamics YF-16 Lightweight Fighter (LWF) unexpectedly encountered a serious PIO problem on a high-speed taxi test in 1974. The airplane began to oscillate in roll near the end of the test. The pilot, Philip Oestricher, applied large corrective stick inputs, which saturated the control actuators and produced a pilot-induced oscillation. When the airplane began heading toward the side of the runway, the pilot elected to add power and fly the airplane rather than veer into the dirt alongside the runway. Shortly after the airplane became airborne, his large stick inputs ceased, and the PIO and limit cycle stopped. Oestricher then flew a normal pattern and landed the airplane safely. Several days later, after suitable modifications to its flight control system, the YF-16 completed its "official" first flight.

The cause of this problem was primarily related to the "force stick" used in the prototype YF-16. The control stick was rigidly attached to the airplane, and strain gages on the stick measured the force being applied by the pilot. This electrical signal was transmitted to the flight control system as the pilot’s command. There was no motion of the stick, thus no feedback to the pilot of how much control deflection he was commanding. During the taxi test, the pilot was unaware that he was commanding full deflection in roll, thus saturating the actuators. The solution was a reduction in the gain of the pilot’s command signal, as well as a geometry change to the stick that allowed a small amount of stick movement. This gave the pilot some tactile feedback as to the amount of control deflection being commanded, and a hard stop when the stick was commanding full deflection.[687] The incident offered lessons in both control system design and human factors engineering, particularly on the importance of ensuring that pilots receive indications of the magnitude of their control inputs via movable sticks. Subsequent fly-by-wire (FBW) aircraft have incorporated this feature, as opposed to the "fixed" stick concept tried on the YF-16. As for the YF-16, it won the Lightweight Fighter design competition, was placed in service in more developed form as the F-16 Fighting Falcon, and subsequently became a widely produced Western jet fighter.

The General Dynamics YF-16 prototype Lightweight Fighter (LWF) in flight over the Edwards range. USAF.

Another PIO occurred during the first runway landing of the NASA-Rockwell Space Shuttle orbiter during its approach and landing tests in 1978. After the flare, and just before touchdown, astronaut pilot Fred Haise commanded a fairly large pitch control input that saturated the control actuators. At touchdown, the orbiter bounced slightly, and the rate-limiting saturation transferred to the roll axis. In an effort to keep the wings level, the pilot made additional roll inputs that created a momentary pilot-induced oscillation that continued until the final touchdown. At one point, it seemed the orbiter might veer toward spectators, one of whom was Britain’s Prince Charles, then on a VIP tour of the United States. (Ironically, days earlier, the Prince of Wales had "flown" the Shuttle simulator at the NASA Johnson Space Center, encountering the same kind of lateral PIO that Haise did on touchdown.) Again, the cause was related to the high sensitivity of the stick in comparison with the Shuttle’s slow-moving elevon actuators. The incident sparked a long and detailed study of the orbiter’s control system in simulators and on the actual vehicle. Several changes were made to the control system, including a reduced sensitivity of the stick and an increase in the maximum actuator rates.[688]

The above discussion of electronic control system evolution has sequentially addressed the increasing complexity of the systems. This was not necessarily the actual chronological sequence. The North American F-107, an experimental nuclear strike fighter derived from the earlier F-100 Super Sabre, utilized one of the first fly-by-wire control systems, the Augmented Longitudinal Control System (ALCS), in 1956. One of the three prototypes was used by NASA, thus providing the Agency with its first exposure to fly-by-wire technology. Difficult maintenance of the one-of-a-kind subsystems in the F-107 forced NASA to abandon its use as a research airplane after about 1 year of flying.

Dynamic Instabilities

There are dangerous situations that can occur because of either a coupling of the aerodynamics in different axes or a coupling of the aerodynamics with the inertial characteristics of an airplane. Several of these—Chuck Yeager’s close call with the X-1A in December 1953 and Milburn Apt’s fatal encounter in September 1956—have been mentioned previously.

Inertial Roll Coupling

Inertial roll coupling is the dynamic loss of control of an airplane occurring during a rapid roll maneuver. The phenomenon is directly related to the evolution of aircraft design. From the time of the Wrights through much of the interwar years, wingspan greatly exceeded fuselage length. As aircraft flight speeds rose, the aspect ratio of wings decreased and the fineness ratio of fuselages rose, so that by the end of the Second World War, wingspan and fuselage length were roughly equal. In the supersonic era that followed, wingspan reduced dramatically and fuselage length grew appreciably (think, for example, of an aircraft such as the Lockheed F-104). Such aircraft were highly vulnerable to pitch/yaw/roll coupling when a rapid rolling maneuver was initiated.

The late NACA-NASA engineer and roll-coupling expert Dick Day described inertial roll coupling as "a resonant divergence in pitch or yaw when roll rate equals the lower of the pitch or yaw natural frequencies."[738]

The existence of inertial roll coupling was first revealed by NACA Langley engineer William H. Phillips in 1948, 5 years before it became a dangerous phenomenon.[739] Phillips not only described the reason for the potential loss of control but also defined the criteria for identifying the boundaries of loss of control for different aircraft. During the 1950s, several research airplanes and the Century series fighters encountered fairly severe inertial coupling problems exactly as predicted by Phillips. These airplanes differed from earlier propeller-driven airplanes in having thin, short wings and the mass of the jet engine and fuel concentrated along the fuselage longitudinal axis. This resulted in higher moments of inertia in the pitch and yaw axes but a significantly lower inertia in the roll axis. The low roll inertia also allowed these airplanes to achieve higher roll rates than their predecessors had. The combination allowed the mass along the fuselage to be slung outward when the airplane was rolled rapidly, producing an unexpected increase in pitching and yawing motion. This divergence in pitch or yaw was related to the magnitude of the roll rate and the duration of the roll. If the roll were sustained long enough, the pitch or yaw angles would become quite large, and the airplane would tumble out of control. In most cases, the yaw axis had the lowest level of static stability, so the divergence was observed as a steady increase in sideslip.[740]
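In simplified, undamped form (a sketch in standard textbook notation, not the full criteria from Phillips's report, which also carry damping and cross terms), the resonance condition can be written as:

```latex
% Undamped pitch and yaw natural frequencies and the divergence band.
% p = steady roll rate; M_alpha, N_beta = pitch and yaw stiffness
% derivatives; I_y, I_z = pitch and yaw moments of inertia.
\[
  \omega_{\theta} = \sqrt{\frac{-M_{\alpha}}{I_{y}}},
  \qquad
  \omega_{\psi} = \sqrt{\frac{N_{\beta}}{I_{z}}}
\]
\[
  \min(\omega_{\theta}, \omega_{\psi}) \;<\; p \;<\; \max(\omega_{\theta}, \omega_{\psi})
  \quad \Longrightarrow \quad \text{divergence in pitch or yaw}
\]
```

A steady roll rate that reaches the lower of the two natural frequencies, usually yaw, the weaker axis, triggers the resonant divergence Day describes, which is why the fixes that followed all raised directional stiffness or limited roll rate.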

In 1954, after North American Aviation had encountered roll instability with its F-100 aircraft, the Air Force and NAA transferred an F-100A to the NACA Flight Research Center to allow the NACA to explore the problem through flight-testing and identify a fix. The NACA X-3 research airplane was of a configuration much like the modern fighters and was also used by NACA FRC to explore the inertial coupling problem. These results essentially confirmed Phillips’s earlier predictions and determined that increasing the directional stability via larger vertical fin area would mitigate the problem. The Century series fighters were all reconfigured to reduce their susceptibility to inertial coupling. The vertical tail size was increased for the F-100C and D airplanes.[741] All F-104s were retrofitted with a ventral fin on the lower aft fuselage, which increased their directional stability by 10 to 15 percent. The F-104B and later models also had a larger vertical fin and rudder. The F-102 and F-105 received larger vertical tails than their predecessors (the YF-102 and YF-105), and the Mach 2+ F-106 had a larger vertical tail than the F-102. Control limiting and placards against continuous rolls (more than 720 degrees of bank) were instituted to ensure safe operation. The X-15 was also susceptible to inertial coupling, and its roll divergence tendencies could be demonstrated on the X-15 simulator. Since high roll rates were not necessary for the high-speed, high-altitude mission of the airplane, the pilots were instructed to avoid high roll rates, and, fortunately, no inertial coupling problems occurred during its flight-testing.