
NASA’s Wind Turbine Supporting Research and Technology Contributions

A very significant NASA Lewis contribution to wind turbine development involved the Center’s Supporting Research and Technology (SR&T) program. The primary objectives of this component of NASA’s overall wind energy program were to gather and report new experimental data on various aspects of wind turbine operation and to provide more accurate analytical methods for predicting wind turbine operation and performance. The research and technology activity covered the following four areas: (1) aerodynamics, (2) structural dynamics and aeroelasticity, (3) composite materials, and (4) multiple wind turbine system interaction. In the area of aerodynamics, NASA testing indicated that rounded blade tips improved rotor performance as compared with square tips, increasing peak rotor efficiency by approximately 10 percent. Also in the aerodynamics area, significant improvements were made in the design and fabrication of the rotor blades. Early NASA rotor blades used standard airfoil shapes from the aircraft industry, but wind turbine rotors operated over a significantly wider range of angles of attack (the angle between the centerline of the blade and the incoming airstream). The rotor blades also needed to be designed to last 20 or 30 years, a challenging problem because of the extremely high number of cyclic loads involved in operating wind turbines. To help solve these problems, NASA awarded development grants to the Ohio State University to design and wind tunnel test various blade models, and to the University of Wichita to wind tunnel test a rotor airfoil with ailerons.[1516]

In the structural dynamics area, NASA was presented with problems related to wind loading conditions, including wind shear (variation of wind velocity with altitude), nonuniform wind gusts over the swept rotor area, and directional changes in the wind velocity vector field. NASA addressed these problems by developing a variable-speed generator system that permitted the rotor speed to vary with the wind condition while still producing constant power.
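The aerodynamic logic behind letting the rotor speed float can be sketched with two textbook wind-energy relations (a simplified summary added here for clarity, not the specific NASA control law):

$$P = \frac{1}{2}\rho A V^3 C_p(\lambda), \qquad \lambda = \frac{\omega R}{V},$$

where $P$ is rotor power, $\rho$ the air density, $A$ the swept rotor area, $V$ the wind speed, and $C_p$ the power coefficient, which peaks at one particular tip-speed ratio $\lambda$. Keeping $\lambda$ near its optimum as $V$ fluctuates requires the rotor speed $\omega$ to track the wind, and a variable-speed drivetrain also lets the rotor absorb gusts as changes in stored rotational energy, smoothing the electrical power delivered.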

Development work on the blade component of the wind turbine systems, including selecting the material for fabrication of the blades, represents another example of supporting technology. As noted above, NASA Lewis brought considerable structural design expertise in this area to the wind energy program as a result of previous work on helicopter rotor blades. Early in the program, NASA tested blades made of steel, aluminum, and wood. For the 2-megawatt Mod-1 phase of the program, however, NASA Lewis decided to contract with the Kaman Aerospace Corporation for the design, manufacture, and ground-testing of two 100-foot fiberglass composite blades. NASA provided the general design parameters, as well as the static and fatigue load information, required for Kaman to complete the structural design of the blades. As noted in Kaman’s report on the project, the use of fiberglass, which later became the preferred material for most wind turbine blades, had a number of advantages, including nearly unlimited design flexibility in adopting optimum planform tapers, wall thickness taper, twist, and natural frequency control; resistance to corrosion and other environmental effects; low notch sensitivity with slow failure propagation rate; low television interference; and low cost potential because of adaptability to highly automated production methods.[1517]

The above efforts resulted in a significant number of technical reports, analytical tests and studies, and computer models based upon contributions of a number of NASA, university, and industry engineers and technicians. Many of the findings grew out of tests conducted on the Mod-0 testbed wind turbine at Plum Brook Station. One example is the aerodynamics work of Larry A. Viterna, a senior NASA Lewis engineer on the wind energy project. In studying wind turbine performance at high angles of attack, he developed a method (often referred to as the Viterna method or model) that is widely used throughout the wind turbine industry and is integrated into design codes available from the Department of Energy. The codes have been approved for worldwide certification of wind turbines. Tests with the Mod-0 and Gedser wind turbines formed the basis for this analytical model, which, though not widely accepted at the time, later gained broad acceptance. Twenty-five years later, in 2006, NASA recognized Larry Viterna and Bob Corrigan, who assisted Viterna on data testing, with the Agency’s Space Act Award from the Inventions and Contributions Board.[1518]

Winglets—Yet Another Whitcomb Innovation

Whitcomb continued to search for ways to improve the subsonic airplane beyond his work on supercritical airfoils. The Organization of the Petroleum Exporting Countries (OPEC) oil embargo of 1973-1974 dramatically raised the cost of airline operations through high fuel prices.[232] NASA implemented the Aircraft Energy Efficiency (ACEE) program as part of the national energy conservation effort in the 1970s. At this time, Science magazine featured an article discussing how soaring birds used their tip feathers to control flight characteristics. Whitcomb immediately shifted focus toward the wingtips of an aircraft—specifically flow phenomena related to induced drag—for his next challenge.[233]

Two types of drag affect the aerodynamic efficiency of a wing: profile drag and induced drag. Profile drag is a two-dimensional phenomenon, the drag depicted in the iconic image of airflow streaming past a body. Induced drag results from three-dimensional airflow near the wingtips. That airflow rolls up over the tip and produces vortexes trailing behind the wing. The energy exhausted in the wingtip vortex creates induced drag. Wings operating in high-lift, low-speed performance regimes can generate large amounts of induced drag. For subsonic transports, induced drag amounts to as much as 50 percent of the total drag of the airplane.[234]
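The classical finite-wing result makes the relationship concrete (standard aerodynamic theory, added here for clarity; this is not Whitcomb's specific analysis):

$$C_{D,i} = \frac{C_L^2}{\pi e\, AR},$$

where $C_{D,i}$ is the induced drag coefficient, $C_L$ the lift coefficient, $AR$ the wing aspect ratio, and $e$ the span efficiency factor. Because induced drag grows with the square of lift and falls with aspect ratio, it dominates in high-lift, low-speed flight, and any device that raises the effective $e$ or $AR$—as winglets do—attacks it directly.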

As part of the program, Whitcomb chose to address the wingtip vortex, the turbulent air found at the end of an airplane wing. These vortexes resulted from differences in air pressure generated on the upper and lower surfaces of the wing. As the higher-pressure air forms along the lower surface of the wing, it creates its own airflow along the length of the wing. At the wingtip, the airflow curls upward and forms an energy-robbing vortex that trails behind. Moreover, wingtip vortexes create enough turbulent air to endanger other aircraft that venture into their wake.

Whitcomb sought a way to control the wingtip vortex with a new aeronautical structure called the winglet. Winglets are vertical wing-like surfaces that extend above and sometimes below the tip of each wing. A winglet designer can balance the relationship between cant, the angle the winglet bends from the vertical, and toe, the angle the winglet deviates from the airflow, to produce a lift force angled forward so that it recovers thrust from the turbulent wingtip vortexes. This phenomenon is akin to a sailboat tacking upwind while, in the words of aviation observer George Larson: “the keel squeezes the boat forward like a pinched watermelon seed.”[235]

There were precedents for the use of what Whitcomb would call a “nonplanar,” or nonhorizontal, lifting system. It was known in the burgeoning aeronautical community of the late 1800s that the induced drag of wingtip vortexes degraded aerodynamic efficiency. Aeronautical pioneer Frederick W. Lanchester patented vertical surfaces, or “endplates,” to be mounted at an airplane’s wingtips, in 1897. His research revealed that vertical structures reduced drag at high lift. Theoretical studies conducted by the Army Air Service Engineering Division in 1924 and the NACA in 1938 in the United States and by the British Aeronautical Research Committee in 1956 investigated various nonplanar lifting systems, including vertical wingtip surfaces.[236] They argued that, theoretically, these structures would provide significant aerodynamic improvements for aircraft. Experimentation revealed that while there was the potential of reducing induced drag, the use of simple endplates produced too much profile drag to justify their use.[237]

Whitcomb and his research team investigated the drag-reducing properties of winglets for a first-generation, narrow-body subsonic jet transport in the 8-foot TPT from 1974 to 1976. They used a semispan model, meaning the aircraft model was cut in half and mounted on the tunnel wall, which enabled a larger test object, a higher Reynolds number, and the use of specific test equipment. Whitcomb compared a wing with a winglet and the same wing with a straight extension that increased its span. The constant was that both the winglet and the extension exerted the same structural load on the wing. Whitcomb found that winglets reduced induced drag by approximately 20 percent and improved the lift-to-drag ratio by 9 percent, roughly double the improvement from the straight wing extension. Whitcomb published his findings in “A Design Approach and Selected Wind-Tunnel Results at High Subsonic Speeds for Wing-Tip Mounted Winglets.”[238] It was obvious that the reduction in drag generated by a pair of winglets boosted performance by enabling higher cruise speeds.

With the results, Whitcomb provided a general design approach for the basic design of winglets based on theoretical calculations, physical flow considerations, and emulation of his overall approach to aerodynamics, primarily “extensive exploratory experiments.” What made a winglet rather than a simple vertical surface attached to the end of a wing was the designer’s ability to use well-known wing design principles to incorporate side forces to reduce lift-induced inflow above the wingtip and outflow below the tip to create a vortex diffuser. The placement and optimum height of the winglet reflected both aerodynamic and structural considerations in which the designer had to take into account the efficiency of the winglet as well as its weight. For practical operational purposes, the lower portion of the winglet could not hang down far below the wingtip for fear of damage on the ground. The fact that the ideal airfoil shape for a winglet was NASA’s general aviation airfoil made it even easier to incorporate winglets into an aircraft design.[239] Whitcomb’s basic rules provided that foundation.

Experimental wind tunnel studies of winglets in the 8-foot TPT continued through the 1970s. Whitcomb and his colleagues Stuart G. Flechner and Peter F. Jacobs concentrated next on the effects of winglets on a representative second-generation jet transport—the semispan model vaguely resembled a Douglas DC-10—at high subsonic speeds, specifically Mach 0.7 to 0.83. They concluded that winglets significantly reduced the induced drag coefficient while lowering overall drag. The smoothing out of the vortex behind the wingtip by the winglet accounted for the reduction in induced drag. As in the previous study, they saw that winglets generated a small increase in lift. The researchers calculated that winglets reduced drag better than simple wingtip extensions did, despite a minor increase in structural bending moments.[240]

Another benefit derived from winglets was the increase in the aspect ratio of the wing without compromising its structural integrity. The aspect ratio of a wing is the relationship between span—the distance from tip to tip—and chord—the distance between the leading and trailing edges. A long, thin wing has a high aspect ratio, which produces longer range at a given cruise speed because it does not suffer from wingtip vortexes and the corresponding energy losses as badly as a short, wide-chord, low aspect ratio wing. The drawback to a high aspect ratio wing is that its long, thin structure flexes easily under aerodynamic loads. Making this type of wing structurally stable required strengthening that added weight. Winglets offered increased aspect ratio with no increase in wingspan. Every 1-foot increase in wingspan, and thus in aspect ratio, brought an increase in wing-bending force. Wings structurally strong enough to support a 2-foot span increase would also support 3-foot winglets while producing the same gain in aspect ratio.[241]
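In symbols (standard definitions, added here for clarity):

$$AR = \frac{b^2}{S},$$

where $b$ is the wingspan and $S$ the wing area; for a rectangular wing this reduces to span divided by chord. The formula shows why winglets are attractive: they raise the wing's effective aspect ratio without increasing $b$, and thus without the full wing-root bending penalty of a span extension.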

NASA made sure the American aviation industry was aware of the results of Whitcomb’s winglet studies and its part in the ACEE program. Langley organized a meeting focusing on advanced technologies developed by NASA for Conventional Take-Off and Landing (CTOL) aircraft, primarily airliners, business jets, and personal aircraft, from February 28 to March 3, 1978. During the session dedicated to advanced aerodynamic controls, Flechner and Jacobs summarized the results of wind tunnel tests on winglets applied to a Boeing KC-135 aerial tanker, Lockheed L-1011 and McDonnell-Douglas DC-10 airliners, and a generic model with high aspect ratio wings.[242] Presentations from McDonnell-Douglas and Boeing representatives revealed ongoing industry work done under contract with NASA. Interest in winglets was widespread at the conference and afterward, as manufacturers across the United States began to consider their use in current and future designs.[243]

Whitcomb’s winglets first found use on general aviation aircraft at the same time he and his colleagues at Langley began testing them on air transport models, a good 4 years before the pivotal CTOL conference. Another visionary aeronautical engineer, Burt Rutan, adopted them for his revolutionary designs. The homebuilt Vari-Eze of 1974 incorporated winglets combined with vertical control surfaces. The airplane was an innovative aerodynamic configuration overall, with its forward canard, high aspect ratio wings, low-weight composite materials, lightweight engine, and pusher propeller. Whitcomb’s winglets on Rutan’s Vari-Eze offered private pilots a stunning alternative to conventional airplanes. Rutan’s nonstop world-circling Voyager and the Beechcraft Starship of 1986 also featured winglets.[244]

The business jet community was the first to embrace winglets and incorporate them into production aircraft. The first jet-powered airplane to enter production with winglets was the Learjet Model 28 in 1977. Learjet was in the process of developing a new business jet, the Model 55, and built the Model 28 as a testbed to evaluate its new proprietary high aspect ratio wing and winglet system, called the Longhorn. The manufacturer developed the system on its own initiative without assistance from Whitcomb or NASA, but it was clear where the winglets came from. Comparison flight tests of the Model 28 with and without winglets showed that winglets increased range by 6.5 percent. An additional benefit was improved directional stability. Learjet exhibited the Model 28 at the National Business Aircraft Association convention, put it into production because of its impressive performance, and included winglets on its successive business jets.[245] Learjet’s competitor, Gulfstream, also investigated the value of winglets to its aircraft in the late 1970s. The Gulfstream III, IV, and V aircraft included winglets in their designs. The Gulfstream V, able to cruise at Mach 0.8 for a distance of 6,500 nautical miles, captured over 70 national and world flight records and received the 1997 Collier Trophy. Records aside, the ability to fly business travelers nonstop from New York to Tokyo was unprecedented after the introduction of the Gulfstream V in 1995.[246]

Airline industry acceptance was mixed in the beginning. Boeing, Lockheed, and Douglas each investigated the possibility of incorporating winglets into current aircraft as part of the ACEE program. Winglets were a fundamental design technology, and each manufacturer had to design them for the specific airframe.


The KC-135 winglet test vehicle in flight over Dryden. NASA.

NASA awarded contracts to manufacturers to experiment with incorporating winglets into existing and new designs. Boeing concluded in May 1977 that the economic benefits of winglets did not justify the cost of fabrication for the 747. Lockheed chose to extend the wingtips for the L-1011 and install flight controls to alleviate the increased structural loads. McDonnell-Douglas immediately embraced winglets as an alternative to increasing the span of a wing and modified a DC-10 for flight tests.[247]

The next steps for Whitcomb and NASA were flight tests to demonstrate the viability of winglets for first- and second-generation transports and airliners. Whitcomb and his team chose the Air Force’s Boeing KC-135 aerial tanker as the first test airframe. The KC-135 shared with its civilian version, the pioneering 707, and other early airliners and transports an outer wing that exhibited elliptical span loading with high loading at the outer panels. This wingtip loading was ideal for winglets. Additionally, the Air Force wanted to improve the performance and fuel efficiency of the aging aerial tanker. Whitcomb and his team designed the winglet, and Boeing handled the structural design and fabrication of winglets for an Air Force KC-135. NASA and the Air Force performed the flight tests at Dryden Flight Research Center in 1979 and 1980. The tests revealed a 20-percent reduction in drag due to lift, with a 7-percent gain in the lift-to-drag ratio at cruise, which confirmed Whitcomb’s findings at Langley.[248]

McDonnell-Douglas conducted a winglet flight evaluation program with a DC-10 airliner as part of NASA’s Energy Efficient Transport (EET) program within the larger ACEE program in 1981. The DC-10 represented a second-generation airliner with a wing designed to produce nonelliptic loading to avoid wingtip pitch-up characteristics. As a result, the wing bending moments and structural requirements were not as dramatic as those found on a first-generation airliner, such as the 707. Whitcomb and his team conducted a preliminary wind tunnel examination of a DC-10 model in the 8-foot TPT. McDonnell-Douglas engineers designed the aerodynamic and structural shape of the winglets, and manufacturing personnel fabricated them. The company performed flight tests over 16 months, which included 61 comparison flights with a DC-10 leased from Continental Airlines. These industry flight tests revealed that the addition of winglets to a DC-10, combined with a drooping of the outboard ailerons, produced a 3-percent reduction in fuel consumption at passenger-carrying distances, which met the bottom line for airline operators.[249]

The DC-10 did not receive winglets because of the prohibitive cost of Federal Aviation Administration (FAA) recertification. Nevertheless, McDonnell-Douglas was a zealous convert and used the experience and design data for the advanced derivative of the DC-10, the MD-11, when that program began in 1986. The first flight in January 1990 and the grueling 10-month FAA certification process that followed validated the use of winglets on the MD-11. The extended range version could carry almost 300 passengers at distances over 8,200 miles, which made it one of the farthest-flying aircraft in history and ideal for expanding Pacific air routes.[250]

Despite its initial reluctance, Boeing justified the incorporation of winglets into the new 747-400 in 1985, making it the first large U.S. commercial transport to incorporate winglets. The technology increased the new airplane’s range by 3 percent, enabling it to fly farther and with more passengers or cargo. The Boeing winglet differed from the McDonnell-Douglas design in that it did not have a smaller fin below the wingtip. Boeing engineers felt the low orientation of the 747 wing, combined with the practical presence of airport ground-handling equipment, made the deletion necessary.[251]

It was clear that Boeing included winglets on the 747-400 for improved performance. Boeing also offered winglets as a customer option for its 737 series aircraft and, in the early 1990s, adopted blended winglets provided by Aviation Partners, Inc., of Seattle for its 737 and the 737-derivative Business Jet. The specialty manufacturer introduced its proprietary “blended winglet” technology—the winglet is joined to the wing via a characteristic curve—and started retrofitting them to Gulfstream II business jets. The performance accessory increased fuel efficiency by 7 percent. That work led to commercial airliner accounts. Winglets for the 737 offered fuel savings and reduced noise pollution. The relationship with Boeing led to a joint venture called Aviation Partners Boeing, which now produces winglets for the 757 and 767 airliners. By 2003, there were over 2,500 Boeing jets flying with blended winglets. The going rate for a set of the 8-foot winglets in 2006 was $600,000.[252]

Whitcomb’s winglets found use in transport, airliner, and business jet applications in the United States and Europe. Airbus installed them on production A319, A320, A330, and A340 airliners. It was apparent that, regardless of national origin, airlines chose a pair of winglets for their aircraft because they offered a savings of 5 percent in fuel costs. Rather than fly at the higher speeds made possible by winglets, most airline operators simply cruised at their pre-winglet speeds to save on fuel.[253]

Whitcomb’s aerodynamic winglets also found a place outside aeronautics, as they met the hydrodynamic needs of the international yacht racing community. In preparation for the America’s Cup yacht race in 1983, Australian entrepreneur Alan Bond embraced Whitcomb’s work on spiraling vortex drag and believed it could be applied to racing yachts. He assembled an international team that designed a winged keel, essentially a winglet tacked onto the bottom of the keel, for Australia II. Stunned by Australia II’s upsetting of the American 130-year winning streak, the international yachting community heralded the innovation as the key to winning the race. Bond argued that the 1983 America’s Cup race was instrumental to the airline industry’s adoption of the winglet and erroneously believed that McDonnell-Douglas engineers began experimenting with winglets during the summer of 1984.[254]

Of the three triumphant innovations pioneered by Whitcomb—the area rule fuselage, the supercritical wing, and the winglet—perhaps the last is the most easily recognizable for everyday air travelers and aviation observers. Engineer and historian Joseph R. Chambers remarked that “no single NASA concept has seen such widespread use on an international level as Whitcomb’s winglets.” The application to commercial, military, and general aviation aircraft continues.[255]

Proof at Last: The Shaped Sonic Boom Demonstration

After the HSR program dropped plans for an overland supersonic airliner, Domenic Maglieri compiled a NASA study of all known proposals for smaller supersonic aircraft intended for business customers.[501] In 1998, one year after the drafting of this report, Richard Seebass (by then with the University of Colorado) gave some lectures at NATO’s von Karman Institute in Belgium. He reflected on NASA’s conclusion that a practical, commercial-sized supersonic transport would produce a sonic boom that too many people would find unacceptable. On the other hand, he believed the recent high-speed research “leads us to conclude that a small, appropriately designed supersonic business jet’s sonic boom may be nearly inaudible outdoors and hardly discernible indoors.” Such an airplane, he stated, “appears to have a significant market . . . if . . . certifiable over most land areas.”[502]

At the start of the new century, the prospects for a small supersonic aircraft received a shot in the arm from the Defense Advanced Research Projects Agency, well known for encouraging innovative technologies. DARPA received $7 million in funding starting in FY 2001 to explore design concepts for a Quiet Supersonic Platform (QSP)—an airplane that could have both military and civilian potential. Richard W. Wlezien, a NASA official on loan to DARPA as QSP program manager, wanted ideas that might lead to a Mach 2.4, 100,000-pound aircraft that “won’t rattle your windows or shake the china in your cabinet.” It was hoped that a shaped sonic boom signature of no more than 0.3 psf would allow unrestricted operations over land. By the end of 2000, 16 companies and laboratories had been selected to participate in the QSP project, with the University of Colorado and Stanford University to work on sonic boom propagation and minimization.[503] Support from NASA would include modeling expertise, wind tunnel facilities, and flight-test operations.

Although the later phase of the QSP program emphasized military requirements, its most publicized achievement was the Shaped Sonic Boom Demonstration (SSBD). This was not one of its original components.


In 1995, the Dryden Flight Research Center used an F-16XL to make detailed in-flight supersonic shock wave measurements as near as 80 feet from an SR-71. NASA.

Resurrecting an idea from the HSR program, Domenic Maglieri and colleagues at Eagle Aeronautics recommended that DARPA include a flight-test program using the BQM-34E Firebee II as a proof-of-concept for the QSP’s sonic boom objectives. Liking this idea, Northrop Grumman Corporation (NGC) wasted no time in acquiring the last remaining Firebee IIs from the Naval Air Weapons Station at Point Mugu, CA, but later determined that they were now too old for test purposes. As an alternative, NGC aerodynamicist David Graham recommended using different versions of the Northrop F-5 (which had been modified into larger training and reconnaissance models) for sonic boom comparisons. Maglieri then suggested modifications to an F-5E that could flatten its sonic boom signature. Based largely on NGC’s proposal for an F-5E Shaped Sonic Boom Demonstration, DARPA in July 2001 selected it over QSP proposals from the other two system integrators, Boeing Phantom Works and Lockheed Martin’s Skunk Works.[504]

In designing the modifications, a Northrop Grumman team in El Segundo, CA, led by David Graham, benefited from its partnership with a multitalented working group. This team included Kenneth Plotkin of Wyle Laboratories, Domenic Maglieri and Percy Bobbitt of Eagle Aeronautics, Peter G. Coen and colleagues at the Langley Center, John Morgenstern of Lockheed Martin, and other experts from Boeing, Gulfstream, and Raytheon. They applied knowledge gained from the HSR with the latest in CFD technology to begin design of a nose extension and other modifications to reshape the F-5E’s sonic boom. The moderate size and flexibility of the basic F-5E design, which had allowed different configurations in the past, made it the perfect choice for the SSBD. The shaped-signature modifications (which harked back to the stillborn SR-71 proposal of the HSR program) were tested in a supersonic wind tunnel at NASA’s Glenn Research Center with favorable results.[505]

In further preparation for the SSBD, the Dryden Center conducted the Inlet Spillage Shock Measurement (ISSM) experiment in February 2002. One of its F-15Bs equipped with an instrumented nose boom gathered pressure data from a standard F-5E flying at about Mach 1.4 and 32,000 feet. The F-15 made these probes at separation distances ranging from 60 to 1,355 feet. In addition to serving as a helpful “dry run” for the planned demonstration, the ISSM experiment proved to be of great value in validating and refining Grumman’s proprietary GCNSfv CFD code (based on the Ames Center’s ARC3D code), which was being used to design the SSBD configuration. Application of the flight test measurements nearly doubled the size of the CFD grid, to approximately 14 million points.[506]

For use in the Shaped Sonic Boom Demonstration, the Navy loaned Northrop Grumman one of its standard F-5Es, which the company began to modify at its depot facility in St. Augustine, FL, in January 2003. Under supervision of the company’s QSP program manager, Charles Boccadoro, NGC technicians installed a nose glove and 35-foot fairing under the fuselage (resulting in a “pelican-shaped” profile). The modifications, which extended the plane’s length from 46 to approximately 50 feet, were designed to strengthen the bow shock but weaken and stretch out the shock waves from the cockpit, inlets, and wings—keeping them from coalescing to form the sharp initial peak of the N-wave signature.[507] After checkout flights in Florida starting on July 25, 2003, the modified F-5E, now called the SSBD F-5E, arrived in early August at Palmdale, CA, for more functional check flights.

On August 27, 2003, on repeated runs through an Edwards supersonic corridor, the SSBD F-5E, piloted by NGC’s Roy Martin, proved for the first time that—as theorized since the 1960s—a shaped sonic boom signature from a supersonic aircraft could persist through the real atmosphere to the ground. Flying at Mach 1.36 and 32,000 feet on an early-morning run, the SSBD F-5E was followed 45 seconds later by an unmodified F-5E from the Navy’s aggressor training squadron at Fallon, NV. They flew over a high-tech ground array of various sensors manned by personnel from Dryden, Langley, and almost all the organizations in the SSBD working group. Figure 9 shows the subtle but significant difference between the flattened waveform from the SSBD F-5E (blue) and the peaked N-wave from its unmodified counterpart (red) as recorded by a Base Amplitude and Direction Sensor (BADS) on this historic occasion. As a bonus, the initial rise in pressure of the shaped signature was only about 0.83 psf as compared with the 1.2 psf from the standard F-5E—resulting in a much quieter sonic boom.[508]
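The difference between the two waveforms can be visualized with a schematic model. The sketch below is illustrative only: the shapes, the 0.1-second duration, and the plateau fraction are idealizations built around the 1.2 and 0.83 psf peaks reported above, not the recorded BADS data.

```python
import numpy as np

def n_wave(t, peak=1.2, duration=0.1):
    """Idealized N-wave: instantaneous rise to +peak, linear fall to
    -peak, instantaneous recovery. t is time (s) after shock arrival."""
    inside = (t >= 0) & (t <= duration)
    return np.where(inside, peak * (1.0 - 2.0 * t / duration), 0.0)

def flat_top_wave(t, peak=0.83, duration=0.1, plateau_frac=0.3):
    """Idealized shaped signature: a weaker bow shock holds a plateau
    before decaying, instead of spiking and falling immediately."""
    p = np.zeros_like(t, dtype=float)
    hold = (t >= 0) & (t < plateau_frac * duration)
    decay = (t >= plateau_frac * duration) & (t <= duration)
    p[hold] = peak
    frac = (t[decay] - plateau_frac * duration) / (duration * (1 - plateau_frac))
    p[decay] = peak * (1.0 - 2.0 * frac)  # decay from +peak to -peak
    return p

t = np.linspace(-0.02, 0.12, 1400)
print("N-wave initial rise:  %.2f psf" % n_wave(t).max())
print("shaped initial rise:  %.2f psf" % flat_top_wave(t).max())
```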

During the last week of August, the two F-5Es flew three missions to provide many more comparative sonic boom recordings. On two other missions, using the technique developed for the SR-71 during HSR, a Dryden F-15B with a specially instrumented nose boom followed the SSBD-modified F-5E to gather near-field measurements. The data from the F-15B probing missions showed how the F-5E’s modifications changed its normal shock wave signature, which data from the ground sensors revealed as persisting down through the atmosphere to consistently produce the quieter flat-topped sonic boom signatures. The SSBD met expectations, but unusually high temperatures (even for the Antelope Valley in August) limited the top speed and endurance of the F-5Es. Because of this and a desire to gather more data on maneuvers and different atmospheric conditions, Peter Coen, Langley’s manager for supersonic vehicles technology, and researchers at Dryden led by SSBD project manager David Richwine and principal investigator Ed Haering began planning a NASA-funded Shaped Sonic Boom Experiment (SSBE) to follow up on the SSBD.[509]

NASA successfully conducted the SSBE with 21 more flights during 11 days in January 2004. These met or exceeded all test objectives. Eight of these flights were again accompanied by an unmodified Navy F-5E from Fallon, while Dryden’s F-15B flew four more probing flights to acquire additional near-field measurements. An instrumented L-23 sailplane from the USAF Test Pilot School obtained boom measurements from 8,000 feet (well above the ground turbulence layer) on 13 flights. All events were precisely tracked by differential GPS receivers and Edwards AFB’s extensive telemetry system. In all, the SSBE yielded over 1,300 sonic boom signature recordings and 45 probe datasets—obtaining more information about the effects of turbulence, helping to confirm CFD predictions and wind tunnel validations, and bequeathing a wealth of data for future engineers and designers.[510] In addition to a series of scientific papers, the SSBD-SSBE accomplishments were the subject of numerous articles in the trade and popular press, and participants presented well-received briefings at various aeronautics and aviation venues.

Flight Control Systems and Pilot-Induced Oscillations

Pilot-induced oscillations (PIO) occur when the pilot commands the control surfaces to move at a frequency and/or magnitude beyond the capability of the surface actuators. When a hydraulic actuator is commanded to move beyond its design rate limit, it will lag behind the commanded deflection. If the command is oscillatory in nature, then the resulting surface movement will be smaller, and at a lower rate, than commanded. The pilot senses a lack of responsiveness and commands even larger surface deflections. This is the same instability that can be generated by a high-gain limit-cycle, except that the feedback path is through the pilot’s stick, rather than through a sensor and an electronic servo. The instability will continue until the pilot reduces his gain (ceases to command large rapid surface movement), thus allowing the actuator to return to its normal operating range.
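A minimal simulation makes the mechanism visible. The sketch below is a generic rate-limited actuator model with illustrative numbers, not any actual flight control code; it shows how an oscillatory command beyond the actuator's rate limit yields a smaller, lagging surface motion—exactly the lack of responsiveness that tempts the pilot into larger inputs.

```python
import numpy as np

def rate_limited_actuator(command, dt, rate_limit):
    """Model of a surface actuator that can move no faster than
    rate_limit (deg/s). It tracks the command exactly until the
    commanded rate exceeds the limit, then lags behind it."""
    surface = np.zeros_like(command)
    for i in range(1, len(command)):
        desired_step = command[i] - surface[i - 1]
        max_step = rate_limit * dt
        surface[i] = surface[i - 1] + np.clip(desired_step, -max_step, max_step)
    return surface

# Oscillatory pilot command: 10 deg amplitude at 1 Hz.
dt = 0.001
t = np.arange(0.0, 3.0, dt)
command = 10.0 * np.sin(2 * np.pi * 1.0 * t)  # deg
# Peak commanded rate is 2*pi*10 ~ 63 deg/s; a 20 deg/s actuator saturates,
# so the achieved motion is both smaller and phase-lagged.
surface = rate_limited_actuator(command, dt, rate_limit=20.0)
print("peak commanded deflection: %.1f deg" % command.max())
print("peak achieved deflection:  %.1f deg" % surface.max())
```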

The prototype General Dynamics YF-16 Lightweight Fighter (LWF) unexpectedly encountered a serious PIO problem on a high-speed taxi test in 1974. The airplane began to oscillate in roll near the end of the test. The pilot, Philip Oestricher, applied large, corrective stick inputs, which saturated the control actuators and produced a pilot-induced oscillation. When the airplane began heading toward the side of the runway, the pilot elected to add power and fly the airplane rather than veer into the dirt alongside the runway. Shortly after the airplane became airborne, his large stick inputs ceased, and the PIO and limit-cycle stopped. Oestricher then flew a normal pattern and landed the airplane safely. Several days later, after suitable modifications to its flight control system, it completed its “official” first flight.

The cause of this problem was primarily related to the “force stick” used in the prototype YF-16. The control stick was rigidly attached to the airplane, and strain gages on the stick measured the force being applied by the pilot. This electrical signal was transmitted to the flight control system as the pilot’s command. There was no motion of the stick, thus no feedback to the pilot of how much control deflection he was commanding. During the taxi test, the pilot was unaware that he was commanding full deflection in roll, thus saturating the actuators. The solution was a reduction in the gain of the pilot’s command signal, as well as a geometry change to the stick that allowed a small amount of stick movement. This gave the pilot some tactile feedback as to the amount of control deflection being commanded, and a hard stop when the stick was commanding full deflection.[687] The incident offered lessons in both control system design and in human factors engineering, particularly on the importance of ensuring that pilots receive indications of the magnitude of their control inputs via movable sticks. Subsequent fly-by-wire (FBW) aircraft have incorporated this feature, as opposed to the “fixed” stick concept tried on the YF-16. As for the YF-16, it won the Lightweight Fighter design competition, was placed in service in more developed form as the F-16 Fighting Falcon, and subsequently became a widely produced Western jet fighter.

Another PIO occurred during the first runway landing of the NASA-Rockwell Space Shuttle orbiter during its approach and landing tests in 1978. After the flare, and just before touchdown, astronaut pilot Fred Haise commanded a fairly large pitch control input that saturated the control actuators.


The General Dynamics YF-16 prototype Lightweight Fighter (LWF) in flight over the Edwards range. USAF.

At touchdown, the orbiter bounced slightly and the rate-limiting saturation transferred to the roll axis. In an effort to keep the wings level, the pilot made additional roll inputs that created a momentary pilot-induced oscillation that continued until the final touchdown. At one point, it seemed the orbiter might veer toward spectators, one of whom was Britain’s Prince Charles, then on a VIP tour of the United States. (Ironically, days earlier, the Prince of Wales had “flown” the Shuttle simulator at the NASA Johnson Space Center, encountering the same kind of lateral PIO that Haise did on touchdown.) Again, the cause was related to the high sensitivity of the stick in comparison with the Shuttle’s slow-moving elevon actuators. The incident sparked a long and detailed study of the orbiter’s control system in simulators and on the actual vehicle. Several changes were made to the control system, including a reduced sensitivity of the stick and an increase in the maximum actuator rates.[688]

The above discussion of electronic control system evolution has sequentially addressed the increasing complexity of the systems. This was not necessarily the actual chronological sequence. The North American F-107, an experimental nuclear strike fighter derived from the earlier F-100 Super Sabre, utilized one of the first fly-by-wire control systems—the Augmented Longitudinal Control System (ALCS)—in 1956. One of the three prototypes was used by NASA, thus providing the Agency with its first exposure to fly-by-wire technology. Difficult maintenance of the one-of-a-kind subsystems in the F-107 forced NASA to abandon its use as a research airplane after about 1 year of flying.

Dynamic Instabilities

There are dangerous situations that can occur because of either a coupling of the aerodynamics in different axes or a coupling of the aerodynamics with the inertial characteristics of an airplane. Several of these—Chuck Yeager’s close call with the X-1A in December 1953 and Milburn Apt’s fatal encounter in September 1956—have been mentioned previously.

Inertial Roll Coupling

Inertial roll coupling is the dynamic loss of control of an airplane occurring during a rapid roll maneuver. The phenomenon of inertial roll coupling is directly related to the evolution of aircraft design. At the time of the Wrights through much of the interwar years, wingspan greatly exceeded fuselage length. As aircraft flight speeds rose, the aspect ratio of wings decreased, and the fineness ratio of fuselages rose, so that by the end of the Second World War, wingspan and fuselage length were roughly equal. In the supersonic era that followed, wingspan reduced dramatically, and fuselage length grew appreciably (think, for example, of an aircraft such as the Lockheed F-104). Such aircraft were highly vulnerable to pitch/yaw/roll-coupling when a rapid rolling maneuver was initiated.

The late NACA-NASA engineer and roll-coupling expert Dick Day described inertial roll coupling as "a resonant divergence in pitch or yaw when roll rate equals the lower of the pitch or yaw natural frequencies.”[738]

The existence of inertial roll coupling was first revealed by NACA Langley engineer William H. Phillips in 1948, 5 years before it became a dangerous phenomenon.[739] Phillips not only described the reason for the potential loss of control but also defined the criteria for identifying the boundaries of loss of control for different aircraft. During the 1950s, several research airplanes and the Century series fighters encountered fairly severe inertial coupling problems exactly as predicted by Phillips. These airplanes differed from the earlier prop-driven airplanes by having thin, short wings and the mass of the jet engine and fuel concentrated along the fuselage longitudinal axis. This resulted in higher moments of inertia in the pitch and yaw axes but a significantly lower inertia in the roll axis. The low roll inertia also allowed these airplanes to achieve higher roll rates than their predecessors had. The combination allowed the mass along the fuselage to be slung outward when the airplane was rolled rapidly, producing an unexpected increase in pitching and yawing motion. This divergence in pitch or yaw was related to the magnitude of the roll rate and the duration of the roll. If the roll were sustained long enough, the pitch or yaw angles would become quite large, and the airplane would tumble out of control. In most cases, the yaw axis had the lowest level of static stability, so the divergence was observed as a steady increase in sideslip.[740]
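Day's description can be written compactly (a standard simplification of Phillips's analysis, added here for clarity): divergence sets in when the roll rate $p$ reaches the lower of the two natural frequencies,

$$p \approx \min(\omega_\theta, \omega_\psi),$$

where $\omega_\theta$ and $\omega_\psi$ are the pitch and yaw natural frequencies. Because each frequency grows with static stability and shrinks with the corresponding moment of inertia, the fuselage-concentrated mass of the new jets lowered $\omega_\psi$ while their low roll inertia raised the achievable $p$, pushing them across the boundary; enlarging the vertical tail, as described next, raised $\omega_\psi$ and moved the boundary back above usable roll rates.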

In 1954, after North American Aviation had encountered roll instability with its F-100 aircraft, the Air Force and NAA transferred an F-100A to NACA FRC to allow the NACA to explore the problem through flight testing and identify a fix. The NACA X-3 research airplane was of a configuration much like that of the modern fighters and was also used by NACA FRC to explore the inertial coupling problem. These results essentially confirmed Phillips’s earlier predictions and determined that increasing the directional stability via larger vertical fin area would mitigate the problem. The Century series fighters were all reconfigured to reduce their susceptibility to inertial coupling. The vertical tail size was increased for the F-100C and D airplanes.[741] All F-104s were retrofitted with a ventral fin on the lower aft fuselage, which increased their directional stability by 10 to 15 percent. The F-104B, and later models, also had a larger vertical fin and rudder. The F-102 and F-105 received larger vertical tails than their predecessors (the YF-102 and YF-105), and the Mach 2+ F-106 had a larger vertical tail than the F-102 had. Control limiting and placards against continuous rolls (more than 720 degrees of bank) were instituted to ensure safe operation. The X-15 was also susceptible to inertial coupling, and its roll divergence tendencies could be demonstrated on the X-15 simulator. Since high roll rates were not necessary for the high-speed, high-altitude mission of the airplane, the pilots were instructed to avoid high roll rates, and, fortunately, no inertial coupling problems occurred during its flight-testing.

The Supersonic Blunt Body Problem

On November 1, 1952, the United States detonated a 10.4-megaton hydrogen test device on Eniwetok Atoll in the Marshall Islands, the first implementation of physicist Edward Teller’s concept for a “super bomb” and a major milestone toward the development of the American hydrogen bomb. With it came the need for a new entry vehicle beyond the long-range strategic bomber, namely the intercontinental ballistic missile (ICBM). This vehicle would be launched by a rocket booster, go into a suborbital trajectory in space, and then enter Earth’s atmosphere at hypersonic speeds near orbital velocity. This was a brand-new flight regime, and the design of the entry vehicle was dominated by an emerging design consideration: aerodynamic heating. Knowledge of the existence of aerodynamic heating was not new. Indeed, in 1876, Lord Rayleigh published a paper in which he noted that the compression process that creates a high stagnation pressure on a high-velocity body also results in a correspondingly large increase in temperature. In particular, he commented on the flow-field characteristic of a meteor entering Earth’s atmosphere, noting: “The resistance to a meteor moving at speeds comparable with 20 miles per second must be enormous, as also the rise of temperature due to the compression of the air. In fact it seems quite unnecessary to appeal to friction in order to explain the phenomena of light and heat attending the entrance of a meteor into the earth’s atmosphere.”[772] We note that 20 miles per second is a Mach number greater than 100. Thus, the concept of aerodynamic heating on very high-speed bodies dates back before the 20th century. However, it was not until the middle of the 20th century that aerodynamic heating suddenly became a showstopper in the design of high-speed vehicles, initiated by the pressing need to design the nose cones of ICBMs.
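Rayleigh's observation corresponds to the modern stagnation-temperature relation for a calorically perfect gas (added here to quantify the point):

$$\frac{T_0}{T_\infty} = 1 + \frac{\gamma - 1}{2} M_\infty^2.$$

With $\gamma = 1.4$, a Mach 20 body would nominally see $T_0/T_\infty = 81$—on the order of 18,000 kelvins in the upper atmosphere—though at such temperatures the perfect-gas assumption itself breaks down and real-gas effects hold the actual air temperature lower. Either way, the heating problem is extreme.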

In 1952, conventional wisdom dictated that the shape of a missile’s nose cone should be a slender, sharp-nosed configuration. This was a natural extension of good supersonic design in which the supersonic body should be thin and slender with a sharp nose, all designed to reduce the strength of the shock wave at the nose and therefore reduce the supersonic wave drag. (Among airplanes, the Douglas X-3 Stiletto and the Lockheed F-104A Starfighter constituted perfect exemplars of good supersonic vehicle design, with long slender fuselages, sharp noses, and very thin low aspect ratio [that is, stubby] wings having extremely sharp leading edges. This is all to reduce the strength of the shock waves on the vehicle. The X-3 and F-104 were the first jet airplanes designed for flight at Mach 2, hence their design was driven by the desire to reduce wave drag.) With this tradition in mind, early thinking on ICBM nose cones for hypersonic flight was more of the same, only more so. On the other hand, early calculations showed that the aerodynamic heating to such slender bodies would be enormous. This conventional wisdom was turned on its head in 1951 because of an epiphany by Harry Julian Allen (“Harvey” Allen to his friends because of Allen’s delight in the rabbit character named Harvey, played by Jimmy Stewart in the movie of the same name). Allen was at that time the Chief of the High-Speed Research Division at the NACA Ames Research Laboratory. One day, Harvey Allen walked into the office and simply stated that hypersonic bodies should “look like cannonballs.”

His reasoning was so fundamental and straightforward that it is worth noting here. Imagine a vehicle coming in from space and entering the atmosphere. At the edge of the atmosphere the vehicle velocity is high, hence it has a lot of kinetic energy (one-half the product of its mass and velocity squared). Also, because it is so far above the surface of Earth (the outer edge of the atmosphere is about 400,000 feet), it has a lot of potential energy (its mass times its distance from Earth times the acceleration of gravity). At the outer edge of the atmosphere, the vehicle simply has a lot of energy. By the time it impacts the surface of Earth, its velocity is zero and its height is zero—no kinetic or potential energy remains. Where has all the energy gone? The answer is the only two places it could: the air itself and the body. To reduce aerodynamic heating to the body, you want more of this energy to go into the air and less into the body. Now imagine two bodies of opposite shapes, a very blunt body (like a cannonball) and a very slender body (like a needle), both coming into the atmosphere at hypersonic speeds. In front of the blunt body, there will be a very strong bow shock wave detached from the surface with a very high gas temperature behind the strong shock (typically about 8,000 kelvins). Hence the air is massively heated by the strong shock wave. A lot of energy goes into the air, and therefore, only a moderate amount of energy goes into the body. In contrast, in front of the slender body there will be a much weaker attached shock wave with more moderate gas temperatures behind the shock. Hence the air is only moderately heated, and a massive amount of energy is left to go into the body. As a result, a blunt body shape will reduce the aerodynamic heating in comparison to a slender body. Indeed, if a slender body were used, the heating would melt and blunt the nose anyway. This was Allen’s thinking. It led to the use of blunt noses on all modern hypersonic vehicles, and it stands as one of the most important aerodynamic contributions of the NACA over its history.
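Allen's argument is, at bottom, an energy bookkeeping statement (written out here for clarity):

$$E = \underbrace{\tfrac{1}{2}mV^2}_{\text{kinetic}} + \underbrace{mgh}_{\text{potential}} = Q_{\text{air}} + Q_{\text{body}},$$

where the total energy $E$ at atmospheric entry must end up either as heat in the air ($Q_{\text{air}}$) or as heat in the vehicle ($Q_{\text{body}}$). The designer cannot change $E$; all that shaping the body can do is shift the split, and the blunt shape shifts it decisively toward the air.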

When Allen introduced his blunt body concept in the early 1950s, there were no theoretical solutions of the flow over a blunt body moving at supersonic or hypersonic speeds. In the flow field behind the strong curved bow shock wave, the flow behind the almost vertical portion of the shock near the centerline is subsonic, and that behind the weaker, more inclined part of the shock wave farther above the centerline is supersonic. There were no pure theoretical solutions to this flow. Numerical solutions of this flow were tried in the 1950s, but all without success. Whatever technique worked in the subsonic region of the flow fell apart in the supersonic region, and whatever technique worked in the supersonic region of the flow fell apart in the subsonic region. This was a potential disaster, because the United States was locked in a grim struggle with the Soviet Union to field and employ intercontinental and intermediate-range ballistic missiles, and the design of new missile nose cones desperately needed solutions of the flow over the body if the United States were ever to successfully field a strategic missile arsenal.

On the scene now crept CFD. A small ray of hope came from one of the NACA’s and later NASA’s most respected theoreticians, Milton O. Van Dyke. Spurred by the importance of solving the supersonic blunt body problem, Van Dyke developed an early numerical solution for the blunt body flow field using an inverse approach: take a curved shock wave of given shape, calculate the flow behind the shock, and solve for the shape of the body that would generate the assumed shock shape. In turn, the flow over a blunt body of given shape could be approached by repetitive applications of this inverse solution, eventually converging to the shape of interest. Although critical, this was a potentially tedious task that could have consumed thousands of hours by hand calculation; but by using the early IBM computers at Ames, Van Dyke was able to obtain the first reliable numerical solution of the supersonic blunt body flow field, publishing his pioneering work in the first NASA Technical Report issued after the establishment of the Agency.[773] Van Dyke’s solution constituted the first important and practical use of CFD but was not without limitations. Although the first major advancement toward the solution of the supersonic blunt body problem, it was only half a loaf. His procedure worked well in the subsonic region of the flow field, but it could penetrate only a small distance into the supersonic region before blowing up. A uniform solution of the whole flow field, including both the subsonic and supersonic regions, was still not obtainable. The supersonic blunt body problem rode into the decade of the 1960s as daunting as it ever was. Then came the breakthrough, which was both conceptual and numerical.

First the conceptual breakthrough: at this time the flow was being calculated as a steady flow using the Euler equations, i.e., the flow was assumed to be inviscid (frictionless). For this flow, the governing partial differential equations of continuity, momentum, and energy (the Euler equations) exhibited one type of mathematical behavior (called elliptic behavior) in the subsonic region of the flow and a completely different type of mathematical behavior (called hyperbolic behavior) in the supersonic region of the flow. The equations themselves remain identical in these two regions, but the actual behavior of the mathematical solutions is different. (This is no real surprise because the physical behavior of the flow is certainly different between a subsonic and a supersonic flow.) This change in the mathematical characteristics of the equations was the root cause of all the problems in obtaining a solution to the supersonic blunt body problem. Hence, any numerical solution appropriate for the elliptic (subsonic) region simply was ill-posed in the supersonic region, and any numerical solution appropriate for the hyperbolic (supersonic) region was ill-posed in the subsonic region. Hence, no unified solutions for the whole flow field could be obtained. Then, in the middle 1960s, the following idea surfaced: the Euler equations written for an unsteady flow (carrying along the time derivatives in the equations) were completely hyperbolic with respect to time no matter whether the flow were locally subsonic or supersonic. Why not solve the blunt body flow field by first arbitrarily assuming flow-field properties at all the grid points, calling this the initial flow field at time zero, and then solving the unsteady Euler equations in steps of time, obtaining new flow-field values at each new step in time? The problem is properly posed because the unsteady equations are hyperbolic with respect to time throughout the whole flow field. After continuing this process over a large number of time steps, eventually the changes in the flow properties from one time step to the next grow smaller, and if one goes out to a sufficiently large number of time steps, the flow converges to the steady-state solution. It is this steady-state solution that is desired. The time-marching process is simply a means to the end of obtaining the solution.[774]
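The idea is easy to demonstrate on a model problem. The sketch below uses one-dimensional heat conduction standing in for the Euler equations (grid size, time step, and tolerance are illustrative): it marches an unsteady equation in time from an arbitrary initial field until the changes between steps vanish, keeping the converged field as the steady solution.

```python
import numpy as np

# Time-marching to a steady state: start from an arbitrary "time zero"
# field and integrate the *unsteady* equation until nothing changes.
nx = 51
dx = 1.0 / (nx - 1)
u = np.zeros(nx)          # arbitrary initial guess everywhere
u[0], u[-1] = 1.0, 0.0    # boundary values held fixed
dt = 0.4 * dx**2          # stable explicit step (limit is 0.5*dx**2)

for step in range(100000):
    u_new = u.copy()
    # explicit update of the unsteady heat equation du/dt = d2u/dx2
    u_new[1:-1] = u[1:-1] + dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    change = np.abs(u_new - u).max()
    u = u_new
    if change < 1e-10:    # step-to-step changes vanish: steady state
        break

print(f"converged after {step} steps; steady solution is the line u(x) = 1 - x")
```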

The numerical breakthrough was the implementation of this time-marching approach by means of CFD. Indeed, this process can only be carried out in a practical fashion on a high-speed computer using CFD techniques. The time-marching approach revolutionized CFD. Today, this approach is used for the solution of a whole host of different flow problems, but it got its start with the supersonic blunt body problem. The first practical implementation of the time-marching idea to the supersonic blunt body was carried out by Gino Moretti and Mike Abbett in 1966.[775] Their work transformed the field of CFD. The supersonic blunt body problem in the 1950s and 1960s was worked on by platoons of researchers leading to hundreds of research papers at an untold number of conferences, and it cost millions of dollars. Today, because of the implementation of the time-marching approach by Moretti and Abbett using a finite-difference CFD solution, the blunt body solution is readily carried out in many Government and university aerodynamic laboratories, and is a staple of those aerospace companies concerned with supersonic and hypersonic flight. Indeed, this approach is so straightforward that I have assigned the solution of the supersonic blunt body problem as a homework problem in a graduate course in CFD. What better testimonial of the power of CFD! A problem that used to be unsolvable and for which much time and money was expended to obtain its solution is now reduced to being a “teachable moment” in a graduate engineering course.

Dryden Flight Research Center

NASA Dryden has a deserved reputation as a flight research and flight-testing center of excellence. Its personnel had been technically responsible for flight-testing every significant high-performance aircraft since the advent of the world’s first supersonic research airplane, the Bell XS-1. When this facility first became part of the NACA, as the Muroc Flight Test Unit in the late 1940s, there was no overall engineering functional organization. There was a small team attached to each test aircraft, consisting of a project engineer, an engineer, and “computers”—highly skilled women mathematicians. There were also three supporting groups: Flight Operations (pilots, crew chiefs, and mechanics), Instrumentation, and Maintenance. By 1954, however, the High-Speed Flight Station (as it was then called) had been organized into four divisions: Research, Flight Operations, Instrumentation, and Administrative. The Research division included three branches: Stability & Control, Loads, and Performance.

Shortly thereafter, Instrumentation became Data Systems, to include Computing and Simulation (sometimes together, sometimes separately). There were changes to the organization, mostly gradual, after that, but these essential functions were always present from that time forward.[862] There are approximately 50 people in the structures, structural dynamics, and loads disciplines.[863]

Analysis efforts at Dryden include establishing safety of flight for the aircraft tested there, flight-test and ground-test data analysis, and the development and improvement of computational methods for prediction. Commercially available codes are used when they meet the need, and in-house development is undertaken when necessary. Methods development has been conducted in the fields of general finite element analysis, reentry problems, fatigue and structural life prediction, structural dynamics and flutter, and aeroservoelasticity.

Reentry heating has been an important problem at Dryden since the X-15 program. Extensive thermal research was conducted during the NASA YF-12 flight project, which is discussed in a later section. One very significant application of thermal-structural predictive methods was the thermal modeling of the Space Shuttle orbiter, using the Lewis-developed Structural Performance and Redesign (SPAR) finite element code. Prior to first flight, the conditions of the boundary layer on various parts of the vehicle in actual reentry conditions were not known. SPAR was used to model the temperature distribution in the Shuttle structure for three different cases of aerodynamic heating: laminar boundary layer, turbulent boundary layer, and separated flow. Analysis was based on the Space Transportation System trajectory 1 (STS-1) flight profile, and results were compared with temperature time histories from the first mission. The analysis showed that the flight data were best matched under the assumption of extensive laminar flow on the lower surface and partial laminar flow on the upper surface. This was one piece of evidence confirming the important realization that laminar boundary layers could exist under conditions of practical interest for hypersonic flight.[864]

Dryden has a unique thermal loads laboratory, large enough to house an SR-71 or similar-sized aircraft and heat the entire airframe to temperatures representative of high-speed flight conditions. This facility is used to calibrate flight instrumentation at expected temperatures and also to independently apply thermal and structural loads for the purpose of validating predictive methods or gaining a better understanding of the effects of each. It was built during the X-15 program in the 1960s and is still in use today.

Aeroservoelasticity—the interaction of air loads, flexible structures, and active control systems—has become increasingly important since the late 1970s. As active fly-by-wire control entered widespread use in high-performance aircraft, engineers at Dryden worked to integrate control system modeling with finite element structural analysis and aerodynamic modeling. Structural Analysis Routines (STARS) and other programs were developed and improved from the 1980s through the present. Recent efforts have addressed the modeling of uncertainty and adaptive control.[865]
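The core idea of aeroservoelastic analysis—that the control system changes the damping of the coupled structural-aerodynamic system—can be illustrated with a minimal single-mode sketch (illustrative only, not the STARS formulation). The mode frequency, aerodynamic damping law, and feedback gain below are all assumed numbers.

```python
import numpy as np

# One structural mode whose aerodynamic damping becomes destabilizing at
# high dynamic pressure, with rate feedback from the control system
# restoring stability. Stability is read from the eigenvalue real parts.

omega = 2 * np.pi * 5.0        # structural mode frequency, rad/s (assumed)
zeta_s = 0.02                  # structural damping ratio (assumed)

def eigenvalues(q_dyn, gain):
    """Closed-loop eigenvalues of x'' + c_total x' + omega^2 x = 0."""
    c_aero = -2.0e-4 * q_dyn              # destabilizing aero damping (assumed law)
    c_total = 2 * zeta_s * omega + c_aero + gain  # rate feedback adds damping
    A = np.array([[0.0, 1.0],
                  [-omega**2, -c_total]])
    return np.linalg.eigvals(A)

for q in (1000.0, 8000.0):
    open_loop = eigenvalues(q, gain=0.0)
    closed_loop = eigenvalues(q, gain=1.5)
    print(f"q = {q:6.0f} Pa  open-loop Re(s) = {open_loop[0].real:+.3f},"
          f"  closed-loop Re(s) = {closed_loop[0].real:+.3f}")
```

At the higher dynamic pressure, the open-loop real part goes positive (flutter) while the closed loop remains stable—the interaction the analysis codes must capture.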

At Dryden, much of the technology transfer to industry comes not so much from the release of codes developed at Dryden as from the interaction of the contractors who develop the aircraft with the technical groups at Dryden who participate in the analysis and testing. Dryden has been involved, for example, in aeroservoelastic analysis of the X-29; of F-15s and F-18s in standard and modified configurations (including physical airframe modifications and/or modifications to the control laws); of High Altitude Long Endurance (HALE) unpiloted vehicles, which present their own set of challenges, usually flying at lower speeds but also having longer and more flexible structures than fighter-class aircraft; and of many other aircraft types.

Structural Tailoring of Engine Blades (STAEBL, Glenn, 1985)

This computer program "was developed to perform engine fan blade numerical optimizations. These blade optimizations seek a minimum weight or cost design that satisfies realistic blade design constraints, by tuning one to twenty design variables. The STAEBL system has been generalized to include both fan and compressor blade numerical optimizations. The system analyses have been significantly improved through the inclusion of an efficient plate finite element analysis for blade stress and frequency determinations. Additionally, a finite element based approximate severe foreign object damage (FOD) analysis has been included. The new FOD analysis gives very accurate estimates of the full nonlinear bird ingestion solution. Optimizations of fan and compressor blades have been performed using the system, showing significant cost and weight reductions, while comparing very favorably with refined design validation procedures."[981]
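The kind of constrained optimization STAEBL performed can be suggested with a brief sketch using a general-purpose optimizer (a stand-in, not the STAEBL system): blade weight is minimized over a few thickness design variables subject to surrogate stress and frequency constraints. The surrogate models, gauges, and limits below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def weight(t):
    # objective: weight proportional to total material volume
    return np.sum(t)

def stress_margin(t):
    # crude surrogate: stress falls as gauge grows; require margin >= 0
    return np.min(t) - 0.004          # minimum gauge of 4 mm (assumed)

def frequency_margin(t):
    # crude surrogate: first natural frequency rises with mean gauge
    return np.mean(t) * 2.0e4 - 90.0  # require >= 90 Hz (assumed)

result = minimize(
    weight,
    x0=np.array([0.010, 0.010, 0.010]),   # initial spanwise gauges, m (assumed)
    bounds=[(0.003, 0.020)] * 3,          # STAEBL tuned 1 to 20 such variables
    constraints=[{"type": "ineq", "fun": stress_margin},
                 {"type": "ineq", "fun": frequency_margin}],
)
print("optimized gauges (m):", np.round(result.x, 4))
```

The optimizer drives the gauges down to the thinnest design that still satisfies both constraints—the same minimum-weight logic the quoted description attributes to STAEBL.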

Cooling

Hypersonics has much to say about heating, so it is no surprise that it also has something to say about cooling. Active cooling merits only slight attention, as in the earlier discussion of Dyna-Soar. Indeed, two books on Shuttle technology run for hundreds of pages and give complete treatments of tiles for thermal protection—but give not a word about active cooling.[1077]

The topic of cooling mostly comprises passive cooling, which allowed the Shuttle to be built of aluminum.

During the early 1970s, when there was plenty of talk of using a liquid-fueled booster from Marshall Space Flight Center, many designers considered building that booster largely of aluminum. This raised the question of how bare aluminum, without protection, could serve in a Shuttle booster. It was common understanding that aluminum airframes lost strength because of aerodynamic heating at speeds beyond Mach 2, with titanium being necessary at higher speeds. But this held true for aircraft in cruise, which faced their temperatures continually. Boeing's reusable booster was to reenter at Mach 7, matching the top speed of the X-15. Still, its thermal environment resembled a fire that does not burn the hand when one whisks it through quickly. Designers addressed the problem of heating on the vehicle's vulnerable underside by the simple expedient of using thicker metal construction to cope with anticipated thermal loads. Even these areas were limited in extent, with the contractors noting that "the material gauges (thicknesses) required for strength exceed the minimum heat sink gauges over the majority of the vehicle."[1078]
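The heat sink argument reduces to simple arithmetic: the minimum gauge is the thickness whose thermal mass absorbs the integrated heat load within an allowable temperature rise. A back-of-the-envelope sketch, with the heat load and allowable rise assumed for illustration:

```python
# Minimum aluminum heat-sink gauge: t_min = q_total / (rho * c * dT_allow).
# If the gauge required for strength exceeds this, no extra material is
# needed for thermal protection -- the contractors' point above.

rho = 2700.0       # aluminum density, kg/m^3
c = 900.0          # aluminum specific heat, J/(kg K)
q_total = 2.0e6    # integrated heat load over the trajectory, J/m^2 (assumed)
dT_allow = 150.0   # allowable temperature rise, K (assumed)

t_min = q_total / (rho * c * dT_allow)   # minimum heat-sink gauge, m
print(f"minimum heat-sink gauge: {t_min * 1000:.1f} mm")
```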

McDonnell-Douglas went further. In mid-1971, it introduced its own orbiter, which lowered the staging velocity to 6,200 ft/sec. Its winged booster was 82 percent aluminum heat sink. Its selected configuration was optimized from a thermal standpoint, bringing the largest savings in the weight of thermal protection.[1079] Then, in March 1972, NASA selected solid-propellant rockets for the boosters. The issue of their thermal protection now went away entirely, for these big solids used steel casings that were half an inch thick and that provided heat sink very effectively.[1080]

Aluminum structure, protected by ablatives, also was in the forefront during the Precision Recovery Including Maneuvering Entry (PRIME) program. Martin Marietta, builder of the X-24A lifting body, also developed the PRIME flight vehicle, the SV-5D, which later was referred to as the X-23. Although it was only 7 feet in length, it faithfully duplicated the shape of the X-24, even including a small bubble-like protrusion near the front that represented the cockpit canopy.

PRIME complemented ASSET, with both programs conducting flight tests of boost-glide vehicles. However, while ASSET pushed the state of the art in materials and hot structures, PRIME used ablative thermal protection for a more straightforward design and emphasized flight performance. Accelerated to near-orbital velocities by Atlas launch vehicles, the PRIME missions called for boost-glide flight from Vandenberg Air Force Base (AFB) to locations in the western Pacific near Kwajalein Atoll. The SV-5D had higher L/D than Gemini or Apollo did, and, as with those NASA programs, it was to demonstrate precision reentry. The plans called for cross range, with the vehicle flying up to 710 nautical miles to the side of a ballistic trajectory and then arriving within 10 miles of its recovery point.

The piloted X-24A supersonic lifting body, used to assess the SV-5 shape's approach and landing characteristics, was built of aluminum. The SV-5D also used this material for both its skin and primary structure. It mounted both aerodynamic and reaction controls, the former consisting of right and left body-mounted flaps set well aft. Deflected symmetrically, they controlled pitch; deflected individually (asymmetrically), they produced yaw and roll. These flaps were beryllium plates that provided a useful thermal heat sink. The fins were of steel honeycomb, likewise with surfaces of beryllium sheet.

Most of the vehicle surface obtained thermal protection from ESA 3560 HF, a flexible ablative blanket of phenolic fiberglass honeycomb that used a silicone elastomer as the filler, with fibers of nylon and silica holding the ablative char in place during reentry. ESA 5500 HF, a high-density form of this ablator, gave added protection in hotter areas. The nose cap and the beryllium flaps used a different material: carbon-phenolic composite. At the nose, its thickness reached 3.5 inches.[1081]

The PRIME program made three flights that took place between December 1966 and April 1967. All returned data successfully, with the third flight vehicle also being recovered. The first mission reached 25,300 ft/sec and flew 4,300 miles downrange, missing its target by only 900 feet. The vehicle executed pitch maneuvers but made no attempt at cross range. The next two flights indeed achieved cross range, respectively of 500 and 800 miles, and the precision again was impressive. Flight 2 missed its aim point by less than 2 miles. Flight 3 missed by over 4 miles, but this still was within the allowed limit. Moreover, the terminal guidance radar had been inoperative, which probably contributed to the lack of absolute accuracy.[1082]

[Figure: Schematic of low- and high-temperature reusable surface insulation tiles, showing how they were bonded to the aluminum alloy skin of the Space Shuttle; details include the filler bar and Nomex felt. LRSI = Low Temperature Reusable Surface Insulation; HRSI = High Temperature Reusable Surface Insulation; RCG = Reaction Coated Glass; RTV = Room Temperature Vulcanizing Adhesive. NASA.]

A few years later, the Space Shuttle raised the question of whether its primary structure and skin should be built of titanium. Titanium offered a potential advantage because of its temperature resistance; hence, its thermal protection might be lighter. But the apparent weight saving was largely lost because of a need for extra insulation to protect the crew cabin, payload bay, and onboard systems. Aluminum could compensate for its lack of heat resistance because it had higher thermal conductivity than titanium. It therefore could more readily spread its heat throughout the entire volume of the primary structure.

Designers expected to install RSI tiles by bonding them to the skin, and for this aluminum had a strong advantage. Both metals form thin layers of oxide when exposed to air, but that of aluminum is more strongly bound. Adhesive, applied to aluminum, therefore held tightly. The bond with titanium was considerably weaker and appeared likely to fail in operational use at around 500 °F. This was not much higher than the limit for aluminum, 350 °F, which showed that the temperature resistance of titanium did not lend itself to operational use.[1083]

F-8 DFBW: Phase I

In implementing the DFBW F-8 program, the Flight Research Center chose to remove all the mechanical linkages and cables to the flight control surfaces, thus ensuring that the aircraft would be a pure digital fly-by-wire system from the start. The flight control surfaces would be hydraulically actuated, based on electronic signals transmitted via circuits that were controlled by the digital flight control system (DFCS). The F-8C's gun bays were used to house auxiliary avionics, the Apollo Display and Keyboard (DSKY) unit,[1155] and the backup analog flight control system. The Apollo digital guidance computer, its related cooling system, and the inertial platform that also came from the Apollo program were installed in what had been the F-8C avionics equipment bay. The reference information for the digital flight control system was provided by the Apollo Inertial Management System (IMS). In the conversion of the F-8 to the fly-by-wire configuration, the original F-8 hydraulic actuator slide valves were replaced with specially developed secondary actuators. Each secondary actuator had primary and backup modes. In the primary mode, the digital computer sent analog position signals to a single actuation cylinder. The cylinder was controlled by a dual self-monitoring servo valve. One valve controlled the servo; the other was used as a model for comparison. If the position values differed by a predetermined amount, the backup was engaged. In the backup mode, three servo cylinders were operated in a three-channel, force-summed arrangement.[1156]
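The comparison-monitoring logic of the secondary actuators can be expressed as a short sketch (the actual F-8 mechanization was electrohydraulic, not software; the threshold and signal values below are assumed):

```python
# One valve drives the servo while a second "model" valve produces a
# comparison position; a disagreement beyond a preset threshold engages
# the backup mode, as described above.

THRESHOLD = 0.05  # predetermined disagreement limit (assumed units)

def monitor(primary_position: float, model_position: float) -> str:
    """Compare the controlling valve against its model valve."""
    if abs(primary_position - model_position) > THRESHOLD:
        return "BACKUP"   # engage the three-channel, force-summed cylinders
    return "PRIMARY"

print(monitor(0.52, 0.50))  # small disagreement -> stay in primary mode
print(monitor(0.52, 0.40))  # excessive disagreement -> engage backup
```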

The triply redundant backup analog-computer-based flight control system—known as the Backup Control System (BCS)—used an independent power supply and was based on the use of three Sperry analog computers.[1157] In the event of loss of electrical power, 24-volt batteries could keep the BCS running for about 1 hour. Flight control was designed to revert to the BCS if any inputs from the primary digital control system to the flight control surface actuators did not match up; if the primary (digital) computer self-detected internal failures; in the event of electrical power loss to the primary system; or if inputs to secondary actuators were lost. The pilot had the ability to disengage the primary flight control system and revert to the BCS using a paddle switch mounted on the control column. The pilot could also vary the gains[1158] to the digital flight control system using rotary switches in the cockpit, a valuable feature in a research aircraft intended to explore the development of a revolutionary new flight control system.
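The reversion conditions just listed amount to a simple decision rule, sketched below for clarity (illustrative only; the actual mechanization was in the aircraft's analog and digital hardware, not Python):

```python
# Decision rule for which flight control system drives the actuators,
# following the reversion conditions described in the text.

def select_control_system(dfcs_outputs_agree: bool,
                          dfcs_self_test_ok: bool,
                          primary_power_ok: bool,
                          actuator_inputs_present: bool,
                          pilot_paddle_switch: bool) -> str:
    """Return the system that should command the control surfaces."""
    if pilot_paddle_switch:
        return "BCS"   # the pilot may disengage the primary system at will
    if not (dfcs_outputs_agree and dfcs_self_test_ok
            and primary_power_ok and actuator_inputs_present):
        return "BCS"   # any listed failure reverts to the analog backup
    return "DFCS"

print(select_control_system(True, True, True, True, False))   # -> DFCS
print(select_control_system(False, True, True, True, False))  # -> BCS
```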

The control column, rudder pedals, and electrical trim switches from the F-8C were retained. Linear Variable Differential Transformers (LVDTs) installed in the base of the control stick were used to detect pilot control inputs. They generated electrical signals to the flight control system to direct aircraft pitch and roll changes. Pilot inputs to the rudder pedals were detected by LVDTs in the tail of the aircraft. There were two LVDTs in each aircraft control axis, one for the primary (digital) flight control system and one for the BCS. The IMS supplied the flight control system with attitude, velocity, acceleration, and position change references that were compared to the pilot's control inputs; the flight control computer would then calculate the control surface position changes required to maneuver the aircraft.
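The command path just described—pilot input measured by an LVDT, compared against inertial references, and scaled by a pilot-selected gain—can be sketched as follows. The proportional structure and all numbers are assumptions for illustration, not the actual Apollo computer control laws:

```python
# Pitch-axis command path: stick LVDT signal sets a commanded rate, the
# IMS-sensed rate is subtracted, and the error is scaled by the gain the
# pilot selects in the cockpit to command a surface position change.

def pitch_surface_command(stick_lvdt: float,
                          ims_pitch_rate: float,
                          gain: float) -> float:
    """Surface deflection command, deg (illustrative units)."""
    commanded_rate = 10.0 * stick_lvdt        # stick-to-rate scaling (assumed)
    error = commanded_rate - ims_pitch_rate   # compare pilot intent with response
    return gain * error

print(pitch_surface_command(stick_lvdt=0.2, ims_pitch_rate=1.5, gain=0.8))
```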

By the end of 1971, software for the Phase I effort was well along, and the aircraft conversion was nearly complete. Extensive testing of the aircraft's flight control systems was accomplished using the Iron Bird, and planned test mission profiles were evaluated.

[Figure: For the DFBW F-8 program, the Flight Research Center removed all mechanical linkages and cables to the flight control surfaces. NASA.]

On May 25, 1972, NASA test pilot Gary Krier made the first flight ever of an aircraft under digital computer control when he took off from Edwards Air Force Base. Envelope expansion flights and tests of the analog BCS followed, with supersonic flight being achieved by mid-June. Problems were encountered with the stability augmentation system, especially in formation flight, because of the degree of attention required by the pilot to control the aircraft in the roll axis. As airspeeds approached 400 knots, control about all axes became too sensitive. Despite modifications, roll axis control remained a problem, with lag encountered between control stick movement and aircraft response. In September 1972, Tom McMurtry flew the aircraft, finding that the roll response was highly sensitive and could lead to lateral pilot-induced oscillations (PIOs). By May 1973, 23 flights had been completed in the Phase I DFBW program. Another seven flights were accomplished in June and July, during which different gain combinations were evaluated at various airspeeds.

In August 1973, the DFBW F-8 was modified to install a YF-16 side stick controller.[1159] It was connected to the analog BCS only. The center stick installation was retained. Initially, test flights by Gary Krier and Tom McMurtry were restricted to takeoff and landing using the center control stick, with transition to the BCS and side stick control being made at altitude. Aircraft response and handling qualities were rated as highly positive. A wide range of maneuvers, including takeoffs and landings, had been accomplished by the time the side stick evaluation was completed in October 1973. The two test pilots concluded that the YF-16 side stick control scheme was feasible and easy for pilots to adapt to. This inspired high confidence in the concept and resulted in the incorporation of the side stick controller into the YF-16 flight control design. Subsequently, four other test pilots flew the aircraft using the side stick controller in the final six flights of the DFBW F-8 Phase I effort, which concluded in November 1973. Among these pilots was General Dynamics chief test pilot Phil Oestricher, who would later fly the YF-16 on its first flight in January 1974. The others were NASA test pilots William H. Dana (a former X-15 pilot), Einar K. Enevoldson, and astronaut Kenneth Mattingly. During Phase I flight-testing, the Apollo digital computer maintained its reputation for high reliability, and the three-channel analog backup fly-by-wire system never had to be used.