
Validation in Flight

As Whitcomb was discovering the area rule, Convair in San Diego, CA, was finalizing the design, begun in 1951, of a new supersonic all-weather fighter-interceptor under a substantial Air Force contract. The YF-102 Delta Dagger combined the ideal high-speed bullet-shaped fuselage and the delta wings pioneered on the Air Force's Convair XF-92A research airplane with the new Pratt & Whitney J57 turbojet, the world's most powerful at 10,000 pounds of thrust. Armed entirely with air-to-air and forward-firing missiles, the YF-102 was to be the prototype for America's first piloted air defense weapons system.[165] Convair heard of the NACA's transonic research at Langley and feared that its investment in the YF-102, and the payoff with the Air Force, would come to naught if the new airplane could not fly supersonic.[166] Convair's reputation and a considerable Department of Defense contract were at stake.

A delegation of Convair engineers visited Langley in mid-August 1952, where they witnessed a disappointing test of a YF-102 model in the 8-foot HST. The data indicated, according to the NACA at least, that the YF-102 was unable to reach Mach 1 in level flight. The transonic drag rise near Mach 1 simply overwhelmed the ability of the J57 to push the YF-102 through the sound barrier. The engineers asked Whitcomb what could be done, and he unveiled his new rule of thumb for the design of supersonic aircraft. The data, Whitcomb's solution, and the perceived skepticism of his boss, John Stack, left the Convair engineers unconvinced as they went back to San Diego with their model.[167] They did not yet see the area rule as the solution to their problem.

Nevertheless, Whitcomb worked with Convair's aerodynamicists to incorporate the area rule into the YF-102. New wind tunnel evaluations in May 1953 revealed only a nominal decrease in transonic drag. Whitcomb traveled to San Diego in August to assist Convair in reshaping the YF-102 fuselage, and in October the NACA notified Convair that the modified design, soon to be designated the YF-102A, was capable of supersonic flight.[168]

Despite the fruitful collaboration with Whitcomb, Convair hedged its bets and continued production of the prototype YF-102 in the hope that it was a supersonic airplane. The new delta wing fighter with a straight fuselage failed to reach its designed supersonic speeds during full-scale flight evaluation and tests by the Air Force in January 1954. The YF-102's disappointing performance, reaching only Mach 0.98 in level flight, confirmed the NACA's wind tunnel findings and validated the research that led to Whitcomb's area rule. The Air Force realistically shifted its focus toward production of the YF-102A after NACA Director Hugh Dryden assured Air Force Chief of Staff Gen. Nathan F. Twining that the NACA had developed a solution to the problem and that the information had been made available to Convair and the rest of the aviation industry. The Air Force ordered Convair to stop production of the YF-102 and retool to manufacture the improved area rule design.[169]

Thanks to the collaboration with Whitcomb, it took Convair only 7 months to prepare the prototype YF-102A. Overall, the new fighter-interceptor was much more refined than its predecessor, with sharper features at the redesigned nose and canopy. An even more powerful version of the J57 turbojet produced 17,000 pounds of thrust with afterburner. The primary difference was the contoured fuselage that resembled a wasp's waist and the obvious fairings that expanded the circumference of the tail. With an area rule fuselage, the redesigned YF-102A easily went supersonic. Convair test pilot Pete Everest undertook the second flight test on December 21, 1954, during which the YF-102A climbed away from Lindbergh Field, San Diego, and "slipped easily past the sound barrier and kept right on going." More importantly, the YF-102A's top speed was 25 percent faster, at Mach 1.2.[170]

The Air Force resumed the contract with Convair, and the manufacturer delivered 975 production F-102A air defense interceptors, the first entering active service in mid-1956. The fighter-interceptors equipped Air Defense Command and United States Air Forces in Europe squadrons during the critical period of the late 1950s and 1960s. The increase in performance was dramatic. The F-102A could cruise at 1,000 mph with a ceiling of over 50,000 feet. It replaced three subsonic interceptors in the Air Force inventory, the North American F-86D Sabre, the F-89 Scorpion, and the F-94 Starfire, which were 600-650 mph aircraft with 45,000-foot ceilings. Besides speed and altitude, the F-102A was better equipped to face the Soviet Myasishchev Bison, Tupolev Bear, and Ilyushin Badger nuclear-armed bombers with a full complement of Hughes Falcon guided missiles and Mighty Mouse rockets. Convair incorporated the F-102A's armament in a drag-reducing internal weapons bay.

When the F-102A entered operational service, the media made much of the fact that the F-102 "almost ended up in the discard heap" because of its "difficulties wriggling its way through the sound barrier." With an area rule fuselage, the F-102A "swept past the sonic problem." The downside to the F-102A's supersonic capability was the noise from its J57 turbojet. The Air Force regularly courted civic leaders from areas near Air Force bases through familiarization flights so that they would understand the mission and role of the F-102A.[171]


The Air Force’s F-102 got a whole new look after implementing Richard Whitcomb’s area rule. At left is the YF-102 without the area rule, and at right is the new YF-102A version. NASA.

Convair produced the follow-on version, the F-106 Delta Dart, from 1956 to 1960. The Dart was capable of twice the speed of the Dagger with its Pratt & Whitney J75 engine.[172] The F-106 was the primary air defense interceptor defending the continental United States up to the early 1980s. Convair built upon its success with the F-102A and the F-106, two cornerstone aircraft in the Air Force's Century series, and introduced more area rule aircraft: the XF2Y-1 Sea Dart and the B-58 Hustler.[173]

The YF-102/YF-102A exercise was valuable in demonstrating the importance of the area rule, and of the NACA, to the aviation industry and the military, especially when a major contract was at stake.[174] Whitcomb's revolutionary and intuitive idea enabled a new generation of supersonic military aircraft, and it spread throughout the industry. Like Convair, Chance Vought redesigned its F8U Crusader carrier-based interceptor with an area rule fuselage. The first production aircraft appeared in September 1956, and deliveries began in March 1957. Four months later, in July 1957, Marine Maj. John H. Glenn, Jr., as part of Project Bullet, made a record-breaking supersonic transcontinental flight from Los Angeles to New York in 3 hours 23 minutes. Crusaders served in Navy and Marine fighter and reconnaissance squadrons throughout the 1960s and 1970s, with the last airframes leaving operational service in 1987.[175]

Grumman was the first to design and manufacture an area rule airplane from the ground up. Under contract to produce a carrier-based supersonic fighter, the F9F-9 Tiger, for the Navy, Grumman sent a team of engineers to Langley, just 2 weeks after receiving Whitcomb's pivotal September 1952 report, to learn more about transonic drag. Whitcomb traveled to Bethpage, NY, in February 1953 to evaluate the design before wind tunnel and rocket-model tests were conducted by the NACA. The tests revealed that the new fighter was capable of supersonic speeds in level flight with no appreciable transonic drag. Grumman constructed the prototype, and in August 1954, with company test pilot C. H. "Corky" Meyer at the controls, the F9F-9 achieved Mach 1 in level flight without the assistance of an afterburner, a good 4 months before the supersonic flight of the F-102A.[176] The Tiger, later designated the F11F-1, served with the fleet as a frontline carrier fighter from 1957 to 1961 and with the Navy's demonstration team, the Blue Angels.[177]

Another aircraft designed from the ground up with an area rule fuselage represented the next step in military aircraft performance in the late 1950s. The legendary Lockheed "Skunk Works" introduced the F-104 Starfighter, "the missile with a man in it," in 1954. Characterized by its short, stubby wings and needle nose, the production prototype F-104, powered by a General Electric J79 turbojet, was the first jet to exceed Mach 2 (1,320 mph) in flight, on April 24, 1956. Starfighters joined operational Air Force units in 1958. An international manufacturing scheme and sales to 14 countries in Europe, Asia, and the Middle East ensured that the Starfighter was in frontline use through the rest of the 20th century.[178]


The area rule profile of the Grumman Tiger. National Air and Space Museum.

The area rule opened the way for the further refinement of supersonic aircraft, allowing designers to concentrate on other areas within the synergistic system of the airplane. Whitcomb and his colleagues continued to issue reports refining the concept and giving designers more options for aircraft with higher performance. Working alone and with researcher Thomas L. Fischetti, Whitcomb continued to refine high-speed aircraft designs, especially the Chance Vought F8U-1 Crusader, which evolved into one of the finest fighters of the postwar era.[179]

Spurred on by the success of the F-104, NACA researchers at the Lewis Flight Propulsion Laboratory in Cleveland, OH, estimated that innovations in jet engine design, based on advanced metallurgy and the sophisticated aerodynamic design of engine inlets, including variable-geometry inlets and exhaust nozzles, would increase aircraft speeds upward of 2,600 mph, or Mach 4.[180] One thing was for certain: supersonic aircraft of the 1950s and 1960s would have an area rule fuselage.

The area rule gave the American defense establishment breathing room in the tense 1950s, when the Cold War and the constant need to possess the technological edge, real or perceived, were crucial to the survival of the free world. The design concept was a state secret at a time when no jets were known to be capable of reaching supersonic speeds, due to transonic drag. The aviation press had known about it since January 1954 and kept the secret for national security purposes. The NACA intended to make a public announcement when the first aircraft incorporating the design element entered production. Aero Digest unofficially broke the story a week early in its September 1955 issue, when it proclaimed, "The SOUND BARRIER has been broken for good," and declared the area rule the "first major aerodynamic breakthrough in the past decade." In describing the area rule and the Grumman XF9F-9 Tiger, Aero Digest stressed the bottom line for the innovation: the area rule provided the same performance with less power.[181]

The official announcement followed. Secretary of the Air Force Donald A. Quarles remarked on the CBS Sunday morning television news program Face the Nation on September 11, 1955, that the area rule was "the kind of breakthrough that makes fundamental research so very important."[182] Aviation Week declared it "one of the most significant military scientific breakthroughs since the atomic bomb."[183] These statements highlighted the crucial importance of the NACA to American aeronautics.

The news of the area rule spread to the American public. The media likened the shape of an area rule fuselage to a "Coke bottle," a "wasp waist," an "hourglass," or the figure of actress Marilyn Monroe.[184] While the Coke bottle description of the area rule is commonplace today, the NACA contended that Dietrich Kuchemann's Coke bottle and Whitcomb's area rule were not the same and lamented the use of the term. Kuchemann's 1944 design concept pertained only to swept wings and tailored the flow along specific streamlines. Whitcomb's rule applied to any shape and contoured a fuselage to maintain an area equivalent to the entire stream tube.[185] Whitcomb actually preferred "indented."[186] One learned writer explained to readers of the Christian Science Monitor that an aircraft with an area rule slipped through the transonic barrier by the "Huckleberry Finn technique," recalling how the character sucked in his stomach to squeeze through a hole in Aunt Polly's fence.[187]
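The physical basis of the rule can be stated compactly. In linearized slender-body theory (a standard result consistent with, though not quoted in, this account), the wave drag of a slender configuration of length l depends only on the longitudinal distribution of total cross-sectional area S(x), not on how that area is divided among fuselage, wing, and tail:

```latex
% von Karman slender-body wave-drag integral (standard result, supplied
% here for illustration): smoothing S(x), e.g., by indenting the fuselage
% where the wing adds area, reduces S''(x) and hence the wave drag.
D_{\text{wave}} = -\frac{\rho_\infty U_\infty^{2}}{4\pi}
\int_{0}^{\ell}\!\int_{0}^{\ell} S''(x)\,S''(\xi)\,\ln\lvert x-\xi\rvert\; dx\, d\xi
```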

Whitcomb quickly received just recognition from the aeronautical community for his 3-year development of the area rule. The National Aeronautics Association awarded him the Collier Trophy for 1954 for his creation of "a powerful, simple, and useful method” of reducing transonic drag and the power needed to overcome it.[188] Moreover, the award cita­tion designated the area rule as "a contribution to basic knowledge” that increased aircraft speed and range while reducing drag and using the same power.[189] As Vice President Richard M. Nixon presented him the award at the ceremony, Whitcomb joined the other key figures in aviation history, including Orville Wright, Glenn Curtiss, and his boss, John Stack, in the pantheon of individuals crucial to the growth of American aeronautics.[190]

Besides the Collier, Whitcomb received the Exceptional Service Medal of the U.S. Air Force in 1955 and the inaugural NACA Distinguished Service Medal in 1956.[191] At the age of 35, he accepted an honorary doctor of engineering degree from his alma mater, Worcester Polytechnic Institute, in 1956.[192] Whitcomb also rose within the ranks at Langley, where he became head of the Transonic Aerodynamics Branch in 1958.

Whitcomb's achievement was part of a highly innovative period for Langley and the rest of the NACA, all of which contributed to the success of the second aeronautical revolution. Besides John Stack's involvement in the X-1 program, the NACA worked with the Air Force, Navy, and the aerospace industry on the resultant high-speed X-aircraft programs. Robert T. Jones developed his swept wing theory. Other NACA researchers generated design data on different aircraft configurations, such as variable-sweep wings, for high-speed aircraft. Whitcomb was directly involved in two of these major innovations: the slotted tunnel and the area rule.[193]

Flight Control Systems and Their Design

During the Second World War, there were multiple documented incidents, and several fatalities, when fighter pilots dove their propeller-driven airplanes at speeds approaching the speed of sound. Pilots reported increasing levels of buffet and loss of control at these speeds. Wind tunnels at that time were incapable of producing reliable, meaningful data in the transonic speed range because the local shock waves were reflected off the wind tunnel walls, invalidating the measurements. The NACA and the Department of Defense (DOD) created a new research airplane program to obtain a better understanding of transonic phenomena through flight-testing. The first of the resulting aircraft was the Bell XS-1 (later X-1) rocket-powered research airplane.

On NACA advice, Bell had designed the X-1 with a horizontal tail configuration consisting of an adjustable horizontal stabilizer with a hinged elevator at the rear for pitch control, at a time when a fixed horizontal tail and hinged elevator constituted the standard pitch control configuration.[674] The X-1 incorporated this as an emergency means to increase its longitudinal (pitch) control authority at transonic speeds. It proved a wise precaution because, during the early buildup flights, the X-1 encountered buffet and loss of control similar to that reported by earlier fighters. Analysis showed that local shock waves were forming on the tail surface, eventually migrating to the elevator hinge line. When they reached the hinge line, the effectiveness of the elevator was significantly reduced, causing the loss of control. The NACA-U.S. Air Force (USAF) X-1 test team determined to go ahead, thanks to the X-1 having an adjustable horizontal tail. They subsequently validated that the airplane could be controlled in the transonic region by moving the horizontal stabilizer and the elevator together as a single unit. This discovery allowed Capt. Charles E. Yeager to exceed the speed of sound in controlled flight with the X-1 on October 14, 1947.[675]

An extensive program of transonic testing was undertaken at the NACA High-Speed Flight Station (HSFS; subsequently the Dryden Flight Research Center) to evaluate aircraft handling qualities using the conventional elevator and then the elevator with adjustable stabilizer.[676] As a result, subsequent transonic airplanes were all designed to use a one-piece, all-flying horizontal stabilizer, which solved the control problem and was incorporated on the prototypes of the first supersonic American jet fighters, the North American YF-100A and the Vought XF8U-1 Crusader, flown in 1953 and 1954. Today, the all-moving tail is a standard design element of virtually all high-speed aircraft developed around the globe.[677]

The Concept of Finite Differences Enters the Mathematical Scene

The earliest concrete idea of how to simulate a partial derivative with an algebraic difference quotient was the brainchild of L. F. Richardson in 1910.[767] He was the first to introduce the numerical solution of partial differential equations by replacing each derivative in the equations with an algebraic expression involving the values of the unknown dependent variables in the immediate neighborhood of a point and then solving simultaneously the resulting massive system of algebraic equations at all grid points. Richardson named this approach a "finite-difference solution," a name that has come down without change since 1910. Richardson did not attempt to solve the Navier-Stokes equations, however. He chose a problem reasonably described by a simpler partial differential equation, Laplace's equation, which in mathematical speak is a linear partial differential equation and which mathematicians classify as an elliptic partial differential equation.[768] He set up a numerical approach, still used today for the solution of elliptic partial differential equations, called a relaxation method, wherein a sweep is taken throughout the whole grid and new values of the dependent variables are calculated from the old values at neighboring grid points; the sweep is then repeated over and over until the new values at each grid point converge to the old values from the previous sweep, i.e., the numbers eventually "relax" to the correct solution.
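Richardson's relaxation method is easy to see in modern form. The following minimal Python sketch (grid size, boundary values, and tolerance are arbitrary illustrative choices, not from the text) sweeps a grid repeatedly, replacing each interior value with the average of its four neighbors, until the values stop changing:

```python
import numpy as np

# Relaxation solution of Laplace's equation (u_xx + u_yy = 0) on a square
# grid: each sweep computes new interior values from the old values at
# neighboring grid points, repeating until the numbers "relax."
n = 21
u = np.zeros((n, n))
u[0, :] = 100.0                      # fixed boundary values along one edge

for sweep in range(10_000):
    u_old = u.copy()
    # New interior value = average of the four old neighboring values.
    u[1:-1, 1:-1] = 0.25 * (u_old[:-2, 1:-1] + u_old[2:, 1:-1]
                            + u_old[1:-1, :-2] + u_old[1:-1, 2:])
    if np.max(np.abs(u - u_old)) < 1e-6:   # sweep no longer changes values
        break

print(f"relaxed after {sweep + 1} sweeps")
```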

In 1928, Richard Courant, K. O. Friedrichs, and Hans Lewy published "On the Partial Difference Equations of Mathematical Physics," a paper many consider as marking the real beginning of modern finite difference solutions; "Problems involving the classical linear partial differential equations of mathematical physics can be reduced to algebraic ones of a very much simpler structure," they wrote, "by replacing the differentials by difference quotients on some (say rectilinear) mesh."[769] Courant, Friedrichs, and Lewy introduced the idea of "marching solutions," whereby a spatial marching solution starts at one end of the flow and literally marches the finite-difference solution step by step from one end to the other end of the flow. A time marching solution starts with all the flow variables at each grid point at some instant in time and marches the finite-difference solution at all the grid points in steps of time to some later value of time. These marching solutions can be carried out only for parabolic or hyperbolic partial differential equations, not for elliptic equations.
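To make the marching idea concrete, consider, as an illustrative example not drawn from the paper itself, the linear advection equation u_t + c u_x = 0, a simple hyperbolic equation. Knowing every value at time level n, an explicit upwind scheme computes every value at level n+1 and so marches forward in time:

```latex
% Explicit upwind time marching for u_t + c u_x = 0 (with c > 0):
u_i^{\,n+1} = u_i^{\,n} - \frac{c\,\Delta t}{\Delta x}\left(u_i^{\,n} - u_{i-1}^{\,n}\right)
```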

Courant, Friedrichs, and Lewy highlighted another important aspect of numerical solutions of partial differential equations. Anyone attempting numerical solutions of this nature quickly finds out that the numbers being calculated begin to look funny, make no sense, oscillate wildly, and finally result in some impossible operation such as dividing by zero or taking the square root of a negative number. When this happens, the solution has blown up, i.e., it becomes no solution at all. This is not a ramification of the physics but rather a peculiarity of the numerical process. Courant, Friedrichs, and Lewy studied the stability aspects of numerical solutions and discovered essential criteria for maintaining stability in the numerical calculations. Today, this stability criterion is referred to as the "CFL criterion" in honor of the three who identified it. Without it, many attempted CFD solutions would end in frustration.
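For the upwind scheme sketched above, the CFL criterion requires the Courant number C = c∆t/∆x to be no greater than 1. A short Python demonstration (all values arbitrary) shows the solution staying bounded when the criterion is respected and blowing up when it is violated:

```python
import numpy as np

# CFL stability demonstration for the explicit upwind scheme: the march
# is stable only when the Courant number C = c*dt/dx satisfies C <= 1.
def march(courant, nx=100, nsteps=400):
    x = np.linspace(0.0, 1.0, nx)
    u = np.exp(-200.0 * (x - 0.3) ** 2)             # smooth initial pulse
    for _ in range(nsteps):
        u[1:] = u[1:] - courant * (u[1:] - u[:-1])  # upwind time march
    return np.max(np.abs(u))

print(march(0.9))   # C = 0.9: bounded, well-behaved solution
print(march(1.1))   # C = 1.1: the numbers "blow up" to huge values
```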

So by 1928, the academic foundations of finite difference solutions of partial differential equations were in place. The Navier-Stokes equations finally stood on the edge of being solved, albeit numerically. But who had the time to carry out the literally millions of calculations required to step through the solution? For all practical purposes, it was an impossible task, one beyond human endurance. Then came the electronic revolution and, with it, the digital computer.

Miscellaneous NASA Structural Analysis Programs

Note: Described here are miscellaneous computer programs, and in some cases test facilities or other related projects, that have contributed to the advancement of the state of the art in various ways. In some cases, there simply was not room to include them in the main body of the paper; in others, there was not enough information found, or not enough time to do further research, to adequately describe the programs and document their significance. Readers are advised that these are merely examples; this is not an exhaustive list of all computer programs developed by NASA for structural analysis to the 2010 time period. Dates indicate introduction of capability. Many of the programs were subsequently enhanced. Some of the programs were eventually phased out.

Understanding of FBW Benefits

By the early 1970s, the full range of benefits made possible by fly-by-wire flight control had become ever more apparent to aircraft designers and pilots. Relevant technologies were rapidly maturing, and various forms of fly-by-wire flight control had successfully been implemented in missiles, aircraft, and spacecraft. Fly-by-wire had many advantages over more conventional flight control systems, in addition to those made possible by the elimination of mechanical linkages. A computer-controlled fly-by-wire flight control system could generate integrated pitch, yaw, and roll control instructions at very high rates to maintain the directed flight path. It would automatically provide artificial stability by constantly compensating for any flight path deviations. When the pilot moved his cockpit controls, commands would automatically be generated to modify the artificial stability enough to enable the desired maneuvers to be accomplished. It could also prevent the pilot from commanding maneuvers that would exceed established aircraft limits in either acceleration or angle of attack. Additionally, the flight control system could automatically extend high-lift devices, such as flaps, to improve maneuverability.
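The command-limiting logic described above can be sketched schematically. The following fragment is a toy illustration, not actual flight software; every limit value and function name here is invented for the example:

```python
# Toy sketch of fly-by-wire envelope protection: each frame, the flight
# computer maps stick input to a commanded maneuver, then clips the command
# against aircraft limits before it reaches the control surfaces.
G_LIMIT = 7.0         # maximum normal acceleration, g (illustrative value)
ALPHA_LIMIT = 25.0    # maximum angle of attack, degrees (illustrative value)

def pitch_command(stick: float, alpha: float, load_factor: float) -> float:
    """Map stick deflection in [-1, 1] to a limited pitch-rate command."""
    cmd = 40.0 * stick                        # desired pitch rate, deg/s
    if cmd > 0.0 and load_factor >= G_LIMIT:  # g limiter: no further pull-up
        cmd = 0.0
    if cmd > 0.0 and alpha >= ALPHA_LIMIT:    # angle-of-attack limiter
        cmd = 0.0
    return cmd

# In a real system this runs at a high fixed rate, with the limited command
# feeding the control law that positions the aerodynamic surfaces.
```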

Conceptual design studies indicated that active fly-by-wire flight control systems could enable new aircraft to be developed that featured smaller aerodynamic control surfaces. This was possible by reducing the inherent static stability traditionally designed into conventional aircraft. The ability to relax stability while maintaining good handling qualities could also lead to improved agility. Agility is a measure of an aircraft's ability to rapidly change its position. In the 1960s, a concept known as energy maneuverability was developed within the Air Force in an attempt to quantify agility. This concept states that the energy state of a maneuvering aircraft can be expressed as the sum of its kinetic energy and its potential energy. An aircraft that possesses higher overall energy inherently has higher agility than another aircraft with lower energy. The ability to retain a high-energy state while maneuvering requires high excess thrust and low drag at high-lift maneuvering conditions.[1148] Aircraft designers began synthesizing unique conceptual fighter designs using energy maneuver theory along with exploiting an aerodynamic phenomenon known as vortex lift.[1149] This approach, coupled with computer-controlled fly-by-wire flight control systems, was felt to be a key to unique new fighter aircraft with very high agility levels.
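The energy-maneuverability concept is usually written per unit weight. The standard form below (a textbook statement added here for clarity, not quoted from the source) shows why high excess thrust and low drag govern the ability to gain or hold energy:

```latex
% Specific energy E_s (energy per unit weight) and specific excess power P_s:
% h = altitude, V = true airspeed, g = gravitational acceleration,
% T = thrust, D = drag, W = weight.
E_s = h + \frac{V^{2}}{2g}, \qquad
P_s = \frac{dE_s}{dt} = \frac{(T - D)\,V}{W}
```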

Neutrally stable or even unstable aircraft appeared to be within the realm of practical reality and were the subject of ever increasing interest and widespread study in NASA and the Air Force, as well as in foreign governments and the aerospace industry. Often referred to at the time as Control Configured Vehicles (CCV), such aircraft could be optimized for specific missions, with fly-by-wire flight control system characteristics designed to improve aerodynamic performance, maneuverability, and agility while reducing airframe weight. Other CCV possibilities included the ability to control structural loads while maneuvering (maneuver load control) and the potential for implementation of unconventional control modes. Maneuver load control could allow new designs to be optimized, for example, by using automated control surface deflections to actively modify the spanwise lift distribution to alleviate wing bending loads on larger aircraft. Unconventional or decoupled control modes would be possible by using various combinations of direct-force flight controls to change the aircraft flight path without changing its attitude or, alternatively, to point the aircraft without changing the aircraft flight path. These unconventional flight control modes were felt at the time to provide an improved ability to point and fire weapons during air combat.[1150]

In summary, the full range of benefits possible through the application of active fly-by-wire flight control in properly tailored aircraft design applications was understood to include:

• Enhanced performance and improved mission effectiveness made possible by the incorporation of relaxed static stability and automatically activated high-lift devices into mission-optimized aircraft designs to reduce drag, optimize lift, and improve agility and handling qualities throughout the flight and maneuvering envelope.

• New approaches to aircraft control, such as the use of automatically controlled thrust modulation and thrust vectoring fully integrated with the movement of the aircraft's aerodynamic flight control surfaces and activation of its high-lift devices.

• Increased safety provided by automatic angle-of-attack and angle-of-sideslip suppression as well as automatic limiting of normal acceleration and roll rates. These measures protect from stall and/or loss of control, prevent inadvertent overstressing of the airframe, and give the pilot maximum freedom to focus on effectively maneuvering the aircraft.

• Improved survivability made possible by the elimination of highly vulnerable hydraulic lines and incorporation of fault-tolerant flight control system designs and components.

• Greatly improved flight control system reliability and lower maintenance costs resulting from less mechanical complexity and automated built-in system test and diagnostic capabilities.

• Automatic flight control system reconfiguration to allow safe flight, recovery, and landing following battle damage or system failures.

NASA Advanced Control Technology for Integrated Vehicles

In 1994, after the conclusion of Air Force S/MTD testing, the aircraft was transferred to NASA Dryden for the NASA Advanced Control Technology for Integrated Vehicles (ACTIVE) research project. ACTIVE was oriented to determining whether axisymmetric vectored thrust could contribute to drag reduction and increased fuel economy and range compared with conventional aerodynamic controls. The project was a collaborative effort between NASA, the Air Force Research Laboratory, Pratt & Whitney, and Boeing (formerly McDonnell-Douglas). An advanced digital fly-by-wire flight control system was integrated into the NF-15B, which was given NASA tail No. 837. Higher-thrust versions of the Pratt & Whitney F100 engine with newly developed axisymmetric thrust-vectoring engine exhaust nozzles were installed. The nozzles could deflect engine exhaust up to 20 degrees off centerline, allowing variable thrust control in pitch, in yaw, or in combinations of the two axes. An integrated propulsion and flight control system controlled both the aerodynamic flight control surfaces and the engines. New cockpit controls and electronics from an F-15E aircraft were also installed in the NF-15B. The first supersonic flight using yaw vectoring occurred in early 1996. Pitch and yaw thrust vectoring were demonstrated at speeds up to Mach 2.0, and yaw vectoring was used at angles of attack up to 30 degrees. An adaptive performance software program was developed and successfully tested in the NF-15B flight control computer. It automatically determined the optimal setting or trim for the thrust-vectoring nozzles and the aerodynamic control surfaces to minimize aircraft drag. An improvement of Mach 0.1 in level flight was achieved at Mach 1.3 at 30,000 feet with no increase in engine thrust. The ACTIVE NF-15B continued investigations of integrated flight and propulsion control with thrust vectoring during 1997 and 1998, including an experiment that combined thrust vectoring with aerodynamic controls during simulated ground attack missions. Following completion of the ACTIVE project, the NF-15B was used as a testbed for several other NASA Dryden research experiments, which included the efforts described below.[1275]

Second-Generation DOE-NASA Wind Turbine Systems (Mod-2)

While the primary objectives of the Mod-0, Mod-0A, and Mod-1 programs were research and development, the primary goal of the second-generation Mod-2 project was direct and efficient commercial application. The Mod-2 program was designed to determine the potential cost-effectiveness of megawatt-sized, remote-site wind turbines located in areas of moderate (14 mph) winds. Significant changes from the Mod-0 and Mod-1 included use of a soft-shell-type tower, an epicyclic gearbox, a quill shaft to attenuate torque and power oscillations, and a rotor designed primarily to commercial steel fabrication standards. Other significant changes were the switch from a fixed to a teetered (pivot connection) hub rotor, which reduced rotor fatigue, weight, and cost; use of tip control rather than full-span control; and orienting the rotor upwind rather than downwind, which reduced rotor fatigue and resulted in a 2.5-percent increase in power produced by the system. Each of these changes resulted in a favorable decrease in the cost of electricity. One of the more important changes, as noted in a Boeing conference presentation, was the switch from the stiff truss-type tower to a soft shell tower that weighed less, was much cheaper to fabricate, and enabled the use of heavy but economical and reliable rotor designs.[1505]

DOE-NASA Mod-2 megawatt wind turbine cluster, Goldendale, WA. NASA.

Four primary Mod-2 wind turbine units were designed, built, and operated under the second-generation phase of the DOE-NASA program. The first three machines were built as a cluster at Goldendale, WA, where the Department of Energy selected the Bonneville Power Administration as the participating utility. The operation of several wind turbines at one site afforded NASA the opportunity to study the effects of single and multiple wind turbines operating together while feeding into a power network. The Goldendale project demonstrated the successful operation of a cluster of large NASA Mod-2 horizontal-axis wind turbines operating in an unattended mode within a power grid. For construction of these machines, DOE-NASA awarded a competitively bid contract in 1977 to Boeing. The first of the three wind turbines started operation in November 1980, and the two additional machines went into service between March and May 1981. As of January 1985, the three-turbine cluster had generated over 5,100 megawatt-hours of electricity while synchronized to the power grid for over 4,100 hours. The Mod-2 machines had a rated power of 2.5 megawatts, a rotor-blade diameter of 300 feet, and a hub height (distance of the center of blade rotation to the ground) of 300 feet. Boeing evaluated a number of design options and tradeoffs, including upwind or downwind rotors, two- or three-bladed rotors, teetered or rigid hubs, soft or rigid towers, and a number of different drive train and power generation configurations. A fourth 2.5-megawatt Mod-2 wind turbine was purchased by the Department of the Interior, Bureau of Reclamation, for installation near Medicine Bow, WY, and a fifth turbine unit was purchased by Pacific Gas and Electric for operation in Solano County, CA.[1506]

Softening the Sonic Boom: 50 Years of NASA Research

Lawrence R. Benson

The advent of practical supersonic flight brought with it the shattering shock of the sonic boom. From the onset of the supersonic age in 1947, NACA-NASA researchers recognized that the sonic boom would work against acceptance of routine overland supersonic aircraft operation. In concert with researchers from other Federal and military organizations, they developed flight-test programs and innovative design approaches to reshape aircraft to minimize boom effects while retaining desirable high-speed behavior and efficient flight performance.

After its formation in 1958, the National Aeronautics and Space Administration (NASA) began devoting most of its resources to the Nation's new civilian space programs. Yet 1958 also marked the start of a program in the time-honored aviation mission that the Agency inherited from the National Advisory Committee for Aeronautics (NACA). This task was to help foster an advanced passenger plane that would fly at least twice the speed of sound.

Because of economic and political factors, developing such an aircraft became more than a purely technological challenge. One of the major barriers to producing a supersonic transport involved a phenomenon of atmospheric physics barely understood in the late 1950s: the shock waves generated by supersonic flight. Studying these "sonic booms" and learning how to control them became a specialized and enduring field of NASA research for the next five decades. During the first decade of the 21st century, all the study, testing, and experimentation of the past finally began to reap tangible benefits in the same California airspace where supersonic flight began.[322]

Flutter: The Insidious Threat

The most dramatic interaction of airplane structure with aerodynamics is "flutter": a dynamic, high-frequency oscillation of some part of the structure. Aeroelastic flutter is a rapid, self-excited motion, potentially destructive to aircraft structures and control surfaces. It has been a particularly persistent problem since the invention of the cantilever monoplane at the end of the First World War. The monoplane lacked the "bridge truss" rigidity found in the redundant structure of the externally braced biplane and, consisting of a single surface unsupported except at the wing root, was prone to aerodynamically induced flutter. The simplest example of flutter is a free-floating, hinged control surface at the trailing edge of a wing, such as an aileron. The control surface will begin to oscillate (flap, like the trailing edge of a flag) as the speed increases. Eventually the motion will feed back through the hinge, into the structure, and the entire wing will vibrate and eventually self-destruct. A similar situation can develop on a single fixed aerodynamic surface, like a wing or tail surface. When aerodynamic forces and moments are applied to the surface, the structure will respond by twisting or bending about its elastic axis. Depending on the relationship between the elastic axis of the structure and the axis of the applied forces and moments, the motion can become self-energizing, and a divergent vibration, one increasing in both frequency and amplitude, can follow. The high frequency and very rapid divergence of flutter make it one of the most feared, and potentially catastrophic, events that can occur on an aircraft. Accordingly, extensive detailed flutter analyses are performed during the design of most modern aircraft using mathematical models of the structure and the aerodynamics. Flight tests are usually performed by temporarily fitting the aircraft with a flutter generator. This consists of an oscillating mass, or small vane, which can be controlled and driven at different frequencies and amplitudes to force an aerodynamic surface to vibrate. Instrumentation monitors and measures the natural damping characteristics of the structure when the flutter generator is suddenly turned off. In this way, the flutter mathematical model (frequency and damping) can be validated at flight conditions below the point of critical divergence.
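The self-energizing behavior can be caricatured with a single degree of freedom. Real flutter analyses couple bending and torsion across many modes; the toy Python model below (all values invented) only shows how aerodynamic feedback acting like negative damping turns a decaying oscillation into a divergent one:

```python
import numpy as np

# Toy 1-DOF picture of flutter onset: a structural mode obeying
# x'' + 2*zeta*omega*x' + omega^2*x = 0. Positive net damping (zeta > 0)
# kills the oscillation; aerodynamic feedback that drives zeta negative
# makes the motion self-energizing and divergent.
def peak_amplitude(zeta, omega=2.0 * np.pi * 5.0, dt=0.001, t_end=5.0):
    x, v, peak = 0.01, 0.0, 0.0          # small initial disturbance
    for _ in range(int(t_end / dt)):
        a = -2.0 * zeta * omega * v - omega**2 * x
        v += a * dt                      # semi-implicit Euler integration
        x += v * dt
        peak = max(peak, abs(x))
    return peak

print(peak_amplitude(+0.02))  # damped: disturbance dies away
print(peak_amplitude(-0.02))  # "negative damping": divergent oscillation
```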

Traditionally, if flight tests show that flutter margins are insufficient, operational limits are imposed, or structural beef-ups might be accomplished for extreme cases. But as electronic flight control technology advances, the prospect exists for so-called "active" suppression of flutter by using rapid, computer-directed control surface deflections. In the 1970s, NASA Langley undertook the first tests of such a system, on a one-seventeenth scale model of a proposed Boeing Supersonic Transport (SST) design, in the Langley Transonic Dynamics Tunnel (TDT). Encouraged, Center researchers followed this with TDT tests of a stores flutter suppression system on the model of the Northrop YF-17, in concert with the Air Force Flight Dynamics Laboratory (AFFDL, now the Air Force Research Laboratory's Air Vehicles Directorate), later implementing a similar program on the General Dynamics YF-16. Then, NASA DFRC researchers modified a Ryan Firebee drone with such a system. This program, Drones for Aerodynamic and Structural Testing (DAST), used a Ryan BQM-34 Firebee II, an uncrewed aerial vehicle, rather than an inhabited system, because of the obvious risk to the pilot in such an experiment.

The modified Firebee made two successful flights but then, in June 1980, crashed on its third flight. Postflight analysis showed that one of the software gains had been inadvertently set three times higher than planned, causing the airplane wing to flutter explosively right after launch from the B-52 mother ship. In spite of the accident, progress was made in the definition of various control laws that could be used in the future for control and suppression of flutter.[714] Overall, NASA research on active flutter suppression has been so encouraging that its fruits were applied to new aircraft designs, most notably the "growth" version of the YF-17, the McDonnell-Douglas (now Boeing) F/A-18 Hornet strike fighter. It used an Active Oscillation Suppression (AOS) system to suppress flutter tendencies induced by its wing-mounted stores and wingtip Sidewinder missiles, inspired to a significant degree by earlier YF-17 and YF-16 Transonic Dynamics Tunnel testing.[715]

A Drones for Aerodynamic and Structural Testing (DAST) unpiloted structural test vehicle, derived from the Ryan Firebee, during a 1980 flight test. NASA.

The Advent of Direct Analog Computers

The first computers were analog computers. Direct analog computers are networks of physical components (most commonly, electrical components: resistors, capacitors, inductances, and transformers) whose behavior is governed by the same equations as some system of interest that is being modeled. Direct analog computers were used in the 1950s and 1960s to solve problems in structural analysis, heat transfer, fluid flow, and other fields.

Representation of structural elements by analog circuits. NASA.

The method of analysis and the needs that were driving the move from classical idealizations such as slender-beam theory toward computational methods are well stated in the following passage, from an NACA-sponsored paper by Stanley Benscoter and Richard MacNeal (subsequently a cofounder of the MacNeal Schwendler Corporation [MSC] and a member of the NASTRAN development team):

The theory is expressed entirely in terms of first-order difference equations in order that analogous electrical circuits can be readily designed and solutions obtained on the Caltech analog computer. . . . In the process of designing thin supersonic wings for minimum weight it is found that a convenient construction with aluminum alloy consists of a rather thick skin with closely spaced spars and no stringers. Such a wing deflects in the manner of a plate rather than as a beam. Internal stress distributions may be considerably different from those given by beam theory.[794]

Their implementation of analog circuitry for bending loads is illustrated here and serves as an example of the direct analog modeling of structures.[795]
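The difference-equation formulation that the analog circuits embodied is the same one a digital computer solves as a matrix system. The Python sketch below (geometry, stiffness, and load values are invented for illustration) discretizes simple Euler-Bernoulli bending of a cantilever under uniform load and solves the resulting simultaneous algebraic equations, the task the circuit network performed electrically:

```python
import numpy as np

# Digital sketch of the difference-equation idea the analog circuits
# embodied: discretize Euler-Bernoulli bending, EI * w'''' = q, on a
# cantilever and solve the resulting simultaneous algebraic equations.
n, L = 50, 1.0               # grid points, beam length (m)
EI, q = 100.0, 1000.0        # bending stiffness (N*m^2), uniform load (N/m)
h = L / (n - 1)

A = np.zeros((n, n))
b = np.full(n, q * h**4 / EI)
for i in range(2, n - 2):    # interior nodes: fourth central difference
    A[i, i-2:i+3] = [1.0, -4.0, 6.0, -4.0, 1.0]
# Clamped root: w(0) = 0 and w'(0) = 0 (one-sided first difference).
A[0, 0] = 1.0;                b[0] = 0.0
A[1, 0], A[1, 1] = -1.0, 1.0; b[1] = 0.0
# Free tip: zero bending moment (w'' = 0) and zero shear (w''' = 0).
A[n-2, n-3:n] = [1.0, -2.0, 1.0];        b[n-2] = 0.0
A[n-1, n-4:n] = [-1.0, 3.0, -3.0, 1.0];  b[n-1] = 0.0

w = np.linalg.solve(A, b)
print(f"tip deflection ~ {w[-1]:.3f} m; exact qL^4/(8EI) = {q*L**4/(8*EI):.3f} m")
```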

Direct analog computing had its advocates well into the 1960s. "For complex problems [direct analog] computers are inherently faster than digital machines since they solve the equations for the several nodes simultaneously, while the digital machines solve them sequentially. Direct analogs have, moreover, the advantage of visualization; computer setups as well as programming are more closely related to the actual problem and are based primarily on physical insight rather than on numerical skills."[796]

The advantages came at a price, however. It could take weeks, in some cases, to set up an analog computer to solve a particular type of problem. And there was no way to store a problem to be revisited at a later date. These drawbacks may not have seemed so important when there was no other recourse available, but they became more and more apparent as the programmable digital computer began to mature.

Hybrid direct-analog/digital computers were hypothesized in the 1960s: essentially a direct analog computer controlled by a digital computer capable of storing and executing program instructions. This would have overcome some of the drawbacks of direct analog computers.[797] However, this possibility was most likely overtaken by the rapid progress of digital computers. At the same time these hybrid analog/digital computers were just being thought about, NASTRAN was already in development.

A different type of analog computer, the active-element (or indirect) analog, consisted of operational amplifiers that performed arithmetic operations. These solved programmed mathematical equations rather than mimicking a physical system. Several NACA locations, including Langley, Ames, and the Flight Research Center (now Dryden Flight Research Center), used analog computers of this type for flight simulation. Ames installed its first analog computer in 1947.[798] The Flight Research Center flight simulators used analog computers exclusively from 1955 to 1964 and in combination with digital computers until 1975.[799] This type of analog computer can be thought of as simply a less precise, less reliable, and less versatile predecessor to the digital computer.