NASA'S CONTRIBUTIONS TO AERONAUTICS

New Issues: The F/A-18E/F Program

The U.S. Navy funded the F/A-18E/F Super Hornet program in 1992 to design its next-generation fighter as a replacement for the canceled A-12 aircraft and the earlier legacy F/A-18 versions. Although somewhat similar in configuration to existing F/A-18C aircraft, the new design was a larger aircraft with critical differences in wing design and other features that affect high-angle-of-attack behavior. Two of the first configuration design issues centered on the shape of the wing leading-edge extension and the ability to obtain crisp nose-down control for recovery at extreme angles of attack. Representatives of Langley’s high-angle-of-attack specialty areas were participants in a 15-member NASA-industry-DOD team that conducted wind tunnel studies and analyses that provided the basis for the final design of the F/A-18E/F LEX.[1324]

Studies of the Super Hornet’s aerodynamic stability and control characteristics at high-angle-of-attack conditions were conducted in the Full-Scale Tunnel to develop a database for piloted simulator evaluations using the Langley and Boeing simulators. Once again, the Spin Tunnel was used to identify spin modes, spin recovery characteristics, and an acceptable emergency spin recovery parachute, and to measure rotational aerodynamic characteristics using the rotary-balance technique. Langley used an extremely large (over 1,000 pounds) drop model for departure susceptibility and poststall testing at the NASA Wallops Flight Facility to provide risk reduction for the subsequent full-scale flight-test program.[1325]

One of NASA’s more critical contributions to the Super Hornet program began in March 1996, when a preproduction F/A-18E experienced an unacceptable uncommanded abrupt roll-off that occurred randomly at high angles of attack (below maximum lift) at transonic speeds and involved rapid bank angle changes of up to 60 degrees in the heart of the maneuvering envelope. Engineering analyses indicated that the wing drop was caused by a sudden asymmetric loss of lift on the wing, but the fundamental cause of the problem was not well understood. A DOD Blue Ribbon Panel was formed, and it recommended that a research program be undertaken to develop design methods to avoid such problems on future fighter aircraft. This recommendation was accepted, and a joint NASA and Navy Abrupt Wing Stall (AWS) program was initiated to conduct the research.[1326]

Meanwhile, extensive efforts by industry and the Navy were underway to resolve the wing-drop problem through wind tunnel tests and “cut and try” airframe modifications during flight tests. Over 25 potential wing modifications were assessed, and computational fluid dynamics studies were undertaken, without a feasible fix being identified. Subsequently, the automatically programmed wing leading-edge flaps were examined as a solution. Typical of current advanced fighters, the F/A-18E/F uses flaps with deflection programs scheduled as functions of angle of attack and Mach number. A revised deflection schedule was adopted in 1997 as a major improvement, but the aircraft still exhibited less serious wing drops at many test conditions. As the Navy test and evaluation staff continued to explore further solutions to wing drop, exploratory flight tests with the outer-wing fold fairing removed indicated that the wing drop had been eliminated. However, unacceptable performance and buffet characteristics resulted from removing the fairing.

Langley personnel suggested that passive porosity be examined as a more acceptable treatment of the wing fold area, based on NASA’s extensive fundamental research. Subsequently evaluated by the Navy flight-test team, the porous fold doors became a feature of the production F/A-18E/F and permitted continued production of the aircraft.

With the F/A-18E/F wing-drop problem resolved, NASA and the Naval Air Systems Command began their efforts in the AWS research program, which used a coordinated approach involving static and dynamic tests in several Langley wind tunnels, piloted simulator studies, and computational fluid dynamics studies conducted by the Navy and NASA. The research focused on the causes and resolution of the unexpected wing drop experienced by the preproduction F/A-18E/F, drawing on the wealth of aerodynamic wind tunnel and flight data that had been collected, but the program was intentionally designed to include assessments of other aircraft for validation of conclusions. The studies included the F/A-18C and the F-16 (neither of which exhibits wing drop) as well as the AV-8B and the preproduction version of the F/A-18E (both of which exhibit wing drop at the extremes of the flight envelope).

After 3 years of intense research on the complex topic of transonic shock-induced asymmetric stall at high angles of attack, the AWS program produced an unprecedented amount of design information, engineering tools, and recommendations regarding developmental approaches to avoid wing drop for future fighters. Particularly significant output from the program included the development and validation of a single-degree-of-freedom free-to-roll wind tunnel testing technique for detection of wing-drop tendencies, an assessment of advanced CFD codes for prediction of steady and unsteady shock-induced separation at high angles of attack for transonic flight, and a definition of simulator model requirements for assessment and prediction of wing drop. NASA and Lockheed Martin have already applied the free-to-roll concept in the development of the wing geometry for the F-35 fighter.[1327]
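The free-to-roll technique lends itself to a compact statement. As a minimal sketch in standard stability-and-control notation (not the AWS program’s own formulation), the test treats the model as a single-degree-of-freedom system in bank angle:

\[
I_x \, \ddot{\phi} = \bar{q} \, S \, b \, C_l\!\left(\alpha, M, \phi, \dot{\phi}\right)
\]

where \(I_x\) is the model’s roll inertia, \(\bar{q}\) the dynamic pressure, \(S\) the wing reference area, \(b\) the span, and \(C_l\) the rolling-moment coefficient at the test angle of attack \(\alpha\) and Mach number \(M\). With the model free only in roll, an abrupt asymmetric loss of lift appears as a sudden nonzero \(C_l\), and wing-drop susceptibility shows up directly as divergent or limit-cycle motions in \(\phi\).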

NASA’s Flight Test of the Russian Tu-144 SST

Robert A. Rivers


The aeronautics community has always had a strong international flavor. This case study traces how NASA researchers in the late 1990s used a Russian supersonic airliner, the Tupolev Tu-144LL—built as a visible symbol of technological prowess at the height of the Cold War—to derive supersonic cruise and aerodynamic data. Despite numerous technical, organizational, and political challenges, the joint research team obtained valuable information and engendered much goodwill.

ON A COOL, CLEAR, AND GUSTY SEPTEMBER MORNING in 1998, two NASA research pilots flew a one-of-a-kind, highly modified Russian Tupolev Tu-144LL Mach 2 Supersonic Transport (SST) side by side with a Tupolev test pilot, navigator, and flight engineer from a formerly secret Soviet-era test facility, the Zhukovsky Air Development Center 45 miles southeast of Moscow, on the first of 3 flights to be flown by Americans.[1458] These flights in Phase II of the joint United States-Russian Tu-144 flight experiments sponsored by NASA’s High-Speed Research (HSR) program were the culmination of 5 years of preparation and cooperation by engineers, technicians, and pilots in the largest joint aeronautics program ever accomplished by the two countries. The two American pilots became the first and only non-Russian pilots to fly the former symbol of Soviet aeronautics prowess, the Soviet counterpart of the Anglo-French Concorde SST.

They completed a comprehensive handling qualities evaluation of the Tu-144 while 6 other experiments gathered data from hundreds of onboard sensors that had been painstakingly mounted to the airframe in the preceding 3 years by NASA, Tupolev, and Boeing engineers and technicians. Only four more flights in the program awaited the Tu-144LL, the last of its kind, before it was retired. With the removal from service of the Concorde several years later, the world lost its only supersonic passenger aircraft and witnessed the end of an amazing era.

This is the story of a remarkable flight experiment involving the United States and Russia, NASA and Tupolev, and the men and women who worked together to accomplish a series of unique flight tests from late 1996 to early 1999 while overcoming numerous technical, programmatic, and political obstacles. What they accomplished in the late 1990s cannot be accomplished today. There are no more Supersonic Transports to be used as test platforms, no more national programs to explore commercial supersonic flight. NASA and Tupolev established a benchmark for international cooperation and trust while producing data of incalculable value with a class of vehicles that no longer exists in a regime that cannot be reached by today’s transport airplanes.[1459]

Lightning and the Composite, Electronic Airplane

FAA Federal Aviation Regulation (FAR) 23.867 governs protection of aircraft against lightning and static electricity, reflecting the influence of decades of NASA lightning research, particularly the NF-106B program. FAR 23.867 directs that an airplane “must be protected against catastrophic effects from lightning,” by bonding metal components to the airframe or, in the case of both metal and nonmetal components, designing them so that if they are struck, the effects on the aircraft will not be catastrophic. Additionally, for nonmetallic components, FAR 23.867 directs that aircraft must have “acceptable means of diverting the resulting electrical current so as not to endanger the airplane.”[166]

Among the more effective means of limiting lightning damage to aircraft is using a material that resists or minimizes the powerful pulse of an electromagnetic strike. Late in the 20th century, the aerospace industry realized the excellent potential of composite materials for that purpose. Aside from older bonded-wood-and-resin aircraft of the interwar era, the modern all-composite aircraft may be said to date from the 1960s, with the private-venture Windecker Eagle, anticipating later aircraft as diverse as the Cirrus SR-20 lightplane, the Glasair III LP (the first composite homebuilt aircraft to meet the requirements of FAR 23), and the Boeing 787. The 787 is composed of 50-percent carbon laminate, including the fuselage and wings; a carbon sandwich material in the engine nacelles, control surfaces, and wingtips; and other composites in the wings and vertical fin. Much smaller portions are made of aluminum and titanium. By contrast, the earlier 777 involved just 12-percent composites, indicative of how rapidly the prevalence of composites has risen.

An even newer composite testbed design is the Advanced Composite Cargo Aircraft (ACCA). The modified twin-engine Dornier 328Jet’s rear fuselage and vertical stabilizer are composed of advanced composite materials produced by out-of-autoclave curing. First flown in June 2009, the ACCA is the product of a 10-year project by the Air Force Research Laboratory.[167]

NASA research on lightning protection for conventional aircraft structures translated into use for composite airframes as well. Because experience proved that lightning could strike almost any spot on an airplane’s surface—not merely (as previously believed) extremities such as wings and propeller tips—researchers found a lesson for designers using new materials. They concluded, “That finding is of great importance to designers employing composite materials, which are less conductive, hence more vulnerable to lightning damage than the aluminum alloys they replace.”[168] The advantages of fiberglass and other composites have been readily recognized: besides resistance to lightning strikes, composites offer exceptional strength for light weight and are resistant to corrosion. Therefore, it was inevitable that aircraft designers would increasingly rely upon the new materials.[169]

But the composite revolution was not just the province of established manufacturers. As composites grew in popularity, they increasingly were employed by manufacturers of kit planes. The homebuilt aircraft market, a feature of American aeronautics since the time of the Wrights, expanded greatly over the 1980s and afterward. NASA’s heavy investment in lightning research carried over to the kit-plane market, and Langley awarded a Small Business Innovation Research (SBIR) contract to Stoddard-Hamilton Aircraft, Inc., and Lightning Technologies, Inc., for development of a low-cost lightning protection system for kit-built composite aircraft. As a result, Stoddard-Hamilton’s composite-structure Glasair III LP became the first homebuilt aircraft to meet the standards of FAR 23.[170]

One of the benefits of composite/fiberglass airframe materials is inherent resistance to structural damage. Typically, composites are produced by laying spaced bands of high-strength fibers in an angular pattern of perhaps 45 degrees from one another. Selectively winding the material in alternating directions produces a “basket weave” effect that enhances strength. The fibers often are set in a thermoplastic resin four or more layers thick, which, when cured, produces extremely high strength and low weight. Furthermore, the weave pattern affords excellent resistance to peeling and delamination, even when struck by lightning. Among the earliest aviation uses of composites were engine cowlings, but eventually, structural components and then entire composite airframes were envisioned. Composites can provide additional electromagnetic resistance by winding conductive filaments in a spiral pattern over the structure before curing the resin. The filaments help dissipate high-voltage energy across a large area and rapidly divert the impulses before they can inflict significant harm.[171]

It is helpful to compare the effects of lightning on aluminum aircraft to better understand the advantage of fiberglass structures. Aluminum readily conducts electromagnetic energy through the airframe, requiring designers to channel the energy away from vulnerable areas, especially fuel systems and avionics. The aircraft’s outer skin usually offers the path of least resistance, so the energy can be “vented” overboard. Fiberglass is a proven insulator against electromagnetic charges. Though composites conduct electricity, they do so less readily than do aluminum and other metals. Consequently, though it may seem counterintuitive, composites’ resistance to EMP strokes can be enhanced by adding small metallic mesh to the external surfaces, focusing unwanted currents away from the interior. The most common mesh materials are aluminum and copper impressed into the carbon fiber. Repairs of lightning-damaged composites must take into account both the mesh in the affected area and the underlying material and structure. Composites mitigate the effect of a lightning strike not only by resisting the immediate area of impact, but also by spreading the effects over a wider area. Thus, by reducing the energy for a given surface area (expressed in amps per square inch), a potentially damaging strike can be rendered harmless.
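The amps-per-square-inch point reduces to simple arithmetic. The sketch below is purely illustrative: the peak current is only an order of magnitude typical of severe strikes, and the areas are hypothetical values chosen to show the effect.

```python
# Illustrative only: spreading a strike's current over a larger
# conductive area reduces the current density at the surface.
peak_current_amps = 200_000           # severe-strike order of magnitude

concentrated_area_sq_in = 2.0         # strike confined near the attachment point
mesh_dissipated_area_sq_in = 2_000.0  # strike spread by a conductive mesh

print(peak_current_amps / concentrated_area_sq_in)     # 100000.0 A per square inch
print(peak_current_amps / mesh_dissipated_area_sq_in)  # 100.0 A per square inch
```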

Because technology is still emerging for detection and diagnosis of lightning damage, NASA is exploring methods of in-flight and postflight analysis. Obviously, the most critical is in-flight, with aircraft sensors measuring the intensity and location of a lightning strike’s current, employing laboratory simulations to establish baseline data for a specific material. Thus, the voltage/current test measurements can be compared with statistical data to estimate the extent of damage likely upon the composite. Aircrews thereby can evaluate the safety of flight risks after a specific strike and determine whether to continue or to land.
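The in-flight assessment described above amounts to comparing measured strike parameters against laboratory baselines for the specific material. A minimal sketch of that decision logic follows; the function name, thresholds, and recommendations are hypothetical placeholders, not NASA’s actual criteria.

```python
# Hypothetical sketch: classify a measured strike against lab-derived
# baselines for a specific composite. Threshold values are placeholders.
def assess_strike(peak_current_ka: float,
                  caution_level_ka: float = 100.0,
                  land_level_ka: float = 200.0) -> str:
    """Return a crew recommendation based on measured peak current."""
    if peak_current_ka < caution_level_ka:
        return "continue flight; routine postflight inspection"
    if peak_current_ka < land_level_ka:
        return "continue with caution; expedited inspection"
    return "land as soon as practical; significant damage likely"

print(assess_strike(35.0))   # mild strike
print(assess_strike(240.0))  # severe strike
```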

NASA’s research interests in addressing composite aircraft are threefold:

• Deploying onboard sensors to measure lightning-strike strength, location, and current flow.

• Obtaining conductive paint or other coatings to facilitate current flow, mitigate airframe structural damage, and eliminate requirements for additional internal shielding of electronics and avionics.

• Compiling physics-based models of complex composites that can be adapted to simulate lightning strikes, quantifying electrical, mechanical, and thermal parameters to provide real-time damage information.

As testing continues, NASA will provide modeling data to manufacturers of composite aircraft as a design tool. Similar benefits can accrue to developers of wind turbines, which increasingly are likely to use composite blades. Other nonaerospace applications can include the electric power industry, which experiences high-voltage situations.[172]

Performance Data Analysis and Reporting System

In yet another example of NASA developing a database system with and for the FAA, the Performance Data Analysis and Reporting System (PDARS) began operation in 1999 with the goal of collecting, analyzing, and reporting performance-related data about the National Airspace System. The difference between PDARS and the Aviation Safety Reporting System is that input for the ASRS comes voluntarily from people who see something they feel is unsafe and report it, while input for PDARS comes automatically—in real time—from electronic sources such as ATC radar tracks and filed flight plans. PDARS was created as an element of NASA’s Aviation Safety Monitoring and Modeling project.[239]

From these data, PDARS calculates a variety of performance measures related to air traffic patterns, including traffic counts, travel times between airports and other navigation points, distances flown, general traffic flow parameters, and the separation distance from trailing aircraft. Nearly 1,000 reports to appropriate FAA facilities are automatically generated and distributed each morning, while the system also allows for sharing data and reports among facilities, as well as facilitating larger research projects. With the information provided by PDARS, FAA managers can quickly determine the health, quality, and safety of day-to-day ATC operations and make immediate corrections.
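PDARS’s internal software is not described in this account, but the kinds of measures listed above can be illustrated with a short sketch. The data structure and functions here are hypothetical stand-ins for processed radar-track data:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class TrackPoint:
    t: float  # seconds past midnight
    x: float  # nautical miles east of a facility reference point
    y: float  # nautical miles north of the reference point

def travel_time(track: list[TrackPoint]) -> float:
    """Elapsed time between first and last radar hits, in seconds."""
    return track[-1].t - track[0].t

def distance_flown(track: list[TrackPoint]) -> float:
    """Along-track distance in nautical miles, summed hit to hit."""
    return sum(hypot(b.x - a.x, b.y - a.y)
               for a, b in zip(track, track[1:]))

track = [TrackPoint(0, 0, 0), TrackPoint(60, 4, 3), TrackPoint(120, 8, 6)]
print(travel_time(track))     # 120.0 seconds
print(distance_flown(track))  # 10.0 nautical miles
```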

The system also has provided input for several NASA and FAA studies, including measurement of the benefits of the Dallas/Fort Worth Metroplex airspace, an analysis of the Los Angeles Arrival Enhancement Procedure, an analysis of the Phoenix Dryheat departure procedure, measurement of navigation accuracy of aircraft using area navigation en route, a study on the detection and analysis of in-close approach changes, an evaluation of the benefits of domestic reduced vertical separation minimum implementation, and a baseline study for the airspace flow program. As of 2008, PDARS was in use at 20 Air Route Traffic Control Centers, 19 Terminal Radar Approach Control facilities, three FAA service area offices, the FAA’s Air Traffic Control System Command Center in Herndon, VA, and at FAA Headquarters in Washington, DC.[241]

Human Factors Research: Meshing Pilots with Planes

Steven A. Ruffin

The invention of flight exposed human limitations. Altitude effects endangered early aviators. As the capabilities of aircraft grew, so did the challenges for aeromedical and human factors researchers. Open cockpits gave way to pressurized cabins. Wicker seats perched on the leading edge of frail wood-and-fabric wings were replaced by robust metal seats and eventually sophisticated rocket-boosted ejection seats. The casual cloth work clothes and hats of the earliest aviators presaged increasingly complex flight suits.

AS MERCURY ASTRONAUT ALAN B. SHEPARD, JR., lay flat on his back, sealed in a metal capsule perched high atop a Redstone rocket on the morning of May 5, 1961, many thoughts probably crossed his mind: the pride he felt of becoming America’s first man in space, or perhaps, the possibility that the powerful rocket beneath him would blow him sky high. . . in a bad way, or maybe even a greater fear he would “screw the pooch” by doing something to embarrass himself—or far worse—jeopardize the U.S. space program.

After lying there nearly 4 hours and suffering through several launch delays, however, Shepard was by his own admission not thinking about any of these things. Rather, he was consumed with an issue much more down to earth: his bladder was full, and he desperately needed to relieve himself. Because exiting the capsule was out of the question at this point, he literally had no place to go. The designers of his modified Goodrich U.S. Navy Mark IV pressure suit had provided for nearly every contingency imaginable, but not this; after all, the flight was only scheduled to last a few minutes.

Finally, Shepard was forced to make his need known to the controllers below. As he candidly described later, “You heard me, I’ve got to pee. I’ve been in here forever.”[286] Despite the unequivocal reply of “No!” to his request, Shepard’s bladder gave him no alternative but to persist. Historic flight or not, he had to go—and now.

Mercury 7 astronaut Alan B. Shepard, Jr., preparing for his historic flight of May 5, 1961. His gleaming silver pressure suit had all the bells and whistles. . . except for one. NASA.

When the powers below finally accepted that they had no choice, they gave the suffering astronaut a reluctant thumbs up: so, “pee,” he did. . . all over his sensor-laden body and inside his gleaming silver spacesuit. And then, while the world watched—unaware of this behind-the-scenes drama—Shepard rode his spaceship into history. . . drenched in his own urine.

This inauspicious moment should have been something of an epiphany for the human factors scientists who worked for the newly formed National Aeronautics and Space Administration (NASA). It graphically pointed out the obvious: human requirements—even the most basic ones—are not optional; they are real, and accommodations must always be made to meet them. But NASA’s piloted space program had advanced so far technologically in such a short time that this was only one of many lessons that the Agency’s planners had learned the hard way. There would be many more in the years to come.

As described in Tom Wolfe’s book The Right Stuff and the movie of the same name, the first astronauts were considered by many of their contemporary non-astronaut pilots—including the ace who first broke the sound barrier, U.S. Air Force test pilot Chuck Yeager—as little more than “spam in a can.”[287] In fact, Yeager’s commander in charge of all the test pilots at Edwards Air Force Base had made it known that he didn’t particularly want his top pilots volunteering for the astronaut program; he considered it a “waste of talent.”[288] After all, these new astronauts—more like lab animals than pilots—had little real function in the early flights, other than to survive, and sealed as they were in their tiny metal capsules with no realistic means of escape, the cynical “spam in a can” metaphor was not entirely inappropriate.

But all pilots appreciated the dangers faced by this new breed of American hero: based on the space program’s much-publicized recent history of one spectacular experimental launch failure after another, it seemed like a morbidly fair bet to most observers that the brave astronauts, sitting helplessly astride 30 tons of unstable and highly explosive rocket fuel, had a realistic chance of becoming something akin to America’s most famous canned meat dish. It was indeed a dangerous job, even for the 7 overqualified test-pilots-turned-astronauts who had been so carefully chosen from more than 500 actively serving military test pilots.[289] Clearly, piloted space flight had to become considerably more human-friendly if it were to become the way of the future.

NASA had existed less than 3 years before Shepard’s flight. On July 29, 1958, President Dwight D. Eisenhower signed into law the National Aeronautics and Space Act of 1958, and chief among its provisions was the establishment of NASA. Expanding on this act’s stated purpose of conducting research into the “problems of flight within and outside the earth’s atmosphere” was an objective to develop vehicles capable of carrying—among other things—“living organisms” through space.[290]

Because this official directive clearly implied the intention of sending humans into space, NASA was from its inception charged with formulating a piloted space program. Consequently, within 3 years after it was created, the budding space agency managed to successfully launch its first human, Alan Shepard, into space. The astronaut completed NASA Mercury mission MR-3 to become America’s first man in space. Encapsulated in his Freedom 7 spacecraft, he lifted off from Cape Canaveral, FL, and flew to an altitude of just over 116 miles before splashing down into the Atlantic Ocean 302 miles downrange.[291] It was only a 15-minute suborbital flight and, as related above, not without problems, but it accomplished its objective: America officially had a piloted space program.

This was no small accomplishment. Numerous major technological barriers had to be surmounted during this short time before even this most basic of piloted space flights was possible. Among these obstacles, none was more challenging than the problems associated with maintaining and supporting human life in the ultrahostile environment of space. Thus, from the beginning of the Nation’s space program and continuing to the present, human factors research has been vital to NASA’s comprehensive research program.

Traffic Collision Avoidance System

By the 1980s, increasing airspace congestion had made the risk of catastrophic midair collision greater than ever before. Consequently, the 100th Congress passed Public Law 100-223, the Airport and Airway Safety and Capacity Expansion Improvement Act of 1987. This required, among other provisions, that passenger-carrying aircraft be equipped with a Traffic Collision Avoidance System (TCAS), independent of air traffic control, that would alert pilots to other aircraft flying in their surrounding airspace.[395]

In response to this mandate, NASA, the FAA, the Air Transport Association, the Air Line Pilots Association, and various aviation technology industries teamed up to develop and evaluate such a system, TCAS I, which later evolved into the current TCAS II. From 1988 to 1992, NASA Ames Research Center played a pivotal role in this major collaborative effort by evaluating the human performance factors that came into play with the use of TCAS. By employing ground-based simulators operated by actual airline flightcrews, NASA showed that this system was practicable, at least from a human factors standpoint.[396] The crews were found to be able to use the system accurately. This research also led to improved displays and aircrew training procedures, as well as the validation of a set of pilot collision-evading performance parameters.[397] One example of the new technologies developed for incorporation into the TCAS system is the Advanced Air Traffic Management Display. This innovative system provides pilots with a three-dimensional air traffic virtual-visualization display that increases their situational awareness while decreasing their workload.[398] This visualization system has been incorporated into TCAS system displays and has become the industry standard for new designs.[399]
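At the heart of TCAS alerting is a time-to-closest-approach test, often called the “tau” criterion: roughly, slant range divided by closure rate. The sketch below illustrates only that core idea; real TCAS II logic uses certified sensitivity-level tables and altitude tests, and the thresholds here are illustrative placeholders.

```python
# Illustrative sketch of the TCAS "tau" test: time to closest approach
# approximated as range divided by closure rate. Not the certified logic.
def tau_seconds(range_nm: float, closure_rate_kt: float) -> float:
    if closure_rate_kt <= 0:          # diverging traffic never triggers
        return float("inf")
    return range_nm / closure_rate_kt * 3600.0

def advisory(range_nm: float, closure_rate_kt: float) -> str:
    tau = tau_seconds(range_nm, closure_rate_kt)
    if tau < 25:                      # illustrative RA threshold
        return "RESOLUTION ADVISORY: maneuver per RA logic"
    if tau < 40:                      # illustrative TA threshold
        return "TRAFFIC ADVISORY: monitor intruder"
    return "clear of conflict"

print(advisory(5.0, 480.0))  # 5 nm closing at 480 kt -> tau = 37.5 s -> TA
```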

High-Speed Investigations

High-speed studies of dynamic stability were very active at Wallops. The scope and contributions of the Wallops rocket-boosted model research programs for aircraft configurations, missiles, and airframe components covered an astounding number of technical areas, including aerodynamic performance, flutter, stability and control, heat transfer, automatic controls, boundary-layer control, inlet performance, ramjets, and separation behavior of aircraft components and stores. As an example of test productivity, in just 3 years beginning in 1947, over 386 models were launched at Wallops to evaluate a single topic: roll control effectiveness at transonic conditions. These tests included generic configurations and models with wings representative of the historic Douglas D-558-2 Skyrocket, Douglas X-3 Stiletto, and Bell X-2 research aircraft.[471] Fundamental studies of dynamic stability and control were also conducted with generic research models to study basic phenomena such as longitudinal trim changes, dynamic longitudinal stability, control-hinge moments, and aerodynamic damping in roll.[472] Studies with models of the D-558-2 also detected unexpected coupling of longitudinal and lateral oscillations, a problem that would subsequently prove to be common for configurations with long fuselages and relatively small wings.[473] Similar coupled motions caused great concern in the X-3 and F-100 aircraft development programs and spurred on numerous studies of the phenomenon known as inertial coupling.

More than 20 specific aircraft configurations were evaluated during the Wallops studies, including early models of such well-known aircraft as the Douglas F4D Skyray, the McDonnell F3H Demon, the Convair B-58 Hustler, the North American F-100 Super Sabre, the Chance Vought F8U Crusader, the Convair F-102 Delta Dagger, the Grumman F11F Tiger, and the McDonnell F-4 Phantom II.


Shadowgraph of X-15 model in free flight during high-speed tests in the Ames SFFT facility. Shock wave patterns emanating from various airframe components are visible. NASA.

High-speed dynamic stability testing techniques at the Ames SFFT included studies of the static and dynamic stability of blunt-nose reentry shapes, including analyses of boundary-layer separation.[474] This work included studies of the supersonic dynamic stability characteristics of the Mercury capsule. Noting the experimental observation of nonlinear variations of pitching moment with angle of attack typically exhibited by blunt bodies, Ames researchers contributed a mathematical method for including such nonlinearities in theoretical analyses and predictions of capsule dynamic stability at supersonic speeds. During the X-15 program, Ames conducted free-flight testing in the SFFT to define stability, control, and flow-field characteristics of the configuration at high supersonic speeds.[475]
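The nonlinearity at issue can be written schematically. In a standard representation (not necessarily the specific Ames formulation), the single-degree-of-freedom pitch dynamics of a capsule are

\[
I_y \, \ddot{\theta} = \bar{q} \, S \, \bar{c} \, C_m(\alpha), \qquad
C_m(\alpha) \approx C_{m_0} + C_{m_1}\,\alpha + C_{m_3}\,\alpha^3
\]

where \(I_y\) is the pitch inertia and \(S\) and \(\bar{c}\) are the reference area and length. The cubic (or higher-order) terms capture the blunt-body behavior that a purely linear \(C_{m_\alpha}\) analysis misses, which is why retaining them matters for predicting the oscillation amplitudes observed at supersonic speeds.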

The Pace Quickens

Beginning in the early 1960s, a flurry of new military aircraft development programs resulted in an unprecedented workload for the drop-model personnel. Support was requested by the military services for the General Dynamics F-111, Grumman F-14, McDonnell-Douglas F-15, Rockwell B-1A, and McDonnell-Douglas F/A-18 development programs. In addition, drop-model tests were conducted in support of the Grumman X-29 and the X-31 research aircraft programs—sponsored by the Defense Advanced Research Projects Agency (DARPA)—which were scheduled for high-angle-of-attack full-scale flight tests at the Dryden flight facility. The specific objectives and test programs conducted with the drop models were considerably different for each configuration. Overviews of the results of the military programs are given in this volume, in another case study by this author.

Matching the Tunnel to the Supercomputer

A model of the X-43A and the Pegasus Launch Vehicle in the Langley 31-Inch Mach 10 Tunnel. NASA.

The use of sophisticated wind tunnels and their accompanying complex mathematical equations led observers early on to call aerodynamics the “science” of flight. There were three major methods of evaluating an aircraft or spacecraft: theoretical analysis, the wind tunnel, and full-flight testing. The specific order of use was ambiguous. Ideally, researchers originated a theoretical goal and began their work in a wind tunnel, with the final confirmation of results occurring during full-flight testing. Researchers at Langley sometimes addressed a challenge first by studying it in flight, then moving to the wind tunnel for more extreme testing, such as dangerous and unpredictable high speeds, and then following up with the creation of a theoretical framework. The lack of knowledge of the effect of Reynolds number was at the root of the inability to trust wind tunnel data. Moreover, tunnel structures such as walls, struts, and supports affected the performance of a model in ways that were hard to quantify.[602]

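The Reynolds number in question is the standard dynamic-similarity parameter

\[
Re = \frac{\rho \, V \, L}{\mu}
\]

where \(\rho\) is the air density, \(V\) the flow velocity, \(L\) a characteristic length such as the wing chord, and \(\mu\) the dynamic viscosity. Because a subscale tunnel model rarely matches the full-scale \(Re\), boundary-layer behavior (transition and separation in particular) can differ between tunnel and flight, which is precisely why tunnel data were distrusted.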
From the early days of the NACA and other aeronautical research facilities, an essential component of the science was the “computer.” Human computers, primarily women, worked laboriously to finish the myriad calculations needed to interpret the data generated in wind tunnel tests. Data acquisition became increasingly sophisticated as the NACA grew in the 1940s. The Langley Unitary Plan Wind Tunnel possessed the capability of remote and automatic collection of pressure, force, and temperature data from 85 locations at 64 measurements a second, which was undoubtedly faster than manual collection. Computers processed the data and delivered it via monitors or automated plotters to researchers during the course of the test. The near-instantaneous availability of test data was a leap from the manual (and visual) inspection of industrial scales during testing.[603]

Computers beginning in the 1970s were capable of mathematically calculating the nature of fluid flows quickly and cheaply, which contributed to the idea of what Baals and Corliss called the “electronic wind tunnel.”[604] No longer were computers only a tool to collect and interpret data faster. With the ability to perform billions of calculations in seconds to mathematically simulate conditions, the new supercomputers potentially could perform the job of the wind tunnel. The Royal Aeronautical Society published The Future of Flight in 1970, which included an article on computers in aerodynamic design by Bryan Thwaites, a professor of theoretical aerodynamics at the University of London. His essay would be a clarion call for the rise of computational fluid dynamics (CFD) in the late 20th century.[605] Moreover, improvements in computers and algorithms drove down the operating time and cost of computational experiments. At the same time, the time and cost of operating wind tunnels increased dramatically by 1980. The fundamental limitations of wind tunnels centered on the age-old problems related to model size and Reynolds number, temperature, wall interference, model support (“sting”) interference, unrealistic aeroelastic model distortions under load, stream nonuniformity, and unrealistic turbulence levels. Problematic results from the use of test gases were a concern for the design of vehicles for flight in the atmospheres of other planets.[606]


The control panels of the Langley Unitary Wind Tunnel in 1956. NASA.

The work of researchers at NASA Ames influenced Thwaites’s assertions about the potential of CFD to benefit aeronautical research. Ames researcher Dean Chapman highlighted the new capabilities of supercomputers in his Dryden Lecture in Research for 1979 at the American Institute of Aeronautics and Astronautics Aerospace Sciences Meeting in New Orleans, LA, in January 1979. To Chapman, innovations in computer speed and memory led to an “extraordinary cost reduction trend in computational aerodynamics,” while the cost of wind tunnel experiments had been “increasing with time.” He brought to the audience’s attention that a meager $1,000 and 30 minutes of computer time allowed the numerical simulation of flow over an airfoil. The same task in 1959 would have cost $10 million and taken 30 years to complete. Chapman made it clear that computers could cure the “many ills of wind-tunnel and turbomachinery experiments” while providing “important new technical capabilities for the aerospace industry.”[607]

The crowning achievement of the Ames work was the establishment of the Numerical Aerodynamic Simulation (NAS) Facility, which began operations in 1987. The facility’s Cray-2 supercomputer was capable of 250 million computations a second and 1.72 billion per second for short periods, with the possibility of expanding capacity to 1 billion computations per second. That capability reduced the time and cost of developing aircraft designs and enabled engineers to experiment with new designs without resorting to the expense of building a model and testing it in a wind tunnel. Ames researcher Victor L. Peterson said the new facility, and those like it, would allow engineers “to explore more combinations of the design variables than would be practical in the wind tunnel.”[608]

The impetus for the NAS program arose from several factors. First, its creation recognized that computational aerodynamics offered new capabilities in aeronautical research and development. Primarily, that meant the use of computers as a complement to wind tunnel testing, which, because of the relative youth of the discipline, also placed heavy demands on those computer systems. The NAS Facility represented the committed role of the Federal Government in the development and use of large-scale scientific computing systems dating back to the use of the ENIAC for hydrogen bomb and ballistic missile calculations in the late 1940s.[609]

It was clear to NASA that supercomputers were part of the Agency’s future in the late 1980s. Futuristic projects that involved NASA supercomputers included the National Aero-Space Plane (NASP), which had an anticipated speed of Mach 25; new main engines and a crew escape system for the Space Shuttle; and refined rotors for helicopters. Most importantly from the perspective of supplanting the wind tunnel, a supercomputer generated data and converted them into pictures that captured flow phenomena that previously could not be simulated.[610] In other words, the “mind’s eye” of the wind tunnel engineer could be captured on film.

Nevertheless, computer simulations were not to replace the wind tunnel. At a meeting sponsored by the Advisory Group for Aerospace Research & Development (AGARD) on the integration of computers and wind tunnel testing in September 1980, Joseph G. Marvin, the chief of the Experimental Fluid Dynamics Branch at Ames, asserted that CFD was an “attractive means of providing that necessary bridge between wind-tunnel simulation and flight.” Before that could happen, a careful and critical program of comparison with wind tunnel experiments had to take place. In other words, the wind tunnel was the tool to verify the accuracy of CFD.[611] Dr. Seymour M. Bogdonoff of Princeton University commented in 1988 that “computers can’t do anything unless you know what data to put in them.” The aerospace community still had to discover and document the key phenomena to realize the “future of flight” in the hypersonic and interplanetary regimes. The next step was inputting the data into the supercomputers.[612]

Researchers Victor L. Peterson and William F. Ballhaus, Jr., who worked in the NAS Facility, recognized the “complementary nature of computation and wind tunnel testing,” where the “combined use” of each captured the “strengths of each tool.” Wind tunnels and computers brought different strengths to the research. The wind tunnel was best for providing detailed performance data once a final configuration was selected, especially for investigations involving complex aerodynamic phenomena. Computers facilitated the arrival at and analysis of that final configuration through several steps. They allowed development of design concepts such as the forward-swept wing or jet flap for lift augmentation and offered a more efficient process of choosing the most promising designs to evaluate in the wind tunnel. Computers also made the instrumentation of test models easier and corrected wind tunnel data for scaling and interference errors.[613]

Enhancing General Aviation Safety

Flying and handling qualities are, per se, an important aspect of operational safety. But many other issues affect safety as well. The GA airplane of the postwar era was very different from its prewar predecessor—gone were the fabric-covered wood or steel-tube structures, small engines, and two-bladed fixed-pitch propellers. Instead, many were sleek all-metal monoplanes with retractable landing gear, near-or-over-200-mph cruising speeds, and, as noted in the previous section, often challenging and demanding flying and handling qualities. In November 1971, NASA sponsored a meeting at the Langley Research Center to discuss technologies that might be applied to future civil aviation in the 1970s and beyond. Among the many papers presented was a survey of GA by Jack Fischel and Marvin Barber of the Flight Research Center.[835] Barber and Fischel offered an incisive survey and synthesis of applicable technologies, including the then-new concept of the supercritical wing, which was of course applicable to propeller design as well. They addressed opportunities to employ new structural design concepts and materials advances (as were then beginning to be explored for military aircraft). Boron and graphite composites, which could be laid up and injection molded, promised to reduce both weight and labor costs, offering higher strength-to-weight ratios than conventional aluminum and steel construction. They noted the potential of increasingly reliable and cheap gas turbine engines (and the then-fashionable rotary combustion engine as well), and improved avionics could provide greater utility and safety for pilots of lower flight experience. Barber and Fischel concluded that,

On the basis of current and projected near-future technology, it is believed that the main technology effort in the next decade will be devoted to improving the economy, performance, utility, and safety of General Aviation aircraft.[836]

Of these, the greatest challenges involved safety. By the early 1970s, the fatality rate for GA was 10 times higher per passenger mile than that of automobiles.[837] Many accidents were caused by pilots exceeding their flying abilities, leading one manufacturing executive to ruefully remark at a NASA conference, “If we don’t soon find ways to improve the safety of our airplanes, we are going to be putting placards on the airplanes which say ‘Flying airplanes may be hazardous to your health.’”[838] Alarmed, NASA set an aviation safety goal to reduce fatality rates by 80 percent by the mid-1980s.[839] While basic changes in pilot training and practices could accomplish a great deal of good, so, too, could better understanding of GA safety challenges to create aircraft that were easier to fly and more tolerant of pilot error, together with subsystems such as advanced avionics and flight controls that could further enhance flight safety. Underpinning all of this was a continuing need for the highest quality information and analysis that NASA research could furnish. The following examples offer an appreciation of some of the contributions NACA-NASA researchers made in confronting some of the major challenges to GA safety.