NASA'S CONTRIBUTIONS TO AERONAUTICS

Lightning and the Composite, Electronic Airplane

FAA Federal Aviation Regulation (FAR) 23.867 governs protection of aircraft against lightning and static electricity, reflecting the influence of decades of NASA lightning research, particularly the NF-106B program. FAR 23.867 directs that an airplane "must be protected against catastrophic effects from lightning" by bonding metal components to the airframe or, in the case of both metal and nonmetal components, designing them so that if they are struck, the effects on the aircraft will not be catastrophic. Additionally, for nonmetallic components, FAR 23.867 directs that aircraft must have "acceptable means of diverting the resulting electrical current so as not to endanger the airplane."[166]

Among the more effective means of limiting lightning damage to aircraft is using a material that resists or minimizes the powerful pulse of an electromagnetic strike. Late in the 20th century, the aerospace industry realized the excellent potential of composite materials for that purpose. Aside from older bonded-wood-and-resin aircraft of the interwar era, the modern all-composite aircraft may be said to date from the 1960s, with the private-venture Windecker Eagle, anticipating later aircraft as diverse as the Cirrus SR-20 lightplane, the Glasair III LP (the first composite homebuilt aircraft to meet the requirements of FAR 23), and the Boeing 787. The 787 is composed of 50-percent carbon laminate, including the fuselage and wings; a carbon sandwich material in the engine nacelles, control surfaces, and wingtips; and other composites in the wings and vertical fin. Much smaller portions are made of aluminum and titanium. In contrast, the earlier 777 involved just 12-percent composites, indicative of how rapidly the prevalence of composites has risen.

An even newer composite testbed design is the Advanced Composite Cargo Aircraft (ACCA), a modified twin-engine Dornier 328Jet whose rear fuselage and vertical stabilizer are composed of advanced composite materials produced by out-of-autoclave curing. First flown in June 2009, the ACCA is the product of a 10-year project by the Air Force Research Laboratory.[167]

NASA research on lightning protection for conventional aircraft structures translated into use for composite airframes as well. Because experience proved that lightning could strike almost any spot on an airplane's surface—not merely (as previously believed) extremities such as wings and propeller tips—researchers found a lesson for designers using new materials. They concluded, "That finding is of great importance to designers employing composite materials, which are less conductive, hence more vulnerable to lightning damage than the aluminum alloys they replace."[168] The advantages of fiberglass and other composites have been readily recognized: besides resistance to lightning strikes, composites offer exceptional strength for light weight and are resistant to corrosion. Therefore, it was inevitable that aircraft designers would increasingly rely upon the new materials.[169]

But the composite revolution was not just the province of established manufacturers. As composites grew in popularity, they increasingly were employed by manufacturers of kit planes. The homebuilt aircraft market, a feature of American aeronautics since the time of the Wrights, expanded greatly over the 1980s and afterward. NASA's heavy investment in lightning research carried over to the kit-plane market, and Langley awarded a Small Business Innovation Research (SBIR) contract to Stoddard-Hamilton Aircraft, Inc., and Lightning Technologies, Inc., for development of a low-cost lightning protection system for kit-built composite aircraft. As a result, Stoddard-Hamilton's composite-structure Glasair III LP became the first homebuilt aircraft to meet the standards of FAR 23.[170]

One of the benefits of composite/fiberglass airframe materials is inherent resistance to structural damage. Typically, composites are produced by laying spaced bands of high-strength fibers in an angular pattern of perhaps 45 degrees from one another. Selectively winding the material in alternating directions produces a "basket weave" effect that enhances strength. The fibers often are set in a thermoplastic resin four or more layers thick, which, when cured, produces extremely high strength and low weight. Furthermore, the weave pattern affords excellent resistance to peeling and delamination, even when struck by lightning. Among the earliest aviation uses of composites were engine cowlings, but eventually, structural components and then entire composite airframes were envisioned. Composites can provide additional electromagnetic resistance when conductive filaments are wound in a spiral pattern over the structure before the resin is cured. The filaments help dissipate high-voltage energy across a large area and rapidly divert the impulses before they can inflict significant harm.[171]

It is helpful to compare the effects of lightning on aluminum aircraft to better understand the advantage of fiberglass structures. Aluminum readily conducts electromagnetic energy through the airframe, requiring designers to channel the energy away from vulnerable areas, especially fuel systems and avionics. The aircraft's outer skin usually offers the path of least resistance, so the energy can be "vented" overboard. Fiberglass is a proven insulator against electromagnetic charges. Though composites conduct electricity, they do so less readily than do aluminum and other metals. Consequently, though it may seem counterintuitive, composites' resistance to electromagnetic pulse (EMP) strokes can be enhanced by adding a fine metallic mesh to the external surfaces, channeling unwanted currents away from the interior. The most common mesh materials are aluminum and copper impressed into the carbon fiber. Repairs of lightning-damaged composites must take into account the mesh in the affected area as well as the basic material and attendant structure. Composites mitigate the effect of a lightning strike not only by resisting it at the immediate area of impact, but also by spreading the effects over a wider area. Thus, by reducing the energy for a given surface area (expressed in amps per square inch), a potentially damaging strike can be rendered harmless.
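To make the "amps per square inch" point concrete, the short sketch below estimates how spreading a strike's peak current over a larger conductive-mesh area reduces the local current density. The peak current and attachment areas are hypothetical, illustrative values, not figures from NASA testing.

```python
# Illustrative only: hypothetical peak current and attachment areas,
# not measured values from NASA or FAA lightning testing.
PEAK_CURRENT_AMPS = 200_000  # order of magnitude for a severe strike

def current_density(current_amps: float, area_sq_in: float) -> float:
    """Average current density (amps per square inch) over the area carrying the current."""
    return current_amps / area_sq_in

# Unprotected laminate: current concentrated near the attachment point.
concentrated = current_density(PEAK_CURRENT_AMPS, area_sq_in=4.0)

# Expanded metal mesh spreads the same current over a much larger skin area.
spread = current_density(PEAK_CURRENT_AMPS, area_sq_in=4_000.0)

print(f"Concentrated: {concentrated:,.0f} A/in^2")
print(f"Spread by mesh: {spread:,.0f} A/in^2")
```

The same total energy, distributed over a thousandfold larger area, produces a thousandfold lower local current density, which is the mechanism the mesh exploits.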

Because technology for detecting and diagnosing lightning damage is still emerging, NASA is exploring methods of in-flight and postflight analysis. The most critical is in-flight analysis, with aircraft sensors measuring the intensity and location of a lightning strike's current and laboratory simulations establishing baseline data for a specific material. The in-flight voltage and current measurements can then be compared with the statistical baseline to estimate the extent of damage likely to the composite. Aircrews thereby can evaluate the flight-safety risk after a specific strike and determine whether to continue or to land.
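As a sketch of that in-flight assessment idea, comparing measured strike parameters against laboratory baseline data for a given material, the following hypothetical example flags a strike as within or beyond the tested envelope. The material names, thresholds, and decision wording are invented for illustration only.

```python
# Hypothetical baseline data: peak current (kA) and action integral (A^2*s)
# at which laboratory coupon tests showed damage for a given layup.
# These numbers are illustrative, not NASA test results.
BASELINES = {
    "carbon_laminate_with_mesh": {"peak_kA": 100.0, "action_integral": 0.25e6},
    "carbon_laminate_bare":      {"peak_kA": 40.0,  "action_integral": 0.05e6},
}

def assess_strike(material: str, measured_peak_kA: float, measured_action_integral: float) -> str:
    """Compare onboard sensor measurements against the laboratory damage baseline."""
    base = BASELINES[material]
    if (measured_peak_kA < 0.5 * base["peak_kA"]
            and measured_action_integral < 0.5 * base["action_integral"]):
        return "well below damage baseline: continue, inspect post-flight"
    if (measured_peak_kA < base["peak_kA"]
            and measured_action_integral < base["action_integral"]):
        return "approaching damage baseline: continue with caution, inspect post-flight"
    return "exceeds damage baseline: treat as possible structural damage, consider landing"

print(assess_strike("carbon_laminate_with_mesh",
                    measured_peak_kA=65.0, measured_action_integral=0.12e6))
```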

NASA’s research interests in addressing composite aircraft are threefold:

• Deploying onboard sensors to measure lightning-strike strength, location, and current flow.

• Obtaining conductive paint or other coatings to facilitate current flow, mitigate airframe structural damage, and eliminate requirements for additional internal shielding of electronics and avionics.

• Compiling physics-based models of complex composites that can be adapted to simulate lightning strikes, quantifying electrical, mechanical, and thermal parameters to provide real-time damage information.

As testing continues, NASA will provide modeling data to manufacturers of composite aircraft as a design tool. Similar benefits can accrue to developers of wind turbines, which increasingly are likely to use composite blades. Other nonaerospace applications can include the electric power industry, which experiences high-voltage situations.[172]

Performance Data Analysis and Reporting System

In yet another example of NASA developing a database system with and for the FAA, the Performance Data Analysis and Reporting System (PDARS) began operation in 1999 with the goal of collecting, analyzing, and reporting performance-related data about the National Airspace System. The difference between PDARS and the Aviation Safety Reporting System is that input for the ASRS comes voluntarily from people who see something they feel is unsafe and report it, while input for PDARS comes automatically—in real time—from electronic sources such as ATC radar tracks and filed flight plans. PDARS was created as an element of NASA's Aviation Safety Monitoring and Modeling project.[239]

From these data, PDARS calculates a variety of performance measures related to air traffic patterns, including traffic counts, travel times between airports and other navigation points, distances flown, general traffic flow parameters, and the separation distance from trailing aircraft. Nearly 1,000 reports to appropriate FAA facilities are automatically generated and distributed each morning, while the system also allows for sharing data and reports among facilities, as well as facilitating larger research projects. With the information provided by PDARS, FAA managers can quickly determine the health, quality, and safety of day-to-day ATC operations and make immediate corrections.[240]
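The kinds of performance measures PDARS derives from radar tracks can be illustrated with a small sketch. The track format, field names, and flat-earth geometry below are assumptions for illustration, not the actual PDARS data model.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class TrackPoint:
    t: float   # seconds since midnight
    x: float   # nautical miles east of a reference point (assumed flat-earth frame)
    y: float   # nautical miles north of the reference point

def distance_flown(track: list[TrackPoint]) -> float:
    """Total along-track distance in nautical miles."""
    return sum(hypot(b.x - a.x, b.y - a.y) for a, b in zip(track, track[1:]))

def travel_time(track: list[TrackPoint]) -> float:
    """Elapsed time between the first and last radar returns, in minutes."""
    return (track[-1].t - track[0].t) / 60.0

def min_separation(lead: list[TrackPoint], trail: list[TrackPoint]) -> float:
    """Minimum horizontal separation (nmi) at common time samples of two tracks."""
    by_time = {p.t: p for p in lead}
    return min((hypot(p.x - by_time[p.t].x, p.y - by_time[p.t].y)
                for p in trail if p.t in by_time),
               default=float("inf"))
```

Real PDARS processing also fuses flight-plan data and handles radar-specific issues such as track gaps and facility boundaries, which this sketch omits.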

The system also has provided input for several NASA and FAA studies, including measurement of the benefits of the Dallas/Fort Worth Metroplex airspace, an analysis of the Los Angeles Arrival Enhancement Procedure, an analysis of the Phoenix Dryheat departure procedure, measurement of the navigation accuracy of aircraft using area navigation en route, a study on the detection and analysis of in-close approach changes, an evaluation of the benefits of domestic reduced vertical separation minimum implementation, and a baseline study for the airspace flow program. As of 2008, PDARS was in use at 20 Air Route Traffic Control Centers, 19 Terminal Radar Approach Control facilities, three FAA service area offices, the FAA's Air Traffic Control System Command Center in Herndon, VA, and at FAA Headquarters in Washington, DC.[241]

Human Factors Research: Meshing Pilots with Planes

Steven A. Ruffin

The invention of flight exposed human limitations. Altitude effects endangered early aviators. As the capabilities of aircraft grew, so did the challenges for aeromedical and human factors researchers. Open cockpits gave way to pressurized cabins. Wicker seats perched on the leading edge of frail wood-and-fabric wings were replaced by robust metal seats and, eventually, sophisticated rocket-boosted ejection seats. The casual cloth work clothes and hats of the earliest aviators presaged increasingly complex flight suits.

AS MERCURY ASTRONAUT ALAN B. SHEPARD, JR., lay flat on his back, sealed in a metal capsule perched high atop a Redstone rocket on the morning of May 5, 1961, many thoughts probably crossed his mind: the pride he felt at becoming America's first man in space; or perhaps the possibility that the powerful rocket beneath him would blow him sky high. . . in a bad way; or maybe even a greater fear that he would "screw the pooch" by doing something to embarrass himself—or far worse—jeopardize the U.S. space program.

After lying there nearly 4 hours and suffering through several launch delays, however, Shepard was by his own admission not thinking about any of these things. Rather, he was consumed with an issue much more down to earth: his bladder was full, and he desperately needed to relieve himself. Because exiting the capsule was out of the question at this point, he literally had no place to go. The designers of his modified Goodrich U.S. Navy Mark IV pressure suit had provided for nearly every contingency imaginable, but not this; after all, the flight was only scheduled to last a few minutes.

Finally, Shepard was forced to make his need known to the controllers below. As he candidly described later, "You heard me, I've got to pee. I've been in here forever."[286] Despite the unequivocal reply of "No!" to his request, Shepard's bladder gave him no alternative but to persist. Historic flight or not, he had to go—and now.

Mercury 7 astronaut Alan B. Shepard, Jr., preparing for his historic flight of May 5, 1961. His gleaming silver pressure suit had all the bells and whistles. . . except for one. NASA.

When the powers below finally accepted that they had no choice, they gave the suffering astronaut a reluctant thumbs up: so, "pee," he did. . . all over his sensor-laden body and inside his gleaming silver spacesuit. And then, while the world watched—unaware of this behind-the-scenes drama—Shepard rode his spaceship into history. . . drenched in his own urine.

This inauspicious moment should have been something of an epiphany for the human factors scientists who worked for the newly formed National Aeronautics and Space Administration (NASA). It graphically pointed out the obvious: human requirements—even the most basic ones—are not optional; they are real, and accommodations must always be made to meet them. But NASA's piloted space program had advanced so far technologically in such a short time that this was only one of many lessons that the Agency's planners had learned the hard way. There would be many more in the years to come.

As described in the Tom Wolfe book and movie of the same name, The Right Stuff, the first astronauts were considered by many of their contemporary non-astronaut pilots—including the ace who first broke the sound barrier, U. S. Air Force test pilot Chuck Yeager—as little more than "spam in a can.”[287] In fact, Yeager’s commander in charge of all the test pilots at Edwards Air Force Base had made it known that he didn’t particularly want his top pilots volunteering for the astronaut program; he considered it a "waste of talent.”[288] After all, these new astronauts— more like lab animals than pilots—had little real function in the early flights, other than to survive, and sealed as they were in their tiny metal capsules with no realistic means of escape, the cynical "spam in a can” metaphor was not entirely inappropriate.

But all pilots appreciated the dangers faced by this new breed of American hero: based on the space program's much-publicized recent history of one spectacular experimental launch failure after another, it seemed like a morbidly fair bet to most observers that the brave astronauts, sitting helplessly astride 30 tons of unstable and highly explosive rocket fuel, had a realistic chance of becoming something akin to America's most famous canned meat dish. It was indeed a dangerous job, even for the 7 overqualified test-pilots-turned-astronauts who had been so carefully chosen from more than 500 actively serving military test pilots.[289] Clearly, piloted space flight had to become considerably more human-friendly if it were to become the way of the future.

NASA had existed less than 3 years before Shepard’s flight. On July 19, 1958, President Dwight D. Eisenhower signed into law the National Aeronautics and Space Act of 1958, and chief among the provisions was the establishment of NASA. Expanding on this act’s stated purpose of conducting research into the "problems of flight within and outside the earth’s atmosphere” was an objective to develop vehicles capable of carrying—among other things—"living organisms” through space.[290]

Because this official directive clearly implied the intention of sending humans into space, NASA was from its inception charged with formulating a piloted space program. Consequently, within 3 years after it was created, the budding space agency managed to successfully launch its first human, Alan Shepard, into space. The astronaut completed NASA Mercury mission MR-3 to become America's first man in space. Encapsulated in his Freedom 7 spacecraft, he lifted off from Cape Canaveral, FL, and flew to an altitude of just over 116 miles before splashing down into the Atlantic Ocean 302 miles downrange.[291] It was only a 15-minute suborbital flight and, as related above, not without problems, but it accomplished its objective: America officially had a piloted space program.

This was no small accomplishment. Numerous major technological barriers had to be surmounted during this short time before even this most basic of piloted space flights was possible. Among these obstacles, none was more challenging than the problems associated with maintaining and supporting human life in the ultrahostile environment of space. Thus, from the beginning of the Nation's space program and continuing to the present, human factors research has been vital to NASA's comprehensive research program.

Traffic Collision Avoidance System

By the 1980s, increasing airspace congestion had made the risk of catastrophic midair collision greater than ever before. Consequently, the 100th Congress passed Public Law 100-223, the Airport and Airway Safety and Capacity Expansion Improvement Act of 1987. This required, among other provisions, that passenger-carrying aircraft be equipped with a Traffic Collision Avoidance System (TCAS), independent of air traffic control, that would alert pilots to other aircraft flying in their surrounding airspace.[395]

In response to this mandate, NASA, the FAA, the Air Transport Association, the Air Line Pilots Association, and various aviation technology industries teamed up to develop and evaluate such a system, TCAS I, which later evolved into the current TCAS II. From 1988 to 1992, NASA Ames Research Center played a pivotal role in this major collaborative effort by evaluating the human performance factors that came into play with the use of TCAS. By employing ground-based simulators operated by actual airline flightcrews, NASA showed that this system was practicable, at least from a human factors standpoint.[396] The crews were found to be able to use the system accurately. This research also led to improved displays and aircrew training procedures, as well as the validation of a set of pilot collision-evading performance parameters.[397] One example of the new technologies developed for incorporation into TCAS is the Advanced Air Traffic Management Display. This innovative system provides pilots with a three-dimensional air traffic virtual-visualization display that increases their situational awareness while decreasing their workload.[398] This visualization system has been incorporated into TCAS displays and has become the industry standard for new designs.[399]
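The certified TCAS II logic is far more elaborate than can be shown here, but its core idea, alerting on time to closest approach rather than raw distance, can be sketched briefly. The tau thresholds below are representative values chosen for illustration, not the actual TCAS II parameters, which vary with altitude.

```python
def tau_seconds(range_nmi: float, closure_rate_knots: float) -> float:
    """Approximate time to closest approach: range divided by closure rate."""
    if closure_rate_knots <= 0:        # aircraft are not converging
        return float("inf")
    return range_nmi / closure_rate_knots * 3600.0

def advisory(range_nmi: float, closure_rate_knots: float) -> str:
    """Issue a traffic or resolution advisory based on illustrative tau thresholds."""
    tau = tau_seconds(range_nmi, closure_rate_knots)
    if tau < 25:    # illustrative RA threshold only
        return "RESOLUTION ADVISORY: climb or descend per computed escape maneuver"
    if tau < 45:    # illustrative TA threshold only
        return "TRAFFIC ADVISORY: traffic, traffic"
    return "clear of conflict"

print(advisory(range_nmi=5.0, closure_rate_knots=480.0))  # tau = 37.5 s -> traffic advisory
```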

High-Speed Investigations

High-speed studies of dynamic stability were very active at Wallops. The scope and contributions of the Wallops rocket-boosted model research programs for aircraft configurations, missiles, and airframe components covered an astounding number of technical areas, including aerodynamic performance, flutter, stability and control, heat transfer, automatic controls, boundary-layer control, inlet performance, ramjets, and separation behavior of aircraft components and stores. As an example of test productivity, in just 3 years beginning in 1947, over 386 models were launched at Wallops to evaluate a single topic: roll control effectiveness at transonic conditions. These tests included generic configurations and models with wings representative of the historic Douglas D-558-2 Skyrocket, Douglas X-3 Stiletto, and Bell X-2 research aircraft.[471] Fundamental studies of dynamic stability and control were also conducted with generic research models to study basic phenomena such as longitudinal trim changes, dynamic longitudinal stability, control-hinge moments, and aerodynamic damping in roll.[472] Studies with models of the D-558-2 also detected unexpected coupling of longitudinal and lateral oscillations, a problem that would subsequently prove to be common for configurations with long fuselages and relatively small wings.[473] Similar coupled motions caused great concern in the X-3 and F-100 aircraft development programs and spurred numerous studies of the phenomenon known as inertial coupling.

More than 20 specific aircraft configurations were evaluated during the Wallops studies, including early models of such well-known aircraft as the Douglas F4D Skyray, the McDonnell F3H Demon, the Convair B-58 Hustler, the North American F-100 Super Sabre, the Chance Vought F8U Crusader, the Convair F-102 Delta Dagger, the Grumman F11F Tiger, and the McDonnell F-4 Phantom II.


Shadowgraph of X-15 model in free flight during high-speed tests in the Ames SFFT facility. Shock wave patterns emanating from various airframe components are visible. NASA.

High-speed dynamic stability testing techniques at the Ames SFFT included studies of the static and dynamic stability of blunt-nose reentry shapes, including analyses of boundary-layer separation.[474] This work included studies of the supersonic dynamic stability characteristics of the Mercury capsule. Noting the experimental observation of nonlinear variations of pitching moment with angle of attack typically exhibited by blunt bodies, Ames researchers contributed a mathematical method for including such nonlinearities in theoretical analyses and predictions of capsule dynamic stability at supersonic speeds. During the X-15 program, Ames conducted free-flight testing in the SFFT to define stability, control, and flow-field characteristics of the configuration at high supersonic speeds.[475]
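One conventional way to write the kind of nonlinear pitching-moment behavior noted for blunt capsules, shown here only as an illustrative form rather than the specific Ames formulation, is to let the static moment coefficient depend nonlinearly on angle of attack while retaining linear damping terms:

```latex
% Illustrative pitching-moment model with a nonlinear static term.
% C_m: pitching-moment coefficient; \alpha: angle of attack; q: pitch rate;
% c: reference length; V: free-stream velocity. The cubic coefficient is an
% assumed way to represent the nonlinearity, not the specific Ames method.
C_m(\alpha, q) = C_{m_0} + C_{m_\alpha}\,\alpha + C_{m_{\alpha^3}}\,\alpha^{3}
               + \left( C_{m_q} + C_{m_{\dot{\alpha}}} \right) \frac{q\,c}{2V}
```

A purely linear analysis keeps only the first-order static term; the Ames contribution was a method for carrying nonlinear terms of this general kind into theoretical predictions of capsule dynamic stability at supersonic speeds.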

The Pace Quickens

Beginning in the early 1960s, a flurry of new military aircraft development programs resulted in an unprecedented workload for the drop-model personnel. Support was requested by the military services for the General Dynamics F-111, Grumman F-14, McDonnell-Douglas F-15, Rockwell B-1A, and McDonnell-Douglas F/A-18 development programs. In addition, drop-model tests were conducted in support of the Grumman X-29 and the X-31 research aircraft programs, sponsored by the Defense Advanced Research Projects Agency (DARPA), which were scheduled for high-angle-of-attack full-scale flight tests at the Dryden flight facility. The specific objectives and test programs conducted with the drop models were considerably different for each configuration. Overviews of the results of the military programs are given in this volume, in another case study by this author.

Matching the Tunnel to the Supercomputer

A model of the X-43A and the Pegasus Launch Vehicle in the Langley 31-Inch Mach 10 Tunnel. NASA.

The use of sophisticated wind tunnels and their accompanying complex mathematical equations led observers early on to call aerodynamics the "science" of flight. There were three major methods of evaluating an aircraft or spacecraft: theoretical analysis, the wind tunnel, and full-flight testing. The specific order of use was ambiguous. Ideally, researchers originated a theoretical goal and began their work in a wind tunnel, with the final confirmation of results occurring during full-flight testing. Researchers at Langley sometimes addressed a challenge first by studying it in flight, then moving to the wind tunnel for more extreme testing, such as dangerous and unpredictable high speeds, and then following up with the creation of a theoretical framework. The lack of knowledge of the effect of Reynolds number was at the root of the inability to trust wind tunnel data. Moreover, tunnel structures such as walls, struts, and supports affected the performance of a model in ways that were hard to quantify.[602]

From the early days of the NACA and other aeronautical research facilities, an essential component of the science was the "computer." Human computers, primarily women, worked laboriously to finish the myriad of calculations needed to interpret the data generated in wind tunnel tests. Data acquisition became increasingly sophisticated as the NACA grew in the 1940s. The Langley Unitary Plan Wind Tunnel possessed the capability of remote and automatic collection of pressure, force, and temperature data from 85 locations at 64 measurements a second, which was undoubtedly faster than manual collection. Computers processed the data and delivered it via monitors or automated plotters to researchers during the course of the test. The near-instantaneous availability of test data was a leap from the manual (and visual) inspection of industrial scales during testing.[603]

Beginning in the 1970s, computers were capable of mathematically calculating the nature of fluid flows quickly and cheaply, which contributed to the idea of what Baals and Corliss called the "electronic wind tunnel."[604] No longer were computers only a tool to collect and interpret data faster. With the ability to perform billions of calculations in seconds to mathematically simulate conditions, the new supercomputers potentially could perform the job of the wind tunnel. The Royal Aeronautical Society published The Future of Flight in 1970, which included an article on computers in aerodynamic design by Bryan Thwaites, a professor of theoretical aerodynamics at the University of London. His essay would be a clarion call for the rise of computational fluid dynamics (CFD) in the late 20th century.[605] Moreover, improvements in computers and algorithms drove down the operating time and cost of computational experiments. At the same time, the time and cost of operating wind tunnels increased dramatically by 1980. The fundamental limitations of wind tunnels centered on the age-old problems related to model size and Reynolds number, temperature, wall interference, model support ("sting") interference, unrealistic aeroelastic model distortions under load, stream nonuniformity, and unrealistic turbulence levels. Problematic results from the use of test gases were a concern for the design of vehicles for flight in the atmospheres of other planets.[606]
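To give a flavor of the "electronic wind tunnel" idea, flow behavior obtained purely from computation, the toy example below relaxes the two-dimensional Laplace equation for a potential flow over a small bump on a wall. It is a deliberately minimal sketch of the class of calculation, orders of magnitude simpler than the Navier-Stokes methods the supercomputers actually ran; the grid size and bump geometry are arbitrary.

```python
import numpy as np

# Toy "electronic wind tunnel": potential flow over a rectangular bump on a wall,
# computed by Jacobi relaxation of Laplace's equation for the stream function psi.
NX, NY, U_INF = 120, 60, 1.0
y = np.arange(NY, dtype=float)
psi = np.tile(U_INF * y[:, None], (1, NX))        # start from undisturbed uniform flow

bump = np.zeros((NY, NX), dtype=bool)             # bump occupies a block on the lower wall
bump[0:8, 50:70] = True

for _ in range(5000):                             # Jacobi sweeps
    new = psi.copy()
    new[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1] +
                              psi[1:-1, 2:] + psi[1:-1, :-2])
    new[0, :] = 0.0                               # wall streamline
    new[-1, :] = U_INF * (NY - 1)                 # undisturbed far field above
    new[:, 0] = new[:, -1] = U_INF * y            # inflow and outflow columns
    new[bump] = 0.0                               # bump surface lies on the wall streamline
    psi = new

# The flow speeds up over the bump: compare local streamline spacing there vs. upstream.
du_upstream = psi[9, 10] - psi[8, 10]
du_over_bump = psi[9, 60] - psi[8, 60]
print(f"local u upstream ~ {du_upstream:.2f}, over the bump ~ {du_over_bump:.2f}")
```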


The control panels of the Langley Unitary Wind Tunnel in 1956. NASA.

The work of researchers at NASA Ames influenced Thwaites's assertions about the potential of CFD to benefit aeronautical research. Ames researcher Dean Chapman highlighted the new capabilities of supercomputers in his Dryden Lecture in Research for 1979 at the American Institute of Aeronautics and Astronautics Aerospace Sciences Meeting in New Orleans, LA, in January 1979. To Chapman, innovations in computer speed and memory led to an "extraordinary cost reduction trend in computational aerodynamics," while the cost of wind tunnel experiments had been "increasing with time." He brought to the audience's attention that a meager $1,000 and 30 minutes of computer time allowed the numerical simulation of flow over an airfoil. The same task in 1959 would have cost $10 million and taken 30 years to complete. Chapman made it clear that computers could cure the "many ills of wind-tunnel and turbomachinery experiments" while providing "important new technical capabilities for the aerospace industry."[607]

The crowning achievement of the Ames work was the establishment of the Numerical Aerodynamic Simulation (NAS) Facility, which began operations in 1987. The facility's Cray-2 supercomputer was capable of 250 million computations a second, and 1.72 billion per second for short periods, with the possibility of expanding capacity to 1 billion computations per second. That capability reduced the time and cost of developing aircraft designs and enabled engineers to experiment with new designs without resorting to the expense of building a model and testing it in a wind tunnel. Ames researcher Victor L. Peterson said the new facility, and those like it, would allow engineers "to explore more combinations of the design variables than would be practical in the wind tunnel."[608]

The impetus for the NAS program arose from several factors. First, its creation recognized that computational aerodynamics offered new capabilities in aeronautical research and development. Primarily, that meant the use of computers as a complement to wind tunnel testing, which, because of the relative youth of the discipline, also placed heavy demands on those computer systems. The NAS Facility represented the committed role of the Federal Government in the development and use of large-scale scientific computing systems dating back to the use of the ENIAC for hydrogen bomb and ballistic missile calculations in the late 1940s.[609]

It was clear to NASA that supercomputers were part of the Agency's future in the late 1980s. Futuristic projects that involved NASA supercomputers included the National Aero-Space Plane (NASP), which had an anticipated speed of Mach 25; new main engines and a crew escape system for the Space Shuttle; and refined rotors for helicopters. Most importantly from the perspective of supplanting the wind tunnel, a supercomputer generated data and converted them into pictures that captured flow phenomena that previously could not be simulated.[610] In other words, the "mind's eye" of the wind tunnel engineer could be captured on film.

Nevertheless, computer simulations were not to replace the wind tunnel. At a meeting sponsored by the Advisory Group for Aerospace Research & Development (AGARD) on the Integration of Computers and Wind Testing in September 1980, Joseph G. Marvin, the chief of the Experimental Fluid Dynamics Branch at Ames, asserted that CFD was an "attractive means of providing that necessary bridge between wind-tunnel simulation and flight." Before that could happen, a careful and critical program of comparison with wind tunnel experiments had to take place. In other words, the wind tunnel was the tool to verify the accuracy of CFD.[611] Dr. Seymour M. Bogdonoff of Princeton University commented in 1988 that "computers can't do anything unless you know what data to put in them." The aerospace community still had to discover and document the key phenomena to realize the "future of flight" in the hypersonic and interplanetary regimes. The next step was inputting the data into the supercomputers.[612]

Researchers Victor L. Peterson and William F. Ballhaus, Jr., who worked in the NAS Facility, recognized the "complementary nature of computation and wind tunnel testing," where the "combined use" of each captured the "strengths of each tool." Wind tunnels and computers brought different strengths to the research. The wind tunnel was best for providing detailed performance data once a final configuration was selected, especially for investigations involving complex aerodynamic phenomena. Computers facilitated arriving at and analyzing that final configuration through several steps. They allowed development of design concepts such as the forward-swept wing or jet flap for lift augmentation and offered a more efficient process of choosing the most promising designs to evaluate in the wind tunnel. Computers also made the instrumentation of test models easier and corrected wind tunnel data for scaling and interference errors.[613]
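The kind of routine correction of tunnel data mentioned above can be sketched with a classical solid-blockage adjustment, in which the measured dynamic pressure is increased to account for the model crowding the test section. The blockage factor K and all numerical values below are illustrative analyst-supplied inputs, not parameters of any particular NASA facility or code.

```python
# Sketch of a routine wind tunnel data correction (illustrative values only).
# Classical solid-blockage estimate: epsilon = K * (model volume) / C**1.5,
# where C is the test-section cross-sectional area and K is an empirical factor.

def solid_blockage(model_volume_ft3: float, test_section_area_ft2: float, K: float) -> float:
    """Blockage velocity increment as a fraction of tunnel speed."""
    return K * model_volume_ft3 / test_section_area_ft2 ** 1.5

def correct_dynamic_pressure(q_measured_psf: float, epsilon: float) -> float:
    """Dynamic pressure scales with velocity squared, so q grows by (1 + epsilon)^2."""
    return q_measured_psf * (1.0 + epsilon) ** 2

def corrected_coefficient(force_lb: float, q_corrected_psf: float, ref_area_ft2: float) -> float:
    """Nondimensionalize a measured force with the corrected dynamic pressure."""
    return force_lb / (q_corrected_psf * ref_area_ft2)

eps = solid_blockage(model_volume_ft3=2.0, test_section_area_ft2=49.0, K=0.9)
q_c = correct_dynamic_pressure(q_measured_psf=60.0, epsilon=eps)
print(f"blockage epsilon = {eps:.4f}, corrected q = {q_c:.1f} psf, "
      f"CL = {corrected_coefficient(force_lb=300.0, q_corrected_psf=q_c, ref_area_ft2=8.0):.3f}")
```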

Enhancing General Aviation Safety

Flying and handling qualities are, per se, an important aspect of operational safety. But many other issues affect safety as well. The GA airplane of the postwar era was very different from its prewar predecessor—gone were the fabric-covered wood or steel-tube structures, small engines, and two-bladed fixed-pitch propellers. Instead, many were sleek all-metal monoplanes with retractable landing gear, near-or-over-200-mph cruising speeds, and, as noted in the previous section, often challenging and demanding flying and handling qualities. In November 1971, NASA sponsored a meeting at the Langley Research Center to discuss technologies that might be applied to future civil aviation in the 1970s and beyond. Among the many papers presented was a survey of GA by Jack Fischel and Marvin Barber of the Flight Research Center.[835] Barber and Fischel offered an incisive survey and synthesis of applicable technologies, including the then-new concept of the supercritical wing, which was of course applicable to propeller design as well. They addressed opportunities to employ new structural design concepts and materials advances (as were then beginning to be explored for military aircraft). Boron and graphite composites, which could be laid up and injection molded, promised to reduce both weight and labor costs, offering higher strength-to-weight ratios than conventional aluminum and steel construction. They noted the potential of increasingly reliable and cheap gas turbine engines (and the then-fashionable rotary combustion engine as well), and improved avionics could provide greater utility and safety for pilots of lower flight experience. Barber and Fischel concluded that,

On the basis of current and projected near-future technology, it is believed that the main technology effort in the next decade will be devoted to improving the economy, performance, utility, and safety of General Aviation aircraft.[836]

Of these, the greatest challenges involved safety. By the early 1970s, the fatality rate for GA was 10 times higher per passenger mile than that of automobiles.[837] Many accidents were caused by pilots exceeding their flying abilities, leading one manufacturing executive to ruefully remark at a NASA conference, "If we don't soon find ways to improve the safety of our airplanes, we are going to be putting placards on the airplanes which say 'Flying airplanes may be hazardous to your health.'"[838] Alarmed, NASA set an aviation safety goal to reduce fatality rates by 80 percent by the mid-1980s.[839] While basic changes in pilot training and practices could accomplish a great deal of good, so, too, could a better understanding of GA safety challenges, applied to creating aircraft that were easier to fly and more tolerant of pilot error, together with subsystems such as advanced avionics and flight controls that could further enhance flight safety. Underpinning all of this was a continuing need for the highest quality information and analysis that NASA research could furnish. The following examples offer an appreciation of some of the contributions NACA-NASA researchers made confronting some of the major challenges to GA safety.

On TARGIT: Civil Aviation Crash Testing in the

On December 1, 1984, a Boeing 720B airliner crashed near the east shore of Rogers Dry Lake. Although none of the 73 passengers walked away from the flaming wreckage, there were no fatalities. The occupants were plastic, anthropomorphic dummies, some of them instrumented to collect research data. There was no flight crew on board; the pilot was seated in a ground-based cockpit 6 miles away at NASA Dryden.

As early as 1980, Federal Aviation Administration (FAA) and NASA officials had been planning a full-scale transport aircraft crash demonstration to study impact dynamics and new safety technologies to improve aircraft crashworthiness. Initially dubbed the Transport Crash Test, the project was later renamed the Transport Aircraft Remotely Piloted Ground Impact Test (TARGIT). In August 1983, planners settled on the name Controlled Impact Demonstration (CID). Some wags immediately twisted the acronym to stand for "Crash in the Desert" or "Cremating Innocent Dummies."[954] In point of fact, no fireball was expected. One of the primary test objectives included demonstration of anti-misting kerosene (AMK) fuel, which was designed to prevent formation of a postimpact fireball. While many airplane crashes are survivable, most victims perish in the postcrash fire resulting from the release of fuel from shattered tanks in the wings and fuselage.

In 1977, FAA officials looked into the possibility of using an additive called Avgard FM-9 to reduce the volatility of kerosene fuel released during catastrophic crash events. Ground-impact studies using surplus Lockheed SP-2H airplanes showed great promise, because the FM-9 prevented the kerosene from forming a highly volatile mist as the airframe broke apart.[955] As a result of these early successes, the FAA planned to implement a requirement that airlines add FM-9 to their fuel. Estimates indicated that adopting AMK would have imposed a one-time cost to airlines of $25,000-$35,000 for retrofitting each high-bypass turbine engine and a 3- to 6-percent increase in fuel costs, which would have driven ticket prices up by $2-$4 each. In order to definitively prove the effectiveness of AMK, officials from the FAA and NASA signed a Memorandum of Agreement in 1980 for a full-scale impact demonstration. The FAA was responsible for program management and providing a test aircraft, while NASA scientists designed the experiments, provided instrumentation, arranged for data retrieval, and integrated systems.[956]

The FAA supplied the Boeing 720B, a typical intermediate-range passenger transport that entered airline service in the mid-1960s. It was selected for the test because its construction and design features were common to most contemporary U.S. and foreign airliners. It was powered by four Pratt & Whitney JT3C-7 turbine engines and carried 12,000 gallons of fuel. With a length of 136 feet, a 130-foot wingspan, and a maximum takeoff weight of 202,000 pounds, it was the world's largest RPRV. FAA Program Manager John Reed headed overall CID project development and coordination with all participating researchers and support organizations.

Researchers at NASA Langley were responsible for characterizing airframe structural loads during impact and developing a data-acquisition system for the entire aircraft. Impact forces during the demonstration were characterized as survivable for planning purposes, with the primary danger expected to be from postimpact fire. Study data to be gathered included measurements of structural, seat, and occupant response to impact loads, to corroborate analytical models developed at Langley, as well as data to be used in developing a crashworthy seat and restraint system. Robert J. Hayduk managed NASA crashworthiness and cabin-instrumentation requirements.[957] Dryden personnel, under the direction of Marvin R. "Russ" Barber, were responsible for overall flight research management, systems integration, and flight operations. These included RPRV control and simulation, aircraft/ground interface, test and systems hardware integration, impact-site preparation, and flight-test operations.

The Boeing 720B was equipped to receive uplinked commands from the ground cockpit. Commands providing direct flight path control were routed through the autopilot, while other functions were fed directly to the appropriate systems. Information on engine performance, navigation, attitude, altitude, and airspeed was downlinked to the ground pilot.[958] Commands from the ground cockpit were conditioned in control-law computers, encoded, and transmitted to the aircraft from either a primary or backup antenna. Two antennas on the top and bottom of the Boeing 720B provided omnidirectional telemetry coverage, each feeding a separate receiver. The output from the two receivers was then combined into a single input to a decoder that processed uplink data and generated commands to the controls. Additionally, the flight engineer could select redundant uplink transmission antennas at the ground station. There were three pulse-code modulation systems for downlink telemetry: two for experimental data and one to provide aircraft control and performance data.
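The way two redundant receivers can be merged into a single stream for the decoder can be sketched as follows. The frame fields, integrity check, and dropout handling are hypothetical illustrations, not the actual CID uplink format or logic.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReceiverFrame:
    """One received uplink frame; field names are hypothetical, for illustration only."""
    frame_id: int
    signal_strength_db: float
    crc_ok: bool
    payload: bytes

def select_frame(primary: Optional[ReceiverFrame],
                 backup: Optional[ReceiverFrame]) -> Optional[ReceiverFrame]:
    """Combine two redundant receiver outputs into one stream for the decoder:
    prefer any frame that passes its integrity check, then the stronger signal."""
    candidates = [f for f in (primary, backup) if f is not None and f.crc_ok]
    if not candidates:
        return None                                   # decoder sees a dropout for this frame
    return max(candidates, key=lambda f: f.signal_strength_db)

def decode(frame: Optional[ReceiverFrame]) -> str:
    if frame is None:
        return "hold last command"                    # simple assumed dropout handling
    return f"apply command frame {frame.frame_id}"

print(decode(select_frame(
    ReceiverFrame(101, -72.0, True, b"\x01\x0f"),
    ReceiverFrame(101, -65.0, False, b"\x01\x0f"))))
```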

The airplane was equipped with two forward-facing television cameras—a primary color system and a black-and-white backup—to give the ground pilot sufficient visibility for situational awareness. Ten high-speed motion picture cameras photographed the interior of the passenger cabin to provide researchers with footage of seat and occupant motion during the impact sequence.[959] Prior to the final CID mission, 14 test flights were made with a safety crew on board. During these flights, 10 remote takeoffs, 13 remote landings (the initial landing was made by the safety pilot), and 69 CID approaches were accomplished. All remote takeoffs were flown from the Edwards Air Force Base main runway. Remote landings took place on the emergency recovery runway (lakebed Runway 25).

Research pilots for the project included Edward T. Schneider, Fitzhugh L. Fulton, Thomas C. McMurtry, and Donald L. Mallick. William R. "Ray" Young, Victor W. Horton, and Dale Dennis served as flight engineers. The first flight, a functional checkout, took place March 7, 1984. Schneider served as ground pilot for the first three flights, while two of the other pilots and one or two engineers acted as safety crew. These missions allowed researchers to test the uplink/downlink systems and autopilot, as well as to conduct airspeed calibration and collect ground-effects data. Fulton took over as ground pilot for the remaining flight tests, practicing the CID flight profile while researchers qualified the AMK system (the fire-retardant AMK had to pass through a degrader to convert it into a form that could be burned by the engines) and tested data-acquisition equipment. The final pre-CID flight was completed November 26. The stage was set for the controlled impact test.[960]

The CID crash scenario called for a symmetric impact prior to encountering obstructions, as if the airliner were involved in a gear-up landing short of the runway or an aborted takeoff. The remote pilot was to slide the airplane through a corridor of heavy steel structures designed to slice open the wings, spilling fuel at a rate of 20 to 100 gallons per second. A specially prepared surface consisting of a rectangular grid of crushed rock peppered with powered electric landing lights provided ignition sources on the ground, while two jet-fueled flame generators in the airplane's tail cone provided onboard ignition sources.

On December 1, 1984, the Boeing 720B was prepared for its final flight. The airplane had a gross takeoff weight of 200,455 pounds, including 76,058 pounds of AMK fuel. Fitz Fulton initiated takeoff from the remote cockpit and guided the Boeing 720B into the sky for the last time.[961] At an altitude of 200 feet, Fulton lined up on final approach to the impact site. He noticed that the airplane had begun to drift to the right of centerline but not enough to warrant a missed approach. At 150 feet, now fully committed to touchdown because of the activation of limited-duration photographic and data-collection systems, he attempted to center the flight path with a left aileron input, which resulted in a lateral oscillation.

The Boeing 720B struck the ground 285 feet short of the planned impact point, with the left outboard engine contacting the ground first. This caused the airplane to yaw during the slide, bringing the right inboard engine into contact with one of the wing openers and releasing large quantities of degraded (i.e., highly flammable) AMK and exposing them to a high-temperature ignition source. Other obstructions sliced into the fuselage, permitting fuel to enter beneath the passenger cabin. The resulting fireball was spectacular.[962]

NASA and the FAA conducted a Controlled Impact Demonstration with a remotely piloted Boeing 720 aircraft. NASA.

To casual observers, this might have made the CID project appear a failure, but such was not the case. The conditions prescribed for the AMK test were very narrow and failed to account for a wide range of variables, some of which were illustrated during the flight test. The results were sufficient to cause FAA officials to abandon the idea of forcing U.S. airlines to use AMK, but the CID provided researchers with a wide range of data for improving transport-aircraft crash survivability.

The experiment also provided significant information for improving RPV technology. The 14 test flights leading up to the final demonstration gave researchers an opportunity to verify analytical models, simulation techniques, RPV control laws, support software, and hardware. The remote pilot assessed the airplane's handling qualities, allowing programmers to update the simulation software and validate the control laws. All onboard systems were thoroughly tested, including AMK degraders, autopilot, brakes, landing gear, nose wheel steering, and instrumentation systems. The CID team also practiced emergency procedures, such as the ability to abort the test and land on a lakebed runway under remote control, and conducted partial testing of an uplinked flight termination system to be used in the event that control of the airplane was lost. Several anomalies—intermittent loss of uplink signal, brief interruption of autopilot command inputs, and failure of the uplink decoder to pass commands—cropped up during these tests. Modifications were implemented, and the anomalies never recurred.[963]

Handling qualities were generally good. The ground pilot found landings to be a special challenge as a result of poor depth perception (because of the low-resolution television monitor) and lack of peripheral vision. Through flight tests, the pilot quickly learned that the CID profile was a high-workload task. Part of this was due to the fact that the tracking radar used in the guidance system lacked sufficient accuracy to meet the impact parameters. To compensate, several attempts were made to improve the ground pilot's performance. These included changing the flight path to give the pilot more time to align his final trajectory, improving ground markings at the impact site, turning on the runway lights on the test surface, and providing a frangible 8-foot-high target as a vertical reference on the centerline. All of these attempts were compromised to some degree by the low-resolution video monitor. After the impact flight, members of the control design team agreed that some form of head-up display (HUD) would have been helpful and that more of the piloting tasks should have been automated to alleviate pilot workload.[964]

In terms of RPRV research, the project was considered highly successful. The remote pilots accumulated 16 hours and 22 minutes of RPV experience in preparation for the impact mission, and the CID showed the value of comparing predicted results with flight-test data. U.S. Representative William Carney, ranking minority member of the House Transportation, Aviation, and Materials Subcommittee, observed the CID test. "To those who were disappointed with the outcome," he later wrote, "I can only say that the results dramatically illustrated why the tests were necessary. I hope we never lose sight of the fact that the first objective of a research program is to learn, and failure to predict the outcome of an experiment should be viewed as an opportunity, not a failure."[965]

Civilian Supersonic Cruise: The National SST Effort

The fascination with higher speeds of the 1950s and the new long-range, comfortable jet airliners combined to create an interest in a supersonic airliner. The dominance of American aircraft manufacturers' designs in the long-range subsonic jet airliner market meant that European manufacturers turned their sights on that goal. As early as 1959, when jet traffic was just commencing, Sir Peter Masefield, an influential aviation figure, said that a supersonic airliner should be a national goal for Britain. Development of such an airplane would contribute to national prestige, enhance the national technology skill level, and contribute to a favorable trade balance through foreign sales. He recognized that the undertaking would be expensive and that the government would have to support the development of the aircraft. The possibility was also suggested of a cooperative design effort with the United States. Meanwhile, the French aviation industry was pursuing a similar course. Eventually, in 1962, Britain and France merged their efforts to produce a joint European aircraft cruising at Mach 2.2.[16]

A Supersonic Transport had also been envisioned in the United States, and low-level studies had been initiated at NACA Langley in 1956, headed by John Stack. But the European initiatives triggered an intensification of American efforts, for essentially the same reasons listed by Masefield. In 1960, Convair proposed a new 52-seat modified-fuselage version of its Mach 2 B-58, preceded by a testbed B-58 with 5 intrepid volunteers in airline seats in the belly pod (windows and a life-support system were to be installed).[1070] The influential magazine Aviation Week reflected the tenor of American feeling by proposing that the United States make the SST a national priority, akin to the response to Sputnik.[1071] Articles appeared outlining how existing technology could support supersonic cruise speeds up to Mach 4. The USAF's Wright Air Development Division convened a conference in late 1960 to discuss the SST for military as well as civilian use.[1072] And in 1961, the newly created Federal Aviation Agency (FAA) began to work with the newly created NASA and the Air Force in Project Horizon to study an American SST program. One of the big questions was whether the design cruise speed should be Mach 2, as the Europeans were striving for, or closer to Mach 3.[1073]


Both Langley and Ames had been engaged in large supersonic aircraft design studies for years and had provided technical support for the Air Force WS-110 program that became the Mach 3 cruise B-70.[1074] Langley had also pioneered work on variable-sweep wings, in part drawing upon variable wing sweep technology as explored by the Bell X-5 in NACA testing, to solve the problem of approach speeds for heavy airplanes that needed highly swept wings for supersonic cruise but were also required to operate from existing jet runways. Langley embarked upon developing baseline configurations for a theoretical Supersonic Commercial Air Transport (SCAT), with Ames also participating. Clinton Brown and F. Edward McLean at Langley developed the so-called arrow wing, with highly swept leading and trailing edges, that promised to produce higher L/D at supersonic cruise speeds. In June 1963, the theoretical research became more developmental, as President John F. Kennedy announced that the United States would build an SST with Government funding of up to $1 billion provided to industry to aid in the development.

In September 1963, NASA Langley hosted a conference for the aircraft industry presenting independent detailed analyses by Boeing and Lockheed of four NASA-developed configurations known as SCAT 4 (arrow wing), 15 (arrow wing with variable sweep), 16 (variable sweep), and 17 (delta with canard). Langley research had produced the first three, and Ames had produced SCAT 17.[1075] Additionally, papers on NASA research on SST technology were presented. The detailed analyses by both contractors of the baselines concluded that a supersonic transport was technologically feasible and that the specified maximum range of 3,200 nautical miles would be possible at Mach 3 but not at Mach 2.2. The economic feasibility of an SST was not evaluated directly, although each contractor commented on operating cost comparisons with the Boeing 707. Although the initial FAA SST specification called for Mach 2.2 cruise, the conference baseline was Mach 3, with one of the configurations also being evaluated at Mach 2.2. The results, and the need to make the American SST more attractive to airlines than the European Concorde, shifted the SST baseline to a Mach 2.7 to Mach 3 cruise speed. This speed was similar to that of the XB-70, so the results of its test program could be directly applicable to the development of an SST. As the 1963 conference report stated, "Significant research will be required in the areas of aerodynamic performance, handling qualities, sonic boom, propulsion, and structural fabrication before the supersonic transport will be a success."[1076]