NASA'S CONTRIBUTIONS TO AERONAUTICS

Human Factors Research: Meshing Pilots with Planes

Steven A. Ruffin

The invention of flight exposed human limitations. Altitude effects endangered early aviators. As the capabilities of aircraft grew, so did the challenges for aeromedical and human factors researchers. Open cockpits gave way to pressurized cabins. Wicker seats perched on the leading edge of frail wood-and-fabric wings were replaced by robust metal seats and eventually sophisticated rocket-boosted ejection seats. Casual cloth work clothes and hats presaged increasingly complex flight suits.

AS MERCURY ASTRONAUT ALAN B. SHEPARD, JR., lay flat on his back, sealed in a metal capsule perched high atop a Redstone rocket on the morning of May 5, 1961, many thoughts probably crossed his mind: the pride he felt at becoming America's first man in space; the possibility that the powerful rocket beneath him would blow him sky high. . . in a bad way; or maybe an even greater fear that he would "screw the pooch" by doing something to embarrass himself—or far worse—jeopardize the U.S. space program.

After lying there nearly 4 hours and suffering through several launch delays, however, Shepard was by his own admission not thinking about any of these things. Rather, he was consumed with an issue much more down to earth: his bladder was full, and he desperately needed to relieve himself. Because exiting the capsule was out of the question at this point, he literally had no place to go. The designers of his modified Goodrich U.S. Navy Mark IV pressure suit had provided for nearly every contingency imaginable, but not this; after all, the flight was only scheduled to last a few minutes.

Finally, Shepard was forced to make his need known to the controllers below. As he candidly described later, "You heard me, I've got to pee. I've been in here forever."[286] Despite the unequivocal reply of "No!" to his request, Shepard's bladder gave him no alternative but to persist. Historic flight or not, he had to go—and now.

Mercury 7 astronaut Alan B. Shepard, Jr., preparing for his historic flight of May 5, 1961. His gleaming silver pressure suit had all the bells and whistles. . . except for one. NASA.

When the powers below finally accepted that they had no choice, they gave the suffering astronaut a reluctant thumbs up: so "pee" he did. . . all over his sensor-laden body and inside his gleaming silver spacesuit. And then, while the world watched—unaware of this behind-the-scenes drama—Shepard rode his spaceship into history. . . drenched in his own urine.

This inauspicious moment should have been something of an epiphany for the human factors scientists who worked for the newly formed National Aeronautics and Space Administration (NASA). It graphically pointed out the obvious: human requirements—even the most basic ones—are not optional; they are real, and accommodations must always be made to meet them. But NASA's piloted space program had advanced so far technologically in such a short time that this was only one of many lessons that the Agency's planners had learned the hard way. There would be many more in the years to come.

As described in the Tom Wolfe book and movie of the same name, The Right Stuff, the first astronauts were considered by many of their contemporary non-astronaut pilots—including the ace who first broke the sound barrier, U.S. Air Force test pilot Chuck Yeager—as little more than "spam in a can."[287] In fact, Yeager's commander in charge of all the test pilots at Edwards Air Force Base had made it known that he didn't particularly want his top pilots volunteering for the astronaut program; he considered it a "waste of talent."[288] After all, these new astronauts—more like lab animals than pilots—had little real function in the early flights, other than to survive, and sealed as they were in their tiny metal capsules with no realistic means of escape, the cynical "spam in a can" metaphor was not entirely inappropriate.

But all pilots appreciated the dangers faced by this new breed of American hero: based on the space program's much-publicized recent history of one spectacular experimental launch failure after another, it seemed like a morbidly fair bet to most observers that the brave astronauts, sitting helplessly astride 30 tons of unstable and highly explosive rocket fuel, had a realistic chance of becoming something akin to America's most famous canned meat dish. It was indeed a dangerous job, even for the 7 overqualified test-pilots-turned-astronauts who had been so carefully chosen from more than 500 actively serving military test pilots.[289] Clearly, piloted space flight had to become considerably more human-friendly if it were to become the way of the future.

NASA had existed less than 3 years before Shepard's flight. On July 29, 1958, President Dwight D. Eisenhower signed into law the National Aeronautics and Space Act of 1958, and chief among its provisions was the establishment of NASA. Expanding on the act's stated purpose of conducting research into the "problems of flight within and outside the earth's atmosphere" was an objective to develop vehicles capable of carrying—among other things—"living organisms" through space.[290]

Because this official directive clearly implied the intention of sending humans into space, NASA was from its inception charged with formulating a piloted space program. Consequently, within 3 years of its creation, the budding space agency successfully launched its first human into space: Alan Shepard, flying Mercury mission MR-3 to become America's first man in space. Encapsulated in his Freedom 7 spacecraft, he lifted off from Cape Canaveral, FL, and flew to an altitude of just over 116 miles before splashing down into the Atlantic Ocean 302 miles downrange.[291] It was only a 15-minute suborbital flight and, as related above, not without problems, but it accomplished its objective: America officially had a piloted space program.

This was no small accomplishment. Numerous major technological barriers had to be surmounted during this short time before even this most basic of piloted space flights was possible. Among these obstacles, none was more challenging than the problems associated with maintaining and supporting human life in the ultrahostile environment of space. Thus, from the beginning of the Nation's space program and continuing to the present, human factors research has been vital to NASA's comprehensive research program.

Traffic Collision Avoidance System

By the 1980s, increasing airspace congestion had made the risk of catastrophic midair collision greater than ever before. Consequently, the 100th Congress passed Public Law 100-223, the Airport and Airway Safety and Capacity Expansion Act of 1987. This required, among other provisions, that passenger-carrying aircraft be equipped with a Traffic Collision Avoidance System (TCAS), independent of air traffic control, that would alert pilots to other aircraft flying in their surrounding airspace.[395]

In response to this mandate, NASA, the FAA, the Air Transport Association, the Air Line Pilots Association, and various aviation technology industries teamed up to develop and evaluate such a system, TCAS I, which later evolved into the current TCAS II. From 1988 to 1992, NASA Ames Research Center played a pivotal role in this major collaborative effort by evaluating the human performance factors that came into play with the use of TCAS. By employing ground-based simulators operated by actual airline flightcrews, NASA showed that this system was practicable, at least from a human factors standpoint, and that crews could use it accurately.[396] This research also led to improved displays and aircrew training procedures, as well as the validation of a set of pilot collision-evading performance parameters.[397] One example of the new technologies developed for incorporation into TCAS is the Advanced Air Traffic Management Display, which provides pilots with a three-dimensional air traffic virtual-visualization display that increases their situational awareness while decreasing their workload.[398] This visualization system has been incorporated into TCAS displays and has become the industry standard for new designs.[399]
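For readers unfamiliar with how such a system decides when to alert, the sketch below shows the classic "tau" (time-to-closest-approach) test at the heart of TCAS-style logic. It is a simplified illustration only: the function names and thresholds are hypothetical, and certified TCAS II logic varies its thresholds with altitude and adds many safeguards not shown here.

```python
# Simplified, illustrative TCAS-style alerting test (not the certified algorithm).
def tau_seconds(range_nmi: float, closure_rate_kt: float) -> float:
    """Time to closest approach: range divided by closure rate, in seconds."""
    if closure_rate_kt <= 0:                     # traffic is diverging; no threat
        return float("inf")
    return range_nmi / closure_rate_kt * 3600.0  # hours converted to seconds

def advisory(range_nmi: float, closure_rate_kt: float) -> str:
    """Map an intruder's range/closure geometry to a hypothetical alert level."""
    tau = tau_seconds(range_nmi, closure_rate_kt)
    if tau < 25.0:    # hypothetical resolution-advisory threshold
        return "RESOLUTION ADVISORY: commanded climb/descent"
    if tau < 45.0:    # hypothetical traffic-advisory threshold
        return "TRAFFIC ADVISORY: traffic, traffic"
    return "clear of conflict"

# An intruder 5 nmi away closing at 480 kt is 37.5 s from closest approach.
print(advisory(range_nmi=5.0, closure_rate_kt=480.0))  # -> traffic advisory
```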

High-Speed Investigations

High-speed studies of dynamic stability flourished at Wallops. The scope and contributions of the Wallops rocket-boosted model research programs for aircraft configurations, missiles, and airframe components covered an astounding number of technical areas, including aerodynamic performance, flutter, stability and control, heat transfer, automatic controls, boundary-layer control, inlet performance, ramjets, and separation behavior of aircraft components and stores. As an example of test productivity, in just 3 years beginning in 1947, over 386 models were launched at Wallops to evaluate a single topic: roll control effectiveness at transonic conditions. These tests included generic configurations and models with wings representative of the historic Douglas D-558-2 Skyrocket, Douglas X-3 Stiletto, and Bell X-2 research aircraft.[471] Fundamental studies of dynamic stability and control were also conducted with generic research models to study basic phenomena such as longitudinal trim changes, dynamic longitudinal stability, control-hinge moments, and aerodynamic damping in roll.[472] Studies with models of the D-558-2 also detected unexpected coupling of longitudinal and lateral oscillations, a problem that would subsequently prove to be common for configurations with long fuselages and relatively small wings.[473] Similar coupled motions caused great concern in the X-3 and F-100 aircraft development programs and spurred numerous studies of the phenomenon known as inertial coupling.

More than 20 specific aircraft configurations were evaluated during the Wallops studies, including early models of such well-known aircraft as the Douglas F4D Skyray, the McDonnell F3H Demon, the Convair B-58 Hustler, the North American F-100 Super Sabre, the Chance Vought F8U Crusader, the Convair F-102 Delta Dagger, the Grumman F11F Tiger, and the McDonnell F-4 Phantom II.


Shadowgraph of an X-15 model in free flight during high-speed tests in the Ames Supersonic Free-Flight Tunnel (SFFT). Shock wave patterns emanating from various airframe components are visible. NASA.

High-speed dynamic stability testing techniques at the Ames SFFT included studies of the static and dynamic stability of blunt-nose reentry shapes, including analyses of boundary-layer separation.[474] This work included studies of the supersonic dynamic stability characteristics of the Mercury capsule. Noting the experimental observation of nonlinear variations of pitching moment with angle of attack typically exhibited by blunt bodies, Ames researchers contributed a mathematical method for including such nonlinearities in theoretical analyses and predictions of capsule dynamic stability at supersonic speeds. During the X-15 program, Ames conducted free-flight testing in the SFFT to define stability, control, and flow-field characteristics of the configuration at high supersonic speeds.[475]
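The source does not reproduce the Ames method itself; as a hedged illustration of the underlying idea, blunt-body analyses commonly replace the single linear pitching-moment slope used for slender vehicles with a polynomial in angle of attack, for example:

\[
C_m(\alpha) \approx C_{m_0} + C_{m_\alpha}\,\alpha + C_{m_{\alpha^3}}\,\alpha^3
\]

Carrying the cubic (and higher) terms into the pitch equation of motion, rather than assuming a constant slope, lets the analysis capture the amplitude-dependent oscillatory behavior that blunt capsules exhibit at supersonic speeds.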

The Pace Quickens

Beginning in the early 1960s, a flurry of new military aircraft development programs resulted in an unprecedented workload for the drop-model personnel. Support was requested by the military services for the General Dynamics F-111, Grumman F-14, McDonnell Douglas F-15, Rockwell B-1A, and McDonnell Douglas F/A-18 development programs. In addition, drop-model tests were conducted in support of the Grumman X-29 and the X-31—sponsored by the Defense Advanced Research Projects Agency (DARPA)—research aircraft programs, which were scheduled for high-angle-of-attack full-scale flight tests at the Dryden flight facility. The specific objectives and test programs conducted with the drop models were considerably different for each configuration. Overviews of the results of the military programs are given in this volume, in another case study by this author.

Matching the Tunnel to the Supercomputer

A model of the X-43A and the Pegasus Launch Vehicle in the Langley 31-Inch Mach 10 Tunnel. NASA.

The use of sophisticated wind tunnels and their accompanying complex mathematical equations led observers early on to call aerodynamics the "science" of flight. There were three major methods of evaluating an aircraft or spacecraft: theoretical analysis, the wind tunnel, and full-flight testing. The specific order of use was ambiguous. Ideally, researchers originated a theoretical goal and began their work in a wind tunnel, with the final confirmation of results occurring during full-flight testing. Researchers at Langley sometimes addressed a challenge first by studying it in flight, then moving to the wind tunnel for more extreme testing, such as dangerous and unpredictable high speeds, and then following up with the creation of a theoretical framework. The lack of knowledge of the effect of Reynolds number was at the root of the inability to trust wind tunnel data. Moreover, tunnel structures such as walls, struts, and supports affected the performance of a model in ways that were hard to quantify.[602]

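The Reynolds number at issue is the standard dimensionless ratio of inertial to viscous forces in a flow:

\[
Re = \frac{\rho V L}{\mu}
\]

where \(\rho\) is air density, \(V\) velocity, \(L\) a characteristic length, and \(\mu\) the dynamic viscosity. Because a subscale model shrinks \(L\), a tunnel cannot match the full-scale Reynolds number without compensating through higher pressure (density), lower temperature, or a different test gas, which is why tunnel-to-flight extrapolation was so distrusted.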
From the early days of the NACA and other aeronautical research facilities, an essential component of the science was the "computer." Human computers, primarily women, worked laboriously to finish the myriad of calculations needed to interpret the data generated in wind tunnel tests. Data acquisition became increasingly sophisticated as the NACA grew in the 1940s. The Langley Unitary Plan Wind Tunnel possessed the capability of remote and automatic collection of pressure, force, and temperature data from 85 locations at 64 measurements a second, which was undoubtedly faster than manual collection. Computers processed the data and delivered it via monitors or automated plotters to researchers during the course of the test. The near-instantaneous availability of test data was a leap from the manual (and visual) inspection of industrial scales during testing.[603]

Computers beginning in the 1970s were capable of mathematically calculating the nature of fluid flows quickly and cheaply, which contributed to the idea of what Baals and Corliss called the "electronic wind tunnel."[604] No longer were computers only a tool to collect and interpret data faster. With the ability to perform billions of calculations in seconds to mathematically simulate conditions, the new supercomputers potentially could perform the job of the wind tunnel. The Royal Aeronautical Society published The Future of Flight in 1970, which included an article on computers in aerodynamic design by Bryan Thwaites, a professor of theoretical aerodynamics at the University of London. His essay would be a clarion call for the rise of computational fluid dynamics (CFD) in the late 20th century.[605] Moreover, improvements in computers and algorithms drove down the operating time and cost of computational experiments. At the same time, the time and cost of operating wind tunnels increased dramatically by 1980. The fundamental limitations of wind tunnels centered on the age-old problems related to model size and Reynolds number, temperature, wall interference, model support ("sting") interference, unrealistic aeroelastic model distortions under load, stream nonuniformity, and unrealistic turbulence levels. Problematic results from the use of test gases were a concern for the design of vehicles for flight in the atmospheres of other planets.[606]


The control panels of the Langley Unitary Wind Tunnel in 1956. NASA.

The work of researchers at NASA Ames influenced Thwaites's assertions about the potential of CFD to benefit aeronautical research. Ames researcher Dean Chapman highlighted the new capabilities of supercomputers in his Dryden Lecture in Research for 1979 at the American Institute of Aeronautics and Astronautics Aerospace Sciences Meeting in New Orleans, LA, in January 1979. To Chapman, innovations in computer speed and memory led to an "extraordinary cost reduction trend in computational aerodynamics," while the cost of wind tunnel experiments had been "increasing with time." He brought to the audience's attention that a meager $1,000 and 30 minutes of computer time allowed the numerical simulation of flow over an airfoil; the same task in 1959 would have cost $10 million and taken 30 years to complete. Chapman made it clear that computers could cure the "many ills of wind-tunnel and turbomachinery experiments" while providing "important new technical capabilities for the aerospace industry."[607]
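Simple arithmetic on the figures Chapman cited shows the scale of his claim:

\[
\frac{\$10{,}000{,}000}{\$1{,}000} = 10^{4}\ \text{(cost ratio)}, \qquad
\frac{30\ \text{years}}{30\ \text{minutes}} \approx 5\times10^{5}\ \text{(time ratio)}
\]

That is, a four-order-of-magnitude reduction in cost and over five orders of magnitude in turnaround time in the span of two decades.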

The crowning achievement of the Ames work was the establishment of the Numerical Aerodynamic Simulation (NAS) Facility, which began operations in 1987. The facility's Cray-2 supercomputer was capable of a sustained 250 million computations a second, with bursts of 1.72 billion per second, and with the possibility of expanding sustained capacity to 1 billion computations per second. That capability reduced the time and cost of developing aircraft designs and enabled engineers to experiment with new designs without resorting to the expense of building a model and testing it in a wind tunnel. Ames researcher Victor L. Peterson said the new facility, and those like it, would allow engineers "to explore more combinations of the design variables than would be practical in the wind tunnel."[608]

The impetus for the NAS program arose from several factors. First, its creation recognized that computational aerodynamics offered new capabilities in aeronautical research and development. Primarily, that meant the use of computers as a complement to wind tunnel testing, which, because of the relative youth of the discipline, also placed heavy demands on those computer systems. The NAS Facility represented the committed role of the Federal Government in the development and use of large-scale scientific computing systems dating back to the use of the ENIAC for hydrogen bomb and ballistic missile calculations in the late 1940s.[609]

It was clear to NASA that supercomputers were part of the Agency's future in the late 1980s. Futuristic projects that involved NASA supercomputers included the National Aero-Space Plane (NASP), which had an anticipated speed of Mach 25; new main engines and a crew escape system for the Space Shuttle; and refined rotors for helicopters. Most importantly from the perspective of supplanting the wind tunnel, a supercomputer generated data and converted them into pictures that captured flow phenomena that had previously been impossible to simulate.[610] In other words, the "mind's eye" of the wind tunnel engineer could be captured on film.

Nevertheless, computer simulations were not to replace the wind tunnel. At a meeting sponsored by the Advisory Group for Aerospace Research & Development (AGARD) on the Integration of Computers and Wind Testing in September 1980, Joseph G. Marvin, the chief of the Experimental Fluid Dynamics Branch at Ames, asserted that CFD was an "attractive means of providing that necessary bridge between wind-tunnel simulation and flight." Before that could happen, a careful and critical program of comparison with wind tunnel experiments had to take place. In other words, the wind tunnel was the tool to verify the accuracy of CFD.[611] Dr. Seymour M. Bogdonoff of Princeton University commented in 1988 that "computers can't do anything unless you know what data to put in them." The aerospace community still had to discover and document the key phenomena to realize the "future of flight" in the hypersonic and interplanetary regimes. The next step was inputting the data into the supercomputers.[612]

Researchers Victor L. Peterson and William F. Ballhaus, Jr., who worked in the NAS Facility, recognized the "complementary nature of computation and wind tunnel testing," where the "combined use" of each captured the "strengths of each tool." Wind tunnels and computers brought different strengths to the research. The wind tunnel was best for providing detailed performance data once a final configuration was selected, especially for investigations involving complex aerodynamic phenomena. Computers facilitated arriving at and analyzing that final configuration through several steps. They allowed development of design concepts such as the forward-swept wing or jet flap for lift augmentation and offered a more efficient process of choosing the most promising designs to evaluate in the wind tunnel. Computers also made the instrumentation of test models easier and corrected wind tunnel data for scaling and interference errors.[613]

Enhancing General Aviation Safety

Flying and handling qualities are, per se, an important aspect of operational safety. But many other issues affect safety as well. The GA airplane of the postwar era was very different from its prewar predecessor—gone were the fabric-covered wood or steel-tube structures, the small engines, and the two-bladed fixed-pitch propellers. Instead, many were sleek all-metal monoplanes with retractable landing gear, near-or-over-200-mph cruising speeds, and, as noted in the previous section, often challenging and demanding flying and handling qualities.

In November 1971, NASA sponsored a meeting at the Langley Research Center to discuss technologies that might be applied to future civil aviation in the 1970s and beyond. Among the many papers presented was a survey of GA by Jack Fischel and Marvin Barber of the Flight Research Center.[835] Barber and Fischel offered an incisive survey and synthesis of applicable technologies, including the then-new concept of the supercritical wing, which was of course applicable to propeller design as well. They addressed opportunities to employ new structural design concepts and materials advances (as were then beginning to be explored for military aircraft). Boron and graphite composites, which could be laid up and injection molded, promised to reduce both weight and labor costs, offering higher strength-to-weight ratios than conventional aluminum and steel construction. They noted the potential of increasingly reliable and cheap gas turbine engines (and the then-fashionable rotary combustion engine as well) and observed that improved avionics could provide greater utility and safety for pilots of lower flight experience. Barber and Fischel concluded that,

On the basis of current and projected near-future technology, it is believed that the main technology effort in the next decade will be devoted to improving the economy, performance, utility, and safety of General Aviation aircraft.[836]

Of these, the greatest challenges involved safety. By the early 1970s, the fatality rate for GA was 10 times higher per passenger-mile than that of automobiles.[837] Many accidents were caused by pilots exceeding their flying abilities, leading one manufacturing executive to ruefully remark at a NASA conference, "If we don't soon find ways to improve the safety of our airplanes, we are going to be putting placards on the airplanes which say 'Flying airplanes may be hazardous to your health.'"[838] Alarmed, NASA set an aviation safety goal to reduce fatality rates by 80 percent by the mid-1980s.[839] While basic changes in pilot training and practices could accomplish a great deal of good, so, too, could better understanding of GA safety challenges, to create aircraft that were easier to fly and more tolerant of pilot error, together with subsystems such as advanced avionics and flight controls that could further enhance flight safety. Underpinning all of this was a continuing need for the highest quality information and analysis that NASA research could furnish. The following examples offer an appreciation of some of the contributions NACA-NASA researchers made confronting some of the major challenges to GA safety.

On TARGIT: Civil Aviation Crash Testing

On December 1, 1984, a Boeing 720B airliner crashed near the east shore of Rogers Dry Lake. Although none of the 73 passengers walked away from the flaming wreckage, there were no fatalities. The occupants were plastic anthropomorphic dummies, some of them instrumented to collect research data. There was no flight crew on board; the pilot was seated in a ground-based cockpit 6 miles away at NASA Dryden.

As early as 1980, Federal Aviation Administration (FAA) and NASA officials had been planning a full-scale transport aircraft crash demonstration to study impact dynamics and new safety technologies to improve aircraft crashworthiness. Initially dubbed the Transport Crash Test, the project was later renamed the Transport Aircraft Remotely Piloted Ground Impact Test (TARGIT). In August 1983, planners settled on the name Controlled Impact Demonstration (CID). Some wags immediately twisted the acronym to stand for "Crash in the Desert" or "Cremating Innocent Dummies."[954] In point of fact, no fireball was expected. One of the primary test objectives included demonstration of anti-misting kerosene (AMK) fuel, which was designed to prevent formation of a postimpact fireball. While many airplane crashes are survivable, most victims perish in the postcrash fire resulting from the release of fuel from shattered tanks in the wings and fuselage. In 1977, FAA officials looked into the possibility of using an additive called Avgard FM-9 to reduce the volatility of kerosene fuel released during catastrophic crash events. Ground-impact studies using surplus Lockheed SP-2H airplanes showed great promise, because the FM-9 prevented the kerosene from forming a highly volatile mist as the airframe broke apart.[955] As a result of these early successes, the FAA planned to require that airlines add FM-9 to their fuel. Estimates calculated that adopting AMK would have imposed a one-time cost to airlines of $25,000-$35,000 for retrofitting each high-bypass turbine engine and a 3- to 6-percent increase in fuel costs, which would drive ticket prices up by $2-$4 each.

In order to definitively prove the effectiveness of AMK, officials from the FAA and NASA signed a Memorandum of Agreement in 1980 for a full-scale impact demonstration. The FAA was responsible for program management and providing a test aircraft, while NASA scientists designed the experiments, provided instrumentation, arranged for data retrieval, and integrated systems.[956] The FAA supplied the Boeing 720B, a typical intermediate-range passenger transport that had entered airline service in the early 1960s. It was selected for the test because its construction and design features were common to most contemporary U.S. and foreign airliners. It was powered by four Pratt & Whitney JT3C-7 turbine engines and carried 12,000 gallons of fuel. With a length of 136 feet, a 130-foot wingspan, and a maximum takeoff weight of 202,000 pounds, it was the world's largest remotely piloted research vehicle (RPRV). FAA Program Manager John Reed headed overall CID project development and coordination with all participating researchers and support organizations.

Researchers at NASA Langley were responsible for characterizing airframe structural loads during impact and developing a data-acquisition system for the entire aircraft. Impact forces during the demonstration were expected to be survivable for planning purposes, with the primary danger to be from postimpact fire. Study data to be gathered included measurements of structural, seat, and occupant response to impact loads, to corroborate analytical models developed at Langley, as well as data to be used in developing a crashworthy seat and restraint system. Robert J. Hayduk managed NASA crashworthiness and cabin-instrumentation requirements.[957] Dryden personnel, under the direction of Marvin R. "Russ" Barber, were responsible for overall flight research management, systems integration, and flight operations. These included RPRV control and simulation, aircraft/ground interface, test and systems hardware integration, impact-site preparation, and flight-test operations.

The Boeing 720B was equipped to receive uplinked commands from the ground cockpit. Commands providing direct flight path control were routed through the autopilot, while other functions were fed directly to the appropriate systems. Information on engine performance, navigation, attitude, altitude, and airspeed was downlinked to the ground pilot.[958] Commands from the ground cockpit were conditioned in control-law computers, encoded, and transmitted to the aircraft from either a primary or backup antenna. Two antennas on the top and bottom of the Boeing 720B provided omnidirectional telemetry coverage, each feeding a separate receiver. The output from the two receivers was then combined into a single input to a decoder that processed uplink data and generated commands to the controls. Additionally, the flight engineer could select redundant uplink transmission antennas at the ground station. There were three pulse-code modulation systems for downlink telemetry: two for experimental data and one to provide aircraft control and performance data.
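The receiver-combining arrangement described above can be pictured with a short schematic sketch. This is illustrative only, not the actual CID avionics: the function names, frame format, and selection rule are hypothetical stand-ins for hardware behavior.

```python
from typing import Optional

def combine_receivers(frame_a: Optional[bytes], frame_b: Optional[bytes]) -> Optional[bytes]:
    """Merge the two receiver outputs into the single stream the decoder sees:
    take whichever receiver produced a valid frame this cycle."""
    return frame_a if frame_a is not None else frame_b

def decode_command(frame: bytes) -> dict:
    """Toy decoder: first byte routes the command, remaining bytes are payload."""
    routes = {0: "autopilot (flight path commands)", 1: "direct system input"}
    return {"route": routes.get(frame[0], "unknown"), "payload": frame[1:]}

# Example cycle: antenna/receiver A faded out, but receiver B still has the
# frame, so the decoder keeps receiving commands without interruption.
frame = combine_receivers(None, bytes([0, 0x2A]))
if frame is not None:
    print(decode_command(frame))
```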

The airplane was equipped with two forward-facing television cameras—a primary color system and a black-and-white backup—to give the ground pilot sufficient visibility for situational awareness. Ten high-speed motion picture cameras photographed the interior of the passenger cabin to provide researchers with footage of seat and occupant motion during the impact sequence.[959] Prior to the final CID mission, 14 test flights were made with a safety crew on board. During these flights, 10 remote takeoffs, 13 remote landings (the initial landing was made by the safety pilot), and 69 CID approaches were accomplished. All remote takeoffs were flown from the Edwards Air Force Base main runway. Remote landings took place on the emergency recovery runway (lakebed Runway 25).

Research pilots for the project included Edward T. Schneider, Fitzhugh L. Fulton, Thomas C. McMurtry, and Donald L. Mallick. William R. "Ray" Young, Victor W. Horton, and Dale Dennis served as flight engineers. The first flight, a functional checkout, took place March 7, 1984. Schneider served as ground pilot for the first three flights, while two of the other pilots and one or two engineers acted as safety crew. These missions allowed researchers to test the uplink/downlink systems and autopilot, as well as to conduct airspeed calibration and collect ground-effects data. Fulton took over as ground pilot for the remaining flight tests, practicing the CID flight profile while researchers qualified the AMK system (the fire-retardant AMK had to pass through a degrader to convert it into a form that could be burned by the engines) and tested data-acquisition equipment. The final pre-CID flight was completed on November 26. The stage was set for the controlled impact test.[960]

The CID crash scenario called for a symmetric impact prior to encountering obstructions, as if the airliner were involved in a gear-up landing short of the runway or an aborted takeoff. The remote pilot was to slide the airplane through a corridor of heavy steel structures designed to slice open the wings, spilling fuel at a rate of 20 to 100 gallons per second. A specially prepared surface consisting of a rectangular grid of crushed rock peppered with powered electric landing lights provided ignition sources on the ground, while two jet-fueled flame generators in the airplane's tail cone provided onboard ignition sources.

On December 1, 1984, the Boeing 720B was prepared for its final flight. The airplane had a gross takeoff weight of 200,455 pounds, including 76,058 pounds of AMK fuel. Fitz Fulton initiated takeoff from the remote cockpit and guided the Boeing 720B into the sky for the last time.[961] At an altitude of 200 feet, Fulton lined up on final approach to the impact site. He noticed that the airplane had begun to drift to the right of centerline, but not enough to warrant a missed approach. At 150 feet, now fully committed to touchdown because of activation of limited-duration photographic and data-collection systems, he attempted to center the flight path with a left aileron input, which resulted in a lateral oscillation.

The Boeing 720B struck the ground 285 feet short of the planned impact point, with the left outboard engine contacting the ground first.

NASA and the FAA conducted a Controlled Impact Demonstration with a remotely piloted Boeing 720 aircraft. NASA.

This caused the airplane to yaw during the slide, bringing the right inboard engine into contact with one of the wing openers, releasing large quantities of degraded (i.e., highly flammable) AMK and exposing it to a high-temperature ignition source. Other obstructions sliced into the fuselage, permitting fuel to enter beneath the passenger cabin. The resulting fireball was spectacular.[962]

To casual observers, this might have made the CID project appear a failure, but such was not the case. The conditions prescribed for the AMK test were very narrow and failed to account for a wide range of variables, some of which were illustrated during the flight test. The results were sufficient to cause FAA officials to abandon the idea of forcing U.S. airlines to use AMK, but the CID provided researchers with a wide range of data for improving transport-aircraft crash survivability.

The experiment also provided significant information for improving RPV technology. The 14 test flights leading up to the final demonstration gave researchers an opportunity to verify analytical models, simulation techniques, RPV control laws, support software, and hardware. The remote pilot assessed the airplane's handling qualities, allowing programmers to update the simulation software and validate the control laws. All onboard systems were thoroughly tested, including AMK degraders, autopilot, brakes, landing gear, nose wheel steering, and instrumentation systems. The CID team also practiced emergency procedures, such as the ability to abort the test and land on a lakebed runway under remote control, and conducted partial testing of an uplinked flight termination system to be used in the event that control of the airplane was lost. Several anomalies—intermittent loss of uplink signal, brief interruption of autopilot command inputs, and failure of the uplink decoder to pass commands—cropped up during these tests. Modifications were implemented, and the anomalies never recurred.[963]

Handling qualities were generally good. The ground pilot found landings to be a special challenge as a result of poor depth perception (because of the low-resolution television monitor) and lack of peripheral vision. Through flight tests, the pilot quickly learned that the CID profile was a high-workload task, partly because the tracking radar used in the guidance system lacked sufficient accuracy to meet the impact parameters. To compensate, several attempts were made to improve the ground pilot's performance. These included changing the flight path to give the pilot more time to align his final trajectory, improving ground markings at the impact site, turning on the runway lights on the test surface, and providing a frangible 8-foot-high target as a vertical reference on the centerline. All of these attempts were compromised to some degree by the low-resolution video monitor. After the impact flight, members of the control design team agreed that some form of head-up display (HUD) would have been helpful and that more of the piloting tasks should have been automated to alleviate pilot workload.[964]

In terms of RPRV research, the project was considered highly successful. The remote pilots accumulated 16 hours and 22 minutes of RPV experience in preparation for the impact mission, and the CID showed the value of comparing predicted results with flight-test data. U.S. Representative William Carney, ranking minority member of the House Transportation, Aviation, and Materials Subcommittee, observed the CID test. "To those who were disappointed with the outcome," he later wrote, "I can only say that the results dramatically illustrated why the tests were necessary. I hope we never lose sight of the fact that the first objective of a research program is to learn, and failure to predict the outcome of an experiment should be viewed as an opportunity, not a failure."[965]

Civilian Supersonic Cruise: The National SST Effort

The 1950s fascination with ever-higher speeds and the new long-range, comfortable jet airliners combined to create interest in a supersonic airliner. The dominance of American manufacturers' designs in the long-range subsonic jet airliner market meant that European manufacturers turned their sights on that goal. As early as 1959, when jet traffic was just commencing, Sir Peter Masefield, an influential aviation figure, said that a supersonic airliner should be a national goal for Britain. Development of such an airplane would contribute to national prestige, enhance the national technology skill level, and contribute to a favorable trade balance through foreign sales. He recognized that the undertaking would be expensive and that the government would have to support the development of the aircraft. The possibility was also suggested of a cooperative design effort with the United States. Meanwhile, the French aviation industry was pursuing a similar course. Eventually, in 1962, Britain and France merged their efforts to produce a joint European aircraft cruising at Mach 2.2.[16]

A Supersonic Transport had also been envisioned in the United States, and low-level studies had been initiated at NACA Langley in 1956, headed by John Stack. But the European initiatives triggered an intensification of American efforts, for essentially the same reasons listed by Masefield. In 1960, Convair proposed a new 52-seat modified-fuselage version of its Mach 2 B-58, preceded by a testbed B-58 with 5 intrepid volunteers in airline seats in the belly pod (windows and a life-support system were to be installed).[1070] The influential magazine Aviation Week reflected the tenor of American feeling by proposing that the United States make the SST a national priority, akin to the response to Sputnik.[1071] Articles appeared outlining the technology for supersonic cruise speeds up to Mach 4 with existing technology. The USAF's Wright Air Development Division convened a conference in late 1960 to discuss the SST for military as well as civilian use.[1072] And in 1961, the newly created Federal Aviation Agency (FAA) began to work with NASA and the Air Force in Project Horizon to study an American SST program. One of the big questions was whether the design cruise speed should be Mach 2, as the Europeans were striving for, or closer to Mach 3.[1073]


Both Langley and Ames had been engaged in large supersonic aircraft design studies for years and had provided technical support for the Air Force WS-110 program that became the Mach 3 cruise B-70.[1074] Langley had also pioneered work on variable-sweep wings, in part drawing upon variable wing sweep technology as explored by the Bell X-5 in NACA testing, to solve the problem of approach speeds for heavy airplanes that had highly swept wings for supersonic cruise but were also required to operate from existing jet runways. Langley embarked upon developing baseline configurations for a theoretical Supersonic Commercial Air Transport (SCAT), with Ames also participating. Clinton Brown and F. Edward McLean at Langley developed the so-called arrow wing, with highly swept leading and trailing edges, that promised to produce higher L/D at supersonic cruise speeds. In June 1963, the theoretical research became more developmental, as President John F. Kennedy announced that the United States would build an SST, with Government funding of up to $1 billion provided to industry to aid in the development.

In September 1963, NASA Langley hosted a conference for the aircraft industry presenting independent detailed analyses by Boeing and Lockheed of four NASA-developed configurations known as SCAT 4 (arrow wing), 15 (arrow wing with variable sweep), 16 (variable sweep), and 17 (delta with canard). Langley research had produced the first three, and Ames had produced SCAT 17.[1075] Additionally, papers on NASA research on SST technology were presented. The detailed analyses by both contractors concluded that a supersonic transport was technologically feasible and that the specified maximum range of 3,200 nautical miles would be possible at Mach 3 but not at Mach 2.2. The economic feasibility of an SST was not evaluated directly, although each contractor commented on operating cost comparisons with the Boeing 707. Although the initial FAA SST specification called for Mach 2.2 cruise, the conference baseline was Mach 3, with one of the configurations also being evaluated at Mach 2.2. These results, and the need to make the American SST more attractive to airlines than the European Concorde, shifted the SST baseline to a Mach 2.7 to Mach 3 cruise speed. This speed was similar to that of the XB-70, so the results of its test program could be directly applicable to development of an SST. As the 1963 conference report stated, "Significant research will be required in the areas of aerodynamic performance, handling qualities, sonic boom, propulsion, and structural fabrication before the supersonic transport will be a success."[1076]

JSC, the FAA, and Targets of Opportunity

As 2005 drew to a close, Michael Coffman at FAA Oklahoma City had convinced his line management that a flight demonstration of the sensor fusion technology would be a fine precursor to further FAA interest. FAA Oklahoma City had a problem: how best to protect the approaches of its flight-check aircraft certifying instrument procedures for the Department of Defense in combat zones. Coffman and Fox had suggested sensor fusion. If onboard video sensors in a flight-check aircraft could image a terminal approach corridor with a partially blended synthetic approach corridor, any obstacle penetrating the synthetic corridor could be quickly identified. Coffman, using the MOU with NASA JSC signed just that July, suggested that an FAA Challenger 604 flight-check aircraft based at FAA Oklahoma City could be configured with SVS equipment to demonstrate the technology to NASA and FAA managers. Immediately, Fox, Coffman, Mike Abernathy of RIS, Patrick Laport and Tim Verborgh of AANA, and JSC electronics technician James Secor began discussing how to configure the Challenger 604. Fox tested his ability to scrounge excess material from JSC by acquiring an additional obsolete but serviceable Embedded GPS Inertial Navigation System (EGI) navigation processor (identical to the one used in the ACES van) and several processors to drive three video displays. Coffman found some FAA funds to buy three monitors, and Abernathy and RIS wrote the software necessary to drive three monitors with SVS displays with full sensor fusion capability, while Laport and Verborgh developed the symbology set for the displays. The FAA bought three lipstick cameras, JSC's Jay Estes designed a pallet to contain the EGI and processors, and a rudimentary portable system began to take shape.[1192]

The author, now a research pilot at the Johnson Space Center, became involved, assisting AANA with the design of a notional instrument procedure corridor at JSC's Ellington Field flight operations base. He also obtained permission from his management chain to use JSC's Aircraft Operations Division to host the FAA's Challenger and provide the jet fuel it required. Verborgh, meanwhile, surveyed a number of locations on Ellington Field with the author's help, using a borrowed portable DGPS system to create by hand a synthetic database of Ellington Field, the group not having access to expensive commercial databases. The author and JSC's Donald Reed coordinated the flight operations and air traffic control approvals, Fox and Coffman handled the interagency approvals, and by March 2006, the FAA Challenger 604 was at Ellington Field with the required instrumentation installed and ready for the first sensor fusion-guided instrument approach demonstration. Fox had borrowed helmet-mounted display hardware and a kneeboard computer to display selected sensor fusion scenes in the cabin, and five demonstration flights were flown for over a dozen JSC Shuttle, Flight Crew Operations, and Constellation managers. In May, the flights were completed to the FAA's satisfaction. The sensor fusion software and hardware performed flawlessly, and both JSC and FAA Oklahoma City management gained confidence in the team's capabilities, a confidence that would continue to pay dividends. For its part, JSC could not afford a more extensive, focused program, nor were Center managers uniformly convinced of the applicability of this technology to their missions. The team, however, had greatly bolstered confidence in its ability to accomplish critically significant flight tests, demonstrating that it could do so with "shoestring" resources and support. It did so by using a small-team approach, building strong interagency partnerships, creating relationships with other research organizations and small businesses, relying on trust in one another's professional abilities, and adhering rigorously to appropriate multiagency safety reviews.

The success of the approach demonstrations allowed the team members to continue with the SVS work on a not-to-interfere basis with their regularly assigned duties. Fox persuaded his management to allow the ACES van to remotely control the JSC Scout simulated lunar rover on three trips to Meteor Crater, AZ, in 2005-2006, using the same sensor fusion software implementation as that on the Challenger flight test. Throughout the remainder of 2006, the team discussed other possibilities to demonstrate its system. Abernathy provided the author with a kneeboard computer, a GPS receiver, and RIS's LandForm software (for which JSC had rights) with a compressed, high-resolution database of the Houston area. On NASA T-38 training flights, the author evaluated the performance of the all-aspect software in anticipation of an official evaluation as part of a potential T-38 fleet upgrade. The author had conversations with Coffman, Fox, and Abernathy regarding the FAA's idea of using a turret on flight-check aircraft to measure in real time the height of approach corridor obstacles. The conversations and the portability of the software and hardware inspired the author to suggest a flight test using one of JSC's WB-57F High-Altitude Research Airplanes with the WB-57F Acquisition Validation Experiment (WAVE) sensor as a proof of concept. The WB-57F was a JSC high-altitude research airplane capable of extended flight above 60,000 feet with sensor payloads of thousands of pounds and dozens of simultaneous experiments. The WAVE was a sophisticated, 360-degree slewable camera tracking system developed after the Columbia accident to track Space Shuttle launches and reentries.[1193]

The author flew the WB-57F at JSC, including WAVE Shuttle tracking missions. Though hardly the optimal airframe (the sensor fusion proof of concept would be flown at only 2,000 feet altitude), the combination of a JSC airplane with a slewable, INS/GPS-supported camera system was hard to beat. The challenges were many: the two WB-57F airframes at JSC were scheduled years in advance; they were expensive for a single experiment, being designed to share costs among up to 40 simultaneous experiments; and the WAVE sensor was maintained by Southern Research Institute (SRI) in Birmingham, AL. Fortunately, Mike Abernathy of RIS spoke directly to John Wiseman of SRI, and an agreement was reached in which SRI would integrate RIS's LandForm software into WAVE at no cost if it were allowed to use it for other potential WAVE projects.

The team sought FAA funding on the order of $30,000-$40,000 to pay for the WB-57F operation and integration costs and transport the WAVE sensor from Birmingham to Houston. In January 2007, the team invited Frederic Anderson—Manager of Aero-Nav Services at FAA Oklahoma City—to visit JSC to examine the ACES van, meet with the WB-57F Program Office and NASA Exploration Program officials, and receive a demonstration of the sensor fusion capabilities. Anderson was convinced of the potential of using the WB-57/WAVE to prove that an object on the ground could be passively, remotely measured in real time to high accuracy. He was willing to commit $40,000 of FAA money to this idea. With one challenge met, the next challenge was to find a hole in the WB-57F's schedule.

In mid-March 2007, the author was notified that a WB-57F would be available the first week in April. In 3 weeks, Fox pushed through a Space Act Agreement to get FAA Oklahoma City funds transferred to JSC, with pivotal help from the JSC Legal Office. RIS and AANA, working nonstop with SRI, integrated the sensor fusion software into the WAVE computers. Due to a schedule slip with the WB-57, the team had only a day and a half to integrate the RIS hardware into the WB-57, with the invaluable help of WB-57 engineers. Finally, on April 6, on a 45-minute flight from Ellington Field, the author and WAVE operator Dominic Del Rosso of JSC for the first time measured an object on the ground (the JSC water tower) in flight, in real time, using SVS technology. The video signal from the WAVE acquisition camera was blended with synthetic imagery to provide precise scaling.
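The text does not describe RIS's blending implementation, but the basic operation of partially blending live video with synthetic imagery can be sketched as a per-pixel weighted sum; the weighting value below is an arbitrary assumption.

```python
import numpy as np

def blend(video: np.ndarray, synthetic: np.ndarray, camera_weight: float = 0.6) -> np.ndarray:
    """Weighted per-pixel blend of live camera video over synthetic terrain imagery."""
    mixed = (camera_weight * video.astype(np.float32)
             + (1.0 - camera_weight) * synthetic.astype(np.float32))
    return np.clip(mixed, 0, 255).astype(np.uint8)

# Example with two 480x640 RGB frames: a dark camera frame blended over bright
# synthetic terrain yields a frame in which the synthetic corridor stays visible.
video = np.zeros((480, 640, 3), dtype=np.uint8)
synthetic = np.full((480, 640, 3), 255, dtype=np.uint8)
merged = blend(video, synthetic)   # each pixel ~102: 40 percent synthetic shows through
```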

The in-flight measurement was within 0.5 percent of the surveyed data. The ramifications of this accomplishment were immediate and profound: the FAA was convinced of the power of the SVS sensor fusion technology and began incorporating the capability into its planned flight-check fleet upgrade.[1194]


Building on this success, Fox, Coffman, Abernathy, and the author looked at new ways to showcase sensor fusion. In the back of their minds had been the concept of simulating a lunar approach into a virtual lunar base anchored over Ellington Field. The thought was to use the FAA Challenger 604 with the SVS portable pallet installed as before. The problem was money. A solution came from a collaboration between the author and his partner in a NASA JSC aircraft fleet upgrade study, astronaut Joseph Tanner. They had extra money from their fleet study budget, and Tanner was intrigued by the proposed lunar approach simulation because it related to a possible future lunar approach training aircraft. The two approached Brent Jett, who was Director of Flight Crew Operations and the sponsor of their study, in addition to overseeing the astronauts and flight operations at JSC. Jett was impressed with the idea and approved the necessary funds to pay the operational cost of the Challenger 604. FAA Oklahoma City would provide the airplane and crew at its own expense.

Once again, RIS and AANA on their own modified the software to simulate a notional lunar approach designed by the author and Fox, derived from the performance of the Challenger 604 aircraft. Coffman was able to retrieve the monitors and cameras from the approach flight tests of 2006. Jim Secor spent a day at FAA Oklahoma City reinstalling the SVS pallet and performing the necessary integration with Michael Coffman. The author worked with Houston Approach Control to gain approval for this simulated lunar approach, with a relatively steep flightpath into Ellington Field within the Houston Class B (Terminal Control Area) airspace. The trajectory commenced at 20,000 feet with a steep power-off dive to 10,000 feet, at which point a 45-degree course correction maneuver was executed. The approach terminated at 2,500 feet, at a simulated 150-foot altitude over a virtual lunar base anchored overhead Ellington Field. Because there was no digital database available for any of the actual proposed lunar landing sites, the team used a modified database for Meteor Crater as a simulated lunar site. The team switched the coordinates to Ellington Field so that the EGI could still provide precise GPS navigation to the virtual landing site anchored overhead the airport.
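The coordinate switch the team used can be illustrated with a minimal sketch: every vertex of the Meteor Crater terrain database is translated by the offset between the crater's datum and a datum over Ellington Field, so the EGI's GPS solution lines up with the virtual site. The datum values below are approximate placeholders, not the team's surveyed values, and the simple degree-for-degree shift ignores the change in longitude scale between the two latitudes.

```python
# Approximate reference points: (latitude deg, longitude deg, elevation m).
CRATER_DATUM = (35.0278, -111.0224, 1740.0)   # Meteor Crater, AZ (approximate)
ELLINGTON_DATUM = (29.6073, -95.1588, 10.0)   # Ellington Field, TX (approximate)

def reanchor(vertex: tuple) -> tuple:
    """Translate one terrain-database vertex from the crater frame to Ellington."""
    return tuple(v - c + e for v, c, e in zip(vertex, CRATER_DATUM, ELLINGTON_DATUM))

# A point 0.01 deg north of and 100 m above the crater datum maps to a point
# 0.01 deg north of and 100 m above the Ellington datum.
print(reanchor((35.0378, -111.0224, 1840.0)))  # -> (29.6173, -95.1588, 110.0)
```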

Screen shot of the SVS simulated lunar approach PFD. NASA.

In early February 2008, all was ready. The ACES van was used to validate the model, as there was no time (or money) to do it on the airplane. One instrumentation checkout flight was flown, and several anomalies were corrected. That afternoon, for the first time, an aircraft was used to simulate a lunar approach to a notional lunar base. Sensor fusion was demonstrated on one of the monitors, using the actual ambient conditions to provide Sun glare and haze challenges. These were not representative of actual lunar issues but were indicative of the benefit of sensor fusion in mitigating the postulated 1-degree Sun angles of the south lunar pole. The second monitor showed the SVS Meteor Crater digital terrain database simulating the lunar surface, perfectly matched to the Houston landscape over which the Challenger 604 was flying.[1195] This and two more flights demonstrated the technology to a dozen astronauts and to Constellation and Orion program managers.

Four flight test experiments and three trips to Meteor Crater were completed in a 3-year period to demonstrate the SVS sensor fusion technology. The United States military is using evolved versions of the original X-38 SVS and follow-on sensor fusion software with surveillance sensors on various platforms, and the FAA has contracted with RIS to develop an SVS for its flight-check fleet. Constellation managers have shown much interest in the technology, but as of 2009, no decision had been reached regarding its incorporation into NASA's space exploration plans.[1196]

Partners on Ice

As it is with other areas involving aviation, NASA's role in aircraft icing is as a leader in research and technology, leaving matters of regulation and certification to the FAA. Often the FAA comes to NASA with an idea or a need, and the Agency then takes hold of it to make it happen. Both the National Center for Atmospheric Research and NOAA have actively partnered with NASA on icing-related projects. NASA also is a major player in the Aircraft Icing Research Alliance (AIRA), an international partnership that includes NASA, Environment Canada, Transport Canada, the National Research Council of Canada, the FAA, NOAA, National Defence of Canada, and the Defence Science and Technology Laboratory (DSTL)-United Kingdom. AIRA's primary research goals complement NASA's, and they are to

• Develop and maintain an integrated aircraft icing research strategic plan that balances short-term and long-term research needs,

• Implement an integrated aircraft icing research strate­gic plan through research collaboration among the AIRA members,

• Strengthen and foster long-term aircraft icing research expertise,

• Exchange appropriate technical and scientific information,

• Encourage the development of critical aircraft icing tech­nologies, and

• Provide a framework for collaboration between AIRA members.

Finally, the projects NASA is working on with AIRA members include ground icing, icing for rotorcraft, characterization of the atmospheric icing environment, high ice water content, icing cloud instrumentation, icing environment remote sensing, propulsion system icing, and ice adhesion/shedding from rotating surfaces—the last two a reference to the internal engine icing problem that is likely to make icing headlines during the next few years.

The NACA-NASA role in the history of icing research, and in searching for means to frustrate this insidious threat to aviation safety, has been one of constant endeavor, constantly matching the growth of scientific understanding and technical capabilities to the threat as it has evolved over time. From crude attempts to apply mechanical fixes, fluids, and heating, NACA and NASA researchers have advanced to sophisticated modeling and techniques matching the advances of aerospace science in the fields of fluid mechanics, atmospheric physics, and computer analysis and simulation. Through all of that, they have demonstrated another constant as well: a persistent dedication to fulfill a mandate of Federal aeronautical research dating to the founding of the NACA itself and well encapsulated in its founding purpose: "to supervise and direct the scientific study of the problems of flight, with a view to their practical solution."

A drop model of the F/A-18E is released for a poststall study high above the NASA Wallops Flight Center. NASA.
