
Sensor Fusion Arrives

Integrating an External Vision System was an overarching goal of the HSR program. The XVS would include advanced television and infrared cameras, passive millimeter-wave radar, and other cutting-edge sensors, fused with an onboard database of navigation information, obstacles, and topography. It would thus furnish a complete, synthetically derived view for the aircrew, with associated display symbologies, in real time. The pilot would be presented with a visual-meteorological-conditions view of the world on a large display screen in the flight deck, simulating a front window. Regardless of actual ambient meteorological conditions, the pilot would thus "see" a clear daylight scene, made possible by combining appropriate sensor signals; synthetic scenes derived from the high-resolution terrain, navigation, and obstacle databases; and head-up symbology (airspeed, altitude, velocity vector, etc.) provided by symbol generators. Precise GPS navigation input would complete the system. All of these inputs would be processed and displayed in real time (on the order of 20-30 milliseconds) on the large "virtual window" displays. Langley had not developed the sensor fusion technology before the HSR program's termination and, as a result, moved in the direction of integrating the synthetic database-derived view with sophisticated display symbologies, redefining the implementation of the primary flight display and navigation display. Part of the problem with developing the sensor fusion algorithms was the perceived need for large, expensive computers. Langley continued on this path when the Synthetic Vision Systems project was initiated under NASA's Aviation Safety Program in 1999 and achieved remarkable results in SVS architecture, display development, human factors engineering, and flight deck integration in both GA and CBA domains.[1164]

Simultaneously with these efforts, JSC was developing the X-38 unpiloted lifting body/parafoil recovery reentry vehicle. The X-38 was a technology demonstrator for a proposed orbital crew rescue vehicle
that could, in an emergency, return up to seven astronauts to Earth, a veritable space-based lifeboat. NASA planners had forecast a need for such a rescue craft in the early days of planning for Space Station Freedom (subsequently the International Space Station). Under a Langley study program for the Space Station Freedom Crew Emergency Rescue Vehicle (CERV, later shortened to CRV), Agency engineers and research pilots had undertaken extensive simulation studies of one candidate shape, the HL-20 lifting body, whose design was based on the general aerodynamic shape of the Soviet Union's BOR-4 subscale spaceplane.[1165] The HL-20 did not proceed beyond these tests and a full-scale mockup. Instead, Agency attention turned to another escape vehicle concept, one essentially identical in shape to the nearly four-decade-old body shape of the Martin SV-5D hypersonic lifting reentry test vehicle, sponsored by NASA's Johnson Space Center. The Johnson configuration spawned a two-phase demonstrator research effort, the X-38 program: the first phase for a series of subsonic drop-shapes air-launched from NASA's NB-52B Stratofortress, and the second for an orbital reentry shape to be test-launched from the Space Shuttle in a high-inclination orbit. But while tests of the former did occur at the NASA Dryden Flight Research Center (DFRC) in the late 1990s, the fully developed orbital craft did not proceed to development and orbital test.[1166]

To remotely pilot this vehicle during its flight-testing at Dryden, project engineers were developing a system displaying the required navigation and control data. Television cameras in the nose of the X-38 provided a data link video signal to a control flight deck on the ground. Video signals alone, however, were insufficient for the remote pilot to perform all the test and control maneuvers, including "flap turns" and "heading hold" commands during the parafoil phase of flight. More information on the display monitor would be needed. Further complications arose because of the design of the X-38: the crewmembers would be lying on their backs, looking at displays on the "ceiling" of the vehicle. Accordingly, a team led by JSC X-38 Deputy Avionics Lead Frank J. Delgado was tasked with developing a display system allowing the pilot to control the X-38 from a perspective 90 degrees to the vehicle's direction of travel. On the cockpit design team were NASA astronauts Rick Husband (subsequently lost in the Columbia reentry disaster), Scott Altman, and Ken Ham, and JSC engineer Jeffrey Fox.

Delgado solicited industry assistance with the project. Rapid Imaging Software, Inc. (RIS), a firm already working with imaginative synthetic vision concepts, received a Phase II Small Business Innovation Research (SBIR) contract to develop the display architecture. RIS subsequently developed LandForm VisualFlight, which blended "the power of a geographic information system with the speed of a flight simulator to transform a user's desktop computer into a 'virtual cockpit.'"[1167] It consisted of "symbology fusion" software and 3-D "out-the-window" and NAV display presentations running on a standard Microsoft Windows-based central processing unit (CPU). JSC and RIS were on the path to developing true sensor fusion in the near future, blending a full SVS database with live video signals. The system required a remote, ground-based control cockpit, so Jeff Fox procured an extended van from the JSC motor pool. This vehicle, officially known as the X-38 Remote Cockpit Van, was nicknamed the "Vomit Van" by those poor souls driving around lying on their backs practicing flying a simulated X-38. By spring 2002, JSC was flying the X-38 from the Remote Cockpit Van using an SVS NAV display, an SVS out-the-window display, and a video display developed by RIS. NASA astronaut Ken Ham judged it as furnishing the "best seat in the house" during X-38 glide flights.[1168]

Indeed, during the X-38 testing, a serendipitous event demonstrated the value of sensor fusion. After release from the NASA NB-52B Stratofortress, the lens of the onboard X-38 television camera became partially covered in frost, occluding over 50 percent of the field of view (FOV). This would have proved problematic for the pilot had orienting symbology not been available in the displays. Synthetic symbology, including spatial entities identifying keep-out zones and runway outlines, provided the pilot with a synthetic scene replacing the occluded camera image. This foreshadowed the concept of sensor fusion, in which, for example, blooming as the camera traversed the Sun could be "blended" out, and haze obscuration could be minimized by adjusting the degree of synthetic blend from 0 to 100 percent.[1169]
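The blending operation at the heart of this concept is straightforward to illustrate. The following Python sketch is a minimal, hypothetical example, not the RIS or NASA flight software, showing a per-pixel weighted blend between a live camera frame and a rendered synthetic scene, with the blend factor adjustable from 0 (all camera) to 100 percent (all synthetic) and symbology overlaid last. All array shapes and the symbology mask are illustrative assumptions.

```python
import numpy as np

def blend_views(camera_frame: np.ndarray,
                synthetic_frame: np.ndarray,
                blend_pct: float) -> np.ndarray:
    """Weighted blend of a live camera image and a synthetic scene.

    blend_pct = 0   -> pure camera video
    blend_pct = 100 -> pure synthetic (database-derived) scene
    Both frames are H x W x 3 arrays of 8-bit RGB pixels.
    """
    alpha = np.clip(blend_pct / 100.0, 0.0, 1.0)
    mixed = (1.0 - alpha) * camera_frame.astype(np.float32) \
            + alpha * synthetic_frame.astype(np.float32)
    return mixed.astype(np.uint8)

def overlay_symbology(frame: np.ndarray, symbology: np.ndarray,
                      mask: np.ndarray) -> np.ndarray:
    """Draw HUD-style symbology (airspeed, altitude, velocity vector)
    on top of the blended scene wherever the boolean mask is set."""
    out = frame.copy()
    out[mask] = symbology[mask]
    return out

if __name__ == "__main__":
    # Placeholder frames; a frosted lens could be compensated by raising
    # the blend toward the synthetic scene (here 70 percent synthetic).
    cam = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    syn = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    hud = np.zeros_like(cam)
    hud_mask = np.zeros((480, 640), dtype=bool)
    hud_mask[200:280, 300:340] = True          # hypothetical symbology region
    scene = overlay_symbology(blend_views(cam, syn, 70.0), hud, hud_mask)
    print(scene.shape, scene.dtype)
```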

But then, on April 29, 2002, faced with rising costs for the International Space Station, NASA canceled the X-38 program.[1170] Surprisingly, the cancellation did not have the deleterious impact upon sensor fusion development that might have been anticipated. Instead, program members Jeff Fox and Eric Boe secured temporary support via the Johnson Center Director's discretionary fund to keep the X-38 Remote Cockpit Van operating. Mike Abernathy, president of RIS, was eager to continue his company's sensor fusion work. He supported their efforts, as did Patrick Laport of Aerospace Applications North America (AANA). For the next 2 years, Fox and electronics technician James B. Secor continued to improve the van, working on a not-to-interfere basis with their other duties. In July 2004, Fox secured further Agency funding to convert the remote cockpit, now renamed, at Boe's suggestion, the Advanced Cockpit Evaluation System (ACES). It was rebuilt with a single, upright seat affording a 180-degree FOV visual system with five large surplus monitors. An array of five cameras was mounted on the roof of the van, and its input could be blended in real time with new RIS software to form a complete sensor fusion package for the wraparound monitors or a helmet-mounted display.[1171] Subsequently, tests with this van demonstrated true sensor fusion. Now, the team looked for another flight project it could use to demonstrate the value of SVS.

Its first opportunity came in November 2004, at Creech Air Force Base in Nevada. Formerly known as Indian Springs Auxiliary Air Field, a backwater corner of the Nellis Air Force Base range, Creech had risen
to prominence after the attacks of 9/11, as it was the Air Force's center of excellence for unmanned aerial vehicle (UAV) operations. Its showcase was the General Atomics Predator UAV. The Predator, modified as a Hellfire-armed attack system, had proven a vital component of the global war on terrorism. With UAVs increasing dramatically in their capabilities, it was natural that the UAV community at Nellis would be interested in the work of the ACES team. Traveling to Nevada to demonstrate its technology to the Air Force, the JSC team used the ACES van in a flight-following mode, receiving downlink video from a Predator UAV. That video was then blended with synthetic terrain database inputs to provide a 180-degree FOV scene for the pilot. The Air Force's Predator pilots found the ACES system far superior to the narrow-view perspective they then had available for landing the UAV.

In 2005, astronaut Eric Boe began training for a Shuttle flight and left the group, replaced by the author, who had spent over 10 years at Langley as a project or research pilot on all of that Center's SVS and XVS projects. The author transferred to JSC from Langley in 2004 as a research pilot and learned of the Center's SVS work from Boe. The author's involvement with the JSC group linked Langley's and JSC's SVS efforts, for he provided the JSC group with his experience of Langley's SVS research.

That spring, a former X-38 cooperative student, Michael Coffman, now an engineer at the FAA's Mike Monroney Aeronautical Center in Oklahoma City, serendipitously visited Fox at JSC. They discussed using the sensor fusion technology for the FAA's flight-check mission. Coffman, Fox, and Boe briefed Thomas C. Accardi, Director of Aviation Systems Standards at FAA Oklahoma City, on the sensor fusion work at JSC, and he was interested in its possibilities. Fox seized this opportunity to establish a memorandum of understanding (MOU) among the Johnson Space Center, the Mike Monroney Aeronautical Center, RIS, and AANA. All parties would work on a quid pro quo basis, sharing intellectual and physical resources where appropriate, without funding necessarily changing hands. Signed in July 2005, this arrangement was unique in its scope and, as will be seen, in its ability to allow contractors and Government agencies to work together without cost. JSC and FAA Oklahoma City management had complete trust in their employees, and both RIS and AANA were willing to work without compensation, predicated on their faith in their product and the potential return on their investment, effectively a Skunk Works approach taken to the extreme. The stage was set for major SVS accomplishments, for
during this same period, huge strides in SVS development had been made at Langley, which is where this narrative now returns.[1172]

Aircraft Ice Protection

The Aircraft Ice Protection program focuses on two main areas: development of remote sensing technologies to measure nearby icing conditions, improve current forecast capabilities, and develop systems to transfer and display that information to flight crews, flight controllers, and dispatchers; and development of systems to monitor and assess aircraft performance, notify the cockpit crew about the state of the aircraft, and/or automatically alter the aircraft's control systems to prevent stall or loss of control in an icing environment. Keeping those two focus areas in mind, the Aircraft Ice Protection program is subdivided to work on these three goals:

• Provide flight crews with real-time icing weather information so they can avoid the hazard in the first place or find the quickest way out.[1265]

• Improve the ability of an aircraft to operate safely in icing conditions.[1266]

• Improve icing simulation capabilities by developing better instrumentation and measurement techniques to characterize atmospheric icing conditions, which also will provide icing weather validation databases, and increase basic knowledge of icing physics.[1267]

In terms of remote sensing, the top-level goals of this activity are to develop and field-test two forms of remote sensing technologies that can reduce the exposure of aircraft to in-flight icing hazards. The first technology would be ground-based and provide coverage in a limited terminal area to protect all vehicles. The second technology would be airborne and provide unrestricted flightpath coverage for a commuter-class aircraft. In most cases, the icing hazard to aircraft is minimized with either de-icing or anti-icing procedures, or by avoiding any known or possible icing areas altogether. However, being able to avoid the icing hazard depends largely on the quality and timing of the latest observed and forecast weather conditions. And once stuck in a severe icing hazard zone, the pilot must have enough information to know how to get out of the area before the aircraft's ice protection systems are overwhelmed. One way to address these problem areas is to remotely detect icing potential and present the information to the pilot in a clear, easily understood manner. Such systems would allow the pilot to avoid icing conditions and also allow rapid escape if severe conditions were encountered.[1268]

Fifth Generation: The F-22 Program

The Air Force initiated its Advanced Tactical Fighter (ATF) program in 1985 as an effort to augment and ultimately replace the F-15. During the competitive phase of the program between the Northrop-led YF-23 and the Lockheed-led YF-22 designs, the Air Force established that each team could draw on the facilities and expertise of NASA for establishing credibility and risk reduction before a competitive fly-off. Lockheed subsequently requested free-flight and spin tests of the YF-22 in the Langley Full-Scale Tunnel and the Langley Spin Tunnel. The relatively
compressed timeframe of the ATF competition would not permit a feasible schedule for the fabrication and testing of a helicopter drop model of the YF-22.

A joint NASA-Lockheed team conducted conventional tunnel tests in the Full-Scale Tunnel in 1989 to measure YF-22 aerodynamic data for high-angle-of-attack conditions, followed by free-flight model studies to determine the low-speed departure resistance of the configuration. Meanwhile, spin tunnel tests obtained information on spin and recovery characteristics as well as the size and location of an emergency spin recovery parachute for the high-angle-of-attack test airplane. In addition, specialized "rotary-balance" tests were conducted in the spin tunnel to obtain aerodynamic data during simulated spin motions. Lockheed incorporated all of the foregoing results in the design process, leading to an impressive display of capabilities by the YF-22 during the competitive flight demonstrations in 1990.

Lockheed formally acknowledged its appreciation of NASA’s participation in the YF-22 program in a letter to NASA, which stated:

On behalf of the Lockheed YF-22 Team, I would like to express our appreciation of the contribution that the people of NASA Langley made to our successful YF-22 flight test program, and provide some feedback on how well the flight test measurements agreed with the predictions from your wind-tunnel measurements. . . . The highlight of the flight test program was the high-angle-of-attack flying qualities. We relied on aerodynamic data obtained in the full-scale wind tunnel to define the low-speed, high-angle-of-attack static and dynamic aerodynamic derivatives; rotary derivatives from your spin tunnel; and free-flight demonstrations in the full-scale tunnel. We expanded the flight envelope from 20° to 60° angle of attack, demonstrating pitch attitude changes and full-stick rolls about the velocity vector in seven calendar days. The reason for this rapid envelope expansion was the quality of the aerodynamic data used in the control law design and pre-flight simulations.[1322]

Free-flight model tests of the YF-22 in the Full-Scale Tunnel accurately predicted the high-alpha maneuverability of the full-scale airplane and provided risk reduction for the F-22 program. NASA.

After the team of Lockheed, Boeing, and General Dynamics was announced as the winner of the ATF competition in April 1991, high-angle-of-attack testing of the final F-22 configuration was conducted in the Full-Scale Tunnel and the Spin Tunnel. Aerodynamic force testing was completed in the Full-Scale Tunnel in 1992, with spin- and rotary-balance tests conducted in 1993. A wind tunnel free-flight model was not fabricated for the F-22 program, but a typical full-scale tunnel model was constructed and used for the aerodynamic tests. A notable contribution from the spin tunnel tests was a relocation, in 1994, of the attachment point for the F-22 emergency spin recovery parachute to clear the exhaust plume of the vectoring engine. Langley's contributions to the high-angle-of-attack technologies embodied in the F-22 fighter had been completed well in advance of the aircraft's first flight in September 1997.[1323]

Fatal Accident #2

The remaining XV-5A was rigged with a pilot-operated rescue hoist, located on the left side of the fuselage just ahead of the wing fan. An evaluation test pilot was fatally injured during the test program while performing a low-speed, steep-descent "pick-up" maneuver at Edwards AFB. The heavily weighted rescue collar was ingested into the left wing fan as the pilot descended and simultaneously played out the collar. The damaged fan continued to rotate, but the resultant loss in fan lift caused the aircraft to roll left and settle toward the ground. The pilot apparently leveled the wings and applied full power and up-collective to correct for the left wing-fan lift loss. The damaged left fan produced enough lift to hold the wings level and somewhat reduce the ensuing descent rate. The pilot elected to eject from the aircraft as it approached the ground in this wings-level attitude. As the pilot released the right-stick displacement and initiated the ejection, the aircraft rolled back to the left, which caused the ejection seat trajectory to veer off to a path parallel to the ground. The seat impacted the ground, and the pilot did not survive the ejection. Post-accident analysis revealed that despite the ingestion of the rescue collar and its weight, the wing fan continued to operate and produce enough lift force to hold a wings-level roll attitude and reduce the descent rate to a value that may have allowed the pilot to survive the ensuing "emergency landing" had he stayed with the aircraft. This was grim testimony to the ruggedness of the lift fan.

Tupolev-144 SST on takeoff from Zhukovsky Air Development Center in Russia with a NASA pilot at the controls. NASA.

NASA and Electromagnetic Pulse Research

The phrase "electromagnetic pulse" usually raises visions of a nuclear detonation, because that is the most frequent context in which it is used. While EMP effects upon aircraft certainly would feature in a thermonuclear event, the phenomenon is commonly experienced in and around lightning storms. Lightning can cause a variety of EMP radiations, including radio-frequency pulses. An EMP "fries" electrical circuits by passing a magnetic field past the equipment in one direction, then reversing it in an extremely short period, typically a few nanoseconds. The magnetic field is thus generated and collapses within that ephemeral time, creating a focused EMP. It can destroy or render useless any electrical circuit within several feet of impact.

Any survey of lightning-related EMPs brings attention to the phenomenon of "elves," an acronym for Emissions of Light and Very low-frequency perturbations from Electromagnetic pulses. Elves are caused by lightning-generated EMPs, usually occurring above thunderstorms and in the ionosphere, some 300,000 feet above Earth. First recorded on Space Shuttle Mission STS-41 in 1990, elves mostly appear as reddish, expanding flashes that can reach 250 miles in diameter, lasting about 1 millisecond.

EMP research is multifaceted, conducted in laboratories, on airborne aircraft and rockets, and ultimately outside Earth's atmosphere. Research into transient electric fields and high-altitude lightning above thunderstorms has been conducted with sounding rockets launched by Cornell University. In 2000, a Black Brant sounding rocket from White Sands was launched over a storm, attaining a height of nearly 980,000 feet. Onboard equipment, including electronic and magnetic instruments, provided the first direct observation of the parallel electric field within 62 miles horizontally of the lightning.[155]

By definition, NASA's NF-106B flights in the 1980s involved EMP research. Among the overlapping goals of the project was quantification of lightning's electromagnetic effects, and Langley's Felix L. Pitts led the program intended to provide airborne data on lightning-strike characteristics. Bruce Fisher and two other NASA pilots (plus four Air Force pilots) conducted the flights. Fisher analyzed the information he collected in addition to the backseat researchers' data. Those flying as flight-test engineers in the two-seat jet included Harold K. Carney, Jr., NASA's lead technician for EMP measurements.

NASA Langley engineers built ultra-wide-bandwidth digital transient recorders carried in a sealed enclosure in the Dart's missile bay. To acquire the fast lightning transients, they adapted or devised electromagnetic sensors based on those used for measurement of nuclear pulse radiation. To aid understanding of the lightning transients recorded on the jet, a team from Electromagnetic Applications, Inc., provided mathematical modeling of the lightning strikes to the aircraft. Owing to the extra hazard of lightning strikes, the F-106 was fueled with JP-5, which is less volatile than the then-standard JP-4. Data compiled from dedicated EMP flights permitted statistical parameters to be established for lightning encounters. The F-106's onboard sensors showed that lightning strikes to aircraft include bursts of pulses that are shorter in duration, but more frequent, than previously thought. Additionally, the bursts are more numerous than the better-known strikes involving cloud-to-Earth flashes.[156]

Rocket-borne sensors provided the first ionospheric observations of lightning-induced electromagnetic waves from the extremely low frequency (ELF) through the medium frequency (MF) bands. The payload consisted of a NASA double-probe electric field sensor borne into the upper atmosphere by a Black Brant sounding rocket that NASA launched over "an extremely active thunderstorm cell." This mission, named Thunderstorm III, measured lightning EMPs up to 2 megahertz (MHz). Below 738,000 feet, a rising whistler wave with a nose-whistler shape and a propagation frequency near 80 kHz was found. The results confirmed speculation that the leading intense edge of the lightning EMP was borne on 50-125-kHz waves.[157]

Electromagnetic compatibility is essential to spacecraft performance. The requirement has long been recognized, as the insulating surfaces on early geosynchronous satellites were charged by geomagnetic substorms to a point where discharges occurred. The EMPs from such discharges coupled into electronic systems, potentially disrupting satellites. Laboratory tests on insulator charging indicated that discharges could be initiated at insulator edges, where voltage gradients could exist.[158]

Apart from observation and study, detecting electromagnetic pulses is a step toward avoidance. Most lightning detection systems include an antenna that senses atmospheric discharges and a processor to determine whether the signals are lightning or static charges, based upon their electromagnetic traits. Generally, ground-based weather surveillance is more accurate than an airborne system, owing to the greater number of sensors. For instance, ground-based systems employ numerous antennas hundreds of miles apart to detect a lightning stroke's radio frequency (RF) pulses. When an RF flash occurs, electromagnetic pulses speed outward from the bolt at essentially the speed of light. Because the antennas cover a large area of Earth's surface, they are able to triangulate the bolt's site of origin from differences in pulse arrival times. Based upon known values, the RF data can determine with considerable accuracy the strength or severity of a lightning bolt.
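As a rough illustration of the arrival-time principle described above, the sketch below estimates a lightning strike's ground position from the times at which its RF pulse reaches several antennas at known sites, using a least-squares fit over time differences of arrival. It is a simplified, hypothetical example with made-up station positions, not the processing used by any particular operational network.

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # propagation speed of the RF pulse, m/s (speed of light)

def locate_strike(antennas: np.ndarray, arrival_times: np.ndarray) -> np.ndarray:
    """Estimate (x, y) of a lightning strike from pulse arrival times at
    antennas with known (x, y) positions, using time differences of arrival
    (TDOA) referenced to the first antenna."""
    dt = arrival_times - arrival_times[0]            # measured TDOAs, s

    def residuals(xy):
        ranges = np.linalg.norm(antennas - xy, axis=1)
        predicted_dt = (ranges - ranges[0]) / C      # predicted TDOAs, s
        return predicted_dt - dt

    guess = antennas.mean(axis=0)                    # start at network centroid
    return least_squares(residuals, guess).x

if __name__ == "__main__":
    # Hypothetical network: four antennas spaced hundreds of kilometers apart.
    antennas = np.array([[0.0, 0.0], [300e3, 0.0], [0.0, 400e3], [350e3, 380e3]])
    true_strike = np.array([120e3, 150e3])
    times = np.linalg.norm(antennas - true_strike, axis=1) / C
    print(locate_strike(antennas, times))            # ~ [120000, 150000]
```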

Space-based lightning detection systems require satellites that, while more expensive than ground-based systems, provide instantaneous visual monitoring. Onboard cameras and sensors not only spot lightning bolts but also record them for analysis. NASA launched its first lightning-detection satellite in 1995, and the Lightning Imaging Sensor, which analyzes lightning through rainfall, was launched 2 years later. From approximately 1993, low-Earth orbit (LEO) space vehicles carried increasingly sophisticated equipment requiring increased power levels. Previously, satellites used 28-volt DC power systems as a legacy of the commercial and military aircraft industry. At those voltage levels, plasma interactions in LEO were seldom a concern. But use of high-voltage solar arrays increased concerns with electromagnetic compatibility and the potential effects of EMPs. Consequently, spacecraft design, testing, and performance assumed greater importance.

NASA researchers noted a pattern wherein insulating surfaces on geosynchronous satellites were charged by geomagnetic substorms, building up to electrical discharges. The resultant electromagnetic pulses can couple into satellite electronic systems, creating potentially disruptive results. Reducing power loss received a high priority, and laboratory tests on insulator charging showed that discharges could be initiated at insulator edges, where voltage gradients could exist. The benefits of such tests, coupled with greater empirical knowledge, afforded greater operating efficiency, partly because of greater EMP protection.[159]

Research into lightning EMPs remains a major focus. In 2008, Stanford's Dr. Robert A. Marshall and his colleagues reported on time-domain modeling techniques to study lightning-induced effects upon very low frequency (VLF) transmitter signals, called "early VLF events." Marshall explained:

This mechanism involves electron density changes due to electromagnetic pulses from successive in-cloud lightning discharges associated with cloud-to-ground discharges (CGs), which are likely the source of continuing current and much of the charge moment change in CGs. Through time-domain modeling of the EMP we show that a sequence of pulses can produce appreciable density changes in the lower ionosphere, and that these changes are primarily electron losses through dissociative attachment to molecular oxygen. Modeling of the propagating VLF transmitter signal through the disturbed region shows that perturbed regions created by successive horizontal EMPs create measurable amplitude changes.[160]

However, the researchers found that modeling optical signatures was difficult when observation was limited by line of sight, especially by ground-based observers. Observation was further complicated by clouds and distance, because elves and "sprites” (large-scale discharges over thunderclouds) were mostly seen at ranges of 185 to 500 statute miles. Consequently, the originating lightning usually was not visible. But empirical evidence shows that an EMP from lightning is extremely short-lived when compared to the propagation time across an elve’s radius. Observers therefore learned to recognize that the illuminated area at a given moment appears as a thin ring rather than as an actual disk.[161]

In addition to the effects of EMPs upon personnel directly engaged with aircraft or space vehicles, concern was voiced about researchers being exposed to simulated pulses. Facilities conducting EMP tests upon avionics and communications equipment were a logical area of investigation, but some EMP simulators had the potential to expose operators and the public to electromagnetic fields of varying intensities, including naturally generated lightning bolts. In 1988, the NASA Astrophysics Data System released a study of bioelectromagnetic effects upon humans. The study stated, "Evidence from the available database does not establish that EMPs represent either an occupational or a public health hazard." Both laboratory research and years of observations on staffs of EMP manufacturing and simulation facilities indicated "no acute or short-term health effects." The study further noted that the occupational exposure guideline for EMPs is 100 kilovolts per meter, "which is far in excess of usual exposures with EMP simulators."[162]

NASA's studies of EMP effects benefited nonaerospace communities. The Lightning Detection and Ranging (LDAR) system that enhanced a safe work environment at Kennedy Space Center was extended to private industry. Cooperation with private enterprises enhances commercial applications not only in aviation but in corporate research, construction, and the electric utility industry. For example, while two-dimensional commercial systems are limited to cloud-to-ground lightning, NASA's three-dimensional LDAR provides precise location and elevation of in-cloud and cloud-to-cloud pulses by measuring arrival times of EMPs.

Nuclear- and lightning-caused EMPs share common traits. Nuclear EMPs involve three components, including the "E2" segment, which is similar to lightning. Nuclear EMPs are faster than conventional circuit breakers can handle: most breakers are intended to stop millisecond spikes caused by lightning flashes rather than microsecond spikes from a high-altitude nuclear explosion. The connection between ionizing radiation and lightning was readily demonstrated during the "Mike" nuclear test at Eniwetok Atoll in November 1952. The yield was 10.4 megatons, with gamma rays causing at least five lightning flashes in the ionized air around the fireball. The bolts descended almost vertically from the cloud above the fireball to the water. The observation demonstrated that, by causing atmospheric ionization, nuclear radiation can trigger a shorting of the natural vertical electric gradient, resulting in a lightning bolt.[163]

Thus, research overlap between thermonuclear and lightning-generated EMPs is unavoidable. NASA's workhorse F-106B, apart from NASA's broader charter to conduct lightning-strike research, was employed in a joint NASA-USAF program to compare the electromagnetic effects of lightning and nuclear detonations. In 1984, Felix L. Pitts of NASA Langley proposed a cooperative venture, leading to the Air Force lending Langley an advanced, 10-channel recorder for measuring electromagnetic pulses.

Langley used the recorder on F-106 test flights, vastly expanding its capability to measure magnetic and electrical change rates, as well as currents and voltages on wires inside the Dart. In July 1993, an Air Force researcher flew in the rear seat to operate the advanced equipment, when 72 lightning strikes were obtained. In EMP tests at Kirtland Air Force Base, the F-106 was exposed to a nuclear electromagnetic pulse simulator while mounted on a special test stand and during flybys. NASA's Norman Crabill and Lightning Technologies' J. A. Plumer participated in the Air Force Weapons Laboratory review of the acquired data.[164]

With helicopters becoming ever more complex and increasingly dependent upon electronics, it was natural for researchers to extend the Agency's interest in lightning to rotary wing craft. Drawing upon the Agency's growing confidence in numerical computational analysis, Langley produced a numerical modeling technique to investigate the response of helicopters to both lightning and nuclear EMPs. Using a UH-60A Black Hawk as the focus, the study derived three-dimensional time-domain finite-difference solutions to Maxwell's equations, computing external currents, internal fields, and cable responses. Analysis indicated that the short-circuit current on internal cables was generally greater for lightning, while the open-circuit voltages were slightly higher for nuclear-generated EMPs. As anticipated, the lightning response was found to be highly dependent upon the rise time of the injected current. Data showed that coupling levels to cables in a helicopter are 20 to 30 decibels (dB) greater than in a fixed-wing aircraft.[165]
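The Langley coupling study relied on finite-difference time-domain (FDTD) solutions of Maxwell's equations. The sketch below is a much-reduced, one-dimensional illustration of that class of method, not the actual three-dimensional UH-60A model: it marches the coupled E and H update equations forward in time as a Gaussian pulse, standing in for an EMP-like excitation, propagates along a normalized-unit grid. Cell counts, source shape, and probe location are arbitrary assumptions.

```python
import numpy as np

def fdtd_1d(n_cells: int = 400, n_steps: int = 1000):
    """One-dimensional free-space FDTD solution of Maxwell's curl equations.

    Uses normalized update coefficients (Courant number 0.5), a Gaussian-pulse
    source near the left end, and untreated (reflecting) grid ends for brevity.
    Returns the electric-field history at a probe cell.
    """
    ez = np.zeros(n_cells)          # electric field
    hy = np.zeros(n_cells)          # magnetic field
    probe = []
    src_cell, probe_cell = 20, 300

    for t in range(n_steps):
        # Update magnetic field from the spatial derivative of E.
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])
        # Update electric field from the spatial derivative of H.
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])
        # Inject a Gaussian pulse standing in for an EMP-like waveform.
        ez[src_cell] += np.exp(-((t - 40) / 12.0) ** 2)
        probe.append(ez[probe_cell])

    return np.array(probe)

if __name__ == "__main__":
    history = fdtd_1d()
    print("peak field at probe cell:", history.max())
```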

Glass Cockpit

As aircraft systems became more complex and the amount of navigation, weather, and air traffic information available to pilots grew in abundance, the nostalgic days of "stick and rudder" men (and women) gave way to "cockpit managers." Mechanical, analog dials showing a single piece of information (e.g., airspeed or altitude) weren't sufficient to give pilots the full status of their increasingly complicated aircraft flying in an increasingly crowded sky. The solution came from engineers at NASA's Langley Research Center in Hampton, VA, who worked with key industry partners to come up with an electronic flight display, now generally known as the glass cockpit, that took advantage of powerful, small computers and liquid crystal display (LCD) flat panel technology. Early concepts of the glass cockpit were flight-proven using NASA's Boeing 737 flying laboratory and eventually certified for use by the FAA.[233]

A prototype "glass cockpit" that replaces analog dials and mechanical tapes with digitally driven flat panel displays is installed inside the cabin of NASA's 737 airborne laboratory, which tested the new hardware and won support for the concept in the aviation community. NASA.

According to a NASA fact sheet,

The success of the NASA-led glass cockpit work is reflected in the total acceptance of electronic flight displays beginning with the introduction of the Boeing 767 in 1982. Airlines and their passengers, alike, have benefitted. Safety and efficiency of flight have been increased with improved pilot understanding of the airplane's situation relative to its environment.

The cost of air travel is less than it would be with the old technology and more flights arrive on time.[234]

After developing the first glass cockpits capable of displaying basic flight information, NASA has continued working to make more information available to the pilots,[235] while at the same time being conscious of information overload,[236] the ability of the flight crew to operate the cockpit displays without distraction during critical phases of flight (takeoff and landing),[237] and the effectiveness of training pilots to use the glass cockpit.[238]

The Future of ATC

Fifty years of working to improve the Nation's airways and the equipment and procedures needed to manage the system have laid the foundation for NASA to help lead the most significant transformation of the National Airspace System in the history of flight. No corner of the air traffic control operation will be left untouched. From airport to airport, every phase of a typical flight will be addressed, and new technology and solutions will be sought to raise capacity in the system, lower operating costs, increase safety, and enhance the security of an air transportation system that is so vital to our economy.

This program originated from the 2002 Commission on the Future of Aerospace in the United States, which recommended an overhaul of the air transportation system as a national priority—mostly from the concern that air traffic is predicted to double, at least, during the next 20 years. Congress followed up with some money, and President George W. Bush signed into law a plan to create a Next Generation Air Transportation System (NextGen). To manage the effort, a Joint Planning and Development Office (JPDO) was created, with NASA, the FAA, the DOD, and other key aviation organizations as members.[281]

NASA then organized itself to manage its NextGen efforts through the Airspace Systems Program. Within the program, NASA's efforts are further divided into projects that are in support of either NextGen Airspace or NextGen Airportal. The airspace project is responsible for dealing with air traffic control issues such as increasing capacity, determining how much more automation can be introduced, scheduling, spacing of aircraft, and rolling out a GPS-based navigation system that will change the way we perceive flying. Naturally, the airportal project is examining ways to improve terminal operations in and around the airplanes, including the possibility of building new airports.[282]

Already, several technologies are being deployed as part of NextGen. One is called the Wide Area Augmentation System (WAAS); another is Automatic Dependent Surveillance-Broadcast (ADS-B). Both have to do with deploying a satellite-based GPS tracking system that would end reliance on radars as the primary means of tracking an aircraft's approach.[283]

WAAS is designed to enhance the GPS signal from Earth orbit and make it more accurate for use in civilian aviation by correcting for the errors introduced into the GPS signal by the planet's ionosphere.[284] Meanwhile, ADS-B, which is deployed at several locations around the U.S., combines broadcast traffic information with a GPS signal and drives a cockpit display that tells pilots precisely where they are and where other aircraft in their area are, but only if those other aircraft are similarly equipped with ADS-B hardware. By combining ADS-B, GPS, and WAAS signals, a pilot can navigate to an airport even in low visibility.[285] NASA was a member of the Government and industry team led by the FAA that conducted an ADS-B field test several years ago with United Parcel Service at its hub in Louisville, KY. This work earned the team the 2007 Collier Trophy.
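A highly simplified sketch of that data-combining idea follows. It is purely illustrative, not certified avionics logic and not the actual ADS-B message format: ownship GPS position and received position reports from other equipped aircraft are merged into ranges and bearings of the kind a cockpit traffic display might draw. The callsigns, coordinates, and flat-Earth approximation are all assumptions for the example.

```python
import math
from dataclasses import dataclass

@dataclass
class PositionReport:
    callsign: str
    lat_deg: float
    lon_deg: float
    alt_ft: float

EARTH_RADIUS_NM = 3440.065  # mean Earth radius in nautical miles

def range_and_bearing(own: PositionReport, other: PositionReport):
    """Approximate range (NM) and true bearing (deg) from ownship to traffic,
    using a flat-Earth approximation adequate for short display ranges."""
    dlat = math.radians(other.lat_deg - own.lat_deg)
    dlon = math.radians(other.lon_deg - own.lon_deg) * math.cos(math.radians(own.lat_deg))
    rng = EARTH_RADIUS_NM * math.hypot(dlat, dlon)
    brg = (math.degrees(math.atan2(dlon, dlat)) + 360.0) % 360.0
    return rng, brg

def traffic_picture(ownship: PositionReport, reports: list[PositionReport]):
    """Build the list of traffic items a display might draw, nearest first."""
    items = []
    for rpt in reports:
        rng, brg = range_and_bearing(ownship, rpt)
        items.append((rpt.callsign, rng, brg, rpt.alt_ft - ownship.alt_ft))
    return sorted(items, key=lambda item: item[1])

if __name__ == "__main__":
    own = PositionReport("OWNSHIP", 38.18, -85.74, 5000)   # hypothetical positions
    traffic = [PositionReport("UPS1234", 38.30, -85.60, 7000),
               PositionReport("N123AB", 38.10, -85.90, 4500)]
    for callsign, rng, brg, dalt in traffic_picture(own, traffic):
        print(f"{callsign}: {rng:5.1f} NM at {brg:05.1f} deg, {dalt:+.0f} ft")
```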

In these various ways, NASA has worked to increase the safety of the air traveler and to enhance the efficiency of the global air transportation network. As winged flight enters its second century, it is a safe bet that the Agency's work in coming years will be as comprehensive and influential as it has been in the past, thanks to the competency, dedication, and creativity of NASA people.


A Langley Research Center human factors research engineer inspects the interior of a light business aircraft after a simulated crash to assess the loads experienced during accidents and develop means of improving survivability. NASA.

Workload, Strategic Behavior, and Decision-Making

It is well known that more than half of aircraft incidents and accidents have occurred because of human error. These errors resulted from such factors as flightcrew distractions, interruptions, lapses of attention, and work overload.[391] For this reason, NASA researchers have long been interested in characterizing errors made by pilots and other crewmembers while performing the many concurrent flight deck tasks required during normal flight operations. NASA's Attention Management in the Cockpit program analyzes accident and incident reports, as well as questionnaires completed by experienced pilots, to set up appropriate laboratory experiments to examine the problem of concurrent task management and to develop methods and training programs to reduce errors. This research will help design simulated but realistic training scenarios, assist flightcrew members in understanding their susceptibility to errors caused by lapses in attention, and create ways to help them manage heavy workload demands. The intended result is increased flight safety.[392]

Likewise, safety in the air can be compromised by errors in judgment and decision making. To tackle this problem, NASA Ames Research Center joined with the University of Oregon to study how decisions are made and to develop techniques to decrease the likelihood of bad decision making.[393] Similarly, mission success has been shown to depend on the degree of cooperation between crewmembers. NASA research specifically studied such factors as building trust, sharing information, and managing resources in stressful situations. The findings of this research will be used as the basis for training crews to manage interpersonal problems on long missions.[394]

It can therefore be seen that NASA has indeed played a primary role in developing many of the human factors models in use relating to aircrew efficiency and mental well-being. These models and the training programs that incorporate them have helped both military and civilian flightcrew members improve their management of resources in the cockpit and make better individual and team decisions in the air. This knowledge has also helped more clearly define and minimize the negative effects of crew fatigue and excessive workload demands in the cockpit. Further, NASA has played a key role in assisting both the aviation industry and DOD in setting up many of the training programs that are utilizing this new technology to improve flight safety.

Progress and Design Data

In the 1920s and 1930s, researchers in several wind tunnel and full-scale aircraft flight groups at Langley conducted analytical and experimental investigations to develop design guidelines to ensure satisfactory stability and control behavior.[468] Such studies sought to develop methods to reliably predict the inherent flight characteristics of aircraft as affected by design variables such as the wing dihedral angle, sizes and locations of the vertical and horizontal tails, wing planform shape, engine power, mass distribution, and control surface geometry. The staff of the Free-Flight Tunnel joined in these efforts with several studies that correlated the qualitative behavior of free-flight models with analytical predictions of dynamic stability and control characteristics. Coupled with the results from other facilities and analytical groups, the free-flight results accelerated the maturity of design tools for future aircraft from a qualitative basis to a quantitative methodology, and many of the methods and design data derived from these studies became classic textbook material.[469]

By combining free-flight testing with theory, the researchers were able to quantify desirable design features, such as the amount of wing dihedral angle and the relative size of vertical tail required for satisfactory behavior. With these data in hand, methods were also developed to theoretically solve the dynamic equations of motion of aircraft and determine dynamic stability characteristics such as the frequency of inherent oscillations and the damping of motions following inputs by pilots or turbulence.
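The theoretical side of that work amounts to solving the linearized equations of motion for their characteristic roots. The short sketch below is a modern, generic illustration of that underlying calculation, not the hand methods of the era and not data for any specific airplane: the eigenvalues of an assumed linearized state matrix (made-up, textbook-style stability derivative values) yield the frequency and damping of an aircraft's inherent oscillatory mode.

```python
import numpy as np

# Hypothetical linearized longitudinal state matrix (short-period approximation),
# states: [angle of attack (rad), pitch rate (rad/s)]; values are illustrative only.
A = np.array([[-1.2,  1.0],
              [-8.0, -2.5]])

eigvals = np.linalg.eigvals(A)
for lam in eigvals[np.imag(eigvals) > 0]:    # one of each complex-conjugate pair
    wn = abs(lam)                            # undamped natural frequency, rad/s
    zeta = -lam.real / wn                    # damping ratio
    print(f"mode {lam:.3f}: natural frequency {wn:.2f} rad/s, damping ratio {zeta:.2f}")
```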

During the final days of model flight projects in the Free-Flight Tunnel in the mid-1950s, various Langley organizations teamed to quantify the effects of aerodynamic dynamic stability parameters on flying characteristics. These efforts included correlation of experimentally determined aerodynamic stability derivatives with theoretical predictions and comparisons of the results of qualitative free-flight tests with theoretical predictions of dynamic stability characteristics. In some cases, rate gyroscopes and servos were used to artificially vary the magnitudes of dynamic aerodynamic stability parameters such as the yawing moment due to rolling.[470] In these studies, the free-flight model results served as a critical test of the validity of theory.

Spin Entry

The helicopter drop-model technique has been used since the early 1950s to evaluate the spin entry behavior of relatively large unpowered models of military aircraft. The objective of these tests has been to evaluate the relative spin resistance of configurations following various combinations of control inputs, and the effects of timing of recovery control inputs following departures. A related testing technique, used for spin resistance and spin entry evaluations of general aviation configurations, employs remotely controlled powered models that take off from ground runways and fly to the test condition.

In the late 1950s, industry had become concerned over potential scale effects on long pointed fuselage shapes as a result of the XF8U-1 experiences in the Spin Tunnel, as discussed earlier. Thus, interest was growing over the possible use of much larger models than those used in spin tunnel tests, to eliminate or minimize undesirable scale effects. Finally, a major concern arose for some airplane designs over the launching technique used in the Spin Tunnel. Because the spin tunnel model was launched by hand in a very flat attitude with forced rotation, it would quickly seek the developed spin modes, a very valuable output, but the full-scale airplane might not easily enter the spin because of control limitations, poststall motions, or other factors.

One of the first configurations tested, in 1958, to establish the credibility of the drop-model program was a 6.3-foot-long, 90-pound model of the XF8U-1 configuration.[519] With previously conducted spin tunnel results in hand, the choice of this design permitted correlation with the earlier tunnel and aircraft flight-test results. As has been discussed, wind tunnel testing of the XF8U-1 fuselage forebody shape had indicated that pro-spin yawing moments would be produced by the fuselage for values of Reynolds number below about 400,000, based on the average depth of the fuselage forebody. The Reynolds number for the drop-model tests ranged from 420,000 to 505,000, at which the fuselage contribution became antispin, and the spin and recovery characteristics of the drop model were found to be very similar to the full-scale results. In particular, the drop model did not exhibit the flat-spin mode predicted by the smaller spin tunnel model, and results were in agreement with results of the aircraft flight tests, demonstrating the value of larger models from a Reynolds number perspective.
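The Reynolds number comparison underlying these results is straightforward. The sketch below, using illustrative values rather than the actual test conditions, model dimensions, or air properties at altitude, shows how the reference Reynolds number scales with forebody depth and airspeed, which is why a larger, faster drop model can cross a threshold (such as the roughly 400,000 figure cited above) that a small spin tunnel model cannot.

```python
# Sea-level standard air properties (approximate).
RHO = 1.225        # air density, kg/m^3
MU = 1.789e-5      # dynamic viscosity, kg/(m*s)

def reynolds_number(speed_m_s: float, depth_m: float) -> float:
    """Re = rho * V * d / mu, with the forebody depth d as reference length."""
    return RHO * speed_m_s * depth_m / MU

# Hypothetical comparison: a small spin tunnel model vs. a larger drop model.
for label, speed, depth in [("spin tunnel model", 25.0, 0.08),
                            ("drop model",        45.0, 0.16)]:
    print(f"{label}: Re = {reynolds_number(speed, depth):,.0f}")
```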

Success in applications of the drop-model technique for studies of spin entry led to many military requests for evaluations of emerging fighter aircraft. In 1959, the Navy requested an evaluation of the McDonnell F4H-1 Phantom II airplane using the drop technique.[520] Earlier spin tunnel tests of the configuration had indicated the possibility of two types of spins: one steep and oscillatory, from which recoveries were satisfactory, and the other fast and flat, from which recovery was difficult or impossible. As mentioned previously, the spin tunnel launching technique had led to questions regarding whether the airplane would exhibit a tendency toward the steeper spin or the more dangerous flat spin. The objective of the drop tests was to determine if it was likely, or even possible, for the F4H-1 to enter the flat spin.

In the F4H-1 investigation, an additional launching technique was used in an attempt to obtain a developed spin more readily and possibly to obtain the flat spin to verify its existence. This technique consisted of prespinning the model on the helicopter launch rig before it was released in a flat attitude with the helicopter in a hovering condition. To achieve even higher initial rotation rates than could be achieved on the launch rig, a detachable flat metal plate was attached to one wingtip of the model to drive it to spin even faster. When the model appeared to be rotating sufficiently fast after release, this vane was jettisoned by the ground-based pilot, who at the same time moved the ailerons against the direction of rotation to help promote the spin. The model was then allowed to spin for several turns, after which recovery controls were applied. In some respects, this approach to testing replicated the spin tunnel launch technique, but at a larger scale.

Results of the drop-model investigation for the F4H-1 are especially notable because they established the value of the testing technique to predict spin tendencies, as verified by subsequent full-scale results. A total of 35 flights were made, with the model launched 15 times in the prerotated condition and 20 times in forward flight. During these 35 flights, poststall gyrations were obtained on 21 occasions, steep spins on 10 flights, and flat spins on only 4. No recoveries were possible from the flat spins, but only one flat spin was obtained without prerotation. The conclusions of the tests stated that the aircraft was more susceptible to poststall gyrations than spins; that the steeper, more oscillatory spin would be more readily obtainable and recovery could be made by the NASA-recommended control technique; and that the likelihood of encountering a fast, flat spin was relatively remote. Ultimately, these general characteristics of the airplane were replicated at full-scale test conditions during spin evaluations by the Navy and Air Force.