
NASA and Electromagnetic Pulse Research

The phrase “electromagnetic pulse” usually raises visions of a nuclear detonation, because that is the most frequent context in which it is used. While EMP effects upon aircraft certainly would feature in a thermonuclear event, the phenomenon is commonly experienced in and around lightning storms. Lightning can cause a variety of EMP radiation, including radio-frequency pulses. An EMP “fries” electrical circuits by sweeping a magnetic field past the equipment in one direction, then reversing it in an extremely short period—typically a few nanoseconds. The magnetic field is thus generated and collapses within that ephemeral interval, creating a focused EMP that can destroy or render useless any electrical circuit within several feet.
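The damage mechanism can be approximated with Faraday’s law of induction: the voltage induced in a circuit loop scales with the enclosed area times the rate of change of the magnetic field. A minimal sketch follows, in which the nanosecond time scale comes from the description above but the field swing and loop area are assumed, purely illustrative values:

```python
# Rough Faraday's-law estimate of the voltage an EMP induces in a wiring loop:
# emf ~ A * dB/dt for a uniform field through the loop. The nanosecond time
# scale follows the text; the field swing and loop area are assumptions.
loop_area = 0.01   # m^2 (a 10 cm x 10 cm loop of equipment wiring, assumed)
delta_b = 1e-4     # tesla, assumed magnetic field swing
delta_t = 5e-9     # seconds, "a few nanoseconds" per the text

emf = loop_area * delta_b / delta_t
print(f"Induced voltage: {emf:.0f} V")  # ~200 V, easily fatal to logic circuits
```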

Any survey of lightning-related EMPs brings attention to the phenomena of “elves,” an acronym for Emissions of Light and Very low-frequency perturbations from Electromagnetic pulses. Elves are caused by lightning-generated EMPs, usually occurring above thunderstorms and in the ionosphere, some 300,000 feet above Earth. First recorded on Space Shuttle Mission STS-41 in 1990, elves mostly appear as reddish, expanding flashes that can reach 250 miles in diameter, lasting about 1 millisecond.

EMP research is multifaceted, conducted in laboratories, aboard aircraft and rockets in flight, and ultimately outside Earth’s atmosphere. Research into transient electric fields and high-altitude lightning above thunderstorms has been conducted by sounding rockets launched by Cornell University. In 2000, a Black Brant sounding rocket from White Sands was launched over a storm, attaining a height of nearly 980,000 feet. Onboard equipment, including electronic and magnetic instruments, provided the first direct observation of the parallel electric field within 62 miles horizontally of the lightning.[155]

By definition, NASA’s NF-106B flights in the 1980s involved EMP research. Among the overlapping goals of the project was quantification of lightning’s electromagnetic effects, and Langley’s Felix L. Pitts led the program intended to provide airborne data on lightning-strike traits. Bruce Fisher and two other NASA pilots (plus four Air Force pilots) conducted the flights. Fisher analyzed the information he collected, along with data gathered by the backseat researchers. Those flying as flight-test engineers in the two-seat jet included Harold K. Carney, Jr., NASA’s lead technician for EMP measurements.

NASA Langley engineers built ultra-wide-bandwidth digital transient recorders carried in a sealed enclosure in the Dart’s missile bay. To acquire the fast lightning transients, they adapted or devised electromagnetic sensors based on those used for measurement of nuclear pulse radiation. To aid understanding of the lightning transients recorded on the jet, a team from Electromagnetic Applications, Inc., provided mathematical modeling of the lightning strikes to the aircraft. Owing to the extra hazard of lightning strikes, the F-106 was fueled with JP-5, which is less volatile than the then-standard JP-4. Data compiled from dedicated EMP flights permitted statistical parameters to be established for lightning encounters. The F-106’s onboard sensors showed that lightning strikes to aircraft include bursts of pulses that are shorter in duration, yet more frequent, than previously thought. Additionally, such bursts are more numerous than the better-known strikes involving cloud-to-Earth flashes.[156]

Rocket-borne sensors provided the first ionospheric observations of lightning-induced electromagnetic waves from ELF through the medium frequency (MF) bands. The payload consisted of a NASA double-probe electric field sensor borne into the upper atmosphere by a Black Brant sounding rocket that NASA launched over “an extremely active thunderstorm cell.” This mission, named Thunderstorm III, measured lightning EMPs up to 2 megahertz (MHz). Below 738,000 feet, a rising whistler wave was found with a nose-whistler wave shape with a propagating frequency near 80 kHz. The results confirmed speculation that the leading intense edge of the lightning EMP was borne on 50–125-kHz waves.[157]

Electromagnetic compatibility is essential to spacecraft performance. The requirement has long been recognized, as the insulating surfaces on early geosynchronous satellites were charged by geomagnetic substorms to a point where discharges occurred. The EMPs from such discharges coupled into electronic systems, potentially disrupting satellites. Laboratory tests on insulator charging indicated that discharges could be initiated at insulator edges, where voltage gradients could exist.[158]

Apart from observation and study, detecting electromagnetic pulses is a step toward avoidance. Most lightning detection systems include an antenna that senses atmospheric discharges and a processor to determine whether the flashes are lightning or static charges, based upon their electromagnetic traits. Generally, ground-based weather surveillance is more accurate than an airborne system, owing to the greater number of sensors. For instance, ground-based systems employ numerous antennas hundreds of miles apart to detect a lightning stroke’s radio frequency (RF) pulses. When an RF flash occurs, electromagnetic pulses race outward from the bolt at essentially the speed of light. Because the antennas cover a large area of Earth’s surface, they are able to triangulate the bolt’s site of origin from the pulses’ arrival times. Based upon known values, the RF data can determine with considerable accuracy the strength or severity of a lightning bolt.
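The triangulation step can be illustrated with a small numerical sketch. The station layout, the “true” stroke location, and the use of a general-purpose least-squares solver here are all hypothetical, meant only to show how differing arrival times at widely spaced antennas fix a stroke’s position:

```python
# Minimal sketch of locating a lightning stroke from RF pulse arrival times at
# ground stations (time-of-arrival triangulation). Station positions and the
# "true" stroke location are hypothetical, illustrative values.
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # RF pulses propagate at the speed of light, m/s

stations = np.array([[0.0, 0.0], [200e3, 0.0], [0.0, 300e3], [250e3, 250e3]])
true_source = np.array([120e3, 80e3])

# Simulated arrival times: propagation delay from the stroke to each station.
arrivals = np.linalg.norm(stations - true_source, axis=1) / C

def residuals(params):
    # Unknowns: stroke position (x, y) and emission time t.
    x, y, t = params
    predicted = t + np.linalg.norm(stations - np.array([x, y]), axis=1) / C
    return predicted - arrivals

fit = least_squares(residuals, x0=[50e3, 50e3, 0.0])
print("Recovered stroke position (km):", fit.x[:2] / 1e3)
```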

Space-based lightning detection systems require satellites that, while more expensive than ground-based systems, provide instantaneous visual monitoring. Onboard cameras and sensors not only spot lightning bolts but also record them for analysis. NASA launched its first lightning-detection satellite in 1995, and the Lightning Imaging Sensor, which analyzes lightning through rainfall, was launched 2 years later. From approximately 1993, low-Earth orbit (LEO) space vehicles carried increasingly sophisticated equipment requiring increased power levels. Previously, satellites used 28-volt DC power systems as a legacy of the commercial and military aircraft industry. At those voltage levels, plasma interactions in LEO were seldom a concern. But use of high-voltage solar arrays increased concerns with electromagnetic compatibility and the potential effects of EMPs. Consequently, spacecraft design, testing, and performance assumed greater importance.

NASA researchers noted a pattern wherein insulating surfaces on geosynchronous satellites were charged by geomagnetic substorms, building up to electrical discharges. The resultant electromagnetic pulses can couple into satellite electronic systems, creating potentially disruptive results. Reducing power loss received a high priority, and laboratory tests on insulator charging showed that discharges could be initiated at insulator edges, where voltage gradients could exist. The benefits of such tests, coupled with greater empirical knowledge, afforded greater operating efficiency, partly because of greater EMP protection.[159]

Research into lightning EMPs remains a major focus. In 2008, Stanford’s Dr. Robert A. Marshall and his colleagues reported on time-domain modeling techniques to study lightning-induced effects upon VLF transmitter signals called “early VLF events.” Marshall explained:

This mechanism involves electron density changes due to electromagnetic pulses from successive in-cloud lightning discharges associated with cloud-to-ground discharges (CGs), which are likely the source of continuing current and much of the charge moment change in CGs. Through time-domain modeling of the EMP we show that a sequence of pulses can produce appreciable density changes in the lower ionosphere, and that these changes are primarily electron losses through dissociative attachment to molecular oxygen. Modeling of the propagating VLF transmitter signal through the disturbed region shows that perturbed regions created by successive horizontal EMPs create measurable amplitude changes.[160]

However, the researchers found that modeling optical signatures was difficult when observation was limited by line of sight, especially for ground-based observers. Observation was further complicated by clouds and distance, because elves and “sprites” (large-scale discharges over thunderclouds) were mostly seen at ranges of 185 to 500 statute miles. Consequently, the originating lightning usually was not visible. But empirical evidence shows that an EMP from lightning is extremely short-lived compared with the time light takes to propagate across an elve’s radius. Observers therefore learned to recognize that the illuminated area at a given moment appears as a thin ring rather than as an actual disk.[161]
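The geometry behind the thin-ring appearance is simple arithmetic: the illuminated shell is only as thick as the distance light travels during the EMP itself. A sketch with assumed, illustrative values for the pulse duration and elve radius:

```python
# Why an elve looks like an expanding ring: the glowing shell is only as thick
# as the light emitted during the EMP's own duration. The pulse duration and
# elve radius below are assumed, illustrative values.
c = 299_792.458          # speed of light, km/s
pulse_duration = 50e-6   # s, assumed EMP duration (much shorter than the elve)
elve_radius = 200.0      # km (elves can span ~250 mi, per the text)

ring_thickness = c * pulse_duration
print(f"Ring thickness ~{ring_thickness:.0f} km of a {elve_radius:.0f} km radius")
# ~15 km thick: a thin ring, not a filled disk.
```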

In addition to the effects of EMPs upon personnel directly engaged with aircraft or space vehicles, concern was voiced about researchers being exposed to simulated pulses. Facilities conducting EMP tests upon avionics and communications equipment were a logical area of investigation, but some EMP simulators had the potential to expose operators and the public to electromagnetic fields of varying intensities, including naturally generated lightning bolts. In 1988, the NASA Astrophysics Data System released a study of bioelectromagnetic effects upon humans. The study stated, “Evidence from the available database does not establish that EMPs represent either an occupational or a public health hazard.” Both laboratory research and years of observations on staffs of EMP manufacturing and simulation facilities indicated “no acute or short-term health effects.” The study further noted that the occupational exposure guideline for EMPs is 100 kilovolts per meter, “which is far in excess of usual exposures with EMP simulators.”[162]

NASA’s studies of EMP effects benefited nonaerospace communities. The Lightning Detection and Ranging (LDAR) system that enhanced a safe work environment at Kennedy Space Center was extended to private industry. Cooperation with private enterprises enhances commercial applications not only in aviation but in corporate research, construction, and the electric utility industry. For example, while two-dimensional commercial systems are limited to cloud-to-ground lightning, NASA’s three-dimensional LDAR provides precise location and elevation of in-cloud and cloud-to-cloud pulses by measuring arrival times of EMPs.

Nuclear- and lightning-caused EMPs share common traits. Nuclear EMPs involve three components, including the “E2” segment, which is similar to lightning. Nuclear EMPs are faster than conventional circuit breakers can handle: most breakers are intended to stop the millisecond spikes caused by lightning flashes rather than the microsecond spikes from a high-altitude nuclear explosion. The connection between ionizing radiation and lightning was readily demonstrated during the “Mike” nuclear test at Eniwetok Atoll in November 1952. The yield was 10.4 megatons, with gamma rays causing at least five lightning flashes in the ionized air around the fireball. The bolts descended almost vertically from the cloud above the fireball to the water. The observation demonstrated that, by causing atmospheric ionization, nuclear radiation can trigger a shorting of the natural vertical electric gradient, resulting in a lightning bolt.[163]

Thus, research overlap between thermonuclear- and lightning-generated EMPs is unavoidable. Apart from its role in NASA’s broader charter to conduct lightning-strike research, NASA’s workhorse F-106B was employed in a joint NASA–USAF program to compare the electromagnetic effects of lightning and nuclear detonations. In 1984, Felix L. Pitts of NASA Langley proposed a cooperative venture, leading to the Air Force lending Langley an advanced, 10-channel recorder for measuring electromagnetic pulses.

Langley used the recorder on F-106 test flights, vastly expanding its capability to measure magnetic and electrical change rates, as well as currents and voltages on wires inside the Dart. In July 1993, an Air Force researcher flew in the rear seat to operate the advanced equipment, when 72 lightning strikes were obtained. In EMP tests at Kirtland Air Force Base, the F-106 was exposed to a nuclear electromagnetic pulse simulator while mounted on a special test stand and during flybys. NASA’s Norman Crabill and Lightning Technologies’ J. A. Plumer participated in the Air Force Weapons Laboratory review of the acquired data.[164]

With helicopters becoming ever more complex and increasingly dependent upon electronics, it was natural for researchers to extend the Agency’s interest in lightning to rotary-wing craft. Drawing upon the Agency’s growing confidence in numerical computational analysis, Langley produced a numerical modeling technique to investigate the response of helicopters to both lightning and nuclear EMPs. Using a UH-60A Black Hawk as the focus, the study derived three-dimensional time-domain finite-difference solutions to Maxwell’s equations, computing external currents, internal fields, and cable responses. Analysis indicated that the short-circuit current on internal cables was generally greater for lightning, while the open-circuit voltages were slightly higher for nuclear-generated EMPs. As anticipated, the lightning response was found to be highly dependent upon the rise time of the injected current. Data showed that coupling levels to cables in a helicopter are 20 to 30 decibels (dB) greater than in a fixed-wing aircraft.[165]
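The solution method named above, finite-difference time-domain (FDTD) integration of Maxwell’s equations, marches electric and magnetic fields forward in time on a staggered grid. The one-dimensional sketch below shows only the basic update scheme; the actual Langley study was fully three-dimensional and included the UH-60A geometry, injected currents, and cable models:

```python
# One-dimensional finite-difference time-domain (FDTD) sketch of the kind of
# Maxwell solver the text describes. Grid size, time steps, and the source
# waveform constants are assumed, illustrative values.
import numpy as np

nz, nt = 400, 800
ez = np.zeros(nz)        # electric field samples
hy = np.zeros(nz - 1)    # magnetic field, staggered half a cell (Yee grid)
courant = 0.5            # normalized time step; stability requires <= 1 in 1-D

for n in range(nt):
    # Update H from the spatial difference of E, then E from H (leapfrog).
    hy += courant * np.diff(ez)
    ez[1:-1] += courant * np.diff(hy)
    # Inject a double-exponential pulse, a common lightning-current idealization.
    t = float(n)
    ez[nz // 2] += np.exp(-t / 60.0) - np.exp(-t / 15.0)

print("Peak |Ez| on grid:", np.abs(ez).max())
```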

Glass Cockpit

As aircraft systems became more complex and the amount of navigation, weather, and air traffic information available to pilots grew in abundance, the nostalgic days of “stick and rudder” men (and women) gave way to “cockpit managers.” Mechanical, analog dials showing a single piece of information (e.g., airspeed or altitude) weren’t sufficient to give pilots the full status of their increasingly complicated aircraft flying in an increasingly crowded sky. The solution came from engineers at NASA’s Langley Research Center in Hampton, VA, who worked with key industry partners to come up with an electronic flight display—what is generally known now as the glass cockpit—that took advantage of powerful, small computers and liquid crystal display (LCD) flat panel technology. Early concepts of the glass cockpit were flight-proven using NASA’s Boeing 737 flying laboratory and eventually certified for use by the FAA.[233]

A prototype “glass cockpit” that replaces analog dials and mechanical tapes with digitally driven flat panel displays is installed inside the cabin of NASA’s 737 airborne laboratory, which tested the new hardware and won support for the concept in the aviation community. NASA.

According to a NASA fact sheet,

The success of the NASA-led glass cockpit work is reflected in the total acceptance of electronic flight displays beginning with the introduction of the Boeing 767 in 1982. Airlines and their passengers, alike, have benefitted. Safety and efficiency of flight have been increased with improved pilot understand­ing of the airplane’s situation relative to its environment.

The cost of air travel is less than it would be with the old technology and more flights arrive on time.[234]

After developing the first glass cockpits capable of displaying basic flight information, NASA has continued working to make more information available to the pilots,[235] while at the same time being conscious of information overload,[236] the ability of the flight crew to operate the cockpit displays without distraction during critical phases of flight (takeoff and landing),[237] and the effectiveness of training pilots to use the glass cockpit.[238]

The Future of ATC

Fifty years of working to improve the Nation’s airways and the equipment and procedures needed to manage the system have laid the foundation for NASA to help lead the most significant transformation of the National Airspace System in the history of flight. No corner of the air traffic control operation will be left untouched. From airport to airport, every phase of a typical flight will be addressed, and new technology and solutions will be sought to raise capacity in the system, lower operating costs, increase safety, and enhance the security of an air transportation system that is so vital to our economy.

This program originated from the 2002 Commission on the Future of Aerospace in the United States, which recommended an overhaul of the air transportation system as a national priority—driven largely by the concern that air traffic is predicted to at least double during the next 20 years. Congress followed up with funding, and President George W. Bush signed into law a plan to create a Next Generation Air Transportation System (NextGen). To manage the effort, a Joint Planning and Development Office (JPDO) was created, with NASA, the FAA, the DOD, and other key aviation organizations as members.[281]

NASA then organized itself to manage its NextGen efforts through the Airspace Systems Program. Within the program, NASA’s efforts are further divided into projects that support either NextGen Airspace or NextGen Airportal. The airspace project is responsible for dealing with air traffic control issues such as increasing capacity, determining how much more automation can be introduced, scheduling, spacing of aircraft, and rolling out a GPS-based navigation system that will change the way we perceive flying. Naturally, the airportal project is examining ways to improve terminal operations in and around the airplanes, including the possibility of building new airports.[282]

Already, several technologies are being deployed as part of NextGen. One is called the Wide Area Augmentation System (WAAS); another is Automatic Dependent Surveillance–Broadcast (ADS-B). Both have to do with deploying a satellite-based GPS tracking system that would end reliance on radars as the primary means of tracking an aircraft’s approach.[283]

WAAS is designed to enhance the GPS signal from Earth orbit and make it more accurate for use in civilian aviation by correcting for the errors that are introduced in the GPS signal by the planet’s ionosphere.[284] Meanwhile, ADS-B, which is deployed at several locations around the U.S., combines information with a GPS signal and drives a cockpit display that tells the pilots precisely where they are and where other aircraft are in their area, but only if those other aircraft are similarly equipped with the ADS-B hardware. By combining ADS-B, GPS, and WAAS signals, a pilot can navigate to an airport even in low visibility.[285] NASA was a member of the Government and industry team led by the FAA that conducted an ADS-B field test several years ago with United Parcel Service at its hub in Louisville, KY. This work earned the team the 2007 Collier Trophy.

In these various ways, NASA has worked to increase the safety of the air traveler and to enhance the efficiency of the global air transportation network. As winged flight enters its second century, it is a safe bet that the Agency’s work in coming years will be as comprehensive and influential as it has been in the past, thanks to the competency, dedication, and creativity of NASA people.


A Langley Research Center human factors research engineer inspects the interior of a light business aircraft after a simulated crash to assess the loads experienced during accidents and develop means of improving survivability. NASA.

Workload, Strategic Behavior, and Decision-Making

It is well-known that more than half of aircraft incidents and accidents have occurred because of human error. These errors resulted from such factors as flightcrew distractions, interruptions, lapses of attention, and work overload.[391] For this reason, NASA researchers have long been interested in characterizing errors made by pilots and other crewmembers while performing the many concurrent flight deck tasks required during normal flight operations. Its Attention Management in the Cockpit program analyzes accident and incident reports, as well as questionnaires completed by experienced pilots, to set up appropriate laboratory experiments to examine the problem of concurrent task management and to develop methods and training programs to reduce errors. This research will help design simulated but realistic training scenarios, assist flightcrew members in understanding their susceptibility to errors caused by lapses in attention, and create ways to help them manage heavy workload demands. The intended result is increased flight safety.[392]

Likewise, safety in the air can be compromised by errors in judgment and decision making. To tackle this problem, NASA Ames Research Center joined with the University of Oregon to study how decisions are made and to develop techniques to decrease the likelihood of bad decision making.[393] Similarly, mission success has been shown to depend on the degree of cooperation between crewmembers. NASA research specifically studied such factors as building trust, sharing information, and managing resources in stressful situations. The findings of this research will be used as the basis for training crews to manage interpersonal problems on long missions.[394]

It can therefore be seen that NASA has indeed played a primary role in developing many of the human factors models in use relating to aircrew efficiency and mental well-being. These models and the training programs that incorporate them have helped both military and civilian flightcrew members improve their management of resources in the cockpit and make better individual and team decisions in the air. This knowledge has also helped more clearly define and minimize the negative effects of crew fatigue and excessive workload demands in the cockpit. Further, NASA has played a key role in assisting both the aviation industry and DOD in setting up many of the training programs that are utilizing this new technology to improve flight safety.

Progress and Design Data

In the 1920s and 1930s, researchers in several wind tunnel and full-scale aircraft flight groups at Langley conducted analytical and experimental investigations to develop design guidelines to ensure satisfactory stability and control behavior.[468] Such studies sought to develop methods to reliably predict the inherent flight characteristics of aircraft as affected by design variables such as the wing dihedral angle, sizes and locations of the vertical and horizontal tails, wing planform shape, engine power, mass distribution, and control surface geometry. The staff of the Free-Flight Tunnel joined in these efforts with several studies that correlated the qualitative behavior of free-flight models with analytical predictions of dynamic stability and control characteristics. Coupled with the results from other facilities and analytical groups, the free-flight results accelerated the maturity of design tools for future aircraft from a qualitative basis to a quantitative methodology, and many of the methods and design data derived from these studies became classic textbook material.[469]

By combining free-flight testing with theory, the researchers were able to quantify desirable design features, such as the amount of wing dihedral angle and the relative size of vertical tail required for satisfactory behavior. With these data in hand, methods were also developed to theoretically solve the dynamic equations of motion of aircraft and determine dynamic stability characteristics such as the frequency of inherent oscillations and the damping of motions following inputs by pilots or turbulence.
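In modern terms, solving the dynamic equations of motion amounts to finding the eigenvalues of the linearized airframe dynamics: a complex eigenvalue pair gives the frequency and damping of an inherent oscillation. A minimal sketch follows, using an assumed two-state short-period approximation with illustrative derivative values rather than data from any aircraft studied at Langley:

```python
# Sketch of extracting oscillation frequency and damping from linearized
# equations of motion. The 2x2 short-period approximation and the derivative
# values are illustrative assumptions, not Langley data.
import numpy as np

# State: [angle of attack (rad), pitch rate (rad/s)]; dynamics: x_dot = A @ x.
A = np.array([[-1.2,  1.0],
              [-6.0, -2.0]])   # assumed dimensional stability derivatives

eigvals = np.linalg.eigvals(A)
lam = eigvals[0]                 # one of the complex-conjugate pair
omega_n = abs(lam)               # undamped natural frequency, rad/s
zeta = -lam.real / omega_n       # damping ratio

print(f"eigenvalues: {eigvals}")
print(f"natural frequency: {omega_n:.2f} rad/s, damping ratio: {zeta:.2f}")
```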

During the final days of model flight projects in the Free-Flight Tunnel in the mid-1950s, various Langley organizations teamed to quantify the effects of aerodynamic dynamic stability parameters on flying characteristics. These efforts included correlation of experimentally determined aerodynamic stability derivatives with theoretical predictions and comparisons of the results of qualitative free-flight tests with theoretical predictions of dynamic stability characteristics. In some cases, rate gyroscopes and servos were used to artificially vary the magnitudes of dynamic aerodynamic stability parameters such as yawing moment due to rolling.[470] In these studies, the free-flight model results served as a critical test of the validity of theory.

Spin Entry

The helicopter drop-model technique has been used since the early 1950s to evaluate the spin entry behavior of relatively large unpowered models of military aircraft. The objective of these tests has been to evaluate the relative spin resistance of configurations following various combinations of control inputs, and the effects of the timing of recovery control inputs following departures. A related technique used for spin entry evaluations of general aviation configurations employs remotely controlled powered models that take off from ground runways and fly to the test condition.

In the late 1950s, industry had become concerned over potential scale effects on long pointed fuselage shapes as a result of the XF8U-1 experiences in the Spin Tunnel, as discussed earlier. Thus, interest was growing over the possible use of much larger models than those used in spin tunnel tests, to eliminate or minimize undesirable scale effects. Finally, a major concern arose for some airplane designs over the launching technique used in the Spin Tunnel. Because the spin tunnel model was launched by hand in a very flat attitude with forced rotation, it would quickly seek the developed spin modes—a very valuable output—but the full-scale airplane might not easily enter the spin because of control limitations, poststall motions, or other factors.

One of the first configurations tested, in 1958, to establish the credibility of the drop-model program was a 6.3-foot-long, 90-pound model of the XF8U-1 configuration.[519] With previously conducted spin tunnel results in hand, the choice of this design permitted correlation with the earlier tunnel and aircraft flight-test results. As has been discussed, wind tunnel testing of the XF8U-1 fuselage forebody shape had indicated that pro-spin yawing moments would be produced by the fuselage for values of Reynolds number below about 400,000, based on the average depth of the fuselage forebody. The Reynolds number for the drop-model tests ranged from 420,000 to 505,000, at which the fuselage contribution became antispin, and the spin and recovery characteristics of the drop model were found to be very similar to the full-scale results. In particular, the drop model did not exhibit a flat-spin mode predicted by the smaller spin tunnel model, and results were in agreement with results of the aircraft flight tests, demonstrating the value of larger models from a Reynolds number perspective.
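The Reynolds number in question is the standard Re = ρVL/μ, computed on the average forebody depth. The 400,000 and 420,000–505,000 figures come from the text; the airspeed, forebody depth, and sea-level air properties in this sketch are assumptions chosen only to show the order of magnitude involved:

```python
# Illustrative Reynolds-number check of the kind underlying the XF8U-1 drop
# tests: Re = rho * V * L / mu on the average fuselage-forebody depth.
# Only the ~400,000 threshold is from the text; everything below is assumed.
rho = 1.225    # air density, kg/m^3 (sea level, standard atmosphere)
mu = 1.81e-5   # dynamic viscosity of air, kg/(m*s)
V = 45.0       # assumed model airspeed during the spin, m/s
depth = 0.17   # assumed average fuselage-forebody depth of the model, m

Re = rho * V * depth / mu
print(f"Re = {Re:,.0f}")  # ~518,000 with these assumed values, above the
                          # ~400,000 pro-spin threshold noted in the text
```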

Success in applications of the drop-model technique for studies of spin entry led to many military requests for evaluations of emerging fighter aircraft. In 1959, the Navy requested an evaluation of the McDonnell F4H-1 Phantom II airplane using the drop technique.[520] Earlier spin tunnel tests of the configuration indicated the possibility of two types of spins: one steep and oscillatory, from which recoveries were satisfactory, and the other fast and flat, from which recovery was difficult or impossible. As mentioned previously, the spin tunnel launching technique had led to questions regarding whether the airplane would exhibit a tendency toward the steeper spin or the more dangerous flat spin. The objective of the drop tests was to determine if it was likely, or even possible, for the F4H-1 to enter the flat spin.

In the F4H-1 investigation, an additional launching technique was used in an attempt to obtain a developed spin more readily and to possibly obtain the flat spin to verify its existence. This technique consisted of prespinning the model on the helicopter launch rig before it was released in a flat attitude with the helicopter in a hovering condition. To achieve even higher initial rotation rates than could be achieved on the launch rig, a detachable flat metal vane was attached to one wingtip of the model to propel it to spin even faster. When the model appeared to be rotating sufficiently fast after release, the vane was jettisoned by the ground-based pilot, who, at the same time, moved the ailerons against the direction of rotation to help promote the spin. The model was then allowed to spin for several turns, after which recovery controls were applied. In some respects, this approach to testing replicated the spin tunnel launch technique, but at a larger scale.

Results of the drop-model investigation for the F4H-1 are especially notable because it established the value of the testing technique to predict spin tendencies as verified by subsequent full-scale results. A total of 35 flights were made, with the model launched 15 times in the prerotated condition and 20 times in forward flight. During these 35 flights, poststall gyrations were obtained on 21 occasions, steep spins were obtained on 10 flights, and only 4 flat spins were obtained. No recoveries were possible from the flat spins, but only one flat spin was obtained without prerotation. The conclusions of the tests stated that the aircraft was more susceptible to poststall gyrations than spins; that the steeper, more oscillatory spin would be more readily obtainable and recovery could be made by the NASA-recommended control technique; and that the likelihood of encountering a fast, flat spin was relatively remote. Ultimately, these general characteristics of the airplane were replicated at full-scale test conditions during spin evaluations by the Navy and Air Force.

Applying Hypersonic Test Facilities to Hypersonic Vehicle Design

One of NASA’s first flight research studies was the X-15 program (1959–1968). The program investigated flight at five or more times the speed of sound at altitudes reaching the fringes of space. Launched from the wing of NASA’s venerable Boeing B-52 mother ship, the North American X-15 was a true “aerospace” plane, with performance that went well beyond the capabilities of existing aircraft within and beyond the atmosphere. Long, black, rocket-powered, and distinctive with its cruciform tail, the X-15 became the highest-flying airplane in history. In one flight, the X-15 flew to 67 miles (354,200 feet) above the Earth; in another, it reached a speed of Mach 6.7, or 4,534 mph. At those speeds and altitudes, the X-15 pilots, drawn from the ranks of leading military and civilian aviators, had to wear pressure suits, and many of them earned astronaut’s wings. North American used titanium as the primary structural material and covered it with a new high-temperature nickel alloy called Inconel-X. The X-15 relied upon conventional controls in the atmosphere but used reaction-control jets to maneuver in space. The 199 flights of the X-15 program generated important data on high-speed flight and provided valuable lessons for NASA’s space program.

The air traveling over the X-15 at hypersonic speeds generated enough friction and heat that the outside surface of the airplane reached 1,200 °F. A dozen Langley and Ames wind tunnels contributed to the X-15 program. The sole source of hypersonic aerodynamic data for the X-15 came from tests in the pioneering Mach 6.8 11-Inch Hypersonic Tunnel developed by John Becker at Langley in the late 1940s. Fifty percent of the work conducted in the tunnel was for the X-15 program, which focused on aerodynamic heating, stability and control, and load distribution studies. The stability and control investigations contributed to the research airplane’s distinctive cruciform tail. The 7- by 10-Foot High-Speed Wind Tunnel enabled the study of the X-15’s separation from the B-52 at subsonic speeds, a crucial phase in the test flight. At Ames, gun-launched models fired into the free-flight tunnels obtained shadowgraphs of the shock wave patterns between Mach 3.5 and 6, the performance regime for the X-15. The Unitary Plan Supersonic Tunnel generated data on aerodynamic forces and heat transfer. The Lewis Research Center facilities provided additional data on supersonic jet plumes and rocket nozzles.[595]

Part of the Project Fire study included the simulation of reentry heating on high-temperature materials in the 9- by 6-Foot Thermal Structures Tunnel. NASA.

There was concern that wind tunnel tests would not provide correct data for the program. First, the cramped test sections of the tunnels were too small to permit accurate full-scale testing. Second, none of NASA’s tunnels was capable of replicating the extreme heat generated by hypersonic flight, which was believed to be a major factor in flying at those speeds. The flights of the X-15 validated the wind tunnel testing and revealed that flight-measured lift, drag, and stability values agreed with tunnel predictions at speeds up to Mach 10.[596]

The wind tunnels of NASA continued to reflect the Agency’s flexibility in the development of craft that operated in and out of the Earth’s atmosphere. Specific components evaluated in the 9- by 6-Foot Thermal Structures Tunnel included the X-15 vertical tail, the heat shields for the Centaur launch vehicle and Project Fire entry vehicle, and components of the Hawk, Falcon, SAM-D, and Minuteman missiles. Researchers also subjected humans, equipment, and structures such as the Mercury spacecraft to the 162-decibel, high-intensity noise at the tunnel exit. As part of Project Fire, in the early 1960s, personnel in the tunnel evaluated the effects of reentry heating on spacecraft materials.[597]

The Air Force’s failed X-20 Dyna-Soar project attempted to develop a winged spacecraft; the X-20 never flew, primarily because of bureaucratic entanglements. NASA researchers H. Julian Allen and Alfred J. Eggers, Jr., working on ballistic missiles, found that a blunt shape made reentry possible.[598] NASA developed a series of “lifting bodies”—capable of reentry and then being controlled in the atmosphere—to test unconventional blunt configurations. The blunt nose and wing leading edges of the Space Shuttles that were launched into space and then glided to a landing after reentry, starting with Columbia in April 1981, owe their success to the lifting body tests flown by NASA in the 1960s and 1970s.

The knowledge gained in these programs contributed to the Space Shuttle of the 1980s. Analyses of the Shuttle reflected the tradition, dating back to the Wright brothers, of correlating ground, or wind tunnel, data with flight data. Langley researchers conducted an extended aerodynamic and aerothermodynamic comparison of hypersonic flight- and ground-test results for the program. The research team asserted that the “survival of the vehicle is a tribute to the overall design philosophy, including ground test predictions, and to the designers of the Space Shuttle.”[599]


H. Julian Allen used the 8- by 7-foot test section of the NACA Ames Unitary Plan Wind Tunnel during the development of his blunt-body theory. NASA.

The latest NASA research program, called Hyper-X, investigated hypersonic flight with a new type of aircraft engine, the X-43A scramjet, or supersonic combustion ramjet. The previous flights of the X-15, the lifting bodies, and the Space Shuttle relied upon rocket power for hypersonic propulsion. A conventional air-breathing jet engine, which relies upon the mixture of air and atomized fuel for combustion, can only propel aircraft to speeds approaching Mach 4. A scramjet can operate well past Mach 5 because the process of combustion takes place at supersonic speeds. Mounted on the nose of a rocket booster launched from a B-52 at 40,000 feet, the 12-foot-long, 2,700-pound X-43A first flew in March 2004. During the 11-second flight, the little engine reached Mach 6.8 and demonstrated the first successful operation of a scramjet. In November 2004, a second flight achieved Mach 9.8, the fastest speed ever attained by an air-breathing engine. Much like Frank Whittle’s and Hans von Ohain’s turbojets and the Wrights’ invention of the airplane, the X-43A offered the promise of a new revolution in aviation: high-speed global travel and a cheaper means to access space.

The diminutive X-43A allowed for realistic testing at NASA Langley. First, it was at full scale for the specific scramjet tests. Moreover, it served as a scale model for the hypersonic engines intended for future aerospace craft. The majority of the testing for the Hyper-X program occurred in the Arc-Heated Scramjet Test Facility, which was the primary Mach 7 scramjet test facility. Introduced in the late 1970s, the Langley facility generated the appropriate flows at 3,500 °F. Additional transonic and supersonic tests of 30-inch X-43A models took place in the 16-Foot Transonic Tunnel and the Unitary Plan Wind Tunnel.[600]

Researchers in the Langley Aerothermodynamics Branch worked on a critical phase of the flight: the separation of the X-43A from the Pegasus booster. The complete Hyper-X Launch Vehicle stack, consisting of the scramjet and booster, climbed to 20,000 feet under the wing of NASA’s Boeing B-52 Stratofortress in captive/carry flight. Clean separation between the two within less than a second ensured the success of the flight. The X-43A, with its asymmetrical shape, did not facilitate that clean separation. The Langley team required a better aerodynamic understanding of multiple configurations: the combined stack, the X-43A and the Pegasus in close proximity, and each vehicle in open, free flight. The Langley 20-Inch Mach 6 and 31-Inch Mach 10 blow-down tunnels were used for launch, postlaunch, and free-flyer hypersonic testing.[601]

Understanding GA Aircraft Behavior and Handling Qualities

As noted earlier, the NACA research on aircraft performance began at the onset of the Agency. The steady progression of aircraft technology was matched by an equivalent progression in the understanding and comprehension of aircraft motions, beginning with extensive studies of the loads, stability, control, and handling qualities fighter biplanes encountered during steady and maneuvering flight.[807] At the end of the interwar period, NACA Langley researchers undertook a major evaluation of the flying qualities of American GA aircraft, though the results of that investigation were not disseminated because of the outbreak of the Second World War and the need for the Agency to focus its attention on military, not civil, needs. Langley test pilots flew five representative aircraft, and the test results, on the whole, were generally satisfactory. Control effectiveness was, overall, good, and the aircraft demonstrated a desirable degree of longitudinal (pitch) inherent stability, though two of the designs had degraded longitudinal stability at low speeds. Lateral (roll) stability was likewise satisfactory, but “wide variations” were found in directional stability, though rudder inputs on each were sufficient to trim the aircraft for straight flight. Stall warning (exemplified by progressively more violent airframe buffeting) was good, and each aircraft possessed adequate stall recovery behavior, though departures from controlled flight during stalls in turns proved more violent (the airplane rolling in the direction of the downward wing) than stalls made from wings-level flight. In all cases, aileron power was inadequate to maintain lateral control at the stall. Stall recovery was “easily made” in every case simply by pushing forward on the elevator. Overall, if some performance deficiencies existed—for example, the tendency to spiral instability or the lack of lateral control effectiveness at the stall—such limitations were small compared with the dramatic handling qualities deficiencies of many early aircraft just two decades previously, at the end of the First World War. This survey demonstrated that by 1940 America had mastered the design of the practical, useful GA airplane. Indeed, such aircraft, built by the thousands, would play a critical role in initiating many young Americans into wartime service as combat and combat support pilots.[808]


The Aeronca Super Chief shown here was evaluated at Langley as part of a prewar survey of General Aviation aircraft handling and flying qualities. NASA.

During the Second World War, the NACA generated a new series of so-called Wartime Reports, complementing its prewar series of Technical Reports (TR), Technical Memoranda (TM), and Technical Notes (TN). These subsequently had great influence upon aircraft design and engineering practice, particularly after the war, when applied to high-performance GA aircraft. The NACA studied various ways to improve aircraft performance through drag reduction of single-engine military fighter-type aircraft and other designs, resulting in improved handling qualities and increased airspeeds. The first Wartime Report was published in October 1940 by NACA engineers C. H. Dearborn and Abe Silverstein. It described tests investigating methods for increasing the maximum speed of 11 single-engine military aircraft for the Army Air Corps. The tests found inefficient design features on many of these airplanes, indicating the desirability of analyzing and combining all of the results into a single paper for distribution to designers. The work highlighted one of the major problems afflicting aircraft design and performance analysis: understanding the interrelationship of design, performance, and handling qualities.[809]


The fifteen different types of aircraft evaluated as part of a landmark study on longitudinal stability represented various configurations and design layouts, both single and multiengine, and from light general aviation designs to experimental heavy bombers. From NACA TR-711 (1941).

The NACA had long recognized “the need for quantitative design criterions for describing those qualities of an airplane that make up satisfactory controllability, stability, and handling characteristics,” and the individual who, more than any other, spurred Agency development of them was Robert R. Gilruth, later a towering figure in the development of America’s manned spaceflight program.[810] Gilruth’s work built upon earlier preliminary efforts by two fellow Langley researchers, Hartley A. Soule (later chairman of the NACA Research Airplane Projects Panel that oversaw the postwar X-series transonic and supersonic research airplane programs) and chief Agency test pilot Melvin N. “Mel” Gough, though it went considerably beyond. In 1941, Gilruth and M. D. White assessed the longitudinal stability characteristics of 15 different airplanes (including bombers, fighters, transports, trainers, and GA sport aircraft).[812] Gilruth followed this with another study, in partnership with W. N. Turner, on the lateral control required for satisfactory flying qualities, again based on flight tests of numerous airplanes.[813] Gilruth capped his research with a landmark report establishing the requirements for satisfactory handling qualities in airplanes, issued first as an Advanced Confidential Report in April 1941, then as a Wartime Report, and, finally, in 1943, as one of the Agency’s Technical Reports, TR-755. Based on “real-world” flight-test results, TR-755 defined what measured characteristics were significant in the definition of satisfactory flying qualities, what were reasonable to require from an airplane (and thus to establish as design requirements), and what influence various design features had upon the flying qualities of the aircraft once it entered flight testing.[814] Together, this trio profoundly influenced the field of flying qualities assessment.[811]

But what was equally needed was a means of establishing a standard measure for pilot assessment of aircraft handling qualities. This proved surprisingly difficult to achieve and took a number of years of effort. Indeed, developing such measures took on such urgency and constituted such a clear requirement that it was one of the compelling reasons underlying the establishment of professional test pilot training schools, beginning with Britain’s Empire Test Pilots’ School, established in 1943.[815] The measure was finally derived by two American test pilots, NASA’s George Cooper and the Cornell Aeronautical Laboratory’s Robert Harper, Jr., thereby establishing one of the essential tools of flight testing and flight research, the Cooper-Harper rating scale, issued in 1969 in a seminal report.[816] This evaluation tool quickly replaced earlier scales and measures and won international acceptance, influencing the flight-test evaluation of virtually all flying craft, from light GA aircraft through hypersonic lifting reentry vehicles and rotorcraft. The combination of the work undertaken by Gilruth, Cooper, and their associates dramatically improved flight safety and flight efficiency, and must therefore be considered one of the NACA-NASA’s major contributions to aviation.[817]

The Cessna C-190 shown here was evaluated at Langley as part of an early postwar assessment of General Aviation aircraft performance. NASA.

Despite the demands of wartime research, the NACA and its research staff continued to maintain a keen interest in the GA field, particularly as expectations (subsequently frustrated by postwar economics) anticipated massive sales of GA aircraft as soon as the conflict ended. While this was true in 1946—when 35,000 were sold in a single year!—the postwar market swiftly contracted by half, and then fell again, to just 3,000 in 1952, a “boom-bust” cycle the field would, alas, all too frequently repeat over the next half-century.[818] Despite this, hundreds of NACA general-aviation-focused reports, notes, and memoranda were produced—many reflecting flight tests of new and interesting GA designs—but, as well, some already-classic machines such as the Douglas DC-3, which underwent a flying qualities evaluation at Langley in 1950 as an exercise to calculate its stability derivatives and, as well, update and refine the then-existing Air Force and Navy handling qualities specification guidebooks. Not surprisingly, the project pilot concluded, “the DC-3 is a very comfortable airplane to fly through all normal flight regimes, despite fairly high control forces about all three axes.”[819]

On October 4, 1957, Sputnik rocketed into orbit, heralding the onset of the "Space Age” and the consequent transformation of the NACA into the National Aeronautics and Space Administration (NASA). But despite the new national focus on space, NASA maintained a broad program of aeronautical research—the lasting legacy of the NACA—even in the shadow of Apollo and the Kennedy-mandated drive to Tranquility Base.


The Beech Debonair, one of many General Aviation aircraft types evaluated at the NASA Flight Research Center (now the NASA Dryden Flight Research Center). NASA.

This included, in particular, the field of GA flying and handling qualities. The first GA report written under NASA, in 1960, presented the status of spin research—a traditional area of concern, particularly as spins were a killer of low-flying-time pilots—as interpreted from recent airplane designs at the NASA Langley Research Center, Langley, VA.[820] Sporadically, NASA researchers flight-tested new GA designs to assess their handling qualities, performance, and flight safety, their flight-test reports frankly detailing both strengths and deficiencies. In December 1964, for example, NASA Flight Research Center test pilot William Dana (one of the Agency’s X-15 pilots) evaluated a Beech Debonair, a conventional-tailed derivative of the V-tail Beech Bonanza. Dana found the sleek Debonair a satisfactory aircraft overall. It had excellent longitudinal, spiral, and speed stability, with good roll damping and “honest” stall behavior in the “clean” (landing gear retracted) configuration. But he faulted it for a lack of rudder trim that hurt its climb performance, lack of “much warning, either by stick or airframe buffet” of impending stalls, and poor gear-down stall performance manifested by an abrupt left wing drop that hindered recovery. Finally, the plane’s tendency to promote pilot-induced oscillations (PIO) during its landing flare earned it a pilot-rating grade of “C” for landings.[821]

The growing recognition that GA technology had advanced far beyond the state of GA that had existed at the time of the NACA’s first qualitative examination of light aircraft handling qualities triggered one of the most significant of NASA’s GA assessment programs. In 1966, at the height of the Apollo program, pilots and engineers at the Flight Research Center performed an evaluation of the handling qualities of seven GA aircraft, expanding upon this study subsequently to include the handling qualities of other light aircraft and advanced control systems and displays. The aircraft for the 1966 study were a mix of popular single- and twin-engine, high- and low-wing types. Project pilot was Fred W. Haise (subsequently an Apollo 13 astronaut); Marvin R. Barber, Charles K. Jones, and Thomas R. Sisk were project engineers.[822]

As a group, the seven aircraft all exhibited generally satisfactory stability and control characteristics. However, these characteristics, as researchers noted,

Degraded with decreasing airspeed, increasing aft center of gravity, increasing power, and extension of gear and flaps.

The qualitative portion of the program showed the handling qualities were generally satisfactory during visual and instrument flight in smooth air. However, atmospheric turbulence degraded these handling qualities, with the greatest degradation noted during instrument landing system approaches. Such factors as excessive control-system friction, low levels of static stability, high adverse yaw, poor Dutch roll characteristics, and control-surface float combined to make precise instrument tracking tasks, in the presence of turbulence, difficult even for experienced instrument pilots.

The program revealed three characteristics of specific airplanes that were considered unacceptable if encountered by inexperienced or unsuspecting pilots: (1) a violent elevator force reversal or reduced load factors in the landing configuration, (2) power-on stall characteristics that culminate in rapid roll-offs and/or spins, and (3) neutral-to-unstable static longitudinal stability at aft center of gravity.

A review indicated that existing criteria had not kept pace with aircraft development in areas of Dutch roll, adverse yaw, effective dihedral, and allowable trim changes with gear, flap, and power. This study indicated that criteria should be specified for control-system friction and control-surface float.

This program suggested a method of quantitatively evaluating the handling qualities of aircraft by the use of a pilot-workload factor.[823]

As well, all of the aircraft tested had “undesirable and inconsistent placement of both primary flight instruments and navigational displays,” increasing pilot workload, a matter of critical concern during precision instrument landing approaches.[824] Further, they all lacked good stall warning (defined as progressively strong airframe buffet prior to stall onset). Two had “unacceptable” stall characteristics, one entering an “uncontrollable” left roll/yaw and altitude-consuming spin, and the other having “a rapid left rolloff in the power-on accelerated stall with landing flaps extended.”[825]

The 1966 survey stimulated more frequent evaluations of GA designs by NASA research pilots and engineers, both out of curiosity and sometimes after accounts surfaced of marginal or questionable behavior. NASA test pilots and engineers found that while various GA designs had “generally satisfactory” handling qualities for flight in smooth air and under visual conditions, they had far different qualities in turbulent flight and with degraded visibility. Control system friction, longitudinal and spiral instability, adverse yaw, combined lateral-directional “Dutch roll” characteristics, abrupt trim changes when deploying landing gear and flaps, and adding or subtracting power all inhibited effective precision instrument tracking. Thus, instrument landing approaches quickly taxed a pilot, markedly increasing pilot workload. The FRC team explored applying advanced control systems and displays, modifying a light twin-engine Piper PA-30 Twin Comanche business aircraft as a GA testbed with a flight-director display and an attitude-command control system. The result, demonstrated in 72 flight tests and over 120 hours of operation, was “a flying machine that borders on being perfect from a handling qualities standpoint during ILS approaches in turbulent air.” The team presented their findings at a seminal NASA conference on aircraft safety and operating problems held at the Langley Research Center in May 1971.[826]

The workhorse Piper PA-30 on final approach for a lakebed landing at the Dryden Flight Research Center. NASA.

The little PA-30 proved a workhorse, employed for a variety of research studies, including exploring remotely piloted vehicle technology.[827] During the period 1969–1972, NASA researchers Chester Wolowicz and Roxanah Yancey undertook wind tunnel and flight tests on it to investigate and assess its longitudinal and lateral static and dynamic stability characteristics.[828] These tests documented representative state-of-the-art analytical procedures and design data for predicting the subsonic longitudinal static and dynamic stability and control characteristics of a light, propeller-driven airplane.[829] But the tests also confirmed, as one survey undertaken by North Carolina State University researchers for NASA concluded, that much work remained to be done to define and properly quantify the desirable handling qualities of GA aircraft.[830]

Fortunately, a key tool was rapidly maturing that made such analysis far more attainable than it would have been just a few years previously: the computer. Given a properly written analytical program, it had the ability to rapidly extract relevant performance parameters from flight-test data. Over several decades, estimating stability and control parameters from flight-test data had progressed through simple analog matching methodologies, time vector analysis, and regression analysis.[831] A joint program between the NASA Langley Research Center and the Aeronautical Laboratory of Princeton University using a Ryan Navion demonstrated that an iterative “maximum-likelihood minimum variance” parameter estimation procedure could be used to extract key aerodynamic parameters based on flight test results, but also showed that caution was warranted. Unanticipated relations between the various parameters had made it difficult to sort out individual values and indicated that, prior to such studies, researchers should have a reliable mathematical model of the aircraft.[832] At the Flight Research Center, Richard E. Maine and Kenneth W. Iliff extended such work by applying IBM’s FORTRAN programming language to ease determination of aircraft stability and control derivatives from flight data. Their resulting program, a maximum likelihood estimation method supported by two associated programs for routine data handling, was validated by successful analysis of 1,500 maneuvers executed by 20 different aircraft and was made available for use by the aviation community via a NASA Technical Note issued in April 1975.[833] Afterwards, NASA, the Beech Aircraft Corporation, and the Flight Research Laboratory at the University of Kansas collaborated on a joint flight test of a loaned Beech 99 twin-engine commuter aircraft, extracting longitudinal and lateral-directional stability derivatives during a variety of maneuvers at assorted angles of attack and in clean and flaps-down conditions. “In general,” researchers concluded, “derivative estimates from flight data for the Beech 99 airplane were quite consistent with the manufacturer’s predictions.”[834] Another analytical tool was thus available for undertaking flying and handling qualities analysis.
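The idea behind such parameter estimation can be shown with a toy example: simulate a known dynamic system, add measurement noise, and recover the stability derivatives by fitting the equations of motion to the “flight” data. The single-degree-of-freedom roll model and equation-error least-squares fit below are a deliberately simplified stand-in for the maximum likelihood output-error methods of Maine and Iliff; every numerical value is an assumption:

```python
# Toy illustration of extracting stability derivatives from flight data, in
# the spirit of (but far simpler than) the programs described in the text.
# Model: roll dynamics p_dot = Lp*p + Lda*da. All values are assumed.
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.02, 500
Lp_true, Lda_true = -2.0, 8.0          # assumed roll damping / aileron power

da = np.sin(0.5 * np.arange(n) * dt)   # slow sinusoidal aileron input, rad
p = np.zeros(n)                        # roll rate, rad/s
for k in range(n - 1):                 # simulate the "flight" response
    p[k + 1] = p[k] + dt * (Lp_true * p[k] + Lda_true * da[k])

p_meas = p + rng.normal(0.0, 0.02, n)  # add sensor noise

# Equation-error least squares: regress measured p_dot on [p, da].
p_dot = np.gradient(p_meas, dt)
X = np.column_stack([p_meas, da])
Lp_est, Lda_est = np.linalg.lstsq(X, p_dot, rcond=None)[0]
print(f"Lp = {Lp_est:.2f} (true {Lp_true}), Lda = {Lda_est:.2f} (true {Lda_true})")
```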

Pursuing Highly Maneuverable Aircraft Technology

The HiMAT research vehicle demonstrated advanced technologies for use in high-performance military aircraft. NASA.

In 1973, NASA and Air Force officials began exploring a project to develop technologies for advanced fighter aircraft. Several aerospace contractors submitted designs for a baseline advanced-fighter concept with performance goals of a 300-nautical-mile mission radius, sustained 8 g maneuvering capability at Mach 0.9, and a maximum speed of Mach 1.6 at 30,000 feet altitude. The Los Angeles Division of Rockwell International was selected to build a 44-percent-scale, remotely piloted model for a project known as Highly Maneuverable Aircraft Technology (HiMAT). Testing took place at Dryden, initially under the leadership of Project Manager Paul C. Loschke and later under Henry Arnaiz.[945] The scale factor for the RPRV was determined by cost considerations, payload requirements, test-data fidelity, close matching of thrust-to-weight ratio and wing loading between the model and the full-scale design, and availability of off-the-shelf hardware. The overall geometry of the design was faithfully scaled with the exception of fuselage diameter and inlet-capture area, which were necessarily over-scale in order to accommodate a 5,000-pound-thrust General Electric J85-21 afterburning turbojet engine.
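The consequences of matching wing loading and thrust-to-weight ratio across scales follow from simple geometric scaling: areas shrink with the square of the scale factor, so holding W/S constant fixes the sub-scale weight. The sketch below works this through using HiMAT's published scale factor, weight, and thrust; the implied full-scale figures are back-of-the-envelope illustrations, not program numbers.

    # Back-of-the-envelope scaling check for a sub-scale RPRV.
    # Under geometric scaling, lengths scale by k and areas by k**2.
    k = 0.44                # HiMAT scale factor
    w_model = 3_370.0       # model weight, lb
    thrust_model = 5_000.0  # J85-21 thrust, lb

    # Holding wing loading W/S constant while S scales by k**2
    # means weight must also scale by k**2.
    w_full = w_model / k**2
    print(f"implied full-scale weight: {w_full:,.0f} lb")  # about 17,400 lb

    # Matching thrust-to-weight ratio then sets the full-scale thrust.
    tw = thrust_model / w_model
    print(f"thrust-to-weight ratio: {tw:.2f}")             # about 1.48
    print(f"implied full-scale thrust: {tw * w_full:,.0f} lb")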

Advanced technology features included maximum use of lightweight, high-strength composite materials to minimize airframe weight; aeroelastic tailoring to provide aerodynamic benefits from the airplane's structural-flexibility characteristics; relaxed static stability, to provide favorable trim-drag effects; digital fly-by-wire controls; a digital integrated propulsion-control system; and such advanced aerodynamic features as close-coupled canards, winglets, variable-camber leading edges, and supercritical wings. Composite materials, mostly graphite/epoxy, comprised about 95 percent of the exterior surfaces and approximately 29 percent of the total structural weight of the airplane. Researchers were interested in studying the interaction of the various new technologies.[946] To keep development costs low and allow maximum flexibility for proposed follow-on programs, the HiMAT vehicle was modular, permitting easy reconfiguration of external geometry and propulsion systems. Follow-on research proposals included forward-swept wings, a two-dimensional exhaust nozzle, alternate canard configurations, active flutter suppression, and various control-system modifications. These options, however, were never pursued.[947] Rockwell built two HiMAT air vehicles, known as AV-1 and AV-2, at a cost of $17.3 million. Each was 22.5 feet long, spanned 15.56 feet, and weighed 3,370 pounds. The vehicle was carried to a launch altitude of 40,000 to 45,000 feet beneath the wing of the NB-52B. Following release from the wing pylon at a speed of about Mach 0.7, the HiMAT dropped for 3 seconds in a preprogrammed maneuver before transitioning to control by the ground pilot. Research flight-test maneuvers were restricted to within a 50-nautical-mile radius of Edwards and ended with landing on Rogers Dry Lake. The HiMAT was equipped with steel skid landing gear. Maximum flight duration varied from about 15 to 80 minutes, depending on thrust requirements, with an average planned flight duration of about 30 minutes.

As delivered, the vehicles were equipped with a 227-channel data collection and recording system. Each RPRV was instrumented with 128 surface-pressure orifices with 85 transducers, 48 structural load and hinge-moment strain gauges, 6 buffet accelerometers, 7 propulsion system parameters, 10 control-surface-position indicators, and 15 airplane motion and air data parameters. NASA technicians later added more transducers for a surface-pressure survey.[948] The HiMAT project represented a shift in focus by researchers at Dryden. Through the Vietnam era, the focal point of fighter research had been speed. In the 1970s, driven by a national energy crisis, new digital technology, and a changing combat environment, researchers sought to develop efficient research models for experiments into the extremes of fighter maneuverability. As a result, the quest for speed, long considered the key component of successful air combat, became secondary.

HiMAT program goals included a 100-percent increase in aerodynamic efficiency over 1973 technology and maneuverability that would allow a sustained 8 g turn at Mach 0.9 and an altitude of 25,000 feet. Engineers designed the HiMAT aircraft's rear-mounted swept wings, digital flight-control system, and forward-mounted controllable canards to give the plane a turn radius twice as tight as that of conventional fighter planes. At near-sonic speeds and at an altitude of 25,000 feet, the HiMAT aircraft could perform an 8 g turn, nearly twice the capability of an F-16 under the same conditions.[949] Flying the HiMAT from the ground-based cockpit using the digital fly-by-wire system required control techniques similar to those used in conventional aircraft, although design of the vehicle's control laws had proved extremely challenging. The HiMAT was equipped with a flight-test-maneuver autopilot based on a design developed by Teledyne Ryan Aeronautical Company, which also developed the aircraft's backup flight control system (with modifications made by Dryden engineers). The autopilot system provided precise, repeatable control of the vehicle during prescribed maneuvers so that large quantities of reliable test data could be recorded in a comparatively short period of flight time. Dryden engineers and pilots tested the control laws for the system in simulations and in flight, making any necessary adjustments based on experience. Once adjusted, the autopilot was a valuable tool for obtaining high-quality, precise data that would not have been obtainable using standard piloting methods. The autopilot enabled the pilot to control multiple parameters simultaneously and to do so within demanding, repeatable tolerances. As such, the flight-test-maneuver autopilot showed itself to be a broadly applicable technique for flight research, with potential benefit to any flight program.[950]
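The turn-radius claim can be sanity-checked with the standard level-turn relation r = V^2 / (g * sqrt(n^2 - 1)). In the sketch below, the speed of sound at 25,000 feet and the 4 g comparison point are assumed round values for illustration, not program data.

    import math

    g = 32.174   # gravitational acceleration, ft/s^2
    a = 1016.0   # approximate speed of sound at 25,000 ft, ft/s (assumed)
    V = 0.9 * a  # Mach 0.9 true airspeed

    def turn_radius(V, n):
        """Radius of a steady level turn at load factor n."""
        return V**2 / (g * math.sqrt(n**2 - 1))

    # HiMAT's sustained 8 g point versus an assumed 4 g conventional fighter:
    print(f"8 g radius: {turn_radius(V, 8):,.0f} ft")  # about 3,300 ft
    print(f"4 g radius: {turn_radius(V, 4):,.0f} ft")  # about 6,700 ft

Doubling the sustainable load factor at the same speed roughly halves the turn radius, consistent with the "twice as tight" comparison in the text.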

The maiden flight of HiMAT AV-1 took place July 27, 1979, with Bill Dana at the controls. All objectives were met despite some minor difficulty with the telemetry receiver. Subsequent flights resulted in acquisition of significant data and cleared the HiMAT to a maximum speed of Mach 0.9 and an altitude of 40,000 feet, as well as demonstrating a 4 g turning capability. By the end of October 1980, the HiMAT had been flown to Mach 0.925 and had performed a sustained 7 g turn. The ground pilot was occasionally challenged to respond to unexpected events, including an emergency engine restart during flight and a gear-up landing.

AV-2 was flown for the first time July 24, 1981. The following week, Stephen Ishmael joined the project as a ground pilot. After several airspeed calibration flights, researchers began collecting data with AV-2.

On February 3, 1982, AV-1 was flown to demonstrate the 8 g maneuver capabilities that had been predicted for the vehicle. A little over 3 months later, researchers obtained the first supersonic data with the HiMAT, achieving speeds of Mach 1.2 and Mach 1.45. Research with both air vehicles continued through January 1983. Fourteen flights were completed with AV-1 and 12 with AV-2, for a total of 26 over 3½ years.[951] The HiMAT research successfully demonstrated a synergistic approach to accelerating development of an advanced high-performance aircraft. Many high-risk technologies were incorporated into a single, low-cost vehicle and tested, at no risk to the pilot, to study interaction among systems, advanced materials, and control software. Design requirements dictated that no single failure should result in loss of the vehicle. Consequently, redundant systems were incorporated throughout the aircraft, including computer microprocessors, hydraulic and electrical systems, servo-actuators, and data uplink/downlink equipment.[952] The HiMAT program resulted in several important contributions to flight technology. The foremost of these was the use of new composite materials in structural design. HiMAT engineers used materials such as fiberglass and graphite-epoxy composites to strengthen the airframe and allow it to withstand high g conditions during maneuverability tests. Knowledge gained in composite construction of the HiMAT vehicle strongly influenced other advanced research projects, and such materials are now used extensively on commercial and military aircraft.

Designers of the X-29 employed many design concepts developed for HiMAT, including the successful use of a forward canard and a rear-mounted swept wing constructed from lightweight composite materials. Although the X-29's wings swept forward rather than to the rear, the principle was the same. HiMAT research also brought about far-reaching advances in digital flight control systems, which can monitor and automatically correct potential flight hazards.[953]

The Quest for Long-Range Supersonic Cruise

In the 1960s, two users were looking to field airplanes with long range at high speed. One organization's requirement was high profile and the object of much debate: the United States Air Force and its continuing desire for an intercontinental-range supersonic bomber. The other organization was operating in the shadows: the Central Intelligence Agency (CIA), which was aiming to replace its covert subsonic high-altitude reconnaissance plane, the Lockheed U-2. The requirement was simple; the fulfillment would be challenging, to say the least: a mission radius of 2,500 miles, cruising at Mach 3 the entire time, at altitudes up to 90,000 feet. The payload was to be on the order of 800 pounds, as it was on the U-2.

The evolution of both supersonic cruise aircraft was involved, much more so for the highly visible USAF aircraft that eventually appeared as the XB-70. The B-58 had given the USAF experience with a Mach 2 bomber, but bombing advocates (notably Gen. Curtis LeMay) wanted long range to go with the supersonic performance. As demonstrated in the classic Breguet range equation, range is a direct function of lift-to-drag (L/D) ratio. The high drag at supersonic speeds reduced that ratio to the point where large fuel tanks were necessary, increasing the weight of the vehicle and requiring more lift, more drag, and more fuel. Initial designs weighed 750,000 pounds and looked like a "3-ship formation." NACA research on the XF-92 had suggested the delta wing as an efficient high-speed shape; then a 1954 paper by Alfred Eggers and Clarence Syvertson of Ames examined simple shapes in the supersonic wind tunnels. They noted that, by mounting a wing atop a half-cylindrical body, they could use the pressure increase behind the body's shock wave to increase the effective lift of the wing.[1066] A lift increase of up to 30 percent could be achieved. This concept was dubbed "compression lift"; more recently, it has been referred to as the "wave rider" concept. Using compression lift principles, North American Aviation (NAA) proposed a six-engine aircraft weighing 500,000 pounds loaded that could cruise at Mach 2.7 to 3 for 5,000 nautical miles. The aircraft would have a delta wing, with a large underslung shape housing the propulsion system, weapons bay, landing gear, and fuel tanks. A canard surface behind the cockpit would provide trim lift at supersonic speeds. To provide additional directional stability at high speeds, the outer wingtips would fold down to either 25 or 65 degrees. Although this reduced effective wing lifting surface, it had the additional benefit of further increasing compression lift as wingtip shocks reflected off the underside of the wing. Because of the 900-1,100-degree sustained skin temperatures at such high cruise speeds, the aircraft would be made of titanium and stainless steel, with stainless steel honeycomb used in the 6,300-square-foot wing to save weight.[1067]
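The dependence invoked here is explicit in the jet-aircraft form of the Breguet range equation, where V is cruise speed, c is thrust-specific fuel consumption, and W_i and W_f are the initial and final weights:

    R = \frac{V}{c} \cdot \frac{L}{D} \cdot \ln\left(\frac{W_i}{W_f}\right)

With V and c roughly fixed by the propulsion system, any loss in L/D must be bought back with a larger weight ratio, that is, more fuel, which is precisely the weight spiral described above.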

North American Aviation (NAA) XB-70 Valkyrie. The original diagram labeled its aerodynamic features: delta wing, elevons, twin verticals, variable-position canopy, drooped leading edges, BLC gutter, compression lift, and folding wingtips. NASA.

Original goals called for the XB-70, as it was designated, to make its first flight in December 1961, following contract award to NAA in January 1958. But the development of the piloted bomber was colliding with the missile and space age. The NACA now became the National Aeronautics and Space Administration (NASA), and the research organization gained the mission of directing the Nation's civilian space program, as well as its traditional aeronautics advancement focus. For military aviation, the development of reliable intercontinental ballistic missiles (ICBM)
promised delivery of atomic payloads within 30 minutes of launch. The Soviet Union's deployment of supersonic interceptors armed with supersonic air-to-air missiles, along with belts of Mach 3 surface-to-air missiles (SAM), increasingly cast the survivability of the unescorted bomber into doubt once again. The USAF clung to the concept of the piloted bomber, but in the face of delays in manufacturing the airframe with its new materials, increasing program costs, and the concerns of the new Secretary of Defense, Robert S. McNamara, the program was scaled back to an experimental effort with only four (later three, then two) aircraft to be built. The Air Force's loss was NASA's gain; a limited test program of 180 hours was to be flown, with the USAF and NASA sharing the cost and the data. At last, a true supersonic cruise aircraft would be available for the NACA's successor to study in the sky. The long-awaited first flight of XB-70 No. 1 occurred before a large crowd at Palmdale, CA, on September 21, 1964. But the other, secret supersonic cruise aircraft had already stolen a march on the star of the show.

In February 1964, President Lyndon Johnson revealed to the world that the United States was operating an aircraft that cruised at Mach 3 at altitudes over 70,000 feet. Describing a plane called the A-11, the initial press release was deliberately misleading. The A-11 name was a misnomer; it referred to a proposed design for the CIA spy plane that was never built, as it had too large a radar cross section. The photograph released was of a slim, long aircraft with two huge wing-mounted engines: the two-seat USAF interceptor version, known as the YF-12. Only three were built, and they were not put into production. The "A-11" that was flying was actually the A-12, the single-seat, low-radar-cross-section plane built in secret by the Lockheed team led by Kelly Johnson, designer of the original U-2. Built almost exclusively of titanium, the aircraft had to be extremely light to achieve its altitude goal; its long range also dictated a high fuel fraction. The twin J58 turbojets had to remain in afterburner for the cruise portion, which dictated even higher-temperature materials than titanium and unique attention to the thermal environment of the vehicle.[1068][1069]

The USAF ordered a two-seat reconnaissance version of the A-12, designated the SR-71 and duly announced by the President in the summer of 1964, before the Presidential election. The single-seat A-12's existence was kept secret for another 20 years at the CIA's insistence, which had a significant impact on NASA's flight testing of the only other Mach 3 piloted aircraft besides the XB-70. Later known collectively as Blackbirds, a fleet of 50 Mach 3 cruise airplanes was built in the 1960s and operated for over 25 years. But the labyrinth of secrecy surrounding them severely hampered NASA's acquisition of an airplane for research, much less investigation of their technical details and publication of reports. This was unfortunate, for the United States was now committed not only to a space race but also to a global race toward a new landmark in aviation technology: a practical supersonic jet airliner, popularly known as the Supersonic Transport (SST). The emerging NASA would be a major participant in this race, and in 1964, the other runners had a head start.