
YF-12 Flight Test: NASA’s Major Supersonic Cruise Study Effort

The XB-70 test program had focused on SST research, as it was the only large aircraft capable of high Mach cruise. In the 1970s, flight data collection could focus on a smaller aircraft but one that had already demonstrated routine flight at Mach 3. Lockheed’s Mach 3 Blackbird was no longer as secret as it had been in its CIA A-12 initial stages, and the USAF was operating a fleet of acknowledged Mach 3 aircraft, although details of its missions, top speeds, and altitudes remained military secrets. NASA had requested Blackbirds as early as 1968 for flight research, but the Agency was rejected as being too open for the CIA’s liking. Later, as the NASA flight-test engineers Bill Schweikhard and Gene Matranga, both XB-70 test program veterans, assisted the USAF in SR-71 flight-test data analysis, the atmosphere changed, and the USAF was more willing to provide the aircraft but without compromising the secrecy of the details of the SR-71. The YF-12s were in storage, as the USAF had decided not to buy any further aircraft. Because the YF-12 had a different fuselage and earlier model J58s than the SR-71, a joint test program was proposed, with the USAF providing aircraft and crew support and NASA paying operational costs. Phase I of the program would concentrate on USAF desires to evaluate operational tactics against a high Mach target (such as the new Soviet MiG-25 Foxbat). NASA would instrument the aircraft and collect basic research data, as well as conduct Phase II of the test program with applied research that would benefit from a Mach 3, 80,000-foot altitude supersonic cruise platform.[1101]

Between 1969 and 1979, flight-test crews flew 298 flights with 2 YF-12 Blackbird aircraft. The first of these was a modified YF-12A interceptor. The second, which replaced another YF-12A lost to a fire during an Air Force test mission, was a nonstandard SR-71A test aircraft given a fictitious “YF-12C” designation and serial number to mask its spy plane origins. YF-12 supersonic cruise-related test results included isolation of thermal effects on aircraft loads from aerodynamic effects. The instrumented aircraft collected loads and temperature data in flight. It was then heated in purpose-built form-fit ovens on the ground—the High Temperature Loads Laboratory (HTLL)—so the thermal strains and loads could be differentiated from the aerodynamic ones.[1102] For a high-temperature aircraft of the future, separation of these stresses could be crucial, because underestimation could lead to skin failures, as experienced by XB-70 AV-1. One byproduct of this research was to correct the factoid still quoted into the 21st century that the airplane expanded in length by 30 inches because of heat. The actual figure was closer to 12 inches, with the difference appearing in structural stresses.
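
The separation logic lends itself to a simple illustration. The sketch below is a minimal, hypothetical rendering of the idea (the strain values and the single-subtraction data reduction are illustrative assumptions, not the actual HTLL procedure): the purely thermal strain measured during a ground heating run with a matched temperature history is subtracted from the total strain recorded in flight, leaving the aerodynamic contribution.

```python
# Minimal sketch of the loads-separation idea (hypothetical values).
# Strain gauges record total strain in flight; the same airframe, heated on
# the ground in form-fit ovens to the same temperature history, yields the
# purely thermal strain. Subtracting the two isolates the aerodynamic load.

flight_strain  = [210.0, 340.0, 415.0, 460.0]   # microstrain, total (flight)
thermal_strain = [ 90.0, 150.0, 190.0, 210.0]   # microstrain, ground heating

aero_strain = [f - t for f, t in zip(flight_strain, thermal_strain)]
print(aero_strain)   # [120.0, 190.0, 225.0, 250.0] -> aerodynamic component
```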


Lockheed’s masterful Blackbird relied not only on lightweight titanium and a high fuel fraction for its long range but also on a finely tuned propulsion system. At Mach 3, over 50 percent of the nacelle net thrust came from the inlet pressure rise, with the engine thrust being only on the order of 20 percent; the remainder came from the accelerated flow exiting the nozzle (given the small percentage of thrust from the engine, Lockheed designer Kelly Johnson used to good-naturedly joke that Pratt & Whitney’s superb J58 functioned merely as an air pump for the nacelle at Mach 3 and above). It is not necessarily self-evident why the nozzle should produce such a large percentage of thrust, while the engine’s contribution seems so little. The nozzle produces so much thrust because it accelerates the combined flow from the engine and inlet bypass air as it passes through the constricted nozzle throat at the rear of the nacelle. Engine designers concentrate only on the engine, regarding the nacelle inlet and exhaust as details that the airframe manufacturer provides. The percentage numbers for the jet engine are low because it produces less absolute net thrust the faster and higher it goes. Therefore, at the same time the engine thrust goes down (as the plane climbs to high altitude and accelerates to high Mach numbers), the percentage of net thrust from nonengine sources increases drastically (mainly because of inlet pressure buildup). Thus, static sea level thrust is the highest thrust an engine can produce. Integration of the propulsion system (i.e., matching the nacelle with the engine for optimum net thrust) is critical for efficient and economical supersonic cruise, as opposed to accelerating briefly through the speed of sound, which can be achieved by using (as the early Century series did) a “brute force” afterburner to boost engine power over airframe drag.
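
This thrust bookkeeping can be made concrete with a quick, hedged calculation. The sketch below assumes a nominal net nacelle thrust and uses the approximate Mach 3 shares quoted above; the total and the exact split are illustrative assumptions, not flight-test values.

```python
# Rough Mach 3 nacelle thrust bookkeeping for a Blackbird-type installation,
# using the approximate shares quoted in the text (illustrative numbers only).
net_thrust = 100_000.0   # lb, assumed total net nacelle thrust

share = {"inlet pressure rise": 0.54,   # "over 50 percent"
         "engine (J58 core)":   0.20,   # "on the order of 20 percent"
         "ejector nozzle":      0.26}   # remainder: accelerated exhaust flow

for source, s in share.items():
    print(f"{source:>20}: {s * net_thrust:>9,.0f} lb ({s:.0%})")

# The shares sum to 1.0: the inlet and nozzle do no work by themselves; they
# recover and accelerate flow energized by ram effect and by the engine.
assert abs(sum(share.values()) - 1.0) < 1e-9
```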

Air had to be bypassed around the engine to position shock waves properly for the pressure recovery. This bypass led to the added benefit of cooling the engine and to the system being referred to as a turbo ramjet. The NASA YF-12 inlets were instrumented, and much testing was devoted to investigation of the inlet/shock wave/engine interaction. Inlet unstarts in the YF-12 were even more noticeable and critical than they were in the B-70, as the nacelles were close to mid-span on the wing and the instantaneous loss of the inlet thrust led to violent yaws in the less massive aircraft. The Blackbirds had automatic inlet controls, unlike XB-70 AV-1, but they were analog control devices and were often not up to the task; operational crewmembers spent much time in the simulator practicing emergency manual control of the inlets. The NASA test sorties revealed that the inlet affected the flight performance of the aircraft during restart recovery. The excess spillage drag airflows from the unstarted inlets induced uncommanded rolling moments. This could result in a “falling leaf” effect at extreme altitudes, as the inlet control systems attempted to reposition the shock wave properly by spike positioning and door movements.[1103] This was an illustration of the strong interaction between aerodynamics and propulsion for a supersonic cruiser.

Comparison of inlet configurations and facilities used in YF-12 propulsion research: the airplane in flight at Dryden; inlet models in the Ames 8- by 7-foot, 9- by 7-foot, and 11-foot wind tunnels; a full-scale inlet model in the 10- by 10-foot wind tunnel; and a full-scale engine test module in the Propulsion Systems Laboratory (PSL) altitude test facility at Lewis. YF-12 NASA propulsion research assets. NASA.

To further investigate this interaction, much research was dedicated to the inlet system of the YF-12.[1104] A salvaged full-scale inlet was tested in a supersonic wind tunnel, as was a one-third-scale inlet model. Using this approach, flight-test data could be compared with wind tunnel data to validate the tunnel data and adjust them as required.

The digital computer era was appearing, and NASA led the way in applying it to aeronautics. In addition to the digital fly-by-wire F-8 flight research, the YF-12 also employed the digital computer. Originally, the Central Airborne Performance Analyzer (CAPA) general performance digital computer was used to monitor the behavior of the YF-12 Air Inlet Control System (AICS). It behaved well in the harsh airborne environment and provided excellent data. Based upon this and the progress in digital flight control systems, NASA partnered with Lockheed in 1975 to incorporate a Cooperative Airframe/Propulsion Control System computer on the YF-12C (the modified SR-71) that would perform the flight control system and propulsion control functions.[1105] This was delivered in 1978, only shortly before the end of the YF-12 flight-test era. The system requirements dictated that the pilot interface be transparent between the standard analog aircraft and the digital aircraft and that the aircraft system performance be duplicated digitally. The development included the use of a digital model of the flight controls and propulsion system for software development. Only 13 flights were flown in the 4 months remaining before program shutdown, but the flights were spectacularly successful. After initial developmental “teething problems” early in the program, the aircraft autopilot behavior was 10 times more precise than it was in the analog system, inlet unstarts were rare, and the aircraft exhibited a 5-7-percent increase in range.[1106] Air Force test pilots flew the YF-12C on three occasions and were instrumental in persuading the USAF Logistics Command to install a similar digital system on the entire SR-71 fleet. The triple-redundant operational system—called the Digital Automatic Flight and Inlet Control System (DAFICS)—was tested and deployed between 1980 and 1985 and exhibited similar benefits.

For the record, the author himself was a USAF SR-71 flight-test engineer and navigator/back-seater for the developmental test flights of DAFICS on the SR-71 and during the approximately 1-year test program experienced some 85 inlet unstarts! Several NASA research papers speculated on the effect of inlet unstarts on passengers, using anecdotal flight-test data from XB-70 and YF-12 flights. The author agreed with the comments in 1968 of XB-70 test pilot Fulton (who also flew the YF-12 for NASA) that paying passengers in an SST would put up with an unstart exactly once. The author also has several minutes of supersonic glider time because of dual inlet unstarts followed by dual engine flameouts, accompanied by an unrelated engine mechanical problem inhibiting engine restart. During a dual inlet unstart at 85,000 feet and subsequent emergency single-engine descent to 30,000 feet, he experienced the “falling leaf” mode of flight, as the inlets cycled trying to recapture the shock waves within the inlet while the flight Mach number also oscillated. The problems during the test program indicated the sensitivity of the integration of propulsion with the airframe for a supersonic cruise aircraft. Once the in-flight “debugging” of the digital system had been accomplished, however, operational crews never experienced unstarts, except in the event of mechanical malfunction. One byproduct of the digital system was that the inlet setting software could be varied to account for differences in individual airframe inlets because of manufacturing tolerances. This even allowed inlet optimization by tail number.

Other YF-12 research projects were more connected with taking advantage of the aircraft’s high speed and high altitude as a platform for basic research experiments. One measured the increase in drag caused by an aft-facing “step” placed within the Mach 3 boundary layer. As well, researchers measured the thickness and flow characteristics of this turbulent region. The coldwall experiment was the most famous (or infamous).[1107] It was a thermodynamics heat transfer experiment that took an externally mounted, insulated, cryogenically cooled cylinder to Mach 3 cruise and then exposed it to the high-temperature boundary layer by explosively stripping the insulation. Basic handling qualities investigations with the cylinder resulted in loss of the carrier YF-12A’s folding ventral fin. It was replaced with one of a newer material, producing a bonus materials experiment. When the experiment was finally cleared for deployment, it sent debris into the left inlet of the YF-12 carrier, unstarting both inlets, not to mention causing multiple unstarts of the YF-12C chase aircraft. Both aircraft were grounded for over 6 weeks for inspections and repairs. Fortunately, the next 2 deployments, with fewer explosives, were more routine. An implicit lesson learned was that at Mach 3, seemingly routine flight-test techniques may require careful review to ensure that they really are routine.

Tail Plane Icing Program

Following the traumatic loss of TWA Flight 800 in 1996, then-President Clinton put together a commission on aviation safety, from which NASA in 1997 began an Aviation Safety Program to address very specific areas of flying in a bid to reduce the accident rate, even as air traffic was anticipated to grow at record rates. The emphasis on safety came at a time when a 4-year program led by NASA with the help of the FAA to understand the phenomenon known as ice-contaminated tail plane stall, or ICTS, was a year away from wrapping up. The successful Tail Plane Icing Program provided immediate benefits to the aviation community and today is considered by veteran NASA researchers as one of the Agency’s most important icing-related projects ever conducted.[1238]

According to a 1997 fact sheet prepared by GRC, the ICTS phenomenon is “characterized as a sudden, often uncontrollable aircraft nose down pitching moment, which occurs due to increased angle-of-attack of the horizontal tail plane resulting in tail plane stall. Typically, this phenomenon occurs when lowering the flaps during final approach while operating in or recently departing from icing conditions. Ice formation on the tail plane leading edge can reduce tail plane angle-of-attack range and cause flow separation resulting in a significant reduction or complete loss of aircraft pitch control.” At the time the program began, there had been a series of commuter airline crashes in which icing was suspected or identified as a cause. And while there was a great deal of knowledge about the effects of icing on the primary wing of an aircraft and how to combat it or recover from it, there was little information about the effect of icing on the tail or how pilots could most effectively recover from a tail plane stall induced by icing. As the popularity of the smaller, regional commuter jets grew following airline deregulation in 1978, the incidents of tail plane icing began to grow at a relatively alarming rate. By 1991, when the FAA first had the notion of initiating a review of all aspects of tail plane icing, there had been 16 accidents involving turboprop-powered transport and commuter-class airplanes, resulting in 139 fatalities.[1239]
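
The mechanics behind that description can be sketched with textbook relations. In the hypothetical example below (all coefficients and angles are invented for illustration, not taken from the GRC program), the tail angle of attack becomes more negative as flap deflection increases wing downwash; leading-edge ice shrinks the tail’s stall angle, so a flap setting that is benign for a clean tail can stall an iced one, which is the ICTS scenario.

```python
# Hedged illustration of ice-contaminated tail plane stall (ICTS) onset.
# The tail flies at negative angle of attack; flap deployment increases the
# wing downwash, pushing tail alpha further negative, while leading-edge ice
# reduces the tail's usable alpha range. All values are hypothetical.

def tail_alpha(alpha_wing_deg, flap_deg):
    """Tail angle of attack (deg): wing alpha minus downwash minus incidence."""
    downwash = 2.0 + 0.20 * flap_deg   # deg; downwash grows with flap angle
    incidence = 1.0                    # deg; fixed tail-setting angle
    return alpha_wing_deg - downwash - incidence

ALPHA_STALL_CLEAN = -12.0   # deg; clean tail stalls here
ALPHA_STALL_ICED = -7.0     # deg; ice-reduced stall angle

for flap in (0, 20, 40):
    a_tail = tail_alpha(alpha_wing_deg=3.0, flap_deg=flap)
    print(f"flaps {flap:2d} deg: tail alpha {a_tail:+5.1f} deg, "
          f"clean stalled: {a_tail <= ALPHA_STALL_CLEAN}, "
          f"iced stalled: {a_tail <= ALPHA_STALL_ICED}")
# At 40 deg of flap the tail passes the iced stall angle (-8.0 <= -7.0)
# while the clean tail still has margin: landing-flap deployment triggers
# the nose-down pitching moment the fact sheet describes.
```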

Following a review of all available data on tail plane icing and incidents of the tail stalling on turboprop-powered commuter airplanes as of 1991, the FAA requested assistance from NASA in managing a full-scale research program into the characteristics of ICTS. And so an initial 4-year program began to deal with the problem and propose solutions. More specifically, the goals of the program were to collect detailed aerodynamic data on how the tail of a plane contributed to the stability of an aircraft in flight, and then take the same measurements with the tail contaminated with varying severity of ice, and from that information develop methods for predicting the effects of tail plane icing and recovering from them. To accomplish this, a series of wind tunnel tests were performed with a tail section of a De Havilland of Canada DHC-6 Twin Otter aircraft (a design then widely used for regional transport), both in dry air conditions and with icing turned on in the tunnel. Flight tests of a full Twin Otter were made to complement the ground-based studies.[1240]

As is typical with many research programs, as new information comes in and questions get answered, the research results often generate additional questions that demand even more study to find solutions. So following the initial tail plane icing research that concluded in 1997, a year later NASA’s Ohio-based Field Center initiated a second multiphase program to continue the icing investigations. This time the work was assigned to Wichita State University in Kansas, which would coordinate its activities with support from the Bombardier/Learjet Company. The main goal of the combined Government/industry/university effort was to expand on the original work with the Twin Otter by coming up with methods and criteria for testing multiple tail plane configurations in a wind tunnel, and then actually conducting the tests to generate a comprehensive database of tail plane aerodynamic performance with and without ice contamination for a range of tail plane/airfoil configurations. The resulting database would then be used to support development and verification of future icing analysis tools.[1241]

From this effort pilots were given new tools to recognize the onset of tail plane icing and recover from any disruptions to the aircraft’s aerodynamics, including a full stall. As part of the education process, a Guest Pilot Workshop was held to give aviators firsthand experience with tail plane icing via an innovative “real world” simulation in which the pilots flew with a model of a typical ice buildup attached to the tail surface of a Twin Otter. The event provided a valuable exchange between real-world pilots and laboratory researchers, which in turn resulted in the collaboration on a 23-minute educational video on tail plane icing that is still used today.[1242]

New Levels of Departure Resistance: The F-15 Program

After its traumatic experiences with the F-4 stability and control deficiencies at high angles of attack, the Air Force encouraged competitors in the F-15 selection process to stress good high-angle-of-attack characteristics for the candidate configurations of their proposed aircraft. As part of the source selection process, an analysis of departure resistance was required based on high Reynolds number aerodynamic data obtained for each design in the NASA Ames 12-Foot Pressure Tunnel. In addition, spin and recovery characteristics were determined during the competitive phase using models in the Langley Spin Tunnel. The source selection team evaluated data from these and other high-angle-of-attack tests and analysis.

In its role as an air superiority fighter, the winning McDonnell-Douglas F-15 design was carefully crafted to exhibit superior stability and departure resistance at high angles of attack. In addition to providing a high level of inherent aerodynamic stability, the McDonnell-Douglas design team devised an automatic control concept to avoid control-induced departures at high angles of attack because of adverse yaw from lateral control (ailerons and differential horizontal tail deflections). By using an automatic aileron washout scheme that reduced the amount of aileron/tail deflection obtainable at high angles of attack and an interconnect system that deflected the rudder for roll control as a function of angle of attack within its Command Augmentation System (CAS), the F-15 was expected to exhibit exceptional stability and departure resistance at high angles of attack.

NASA’s free-flight model tests of the F-15 in the Langley Full-Scale Tunnel during 1971 verified that the F-15 would be very stable at high-angle-of-attack conditions, in dramatic contrast to its immediate predecessors.[1295] During the F-15 development process, spin tunnel testing at Langley provided predictions for spin modes for the basic airplane as well as an extensive number of external stores, and an emergency spin recovery parachute size was determined.

Langley was also requested to evaluate the spin resistance of the F-15 with the outdoor helicopter drop-model technique used at Langley for many previous assessments of spin resistance. During spin entry attempts of the drop model with the CAS operative, it was once again obvious that the configuration was very spin resistant. In fact, an exceptional effort was required by the Langley team to develop a longitudinal and lateral-directional control input technique to spin the model. Ultimately, such a technique was identified and demonstrated, although it was successful for only a very constrained range of flight variables. This spin entry technique was later used in the full-scale aircraft flight program to promote spins. In 1972, Dryden constructed a larger drop model with a more complete representation of the aircraft flight control system, providing a larger-scale prediction of the airplane’s spin recovery characteristics. Launched from a B-52 and known as the F-15 spin research vehicle (SRV), the remotely piloted vehicle verified the predictions of the smaller model and added confidence to the subsequent flight tests.[1296]

Meanwhile, testing in the Spin Tunnel concentrated on one of the more critical spin conditions for the F-15 aircraft—unsymmetrical mass loadings. Model tests showed that the configuration’s spin and recovery characteristics deteriorated when lateral unbalance was simulated, as would be the situation for asymmetric weapon store loadings on the right and left wing panels or fuel imbalance between wing tanks. Fuel imbalance can occur during banked turns in strenuous air combat maneuvers when tanks feed at different rates. The results of the spin tunnel tests showed that the spins would be faster and flatter in one direction, and that recovery would not be possible when the mass imbalance exceeded a certain critical value. As frequently happens in the field of spinning and spin recovery, a configuration that was extremely spin resistant in the “clean” configuration suddenly became an unmanageable tiger with mass imbalance.

During its operational service, the F-15 has experienced several accidents caused by unrecoverable spins with asymmetric loadings. At one time, this type of accident was the second greatest cause of F-15 losses, after midair collisions.[1297]

Comparison of theoretical predictions, spin tunnel results, drop-model results, and flight results indicated that the correlation between model and airplane results was very good and that risk in the full-scale program had been reduced considerably by the NASA model tests.

X-14: A Little Testbed That Could

On May 24, 1958, Bell test pilot David Howe completed a vertical takeoff followed by conventional flight, a transition, and a vertical landing during testing at Niagara Falls Airport, NY. His short foray was a milestone in aviation history, for the flight demonstrated the practicality of using vectored thrust for vertical flight. Howe took off straight up, hovered like a helicopter, flew away at about 160 mph, climbed to 1,000 feet, circled back, approached at about 95 mph, deflected the engine thrust (which caused the plane to slow to a hover a mere 10 feet off the ground), made a 180-degree turn, and then settled down, anticipating the behavior and capabilities of future operational aircraft like the British Aerospace Harrier and Soviet Yak-38 Forger.

The plane that he flew into history was the Bell X-14, a firmly subsonic accretion of various aircraft components that proved to have surprising value and utility. Before proceeding with this ungainly creature, company engineers had first built a VTOL testbed: the Bell Model 65 Air Test Vehicle (ATV). The ATV used a mix of components from a glider, a lightplane, and a helicopter, with two Fairchild J44 jet engines attached under its wing. Each engine could be pivoted from horizontal to vertical, and it had a stabilizing tail exhaust furnished by a French Turbomeca Palouste compressor as well. Tests with the ATV convinced Bell of the possibility of a jet convertiplane, though not by using that particular approach, and the ATV never attempted a full conversion from VTOL to conventional flight. Accordingly, the X-14 differed from all its predecessors because it used a cascade thrust diverter, essentially a venetian-blind-like vane system, to deflect the exhaust from the craft’s two small British-built Armstrong-Siddeley Viper ASV 8 engines for vertical lift. Each engine produced 1,900 pounds of thrust. Since the aircraft gross weight was 3,100 pounds, the X-14 had a thrust-to-weight ratio of 1.226. Compressed-air reaction “controls” kept the craft in balance during takeoff, hovering, and landing, when its conventional aerodynamic control surfaces lacked effectiveness. To simplify construction, the X-14 had an open cockpit, no ejection seat, the wings of a Beech Bonanza, and the fuselage and tail of a Beech T-34 Mentor trainer.[1430]

Early testing revealed that, as completed, the aircraft had numerous deficiencies typical of a first-generation technological system. After Ames acquired the aircraft, its research team operated the X-14A with due caution. Not surprisingly, weight limitations precluded installation of an ejection seat or even a rollover protection bar. The twin engines imparted strong gyroscopic “coupling” forces, these being dramatically illustrated on one flight when the X-14’s strong gyroscopic moments generated a severe pitch-up during a yaw, “which resulted in the aircraft performing a loop at zero forward speed.”[1431] The X-14’s hover flight-test philosophy was rooted in an inviolate rule: hover either at 2,500 feet, or at 12-15 feet, but never in between. At the higher altitude, the pilot would have sufficient height to recover from a single engine failure or to bail out. At the lower altitude, he could complete an emergency landing.[1432] Close to the ground, the aircraft lost approximately 10 percent of its lift from so-called aerodynamic suck-down while operating in ground effect. During hover operations, the jet engines ingested their own hot exhaust gas, degrading their performance. As well, the aircraft possessed low control power about all axes, and the lack of a stability augmentation system (SAS) resulted in marginal hover characteristics. Hover flights were often flown over the ramp or at the concrete VTOL area north of the hangar, and typical flights ran from 20 to 40 minutes and within an area close enough to allow for a comfortable glide back to the airfield. Extensive flight-testing investigated a range of flying qualities in hover. Those flights resulted in criteria for longitudinal, lateral, and directional control power, sensitivity, and damping.[1433]

By 1960, Ames V/STOL expertise was well-known throughout the global aeronautical community. This led to interaction with aeronautical establishments in many countries pursuing their own V/STOL programs, via the North Atlantic Treaty Organization’s (NATO) Advisory Group for Aeronautical Research and Development (AGARD).[1434] For example, Dassault test pilot Jacques Pinier flew the X-14 before flying the Balzac. So, too, did Hawker test pilots Bill Bedford and Hugh Merewether before tackling the P.1127. Both arrived at Ames in April 1960 for familiarization sorties in the X-14 to gain experience in a “simple” vectored-thrust airplane before trying the more complex British jet in VTOL, then in final development. Unfortunately, on Merewether’s sortie, the X-14 entered an uncontrolled sideslip, touching down hard and breaking up its landing gear, a crash attributed to low roll control power and no SAS. “Though bad for the ego,” the British pilot wrote good-naturedly later, “it was probably a blessing in disguise since it brought home to all and sundry the perils of weak reaction controls.”[1435]

The X-14A shown during a hover test flight at Ames Research Center. NASA.

Later that year, Ames technicians refitted the X-14 with more powerful 2,450-pound thrust General Electric J85-5 turbojet engines and modified its flight control system with a response-feedback analog computer controlling servo reaction control nozzles (in addition to its existing manually controlled ones), thus enabling it to undertake variable stability in-flight simulation studies. NASA redesignated the extensively modified craft as the X-14A Variable Stability and Control Research Aircraft (VSCRA). In this form, the little jet contributed greatly to understanding the special roll, pitch, and yaw control power needs of V/STOL vehicles, particularly during hovering in and out of ground effect and at low speeds, where conventional aerodynamic control surfaces lacked effectiveness.[1436] It still had modest performance capabilities. Even though its engine power had increased significantly, so had its weight, to 3,970 pounds. Thus, the thrust-to-weight ratio of the X-14A was only marginally better than that of the X-14.[1437] For one handling qualities study, researchers installed a movable exhaust vane to generate a side force so that the X-14A could undertake lateral translations, enabling them to study how larger V/STOL aircraft, of approximately 100,000 pounds gross weight, could be safely maneuvered at low speeds and altitudes. To this end, NASA established a maneuver course on the Ames ramp. The X-14A, fitted with wire-braced lightweight extension tubes with bright orange Styrofoam balls simulating the wingspan and wingtips of a much larger aircraft, was maneuvered by test pilots along this track in a series of flat turns and course reversals. The results confirmed that, for best low-speed flight control, V/STOL vehicles needed attitude stabilization, and, as regards wingspan effects, “None of the test pilots could perceive any effect of the increased span, per se, on their tendency to bank during hovering maneuvers around the ramp or in their method of flying the airplane in general.”[1438]
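
As a quick check, the thrust-to-weight figures for the two configurations can be recomputed from the engine thrust and gross weight numbers given in the text (a trivial sketch; all values are from the passages above).

```python
# Thrust-to-weight check for the X-14 and X-14A, using figures from the text.
configs = {
    "X-14  (2 x Viper ASV 8)": (2 * 1_900, 3_100),   # thrust lb, gross wt lb
    "X-14A (2 x GE J85-5)":    (2 * 2_450, 3_970),
}
for name, (thrust, weight) in configs.items():
    print(f"{name}: T/W = {thrust / weight:.3f}")
# X-14  : T/W = 1.226  -> matches the value quoted above
# X-14A : T/W = 1.234  -> only marginally better, as the text notes
```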

Attitude control during hover and low-speed flight was normally accomplished in the X-14A through reaction control nozzles in the tail for pitch and yaw and on each wingtip for roll control. Engine compressor bleed air furnished the reaction control moments. For an experimental program in 1969, its wingtip reaction controls were replaced temporarily by two 12.8-inch-diameter lift fans, similar to those on the XV-5B fan-in-wing aircraft, to investigate their feasibility for VTOL roll control. Bleed air, normally supplied to the wingtip reaction control nozzles, drove the tip-turbine-driven fans. Fan thrust was controlled by varying the pressure ratio to the tip turbine and thereby controlling fan speed. Rolling moments were generated by accelerating the rpm of one fan and decelerating the other to maintain a constant net lift.[1439]
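
The constant-net-lift scheme is straightforward to express numerically. The following minimal sketch uses invented values (the 150-pound trim thrust, 40-pound differential authority, and 17-foot moment arm are assumptions for illustration, not X-14A data): a roll command transfers thrust between the tip fans while their sum, and hence the net lift, stays fixed.

```python
# Minimal sketch of differential tip-fan roll control at constant net lift.
# A roll command moves thrust from one wingtip fan to the other; the sum is
# held constant so total lift does not change. All numbers are hypothetical.

FAN_TRIM_THRUST = 150.0   # lb per fan at trim (assumed)
TIP_ARM = 17.0            # ft, fan distance from the roll axis (assumed)

def fan_thrusts(roll_cmd):
    """roll_cmd in [-1, 1]; returns (left, right) fan thrust in lb."""
    delta = 40.0 * roll_cmd   # lb of differential authority (assumed)
    return FAN_TRIM_THRUST + delta, FAN_TRIM_THRUST - delta

for cmd in (-1.0, 0.0, 0.5):
    left, right = fan_thrusts(cmd)
    moment = (left - right) * TIP_ARM   # ft-lb rolling moment
    lift = left + right                 # constant 300 lb regardless of cmd
    print(f"cmd {cmd:+.1f}: L={left:.0f} lb R={right:.0f} lb, "
          f"roll moment {moment:+.0f} ft-lb, net fan lift {lift:.0f} lb")
```
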

A number of “lessons learned” were generated as a result of this handling qualities flight-test investigation, as noted by project pilot Ronald M. Gerdes. The fans were so simple, efficient, and reliable that the total bleed air requirement was reduced by about 20 percent from that required using the tip nozzles. As a consequence, the jet engines produced about 4 percent more thrust and could operate at lower temperatures during vertical takeoffs. Despite this, however, during the flight tests, control system lag and increases in the aircraft moment of inertia caused by placement of the fans at the tips negated the increased roll performance that the fans had over the reaction control nozzles and resulted in the pilot having a constant tendency to overcontrol roll attitude and thus induce oscillations during any maneuver. The wingtip lift-fan control system was thus rated unacceptable, even for emergency conditions, as it scored a Cooper-Harper pilot rating of 6½ to 7½ (on a 1-10 scale, where 1 is best and 10 is worst). Finally, Gerdes concluded: “This test also demonstrated a principle that must be kept in mind when considering fans for controls. Even though the time response characteristics of a fan system are capable of improvement by such means as closing the loop with rpm feedback, full authority operation of the control eliminates the fan speed-up capabilities provided by the closed loop, and the fans revert to their open-loop time constants. In the case of the X-14A, its open- and closed-loop first-order time constants were 0.58 and 0.34 seconds, respectively.”[1440]
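
Gerdes’s two time constants can be visualized with the standard first-order step response, in which a system reaches about 63 percent of a commanded change after one time constant. The sketch below simply evaluates that response for the open- and closed-loop values he quotes; the sample times are arbitrary.

```python
# First-order step response for the X-14A tip fans, using the open- and
# closed-loop time constants quoted by Gerdes (0.58 s and 0.34 s).
import math

def step_fraction(t_s, tau_s):
    """Fraction of a commanded step reached t_s seconds after the command."""
    return 1.0 - math.exp(-t_s / tau_s)

for t in (0.25, 0.50, 1.00):
    print(f"t = {t:4.2f} s: open loop (tau 0.58 s) {step_fraction(t, 0.58):5.1%}, "
          f"closed loop (tau 0.34 s) {step_fraction(t, 0.34):5.1%}")
# With rpm feedback the fan approaches a commanded change much sooner; as
# Gerdes cautioned, a full-authority command defeats the feedback loop and
# the response falls back toward the slower open-loop constant.
```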

The X-14A flew for two decades for NASA at the Ames Research Center, piloted by Fred Drinkwater and his colleagues on a variety of research investigations. These ranged from evaluating sophisticated electronic control systems to simulating the characteristics of a lunar lander in support of the Apollo effort. In 1965, it was configured to enable simulations of lunar landing approach profiles. The future first man on the Moon, Neil Armstrong, flew the X-14A to evaluate its control characteristics and a visual simulation of the vertical flightpath that the Apollo Lunar Module would fly during its final 1,500-foot descent from the Command Module (CM) to a landing upon the lunar surface.[1441]

Another study effort examined soil erosion caused by VTOL operations off unprepared surfaces. In this case, a 5-second hover at 6 feet resulted in chunks of soil and grass being thrown into the air, where they were ingested by the engines, damaging their compressors and forcing subsequent replacement of both engines.[1442]

In 1971, under the direction of Richard Greif and Terry Gossett, NASA modified the X-14A a third time, to install a digital variable stability system and uprated GE J85-19 engines to improve its hover performance. It was redesignated as the X-14B and flown in a program “to establish criteria for pitch and roll attitude command concepts, which had become the control augmentation of choice for precision hover.”[1443] Unfortunately, in May 1981, a control software design flaw led to saturation of the VSCS autopilot roll control servos, a condition from which the pilot could not recover before the aircraft landed heavily. Although NASA contemplated repairing it, the X-14B never flew again.[1444]

As a personal aside, having had the opportunity to fly the X-14B near its final flight, I was impressed with its simplicity.[1445] For example, one of the more important instruments on the airplane was a 4-inch piece of yarn attached to a small post in the center of the front windshield bow. You never wanted to see the yarn pointed to the front of the airplane. If you did, it meant you were flying backward, and that was a real no-no! The elevator had a nasty tendency to dig in and flip the aircraft over on its back. We aptly named the flip the “Williford maneuver,” after J. R. Williford, the first test pilot to inadvertently “accomplish” it. The next most important instrument was the fuel gauge, because the X-14 didn’t carry much gas. In retrospect, I consider it a privilege to have flown one of the most successful research aircraft of all time, one that in over 20 years contributed greatly to a variety of other VTOL programs in technical input and piloting training, and to the evolution of V/STOL technology generally.

Initial NACA-NASA Research

Sudden gusts and their effects upon aircraft have posed a danger to the aviator since the dawn of flight. Otto Lilienthal, the inventor of the hang glider and arguably the most significant aeronautical researcher before the Wright brothers, sustained fatal injuries in an 1896 accident, when a gust lifted his glider skyward, died away, and left him hanging in a stalled flight condition. He plunged to Earth, dying the next day, his last words reputedly being “Opfer müssen gebracht werden”—or “Sacrifices must be made.”[19]

NASA’s interest in gust and turbulence research can be traced to the earliest days of its predecessor, the NACA. Indeed, the first NACA technical report, issued in 1917, examined the behavior of aircraft in gusts.[20] Over the first decades of flight, the NACA expanded its interest in gust research, looking at the problems of both aircraft and lighter-than-air airships. The latter had profound problems with atmospheric turbulence and instability: the airship Shenandoah was torn apart over Ohio by violent storm winds; the Akron was plunged into the Atlantic, possibly from what would now be considered a microburst; and the Macon was doomed when clear air turbulence ripped off a vertical fin and opened its gas cells to the atmosphere. Dozens of airmen lost their lives in these disasters.[21]

During the early part of the interwar years, much research on turbulence and wind behavior was undertaken in Germany, in conjunction with the development of soaring and the long-distance, long-endurance sailplane. Conceived as a means of preserving German aeronautical skills and interest in the wake of the Treaty of Versailles, soaring evolved as both a means of flight and a means to study atmospheric behavior. No airman was closer to the weather, or more dependent upon an understanding of its intricacies, than the pilot of a sailplane, borne aloft only by thermals and the lift of its broad wings. German soaring was always closely tied to the nation’s excellent technical institutes and the prestigious aerodynamics research of Ludwig Prandtl and the Prandtl school at Göttingen. Prandtl himself studied thermals, publishing a research paper on vertical air currents in 1921, in the earliest years of soaring development.[22] One of the key figures in German sailplane development was Dr. Walter Georgii, a wartime meteorologist who headed the postwar German Research Establishment for Soaring Flight (Deutsche Forschungsanstalt für Segelflug [DFS]). Speaking before Britain’s Royal Aeronautical Society, he proclaimed, “Just as the master of a great liner must serve an apprenticeship in sail craft to learn the secret of sea and wind, so should the air transport pilot practice soaring flights to gain wider knowledge of air currents, to avoid their dangers and adapt them to his service.”[23] His DFS championed weather research, and out of German soaring came such concepts as thermal flying and wave flying. Soaring pilot Max Kegel discovered firsthand the power of storm-generated wind currents in 1926. They caused his sailplane to rise like “a piece of paper that was being sucked up a chimney,” carrying him almost 35 miles before he could land safely.[24] Used discerningly, thermals transformed powered flight from gliding to soaring. Pioneers such as Gunter Gronhoff, Wolf Hirth, and Robert Kronfeld set notable records using combinations of ridge lift and thermals. On July 30, 1929, the courageous Gronhoff deliberately flew a sailplane with a barograph into a storm to measure its turbulence; this flight anticipated much more extensive research that has continued in various nations.[25]

The NACA first began to look at thunderstorms in the 1930s. During that decade, the Agency’s flagship laboratory—the Langley Memorial Aeronautical Laboratory in Hampton, VA—performed a series of tests to determine the nature and magnitude of gust loadings that occur in storm systems. The results of these tests, which engineers performed in Langley’s signature wind tunnels, helped to improve both civilian and military aircraft.[26] But wind tunnels had various limitations, leading to use of specially instrumented research airplanes to effectively use the sky as a laboratory and acquire information unobtainable by traditional tunnel research. This process, most notably associated with the post-World War II X-series of research airplanes, led in time to such future NASA research aircraft as the Boeing 737 “flying laboratory” used to study wind shear. Over subsequent decades, the NACA’s successor, NASA, would perform much work to help planes withstand turbulence, wind shear, and gust loadings.

From the 1930s to the 1950s, one of the NACA’s major areas of research was the nature of the boundary layer and the transition from laminar to turbulent flow around an aircraft. But Langley Laboratory also looked at turbulence more broadly, to include gust research and meteorological turbulence influences upon an aircraft in flight. During the previous decade, experimenters had collected measurements of pressure distribution in wind tunnels and flight, but not until the early 1930s did the NACA begin a systematic program to generate data that could be applied by industry to aircraft design, forming a committee to oversee loads research. Eventually, in the late 1930s, Langley created a separate structures research division with a structures research laboratory. By this time, individuals such as Philip Donely, Walter Walker, and Richard V. Rhode had already undertaken wide-ranging and influential research on flight loads that transformed understanding about the forces acting on aircraft in flight. Rhode, of Langley, won the Wright Brothers Medal in 1935 for his research on gust loads. He pioneered the undertaking of detailed assessments of the maneuvering loads encountered by an airplane in flight. As noted by aerospace historian James Hansen, his concept of the “sharp edge gust” revised previous thinking about gust behavior and the dangers it posed, and it became “the backbone for all gust research.”[27] NACA gust loads research influenced the development of both military and civilian aircraft, as did its research on aerodynamic-induced flight-surface flutter, a problem of particular concern as aircraft design transformed from the era of the biplane to that of the monoplane. The NACA also investigated the loads and stresses experienced by combat aircraft when undertaking abrupt rolling and pullout maneuvers, such as routinely occurred in aerial dogfighting and in dive-bombing.[28] A dive bomber encountered particularly punishing aerodynamic and structural loads as the pilot executed a pullout: abruptly recovering the airplane from a dive and sending it swooping back into the sky. Researchers developed charts showing the relationships between dive angle, speed, and the angle required for recovery. In 1935, the Navy used these charts to establish design requirements for its dive bombers. The loads program gave the American aeronautics community a much better understanding of load distributions between the wing, fuselage, and tail surfaces of aircraft, including high-performance aircraft, and showed how different extreme maneuvers “loaded” these individual surfaces.

In his 1939 Wilbur Wright lecture, George W. Lewis, the NACA’s legendary Director of Aeronautical Research, enumerated three major questions he believed researchers needed to address:

• What is the nature or structure of atmospheric gusts?

• How do airplanes react to gusts of known structure?

• What is the relation of gusts to weather conditions?[29]

Answering these questions, posed at the close of the biplane era, would consume researchers for much of the next six decades, well into the era of jet airliners and supersonic flight.

The advent of the internally braced monoplane accelerated interest in gust research. The long, increasingly thin, and otherwise unsupported cantilever wing was susceptible to load-induced failure if not well-designed. Thus, the stresses caused by wind gusts became an essential factor in aircraft design, particularly for civilian aircraft. Building on this concern, in 1943, Philip Donely and a group of NACA researchers began design of a gust tunnel at Langley to examine aircraft loads produced by atmospheric turbulence and other unpredictable flow phenomena and to develop devices that would alleviate gusts. The tunnel opened in August 1945. It utilized a jet of air for gust simulation, a catapult for launching scaled models into steady flight, curtains for catching the model after its flight through the gust, and instruments for recording the model’s responses. For several years, the gust tunnel was useful, “often [revealing] values that were not found by the best known methods of calculation. . . in one instance, for example, the gust tunnel tests showed that it would be safe to design the airplane for load increments 17 to 22 percent less than the previously accepted values.”[30]


The experimental Boeing XB-15 bomber was instrumented by the NACA to acquire gust-induced structural loads data. NASA.

As well, gust researchers took to the air. Civilian aircraft—such as the Aeronca C-2 light general-aviation airplane, the Martin M-130 flying boat, and the Douglas DC-2 airliner—and military aircraft, such as the Boeing XB-15 experimental bomber, were outfitted with special loads recorders (so-called “v-g recorders,” developed by the NACA). Extensive records were made of the weather-induced loads they experienced over various domestic and international air routes.[31]

This work was refined in the postwar era, when new generations of long-range aircraft entered air transport service and were also instrumented to record the loads they experienced during routine airline operation.[32] Gust load effects likewise constituted a major aspect of early transonic and supersonic aircraft testing, for the high loads involved in transiting from subsonic to supersonic speeds already posed a serious challenge to aircraft designers. Any additional loading, whether from a wind gust or shear, or from the blast of a weapon (such as the overpressure blast wave of an atomic weapon), could easily prove fatal to an already highly loaded aircraft.[33] The advent of the long-range jet bomber and transport—a configuration typically having a long and relatively thin swept wing, and large, thin vertical and horizontal tail surfaces—added further complications to gust research, particularly because the penalty for an abrupt gust loading could be a fatal structural failure. Indeed, on one occasion, while flying through gusty air at low altitude, a Boeing B-52 lost much of its vertical fin, though fortunately, its crew was able to recover and land the large bomber.[34]

The emergence of long-endurance, high-altitude reconnaissance aircraft such as the Lockheed U-2 and Martin RB-57D in the 1950s and the long-range ballistic missile further stimulated research on high-altitude gusts and turbulence. Though seemingly unconnected, both the high-altitude jet airplane and the rocket-boosted ballistic missile required understanding of the nature of upper atmosphere turbulence and gusts. Both transited the upper atmospheric region: the airplane cruising in the high stratosphere for hours, and the ballistic missile or space launch vehicle transiting through it within seconds on its way into space. Accordingly, from early 1956 through December 1959, the NACA, in cooperation with the Air Weather Service of the U.S. Air Force, installed gust load recorders on Lockheed U-2 strategic reconnaissance aircraft operating from various domestic and overseas locations, acquiring turbulence data from 20,000 to 75,000 feet over much of the Northern Hemisphere. Researchers concluded that the turbulence problem would not be as severe as previous estimates and high-altitude balloon studies had indicated.[35]

High-altitude loitering aircraft such as the U-2 and RB-57 were followed by high-altitude, high-Mach supersonic cruise aircraft in the early to mid-1960s, typified by Lockheed’s YF-12A Blackbird and North American’s XB-70A Valkyrie, both used by NASA as Mach 3+ Supersonic Transport (SST) surrogates and supersonic cruise research testbeds. Test crews found their encounters with high-altitude gusts at supersonic speeds more objectionable than their exposure to low-altitude gusts at subsonic speeds, even though the g-loading accelerations caused by gusts were less than those experienced on conventional jet airliners.[36] At the other extreme of aircraft performance, in 1961, the Federal Aviation Agency (FAA) requested NASA assistance to document the gust and maneuver loads and performance of general-aviation aircraft. Until the program was terminated in 1982, over 35,000 flight-hours of data were assembled from 95 airplanes, representing every category of general-aviation airplane, from single-engine personal craft to twin-engine business airplanes and including such specialized types as crop-dusters and aerobatic aircraft.[37]

Along with studies of the upper atmosphere by direct measurement came studies on how to improve turbulence detection and avoidance, and how to measure and simulate the fury of turbulent storms. In 1946-1947, the U.S. Weather Bureau sponsored a study of turbulence as part of a thunderstorm study project. Out of this effort, in 1948, researchers from the NACA and elsewhere concluded that ground radar, if properly used, could detect storms, enabling aircraft to avoid them. Weather radar became a common feature of airliners, their once-metal nose caps replaced by distinctive black radomes.[38] By the late 1970s, most wind shear research was being done by specialists in atmospheric science, geophysical scientists, and those in the emerging field of mesometeorology—the study of small atmospheric phenomena, such as thunderstorms and tornadoes, and the detailed structure of larger weather events.[39] Although turbulent flow in the boundary layer is important to study in the laboratory, the violent phenomenon of microburst wind shear cannot be sufficiently understood without direct contact, investigation, and experimentation.[40]

Microburst loadings constitute a threat to aircraft, particularly during approach and landing. No one knows how many aircraft accidents have been caused by wind shear, though the number is certainly considerable. The NACA had done thunderstorm research during World War II, but its instrumentation was not nearly sophisticated enough to detect microburst (or thunderstorm downdraft) wind shear. NASA would join with the FAA in 1986 to systematically fight wind shear and would only have a small pool of existing wind shear research data from which to draw.[41]


The Lockheed L-1011 TriStar uses smoke generators to show its strong wing vortex flow patterns in 1977. NASA.

 


A revealing view taken down the throat of a wingtip vortex formed by a low-flying crop-duster. NASA.

Wind Shear Emerges as an Urgent Aviation Safety Issue

In 1972, the FAA had instituted a small wind shear research program, with emphasis upon developing sensors that could plot wind speed and direction from ground level up to 2,000 feet above ground level (AGL). Even so, the agency’s major focus was on wake vortex impingement. The powerful vortexes streaming behind newer-generation wide-body aircraft could—and sometimes did—flip smaller, lighter aircraft out of control. Serious enough at high altitude, these inadvertent excursions could be disastrous if low over the ground, such as during landing and takeoff, where a pilot had little room to recover. By 1975, the FAA had developed an experimental Wake Vortex Advisory System, which it installed later that year at Chicago’s busy O’Hare International Airport. NASA undertook a detailed examination of wake vortex studies, both in tunnel tests and with a variety of aircraft, including the Boeing 727 and 747, the Lockheed L-1011, and smaller aircraft, such as the Gates Learjet, helicopters, and general-aviation aircraft.

But it was wind shear, not wake vortex impingement, which grew into a major civil aviation concern, and the onset came with stunning and deadly swiftness.[42] Three accidents from 1973 to 1975 highlighted the extreme danger it posed. On the afternoon of December 17, 1973, while making a landing approach in rain and fog, an Iberia Airlines McDonnell-Douglas DC-10 wide-body abruptly sank below the glideslope just seconds before touchdown, impacting amid the approach lights of Runway 33L at Boston’s Logan Airport. No one died, but the crash seriously injured 16 of the 151 passengers and crew. The subsequent National Transportation Safety Board (NTSB) report determined “that the captain did not recognize, and may have been unable to recognize an increased rate of descent” triggered “by an encounter with a low-altitude wind shear at a critical point in the landing approach.”[43] Then, on June 24, 1975, Eastern Air Lines’ Flight 66, a Boeing 727, crashed on approach to John F. Kennedy International Airport’s Runway 22L. This time, 113 of the 124 passengers and crew perished. All afternoon, flights had encountered and reported wind shear conditions, and at least one pilot had recommended closing the runway. Another Eastern captain, flying a Lockheed L-1011 TriStar, prudently abandoned his approach and landed instead at Newark. Shortly after the L-1011 diverted, the EAL Boeing 727 impacted almost a half mile short of the runway threshold, again amid the approach lights, breaking apart and bursting into flames. Again, wind shear was to blame, but the NTSB also faulted Kennedy’s air traffic controllers for not diverting the 727 to another runway after the EAL TriStar’s earlier aborted approach.[44]

Initial NACA-NASA ResearchJust weeks later, on August 7, Continental Flight 426, another Boeing 727, crashed during a stormy takeoff from Denver’s Stapleton

International Airport. Just as the airliner began its climb after lifting off the runway, the crewmembers encountered a wind shear so severe that they could not maintain level flight despite application of full power and maintenance of a flight attitude that ensured the wings were produc­ing maximum lift.[45] The plane pancaked in level attitude on flat, open ground, sustaining serious damage. No lives were lost, though 15 of the 134 passengers and crew were injured.

Less than a year later, on June 23, 1976, Allegheny Airlines Flight 121, a Douglas DC-9 twin-engine medium-range jetliner, crashed during an attempted go-around at Philadelphia International Airport. The pilot, confronting “severe horizontal and vertical wind shears near the ground,” abandoned his landing approach to Runway 27R. As controllers in the airport tower watched, the straining DC-9 descended in a nose-high attitude, pancaking onto a taxiway and sliding to a stop. The fact that it hit nose-high, wings level, and on flat terrain undoubtedly saved lives. Even so, 86 of the plane’s 106 passengers and crew were seriously injured, including the entire crew.[46]

In each of these cases, wind shear brought about by thunderstorm downdrafts (microbursts), rather than the milder wind shear produced by gust fronts, was to blame. This led to a major reinterpretation of the wind shear-causing phenomena that most endangered low-flying planes. Before these accidents, meteorologists believed that gust fronts, or the leading edge of a large dome of rain-cooled air, provided the most dangerous sources of wind shear. Now, using data gathered from the planes that had crashed and from weather radar, scientists, engineers, and designers came to realize that the small, focused, jet-like downdraft columns characteristic of microbursts produced the most threatening kind of wind shear.[47]

Fateful choice: confronting the microburst threat. Richard P. Hallion.

Microburst wind shear poses an insidious danger for an aircraft. An aircraft landing will typically encounter the horizontal outflow of a microburst as a headwind, which increases its lift and airspeed, tempting the pilot to reduce power. But then the airplane encounters the descending vertical column as an abrupt downdraft, and its speed and altitude both fall. As it continues onward, it will exit the central downflow and experience the horizontal outflow, now as a tailwind. At this point, the airplane is already descending at low speed. The tailwind seals its fate, robbing it of even more airspeed and, hence, lift. It then stalls (that is, loses all lift) and plunges to Earth. As NASA testing would reveal, professional pilots generally need between 10 and 40 seconds of warning to avoid the problems of wind shear.[48]
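
That headwind, downdraft, and tailwind sequence can be captured in a toy kinematic model. The sketch below uses an idealized, symmetric microburst with invented magnitudes (a 30-knot peak outflow, a 2,000-foot-per-minute core downdraft, and a fixed groundspeed); it is an illustration of the airspeed bleed-off described above, not a reconstruction of any accident.

```python
# Toy kinematic model of a microburst encounter on approach. Idealized
# symmetric outflow with hypothetical magnitudes; airspeed is groundspeed
# plus the headwind component, so the aircraft first gains airspeed in the
# outflow, then loses it again on the tailwind side.

GROUNDSPEED = 140.0   # kt, held constant to isolate the wind effect
STALL_SPEED = 110.0   # kt, assumed approach stall speed
RADIUS = 1.5          # nmi, assumed outflow radius of the microburst

def headwind(x_nmi):
    """Headwind (+) or tailwind (-) in kt at distance x from the core."""
    if abs(x_nmi) > RADIUS:
        return 0.0
    return -30.0 * (x_nmi / RADIUS)   # +30 kt entering, -30 kt leaving

def downdraft(x_nmi):
    """Vertical wind in ft/min, strongest over the descending core."""
    return -2_000.0 if abs(x_nmi) < 0.5 else 0.0

for x in (-2.0, -1.5, -0.25, 0.0, 0.25, 1.5):
    airspeed = GROUNDSPEED + headwind(x)
    print(f"x={x:+5.2f} nmi: airspeed {airspeed:5.1f} kt, "
          f"downdraft {downdraft(x):+7.0f} ft/min"
          + ("  <-- at assumed stall speed" if airspeed <= STALL_SPEED else ""))
```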

Goaded by these accidents and NTSB recommendations that the FAA improve its weather advisory and runway selection procedures, “step up research on methods of detecting the [wind shear] phenomenon,” and develop an aircrew wind shear training process, the FAA mandated installation at U.S. airports of a new Low-Level Windshear Alert System (LLWAS), which employed acoustic Doppler radar, technically similar to the FAA’s Wake Vortex Advisory System installed at O’Hare.[49] The LLWAS incorporated a variety of equipment that measured wind velocity (wind speed and direction). This equipment included a master station, which had a main computer and system console to monitor LLWAS performance, and a transceiver, which transmitted signals to the system’s remote stations. The master station had several visual computer displays and auditory alarms for aircraft controllers. The remote stations had wind sensors made of sonic anemometers mounted on metal pipes. Each remote station was enclosed in a steel box with a radio transceiver, power supplies, and battery backup. Every airport outfitted with this system used multiple anemometer stations to effectively map the nature of wind events in and around the airport’s runways.[50]
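
Functionally, an anemometer-array system of this kind compares the wind vector at each boundary station with a field reference wind and raises an alert when the vector difference exceeds a threshold. The sketch below is a hypothetical illustration of that comparison, not the certified LLWAS algorithm; the 15-knot threshold and the station winds are assumptions.

```python
# Hedged sketch of an LLWAS-style alert: compare each remote anemometer's
# wind vector with a field-reference wind and alarm when the vector
# difference exceeds a threshold. Not the certified LLWAS algorithm.
import math

THRESHOLD_KT = 15.0   # assumed alert threshold for the vector difference

def to_vector(speed_kt, direction_deg):
    """Wind vector (u, v) in kt from speed and direction the wind blows from."""
    rad = math.radians(direction_deg)
    return speed_kt * math.sin(rad), speed_kt * math.cos(rad)

def shear_alert(reference, station):
    (u0, v0), (u1, v1) = to_vector(*reference), to_vector(*station)
    return math.hypot(u1 - u0, v1 - v0) > THRESHOLD_KT

centerfield = (10.0, 180.0)                # 10 kt from the south
remotes = {"NE boundary": (12.0, 190.0),   # benign difference: no alert
           "SW boundary": (25.0, 330.0)}   # strong opposing outflow: alert

for name, wind in remotes.items():
    print(f"{name}: wind-shear alert = {shear_alert(centerfield, wind)}")
```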

At the end of March 1981, over 70 representatives from NASA, the FAA, the military, the airline community, the aerospace industry, and academia met at the University of Tennessee Space Institute in Tullahoma to explore weather-related aviation issues. Out of that meeting came a list of recommendations for further joint research, many of which directly addressed the wind shear issue and the need for better detection and warning systems. As the report summarized:

1. There is a critical need to increase the data base for wind and temperature aloft forecasts both from a more frequent updating of the data as well as improved accuracy in the data, and thus, also in the forecasts which are used in flight planning. This will entail the development of rational definitions of short term variations in intensity and scale length (of turbulence) which will result in more accurate forecasts which should also meet the need to improve numerical forecast modeling requirements relative to winds and temperatures aloft.

2. The development of an on-board system to detect wind induced turbulence should be beneficial to meeting the requirement for an investigation of the subjective evaluation of turbulence "feel” as a function of motion drive algorithms.

3. More frequent reporting of wind shift in the terminal area is needed along with greater accuracy in forecasting.

4. There is a need to investigate the effects of unequal wind components acting across the span of an airfoil.

5. The FAA Simulator Certification Division should monitor the work to be done in conjunction with the JAWS project relative to the effects of wind shear on aircraft performance.

6. Robert Steinberg's ASDAR effort should be utilized as soon as possible, in fact it should be encouraged or demanded as an operational system beneficial for flight planning, specifically where winds are involved.

7. There is an urgent need to review the way pilots are trained to handle wind shear. The present method, as indicated in the current advisory circular, of immediately pulling to stick shaker on encountering wind shear could be a dangerous procedure. It is suggested the circular be changed to recommend the procedure to hold at whatever airspeed the aircraft is at when the pilot realizes he is encountering a wind shear and apply maximum power, and that he not pull to stick shaker except to flare when encountering ground effect to minimize impact or to land successfully or to effect a go-around.

8. Need to develop a clear non-technical presentation of wind shear which will help to provide improved training for pilots relative to wind shear phenomena. Such training is of particular importance to pilots of high performance, corporate, and commercially used aircraft.

9. Need to develop an ICAO type standard terminology for describing the effects of windshear on flight performance.

10. The ATC system should be enhanced to provide operational assistance to pilots regarding hazardous weather areas and, in view of the envisioned controller workloads generated, perfecting automated transmissions containing this type of information to the cockpit as rapidly and as economically as practicable.

11. In order to improve the detection in real time of hazardous weather, it is recommended that FAA, NOAA, NWS, and DOD jointly address the problem of fragmental meteorological collection, processing, and dissemination pursuant to developing a system dedicated to making effective use of perishable weather information. Coupled with this would be the need to conduct a cost benefit study relative to the benefits that could be realized through the use of such items as a common winds and temperature aloft reporting by use of automated sensors on aircraft.

12. Develop a capability for very accurate four to six minute forecasts of wind changes which would require terminal reconfigurations or changing runways.

13. Due to the inadequate detection of clear air turbulence, an investigation is needed to determine what has happened to the promising detection systems that have been reported and recommended in previous workshops.

14. Improve the detection and warning of windshear by developing on-board sensors as well as continuing the development of emerging technology for ground-based sensors.

15. Need to collect true three and four dimensional wind shear data for use in flight simulation programs.

16. Recommend that any systems whether airborne or ground based that can provide advance or immediate alert to pilots and controllers should be pursued.

17. Need to continue the development of Doppler radar technology to detect the wind shear hazard, and that this be continued at an accelerated pace.

18. Need for airplane manufacturers to take into consideration the effect of phenomena such as microbursts which produce strong periodic longitudinal wind perturbations at the aircraft phugoid frequency.

19. Consideration should be given, by manufacturers, to gust alleviation devices on new aircraft to provide a softer ride through turbulence.

20. Need to develop systems to automatically detect hazardous weather phenomena through signature recognition algorithms and automatically data linking alert messages to pilots and air traffic controllers.[51]

Given the subsequent history of NASA's research on the wind shear problem (and others), many of these recommendations presciently forecast the direction of Agency and industry research and development efforts.

Unfortunately, that did not come in time to prevent yet another series of microburst-related accidents. That series of catastrophes effectively elevated microburst wind shear research to the status of a national air safety emergency. By the early 1980s, 58 U.S. airports had installed LLWAS. Although LLWAS constituted a great improvement over verbal observations and warnings by pilots communicated to air traffic controllers, LLWAS sensing technology was not mature or sophisticated enough to remedy the wind shear threat. Early LLWAS sensors were installed without full knowledge of microburst characteristics. They were usually installed in too few numbers, placed too close to the airport (instead of farther out on the approach and departure paths of the runways), and, worst, were optimized to detect gust fronts (the traditional pre-Fujita way of regarding wind shear), not the columnar downdrafts and horizontal outflows characteristic of the most dangerous shear flows. Thus, wind shear could still strike, and viciously so.

On July 9, 1982, Clipper 759, a Pan American World Airways Boeing 727, took off from the New Orleans airport amid showers and "gusty, variable, and swirling" winds.[52] Almost immediately, it began to descend, having attained an altitude of no more than 150 feet. It hit trees, continued onward for almost another half mile, and then crashed into residential housing, exploding in flames. All 146 passengers and crew died, as did 8 people on the ground; 11 houses were destroyed or "substantially" damaged, and another 16 people on the ground were injured. The NTSB concluded that the probable cause of the accident was "the airplane's encounter during the liftoff and initial climb phase of flight with a microburst-induced wind shear which imposed a downdraft and a decreasing headwind, the effects of which the pilot would have had difficulty recognizing and reacting to in time for the airplane's descent to be arrested before its impact with trees." Significantly, it also noted, "Contributing to the accident was the limited capability of current ground based low level wind shear detection technology [the LLWAS] to provide
definitive guidance for controllers and pilots for use in avoiding low level wind shear encounters."[53] This tragic accident impelled Congress to direct the FAA to join with the National Academy of Sciences (NAS) to "study the state of knowledge, alternative approaches and the consequences of wind shear alert and severe weather condition standards relating to take off and landing clearances for commercial and general aviation aircraft."[54]

As the FAA responded to these misfortunes and accelerated its research on wind shear, NASA researchers accelerated their own wind shear research. In the late 1970s, NASA Ames Research Center contracted with Bolt, Beranek, and Newman, Inc., of Cambridge, MA, to perform studies of "the effects of wind-shears on the approach performance of a STOL aircraft. . . using the optimal-control model of the human operator." In laymen's terms, this meant that the company used existing data to mathematically simulate the combined pilot/aircraft reaction to various wind shear situations and to deduce and explain how the pilot should manipulate the aircraft for maximum safety in such situations. Although useful, these studies did not eliminate the wind shear problem.[55]

Throughout the 1980s, NASA research into thunderstorm phenomena involving wind shear continued. Double-vortex thunderstorms and their potential effects on aviation were of particular interest. Double-vortex storms involve a pair of vortexes present in the storm's dynamic updraft that rotate in opposite directions. This pair forms when the cylindrical thermal updraft of a thunderstorm penetrates the upper-level air and there is a large amount of vertical wind shear between the lower- and upper-level air layers. Researchers produced a numerical tornado prediction scheme based on the movement of the double-vortex thunderstorm. A component of this scheme was the Energy-Shear Index (ESI), which researchers calculated from radiosonde measurements. The index integrated parameters that were representative of thermal instability and the blocking effect. It indicated


NASA 809, a Martin B-57B flown by Dryden research crews in 1982 for gust and microburst research. NASA.

environments appropriate for the development of double-vortex thunderstorms and tornadoes, which would help pilots and flight controllers determine safe flying conditions.[56]

In 1982, in partnership with the National Center for Atmospheric Research (NCAR), the University of Chicago, the National Oceanic and Atmospheric Administration (NOAA), the National Science Foundation (NSF), and the FAA, NASA vigorously supported the Joint Airport Weather Studies (JAWS) effort. NASA research pilots and flight research engineers from the Ames-Dryden Flight Research Facility (now the NASA Dryden Flight Research Center) participated in the JAWS program from mid-May through mid-August 1982, using a specially instrumented Martin B-57B jet bomber. NASA researchers selected the B-57B for its strength, flying it on low-level wind shear research flights around the Sierra Mountains near Edwards Air Force Base (AFB), CA, about the Rockies near Denver, CO, around Marshall Space Flight Center, AL, and near Oklahoma City, OK. Raw data were digitally collected on microbursts, gust fronts, mesocyclones, tornadoes, funnel clouds, and hail storms; converted into engineering format at the Langley Research Center; and then analyzed at Marshall Space Flight Center and the University of Tennessee Space Institute at Tullahoma. Researchers found that some microbursts recorded during the JAWS program created wind shear too extreme for landing or departing airliners to survive if they encountered it at an altitude less than 500 feet.[57] In the most severe case recorded, the B-57B experienced an abrupt 30-knot speed increase within less than 500 feet of distance traveled and then a gradual decrease of 50 knots over 3.2 miles, clear evidence of encountering the headwind outflow of a microburst and then the tailwind outflow as the plane transited through the microburst.[58]
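A quick calculation on those recorded figures shows why the encounter geometry matters so much. The numbers below come straight from the JAWS case just cited; the unit conversions are the only additions:

```python
# Worked comparison of the two shear gradients in the B-57B encounter:
# an abrupt 30-kt gain over 500 ft, then a 50-kt loss over 3.2 statute miles.
abrupt_gain_kt, abrupt_dist_ft = 30.0, 500.0
gradual_loss_kt, gradual_dist_ft = 50.0, 3.2 * 5280.0

print(f"headwind side: {abrupt_gain_kt / abrupt_dist_ft * 1000:5.1f} kt per 1,000 ft")
print(f"tailwind side: {gradual_loss_kt / gradual_dist_ft * 1000:5.2f} kt per 1,000 ft")
```

The abrupt side works out to roughly 60 knots per 1,000 feet, about twenty times steeper than the gradual side, which is why an airliner on approach can be caught with no energy margin before the crew has any chance to respond.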

At the same time, the Center for Turbulence Research (CTR), run jointly by NASA and Stanford University, pioneered using an early parallel computer, the Illiac IV, to perform large turbulence simulations, something previously unachievable. CTR performed the first of these simulations and made the data available to researchers around the globe. Scientists and engineers tested theories, evaluated modeling ideas, and, in some cases, calibrated measuring instruments on the basis of these data. A 5-minute motion picture of simulated turbulent flow provided an attention-catching visual for the scientific community.[59]

In 1984, NASA and FAA representatives met at Langley Research Center to review the status of wind shear research and progress toward developing sensor systems and preventing disastrous accidents. Out of this, researcher Roland L. Bowles conceptualized a joint NASA-FAA program to develop an airborne detector system, perhaps one that would be forward-looking and thus able to furnish real-time warning to an airline crew of wind shear hazards in its path. Unfortunately, before this program could yield beneficial results, yet another wind shear accident followed the dismal succession of its predecessors: the crash of Delta Flight 191 at Dallas-Fort Worth International Airport (DFW) on August 2, 1985.[60]

Delta Flight 191 was a Lockheed L-1011 TriStar wide-body jumbo jet. As it descended toward Runway 17L amid a violent turbulence-producing thunderstorm, a storm cell produced a microburst directly in the airliner's path. The L-1011 entered the fury of the outflow when only 800 feet above ground and at a low speed and energy state. As the L-1011 transitioned through the microburst, a lift-enhancing headwind of 26 knots abruptly dropped to zero and, as the plane sank in the downdraft column, then became a 46-knot tailwind, robbing it of lift. At low altitude, the pilots had insufficient room for recovery, and so, just 38 seconds after beginning its approach, Delta Flight 191 plunged to Earth, a mile short of the runway threshold. It broke up in a fiery heap of wreckage, slewing across a highway and crashing into some water tanks before coming to rest, burning furiously. The accident claimed the lives of 136 passengers and crewmembers and the driver of a passing automobile. Just 24 passengers and 3 of its crew survived; only 2 were without injury.[61] Among the victims were several senior staff members from IBM, including computer pioneer Don Estridge, father of the IBM PC. Once again, the NTSB blamed an "encounter at low altitude with a microburst-induced, severe wind shear" from a rapidly developing thunderstorm on the final approach course. But the accident illustrated as well the immature capabilities of the LLWAS at that time; only after Flight 191 had crashed did the DFW LLWAS detect the fatal microburst.[62]
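The airborne-detection effort Bowles championed eventually framed the wind shear threat as a hazard index, widely known as the "F-factor," combining the rate of tailwind increase along the flight path with the downdraft-to-airspeed ratio. The sketch below applies an index of that general form to the Delta 191 figures given above; the 20-second transit time, 10 m/s downdraft, and 140-knot approach speed are assumptions for illustration only:

```python
G = 9.81    # m/s^2
KT = 0.5144  # m/s per knot

def f_factor(dwx_dt, w_down, airspeed):
    """Hazard index of the form F = (dWx/dt)/g + w_down/V, where
    dWx/dt is the rate of tailwind increase along the path (m/s^2),
    w_down the downdraft speed (m/s, positive down), and V the true
    airspeed (m/s). Values above roughly 0.1 are considered hazardous."""
    return dwx_dt / G + w_down / airspeed

# Delta 191 figures from the text: a 26-kt headwind became a 46-kt tailwind.
tailwind_change = (26 + 46) * KT          # 72 kt, about 37 m/s
F = f_factor(tailwind_change / 20.0, 10.0, 140 * KT)
print(f"F-factor ~ {F:.2f}  (hazard threshold ~ 0.10)")
```

Even under these rough assumptions the index comes out near 0.3, roughly three times the nominal hazard level, which conveys how far outside any recoverable regime the encounter lay.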

The Dallas accident resulted in widespread shock because of its large number of fatalities. It particularly affected airline crews, as American Airlines Capt. Wallace M. Gillman recalled vividly at a NASA-sponsored 1990 meeting of international experts in wind shear:

About one week after Delta 191's accident in Dallas, I was taxiing out to take off on Runway 17R at DFW Airport. Everybody was very conscious of wind shear after that accident. I remember there were some storms coming in from the northwest and we were watching it as we were in a line of airplanes waiting to take off. We looked at the wind socks. We were listening to the tower reports from the LLWAS system, the winds at various portions around the airport. I was number 2 for takeoff and I said to my co-pilot, "I'm not going to go on this runway."

But just at that time, the number 1 crew in line, Pan Am, said,

"I'm not going to go." Then the whole line said, "We're not going to go," and then the tower taxied us all down the runway, which took us about 15 minutes, down to the other end. By that time the storm had kind of passed by and we all launched to the north.[63]

Applications Technology Satellite 1 (ATS 1): 1966-1967

Aviation’s use of actual space-based technology was first demonstrated by the FAA using NASA’s Applications Technology Satellite 1 (ATS 1) to relay voice communications between the ground and an airborne FAA aircraft using very high frequency (VHF) radio during 1966 and 1967, with the aim of enabling safer air traffic control over the oceans.[199]

Launched from Cape Canaveral atop an Atlas Agena D rocket on December 7, 1966, the spin-stabilized ATS 1 was injected into geosynchronous orbit to take up a perch 22,300 miles high, directly over Ecuador. During this early period in space history, the ATS 1 spacecraft was packed with experiments to demonstrate how satellites could be used to provide the communication, navigation, and weather monitoring that we now take for granted. In fact, the ATS 1's black and white television camera captured the first full-Earth image of the planet's cloud-covered surface.[200]

Eight flight tests were conducted using NASA's ATS 1 to relay voice signals between the ground and an FAA aircraft using VHF band radio, with the intent of allowing air traffic controllers to speak with pilots flying over an ocean. Measurements were recorded of signal level, signal plus noise-to-noise ratio, multipath propagation, voice intelligibility, and adjacent channel interference. In a 1970 FAA report, the author concluded that the "overall communications reliability using the ATS 1 link was considered marginal."[201]
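The "marginal" verdict is easier to appreciate with a free-space link calculation. The sketch below computes only the path-loss term for a signal traversing geostationary distance; the 135 MHz figure is an assumed frequency typical of the aeronautical VHF band, not one drawn from the FAA report:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss: 20*log10(4*pi*d/lambda)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

# Slant range to a geostationary satellite nearly overhead (~22,300 mi),
# at an assumed VHF experiment frequency near 135 MHz.
d = 22_300 * 1609.344  # miles to meters
print(f"one-way path loss ~ {fspl_db(d, 135e6):.1f} dB")
```

The result, on the order of 166 dB each way, had to be overcome by a modest spacecraft transmitter and near-omnidirectional aircraft antennas, which goes a long way toward explaining the thin margins the experimenters measured.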

Altogether, the ATS project attempted six satellite launches between 1966 and 1974, with ATS 2 and ATS 4 unable to achieve a useful orbit. ATS 1 and ATS 3 continued the FAA radio relay testing, this time including a specially equipped Pan American Airways 747 as it flew a commercial flight over the ocean. Results were better than when the ATS 1 was tested alone, with a NASA summary of the experiments concluding that

The experiments have shown that geostationary satellites can provide high quality, reliable, un-delayed communications between distant points on the earth and that they can also be used for surveillance. A combination of un-delayed communications and independent surveillance from shore provides the elements necessary for the implementation of effective traffic control for ships and aircraft over oceanic regions. Eventually the same techniques may be applied to continental air traffic control.[202]

Center TRACON Automation System

The computer-based tools used to improve the flow of traffic across the National Airspace System, such as the SMS, FACET, and ACES tools already discussed, were built upon the historical foundation of another set of tools that are still in use today. Although rolled out during the 1990s, these tools trace their underlying concepts back to 1968, when an Ames Research Center scientist, Heinz Erzberger, first explored the idea of introducing air traffic control concepts such as 4-D trajectory synthesis and then proposed what was, in fact, developed: the Center TRACON Automation System (CTAS), the Traffic Manager Adviser (TMA), the En Route Descent Adviser (EDA), and the Final Approach Spacing Tool (FAST). Each of the tools provides controllers with advice, information, and some amount of automation, but each tool does this for a different segment of the NAS.[265]

CTAS provides automation tools to help air traffic controllers plan for and manage aircraft arriving at a Terminal Radar Approach Control (TRACON), which is the area within about 40 miles of a major airport. It does this by generating air traffic advisories that are designed to increase fuel efficiency and reduce delays, as well as assist controllers in ensuring that there is an acceptable separation between aircraft and that planes are approaching a given airport in the correct order. CTAS's goals also include improving airport capacity without threatening safety or increasing the workload of controllers.[266]
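The core sequencing-and-spacing idea can be caricatured in a few lines of code. This is emphatically not the CTAS algorithm, which rests on 4-D trajectory synthesis; it is a toy greedy scheduler showing what "correct order with acceptable separation" means in practice, with the flight names, times, and separation value invented:

```python
def schedule_arrivals(etas_min, min_sep_min=2.0):
    """Toy version of the sequencing problem TMA/FAST address:
    order flights by estimated time of arrival, then push each
    scheduled time back just enough to preserve minimum spacing."""
    schedule = []
    last = float("-inf")
    for flight, eta in sorted(etas_min.items(), key=lambda kv: kv[1]):
        sta = max(eta, last + min_sep_min)  # scheduled time of arrival
        schedule.append((flight, eta, sta, sta - eta))
        last = sta
    return schedule

for f, eta, sta, delay in schedule_arrivals(
        {"DAL12": 10.0, "AAL7": 10.5, "UAL33": 11.0, "SWA4": 18.0}):
    print(f"{f}: ETA {eta:5.1f}  STA {sta:5.1f}  delay {delay:4.1f} min")
```

Even this caricature exhibits the essential behavior: closely bunched arrivals absorb small delays to open spacing, while an aircraft with a naturally late arrival incurs none, which is the kind of advisory a controller sees from the real tools.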


Flight controllers test the Traffic Manager Adviser tool at the Denver TRACON. The tool helps manage the flow of air traffic in the area around an airport. National Air and Space Museum.

Bioastronautics, Bioengineering, and Some Hard-Learned Lessons

Over the past 50 years, NASA has indeed encountered many complex human factors issues. Each of these had to be resolved to make possible the space agency's many phenomenal accomplishments. Its initial goal of putting a man into space was quickly accomplished by 1961. But in the years to come, NASA progressed beyond that at warp speed, at least technologically speaking.[344] By 1973, it had put men into orbit around the Earth; sent them outside the relative safety of their orbiting craft to "walk" in space, with only their pressurized suit to protect them; sent them around the far side of the Moon and back; placed them into an orbiting space station, where they would live, function, and perform complex scientific experiments in weightlessness for months at a time; and, certainly most significantly, accomplished mankind's greatest technological feat by landing humans onto the surface of the Moon, not just once but six times, and bringing them all safely back home to Mother Earth.[345]

NASA's magnificent accomplishments in its piloted space program during the 1960s and 1970s, nearly unfathomable only a few years before, thus occurred in large part as a result of years of dedicated human factors research. In the early years of the piloted space program, researchers from the NASA Environmental Physiology Branch focused on the biodynamics, or more accurately, the bioastronautics, of man in space. This discipline, which studies the biological and medical effects of space flight on man, evaluated such problems as noise, vibration, acceleration and deceleration, weightlessness, radiation, and the physiology, behavioral aspects, and performance of astronauts operating under confined and often stressful conditions.[346] These researchers thus focused on providing life support and ensuring the best possible


Mercury astronauts experiencing weightlessness in a C-131 aircraft flying a "zero-g" trajectory. This was just one of many aspects of piloted space flight that had never before been addressed. NASA.

medical selection and maintenance of the humans who were to fly into space.

Also essential for this work to progress was the further development of the technology of biomedical telemetry. This involved monitoring and transmitting a multitude of vital signs from an astronaut in space on a real-time basis to medical personnel on the ground. The comprehensive data collected included such information as body temperature, heart rate and rhythm, blood and pulse pressure, blood oxygen content, respiratory and gastrointestinal functions, muscle size and activity, urinary functions, and varying types of central nervous system activity.[347] Although much work had already been done in this field, particularly in the X-15 program, NASA further perfected it during the Mercury program when the need to carefully monitor the physiological condition of astronauts in space became critical.[348]
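As a rough illustration of what such a downlink involves, the record type below bundles the kinds of channels listed above into a single frame; the field names, units, and sampling structure are entirely hypothetical, sketched only to suggest the shape of the data:

```python
from dataclasses import dataclass

@dataclass
class VitalSignsFrame:
    """Hypothetical biomedical telemetry record; fields and units
    are illustrative, not drawn from any actual NASA downlink format."""
    mission_time_s: float
    body_temp_c: float
    heart_rate_bpm: int
    systolic_mmhg: int
    diastolic_mmhg: int
    respiration_rate_bpm: int

# One notional sample as it might arrive at a ground console.
frame = VitalSignsFrame(125.0, 37.1, 98, 128, 84, 18)
print(frame)
```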

Finally, this early era of NASA human factors research included an emphasis on the bioengineering aspects of piloted space flight, or the application of engineering principles in order to satisfy the physiological requirements of humans in space. This included the design and application of life-sustaining equipment to maintain atmospheric pressure, oxygen, and temperature; provide food and water; eliminate metabolic waste products; ensure proper restraint; and combat the many other stresses and hazards of space flight. This research also included finding the most expeditious way of arranging the multitude of dials, switches, knobs, and displays in the spacecraft so that the astronaut could efficiently monitor and operate them.[349]

In addition to the knowledge gained and applied while planning these early space flights was that gleaned from the flights themselves. The data gained and the lessons learned from each flight were essential to further success, and they were continually factored into future piloted space endeavors. Perhaps even more important, however, was the information gained from the failures of this period. They taught NASA researchers many painful but nonetheless important lessons about the cost of neglecting human factors considerations. Perhaps the most glaring example of this was the Apollo 1 fire of January 27, 1967, that killed NASA astronauts Virgil "Gus" Grissom, Roger Chaffee, and Edward White. While the men were sealed in their capsule conducting a launch pad test of the Apollo/Saturn space vehicle that was to be used for the first flight, a flash fire occurred. That such a fire could have happened in such a controlled environment was hard to explain, but the fact that no effective means had been provided for the astronauts' rescue or escape in such an emergency was inexplicable.[350] This tragedy did, however, serve some purpose; it gave impetus to tangible safety and engineering improvements, including the creation of an escape hatch that astronauts could open more quickly to egress during an emergency.[351] Perhaps more importantly, this tragedy caused NASA to step back and reevaluate all of its safety and human engineering procedures.


Apollo 1 astronauts, left to right, Gus Grissom, Ed White, and Roger Chaffee. Their deaths in a January 27, 1967, capsule fire prompted vital changes in NASA’s safety and human engineering policies. NASA.

A New Direction for NASA’s Human Factors Research

By the end of the Apollo program, NASA, though still focused on the many initiatives of its space ventures, began to look in a new direction for its research activities. The impetus for this came from a 1968 Senate Committee on Aeronautical and Space Sciences report recommending that NASA and the recently created Department of Transportation jointly determine which areas of civil aviation might benefit from further research.[352] A subsequent study prompted the President's Office of Science and Technology to direct NASA to begin similar research. The resulting Terminal Configured Vehicle program led to a new focus in NASA human factors research. This included the all-important interface between not only the pilot and airplane, but also the pilot and the air traffic controller.[353]

The goal of this ambitious program was

. . . to provide improvements in the airborne systems (avionics and air vehicle) and operational flight procedures for reducing approach and landing accidents, reducing weather minima, increasing air traffic controller productivity and airport and airway capacity, saving fuel by more efficient terminal area operations, and reducing noise by operational procedures.[354]

With this directive, NASA’s human factors scientists were now officially involved with far more than "just” a piloted space program; they would now have to extend their efforts into the expansive world of aviation.

With these new aviation-oriented research responsibilities, NASA's human factors programs would continue to evolve and increase in complexity throughout the remaining decades of the 20th century and into the present one. This advancement was inevitable, given the growing technology, especially in the realm of computer science and complex computer-managed systems, as well as the changing space and aeronautical needs that arose throughout this period.

During NASA's first three decades, more and more of the increasingly complex aerospace operating systems it was developing for its space initiatives and the aviation industry were composed of multiple subsystems. For this reason, the need arose for a human systems integration (HSI) plan to help maximize their efficiency. HSI is a multidisciplinary approach that stresses human factors considerations, along with other such issues as health, safety, training, and manpower, in the early design of fully integrated systems.[355]

To better address the human factors research needs of the aviation community, NASA formed the Flight Management and Human Factors Division at Ames Research Center, Moffett Field, CA.[356] Its name was later changed to the Human Factors Research & Technology Division; today, it is known as the Human Systems Integrations Division (HSID).[357]

For the past three decades, this division and its precursors have sponsored and participated in most of NASA’s human factors research affecting both aviation and space flight. HSID describes its goal as "safe, efficient, and cost-effective operations, maintenance, and training, both in space, in flight, and on the ground,” in order to "advance human – centered design and operations of complex aerospace systems through analysis, experimentation and modeling of human performance and human-automation interaction to make dramatic improvements in safety, efficiency and mission success.”[358] To accomplish this goal, the division, in its own words,

• Studies how humans process information, make decisions, and collaborate with human and machine systems.

• Develops human-centered automation and interfaces, decision support tools, training, and team and organizational practices.

• Develops tools, technologies, and countermeasures for safe and effective space operations.[359]

More specifically, the Human Systems Integrations Division focuses on the following three areas:

• Human performance: This research strives to better define how people react and adapt to various types of technology and differing environments to which they are exposed. By analyzing such human reactions as visual, auditory, and tactile senses; eye movement; fatigue; attention; motor control; and such perceptual cognitive processes as memory, it is possible to better predict and ultimately improve human performance.

• Technology interface design: This directly affects human performance, so technology design that is patterned to efficient human use is of utmost importance. Given the complexity and magnitude of modern pilot/aircrew cockpit responsibilities, in commercial, private, and military aircraft, as well as space vehicles, it is essential to simplify and maximize the efficiency of these tasks. Only with cockpit instruments and controls that are easy to operate can human safety and efficiency be maximized. Interface design might include, for example, the development of cockpit instrumentation displays and arrangement, using a graphical user interface.

• Human-computer interaction: This studies the "processes, dialogues, and actions" a person uses to interact with a computer in all types of environments. This interaction allows the user to communicate with the computer by inputting instructions and then receiving responses back from the computer via such mechanisms as conventional monitor displays or head-mounted displays that allow the user to interact with a virtual environment. This interface must be properly adapted to the individual user, task, and environment.[360]

Some of the more important research challenges HSID is addressing and will continue to address are proactive risk management, human performance in virtual environments, distributed air traffic management, computational models of human-automation interaction, cognitive models of complex performance, and human performance in complex operations.[361]

Over the years, NASA's human factors research has covered an almost unbelievably wide array of topics. This work has involved, and benefitted, nearly every aspect of the aviation world, including the FAA, DOD, the airline industry, general aviation, and a multitude of nonaviation areas. To get some idea of the scope of the research with which NASA has been involved, one need only search the NASA Technical Report Server using the term "human factors," which produces more


A full-scale aircraft drop test being conducted at the 240-foot-high NASA Langley Impact Dynamics Research Facility. The gantry previously served as the Lunar Landing Research Facility. NASA.

than 3,600 records.[362] It follows that no single paper or document—and this case study is no exception—could ever comprehensively describe NASA’s human factors research. It is possible, however, to get some idea of the impact that NASA human factors research has had on aviation safety and technology by reviewing some of the major programs that have driven the Agency’s human factors research over the past decades.

Into the Future

The preceding discussion can serve only as a brief introduction to NASA's massive research contribution to aviation in the realm of human factors. Hopefully, however, it has clearly made the following point: NASA, since its creation in 1958, has been an equally contributing partner with the aeronautical industry in the sharing of new technology and information resulting from their respective human factors research activities.

Because aerospace is but an extension of aeronautics, it is difficult to envision how NASA could have put its first human into space without the knowledge and technology provided by the aeronautical human factors research and development that occurred in the decades leading up to the establishment of NASA and its piloted space program. In return, however, today's high-tech aviation industry is immeasurably more advanced than it would have been without the past half century of dedicated scientific human factors research conducted and shared by the various components of NASA.

Without the thousands of NASA human factors-related research initiatives during this period, many, if not most, of the technologies that are a normal part of today's flight, air traffic control, and aircraft maintenance operations would not exist. The high cost, high risk, and lack of tangible cost-effectiveness of the research and development these advances entailed rendered this kind of research too expensive and speculative for funding by commercial concerns forced to abide by "bottom-line" considerations. As a result of NASA research and the many safety programs and technological innovations it has sponsored for the benefit of all, countless additional lives and dollars were saved as many accidents and losses of efficiency were undoubtedly prevented.

It is clear that NASA is going to remain in the business of improving aviation safety and technology for the long haul. NASA's Aeronautics Research Mission Directorate (ARMD), one of the Agency's four major directorates, will continue improving the safety and efficiency of aviation with its aviation safety, fundamental aeronautics, airspace systems, and aeronautics test programs. Needless to say, a major aspect of these programs will involve human factors research as it pertains to aeronautics.[439]

It is impossible to predict precisely in which direction NASA’s human factors research will go in the decades to come; however, based on the Agency’s remarkably unique 50-year history, it seems safe to assume it will continue to contribute to an ever-safer and more efficient world of aviation.


Hovering flight test of a free-flight model of the Hawker P.1127 V/STOL fighter underway in the return passage of the Full-Scale Tunnel. Flying-model demonstrations of the ease of transition to and from forward flight were key in obtaining the British government's support. NASA.

Spinning

Qualitatively, recovery from the various spin modes is dependent on the type of spin exhibited, the mass distribution of the aircraft, and the sequence of controls applied. Recovering from the steep steady spin tends to be relatively easy because the nose-down orientation of the aircraft control surfaces to the free stream enables at least a portion of the control effectiveness to be retained. In contrast, during a flat spin, the fuselage may be almost horizontal, and the control surfaces are oriented so as to provide little recovery moment, especially a rudder on a conventional vertical tail. In addition to the ineffectiveness of controls for recovery from the flat spin, the rotation of the aircraft about a near-vertical axis near its center of gravity results in extremely high centrifugal forces at the cockpit for configurations with long fuselages. In many cases, the negative ("eyeballs out") g-loads may be so high as to incapacitate the crewmembers and prevent them from escaping from the aircraft.
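The centrifugal-load point reduces to the relation a = ω²r. The sketch below evaluates it for an assumed flat spin of one turn every 3 seconds with the cockpit 6 meters from the spin axis; these figures are chosen only to show the order of magnitude, not drawn from any particular aircraft:

```python
import math

def cockpit_g(turn_period_s, cockpit_arm_m):
    """Centripetal acceleration, in g, at a crew station a given
    distance from the spin axis through the center of gravity."""
    omega = 2 * math.pi / turn_period_s  # spin rate, rad/s
    return (omega ** 2) * cockpit_arm_m / 9.81

# Assumed flat-spin figures for a long-fuselage configuration.
print(f"cockpit load ~ {cockpit_g(3.0, 6.0):.1f} g (eyeballs out)")
```

Even this moderate case yields nearly 3 g directed forward out of the seat, and since the load grows with the square of the spin rate and linearly with fuselage length, a faster flat spin in a longer aircraft can quickly exceed what a crew can tolerate while attempting to escape.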