NASA’S CONTRIBUTIONS TO AERONAUTICS

Dynamically Scaled Free-Flight Models

Joseph R. Chambers

The earliest flying machines were small models and concept demonstrators, and they dramatically influenced the invention of flight. Since the invention of the airplane, free-flight atmospheric model testing—and tests of "flying" models in wind tunnel and ground research facilities—has been a means of undertaking flight research critical to ensuring that designs meet mission objectives. Much of this testing has helped identify problems and solutions while reducing risk.

ON A HOT, MUGGY DAY IN SUMMER 1959, Joe Walker, the crusty old head of the wind tunnel technicians at the legendary NASA Langley Full-Scale Tunnel, couldn’t believe what he saw in the test section of his beloved wind tunnel. Just a few decades earlier, Walker had led his technician staff during wind tunnel test operations of some of the most famous U.S. aircraft of World War II in its gigantic 30- by 60-foot test section. With names like Buffalo, Airacobra, Warhawk, Lightning, Mustang, Wildcat, Hellcat, Avenger, Thunderbolt, Helldiver, and Corsair, the test subjects were big, powerful fighters that carried the day for the United States and its allies during the war. Early versions of these aircraft had been flown to Langley Field and installed in the tunnel for exhaustive studies of how to improve their aerodynamic performance, engine cooling, and stability and control characteristics.

On this day, however, Walker was witnessing a type of test that would markedly change the research agenda at the Full-Scale Tunnel for many years to come. With the creation of the new National Aeronautics and Space Administration (NASA) in 1958 and its focus on human space flight, massive transfers of the old tunnel’s National Advisory Committee for Aeronautics (NACA) personnel to new space flight priorities such as Project Mercury at other facilities had resulted in significant reductions in the tunnel’s staff, test schedule, and workload. The situation had not, however, gone unnoticed by a group of brilliant engineers who had pioneered the use of remotely controlled free-flying model airplanes to predict the flying behavior of full-scale aircraft, a unique testing technique that had been developed and applied in a much smaller tunnel known as the Langley 12-Foot Free Flight Tunnel. The engineers’ activities would benefit tremendously from the gigantic test section of the Full-Scale Tunnel, which offered far more flying space and allowed a significant increase in the size of the models used in their experiments. In view of the operational changes occurring at the tunnel, they began a strong advocacy to move their free-flight studies to the larger facility. Langley’s management decided in 1959 to transfer free-flight model testing to the Full-Scale Tunnel, and the model flight-testing was soon underway.

Joe Walker was observing a critical NASA free-flight model test that had been requested under joint sponsorship between NASA, industry, and the Department of Defense (DOD) to determine the flying charac­teristics of a 7-foot-long model of the North American X-15 research aircraft. As Walker watched the model maneuvering across the test sec­tion, he lamented the radical change of test subjects in the tunnel with several profanities and a proclamation that the testing had "gone from big-iron hardware to a bunch of damn butterflies.”[440] What Walker didn’t appreciate was that the revolutionary efforts of the NACA and NASA to develop tools, facilities, and testing techniques based on the use of sub­scale flying models were rapidly maturing and being sought by military and civil aircraft designers—not only in the Full-Scale Tunnel, but in several other unique NASA testing facilities.

For over 80 years, thousands of flight tests of "butterflies" in NACA and NASA wind tunnel facilities and outdoor test ranges have contributed valuable predictions, data, and risk reduction for the Nation’s high-priority aircraft programs, space flight vehicles, and instrumented planetary probes. Free-flight models have been used in a myriad of studies as far-ranging as aerodynamic drag reduction, loads caused by atmospheric gusts and landing impacts, ditching, aeroelasticity and flutter, and dynamic stability and control. The models used in the studies have been flown at conditions ranging from hovering flight to hypersonic speeds. Even a brief description of the wide variety of free-flight model applications is far beyond the intent of this essay; therefore, the following discussion is limited to activities in flight dynamics, which includes dynamic stability and control, flight at high angles of attack, spin entry, and spinning.

Establishing Credibility: The Early Days

Following the operational readiness of the Langley 15-Foot Free-Spinning Tunnel in 1935, initial testing centered on establishing correlation with full-scale flight-test results of spinning behavior for the XN2Y-1 and F4B-2 biplanes.[506] Critical comparisons of earlier results obtained on small-scale models from the Langley 5-Foot Vertical Tunnel and full-scale flight tests indicated considerable scale effects on aerodynamic characteristics; therefore, calibration tests in the new tunnel were deemed imperative. The results of the tests for the two biplane models were very encouraging in terms of the nature of recovery characteristics and served to inspire confidence in the testing technique and promote future tests. During those prewar years, the NACA staff was afforded time to conduct fundamental research studies and to make general conclusions for emerging monoplane designs. Systematic series of investigations were conducted in which, for example, models were tested for combinations of eight different wings and three different tails.[507] Other investigations of tunnel-to-flight correlations occurred, including comparison of results for the BT-9 monoplane trainer.

As experience with spin tunnel testing increased, researchers began to observe more troublesome differences between results obtained in flight and in the tunnel. The effects of Reynolds number, model accuracy, control-surface rigging of full-scale aircraft, propeller slipstream effects not present during unpowered model tests, and other factors became appreciated, and a general philosophy began to emerge in which model tests were viewed as good predictors of full-scale characteristics, while examples of poor correlation required additional correlation studies and a conservative interpretation of model results. Critics of small-scale model testing did not accept the growing view that spin prediction was an "art" based on extensive testing to determine the relative sensitivity of results to configuration variables, model damage, and testing technique. Nonetheless, pressure mounted to arrive at design guidelines for satisfactory spin recovery characteristics.

The Transition to NASA

In the wake of the launch of Sputnik I in October 1957, the National Aeronautics and Space Act of 1958 combined the NACA’s research facilities at Langley, Ames, Lewis, Wallops Island, and Edwards with the Army and Navy rocket programs and the California Institute of Technology’s Jet Propulsion Laboratory to form NASA. Suddenly, the NACA’s scope of American civilian research in aeronautics expanded to include the challenges of space flight, driven by the Cold War competition between the United States and the Soviet Union and the unprecedented growth of American commercial aviation on the world stage.

NASA inherited an impressive inventory of facilities from the NACA. The wind tunnels at Langley, Ames, and Lewis were the state of the art and reflected the rich four-decade legacy of the NACA and the ever-evolving need for specialized tunnels. Over the next five decades of NASA history, the work of the wind tunnels was reflected equally in the first "A" and the "S" in the Administration’s acronym.

Challenges and Opportunities

If composites were to receive wide application, the cost of the materials would have to dramatically decline from their mid-1980s levels. ACEE succeeded in making plastic composites commonplace not just in fairings and hatches for large airliners but also on control surfaces, such as the ailerons, flaps, and rudder. On these secondary structures, cash-strapped airlines achieved the weight savings that prompted the shift to composites in the first place. The program did not, however, result in the immediate transition to widespread production of plastic composites for primary structures. Until the industry could make that transition, it would be impossible to justify the investment required to create the infrastructure that Lovelace described to produce composites at rates equivalent to yearly aluminum output.

To the contrary, tooling costs for composites remained high, as did the labor costs required to fabricate the composite parts.[748] A major issue driving costs up under the ACEE program was the need to improve the damage tolerance of the composite parts, especially as the program transitioned from secondary components to heavily loaded primary structures. Composite plastics were still easy to damage and costly to replace. McDonnell-Douglas once calculated that the MD-11 trijet con­tained about 14,000 pounds of composite structure, which the company estimated saved airlines about $44,000 in yearly fuel costs per plane.[749] But a single incident of "ramp rash” requiring the airline to replace one of the plastic components could wipe away the yearly return on invest­ment provided by all 14,000 pounds of composite structure.[750]

The method that manufacturers devised in the early 1980s involved using toughened resins, but these required more intensive labor to fabricate, which aggravated the cost problem.[751] From the early 1980s, NASA worked to solve this dilemma by investigating new manufacturing methods. One research program sponsored by the Agency considered whether textile-reinforced composites could be a cost-effective way to build damage-tolerant primary structures for aircraft.[752] Composite laminates are not strong so much as they are stiff, particularly in the direction of the aligned fibers. Loads coming from different directions have a tendency to damage the structure unless it is properly reinforced, usually in the form of increased thickness or other supports. Another poor characteristic of laminated composites is how the material reacts to damage. Instead of buckling like aluminum, which helps absorb some of the energy caused by the impact, the stiff composite material tends to shatter.

Some feared that such materials could prove too much for cash-strapped airlines of the early 1990s to accept. If laminated composites were the problem, some believed the solution was to continue investigating textile composites. That meant shifting to a new process in which carbon fibers could be stitched or woven into place, then infused with a plastic resin matrix. This method seemed to offer the opportunity to solve both the damage tolerance and the manufacturing problems simultaneously. Textile fibers could be woven in a manner that made the material strong against loads coming from several directions, not just one. Moreover, some envisioned the deployment of giant textile composite sewing machines to mass-produce the stronger material, dramatically lowering the cost of manufacture in a single stroke.

The reality, of course, would prove far more complex and challenging than the visionaries of textile composites had imagined. To be sure, the concept faced many skeptics within the conservative aerospace industry even as it gained force in the early 1990s. Indeed, there have been many false starts in the composite business. The Aerospace America journal in 1990 proposed that thermoplastics, a comparatively little-used form of composites, could soon eclipse thermoset composites to become the "material of the ’90s." The article wisely contained a cautionary note from a wry Lockheed executive, who recalled a quote by a former boss in the structures business: "The first thing I hear about a new material is the best thing I ever hear about it. Then reality sinks in, and it’s a matter of slow and steady improvements until you achieve the properties you want."[753] The visionaries of textile composites in the late 1980s could not foresee it, but they would contend with more than the normal challenges of introducing any technology for widespread production. A series of industry forces were about to transform the competitive landscape of the aerospace industry over the next decade, with a wave of mergers wreaking particular havoc on NASA’s best-laid plans.

It was in this environment, in the immediate aftermath of the ACEE program’s demise, that NASA began the plunge into developing ever-more-advanced forms of composites. In 1988, the Agency launched an ambitious effort called the Advanced Composites Technology (ACT) program. It was aimed at developing hardware for composite wing and fuselage structures. The goals were to reduce structural weight for large commercial aircraft by 30-50 percent and reduce acquisition costs by 20-25 percent.[754] NASA awarded 15 contracts under the ACT banner a year later, signing up teams of large original equipment manufacturers, universities, and composite materials suppliers to work together to build an all-composite fuselage mated to an all-composite wing by the end of the century.[755]

During Phase A, from 1989 to 1991, the program focused on man­ufacturing technologies and structural concepts, with stitched textile preform and automated tow placement identified as the most promis­ing new production methods.[756] "At that point in time, textile reinforced composites moved from being a laboratory curiosity to large scale air­craft hardware development,” a NASA researcher noted.[757] Phase B, from 1992 to 1995, focused on testing subscale components.

Within the ACT banner, NASA sponsored projects of wide-ranging scope and significance. Sikorsky, for example, which was selected after 1991 to lead development and production of the RAH-66 Comanche, worked on a new process using flowable silicone powder to simplify the process of vacuum-bagging composites before being heated in an auto­clave.[758] Meanwhile, McDonnell-Douglas Helicopter investigated 3-D
finite element models to discover how combined loads create stresses through the thickness of composite parts during the design process.

The focus of ACT, however, would be on developing the technologies that would finally commercialize composites for heavily loaded structures. The three major commercial airliner firms that dominated activity under the ACEE remained active in the new program despite huge changes in the commercial landscape.

Lockheed already had decided not to build any more commercial airliners after ceasing production of the L-1011 Tristar in 1984 but pur­sued ACT contracts to support a new strategy—also later dropped—to become a structures supplier for the commercial market.[759] Lockheed’s role involved evaluating textile composite preforms for a wide variety of applications on aircraft.

It was still 8 years before Boeing and McDonnell-Douglas agreed to their fateful merger in 1997, but ACT set each on a path for developing new composites that would converge around the same time as their corporate identities. NASA set Douglas engineers to work on producing an all-composite wing. Part of Boeing’s role under ACT involved constructing several massive components, such as a composite fuselage barrel; a window belt, introducing the complexity of material cutouts; and a full wing box, providing a way to mate the Douglas wing to the Boeing fuselage. As ambitious as this roughly 10-year plan was, it did not overpromise. NASA did not intend to validate the airworthiness of the technologies. That role would be assigned to industry, as a private investment. Rather, the ACT program sought merely to prove that such structures could be built and that the materials were sound in their manufactured configuration. Thus, pressure tests would be performed on the completed structures to verify the analytical predictions of engineers.

Such aims presupposed some level of intense collaboration between the two future partners, Boeing and McDonnell-Douglas, but NASA may have been disappointed with the results even before the merger of 1997. Although the former ACEE program had achieved a level of unique collaboration between the highly competitive commercial aircraft prime contractors, that spirit appeared to have eroded under the intense market pressures of the early 1990s airline industry. One unnamed industry source explained to an Aerospace Daily reporter in 1994: "Each company wants to do its own work. McDonnell doesn’t want to put its [composite] wing on a Boeing [composite] fuselage and Boeing doesn’t trust its composite fuselage mated to a McDonnell composite wing."[760]

NASA, facing funding shortages after 1993, ultimately scaled back the goal of ACT to mating an all-composite wing made by either McDonnell-Douglas or Boeing to an "advanced aluminum" fuselage section.[761] Boeing’s work on completing an all-composite fuselage would continue, but it would transition to a private investment, leveraging the extensive experience provided by the NASA and military composite development programs.

In 1995, McDonnell-Douglas was selected to enter Phase C of the ACT program with the goal of constructing the all-composite wing, but industry developments intervened. After McDonnell-Douglas was absorbed into Boeing, speculation swirled about the fate of the former’s active all-composite wing program. In 1997, McDonnell-Douglas had plans to eventually incorporate the new wing technology on the legacy MD-90 narrow body.[762] (Boeing later renamed the MD-90 the 717, filling a gap created when the manufacturer had skipped from the 707 to the 727 airliners, the 717 designation having been used internally for the U.S. Air Force KC-135 refueler.[763] [764]) One postmerger speculative report suggested that Boeing might even consider adopting McDonnell-Douglas’s all-composite wing for the Next Generation 737 or a future variant of the 757. Boeing, however, would eventually drop the all-composite wing concept, even closing 717 production in 2006.

The ACT program produced an impressive legacy of innovation. Amid the drive under ACT to finally build full-scale hardware, NASA also pushed industry to radically depart from building composite structures through the laborious process of laying up laminates. This process not only drove up costs by requiring exorbitant touch labor; it also produced material that was easy to damage unless bulk—and weight—was added to the structure in the form of thicker laminates and extra stiffeners and doublers.

The ACT formed three teams, each combining one major airframer with several firms that represented part of a growing and increasingly sophisticated network of composite materials suppliers to the aerospace industry. A Boeing/Hercules team focused on a promising new method called automated tow placement. McDonnell-Douglas was paired with Dow Chemical to develop a process that could stitch the fibers roughly into the shape of the finished parts, then introduce the resin matrix through the resin transfer molding (RTM) process. That process is known as "stitched/RTM."[765] Lockheed, meanwhile, was teamed with BASF Structural Materials to work on textile preforms.

NASA and the ACT contractors had turned to textiles full bore to both reduce manufacturing costs and enhance performance. Preimpregnating fibers aligned unidirectionally into layers of laminate laid up by hand and cured in an autoclave had been the predominant production method throughout the 1980s. However, layers arranged in this manner have a tendency to delaminate when damaged.[766] The solution proposed under the ACT program was to develop a method to sew or weave the composites three-dimensionally roughly into their final configuration, then infuse the "preform" mold with resin through resin transfer molding or vacuum-assisted resin transfer molding.[767] It would require the invention of a giant sewing machine large and flexible enough to stitch a carbon fabric as large as an MD-90 wing.

McDonnell-Douglas began the process with the goal of building a wing stub box test article measuring 8 feet by 12 feet. Pathe Technologies, Inc., built a single-needle sewing machine. Its sewing head was computer controlled and could move by a gantry-type mechanism in the x- and y-axes to sew materials up to 1 inch in thickness. The machine stitched prefabricated stringers and intercostal clips to the wing skins.[768] The wing skins had been prestitched using a separate multineedle machine.[769] Both belonged to a first generation of sewing machines that accomplished their purpose, which was to provide valuable data and experience. The single-needle head, however, would prove far too limited. It moved only 90 degrees in the vertical and horizontal planes, meaning it was limited to stitching only panels with a flat outer mold line. The machine also could not stitch materials deeply enough to meet the requirement for a full-scale wing.[770]

The Advanced Composite Cargo Aircraft is a modified Dornier 328Jet aircraft. The fuselage aft of the crew station and the vertical tail were removed and replaced with new structural designs made of advanced composite materials fabricated using out-of-autoclave curing. It was developed by the Air Force Research Laboratory and Lockheed Martin. Lockheed Martin.

NASA and McDonnell-Douglas recognized that a high-speed multineedle machine, combined with an improved process for multiaxial warp knitting, would achieve affordable full-scale wing structures. This so-called advanced stitching machine would have to handle "cover panel preforms that were 3.0m wide by 15.2m long by 38.1mm thick at speeds up to 800 stitches per minute. The multiaxial warp knitting machine had to be capable of producing 2.5m wide carbon fabric with an areal weight of 1,425g/m2."[771] Multiaxial warp knitting automates the process of producing multilayer broad goods. NASA and Boeing selected the resin film infusion (RFI) process to develop a wing cost-effectively.
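To give a rough sense of the scale implied by those figures, the sketch below estimates how long a single cover panel might take to stitch at 800 stitches per minute. The stitch pitch and row spacing are illustrative assumptions, not values documented by the ACT program, so the result is only an order-of-magnitude figure.

```python
# Back-of-the-envelope throughput estimate for the advanced stitching machine
# figures quoted above (3.0 m x 15.2 m cover panel, up to 800 stitches/minute).
# The stitch pitch and row spacing below are illustrative assumptions.
PANEL_WIDTH_M = 3.0
PANEL_LENGTH_M = 15.2
STITCH_RATE_PER_MIN = 800
STITCH_PITCH_M = 0.0032   # assumed ~3.2 mm between stitches along a row
ROW_SPACING_M = 0.0127    # assumed ~12.7 mm between stitch rows

rows = PANEL_WIDTH_M / ROW_SPACING_M
stitches_per_row = PANEL_LENGTH_M / STITCH_PITCH_M
total_stitches = rows * stitches_per_row
hours = total_stitches / STITCH_RATE_PER_MIN / 60

print(f"~{total_stitches:,.0f} stitches, roughly {hours:.0f} hours per panel")
```

Under these assumed spacings, a single panel works out to roughly a million stitches and about a day of continuous stitching, which illustrates why stitch rate was a headline requirement for the machine.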

Boeing’s advanced stitching machine remains in use today, quietly producing landing gear doors for the C-17 airlifter. The thrust of innovation in composite manufacturing technology, however, has shifted to other places. Lockheed’s ACCA program spotlighted the emergence of a third generation of out-of-autoclave materials. Small civil aircraft had been fashioned out of previous generations of this type of material, but it was not nearly strong enough to support the loads required for larger aircraft such as, of course, a 328Jet. In the future, manufacturers hope to build all-composite aircraft on a conventional production line, with localized ovens to cure specific parts. Parts or sections will no longer need to be diverted to cure for several hours inside an autoclave to obtain their strength properties. Lockheed’s move with the X-55 ACCA jet represents a critical first attempt, but others are likely to soon follow. For its part, Boeing has revealed two major leaps in composite technology development on the military side, from the 1990s-era Bird of Prey demonstrator, which included a single-piece composite structure, to the co-bonded, all-composite wing section for the X-45C demonstrator (now revived and expected to resume flight-testing as the Phantom Ray).

The key features of new out-of-autoclave materials are measured by curing temperature and a statistic vital for determining crashworthiness called compression-after-impact strength. Third-generation resins now making an appearance in both Lockheed and Boeing demonstration programs represent major leaps in both categories. In terms of raw strength, Boeing states that third-generation materials can resist impact loads up to 25,000 pounds per square inch (psi), compared to 18,000 psi for the previous generation. That remains below the FAA standard for measuring crashworthiness of large commercial aircraft but may fit the standard for a new generation of military cargo aircraft that will eventually replace the C-130 and C-17 after 2020. In September 2009, the U.S. Air Force awarded Boeing a nearly $10-million contract to demonstrate such a nonautoclave manufacturing technology.

The Next, More Ambitious Step: The Piper PA-30

Encouraged by the results of the Hyper III experiment, Reed and his team decided to convert a full-scale production airplane into an RPRV. They selected the Flight Research Center’s modified Piper PA-30 Twin Comanche, a light, twin-engine propeller plane that was equipped with both conventional and fly-by-wire control systems. Technicians installed uplink/downlink telemetry equipment to transmit radio commands and data. A television camera, mounted above the cockpit windscreen, transmitted images to the ground pilot to provide a visual reference—a significant improvement over the Hyper III cockpit. To provide the pilot with physical cues, as well, the team developed a harness with small electronic motors connected to straps surrounding the pilot’s torso. During maneuvers such as sideslips and stalls, the straps exerted forces to simulate lateral accelerations in accordance with data telemetered from the RPRV, thus providing the pilot with a more natural "feel."[895] The original control system of pulleys and cables was left intact, but a few minor modifications were incorporated. The right-hand, or safety pilot’s, controls were connected directly to the flight control surfaces via conventional control cables and to the nose gear steering system via pushrods. The left-hand control wheel and rudder pedals were completely independent of the control cables, instead operating the control surfaces via hydraulic actuators through an electronic stability-augmentation system.

Bungees were installed to give the left-hand controls an artificial "feel.” A friction control was added to provide free movement of the throttles while still providing friction control on the propellers when the remote throttle was in operation.

When flown in RPRV configuration, the left-hand cockpit controls were disabled, and signals from a remote control receiver fed directly into the control system electronics. Control of the airplane from the ground cockpit was functionally identical to control from the pilot’s seat. A safety trip channel was added to disengage the control system whenever the airborne remote control system failed to receive intelli­gible commands. In such a situation, the safety pilot would immedi­ately take control.[896] Flight trials began in October 1971, with research pilot Einar Enevoldson flying the PA-30 from the ground while Thomas C. McMurtry rode on board as safety pilot, ready to take con­trol if problems developed. Following a series of incremental buildup flights, Enevoldson eventually flew the airplane unassisted from takeoff to landing, demonstrating precise instrument landing system approaches, stall recovery, and other maneuvers.[897] By February 1973, the project was nearly complete. The research team had successfully developed and demonstrated basic RPRV hardware and operating techniques quickly and at relatively low cost. These achievements were critical to follow-on programs that would rely on the use of remotely piloted vehicles to reduce the cost of flight research while maintaining or expanding data return.[898]
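The "safety trip channel" described above is essentially a command-link watchdog: whenever intelligible uplink commands stop arriving, remote control is disengaged and the onboard safety pilot takes over. The Python sketch below illustrates that logic only in outline; the timeout value, the checksum-style validity test, and the function names are assumptions for illustration, not details of the actual PA-30 system.

```python
import time

# Illustrative sketch of a command-link "safety trip" watchdog, assuming a
# 0.5-second validity window and a simple checksum test; none of these
# details are taken from the actual PA-30 RPRV implementation.
COMMAND_TIMEOUT_S = 0.5  # assumed maximum gap between intelligible commands


def command_is_intelligible(frame: bytes) -> bool:
    """Minimal validity test: non-empty frame whose last byte is a checksum."""
    return len(frame) >= 2 and frame[-1] == (sum(frame[:-1]) & 0xFF)


def monitor_uplink(receive_frame, engage_rprv_control, revert_to_safety_pilot):
    """Disengage remote control if intelligible commands stop arriving."""
    last_good = time.monotonic()
    engaged = True
    engage_rprv_control()
    while engaged:
        frame = receive_frame()  # hypothetical receiver; returns b"" if nothing arrived
        now = time.monotonic()
        if frame and command_is_intelligible(frame):
            last_good = now
        elif now - last_good > COMMAND_TIMEOUT_S:
            # Safety trip: hand the airplane back to the onboard safety pilot.
            revert_to_safety_pilot()
            engaged = False
```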

Lessons Learned-Realities and Recommendations

Unmanned research vehicles have proven useful for evaluating new aeronautical concepts and providing precision test capability, repeatable test maneuver capability, and flexibility to alter test plans as necessary. They allow testing of aircraft performance in situations that might be too hazardous to risk a pilot on board yet allow for a pilot in the loop through remote control. In some instances, it is more cost-effective to build a subscale RPRV than a full-scale aircraft.[1047] Experience with RPRVs at NASA Dryden has provided valuable lessons. First and foremost, good program planning is critical to any successful RPRV project. Research engineers need to spell out data objectives in as much detail as possible as early as possible. Vehicle design and test planning should be tailored to achieve these objectives in the most effective way. Definition of operational techniques—air launch versus ground launch, parachute recovery versus horizontal landing, etc.—are highly dependent on research objectives.

One advantage of RPRV programs is flexibility in regard to match­ing available personnel, facilities, and funds. Almost every RPRV project at Dryden was an experiment in matching personnel and equipment to operational requirements. As in any flight-test project, staffing is very important. Assigning an operations engineer and crew chief early in the design phase will prevent delays resulting from opera­tional and maintainability issues.[1048] Some RPRV projects have required only a few people and simple model-type radio-control equipment. Others involved extremely elaborate vehicles and sophisticated control systems. In either case, simulation is vital for RPRV systems development, as well as pilot training. Experience in the simulator helps mitigate some of the difficulties of RPRV operation, such as lack of sensory cues in the cock­pit. Flight planners and engineers can also use simulation to identify significant design issues and to develop the best sequence of maneu­vers for maximizing data collection.[1049] Even when built from R/C model stock or using model equipment (control systems, engines, etc.), an RPRV should be treated the same as any full-scale research airplane. Challenges inherent with RPRV operations make such vehicles more susceptible to mishaps than piloted aircraft, but this doesn’t make an RPRV expend­able. Use of flight-test personnel and procedures helps ensure safe oper­ation of any unmanned research vehicle, whatever its level of complexity.

Configuration control is extremely important. Installation of new software is essentially the same as creating a new airplane. Sound engineering judgments and a consistent inspection process can eliminate potential problems.

Knowledge and experience promote safety. To as large a degree as possible, actual mission hardware should be used for simulation and training. People with experience in manned flight-testing and develop­ment should be involved from the beginning of the project.[1050] The criti­cal role of an experienced test pilot in RPRV operations has been repeat­edly demonstrated. A remote pilot with flight-test experience can adapt to changing situations and discover system anomalies with greater flex­ibility and accuracy than an operator without such experience.

Human factors must also be considered in vehicle and ground cockpit design. RPRV cockpit workload is comparable to that for a manned aircraft, but remote control systems fail to provide many significant physical cues for the pilot. A properly designed Ground Control Station will compensate for as many of these shortfalls as possible.[1051]

The advantages and disadvantages of using RPRVs for flight research sometimes seem to conflict. On one hand, the RPRV approach can result in lower program costs because of reduced vehicle size and complexity, elimination of man-rating tests, and elimination of the need for life-support systems. However, higher program costs may result from a number of factors. Some RPRVs are at least as complex as manned vehicles and thus costly to build and operate. Limited space in small airframes requires development of miniaturized instrumentation and can make maintenance more difficult. Operating restrictions may be imposed to ensure the safety of people on the ground. Uplink/downlink communications are vulnerable to outside interference, potentially jeopardizing mission success, and line-of-sight limitations restrict some RPRV operations.[1052]

The cost of designing and building new aircraft is constantly rising, as the need for speed, agility, stores/cargo capacity, range, and survivability increases. Thus, the cost of testing new aircraft also increases. If flight-testing is curtailed, however, a new aircraft may reach production with undiscovered design flaws or idiosyncrasies. If an aircraft must operate in an environment or flight profile that cannot be adequately tested through wind tunnel or computer simulation, then it must be tested in flight. This is why high-risk, high-payoff research projects are best suited to use of RPRVs. High data-output per flight—through judicious flight planning—and elimination of physical risk to the research pilot can make RPRV operations cost-effective and worthwhile.[1053]

Since the 1960s, remotely piloted research vehicles have evolved continuously. Improved avionics, software, control, and telemetry systems have led to development of aircraft capable of operating within a broad range of flight regimes. With these powerful research tools, scientists and engineers at NASA Dryden continue to explore the aeronautical frontier.

Into the 21st Century

In 2004, NASA Headquarters Aeronautics Research Mission Directorate (ARMD) formed the Vehicle Systems Program (VSP) to preserve core supersonic research capabilities within the Agency.[1118] As the program had limited funding, much of the effort concentrated on cooperation with other organizations, notably the Defense Advanced Research Projects Agency (DARPA) and the military. Configuration studies pointed toward business jets as a more likely candidate for supersonic travelers than full-size airliners. More effort was devoted to cooperation with DARPA on the sonic boom problem. An earlier joint program resulted in the shaped sonic boom demonstration of 2003, when a Northrop F-5 fighter with a forward fuselage modified to reduce the type’s characteristic sonic boom signature demonstrated that the modification worked.[1119]

Among the supersonic cruise flight-test research tools, circa 2007, was thermal imagery. NASA.

Military aircraft have traversed the sonic regime so frequently that one can hardly dignify it with the name "frontier” that it once had.


In-flight Schlieren imagery. NASA.

 


In-flight thermography output. NASA.

Nevertheless, there have been few supercruising aircraft, the SR-71, the Concorde, the Tu-144, and the F-22A constituting notable exceptions. The operational experience gained with the SR-71 fleet with its DAFICS in the 1980s, and the more recent Air Force experience with the low-observable supercruising Lockheed Martin F-22A Raptor, indicate that, with a properly designed aircraft incorporating modern digital systems, high Mach supersonic cruise is now technologically within reach. Indeed, a November 2007 Langley Research Center presentation at the annual meeting of the Aeronautics Research Mission Directorate reflected that although no supersonic cruise aircraft is flying, digital simulation capabilities, advanced test instrumentation, and research tools developed in support of previous programs are nontrivial legacies of the supersonic cruise study programs, positioning NASA well for any nationally identified supersonic cruise aircraft requirement. Whether that will occur in the near future remains to be seen, just as it has since the creation of NASA a half century ago, but one thing is clear: the more than three decades of imaginative NASA supersonic cruise research after cancellation of the SST have produced a technical competency permitting, if needed, design for routine operation of a high Mach supersonic cruiser.[1120]


NASA synthetic vision research promises to increase flight safety by giving pilots perfect posi­tional and situation awareness, regardless of weather or visibility conditions. Richard P. Hallion.

 

Learning to Fly with SLDs

From the earliest days of aviation, the easiest way for pilots to avoid problems related to weather and icing was to simply not fly through clouds or in conditions that were less than ideal. This made weather forecasting and the ability to quickly and easily communicate observed conditions around the Nation a top priority of aviation researchers. Working with the National Oceanic and Atmospheric Administration (NOAA) during the 1960s, NASA orbited the first weather satellites, which began equipped with black-and-white television cameras and have since progressed to include sensors capable of seeing beyond the range of human eyesight, as well as lasers capable of characterizing the contents of the atmosphere in ways never before possible.[1248]

Post-flight image shows ice contamination on the NASA Twin Otter airplane as a result of encountering Supercooled Large Droplet (SLD) conditions near Parkersburg, WV.

Our understanding of weather and the icing phenomenon, in combination with the latest navigation capabilities—robust airframe manufacturing, anti- and de-icing systems, along with years of piloting experience—has made it possible to certify airliners to safely fly through almost any type of weather where icing is possible (the freezing rain involved generally consists of droplets between 100 and 400 microns in diameter). The exception is the category in which the presence of supercooled large drops (SLDs) is detected or suspected. Such rain is made up of water droplets that are greater than 500 microns in diameter and remain in a liquid state even though their temperature is below freezing. This makes the drop very unstable, so it will quickly freeze when it comes into contact with a cold object such as the leading edge of an airplane. And while some of the SLDs do freeze on the wing’s leading edge, some remain liquid long enough to run back and freeze on the wing surfaces, making it difficult, if not impossible, for de-icing systems to properly do their job. As a result, the amount of ice on the wing can build up so quickly, and so densely, that a pilot can almost immediately be put into an emergency situation, particularly if the ice so changes the airflow over the wing that the behavior of the aircraft is adversely affected.
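As a simple illustration of the size regimes this passage describes, the sketch below classifies a droplet using only the figures quoted here (roughly 100 to 400 microns for ordinary freezing rain within the certificated envelope, and more than 500 microns for SLD). The function name and threshold handling are illustrative assumptions, not the FAA's actual certification criteria.

```python
def classify_droplet(diameter_microns: float, below_freezing: bool) -> str:
    """Toy classification based only on the size figures quoted in the text:
    ordinary freezing rain roughly 100-400 microns, SLD above 500 microns."""
    if not below_freezing:
        return "not a supercooled icing threat"
    if diameter_microns > 500:
        return "supercooled large droplet (SLD): outside the normal certification envelope"
    if 100 <= diameter_microns <= 400:
        return "freezing rain within the certificated icing envelope"
    return "outside the size ranges quoted in this passage"


# Example: a 600-micron supercooled drop falls in the SLD regime.
print(classify_droplet(600, below_freezing=True))
```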

This was the case on October 31, 1994, when American Eagle Flight 4184, a French-built ATR 72-212 twin-turboprop regional airliner carrying a crew of 4 and 64 passengers, abruptly rolled out of control and crashed in Roselawn, IN. During the flight, the crew was asked to hold in a circling pattern before approaching to land. Icing conditions existed, with other aircraft reporting rime ice buildup. Suddenly the ATR 72 began an uncommanded roll; its two pilots heroically attempted to recover as the plane repeatedly rolled and pitched, all the while diving at high speed. Finally, as they made every effort to recover, the plane broke up at a very low altitude, the wreckage plunging into the ground and bursting into flame. An exhaustive investigation, including NASA tests and tests of an ATR 72 flown behind a Boeing NKC-135A icing tanker at Edwards Air Force Base, revealed that the accident was all the more tragic for it had been completely preventable. Records indicated that the ATR 42 and 72 had a marked propensity for roll-control incidents, 24 of which had occurred since 1986 and 13 of which had involved icing. The National Transportation Safety Board (NTSB) report concluded:

The probable causes of this accident were the loss of control, attributed to a sudden and unexpected aileron hinge moment reversal that occurred after a ridge of ice accreted beyond the deice boots because: 1) ATR failed to completely disclose to operators, and incorporate in the ATR 72 airplane flight manual, flightcrew operating manual and flightcrew training programs, adequate information concerning previously known effects of freezing precipitation on the stability and control characteristics, autopilot and related operational procedures when the ATR 72 was operated in such conditions; 2) the French Directorate General for Civil Aviation’s (DGAC’s) inadequate oversight of the ATR 42 and 72, and its failure to take the necessary corrective action to ensure continued
airworthiness in icing conditions; and 3) the DGAC’s failure to provide the FAA with timely airworthiness information developed from previous ATR incidents and accidents in icing conditions, as specified under the Bilateral Airworthiness Agreement and Annex 8 of the International Civil Aviation Organization.

Contributing to the accident were: 1) the Federal Aviation Administration’s (FAA’s) failure to ensure that aircraft icing certification requirements, operational requirements for flight into icing conditions, and FAA published aircraft icing information adequately accounted for the hazards that can result from flight in freezing rain and other icing conditions not specified in 14 Code of Federal Regulations (CFR) part 25, Appendix C; and 2) the FAA’s inadequate oversight of the ATR 42 and 72 to ensure continued airworthiness in icing conditions. [1249]

This accident focused attention on the safety hazard associated with SLD and prompted the FAA to seek a better understanding of the atmo­spheric characteristics of the SLD icing condition in anticipation of a rule change regarding certifying aircraft for flight through SLD condi­tions, or at least long enough to safely depart the hazardous zone once SLD conditions were encountered. Normally a manufacturer would demonstrate its aircraft’s worthiness for certification by flying in actual SLD conditions, backed up by tests involving a wind tunnel and com­puter simulations. But in this case such flight tests would be expensive to mount, requiring an even greater reliance on ground tests. The trou­ble in 1994 was lack of detailed understanding of SLD precipitation that could be used to recreate the phenomenon in the wind tunnel or pro­gram computer models to run accurate simulations. So a variety of flight tests and ground-based research was planned to support the decision­making process on the new certification standards.[1250]


NASA's Twin Otter ice research aircraft, based at the Glenn Research Center in Cleveland, is shown in flight.

One interesting approach NASA took in conducting basic research on the behavior of SLD rain was to employ high-speed, close-up photography. Researchers wanted to learn more about the way an SLD strikes an object: is it more of a direct impact, and/or to what extent does the drop make a splash? Investigators also had similar questions about the way ice particles impacted or bounced when used during research in an icing wind tunnel such as the one at GRC. With water droplets less than 1 millimeter in diameter and the entire impact process taking less than 1 second in time, the close-up, high-speed imaging technique was the only way to capture the sought-after data. Based on the results from these tests, follow-on tests were conducted to investigate what effect ice particle impacts might have on the sensing elements of water content measurement devices.[1251]

Another program to understand the characteristics of SLDs involved a series of flight tests over the Great Lakes during the winter of 1996-1997. GRC’s Twin Otter icing research aircraft was flown in a joint effort with the FAA and the National Center for Atmospheric Research (NCAR). Based on weather forecasts
and real-time pilot reports of in-flight icing coordinated by the NCAR, the Twin Otter was rushed to locations where SLD conditions were likely. Once on station, onboard instrumentation measured the local weather conditions, recorded any ice accretion that took place, and registered the aerodynamic performance of the aircraft in response to the icing. A total of 29 such icing research sorties were conducted, exposing the flight research team to all the sky has to offer—from normal-sized pre­cipitation and icing to SLD conditions, as well as mixed phase condi­tions. Results of the flight tests added to the database of knowledge about SLDs and accomplished four technical objectives that included charac­terization of the SLD environment aloft in terms of droplet size distri­bution, liquid water content, and measuring associated variables within the clouds containing SLDs; development of improved SLD diagnostic and weather forecasting tools; increasing the fidelity of icing simula­tions using wind tunnels and icing prediction software (LEWICE); and providing new information about SLD to share with pilots and the fly­ing community through educational outreach efforts.[1252]

Thanks in large measure to the SLD research done by NASA in partnership with other agencies—an effort NASA Associate Administrator Jaiwon Shin ranks as one of the top three most important contributions to learning about icing—the FAA is developing a proposed rule to address SLD icing, which is outside the safety envelope of current icing certification requirements. According to a February 2009 FAA fact sheet: "The proposed rule would improve safety by taking into account supercooled large-drop icing conditions for transport category airplanes most affected by these icing conditions, mixed-phase and ice-crystal conditions for all transport category airplanes, and supercooled large drop, mixed phase, and ice-crystal icing conditions for all turbine engines."[1253]

As of September 2009, SLD certification requirements were still in the regulatory development process, with hope that an initial, draft rule would be released for comment in 2010.[1254]

Precision Controllability Flight Studies

During the 1970s, NASA Dryden conducted a series of flight assessments of emerging fighter aircraft to determine factors affecting the precision
tracking capability of modern fighters at transonic conditions.[1301] Although the flight evaluations did not explore the flight envelope beyond stall and departure, they included strenuous maneuvers at high angles of attack and explored such typical handling-quality deficiencies as wing rock (undesirable large-amplitude rolling motions), wing drop, and pitch-up encountered during high-angle-of-attack tracking. Techniques were developed for the assessment process and were applied to seven different aircraft during the study. Aircraft flown included a preproduction version of the F-15, the YF-16 and YF-17 Lightweight Fighter prototypes, the F-111A and the F-111 supercritical wing research aircraft, the F-104, and the F-8.

Extensive data were acquired in the flight-test program regarding the characteristics of the specific aircraft at transonic speeds and the impact of configuration features such as wing maneuver flaps and automatic flap deflection schedules with angle of attack and Mach number. However, some of the more valuable observations relative to undesirable and uncommanded aircraft motions provided insight and guidance to the high-angle-of-attack research community regarding aerodynamic and control system deficiencies and the need for research efforts to mitigate such issues. In addition, researchers at Dryden significantly expanded their experience and expertise in conducting high-angle-of-attack flight evaluations and developing methodology to expose inherent handling-quality deficiencies during tactical maneuvers.

Appendix: Lessons from Flight-Testing the XV-5 and X-14 Lift Fans

Note: The following compilation of lessons learned from the XV-5 and X-14 programs is excerpted from a report prepared by Ames research pilot Ronald M. Gerdes based upon his extensive flight research experience with such aircraft and is of interest because of its reference to Supersonic Short Take-Off, Vertical Landing Fighter (SSTOVLF) studies anticipat­ing the advent of the SSTOVLF version of the F-35 Joint Strike Fighter:[1457]

The discussion to follow is an attempt to apply the key issues of "lessons learned" to what might be applicable to the preliminary design of a hypothetical Supersonic Short Take-off and Vertical Landing Fighter/attack (SSTOVLF) aircraft. The objective is to incorporate pertinent sections of the "Design Criteria Summary" into a discussion of six important SSTOVLF preliminary design considerations from the viewpoint of the writer’s lift-fan aircraft flight test experience. These key issues are discussed in the following order: (1) Merits of the Gas-Driven Lift-Fan, (2) Lift-Fan Limitations, (3) Fan-in-Wing Aircraft Handling Qualities, (4) Conversion System Design, (5) Terminal Area Approach Operations, and (6) Human Factors.

MERITS OF THE XV-5 GAS-DRIVEN LIFT-FAN

The XV-5 flight test experience demonstrated that a gas-driven lift-fan aircraft could be robust and easy to maintain and operate. Drive shafts, gear boxes and pressure lubrication systems, which are highly vulnerable to enemy fire, were not required with gas drive. Pilot monitoring of fan machinery health is thus reduced to a minimum which is highly desirable for a single-piloted aircraft such as the SSTOVLF. Lift-fans have proven to be highly resistant to ingestion of foreign objects which is a plus for remote site operations. In one instance an XV-5A wing fan continued to produce substantial lift despite considerable damage inflicted by the ingestion of a rescue collar weight. All pilots who have flown the XV-5 felt confident in the integrity of the lift-fans, and it was felt that the combat effectiveness of the SSTOVLF would be enhanced by using gas-driven lift-fans.