NASA’S CONTRIBUTIONS TO AERONAUTICS

Eluding Aeolus: Turbulence, Gusts, and Wind Shear

Kristen Starr

Since the earliest days of American aeronautical research, NASA has studied the atmosphere and its influence upon flight. Turbulence, gusts, and wind shears have posed serious dangers to air travelers, forcing imaginative research and creative solutions. The work of NASA’s researchers to understand atmospheric behavior and NASA’s derivation of advanced detection and sensor systems that can be installed in aircraft have materially advanced the safety and utility of air transport.

Before World War II, the National Advisory Committee for Aeronautics (NACA), founded in 1915, performed most of America’s institutionalized and systematic aviation research. The NACA’s mission was “to supervise and direct the scientific study of the problems of flight with a view to their practical solution.” Among the most serious problems it studied was that of atmospheric turbulence, a field related to the Agency’s great interest in fluid mechanics and aerodynamics in general. From the 1930s to the present, the NACA and its successor—the National Aeronautics and Space Administration (NASA), formed in 1958—concentrated rigorously on the problems of turbulence, gusts, and wind shear. Midcentury programs focused primarily on gust load and boundary-layer turbulence research. By the 1980s and 1990s, NASA’s atmospheric turbulence and wind shear programs reached a level of sophistication that allowed them to make significant contributions to flight performance and aircraft reliability. The aviation industry integrated this NASA technology into planes bought by airlines and the United States military. This research has resulted in an aviation transportation system vastly safer than that envisioned by the pioneers of the early air age.

An Unsettled Sky

When laypeople think of the words “turbulence” and “aviation” together, they probably envision the “bumpy air” that passengers are often subjected to on long-duration plane flights. But the term “turbulence” has a particular technical meaning. Turbulence describes the motion of a fluid (for our purposes, air) that is characterized by chaotic, seemingly random property changes. Turbulence encompasses fluctuations in diffusion, convection, pressure, and velocity. When an aircraft travels through air that experiences these changes, its passengers feel the turbulence buffeting the aircraft. Engineers and scientists characterize the degree of turbulence with the Reynolds number, a scaling parameter identified in the 1880s by Osborne Reynolds at the University of Manchester. Lower numbers denote laminar (smooth) flows, intermediate values indicate transitional flows, and higher numbers are characteristic of turbulent flow.[1]
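The Reynolds-number classification described above can be sketched numerically. This is a minimal illustration: the regime thresholds below are the classic pipe-flow figures (transition actually depends on geometry and disturbances), and the flow properties in the example are assumed round numbers, not values from the text.

```python
def reynolds_number(density, velocity, length, viscosity):
    """Re = rho * V * L / mu (dimensionless)."""
    return density * velocity * length / viscosity

def flow_regime(re, laminar_max=2300.0, turbulent_min=4000.0):
    # Classic pipe-flow thresholds; illustrative only -- real
    # transition depends on geometry and flow disturbances.
    if re < laminar_max:
        return "laminar"
    if re < turbulent_min:
        return "transitional"
    return "turbulent"

# Sea-level air over an assumed 1 m chord at 50 m/s:
re = reynolds_number(density=1.225, velocity=50.0, length=1.0, viscosity=1.81e-5)
print(f"Re = {re:.3e} -> {flow_regime(re)}")  # well into the turbulent range
```

Even this toy calculation shows why full-scale flight is almost always turbulent: the Reynolds numbers involved are in the millions.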

Turbulent airflow causes drag on all objects that move through the air, including cars, golf balls, and planes. A boundary layer is “the thin reaction zone between an airplane [or missile] and its external environment.” The boundary layer is separated from the contour of a plane’s airfoil, or wing section, by only a few thousandths of an inch. Air particles change from a smooth laminar flow near the leading edge to a turbulent flow toward the airfoil’s rear.[2] Turbulent flow increases friction on an aircraft’s skin and therefore increases surface heat, while slowing the aircraft because of the drag it produces.

Most atmospheric circulation on Earth causes some kind of turbulence. One of the more common forms of atmospheric turbulence experienced by aircraft passengers is clear air turbulence (CAT), which is caused by the mixing of warm and cold air in the atmosphere by wind, often via the process of wind shear. Wind shear is a difference in wind speed and direction over a relatively short distance in Earth’s atmosphere. One engineer describes it as “any situation where wind velocity varies sharply from point to point.”[3] Wind shears can have both horizontal and vertical components. Horizontal wind shear is usually encountered near coastlines and along fronts, while vertical wind shear appears closer to Earth’s surface and sometimes at higher levels in the atmosphere, near frontal zones and upper-level air jets.

Large-scale weather events, such as weather fronts, often cause wind shear. Weather fronts are boundaries between two masses of air that have different properties, such as density, temperature, or moisture. These fronts cause most significant weather changes. Substantial wind shear is observed when the temperature difference across the front is 9 degrees Fahrenheit (°F) or more and the front is moving at 30 knots or faster. Frontal shear is seen both vertically and horizontally and can occur at any altitude between the surface and the tropopause, the boundary atop the troposphere, the lowest portion of Earth’s atmosphere, which contains 75 percent of the atmosphere’s mass. Those who study the effects of weather on aviation are concerned more with vertical wind shear above warm fronts than behind cold fronts because of the longer duration of warm fronts.[4]

The occurrence of wind shear is a microscale meteorological phenomenon. This means that it usually develops over a distance of less than 1 kilometer, even though it can emerge in the presence of large weather patterns (such as cold fronts and squall lines). Wind shear affects the movement of sound waves through the atmosphere by bending the wave front, causing sounds to be heard where they normally would not. A much more violent variety of wind shear can appear near and within downbursts and microbursts, which may be caused by thunderstorms or weather fronts, particularly when such phenomena occur near mountains. Vertical shear can form on the lee side of mountains when winds blow over them. If the wind flow is strong enough, turbulent eddies known as “rotors” may form. Such rotors pose dangers to both ascending and descending aircraft.[5]

The microburst phenomenon, discovered and identified in the late 1970s by T. Theodore Fujita of the University of Chicago, involves highly localized, short-lived vertical downdrafts of dense cool air that impact the ground and radiate outward toward all points of the compass at high speed, like a water stream from a kitchen faucet impacting a basin.[6]

Speed and directional wind shear result at the leading edge of the outflow’s three-dimensional boundary. The strength of the vertical wind shear is directly proportional to the strength of the outflow boundary. Typically, microbursts are smaller than 3 miles across and last fewer than 15 minutes, with rapidly fluctuating wind velocity.[7]

Wind shear is also observed near radiation inversions (also called nocturnal inversions), which form during rapid cooling of Earth’s surface at night. Such inversions do not usually extend above the lower few hundred feet of the atmosphere. Favorable conditions for this type of inversion include long nights, clear skies, dry air, little or no wind, and cold or snow-covered surfaces. The wind difference between the inversion layer and the air above it can be as much as 90 degrees in direction and 40 knots in speed. This shear can occur overnight or into the following morning, and the differences tend to be strongest toward sunrise.[8]

The troposphere is the lowest layer of the atmosphere in which weather changes occur. Within it, intense vertical wind shear can slow or prevent tropical cyclone development. However, it can also coax thunderstorms into longer life cycles, worsening severe weather.[9]

Wind shear particularly endangers aircraft during takeoff and landing, when the aircraft are at low speed and low altitude and particularly susceptible to loss of control. Microburst wind shear typically occurs during thunderstorms but occasionally arises in the absence of rain near the ground; there are both “wet” and “dry” microbursts. Before the development of forward-looking detection and evasion strategies, it was a major cause of aircraft accidents, claiming 26 aircraft and 626 lives, with over 200 injured, between 1964 and 1985.[10]

Another macro-level weather event associated with wind shear is an upper-level jetstream, which contains vertical and horizontal wind shear at its edges. Jetstreams are fast-flowing, narrow air currents found at certain areas of the tropopause. The tropopause is the transition between the troposphere (the area in the atmosphere where most weather changes occur and temperature decreases with height) and the stratosphere (the area where temperature increases with height).[11] A combination of atmospheric heating (by solar radiation or internal planetary heat) and the planet’s rotation on its axis causes jetstreams to form. The strongest jetstreams on Earth are the polar jets (23,000-39,000 feet above sea level) and the higher and somewhat weaker subtropical jets (33,000-52,000 feet). Both the Northern and Southern Hemispheres have a polar jet and a subtropical jet. Wind shear in the upper-level jetstream causes clear air turbulence. The cold-air side of the jet, next to the jet’s axis, is where CAT is usually strongest.[12]

Although most aircraft passengers experience clear air turbulence as a minor annoyance, this kind of turbulence can be quite hazardous to aircraft when it becomes severe. It has caused fatalities, as in the case of United Airlines Flight 826.[13] Flight 826 took off from Narita International Airport in Japan for Honolulu, HI, on December 28, 1997.

At 31,000 feet, 2 hours into the flight, the crew of the plane, a Boeing 747, received warning of severe clear air turbulence in the area. A few minutes later, the plane abruptly dropped 100 feet, injuring many passengers and forcing an emergency return to Tokyo, where one passenger subsequently died of her injuries.[14] A low-level jetstream is yet another phenomenon causing wind shear. This kind of jetstream usually forms at night, directly above Earth’s surface, ahead of a cold front. Low-level vertical wind shear develops in the lower part of the low-level jet. This kind of wind shear is also known as nonconvective wind shear, because it is not caused by thunderstorms.

The term “jetstream” is often used without further modification to describe Earth’s Northern Hemisphere polar jet. This is the jet most important for meteorology and aviation, because it covers much of North America, Europe, and Asia, particularly in winter. The Southern Hemisphere polar jet, on the other hand, circles Antarctica year-round.[15] Commercial use of the Northern Hemisphere polar jet began November 18, 1952, when a Boeing 377 Stratocruiser of Pan American Airlines first flew from Tokyo to Honolulu at an altitude of 25,000 feet. It cut the trip time by over one-third, from 18 to 11.5 hours.[16] The jetstream saves fuel by shortening flight duration, since an airplane flying at high altitude can attain higher speeds because it is passing through less-dense air. Over North America, the time needed to fly east across the continent can be decreased by about 30 minutes if an airplane can fly with the jetstream but can increase by more than 30 minutes if it must fly against the jetstream.[17]
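A time swing of roughly half an hour each way can be reproduced with simple ground-speed arithmetic. The distance, airspeed, and jetstream component below are hypothetical round numbers chosen for illustration, not figures from the source:

```python
def trip_hours(distance_nm, airspeed_kt, wind_kt):
    # Ground speed = true airspeed + tailwind component
    # (a negative wind_kt models a headwind).
    return distance_nm / (airspeed_kt + wind_kt)

# Hypothetical transcontinental leg: 2,150 nm at 450 kt true airspeed,
# with an assumed 50 kt jetstream component along the route.
still_air = trip_hours(2150, 450, 0)     # ~4.78 h in still air
with_jet  = trip_hours(2150, 450, +50)   # 4.30 h: ~29 min saved
against   = trip_hours(2150, 450, -50)   # 5.38 h: ~36 min lost
print(f"{still_air:.2f} h still air, {with_jet:.2f} h with jet, "
      f"{against:.2f} h against the jet")
```

Note the asymmetry: because the headwind case spends more time in the wind, flying against the jet costs slightly more time than flying with it saves.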

Strong gusts of wind are another natural phenomenon affecting avi­ation. The National Weather Service reports gusts when top wind speed reaches 16 knots and the variation between peaks and lulls reaches 9 knots.[18] A gust load is the wind load on a surface caused by gusts.


Otto Lilienthal, the greatest of pre-Wright flight researchers, in flight. National Air and Space Museum.

The more physically fragile a surface, the more danger a gust load will pose. Gusts can also have an upsetting effect upon the aircraft’s flightpath and attitude.

Avoiding Bird Hazards: 1966

After millions of years of birds having the sky to themselves, only 9 years passed from the Wright brothers’ first flight in 1903 to the first human fatality caused by a bird strike, when a bird struck an aircraft and caused it to crash in 1912. Fast-forward to 1960, when an Eastern Air Lines plane went down near Boston, killing 62 people as a result of a bird strike—the largest loss of life from a single bird incident.[192]

With the growing number of commercial jet airplanes, faster aircraft increased the potential damage a small bird could inflict, and larger airplanes put more people at risk during a single flight. The need to address methods for dealing with birds around airports and in the skies also rose in priority. So, on September 9, 1966, the Interagency Bird


A DeTect, Inc., MERLIN bird strike avoidance radar is seen here in use in South Africa. NASA uses the same system at Kennedy Space Center for Space Shuttle missions, and the FAA is considering its use at airports around the Nation. NASA.

Hazard Committee was formed to gather data, share information, and develop methods for mitigating the risk of collisions between birds and airplanes. With the FAA taking the lead, the Committee included representatives from NASA; the Civil Aeronautics Board; the Department of the Interior; the Department of Health, Education, and Welfare; and the U.S. Air Force, Navy, and Army.[193]

Through the years since the Committee was formed, the aviation community has approached the bird strike hazard primarily on three fronts: (1) removing or relocating the birds, (2) designing aircraft components to be less susceptible to damage from bird strikes, and (3) increasing the understanding of bird habitats and migratory patterns so as to alter air traffic routes and minimize the potential for bird strikes. Despite these efforts, the problem persists today, as evidenced by the January 2009 incident involving a US Airways jet that was forced to ditch in the Hudson River. Both of its jet engines failed because of bird strikes shortly after takeoff. Fortunately, all souls on board survived the water landing thanks to the training and skills of the entire flightcrew.[194]

NASA’s contributions in this area include research to characterize the extent of damage that birds might inflict on jet engines and other aircraft components in a bid to make those parts more robust or forgiving of a strike,[195] and the development of techniques to identify potentially harmful flocks of birds[196] and their local and seasonal flight patterns using radar so that local air traffic routes can be altered.[197]

Radar is in use to warn pilots and air traffic controllers of bird hazards at the Seattle-Tacoma International Airport. As of this writing, the FAA plans to deploy test systems at Chicago, Dallas, and New York airports, as the technology still needs to be perfected before its deployment across the country, according to an FAA spokeswoman quoted in a Wall Street Journal story published January 26, 2009.[198]

Meanwhile, a bird-detecting radar system first developed for the Air Force by DeTect, Inc., of Panama City, FL, has been in use since 2006 at NASA’s Kennedy Space Center to check for potential bird strike hazards before every Space Shuttle launch. Two customized marine radars scan the sky: one oriented in the vertical, the other in the horizontal. Together with specialized software, the MERLIN system can detect flocks of birds up to 12 miles from the launch pad or runway, according to a company fact sheet.

In the meantime, airports with bird problems will continue to rely on broadcasting sudden loud noises, shooting off fireworks, flashing strobe lights, releasing predator animals where the birds are nesting, or, in the worst case, simply eliminating the birds.

Surface Management System

Making the skyways safer for aircraft to fly by reducing delays and lowering the stress on the system begins and ends with the short journey on the ground between the active runway and the terminal gate. To better coordinate events between the air and ground sides, NASA developed, in cooperation with the FAA, a software tool called the Surface Management System (SMS), whose purpose is to manage the movements of aircraft on the surface of busy airports to improve capacity, efficiency, and flexibility.[261]

The SMS has three parts: a traffic management tool, a controller tool, and a National Airspace System information tool.[262]

The traffic management tool monitors aircraft positions in the sky and on the ground, along with the latest times when a departing airliner is about to be pushed back from its gate, to predict demand for taxiway and runway usage, with an aim toward understanding where backups might take place. Sharing this information among the traffic control tools and systems allows for more efficient planning. Similarly, the controller tool helps personnel in the ATC and ramp towers to better coordinate the movement of arriving and departing flights and to advise pilots on which taxiways to use as they navigate between the runway and the gate.[263] Finally, the NAS information tool allows data from the SMS to be passed into the FAA’s national Enhanced Traffic Management System, which in turn allows traffic controllers to have a more accurate picture of the airspace.[264]

NASA Arrives: Taking Human Factors Research to the Next Level

It is therefore abundantly evident that when the NACA handed over the keys of its research facilities to NASA on October 1, 1958, the Nation’s new space agency began operations with a large database of information relating to the human factors and human engineering aspects of piloted flight. But though this mass of accumulated knowledge and technology was of inestimable value, the prospect of taking man to the next level, into the great unknown of outer space, was a different proposition from any ever before tackled by aviation research.[339] No one had yet comprehensively dealt with such human challenges as the effects of long-term weightlessness, exposure to ionizing radiation and extreme temperature changes, maintaining life in the vacuum of space, or withstanding prolonged impact deceleration forces encountered by humans violently reentering the Earth’s atmosphere.[340]

NASA began operations in 1958 with a final parting report from the NACA’s Special Committee on Space Technology. This report recommended several technical areas in which NASA should proceed with its human factors research. These included acceleration, high-intensity radiation in space, cosmic radiation, ionization effects, human information processing and communication, displays, closed-cycle living, space capsules, and crew selection and training.[341] This Committee’s Working Group on Human Factors and Training further suggested that all experimentation consider crew selection, survival, safety, and efficiency.[342] With that, America’s new space agency had its marching orders. It proceeded to assemble “the largest group of technicians and greatest body of knowledge ever used to define man’s performance on the ground and in space environments.”[343]

Thus, from NASA’s earliest days, it has pioneered the way in human-centered aerospace research and technology. And also from its beginning—and extending to the present—it has shared the benefits of this research with the rest of the world, including the same industry that contributed so much to NASA during its earliest days—aeronautics. This 50-year storehouse of knowledge produced by NASA human factors research has been shared with all areas of the aviation community—both the Department of Defense (DOD) and all realms of civil aviation, including the Federal Aviation Administration (FAA), the National Transportation Safety Board (NTSB), the airlines, general aviation, aircraft manufacturing companies, and producers of aviation-related hardware and software.

Vision Science and Technology

Scientists at NASA Ames Research Center have for many years been heavily involved with conducting research on visual technology for humans. The major areas explored include vision science, image compression, imaging and displays, and visual human factors. Specific projects have investigated such issues as eye-tracking accuracy, image enhancement, metrics for measuring image quality, and methods to measure and improve the visibility of in-flight and air traffic control monitor displays.[433]

The information gained from this and other NASA-conducted research has played an important role in the development of such important and innovative human-assisting technologies as virtual reality goggles, helmet-mounted displays, and so-called glass cockpits.[434]

The latter concept, which NASA pioneered in the 1970s, refers to the replacement of conventional cockpit analog dials and gauges with a system of cathode ray tube (CRT) or liquid crystal display (LCD) flat panels that present the same information in a more readable and usable form.[435] Conventional instruments can be difficult to read and monitor accurately, and they are capable of providing only one level of information. Computerized “glass” instrumentation, on the other hand, can display both numerical and graphic color-coded readouts in 3-D format; furthermore, because each display can present several layers of information, fewer displays are needed, giving the pilot larger and more readable ones. This technology, which is now used in nearly all airliners, business jets, and an increasing number of general-aviation aircraft, has improved flight safety and aircrew efficiency by decreasing workload, fatigue, and instrument interpretation errors.[436]

A related vision technology that NASA researchers helped develop is the head-up display.[437] This transparent display allows a pilot to view flight data while looking outside the aircraft. This is especially useful during approaches for landing, when the pilot’s attention needs to be focused on events outside the cockpit. The concept was originally developed for the Space Shuttle and military aircraft but has since been adapted to commercial and civil aircraft, air traffic control towers, and even automobiles.[438]

Final Maturity: Concept Demonstrators

The efforts of the NACA and NASA in developing and applying dynamically scaled free-flight model testing techniques have progressed through a truly impressive maturation process. Although the scaling relationships have remained constant since the inception of free-flight testing, the facilities and test attributes have become dramatically more sophisticated. The size and construction of models have changed from unpowered balsa models weighing a few ounces with wingspans of less than 2 feet to very large powered composite models with weights of over 1,000 pounds. Control systems have changed from simple solenoid bang-bang controls operated by a pilot with visual cues provided by model motions to hydraulic systems with digital flight controls and full feedbacks from an array of sensors and adaptive control systems. The level of sophistication integrated into the model testing techniques has now given rise


The Boeing X-48B Blended Wing-Body flying model in flight at NASA Dryden. The configuration has undergone almost 15 years of research, including free-flight testing at Langley and Dryden. NASA.

to a new class of free-flight models that are considered to be integrated concept demonstrators rather than specific technology tools. Thus, the lines between free-flight models and more complex remotely piloted vehicles have become blurred, with a noticeable degree of refinement in the concept demonstrators.

Research activities at the NASA Dryden Flight Research Center vividly illustrate how far free-flight testing has come. Since the 1970s, Dryden has continually conducted a broad program of demonstrator applications with emphasis on integrations of advanced technology. In 1997, another milestone was achieved at Dryden in remotely piloted research vehicle technology, when an X-36 vehicle demonstrated the feasibility of using advanced technologies to ensure satisfactory flying qualities for radical tailless fighter designs. The X-36 was designed as a joint effort between the NASA Ames Research Center and the Boeing Phantom Works (previously McDonnell-Douglas) as a 0.28-scale powered free-flight model of an advanced fighter without vertical or horizontal tails to enhance survivability. Powered by an F112 turbofan engine and weighing about 1,200 pounds, the 18-foot-long configuration used a canard, split aileron surfaces, wing leading- and trailing-edge flaps, and a thrust-vectoring nozzle for control. A single-channel digital fly-by-wire system provided artificial stability for the configuration, which was inherently unstable about the pitch and yaw axes.[505]

The Prehistory of the Wind Tunnel to 1958

The growing interest in and institutionalization of aeronautics in the late 19th century led to the creation of the wind tunnel.[531] English scientists and engineers formed the Royal Aeronautical Society in 1866. The group organized lectures, technical meetings, and public exhibitions; published the influential Annual Report of the Aeronautical Society; and funded research to spread the idea of powered flight. One of the more influential members was Francis Herbert Wenham. Wenham, a professional engineer with a variety of interests, found his experiments with a whirling arm to be unsatisfactory. Funded by a grant from the Royal Aeronautical Society, he created the world’s first operating wind tunnel in 1870-1872. Wenham and his colleagues conducted rudimentary lift and drag studies and investigated wing designs with their new research tool.[532]

Wenham’s wing models were not full-scale wings. In England, University of Manchester researcher Osborne Reynolds recognized in 1883 that the airflow pattern over a scale model would be the same as for its full-scale version if a certain flow parameter were the same in both cases. This basic parameter, named for its discoverer as the Reynolds number, is a measure of the relative effects of the inertia and viscosity of air flowing over an aircraft. The Reynolds number is used to describe all types of fluid flow, including the shape of the flow, heat transfer, and the onset of turbulence.[533]

While Wenham invented the wind tunnel and Reynolds created the basic parameter for understanding its application to full-scale aircraft, Wilbur and Orville Wright were the first to use a wind tunnel in the systematic way that later aeronautical engineers would use it. The brothers, not aware of Wenham’s work, saw their “invention” of the wind tunnel become part of their revolutionary program to create a practical heavier-than-air flying machine from 1896 to 1903. Frustrated by the poor performance of their 1900 and 1901 gliders on the sandy dunes of the Outer Banks—they did not generate enough lift and were uncontrollable—the Wright brothers began to reevaluate their aerodynamic calculations. They discovered that Smeaton’s coefficient, one of the early contributions to aeronautics, and Otto Lilienthal’s groundbreaking airfoil data were wrong. They found the discrepancy through the use of their wind tunnel, a 6-foot-long box with a fan at one end to generate air that would flow over small metal models of airfoils mounted on balances, which they had created in their bicycle workshop. The lift and drag data they compiled in their notebooks would be the key to the design of wings and propellers during the rest of their experimental program, which culminated in the first controlled, heavier-than-air flight on December 17, 1903.[534]

Over the early flight and World War I eras, aeronautical enthusiasts, universities, aircraft manufacturers, military services, and national governments in Europe and the United States built 20 wind tunnels. The United States built the most, 9, with 4 rapidly appearing during American involvement in the Great War. Of the European countries, Great Britain built 4, but the tunnels in France (2) and Germany (3) proved to be the most innovative. Gustav Eiffel’s 1912 tunnel at Auteuil, France, became a practical tool for the French aviation industry to develop high-performance aircraft for the Great War. At the University of Göttingen in Germany, aerodynamics pioneer Ludwig Prandtl designed what would become the model for all “modern” wind tunnels in 1916. The tunnel featured a closed circuit; a contraction cone, or nozzle, just before the test section that created uniform air velocity and reduced turbulence in the test section; and a chamber upstream of the test section that further stilled any remaining turbulent air.[535]

Into the Jet Age

Materials used in aircraft construction changed little from the early 1950s to the late 1970s. Aluminum alloyed with zinc, first introduced in 1943,[686] grew steadily in sophistication, leading to the introduction of a new line of even lighter-weight aluminum-lithium alloys in 1957.

Composite structure remained mostly a novelty item in aerospace construction. Progress continued to be made with developing composites, but demand was driven mainly by unique performance requirements, such as for high-speed atmospheric flight or exo-atmospheric travel.

A few exceptions emerged in the general-aviation market. The Federal Aviation Agency (FAA) certified the Taylorcraft Model 20 in 1955, which was based on a steel substructure but incorporated fiberglass for the skins and cowlings.[687] Even more progress was made by Piper Aircraft, which launched the PA-29 “plastic plane” project a few years later.[688] The PA-29 was essentially a commercial X-plane, experimenting with materials that could replace aluminum alloy for light aircraft.[689] The PA-29’s all-fiberglass structure demonstrated the potential strength properties of composite material. Piper’s engineers reported that the wing survived to 200 percent of ultimate load in static tests; the fuselage cracked at 180 percent because of a weakened bolt hole near the cockpit.[690] Piper concluded that it “is not only possible but also quite practical to build primary aircraft structures of fiberglass reinforced plastic.”[691]

Commercial airliners built in the early 1950s relied almost exclusively upon aluminum and steel for structures. Boeing selected 2024 aluminum alloy for the fuselage skin and lower wing cover of the four-engine 707.[692] It was not until Boeing started designing the 747 jumbo airliner in 1966 that it paid serious attention to composites. Composites were used on the 747’s rudder and elevators. Fiberglass, however, was in even greater demand on the 747, used as the structure for variable-camber leading-edge flaps.[693]

In 1972, NASA started a program with Boeing to redesign the 737’s aluminum spoilers with skins made of graphite-epoxy composite and an aluminum honeycomb core, while the rest of the spoiler structure—the hinges and spar—remained unchanged. Each of the four spoilers on the 737 measures roughly 24 inches wide by 52 inches long. The composite material comprised about 35 percent of the weight of the new structure of each spoiler, which weighed about 13 pounds, or 17 percent less than an all-metal structure.[694] The composite spoilers initiated flight operations on 27 737s owned by the airlines Aloha, Lufthansa, New Zealand National, Piedmont, PSA, and VASP. Five years later, Boeing reported no problems with durability and projected a long service life for the components.[695]
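The quoted figures (a roughly 13-pound composite spoiler that is 17 percent lighter than metal) imply an all-metal weight and a per-aircraft saving that can be checked with back-of-envelope arithmetic. This calculation is my own illustration, not from the source:

```python
composite_lb = 13.0   # quoted weight of one composite spoiler
saving_frac = 0.17    # quoted weight reduction vs. all-metal

# If composite = metal * (1 - 0.17), the implied all-metal weight is:
metal_lb = composite_lb / (1.0 - saving_frac)        # about 15.7 lb
saved_per_aircraft = 4 * (metal_lb - composite_lb)   # four spoilers per 737
print(f"implied all-metal spoiler: {metal_lb:.1f} lb; "
      f"saving per aircraft: {saved_per_aircraft:.1f} lb")
```

A saving on the order of 10 pounds per aircraft is small, which underscores that the spoiler program was a service-experience experiment rather than a fuel-saving measure in itself.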

The impact of the 1973 oil embargo finally forced airlines to start reexamining their fuel-burn rates. After annual fuel price increases of 5 percent before the embargo, the airlines’ gas bill jumped by 10 cents to 28 cents per gallon almost overnight.[696] Most immediately, airframers looked to the potential of the recently developed high-bypass turbofan engine, as typified by the General Electric TF39/CF6 engine family, to gain rapid improvements in fuel efficiency for airliners. But against the backdrop of the oil embargo, the potential of composites to drive another revolution in airframe efficiency could not be ignored. Graphite-epoxy composite weighed 25 percent less than comparable aluminum structure, potentially boosting fuel efficiency by 15 percent.[697]

The stage was set for launching the most significant change in aircraft structural technology since the rapid transition to aluminum in the early 1930s. However, it would be no easy transition. In the early 1970s, composite design for airframes was still in its infancy, despite its many advances in military service. Recalling this period, a Boeing executive would later remember the words of caution from one of his mentors in 1975: "One of Boeing's most senior employees said, when composites were first introduced in 1975, that he had lived through the transition from spruce and fabric to aluminum. It took three airplane generations before the younger designers were able to put aluminum to its best use, and he thought that we would have to be very clever to avoid that with composites."[698] The anonymous commentary would prove eerily prescient. From 1975, Boeing would advance through two generations of aircraft—beginning with the 757/767 and progressing with the 777 and Next Generation 737—before mastering the manufacturing and design requirements to mass-produce an all-composite fuselage barrel, one of the key design features of the 787, launched in 2003.

By the early 1970s, the transition to composites was a commercial imperative, but it took projects and studies launched by NASA and the military to start building momentum. Unlike the transition from spruce to metal structures four decades before, the industry's leading aircraft makers now postured conservatively. The maturing air travel industry presented manufacturers with a new set of regulatory and legal barriers to embracing innovative ideas. In this new era, passengers would not be the unwitting guinea pigs as engineers worked out the problems of a new construction material. Conservatism in design would especially apply to load-bearing primary structures. "Today's climate of government regulatory nervousness and aircraft/airline industry liability concerns demand that any new structural material system be equally reliable," Boeing executive G. L. Brower commented in 1978.[699]

Reducing the High Cost of Flight Research

Research aircraft are designed to explore advanced technologies and new flight regimes. Consequently, they are often relatively expensive to build and operate, and inherently risky to fly. Flight research from the earliest days of aviation well into the mid-20th century resulted in a staggering loss of life and valuable, often one-of-a-kind, aircraft.

This was tragically illustrated during experimental testing of advanced aircraft concepts, early jet-powered aircraft, and supersonic rocket planes of the 1940s and 1950s at Muroc Army Air Field in the Mojave Desert. Between 1943 and 1959, more than two dozen research airplanes and prototypes were lost in accidents, more than half of them fatal. Among these were several of Northrop's flying wing designs, including the N9M-1, XP-56, and both YB-49 prototypes. Early variants of Lockheed P-80 and F-104 jet fighters were lost, along with the two Martin XB-51 bomber prototypes. A rocket-powered Bell X-1 and its second-generation stablemates, the X-1A and X-1D, were lost to explosions—all fortunately nonfatal—and Capt. Milburn Apt died in the Bell X-2 after becoming the first human to fly more than three times the speed of sound.

By the 1960s, researchers began to recognize the value of using remotely piloted vehicles (RPVs) to mitigate the risks associated with flight-testing. During World War I and World War II, remotely controlled aircraft had been developed as weapons. In the postwar era, drones served as targets for missile tests and for such tasks as flying through clouds of radioactive fallout from nuclear explosions to collect particulate samples without endangering aircrews. By the 1950s, cruise-missile prototypes, such as the Regulus and X-10, were taking off and landing under radio control. Several of these vehicles crashed, but without a crew on board, there was no risk of losing a valuable test pilot.[881] Over the years, advances in electronics greatly increased the reliability of control systems, rendering development of remotely piloted research vehicles (RPRVs) more practical. Early efforts focused on guidance and navigation, stabilization, and remote control. Eventually, designers worked to improve technologies to support these capabilities through the integration of improved avionics, microprocessors, and computers. The RPRV concept was attractive to researchers because it built confidence in new technology through demonstration under actual flight conditions, at relatively low cost, in quick response to demand, and at no risk to the pilot.

Taking the pilot out of the airplane provided additional savings in terms of development and fabrication. The cost and complexity of robotic and remotely piloted vehicles are generally less than those of comparable aircraft that require an onboard crew, because there is no need for life-support systems, escape and survival equipment, or hygiene facilities. Hazardous testing can be accomplished with a vehicle that may be considered expendable or semiexpendable.

Quick response to customer requirements and reduced program costs resulted from the elimination of redundant systems (usually added for crew safety) and man-rating tests, and through the use of less complex structures and systems. Subscale test vehicles generally cost less than full-size airplanes while providing usable aerodynamic and systems data. The use of programmable ground-based control systems provides additional flexibility and eliminates downtime resulting from the need for extensive aircraft modifications.[882]

ERAST: High-Altitude, Long-Endurance Science Platforms

In the early 1990s, NASA's Earth Science Directorate received a solicitation for research to support the Atmospheric Effects of Aviation project. Because the project entailed assessment of the potential environmental impact of a commercial supersonic transport aircraft, measurements were needed at altitudes around 85,000 feet. Initially, Aurora Flight Sciences of Manassas, VA, proposed developing the Perseus A and Perseus B remotely piloted research aircraft as part of NASA's Small High-Altitude Science Aircraft (SHASA) program.

The SHASA effort expanded in 1993 as NASA teamed with industry partners for what became known as the Environmental Research Aircraft and Sensor Technology (ERAST) project. Goals for the ERAST project included development and demonstration of unpiloted aircraft to perform long-duration airborne science missions. Transfer of ERAST technology to an emerging UAV industry validated the capability of unpiloted aircraft to carry out operational science missions.

The ERAST project was managed at Dryden, with significant contributions from Ames, Langley, and Glenn Research Centers. Industry partners included such aircraft manufacturers as AeroVironment, Aurora Flight Sciences, General Atomics Aeronautical Systems, Inc., and Scaled Composites. Thermo-Mechanical Systems, Hyperspectral Sciences, and Longitude 122 West developed sensors to be carried by the research aircraft.[1022]

The ERAST effort resulted in a diverse fleet of unpiloted vehicles. Perseus A, built in 1993, was designed to stay aloft for 5 hours and reach altitudes around 82,000 feet. An experimental, closed-system, four-cylinder piston engine recycled exhaust gases and relied on stored liquid oxygen to generate combustion at high altitudes. Aurora built two Perseus A vehicles, one of which crashed because of an autopilot malfunction. By that time, the airplane had reached an altitude of only 50,000 feet.

Aurora engineers designed the Perseus B to remain aloft for 24 hours. The vehicle was equipped with a triple-turbocharged engine to provide sea-level air pressure up to 60,000 feet. In the 2 years following its maiden flight in 1994, Perseus B experienced some technical difficulties and a few hard landings that resulted in significant damage. As a result, Aurora technicians made numerous improvements, including extending the wingspan from 58.5 feet to 71.5 feet. When flight operations resumed in 1998, the Perseus B attained an unofficial altitude record of 60,280 feet before being damaged in a crash in October 1999. Despite such difficulties, experience with the Perseus vehicles provided designers with useful data regarding selection of instrumentation for RPRVs and identifying potential failures resulting from feedback deficiencies in a ground cockpit.[1023]

Aurora Flight Sciences also built a larger UAV named Theseus that was funded by NASA through the Mission to Planet Earth environmental observation program. Aurora and its partners, West Virginia University and Fairmont State College, built the Theseus for NASA under an innovative, $4.9-million fixed-price contract. Dryden hosted the Theseus program, providing hangar space and range safety. Aurora personnel were responsible for flight-testing, vehicle flight safety, and operation of the aircraft.

With the potential to carry 700 pounds of science instruments to altitudes above 60,000 feet for durations of greater than 24 hours, the Theseus was intended to support research in areas such as stratospheric ozone depletion and the atmospheric effects of future high-speed civil transport aircraft engines. The twin-engine, unpiloted vehicle had a 140-foot wingspan and was constructed primarily from composite materials. Powered by two 80-horsepower, turbocharged piston engines that drove twin 9-foot-diameter propellers, it was designed to fly autonomously at high altitudes, with takeoff and landing under the active control of a ground-based pilot.

Operators from Aurora Flight Sciences piloted the maiden flight of the Theseus at Dryden on May 24, 1996. The test team conducted four additional checkout flights over the next 6 months. During the sixth flight, the vehicle broke apart and crashed while beginning a descent from 20,000 feet.[1024]

Innovative designers at AeroVironment in Monrovia, CA, took a markedly different approach to the ERAST challenge. In 1983, the company had built and tested the High-Altitude Solar (HALSOL) UAV using battery power only. Now, NASA scientists were anxious to see how it would perform with solar panels powering its six electrically driven propellers. The aircraft was a flying wing configuration with a rectangular planform and two ventral pods containing landing gear. Its structure consisted of a composite framework encased in plastic skin. In 1993 and 1994, researchers at Dryden flew it using a combination of battery and solar power, in a program sponsored by the Ballistic Missile Defense Organization that sought to develop a long-endurance surveillance platform. By now renamed Pathfinder, the unusual craft joined the ERAST fleet in 1995, where it soon attained an altitude of 50,500 feet, a record for solar-powered aircraft.[1025] After additional upgrades and checkout flights at Dryden, ERAST team members transported the Pathfinder to the U. S. Navy's Pacific Missile Range Facility (PMRF) at Barking Sands, Kauai, HI, in April 1997. Predictable weather patterns, abundant sunlight, available airspace and radio frequencies, and the diversity of terrestrial and coastal ecosystems for validating scientific imaging applications made Kauai an optimum location for testing. During one of seven high-altitude flights from the PMRF, the Pathfinder reached a world altitude record for propeller-driven as well as solar-powered aircraft at 71,530 feet.[1026]

In 1998, technicians at AeroVironment modified the vehicle to include two additional engines and extended the wingspan from 98 feet to 121 feet. Renamed Pathfinder Plus, the craft had more efficient silicon solar cells developed by SunPower Corp. of Sunnyvale, CA, that were capable of converting almost 19 percent of the solar energy they received to useful electrical energy to power the motors, avionics, and communication systems. Maximum potential power was boosted from about 7,500 watts on the original configuration to about 12,500 watts. This allowed the Pathfinder Plus to reach a record altitude of 80,201 feet during another series of developmental test flights at the PMRF.[1027]

NASA research teams, coordinated by the Ames Research Center and including researchers from the University of Hawaii and the University of California, used the Pathfinder/Pathfinder Plus vehicle to carry a variety of scientific sensors. Experiments included detection of forest nutrient status, observation of forest regrowth following hurricane damage, measurement of sediment and algae concentrations in coastal waters, and assessment of coral reef health. Several flights demonstrated the practical utility of using high-flying, remotely piloted, environmentally friendly solar aircraft for commercial purposes. Two flights, funded by a Japanese communications consortium and AeroVironment, emphasized the vehicle's potential as a platform for telecommunications relay services.
A NASA-sponsored demonstration employed remote-imaging techniques for use in optimizing coffee harvests.[1028] AeroVironment engineers ultimately hoped to produce an autonomous aircraft capable of flying at altitudes around 100,000 feet for weeks—or even months—at a time through use of rechargeable solar power cells. Building on their experience with the Pathfinder/Pathfinder Plus, they subsequently developed the 206-foot-span Centurion. Test flights at Dryden in 1998, using only battery power to drive 14 propellers, demonstrated the aircraft's capability for carrying a 605-pound payload. The vehicle was then modified to feature a 247-foot span and renamed the Helios Prototype, with a performance goal of 100,000 feet altitude and 96 hours mission duration.

The solar-electric Helios Prototype was flown from the U. S. Navy's Pacific Missile Range Facility. NASA.

As with its predecessors, a ground pilot remotely controlled the Helios Prototype, either from a mobile control van or a fixed ground station. The aircraft was equipped with a flight-termination system—required on remotely piloted aircraft flown in military restricted airspace—that included a parachute system plus a homing beacon to aid in determining the aircraft's location.

Flights of the Helios Prototype at Dryden included low-altitude evaluation of handling qualities, stability and control, response to turbulence, and use of differential motor thrust to control pitch. Following installation of more than 62,000 solar cells, the aircraft was transported to the PMRF for high-altitude flights. On August 13, 2001, the Helios Prototype reached an altitude of 96,863 feet, a world record for sustained horizontal flight by a winged aircraft.[1029]

During a shakedown mission June 26, 2003, in preparation for a 48-hour long-endurance flight, the Helios Prototype aircraft encountered atmospheric turbulence, typical of conditions expected by the test crew, causing abnormally high wing dihedral (upward bowing of both wingtips). Unobserved mild pitch oscillations began but quickly diminished. Minutes later, the aircraft again experienced normal turbulence and transitioned into an unexpected, persistent high-wing-dihedral configuration. As a result, the aircraft became unstable, exhibiting growing pitch oscillations and airspeed deviations exceeding the design speed. Resulting high dynamic pressures ripped the solar cells and skin off the upper surface of the outer wing panels, and the Helios Prototype fell into the Pacific Ocean. Investigators determined that the mishap resulted from the inability to predict, using available analysis methods, the aircraft's increased sensitivity to atmospheric disturbances, such as turbulence, following vehicle configuration changes required for the long-duration flight demonstration.[1030]

Scaled Composites of Mojave, CA, built the remotely piloted RAPTOR Demonstrator-2 to test remote flight control capabilities and technologies for long-duration (12 to 72 hours), high-altitude vehicles capable of carrying science payloads. Key technology development areas included lightweight structures, science payload integration, engine development, and flight control systems; as a technology demonstrator, the D-2 had only limited provisions for a scientific payload. The D-2 was unusual in that it was optionally piloted. It could be flown either by a pilot in an open cockpit or by remote control. This capability had been demonstrated in earlier flights of the RAPTOR D-1, developed for the Ballistic Missile Defense Organization in the early 1990s.

D-2 flight tests began August 23, 1994. In late 1996, technicians linked the D-2 to NASA's Tracking and Data Relay Satellite system in order to demonstrate over-the-horizon communications capabilities between the aircraft and ground stations at ranges of up to 2,000 miles. The D-2 resumed flights in August 1998 to test a triple-redundant flight control system that would allow remotely piloted high-altitude missions.[1031]

General Atomics of San Diego, CA, produced several vehicles for the ERAST program based on the company's Gnat and Predator UAVs. The first two, called Altus (Latin for "high") and Altus 2, looked similar to the company's Gnat 750. Altus was 23.6 feet long and featured long, narrow, high-aspect-ratio wings spanning 55.3 feet. Powered by a rear-mounted, turbocharged, four-cylinder piston engine rated at 100 horsepower, the vehicle was capable of cruising at 80 to 115 mph and attaining altitudes of up to 53,000 feet. Altus could accommodate up to 330 pounds of sensors and scientific instruments.

NASA Dryden personnel initially operated the Altus vehicles as part of the ERAST program. The Altus 2, the first of the two aircraft to be completed, made its first flight May 1, 1996. During subsequent developmental tests, it reached an altitude of 37,000 feet. In late 1996, researchers flew the Altus 2 in an atmospheric-radiation-measurement study sponsored by the Department of Energy's Sandia National Laboratory for the purpose of collecting data on radiation/cloud interactions in Earth's atmosphere to better predict temperature rise resulting from increased carbon dioxide levels. During the course of the project, Altus 2 set a single-flight endurance record for remotely operated aircraft, remaining aloft for 26.18 hours through a complete day-to-night-to-day cycle.[1032]

The multiagency program brought together capabilities available among Government agencies, universities, and private industry. Sandia provided technical direction, logistical planning and support, data analysis, and a multispectral imaging instrument. NASA's Goddard Space Flight Center and Ames Research Center, Lawrence Livermore National Laboratory, Brookhaven National Laboratory, Colorado State University, and the University of California Scripps Institute provided additional instrumentation. Scientists from the University of Maryland, the University of California at Santa Barbara, Pennsylvania State University, the State University of New York, and others also participated.[1033]

In September 2001, the Altus 2 carried a thermal imaging system for the First Response Experiment (FiRE) during a demonstration at the General Atomics flight operations facility at El Mirage, CA. A sensor developed for the ERAST program and previously used to collect images of coffee plantations in Hawaii was modified to provide real-time, calibrated, geo-located, multispectral thermal imagery of fire events. This scientific demonstration showcased the capability of an unmanned aerial system (UAS) to collect remote sensing data over fires and relay the information to fire management personnel on the ground.[1034]

A larger vehicle called Altair, based on the Predator B (Reaper) UAV, was designed to perform a variety of ERAST science missions specified by NASA's Earth Science enterprise. In the initial planning phase of the project, NASA scientists established a stringent set of requirements for the Altair that included mission endurance of 24 to 48 hours at an altitude range of 40,000 to 65,000 feet with a payload of at least 660 pounds. The project team also sought to develop procedures to allow operations from conventional airports without conflict with piloted aircraft. Additionally, the Altair had to be capable of demonstrating command and control beyond-line-of-sight communications via satellite link, undertaking see-and-avoid operations relative to other air traffic, and communicating with FAA air traffic controllers. To accomplish this, the Altair was equipped with an automated collision-avoidance system and a voice relay to allow air traffic controllers to talk to ground-based pilots. As the first UAV to meet FAA requirements for operating from conventional airports, with piloted aircraft in the national airspace, the aircraft also had to meet all FAA airworthiness and maintenance standards. The final Altair configuration was designed to fly continuously for up to 32 hours and was capable of reaching an altitude of approximately 52,000 feet with a maximum range of about 4,200 miles. It was designed to carry up to 750 pounds of sensors, radar, communications, and imaging equipment in its forward fuselage.[1035]

Although the ERAST program was formally terminated in 2003, research continued with the Altair. In May 2005, the National Oceanic and Atmospheric Administration (NOAA) funded the UAV Flight Demonstration Project in cooperation with NASA and General Atomics. The experiment included a series of atmospheric and oceanic research flights off the California coastline to collect data on weather and ocean conditions, as well as climate and ecosystem monitoring and management. The Altair was the first UAV to feature triple-redundant controls and avionics for increased reliability, as well as a fault-tolerant, dual-architecture flight control system.

Science flights began May 7 with a 6.5-hour flight to the Channel Islands Marine Sanctuary west of Los Angeles, a site thought ideal for exploring NOAA's operational objectives with a digital camera system and electro-optical/infrared sensors. The Altair carried a payload of instruments for measuring ocean color, atmospheric composition and temperature, and surface imaging during flights at altitudes of up to 45,000 feet. Objectives of the experiment included evaluation of an unmanned aircraft system for future scientific and operational requirements related to NOAA's oceanic and atmospheric research, climate research, marine sanctuary mapping and enforcement, nautical charting, and fisheries assessment and enforcement.[1036]

In 2006, personnel from NASA, NOAA, General Atomics, and the U. S. Forest Service teamed for the Altair Western States Fire Mission (WSFM). This experiment demonstrated the combined use of an Ames-designed thermal multispectral scanner integrated on a large-payload-capacity UAV, a data link telemetry system, near-real-time image geo-rectification, and rapid Internet data dissemination to fire center and disaster managers. The sensor system was capable of automatically identifying burned areas as well as active fires, eliminating the need to train sensor operators to analyze imagery. The success of this project set the stage for NASA's acquisition of another General Atomics UAV called the Ikhana and for future operational UAS missions in the national airspace.[1037]