
SCW Takes to the Air

Langley and the Flight Research Center entered into a joint program outlined in a November 1968 memorandum. Loftin and Whitcomb led a Langley team responsible for defining the overall objectives, determining the wing contours and construction tolerances, and conducting wind tunnel tests during the flight program. Flight Research Center personnel determined the size, weight, and balance of the wing; acquired the F-8A airframe and managed the modification program; and conducted the flight research program. North American Rockwell won the contract for the supercritical wing and delivered it to the Flight Research Center in November 1970 at a cost of $1.8 million. Flight Research Center technicians installed the new wing on a Navy surplus TF-8A trainer.[214] At the onset of the flight program, Whitcomb predicted the new wing design would allow airliners to cruise 100 mph faster and close to the speed of sound (nearly 660 mph) at an altitude of 45,000 feet with the same amount of power.[215]

NASA test pilot Thomas C. McMurtry took to the air in the F-8 Supercritical Wing flight research vehicle on March 9, 1971. Eighty-six flights later, the program ended on May 23, 1973. A pivotal document generated during the program was Supercritical Wing Technology—A Progress Report on Flight Evaluations, which captured the ongoing results of the program. From the standpoint of actually flying the F-8, McMurtry noted that "the introduction of the supercritical wing is not expected to create any serious problems in day-to-day transport operations." The combined flight and wind tunnel tests indicated that the supercritical wing could increase the efficiency of commercial aircraft by 15 percent and, more importantly, profits by 2.5 percent. In the high-stakes business of international commercial aviation, the supercritical wing and its ability to increase the range, speed, and fuel efficiency of subsonic jet aircraft without an increase in required power or additional weight was a revolutionary innovation.[216]

NASA went beyond flight tests with the F-8, which was a flight-test vehicle built specifically for proving the concept. The Transonic Aircraft Technology (TACT) program was a joint NASA-U. S. Air Force partnership begun in 1972 that investigated the application of supercritical wing technology to future combat aircraft. The program evaluated a modified General Dynamics F-111A variable-sweep tactical aircraft to ascertain its overall performance, handling qualities, and transonic maneuverability and to define the local aerodynamics of the airfoil and determine wake drag. Whitcomb worked directly with General Dynamics and the Air Force Flight Dynamics Laboratory on the concept.[217] NASA also worked to refine the supercritical wing and its underlying theory through continued comparison of wind tunnel and flight tests, an effort that extended the Langley and Flight Research Center collaboration.[218]

Whitcomb developed the supercritical airfoil using his logical cut-and-try procedures. Ironically, what was considered to be an unsophisticated research technique in the second half of the 20th century, a process John Becker called "Edisonian," yielded the complex supercritical airfoil. The key, once again, was the fact that the researcher, Whitcomb, possessed "truly unusual insights and intuitions."[219] Whitcomb used his intuitive imagination to search for a solution over the course of 8 years. Mathematicians verified his work after the fact and created a formula for use by the aviation industry.[220] Whitcomb received patent No. 3,952,971 for his supercritical wing in May 1976. NASA possessed the rights to grant licenses, and several foreign nations already had filed patent applications.[221]

The spread of the supercritical wing to the aviation industry was slow in the late 1970s. There was no doubt that the supercritical wing possessed the potential to save the airline industry $300 million annually. Both Government experts and the airlines agreed on its importance. Unfortunately, the reality of the situation in the mid-1970s was that the purchase of new aircraft or conversion of existing aircraft would cost the airlines millions of dollars, and it was estimated that $1.5 billion in fuel costs would be lost before the transition was completed. The impetus would be a fuel crisis like the Arab oil embargo, during which the price per gallon increased from 12 to 30 cents within the space of a year.[222]

The introduction of the supercritical wing on production aircraft centered on the Air Force's Advanced Medium Short Take-Off and Landing (STOL) Transport competition between McDonnell-Douglas and Boeing to replace the Lockheed C-130 Hercules in the early 1970s. The McDonnell-Douglas design, the YC-15, was the first large transport with supercritical wings in 1975. Neither the YC-15 nor the Boeing YC-14 replaced the Hercules because of the cancellation of the competition, but their wings represented to the press an "exotic advance" that provided new levels of aircraft fuel economy in an era of growing fuel costs.[223]

During the design process of the YC-14, Boeing aerodynamicists also selected a supercritical airfoil for the wing. They based their decision on previous research with the 747 airliner wing, data from Whitcomb's research at Langley, and the promising performance of a Navy T-2C Buckeye that North American Aviation had modified with a supercritical airfoil to gain experience for the F-8 wing project and that began flight tests in November 1969. Boeing's correlation of wind tunnel and flight test data convinced the company to introduce supercritical airfoils on the YC-14 and for all of its subsequent commercial transports, including the triumphant "paperless" airplane, the 777 of the 1990s.[224]

The business jet community embraced the supercritical wing in the increasingly fuel- and energy-conscious 1970s. Business jet pioneer Bill Lear incorporated the new technology in the Canadair Challenger 600, which took to the air in 1978. Rockwell International incorporated the technology into the upgraded Sabreliner 65 of 1979. The extensively redesigned Dassault Falcon 50, introduced the same year, relied upon a supercritical wing that enabled an over-3,000-mile range.[225]

The supercritical wing program gave NASA the ability to stay in the public eye, as it was an obvious contribution to aeronautical technology. The program also improved public relations and the stature of both Langley and Dryden at a time in the 1960s and 1970s when the first "A" in NASA—aeronautics—was secondary to the single "S"—space. For this reason, historian Richard P. Hallion has called the supercritical wing program "Dryden's life blood" in the early 1970s.[226]

Subsonic transports, business jets, STOL aircraft, and uncrewed aerial vehicles incorporate supercritical wing technology today.[227] All airliners today have supercritical airfoils custom-designed and fine-tuned by manufacturers with computational fluid dynamics software programs. There is no NASA supercritical airfoil family like the significant NACA four- and five-digit airfoil families. The Boeing 777 wing embodies a Whitcomb heritage. This revolutionary information appeared in NASA technical notes (TN) and other publications with little or no fanfare and spread through direct consultation with Whitcomb. A Lockheed engineer and former employee of Whitcomb in the late 1960s remarked on his days at NASA Langley:

When I was working for Dick Whitcomb at NASA, there was hardly a week that went by that some industry person did not come in to see him. It was a time when NASA was being constantly asked for technical advice, and Dick always gave that advice freely. He was always there when industry wanted him to help out. This is the kind of cooperation that makes industry want to work with NASA. As a result of that sharing, we have seen the influence of supercritical technology go to just about every corner of our industry.[228]

Whitcomb set the stage and the direction of contemporary aircraft design.

More accolades were given to Whitcomb by the Government and industry during the years he worked on the supercritical wing. From NASA, he received the Medal for Exceptional Scientific Achievement in 1969, and in June 1974, NASA Administrator James Fletcher awarded him $25,000 in cash for the invention of the supercritical wing. The NASA Inventions and Contributions Board recommended the cash prize to recognize individual contributions to the Agency's programs. It was the largest cash award given to an individual at NASA.[229] In 1969, Whitcomb accepted the Sylvanus Albert Reed Award from the American Institute of Aeronautics and Astronautics, the organization's highest honor for achievement in aerospace engineering. In 1973, President Richard M. Nixon presented him the highest honor for science and technology awarded by the U. S. Government, the National Medal of Science.[230] The National Aeronautic Association bestowed upon Whitcomb the Wright Brothers Memorial Trophy in 1974 for his dual achievements in developing the area rule and supercritical wing.[231]

Trying Once More: The High-Speed Research Program

While Boeing and Douglas were reporting on early phases of their HSCT studies, the U. S. Congress approved an ambitious new program for High-Speed Research (HSR) in NASA's budget for FY 1990. This effort envisioned Government and industry sharing the cost, with NASA taking the lead for the first several years and industry expanding its role as research progressed. (Because of the intermingling of sensitive and proprietary information, much of the work done during the HSR program was protected by a limited distribution system, and some has yet to enter the public domain.) Although the aircraft companies made some early progress on lower-boom concepts for the HSCT, they identified the need for more sonic boom research by NASA, especially on public acceptability and minimization techniques, before they could design a practical HSCT able to cruise over land.[470]

Because solving environmental issues would be a prerequisite to developing the HSCT, NASA structured the HSR program into two phases. Phase I—focusing on engine emissions, noise around airports, and sonic booms, as well as preliminary design work—was scheduled for 1990-1995. Among the objectives of Phase I were predicting HSCT sonic boom signatures, determining feasible reduction levels, and finding a scientific basis on which to set acceptability criteria. After hopefully making sufficient progress on the environmental problems, Phase II would begin in 1994. With more industry participation and greater funding, it would focus on economically realistic airframe and propulsion technologies and was expected to extend until 2001.[471]

When NASA convened its first workshop for the entire High-Speed Research program in Williamsburg, VA, from May 14-16, 1991, the headquarters status report on sonic boom technology warned that "the importance of reducing sonic boom cannot be overstated." One of the Douglas studies had projected that even by 2010, overwater-only routes would account for only 28 percent of long-range air traffic, but with overland cruise, the proposed HSCT could capture up to 70 percent of all such traffic. Based on past experience, the study admitted that research on low boom designs "is viewed with some skepticism as to its practical application. Therefore an early assessment is warranted."[472]

NASA, its contractors, academic grantees, and the manufacturers were already busy conducting a wide range of sonic boom research projects. The main goals were to demonstrate a waveform shape that could be acceptable to the public, to prove that a viable airplane could be built to generate such a waveform, to determine that such a shape would not be too badly disrupted during its propagation through the atmosphere, and to estimate that the economic benefit of overland supersonic flight would make up for any performance penalties imposed by a low-boom design.[473]

During the next 3 years, NASA and its partners went into a full-court press against the sonic boom. They began several dozen major experiments and studies, the results of which were published in reports and presented at conferences and workshops dealing solely with the sonic boom. These were held at the Langley Research Center in February 1992,[474] the Ames Research Center in May 1993,[475] the Langley Center in June 1994,[476] and again at Langley in September 1995.[477] The workshops, like the sonic boom effort itself, were organized into three major areas of research: (1) configuration design and analysis (managed by Langley's Advanced Vehicles Division), (2) atmospheric propagation, and (3) human acceptability (both managed by Langley's Acoustics Division). The reports from these workshops were each well over 500 pages long and included dozens of papers on the progress or completion of various projects.[478]

Figure 6. Low-boom/high-drag paradox: signatures of a blunt nose (low boom, high drag) and a sharp nose (high boom, low drag) in the near field, mid field, and at the ground. NASA.

The HSR program precipitated major advances in the design of supersonic configurations for reduced sonic boom signatures. Many of these were made possible by the new field of computational fluid dynamics (CFD). Researchers were now able to use complex computational algorithms processed by supercomputers to calculate the nonlinear aspects of near-field shock waves, even at high Mach numbers and angles of attack. Results could be graphically displayed in mesh and grid formats that emulated three dimensions. (In simple terms: before CFD, the nonlinear characteristics of shock waves generated by a realistic airframe had involved too many variables and permutations to calculate by conventional means.)

The Ames Research Center, with its location in the rapidly growing Silicon Valley area, was a pioneer in applying CFD capabilities to aerodynamics. At the 1991 HSR workshop, an Ames team led by Thomas Edwards and including modeling expert Samsun Cheung predicted that "in many ways, CFD paves the way to much more rapid progress in boom minimization. . . . Furthermore, CFD offers fast turnaround and low cost, so high-risk concepts and perturbations to existing geometries can be investigated quickly."[479]

At the same time, Christine Darden and a team that included Robert Mack and Peter G. Coen, who had recently devised a computer program for predicting sonic booms, used very realistic 12-inch wind tunnel models (the largest yet used to measure sonic boom). Although the models were more realistic than previous ones and validated much about the designs, including such details as engine nacelles, signature measurements in Langley's 4- by 4-foot Unitary Wind Tunnel and even Ames's 9- by 7-foot Unitary Wind Tunnel still left much to be desired.[480] During subsequent workshops and at other venues, experts from Ames, Langley, and their local contractors reported optimistically on the potential of new CFD computer codes—with names like UPS3D, OVERFLOW, AIRPLANE, and TEAM—to help design configurations optimized for both constrained sonic booms and aerodynamic efficiency. In addition to promoting the use of CFD, former Langley employee Percy "Bud" Bobbitt of Eagle Aeronautics pointed out the potential of hybrid laminar flow control (HLFC) for both aerodynamic and low-boom purposes.[481] At the 1992 workshop, Darden and Mack admitted how recent experiments at Langley had revealed limitations in using near-field wind tunnel data for extrapolating sonic boom signatures.[482]

Even the number-crunching capability of supercomputers was not yet powerful enough for CFD codes and the grids they produced to accurately depict effects beyond the near field, but the use of parallel computing held the promise of eventually doing so. It was becoming apparent that, for most aerodynamic purposes, CFD was the design tool of the future, with wind tunnel models becoming more a means of verification. As just one example of the value of CFD methods, Ames researchers were able to design an airframe that generated a type of multishock signature that might reach the ground with a quieter sonic boom than either the ramp or flattop wave forms that were a goal of traditional minimization theories.[483] Although not part of the HSCT effort, Ames and its contractors also used CFD to continue exploring the possible advantages of oblique-wing aircraft, including sonic boom minimization.[484]

Since neither wind tunnels nor CFD could as yet prove the persistence of waveforms for more than a small fraction of the 200 to 300 body lengths needed to represent the distance from an HSCT to the surface, Domenic Maglieri of Eagle Aeronautics led a feasibility study in 1992 on the most cost-effective ways to verify design concepts with realistic testing. After exploring a wide range of alternatives, the team selected the Teledyne-Ryan BQM-34 Firebee II remotely piloted vehicle (RPV), which the Air Force and Navy had used as a supersonic target drone. Four of these 28-foot-long RPVs, which could sustain a speed of Mach 1.3 at 9,000 feet (300 body lengths from the surface), were still available as surplus. Modifying them with low-boom design features such as specially configured 40-inch nose extensions (shown in Figure 7 with projected waveforms from 20,000 feet) could provide far-field measurements needed to verify the waveform shaping projected by CFD and wind tunnel models.[485] Meanwhile, a complementary plan at the Dryden Flight Research Center led to NASA's first significant sonic boom testing there since the 1960s. SR-71 program manager David Lux, atmospheric specialist L. J. Ehernberger, aerodynamicist Timothy R. Moes, and principal investigator Edward A. Haering came up with a proposal to demonstrate CFD design concepts by having one of Dryden's SR-71s modified with a low-boom configuration. As well as being much larger, faster, and higher-flying than the little Firebee, an SR-71 would also allow easier acquisition of near-field measurements for direct comparison with CFD predictions.[486] To lay the groundwork for this modification, the Dryden Center obtained baseline data from a standard SR-71 using one of its distinctive F-16XL aircraft (built by General Dynamics in the early 1980s for evaluation by the Air Force as a long-range strike version of the F-16 fighter). In tests at Edwards during July 1993, the F-16XL flew as close as 40 feet below and behind an SR-71 cruising at Mach 1.8 to collect near-field pressure measurements. Both the Langley Center and McDonnell-Douglas analyzed this data, which had been gathered by a standard flight-test nose boom. Both reached generally favorable conclusions about the ability of CFD and McDonnell-Douglas's proprietary MDBOOM program (derived from PCBoom) to serve as design tools. Based on these results, McDonnell-Douglas Aerospace West designed modifications to reduce the bow and middle shock waves of the SR-71 by reshaping the front of the airframe with a "nose glove" and adding to the midfuselage cross-section. An assessment of these modifications by Lockheed Engineering & Sciences found them feasible.[487] The next step would be to obtain the considerable funding that would be needed for the modifications and testing.

In May 1994, the Dryden Center used two of its fleet of F-18 Hornets to measure how near-field shockwaves merged to assess the feasibility of a similar low-cost experiment in waveform shaping using two SR-71s. Flying at Mach 1.2 with one aircraft below and slightly behind the other, the first experiment positioned the canopy of the lower F-18 in the tail shock extending down from the upper F-18 (called a tail-canopy formation). The second experiment had the lower F-18 fly with its canopy in the inlet shock of the upper F-18 (inlet-canopy). Ground sensor recordings revealed that the tail-canopy formation caused two separate N-wave signatures, but the inlet-canopy formation yielded a single modified signature, which two of the recorders measured as a flattop waveform. Even with the excellent visibility from the F-18's bubble canopy (one pilot used the inlet shock wave as a visual cue for positioning his aircraft) and its responsive flight controls, maintaining such precise positions was still not easy, and the pilots recommended against trying to do the same with the SR-71, considering its larger size, slower response, and limited visibility.[488]

Figure 7. Proposed modifications to BQM-34 Firebee II. NASA.

Atmospheric effects had long posed many uncertainties in understanding sonic booms, but advances in acoustics and atmospheric science since the SCR program promised better results. Scientists needed a better understanding not only of the way air molecules absorb sound waves, but also of the old issue of turbulence. In addition to using the Air Force's Boomfile and other available material, Langley's Acoustic Division had Eagle Aeronautics, in a project led by Domenic Maglieri, restore and digitize data from the irreplaceable XB-70 records.[489]

The division also took advantage of the NATO Joint Acoustic Propagation Experiment (JAPE) at the White Sands Missile Range in August 1991 to do some new testing. The researchers arranged for F-15, F-111, and T-38 aircraft, as well as one of Dryden's SR-71s, to make 59 supersonic passes over an extensive array of BEAR and other recording systems and meteorological sensors—both early in the morning (when the air was still) and during the afternoon (when there was usually more turbulence). Although meteorological data were incomplete, results later showed the effects of molecular relaxation and turbulence on both the rise time and overpressure of bow shocks.[490] Additional atmospheric information came from experiments on waveform freezing (persistence), measuring diffraction and distortion of sound waves, and trying to discover the actual relationship among molecular relaxation, turbulence, humidity, and other weather conditions.[491]

Figure 8. Proposed SR-71 low-boom modification: unmodified SR-71 configuration, M = 1.8, a = 3.5 deg; configuration with McDonnell-Douglas-modified fuselage, M = 1.8, a = 3.9 deg. NASA.

Leonard Weinstein of the Langley Center even developed a way to capture images of shock waves in the real atmosphere. He did this using a ground-based schlieren system (a specially masked and filtered tracking camera with the Sun providing backlighting). As shown in the accompanying photo, this was first demonstrated in December 1993 with a T-38 flying just over Mach 1 at Wallops Island.[492] All of the research into the theoretical, aerodynamic, and atmospheric aspects of sonic boom—no matter how successful—could not overcome the Achilles' heel of previous programs: the subjective response of human beings.

As a result, the Langley Center, in work led by Kevin Shepherd of the Acoustics Division, had begun a systematic effort to measure human responses to different strengths and shapes of sonic booms and hopefully determine a tolerable level for community acceptance. As an early step, the division built an airtight foam-lined sonic boom simulator booth (known as the "boom box") based on one at the University of Toronto. Using the latest in computer-generated digital amplification and loudspeaker technology, it was capable of generating shaped waveforms up to 4 psf (140 decibels). Based on responses from subjects, researchers selected the perceived-level decibel (PLdB) as the preferred metric. For responses outside a laboratory setting, Langley planned several additional acceptance studies.[493]

By 1994, early results had become available from two of these projects. The Langley Center and Wyle Laboratories had developed mobile boom simulator equipment called the In-Home Noise Generation/Response System (IHONORS). It consisted of computerized sound systems installed in 33 houses for 8 weeks at a time in a network connected by modems to a monitor at Langley. From February to December 1993, these households were subjected to almost 58,500 randomly timed sonic booms of various signatures for 14 hours a day. Although definitive analyses were not available until the following year, the initial results confirmed how the level of annoyance increased whenever subjects were startled or trying to rest.[494]

Preliminary results were also in from the first phase of the Western USA Sonic Boom Survey of civilians who had been exposed to such sounds for many years. This part of the survey took place in remote desert towns around the Air Force's vast Nellis combat training range complex in Nevada. Unlike previous community surveys, it correlated citizen responses to accurately measured sonic boom signatures (using BEAR devices) in places where booms were a regular occurrence, yet where the subjects did not live on or near a military installation (where the economic benefits of the base for the local economy might influence their opinions). Although findings were not yet definitive, these 1,042 interviews proved more decisive than any of the many other research projects in determining the future direction of the HSCT effort. Based on a metric called day-night average noise level, the respondents found the booms much more annoying than previous studies on other types of aircraft noise had, even at the levels projected for low-boom designs. Their negative responses, in effect, dashed hopes that the HSR program might lead to an overland supersonic transport.[495]

Leonard Weinstein's innovative schlieren photograph showing shock waves emanating from a T-38 flying Mach 1.1 at 13,000 feet, December 1993. NASA.

Well before the paper on this survey was presented at the 1994 Sonic Boom Workshop, its early findings had prompted NASA Headquarters to reorient High-Speed Research toward an HSCT design that would only fly supersonic over water. Just as with the AST program 20 years earlier, this became the goal of Phase II of the HSR program (which began using FY 1994 funding left over from the canceled NASP).[496]

At the end of the 1994 workshop, Christine Darden discussed lessons learned so far and future directions. While the design efforts had shown outstanding progress, dispersal of the work among two NASA Centers and two major aircraft manufacturers had resulted in communication problems as well as a certain amount of unhelpful competition. The milestone-driven HSR effort required concurrent progress in various different areas, which is inherently difficult to coordinate and manage. And even if low-boom airplane designs had been perfected to meet acoustic criteria, they would have been heavier and would have suffered from less acceptable low-speed performance than unconstrained designs. Under the new HSR strategy, any continued minimization research would now aim at lowering the sonic boom of the "baseline" overwater design, while propagation studies would concentrate on predicting boom carpets, focused booms, secondary booms, and ground disturbances. In view of the HSCT's overwater mission, new environmental studies would devote more attention to the penetration of shock waves into water and the effect of sonic booms on marine mammals and birds.[497]

Although the preliminary results of the first phase of the Western USA Survey had already had a decisive impact, Wyle Laboratories completed Phase II with a similar polling of civilians in Mojave Desert communities exposed regularly to sonic booms, mostly from Edwards AFB. Surprisingly, this phase of the survey found the people there much more amenable to sonic booms than the desert dwellers in Nevada were, but they were still more annoyed by booms than by other aircraft noise of comparable perceived loudness.[498]

With the decision to end work on a low-boom HSCT, the proposed modifications of the Firebee RPVs and SR-71 had of course been canceled (postponing for another decade any full-scale demonstrations of boom shaping). Nevertheless, some testing continued that would prove of future value. From February through April 1995, the Dryden Center conducted more SR-71 and F-16XL sonic boom flight tests. Led by Ed Haering, this experiment included an instrumented YO-3A light aircraft from the Ames Center, an extensive array of various ground sensors, a network of new differential Global Positioning System (GPS) receivers accurate to within 12 inches, and installation of a sophisticated new nose boom with four pressure sensors on the F-16XL. On eight long missions, one of Dryden's SR-71s flew at speeds between Mach 1.25 and Mach 1.6 at 31,000-48,000 feet, while the F-16XL (kept aloft by in-flight refuelings) made numerous near- and mid-field measurements at distances from 80 to 8,000 feet. Some of these showed that the canopy shock waves were still distinct from the bow shock after 4,000-6,000 feet. Comparisons of far-field measurements obtained by the YO-3A, flying at 10,000 feet above ground level, with those from the recording devices on the surface revealed effects of atmospheric turbulence. Analysis of the data validated two existing sonic boom propagation codes and clearly showed how variations in the SR-71's gross weight, speed, and altitude caused differences in shock wave patterns and their coalescence into N-shaped waveforms.[499]

This successful experiment marked the end of dedicated sonic boom flight-testing during the HSR program.

By late 1998, a combination of economic, technological, political, and budgetary problems (including NASA's cost overruns for the International Space Station) convinced Boeing to cut its support and the Administration of President Bill Clinton to terminate the HSR program at the end of FY 1999. Ironically, NASA's success in helping the aircraft industry develop quieter subsonic aircraft, which had the effect of moving the goalpost for acceptable airport noise, was one of the factors convincing Boeing to drop plans for the HSCT. Nevertheless, the HSR program was responsible for significant advances in technologies, techniques, and scientific knowledge, including a better understanding of the sonic boom and ways to diminish it.[500]

The Challenge of Limit-Cycles

The success of the new electronic control system concepts was based on the use of electrical signals from sensors (primarily rate gyros and accelerometers) that could be fed into the flight control system to control aircraft motion. As these electronic elements began to play a larger role, a different dynamic phenomenon came into play. "Limit-cycles" are a common characteristic of nearly all mechanical-electrical closed-loop systems and are related to the total gain of the feedback loop. For an aircraft flight control system, total loop gain is the product of two variables: (1) the magnitude of the aerodynamic effectiveness of the control surface for creating rotational motion (aerodynamic gain) and (2) the magnitude of the artificially created control surface command to the control surface (electrical gain). When the aerodynamic gain is low, such as at very low airspeeds, the electrical gain will be correspondingly high to command large surface deflections and rapid aircraft response. Conversely, when the aerodynamic gain is high, such as at high airspeed, low electrical gains and small surface deflections are needed for rapid airplane response.

These systems all have small dead bands, lags, and rate limits (nonlinearities) inherent in their final, real-world construction. When the total feedback gain is increased, the closed-loop system will eventually exhibit a small oscillation (limit-cycle) within this nonlinear region. The resultant total loop gain, which causes a continuous, undamped limit-cycle to begin, represents the practical upper limit for the system gain, since a further increase in gain will cause the system to become unstable and diverge rapidly, a condition which could result in structural failure of the system. Typically, the limit-cycle frequency for an aircraft control system is between two and four cycles per second.

Notice that the limit-cycle characteristics, or boundaries, are dependent upon an accurate knowledge of control surface effectiveness. Ground tests for limit-cycle boundaries were first devised by NASA Dryden Flight Research Center (DFRC) for the X-15 program and were accomplished by using a portable analog computer, positioned next to the airplane, to generate the predicted aerodynamic control effectiveness portion of the feedback path.[683] The control system rate gyro on the airplane was bypassed, and the analog computer was used to generate the predicted aircraft response that would have been generated had the airplane been actually flying. This equivalent rate gyro output was then inserted into the control system. The total loop gain was then gradually increased at the analog computer until a sustained limit-cycle was observed at the control surface. Small stick raps were used to introduce a disturbance in the closed-loop system in order to observe the damping characteristics. Once the limit-cycle total loop gain boundaries were determined, the predicted aerodynamic gains for various flight conditions were used to establish electrical gain limits over the flight envelope. These ground tests became routine at NASA Dryden and at the Air Force Flight Test Center (AFFTC) for all new aircraft.[684] For subsequent production aircraft, the resulting gain schedules were programmed within the flight control system computer. Real-time, direct measurements of airspeed, altitude, Mach number, and angle of attack were used to access and adjust the electrical gain schedules while in flight to provide the highest safe feedback gain while avoiding limit-cycle boundaries.
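In modern terms, the gain schedules derived from these ground tests amount to dividing a measured limit-cycle gain boundary, reduced by a safety margin, by the predicted aerodynamic gain at each flight condition. The sketch below is only an illustration of that relationship; the function names, margin, and numbers are hypothetical and are not taken from any NASA or AFFTC software.

```python
# Illustrative sketch (not flight software): scheduling electrical gain so that
# total loop gain stays below a limit-cycle boundary found in ground tests.
# All names and numbers are hypothetical.

def electrical_gain(aero_gain_predicted, limit_cycle_total_gain, margin=0.5):
    """Return the electrical (feedback) gain for one flight condition.

    aero_gain_predicted    -- predicted control-surface effectiveness (aerodynamic gain)
    limit_cycle_total_gain -- total loop gain at which a sustained limit-cycle appeared
                              during the ground test
    margin                 -- fraction of the limit-cycle boundary to allow (safety margin)
    """
    # Total loop gain = aerodynamic gain x electrical gain, so the allowable
    # electrical gain falls as the aerodynamic gain rises (e.g., at high airspeed).
    return margin * limit_cycle_total_gain / aero_gain_predicted

# Hypothetical schedule over dynamic pressure: aerodynamic gain grows with dynamic
# pressure, so the scheduled electrical gain shrinks correspondingly.
limit_cycle_total_gain = 8.0  # from ground test, illustrative value
for q_psf, aero_gain in [(50, 0.4), (200, 1.6), (600, 4.8)]:
    k_e = electrical_gain(aero_gain, limit_cycle_total_gain)
    print(f"q = {q_psf:4d} psf  aero gain = {aero_gain:4.1f}  "
          f"electrical gain = {k_e:5.2f}  total = {aero_gain * k_e:4.2f}")
```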

Although the limit-cycle ground tests described above had been performed, the NASA-Northrop HL-10 lifting body encountered limit-cycle oscillations on its maiden flight. After launch from the NB-52, the telemetry data showed a large limit-cycle oscillation of the elevons. The oscillations were large enough that the pilot could feel the aircraft motion in the cockpit. NASA pilot Bruce Peterson manually lowered the pitch gain, which reduced the severity of the limit-cycle. Additional aerodynamic problems were present during the short flight, requiring that the final landing approach be performed at a higher-than-normal airspeed. This caused the limit-cycle oscillations to begin again, and the pitch gain was reduced even further by Peterson, who then capped his already impressive performance by landing the craft safely at well over 300 mph. NASA engineer Weneth Painter insisted the flight be thoroughly analyzed before the test team made another flight attempt, and subsequent analysis by Robert Kempel and a team of engineers concluded that the wind tunnel predictions of elevon control effectiveness were considerably lower than the effectiveness experienced in flight.[685] This resulted in a higher aerodynamic gain than expected in the total loop feedback path and required a reassessment of the maximum electrical gain that could be tolerated.[686]

Digital Computer Simulation

The computations for the mathematical models of the early simulators mentioned previously were performed on analog computers. Analog computers were capable of solving complex differential equations in real time. The digital computers available in the 1950s were mechanical units that were extremely slow and not capable of the rapid integration that was required for simulation. One difficulty with analog computers was the existence of electronic noise within the equipment, which caused the solutions to drift and become inaccurate after several minutes of operation. For short simulation exercises (such as a 10-minute X-15 flight) the results were quite acceptable. A second difficulty was storing data, such as aerodynamic functions.

The X-20 Dyna-Soar program mentioned previously posed a challenge to the field of simulation. The shortest flight was to be a once-around orbital flight with a flight time of over 90 minutes. A large volume of aerodynamic data needed to be stored covering a very large range of Mach numbers and angles of attack. The analog inaccuracy problem was tackled by University of Michigan researchers, who revised the standard equations of motion so that the reference point for integration was a 300-mile circular orbit, rather than the starting Earth coordinates at takeoff. These equations greatly improved the accuracy of analog simulations of orbiting vehicles. As the AFFTC and NASA began to prepare for testing of the X-20, an analog simulation was created at Edwards that was used to develop test techniques and to train pilots. Comparing the real-time simulation solutions with non-real-time digital solutions showed that the closure after 90 minutes was within about 20,000 feet—probably adequate for training, but such errors still dictated that the mission be broken into segments for accurate results. The solution was the creation of a hybrid computer simulation that solved the three rotational equations using analog computers but solved the three translational equations at a slower rate using digital computers. The hybrid computer equipment was purchased for installation at the AFFTC before the X-20 program was canceled in 1963. When the system was delivered, it was reprogrammed to represent the X-15A-2, a rebuilt variant of the second X-15 intended for possible flight to Mach 7, carrying a scramjet aerodynamic test article on a stub ventral fin.[737] Although quite complex (it necessitated a myriad of analog-to-digital and digital-to-analog conversions), this hybrid system was subsequently used in the AFFTC simulation lab to successfully simulate several other airplanes, including the C-5, F-15, and SR-71, as well as the M2-F2 and X-24A/B Lifting Bodies and Space Shuttle orbiter.
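The split between fast rotational dynamics on the analog side and slower translational dynamics on the digital side is, in today's vocabulary, a multi-rate integration scheme. The following sketch illustrates only that idea; the derivative functions, states, and rates are hypothetical placeholders rather than a reconstruction of the AFFTC hybrid system.

```python
# Minimal multi-rate integration sketch (illustrative only; names and rates are hypothetical).
# Rotational states are advanced at a fast rate, translational states at a slower rate,
# echoing the hybrid setup in which analog hardware handled rotation and a digital
# computer handled translation.

def rot_derivs(rot, trans):
    # Placeholder rotational dynamics (body rates); a real model would use
    # aerodynamic moments and vehicle inertias.
    return [-0.5 * r for r in rot]

def trans_derivs(trans, rot):
    # Placeholder translational dynamics; a real model would use forces,
    # gravity, and the attitude carried by the rotational states.
    return [0.1 * t for t in trans]

def step(states, derivs, dt):
    # Simple forward-Euler update for this illustration.
    return [s + d * dt for s, d in zip(states, derivs)]

rot = [0.1, 0.0, 0.02]        # roll, pitch, yaw rates (placeholder units)
trans = [0.0, 0.0, 90000.0]   # downrange, crossrange, altitude (placeholder units)

dt_fast, substeps = 0.01, 5   # rotational step; translational step = dt_fast * substeps
for frame in range(1000):
    for _ in range(substeps):                      # fast loop: rotational equations
        rot = step(rot, rot_derivs(rot, trans), dt_fast)
    trans = step(trans, trans_derivs(trans, rot),  # slow loop: translational equations
                 dt_fast * substeps)

print("final rotational states:", rot)
print("final translational states:", trans)
```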

The speed of digital computers increased rapidly in the 1970s, and soon all real-time simulation was being done with digital equipment. Out-of-the-window visual displays also improved dramatically and began to be used in conjunction with the cockpit instruments to provide very realistic training for flight crews. One of the last features to be developed in the field of visual displays was the accurate representation of the terrain surface during the last few feet of descent before touchdown.

Simulation has now become a primary tool for designers, flight-test engineers, and pilots during the design, development, and flight-testing of new aircraft and spacecraft.

Some Seminal Solutions and Applications

We have discussed the historical evolution of the governing flow equations, the first essential element of CFD. We then discussed the evolution of the high-speed digital computer, the second essential element of CFD. We now come to the crux of this article, the actual CFD flow-field solutions, their evolution, and their importance. Computational fluid dynamics has grown exponentially in the past four decades, rendering any selective examination of applications problematical. This case study examines four applications that have driven the development of CFD to its present place of prominence: the supersonic blunt body problem, transonic airfoils and wings, Navier-Stokes solutions, and hypersonic vehicles.

Ames Research Center

The Ames Research Center—with research responsibilities within aerodynamics, aeronautical and space vehicle studies, reentry and thermal protection systems, simulation, biomedical research, human factors, nanotechnology, and information technology—is one of the world's premier aerospace research establishments. It was the second NACA laboratory, established in 1939 as war loomed in Europe. The Center was built initially to provide for expansion of wind tunnel facilities beyond the space and power generation capacity available at Langley. Accordingly, in the computer age, Ames became a major center for computational fluid dynamics methods development.[850] Ames also developed a large and active structures effort, with approximately 50 to 100 researchers involved in the structural disciplines at any given time.[851] Areas of research include structural dynamics, hypersonic flight and reentry, rotorcraft, and multidisciplinary design/analysis/optimization. These last two are discussed briefly below.

In the early 1970s, a joint NASA-U. S. Army rotorcraft program led to a significant amount of rotorcraft flight research at Ames. "The flight research activity initially concentrated on control and handling issues. . . . Later on, rotor aerodynamics, acoustics, vibration, loads, advanced concepts, and human factors research would be included as important elements in the joint program activity."[852] As is typically the case, this effort impacted the direction of analytical work as well in rotor aeroelastics, aeroservoelastics, acoustics, rotor-body coupling, rotor air loads prediction, etc. For example, a "comprehensive analytical model" completed in 1980 combined structural, inertial, and aerodynamic models to calculate rotor performance, loads, noise, vibration, gust response, flight dynamics, handling qualities, and aeroelastic stability of rotorcraft.[853] Other efforts were less comprehensive and produced specialized methods for treating various aspects of the rotorcraft problem, such as blade aeroelasticity.[854] The General Rotorcraft Aeromechanical Stability Program (GRASP) combined finite elements with concepts used in spacecraft multibody dynamics problems, treating the helicopter as a structure with flexible, rotating substructures.[855]

Rotorcraft analysis has to be multidisciplinary, because of the many types of coupling that are active. Fixed wing aircraft have not always been treated with a multidisciplinary perspective, but the multidisciplinary analysis and optimization of aircraft is a growing field and one in which Ames has made many valuable contributions. The Advanced Concepts Branch, not directly associated with Structures & Loads but responsible for multidisciplinary vehicle design and optimization studies, has performed and/or sponsored much of this work.

A general-purpose optimization program, CONMIN, was developed jointly by Ames and by the U. S. Army Air Mobility Research & Development Laboratory in 1973[856] and had been used extensively by NASA Centers and contractors through the 1990s. Garret Vanderplaats was the principal developer. Because it is a generic mathematical function minimization program, it can in principle drive any design/analysis process toward an optimum. CONMIN has been coupled with many different types of analysis programs, including NASTRAN.[857]
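The essential point about CONMIN is that a generic function minimizer can drive any analysis that returns an objective and constraint values. The sketch below illustrates that coupling using SciPy's minimize purely as a modern stand-in for CONMIN; the toy sizing "analysis," its variables, and its numbers are hypothetical.

```python
# Illustrative coupling of a generic minimizer to a design analysis
# (SciPy stands in for CONMIN; the "analysis" and all numbers are hypothetical).
from scipy.optimize import minimize

def analysis(x):
    """Toy sizing analysis: x = [skin thickness, spar depth] (arbitrary units).
    Returns a weight-like objective and a stress-margin constraint value."""
    thickness, depth = x
    weight = 10.0 * thickness + 4.0 * depth   # heavier with more material
    stress_margin = thickness * depth - 1.0   # must stay >= 0 (allowable not exceeded)
    return weight, stress_margin

result = minimize(
    fun=lambda x: analysis(x)[0],                              # objective: minimize weight
    x0=[2.0, 2.0],                                             # starting design
    bounds=[(0.1, 5.0), (0.1, 5.0)],                           # gauge limits
    constraints=[{"type": "ineq", "fun": lambda x: analysis(x)[1]}],
    method="SLSQP",
)
print("optimum design:", result.x, "weight:", result.fun)
```

The optimizer never needs to know anything about the analysis beyond the function values it returns, which is why a code like CONMIN could be coupled to programs as different as NASTRAN and a conceptual sizing tool.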

Aircraft Synthesis (ACSYNT) was an early example of a multidisciplinary aircraft sizing and conceptual design code. Like many early (and some current) total-vehicle sizing and synthesis tools, ACSYNT did not actually perform structural analysis but instead used empirically based equations to estimate the weight of airframe structure. ACSYNT was initially released in the 1970s and has been widely used in the aircraft industry and at universities. Collaboration between Ames and the Virginia Polytechnic Institute's CAD Laboratory, to develop a computer-aided design (CAD) interface for ACSYNT, eventually led to the commercialization of ACSYNT and the creation of Phoenix Integration, Inc., in 1995.[858] Phoenix Integration is currently a major supplier of analysis integration and multidisciplinary optimization software.

Tools such as ACSYNT are very practical, but it has also been a goal at Ames to couple the prediction of aerodynamic forces and loads to more rigorous structural design and analysis, which would give more insight into the effects of new materials or novel vehicle configurations. To this end, a code called ENSAERO was developed, combining finite element structural analysis capability with high-fidelity Euler (inviscid) and Navier-Stokes (viscous) aerodynamics solutions. "The code is capable of computing unsteady flows on flexible wings with vortical flows,"[859] and provisions were made to include control or thermal effects as well. ENSAERO was introduced in 1990 and developed and used throughout the 1990s.

In a cooperative project with Virginia Tech and McDonnell-Douglas Aerospace, ENSAERO was eventually coupled with NASTRAN to provide higher structural fidelity than the relatively limited structural capability intrinsic to ENSAERO.[860] Guru Guruswamy was the principal developer.

In the late 1990s, Juan Alonso, James Reuther, and Joaquim Martins, with other researchers at Ames, applied the adjoint method to the problem of combined aerostructural design optimization. The adjoint method, first applied to purely aerodynamic shape optimization in the late 1980s by Dr. Antony Jameson, is an approach to optimization that provides revolutionary gains in efficiency relative to traditional methods, especially when there are a large number of design variables. It is not an exaggeration to say that adjoint methods have revolutionized the art of aerodynamic optimization. Technical conferences often contain whole sessions on applications of adjoint methods, and several aircraft companies have made practical applications of the technique to the aerodynamic design of aircraft that are now in production.[861] Bringing this approach to aerostructural optimization is extremely significant.
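The efficiency gain of the adjoint method is easiest to see in a small linear example: if the state u satisfies A u = b(x) for design variables x and the objective is J = c^T u, then a single adjoint solve A^T lambda = c yields the entire gradient dJ/dx, no matter how many design variables there are. The sketch below is purely illustrative, with hypothetical matrices, and is not drawn from the Ames work cited here.

```python
# Minimal adjoint-gradient sketch (illustrative linear problem, hypothetical data).
# State equation:  A u = b(x), with b depending linearly on design variables x.
# Objective:       J(x) = c^T u(x).
# One adjoint solve gives dJ/dx for every design variable at once.
import numpy as np

n_state, n_design = 4, 100
rng = np.random.default_rng(0)
A = np.eye(n_state) * 3.0 + rng.normal(size=(n_state, n_state)) * 0.1
B = rng.normal(size=(n_state, n_design))   # b(x) = B x
c = rng.normal(size=n_state)
x = rng.normal(size=n_design)

u = np.linalg.solve(A, B @ x)              # one state solve
lam = np.linalg.solve(A.T, c)              # one adjoint solve
grad = B.T @ lam                           # dJ/dx for all 100 design variables

# Spot-check one component by finite differences.
eps, i = 1e-6, 7
x_p = x.copy(); x_p[i] += eps
J = c @ u
J_p = c @ np.linalg.solve(A, B @ x_p)
print(grad[i], (J_p - J) / eps)            # the two values should agree closely
```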

ANSYMP Computer Program (Glenn Research Center, 1983)

ANSYMP was developed to capture the key elements of local plastic behavior without the overhead of a full nonlinear finite element analysis. "Nonlinear, finite-element computer programs are too costly to use in the early design stages for hot-section components of aircraft gas turbine engines. . . . This study was conducted to develop a computer program for performing a simplified nonlinear structural analysis using only an elastic solution as input data. The simplified method was based on the assumption that the inelastic regions in the structure are constrained against stress redistribution by the surrounding elastic material. Therefore the total strain history can be defined by an elastic analysis. . . . [ANSYMP] was created to predict the stress-strain history at the critical fatigue location of a thermomechanically cycled structure from elastic input data. . . . Effective [inelastic] stresses and plastic strains are approximated by an iterative and incremental solution procedure." ANSYMP was verified by comparison to a full nonlinear finite element code (MARC). Cyclic hysteresis loops and mean stresses from ANSYMP "were in generally good agreement with the MARC results. In a typical problem, ANSYMP used less than 1 percent of the central processor unit (CPU) time required by MARC to compute the inelastic solution."[980]
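The quoted strain-invariance assumption, that the total strain history at the critical location can be taken from the elastic solution, can be illustrated with a simple stand-in material model. The sketch below enforces a Ramberg-Osgood cyclic stress-strain curve at the elastically computed strain using a Newton iteration; the material constants and stress history are hypothetical, and this is an illustration of the general idea rather than the ANSYMP procedure itself.

```python
# Illustrative sketch of a strain-controlled inelastic correction (not the ANSYMP algorithm).
# Assumption: the total strain at the critical location follows the elastic solution,
# so the stress is found by enforcing a cyclic stress-strain curve at that strain.
# The Ramberg-Osgood constants below are hypothetical.
E, K, n = 200e3, 1200.0, 0.12   # modulus (MPa), cyclic strength coeff., hardening exponent

def stress_at_strain(total_strain, tol=1e-8, iters=100):
    """Newton iteration on  total_strain = sigma/E + (sigma/K)**(1/n)."""
    sigma = E * total_strain                 # elastic estimate as the starting point
    for _ in range(iters):
        f = sigma / E + (sigma / K) ** (1.0 / n) - total_strain
        dfds = 1.0 / E + (1.0 / (n * K)) * (sigma / K) ** (1.0 / n - 1.0)
        step = f / dfds
        sigma -= step
        if abs(step) < tol:
            break
    return sigma

# Stress peaks from a (hypothetical) elastic finite element run, in MPa:
elastic_stress_history = [0.0, 450.0, 600.0, 300.0, 650.0]
for s_elastic in elastic_stress_history:
    eps_total = s_elastic / E                # total strain taken from the elastic solution
    s_inelastic = stress_at_strain(eps_total)
    print(f"elastic {s_elastic:6.1f} MPa -> approx. inelastic {s_inelastic:6.1f} MPa")
```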

Reusable Surface Insulation

Early in the 1960s, researchers at Lockheed introduced an entirely different approach to thermal protection, which in time became the standard. Ablatives were unrivalled for once-only use, but during that decade the hot structure continued to stand out as the preferred approach for reusable craft such as Dyna-Soar. As noted, it used an insulated primary or load-bearing structure with a skin of outer panels. These emitted heat by radiation, maintaining a temperature that was high but steady.

Strength versus temperature for various superalloys, including Rene 41, the primary structural material used on the X-20 Dyna-Soar. NASA.

Metal fittings supported these panels, and while the insulation could be high in quality, these fittings unavoidably leaked heat to the underlying structure. This raised difficulties in crafting this structure of aluminum and even of titanium, which had greater heat resistance. On Dyna-Soar, only Rene 41 would do.[1062]

Ablatives avoided such heat leaks while being sufficiently capable as insulators to permit the use of aluminum. In principle, a third approach combined the best features of hot structures and ablatives. It called for the use of temperature-resistant tiles, made perhaps of ceramic, which could cover the vehicle skin. Like hot-structure panels, they would radiate heat, while remaining cool enough to avoid thermal damage. In addition, they were to be reusable. They also were to offer the excellent insulating properties of good ablators, preventing heat from reaching the underlying structure—which once more might be of aluminum. This concept became known as reusable surface insulation (RSI). In time, it gave rise to the thermal protection of the Shuttle.

RSI grew out of ongoing work with ceramics for thermal protection. Ceramics had excellent temperature resistance, light weight, and good insulating properties. But they were brittle and cracked rather than stretched in response to the flexing under load of an underlying metal primary structure. Ceramics also were sensitive to thermal shock, as when heated glass breaks when plunged into cold water. In flight, such thermal shock resulted from rapid temperature changes during reentry.[1063]

Monolithic blocks of the ceramic zirconia had been specified for the nose cap of Dyna-Soar, but a different point of departure used mats of solid fiber in lieu of the solid blocks. The background to the Shuttle's tiles lay in work with such mats that took place early in the 1960s at Lockheed Missiles and Space Company. Key people included R. M. Beasley, Ronald Banas, Douglas Izu, and Wilson Schramm. A Lockheed patent disclosure of December 1960 gave the first presentation of a reusable insulation made of ceramic fibers for use as a heat shield. Initial research dealt with casting fibrous layers from a slurry and bonding the fibers together.

Related work involved filament-wound structures that used long continuous strands. Silica fibers showed promise and led to an early success: a conical radome of 32-inch diameter built for Apollo in 1962. Designed for reentry, it had a filament-wound external shell and a lightweight layer of internal insulation cast from short fibers of silica. The two sections were densified with a colloid of silica particles and sintered into a composite. This gave a nonablative structure of silica composite reinforced with fiber. It never flew, as design requirements changed during the development of Apollo. Even so, it introduced silica fiber into the realm of reentry design.

Another early research effort, Lockheat, fabricated test versions of fibrous mats that had controlled porosity and microstructure. These were impregnated with organic fillers such as Plexiglas (methyl methacrylate). These composites resembled ablative materials, though the filler did not char. Instead it evaporated or volatilized, producing an outward flow of cool gas that protected the heat shield at high heat-transfer rates. The Lockheat studies investigated a range of fibers that included silica, alumina, and boria. Researchers constructed multilayer composite structures of filament-wound and short-fiber materials that resembled the Apollo radome. Impregnated densities were 40 to 60 lb/ft3, the higher density being close to that of water. Thicknesses of no more than an inch gave acceptably low back-face temperatures during simulations of reentry.

This work with silica-fiber ceramics was well underway during 1962. Three years later, a specific formulation of bonded silica fibers was ready for further development. Known as LI-1500, it was 89 percent porous and had a density of 15 lb/ft3, one-fourth that of water. Its external surface was impregnated with filler to a predetermined depth, again to provide additional protection during the most severe reentry heating. By the time this filler was depleted, the heat shield was to have entered a zone of more moderate heating, where the fibrous insulation alone could provide protection.

Initial versions of LI-1500, with impregnant, were intended for use with small space vehicles similar to Dyna-Soar that had high heating rates. Space Shuttle concepts were already attracting attention—the January 1964 issue of Astronautics & Aeronautics, the journal of the American Institute of Aeronautics and Astronautics, presents the thinking of the day—and in 1965 a Lockheed specialist, Maxwell Hunter, introduced an influential configuration called Star Clipper. His design employed LI-1500 for thermal protection.

Like other Shuttle concepts, Star Clipper was to fly repeatedly, but the need for an impregnant in LI-1500 compromised its reusability. In contrast to earlier entry vehicle concepts, however, Star Clipper was large, offering exposed surfaces that were sufficiently blunt to benefit from H. Julian Allen's blunt-body principle. They had lower temperatures and heating rates, which made it possible to dispense with the impregnant. An unfilled version of LI-1500, which was inherently reusable, now could serve.

Here was the first concept of a flight vehicle with reusable insulation, bonded to the skin, which could reradiate heat in the fashion of a hot structure. However, the matted silica by itself was white and had low thermal emissivity, making it a poor radiator of heat. This brought excessive surface temperatures that called for thick layers of the silica insulation, adding weight. To reduce the temperatures and the thickness, the silica needed a coating that could turn it black for high emissivity. It then would radiate well and remain cooler.

The selected coating was a borosilicate glass, initially with an admixture of Cr2O3 and later with silicon carbide, which further raised the emissivity. The glass coating and the silica substrate were both silicon dioxide; this assured a match of their coefficients of thermal expansion, to prevent the coating from developing cracks under the temperature changes of reentry. The glass coating could soften at very high temperatures to heal minor nicks or scratches. It also offered true reusability, surviving repeated cycles to 2,500 °F. A flight test came in 1968 as NASA Langley investigators mounted a panel of LI-1500 to a Pacemaker reentry test vehicle along with several candidate ablators. This vehicle carried instruments and was recovered. Its trajectory reproduced the peak heating rates and temperatures of a reentering Star Clipper. The LI-1500 test panel reached 2,300 °F and did not crack, melt, or shrink. This proof-of-concept test gave further support to the concept of high-emittance reradiative tiles of coated silica for thermal protection.[1064]

Lockheed conducted further studies at its Palo Alto Research Center. Investigators cut the weight of RSI by raising its porosity from the 89 percent of LI-1500 to 93 percent. The material that resulted, LI-900, weighed only 9 pounds per cubic foot, one-seventh the density of water.[1065] There also was much fundamental work on materials. Silica exists in three crystalline forms: quartz, cristobalite, and tridymite. These not only have high coefficients of thermal expansion but also show sudden expansion or contraction with temperature because of solid-state phase changes. Cristobalite is particularly noteworthy; above 400 °F, it expands by more than 1 percent as it transforms from one phase to another. Silica fibers for RSI were to be glass, an amorphous rather than a crystalline state with a very low coefficient of thermal expansion and an absence of phase changes. The glassy form thus offered superb resistance to thermal stress and thermal shock, which would recur repeatedly during each return from orbit.[1066]

The raw silica fiber came from Johns Manville, which produced it from high-purity sand. At elevated temperatures, it tended to undergo "devitrification," transforming from a glass into a crystalline state. Then, when cooling, it passed through phase-change temperatures and the fiber suddenly shrank, producing large internal tensile stresses. Some fibers broke, giving rise to internal cracking within the RSI and degradation of its properties. These problems threatened to grow worse during subsequent cycles of reentry heating.

To prevent devitrification, Lockheed worked to remove impurities from the raw fiber. Company specialists raised the purity of the silica to 99.9 percent while reducing contaminating alkalis to as low as 6 parts per million. Lockheed did this not only in the laboratory but also in a pilot plant, which took the silica from raw material to finished tile, applying 140 process controls along the way. Established in 1970, the pilot plant was expanded in 1971 to attain a true manufacturing capability. Within this facility, Lockheed produced tiles of LI-1500 and LI-900 for use in extensive programs of test and evaluation. In turn, the increasing availability of these tiles encouraged their selection for Shuttle protection in lieu of a hot-structure approach.[1067]

General Electric (GE) also became actively involved, studying types of RSI made from zirconia and from mullite, 3Al2O3·2SiO2, as well as from silica. The raw fibers were commercial grade, with the zirconia coming from Union Carbide and the mullite from Babcock and Wilcox. Devitrification was a problem, but whereas Lockheed had addressed it by purifying its fiber, GE took the raw silica from Johns Manville and tried to use it with little change. The basic fiber, the Q-felt of Dyna-Soar, had also served as insulation on the X-15. It contained 19 different elements as impurities. Some were present at a few parts per million, but others—aluminum, calcium, copper, lead, magnesium, potassium, sodium—ran from 100 to 1,000 parts per million. In total, up to 0.3 percent was impurity.

General Electric treated this fiber with a silicone resin that served as a binder, then pyrolyzed the resin, breaking it down at high temperature. This transformed the fiber into a composite, sheathing each strand with a layer of amorphous silica that had a purity of 99.98 percent or more. This high purity resulted from that of the resin. The amorphous silica bound the fibers together while inhibiting their devitrification. General Electric's RSI had a density of 11.5 lb/ft3, midway between that of LI-900 and LI-1500.[1068]

Many Shuttle managers had supported hot structures, but by mid-1971 those structures were in trouble. In Washington, the Office of Management and Budget (OMB) now was making it clear that it expected to impose stringent limits on funding for the Shuttle, which brought a demand for new configurations that could cut the cost of development. Within weeks, the contractors did a major turnabout. They abandoned hot structures and embraced RSI. Managers were aware that it might take time to develop for operational use, but they were prepared to use ablatives for interim thermal protection and to switch to RSI once it was ready.[1069]

What brought this dramatic change? The advent of RSI production at Lockheed was critical. This drew attention from Max Faget, a longtime NACA-NASA leader who had kept his hand in the field of Shuttle design, offering a succession of conceptual design configurations that had helped to guide the work of the contractors. His most important concept, designated MSC-040, came out in September 1971 and served as a point of reference. It used RSI and proposed to build the Shuttle of aluminum rather than Rene 41 or anything similar.[1070]

Why aluminum? "My history has always been to take the most conservative approach," Faget explained subsequently. Everyone knew how to work with aluminum, for it was the most familiar of materials, but everything else carried large question marks. Titanium, for one, was a black art. Much of the pertinent shop-floor experience had been gained within the SR-71 program and was classified. Few machine shops had pertinent background, for only Lockheed had constructed an airplane, the SR-71, that used titanium hot structure, and even its machinists and metallurgists had encountered serious difficulties in wrestling with the metal. The situation was worse for columbium and the superalloys, for these metals had been used mostly in turbine blades. With the Shuttle facing the OMB's cost constraints, no one cared to risk an overrun while machinists struggled with the problems of other new materials.[1071]

NASA Langley had worked to build a columbium heat shield for the Shuttle and had gained a particularly clear view of its difficulties. It was heavier than RSI but offered no advantage in temperature resistance.

In addition, coatings posed serious problems. Silicides showed promise of reusability and long life, but they were fragile and easily damaged. A localized loss of coating could result in rapid oxygen embrittlement at high temperatures. Unprotected columbium oxidized readily, and above the melting point of its oxide, 2,730 °F, it could burst into flame.[1072] "The least little scratch in the coating, the shingle would be destroyed during reentry," Faget said. Charles Donlan, the Shuttle Program Manager at NASA Headquarters, placed this in a broader perspective in 1983:

Phase B was the first really extensive effort to put together studies related to the completely reusable vehicle. As we went along, it became increasingly evident that there were some problems. And then as we looked at the development problems, they became pretty expensive. We learned also that the metallic heat shield, of which the wings were to be made, was by no means ready for use. The slightest scratch and you are in trouble.[1073]

Other refractory metals offered alternatives to columbium, but even with these, the complexity of a hot structure militated against its selection. As a mechanical installation, it called for large numbers of clips, brackets, standoffs, frames, beams, and fasteners. Structural analysis loomed as a formidable task. Each of many panel geometries needed its own analysis, to show with confidence that the panels would not fail through creep, buckling, flutter, or stress under load. Yet this confidence might be fragile, for hot structures had limited ability to resist over-temperatures. They also faced the continuing issue of sealing panel edges against ingestion of hot gas during reentry.[1074]

In this fashion, having taken a long look at hot structures, NASA did an about-face as it turned toward the RSI that Lockheed's Max Hunter had recommended as early as 1965. Then, in January 1972, President Richard Nixon gave his approval to the Space Shuttle program, thereby raising it to the level of a Presidential initiative. Within days, NASA's Dale Myers spoke to a conference in Houston and stated that the Agency had made the basic decision to use RSI. Requests for proposal soon went out, inviting leading aerospace corporations to bid for the prime contract on the Shuttle orbiter, and North American won this $2.6-billion prize in July. However, the RSI was not Lockheed's. The winning proposal specified mullite RSI for the undersurface and forward fuselage, a design feature that had been held over from the company's studies of a fully reusable orbiter during the previous year.[1075]

Still, was mullite RSI truly the one to choose? It came from General Electric and had lower emissivity than the silica RSI of Lockheed but could withstand higher temperatures. Yet the true basis for selection lay in the ability to withstand 100 reentries as simulated in ground test. NASA conducted these tests during the last 5 months of 1972, using facilities at its Ames, Johnson, and Kennedy Centers, with support from Battelle Memorial Institute.

The main series of tests ran from August to November and gave a clear advantage to Lockheed. That firm's LI-900 and LI-1500 went through 100 cycles to 2,300 °F and met specified requirements for maintenance of low back-face temperature and minimal thermal conductivity. The mullite showed excessive back-face temperatures and higher thermal conductivity, particularly at elevated temperatures. As test conditions increased in severity, the mullite also developed coating cracks and gave indications of substrate failure.
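Back-face temperature mattered because the tile bond line and the aluminum structure beneath it had to stay relatively cool. In a simple steady-conduction picture, the heat soaking through to the bond line scales with the tile's thermal conductivity; the sketch below uses assumed, round-number conductivities and temperatures, not measured values for either material:

```python
# Minimal sketch of steady 1-D conduction through a tile, q = k * dT / L.
# Conductivities, temperatures, and thickness are assumed round numbers,
# not measured properties of LI-900 or the mullite RSI.
def conducted_flux_btu_hr_ft2(k_btu_hr_ft_f, face_temp_f, bond_temp_f, thickness_in):
    return k_btu_hr_ft_f * (face_temp_f - bond_temp_f) / (thickness_in / 12.0)

for label, k in (("lower-conductivity tile", 0.05), ("higher-conductivity tile", 0.15)):
    q = conducted_flux_btu_hr_ft2(k, 2300.0, 350.0, 2.0)
    print(f"{label}: ~{q:.0f} Btu/hr-ft^2 reaches the bond line")
```

Tripling the conductivity triples the heat reaching the structure, which is why higher conductivity showed up in the tests as excessive back-face temperatures.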

The tests then introduced acoustic loads, with each cycle of the simulation now subjecting the RSI to the loud roar of rocket flight along with the heating of reentry. LI-1500 continued to show promise. By mid-November, it had demonstrated the equivalent of 20 cycles to 160 decibels, the acoustic level of a large launch vehicle, and 2,300 °F. A month later, NASA conducted what Lockheed described as a "sudden death shootout": a new series of thermal-acoustic tests in which the contending materials went into a single large 24-tile array at NASA Johnson. After 20 cycles, only Lockheed's LI-900 and LI-1500 remained intact. In separate tests, LI-1500 withstood 100 cycles to 2,500 °F and survived a thermal overshoot to 3,000 °F, as well as an acoustic overshoot to 174 dB. Clearly, this was the material NASA wanted.[1076]

Figure: Thermal protection system for the proposed National Hypersonic Flight Research Facility, 1978. NASA.

As insulation, the tiles were astonishing. A researcher could heat one in a furnace until it was white hot, remove it, allow its surface to cool for a couple of minutes, and pick it up at its edges using his or her fingers, with its interior still at white heat. Lockheed won the thermal-protection subcontract in 1973, with NASA specifying LI-900 as the baseline RSI. The firm responded with preparations for a full-scale production facility in Sunnyvale, CA. With this, tiles entered the mainstream of thermal protection.
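The furnace demonstration reflects the tiles' extremely low thermal diffusivity: the characteristic time for heat to conduct a distance L scales as L²/α. A rough comparison, using assumed order-of-magnitude diffusivities:

```python
# Rough comparison of how long heat takes to conduct 1 cm into a material,
# t ~ L^2 / alpha. Diffusivity values are assumed order-of-magnitude figures,
# not measured properties of LI-900.
def conduction_time_seconds(depth_m, diffusivity_m2_per_s):
    return depth_m ** 2 / diffusivity_m2_per_s

DEPTH_M = 0.01  # 1 centimeter in from a tile edge
for label, alpha in (("silica RSI tile (assumed)", 5e-7), ("aluminum", 9.7e-5)):
    t = conduction_time_seconds(DEPTH_M, alpha)
    print(f"{label}: ~{t:.0f} s to reach 1 cm")
```

Minutes for the tile versus about a second for aluminum is why the edges could be handled while the interior still glowed.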

The NASA Digital Fly-By-Wire F-8 Program

A former Navy F-8C Crusader fighter was chosen for modification, with the goal of both validating the benefits of a digital fly-by-wire aircraft flight control system and providing additional confidence in its use. Mel Burke had worked with the Navy to arrange for the transfer of four LTV F-8C Crusader supersonic fighters to the Flight Research Center. One would be modified for the F-8 Supercritical Wing project, one was converted into the F-8 DFBW Iron Bird ground simulator, another was modified as the DFBW F-8, and one was retained in its basic service configuration and used for pilot familiarization training and general proficiency flying. When Burke left for a job at NASA Headquarters, Cal Jarvis, a highly experienced engineer who had worked on fly-by-wire systems in the X-15 and LLRV programs, took over as program manager. In March 1971, modifications began to create the F-8 DFBW Iron Bird simulator. The Iron Bird effort was planned to ensure that development of the ground simulator always kept ahead of conversion efforts on the DFBW flight-test aircraft. That aircraft, the very first F-8C built for the Navy in 1958 (bureau No. 145546), carried the NASA tail No. 802 along with a "DIGITAL FLY-BY-WIRE" logo painted in blue on its fuselage sides.

Highly Maneuverable Aircraft Technology

Figure: Research on the HiMAT remotely piloted test vehicle was conducted by NASA and the Air Force Flight Dynamics Laboratory between 1979 and 1983. NASA.

The Highly Maneuverable Aircraft Technology (HiMAT) program provides an interesting perspective on the use of unmanned research aircraft equipped with digital fly-by-wire flight control systems, one that is perhaps most relevant to today's rapidly expanding fleet of unpiloted aircraft, whose use has proliferated throughout the military services over the past decade. HiMAT flight research was conducted jointly by NASA and the Air Force Flight Dynamics Laboratory at NASA Dryden between 1979 and 1983. The project began in 1973, and, in August 1975, Rockwell International was awarded a contract to construct two HiMAT vehicles based on advanced technologies applicable to future highly maneuverable fighter aircraft. Designed to provide a level of maneuverability that would enable a sustained 8-g turn at Mach 0.9 at an altitude of 25,000 feet, the HiMAT vehicles were approximately half the size of an F-16. Wingspan was about 16 feet, and length was 23.5 feet. A GE J85 turbojet producing 5,000 pounds of static thrust at sea level powered the vehicle, which could attain about Mach 1.4. Launched from the NASA B-52 carrier aircraft, the HiMAT weighed about 4,000 pounds, including 660 pounds of fuel. About 30 percent of the airframe consisted of experimental composite materials, mainly fiberglass and graphite epoxy. Rear-mounted swept wings, a digital flight control system, and controllable forward canards enabled exceptional maneuverability, with a turn radius about half that of a conventional piloted fighter. For example, at Mach 0.9 at 25,000 feet, the HiMAT could sustain an 8-g turn, while F-16 capability under the same conditions is about 4.5 g.[1292]
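The turn-radius comparison follows from the standard level-turn relation r = V²/(g·sqrt(n² − 1)). A quick check, assuming a standard-atmosphere speed of sound of about 1,016 ft/s at 25,000 feet:

```python
# Quick check of level-turn radius, r = V^2 / (g * sqrt(n^2 - 1)).
# Assumes a speed of sound of about 1,016 ft/s at 25,000 feet (standard atmosphere).
import math

def turn_radius_ft(mach, load_factor_g, speed_of_sound_ft_s=1016.0):
    v = mach * speed_of_sound_ft_s
    return v ** 2 / (32.174 * math.sqrt(load_factor_g ** 2 - 1.0))

for label, n in (("HiMAT at 8 g", 8.0), ("F-16 at 4.5 g", 4.5)):
    print(f"{label}: turn radius ~{turn_radius_ft(0.9, n):,.0f} ft")
```

With these assumptions the 8-g turn works out to roughly 3,300 feet of radius against roughly 5,900 feet at 4.5 g, consistent with the "about half" comparison above.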

Ground-based digital fly-by-wire control systems, developed at Dryden on programs such as the DFBW F-8, were vital to the success of the HiMAT remotely piloted research vehicle approach. NASA Ames Research Center and Dryden worked closely with Rockwell International in the design and development of the two HiMAT vehicles and their ground control system, rapidly bringing the test vehicles to flight status. Many tests that would have been required for a more conventional piloted research aircraft were eliminated, an approach made possible largely by extensive use of computational aerodynamic design tools developed at Ames. This resulted in drastic reductions in wind tunnel testing but made it necessary to devote several HiMAT flights to obtaining the stability and control data needed to refine the digital flight control system.[1212]

The HiMAT flight-test maneuver autopilot was based on a design developed by Teledyne Ryan Aeronautical, then a well-known manufacturer of target drones and remotely piloted aircraft. Teledyne also developed the backup flight control system.[1213] Refining the vehicle control laws was an extremely challenging task. Dryden engineers and test pilots evaluated the contractor-developed flight control laws in a ground simulation facility and then tested them in flight, making adjustments until the flight control system performed properly. The HiMAT flight-test maneuver autopilot provided precise, repeatable control, enabling large quantities of reliable test data to be quickly gathered. It proved to be a broadly applicable technique for use in future flight research programs.[1214]
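As a purely notional illustration of what a flight-test maneuver autopilot does (this is not the Teledyne Ryan design), a simple law that holds a commanded load factor, and so produces repeatable test points, might look like the following; gains and structure are illustrative assumptions:

```python
# Notional maneuver-autopilot sketch: hold a commanded load factor with a
# proportional-integral law. Gains, rates, and the toy "aircraft response" are
# illustrative assumptions, not the actual HiMAT or Teledyne Ryan control laws.
def maneuver_autopilot_step(n_cmd_g, n_meas_g, integral, dt, kp=0.8, ki=0.3):
    error = n_cmd_g - n_meas_g
    integral += error * dt
    pitch_command = kp * error + ki * integral  # normalized pitch command
    return pitch_command, integral

# Example: drive toward an 8-g command from 1 g over a few control frames.
integral, n = 0.0, 1.0
for _ in range(5):
    cmd, integral = maneuver_autopilot_step(8.0, n, integral, dt=0.02)
    n += 0.5 * cmd  # crude stand-in for the aircraft's response
```

The point of such a loop is repeatability: the same commanded profile yields the same flight condition, flight after flight, which is what made the data gathering so efficient.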

Launched from the NASA B-52 at 45,000 feet at Mach 0.68, the HiMAT vehicle was remotely controlled by a NASA research pilot in a ground station at Dryden, using control techniques similar to those in conventional aircraft. The flight control system used a ground-based computer interlinked with the HiMAT vehicle through an uplink and downlink telemetry system. The pilot used proportional stick and rudder inputs to command the computer in the primary flight control system. A television camera mounted in the cockpit provided visual cues to the pilot. A two-seat Lockheed TF-104G aircraft was used to chase each HiMAT mission. The TF-104G was equipped with remote control capability and could take control of the HiMAT vehicle if problems developed at the ground control site. A set of retractable skids was deployed for landing, which was accomplished on the dry lakebed adjacent to Dryden. Stopping distance was about 4,500 feet. During one of the HiMAT flight tests, a problem was encountered that resulted in a landing with the skids retracted. A timing change had been made in the ground-based HiMAT control system and in the onboard software that used the uplinked landing gear deployment command to extend the skids. Additionally, an onboard failure of one uplink receiver contributed to the anomaly. The timing change had been thoroughly tested with the onboard flight software. However, subsequent testing determined that the flight software operated differently when an uplink failure was present.[1215]
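The arrangement described here can be pictured as a command/telemetry loop: pilot inputs are processed by the ground computer each frame, commands go out on the uplink, and vehicle states return on the downlink. The sketch below illustrates only that structure; the names, rates, and gains are assumptions, not HiMAT parameters.

```python
# Notional sketch of a ground-based remotely piloted control loop: stick inputs
# feed a ground computer, commands go out over the uplink, vehicle states return
# on the downlink. Structure, names, and gains are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Downlink:
    pitch_rate_deg_s: float
    load_factor_g: float

def ground_control_frame(stick_pitch, stick_roll, telemetry: Downlink):
    """One frame of the ground computer: shape pilot inputs and add simple damping."""
    pitch_cmd = 0.5 * stick_pitch - 0.02 * telemetry.pitch_rate_deg_s
    roll_cmd = 0.5 * stick_roll
    return {"pitch_cmd": pitch_cmd, "roll_cmd": roll_cmd}  # sent on the uplink
```

The skid anomaly is a reminder of the weak point in any such loop: software verified under nominal link conditions can behave differently when part of the uplink has failed.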

HiMAT research also brought about advances in digital flight control systems used to monitor aircraft flight control surfaces and automatically reconfigure them to compensate for in-flight failures. HiMAT provided valuable information on a number of other advanced design features, including integrated computerized flight control systems, aeroelastic tailoring, close-coupled canards and winglets, new composite airframe materials, and a digital integrated propulsion control system. Most importantly, the complex interactions among this set of then-new technologies, applied together to enhance overall vehicle performance, were closely evaluated. The first HiMAT flight occurred July 27, 1979. The research program ended in January 1983, with the two vehicles completing a total of 26 flights, during which 11 hours of flying time were recorded.[1216] The two HiMAT research vehicles are today on exhibit at the NASA Ames Research Center and the Smithsonian Institution National Air and Space Museum.
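Control-surface reconfiguration of the kind mentioned above can be illustrated with a simple allocator: when a surface is flagged as failed, its share of the commanded moment is redistributed to the remaining surfaces in proportion to their effectiveness. This is a generic, notional sketch, not the HiMAT reconfiguration logic; the surface names and effectiveness values are made up for the example.

```python
# Generic, notional sketch of control-surface reconfiguration: redistribute a
# commanded rolling moment among healthy surfaces in proportion to their
# (assumed) effectiveness. Not the actual HiMAT logic.
def allocate_roll_moment(moment_cmd, surfaces):
    """surfaces maps name -> (effectiveness, failed_flag)."""
    healthy = {name: eff for name, (eff, failed) in surfaces.items() if not failed}
    total = sum(healthy.values())
    return {name: moment_cmd * eff / total for name, eff in healthy.items()}

example_surfaces = {
    "left_aileron": (1.0, True),   # flagged as failed in this example
    "right_aileron": (1.0, False),
    "left_spoiler": (0.6, False),
    "right_spoiler": (0.6, False),
}
print(allocate_roll_moment(1.0, example_surfaces))
```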