
Focusing on Fundamentals: The Supersonics Project

In January 2006, NASA Headquarters announced its restructured aeronautics mission. As explained by Associate Administrator for Aeronautics Lisa J. Porter, "NASA is returning to long-term investments in cutting-edge fundamental research in traditional aeronautical disciplines. . . appropriate to NASA's unique capabilities." One of the four new program areas announced was Fundamental Aeronautics (which included supersonic research), with Rich Wlezien as acting director.[524]

During May, NASA released more details on Fundamental Aeronautics, including plans for what was called the Supersonics Project, managed by Mary Jo Long-Davis with Peter Coen as its principal investigator. One of the project's major technical challenges was to accurately model the propagation of sonic booms from aircraft to the ground, incorporating all relevant physical phenomena. These included realistic atmospheric conditions and the effects of vibrations on structures and the people inside (for which most existing research involved military firing ranges and explosives). "The research goal is to model sonic boom impact as perceived both indoors and outdoors." Developing the propagation models would involve exploitation of existing databases and additional flight tests as necessary to validate the effects of molecular relaxation, rise time, and turbulence on the loudness of sonic booms.[525]

As the Supersonics Project evolved, it added aircraft concepts more challenging than an SSBJ to serve as longer-range targets on which to focus advanced research and technologies. These were a medium-sized (100-200 passenger) Mach 1.6-1.8 supersonic airliner that could have an acceptable sonic boom by about 2020 and an efficient multi-Mach aircraft that might have an acceptably low boom when flying at a speed somewhat below Mach 2 by the years 2030-2035. NASA awarded advanced concept studies for these in October 2008.[526] NASA also began working with Japan's Aerospace Exploration Agency (JAXA) on supersonic research, including sonic boom modeling.[527] Although NASA was not ready as yet to develop a new low-boom supersonic research airplane, it supported an application by Gulfstream to the Air Force that reserved the designation X-54A just in case this would be done in the future.[528]

Meanwhile, existing aircraft had continued to prove their value for sonic boom research. During 2005, the Dryden Center began applying a creative new flight technique called low-boom/no-boom to produce controlled booms. Ed Haering used PCBoom4 modeling in developing this concept, which Jim Smolka then refined into a flyable maneuver with flight tests over an extensive array of pressure sensors and microphones. The new technique allowed F-18s to generate shaped ("low boom") signatures as well as the evanescent sound waves ("no-boom") that remain after the refraction and absorption of shock waves generated at low Mach speeds (known as the Mach cutoff) before they reach the surface.

The basic low-boom/no-boom technique requires cruising just below Mach 1 at about 50,000 feet, rolling into an inverted position, diving at a 53-degree angle, keeping the aircraft's speed at Mach 1.1 during a portion of the dive, and pulling out to recover at about 32,000 feet. This flight profile took advantage of four attributes that contribute to reduced overpressures: a long propagation distance (the relatively high altitude of the dive), the weaker shock waves generated from the top of an aircraft (by diving while upside down), low airframe weight and volume (the relatively small size of an F-18), and a low Mach number. This technique allowed Dryden's F-18s, which normally generate overpressures of 1.5 psf in level flight, to produce overpressures under 0.1 psf. Using these maneuvers, Dryden's skilled test pilots could precisely place these focused quiet booms on specific locations, such as those with observers and sensors. Not only were the overpressures low, they had a slower rise time than the typical N-shaped sonic signature. The technique also resulted in systematic recordings of evanescent waves—the kind that sound merely like distant thunder.[529]

NASA F-15B No. 836 in flight with Quiet Spike, September 2006. NASA.

Dryden researchers used this technique in July 2007 during a test called House Variable Intensity Boom Effect on Structures (House VIBES). Following up on a similar test from the year before with an old (early 1960s) Edwards AFB house slated for demolition,[530] Langley engineers installed 112 sensors (a mix of accelerometers and microphones) inside the unoccupied half of a modern (late 1990s) duplex house. Other sensors were placed outside the house and on a nearby 35-foot tower. These measured pressures and vibrations from 12 normal-intensity N-shaped booms (up to 2.2 psf) created by F-18s in steady and level flight at Mach 1.25 and 32,000 feet as well as 31 shaped booms (registering only 0.1 to 0.7 psf) from F-18s using the low-boom/no-boom flight profile. The latter booms were similar to those that would be expected from an acceptable supersonic business jet. The specially instrumented F-15B No. 852 performed six flights, and an F-18A did one flight. Above the surface boundary layer, an instrumented L-23 sailplane from the Air Force Test Pilot School recorded shock waves at precise locations in the path of the focused booms to account for atmospheric effects. The data from the house sensors confirmed fewer vibrations and lower noise levels in the modern house than had been the case with the older house. At the same time, data gathered by the outdoor sensors added greatly to NASA's variable-intensity sonic boom database, which was expected to help program and validate sonic boom propagation codes for years to come, including more advanced three-dimensional versions of PCBoom.[531]

With the awakening of interest in an SSBJ, NASA Langley acoustics specialists including Brenda Sullivan and Kevin Shepherd had resumed an active program of studies and experiments on human and structural response to sonic booms. They upgraded the HSR-era simulator booth with an improved computer-controlled playback system, new loudspeakers, and other equipment to more accurately replicate the sound of various boom signatures, such as those recorded at Edwards. In 2005, they also added predicted boom shapes from several low-boom aircraft designs.[532] At the same time, Gulfstream created a new mobile sonic boom simulator to help demonstrate the difference between traditional and shaped sonic booms to a wider audience. Although Gulfstream's folded horn design could not reproduce the very low frequencies of Langley's simulator booth, it created a "traveling" pressure wave that moved past the listener and resonated with postboom noises, features that were judged more realistic than other simulators.

Under the aegis of the Supersonics Project, plans for additional simulation capabilities accelerated. Based on multiple studies that had long cited the more bothersome effects of booms experienced indoors, the Langley Center began in the summer of 2008 to build one of the most sophisticated sonic boom simulation systems yet. Scheduled for completion in early 2009, it would consist of a carefully constructed 12- by 14-foot room with sound and pressure systems that would replicate all the noises and vibrations caused by various levels and types of sonic booms.[533] Such studies would be vital if most concepts for supersonic business jets were ever to be realized. When the FAA updated its policy on supersonic noise certification in October 2008, it acknowledged the promising results of recent experiments but cautioned that any future changes in the rules against supersonic flight would still depend on public acceptance.[534]

NASA's Supersonics Project also put a new flight test on its agenda: the Lift and Nozzle Change Effects on Tail Shocks (LaNCETS). Both the SSBD and Quiet Spike experiments had only involved shock waves from the front of an aircraft. Yet shocks from the rear of an aircraft as well as jet engine exhaust plumes also contribute to sonic booms—especially the recompression phase of the typical N-wave signature—but have long been more difficult to control. NASA initiated the LaNCETS experiment to address this issue. As described in the Supersonics Project's original planning document, one of the metrics for LaNCETS was to "investigate control of aft shock structure using nozzle and/or lift tailoring with the goal of a 20% reduction in near-field tail shock strength."[535]

NASA Dryden had just the airplane with which to do this: F-15B No. 837. Originally built in 1973 as the Air Force's first preproduction TF-15A two-seat trainer (soon redesignated as the F-15B), it had been extensively modified for various experiments over its long lifespan. These included the Short Takeoff and Landing Maneuvering Technology Demonstration, the High-Stability Engine Control project, the Advanced Control Technology for Integrated Vehicles Experiment (ACTIVE), and Intelligent Flight Control Systems (IFCS). F-15B No. 837 had the following special features: digital fly-by-wire controls, canards ahead of the wings for changing longitudinal lift distribution, and thrust-vectoring variable area ratio nozzles on its twin jet engines that could (1) constrict and expand to change the shape of the exhaust plumes and (2) change the pitch and yaw of the exhaust flow.[536] It was planned to use these capabilities for validating computational tools developed at Langley, Ames, and Dryden to predict the interactions between shocks from the tail and exhaust under various lift and plume conditions.

Tim Moes, one of the Supersonics Project’s associate managers, was the LaNCETS project manager at the Dryden Center. Jim Smolka, who had flown most of F-15B No. 837’s previous missions at Dryden, was its test pilot. He and Nils Larson in F-15B No. 836 conducted Phase I of the test program with three missions from June 17-19, 2008. They gathered baseline measurements with 29 probes, all at 40,000 feet and speeds of Mach 1.2, 1.4, and 1.6.[537]

Several months before Phase II of LaNCETS, NASA specialists and affiliated researchers in the Supersonics Project announced significant progress in near-field simulation tools using the latest in computational fluid dynamics. They even reported having success as far out as 10 body lengths (a mid-field distance). As seven of these researchers claimed in August 2008, "[It] is reasonable to expect the expeditious development of an efficient sonic boom prediction methodology that will eventually become compatible with an optimization environment."[538] Of course, more data from flight-testing would increase the likelihood of fulfilling this prediction.

LaNCETS Phase II began on November 24, 2008, with nine missions flown by December 11. After being interrupted by a freak snowstorm during the third week of December and then having to break for the holiday season, the LaNCETS team completed the project with flight tests on January 12, 15, and 30, 2009. In all, Jim Smolka flew 13 missions in F-15B No. 837, 11 of which included in-flight shock wave measurements by No. 836 from distances of 100 to 500 feet. Nils Larson piloted the probing flights, with Jason Cudnik or Carrie Rhoades in the back seat. The aircrews tested the effects of both positive and negative canard trim at Mach 1.2, 1.4, and 1.6 as well as thrust vectoring at Mach 1.2 and 1.4. They also gathered supersonic data on plume effects with different nozzle areas and exit pressure ratios. Once again, GPS equipment recorded the exact locations of the two aircraft for each of the datasets. On January 30, 2009, with Jim Smolka at the controls for the last time, No. 837 made a final flight before its well-earned retirement.[539]

The large amount of data collected will be made available to industry and academia, in addition to NASA researchers at Langley, Ames, and Dryden. For the first time, analysts and engineers will be able to use actual flight test results to validate and improve CFD models on tail shocks and exhaust plumes—taking another step toward the design of a truly low-boom supersonic airplane.[540]

Induced Structural Resonances

The General Dynamics F-111A was the first production aircraft to use a self-adaptive flight control system. NASA.

Overall, electronic enhancements introduced significant challenges with respect to their practical incorporation in an airplane. Model-following systems required highly responsive servos and high gain levels for the feedback from the motion sensors (gyros and accelerometers) to the control surfaces. These high-feedback-gain requirements introduced serious issues regarding the aircraft structure. An aircraft structure is surprisingly vulnerable to induced frequencies, which, like a struck musical tuning fork, can result in resonant motions that may reach the naturally destructive frequency of the structure, breaking it apart. Rapid movement of a control surface could trigger a lightly damped oscillation of one of the structural modes of the airplane (first mode tail bending, for example). This structural oscillation could be detected by the flight control system sensors, resulting in further rapid movement of the control surface. The resulting structural/control surface oscillation could thus be sustained, or even amplified. These additive vibrations were typically at higher frequencies (5-30 Hz) than the limit-cycle described earlier (2-4 Hz), although some of the landing gear modes and wing bending modes for larger aircraft are typically below 5 Hz. Though seemingly esoteric, this phenomenon, called structural resonance, is profoundly serious.
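The character of this loop can be illustrated with a small numerical sketch: a lightly damped structural mode is closed through a feedback gain and a first-order actuator lag, and the closed-loop poles are examined as the gain is raised. The mode frequency, damping ratio, actuator bandwidth, and gain values below are invented for illustration only; the point is simply that beyond some critical gain the structural mode is no longer damped, which is the sustained or amplified oscillation just described.

```python
# Minimal sketch (illustrative values only): a lightly damped structural mode
# closed through a feedback gain and an actuator lag.  Raising the loop gain
# eventually drives the closed-loop structural poles unstable.
import numpy as np

f_mode = 12.0                  # assumed structural mode frequency, Hz
zeta = 0.02                    # assumed structural damping ratio
wn = 2.0 * np.pi * f_mode
wa = 2.0 * np.pi * 8.0         # assumed actuator/servo bandwidth, rad/s

def closed_loop_poles(k):
    # Loop: gain k -> actuator lag wa/(s+wa) -> mode wn^2/(s^2 + 2*zeta*wn*s + wn^2)
    # Closed-loop characteristic: (s + wa)(s^2 + 2*zeta*wn*s + wn^2) + k*wa*wn^2 = 0
    poly = np.polymul([1.0, wa], [1.0, 2.0 * zeta * wn, wn**2])
    poly[-1] += k * wa * wn**2
    return np.roots(poly)

for k in [0.0, 0.05, 0.10, 0.20]:
    worst = max(p.real for p in closed_loop_poles(k))
    status = "damped" if worst < 0 else "UNSTABLE (oscillation sustained)"
    print(f"loop gain {k:4.2f}: largest pole real part {worst:8.3f} -> {status}")
```

Running the sketch shows the structural mode crossing from damped to undamped as the gain passes a critical value, which is exactly the behavior the ground resonance tests described below were designed to find before flight.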

Even the stiff and dense X-15 encountered serious structural resonance effects. Ground tests had uncovered a potential resonance between the pitch control system and a landing gear structural mode. Initially, researchers concluded that the effect was related to the ground-test equipment and its setup, and thus would not occur in flight. However, after several successful landings, the X-15 did experience a high-frequency vibration upon one touchdown. Additionally, a second and more severe structural resonance occurred at 13 Hz (coincident with the horizontal tail bending mode) during one entry from high altitude by the third X-15 outfitted with the MH-96 adaptive flight control system.[693] The pilot would first note a rumbling vibration that swiftly became louder. As the structure resonated, the vibrations were transmitted to the gyros in the flight control system, which attempted to "correct" for them but actually fed them instead. They were so severe that the pilot could not read the cockpit instruments and had to disengage the pitch damper in order to stop them. As a consequence, a 13 Hz "notch" filter was installed in the electrical feedback path to reduce the gain at the observed structural frequency. Thereafter, the third X-15 flew far more predictably.[694]

Structural resonance problems are further complicated by the fact that the predicted structural frequencies are often in error; thus, the flight control designers cannot accurately anticipate the proper filters for the sensors. Further, structural resonance is related to a structural feedback path, not an aerodynamic one as described for limit-cycles. As a precaution, ground vibration tests (GVT) are usually conducted on a new airplane to accurately determine the actual structural mode frequencies of the airplane.[695] Researchers attach electrically driven and controlled actuators to various locations on the airplane and perform a small-amplitude frequency "sweep" of the structure, essentially a "shake test." Accelerometers at strategic locations on the airplane detect and record the structural response. Though this results in a more accurate determination of the actual structural frequencies for the control system designer, it still does not identify the structural path to the control system sensors.
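In simplified form, the identification step of such a shake test looks like the sketch below: a lightly damped second-order "structure" is excited with a slow sine sweep, and the frequency response estimated from the input and output records peaks at the modal frequency. The 8 Hz mode, 2 percent damping, sweep range, and sample rate are all invented illustrative values, not data from any actual GVT.

```python
# Minimal sketch of a ground-vibration-test style frequency sweep:
# excite a lightly damped mode with a chirp and locate the response peak.
# All numerical values are illustrative assumptions.
import numpy as np
from scipy import signal

fs = 500.0                        # sample rate, Hz
t = np.arange(0, 120, 1 / fs)     # 2-minute sweep record

f_mode = 8.0                      # assumed structural mode frequency, Hz
zeta = 0.02                       # assumed damping ratio (lightly damped)
wn = 2 * np.pi * f_mode
structure = signal.TransferFunction([wn**2], [1, 2 * zeta * wn, wn**2])

# Slow sine sweep from 2 Hz to 20 Hz applied by the shaker
u = signal.chirp(t, f0=2.0, f1=20.0, t1=t[-1], method='linear')
_, y, _ = signal.lsim(structure, U=u, T=t)   # accelerometer-like response

# H1 frequency-response estimate: cross-spectrum over input auto-spectrum
f, Puu = signal.welch(u, fs=fs, nperseg=4096)
_, Puy = signal.csd(u, y, fs=fs, nperseg=4096)
H = Puy / Puu

band = (f > 2.0) & (f < 20.0)
f_peak = f[band][np.argmax(np.abs(H[band]))]
print(f"Identified mode near {f_peak:.2f} Hz (true value {f_mode:.2f} Hz)")
```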

The flight control resonance characteristics can be duplicated on the ground by placing the airplane on a soft mounting structure (airbags, or deflated tires and struts) and then artificially raising the electrical gain in each flight control closed loop until a vibration is observed. Based on its experience with ground- and flight-testing of research airplanes, NASA DFRC established a ground rule that the flight gains could only be allowed to reach one-half of the gain that triggered a resonance (a gain margin of 2.0). This rule-of-thumb ground rule has been generally accepted within the aircraft industry, and ground tests to establish resonance gain margins are performed prior to first flights. If insufficient gain margin is present, the solution is sometimes a relocation of a sensor, or a stiffening of the sensor mounting structure. For most cases, the solution is the placement of an electronic notch filter within the control loop to reduce the system gain at the identified structural frequency. Many times the followup ground test identifies a second resonant frequency for a different structural mode that was masked during the first test. A typical notch filter will lower the gain at the selected notch frequency as desired but will also introduce additional lag at nearby frequencies. The additional lag will result in a lowering of the limit-cycle boundaries. The control system designer is thus faced with the task of reducing the gain at one structural frequency while minimizing any increase in the lag characteristics at the limit-cycle frequency (typically 2-4 Hz). This challenge resulted in the creation of lead-lag filters to minimize the additional lag in the system when notch filters were required to avoid structural resonance.[696]
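The gain-versus-lag tradeoff can be seen in a short sketch: a simple second-order digital notch filter centered at 13 Hz (the X-15 tail-bending frequency cited above) is designed and its response evaluated at the notch and near a representative 3 Hz limit-cycle frequency. This is a minimal sketch only; the sample rate, the quality factor, and the use of scipy.signal.iirnotch are illustrative assumptions, not a reconstruction of the actual X-15 or DFRC filter designs.

```python
# Minimal sketch: a 13 Hz notch filter and the phase lag it adds near the
# 2-4 Hz limit-cycle band.  Parameter values are illustrative only.
import numpy as np
from scipy import signal

fs = 200.0          # assumed sample rate of the digital implementation, Hz
f_notch = 13.0      # structural mode frequency to be suppressed, Hz
Q = 5.0             # assumed notch quality factor (sets the notch width)

# Second-order IIR notch filter
b, a = signal.iirnotch(f_notch, Q, fs=fs)

# Evaluate the response at the limit-cycle and structural frequencies
freqs = np.array([3.0, 13.0])                       # Hz
w, h = signal.freqz(b, a, worN=2 * np.pi * freqs / fs)

for f, resp in zip(freqs, h):
    gain_db = 20 * np.log10(max(abs(resp), 1e-12))  # guard against log(0)
    phase_deg = np.degrees(np.angle(resp))
    print(f"{f:5.1f} Hz: gain {gain_db:8.2f} dB, phase {phase_deg:7.2f} deg")
```

Raising Q narrows the notch and shrinks the lag penalty at 3 Hz, at the cost of needing a more accurate estimate of the structural frequency; that is the same tension the lead-lag filters mentioned above were meant to ease.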

Fighter aircraft usually are designed for 7-9 g load factors and, as a consequence, their structures are quite stiff, exhibiting high natural frequencies. Larger transport and reconnaissance airplanes are designed for much lower load factors, and their structures are more limber. Since structural frequencies are often only slightly above the natural aerodynamic frequencies—as well as the limit-cycle frequencies—of the airplane, this poses a challenge for the flight control system designer, who is trying to aggressively control the aerodynamic characteristics, avoid limit-cycles, and avoid any control system response at the structural mode frequencies.

Structural mode interactions can occur across a range of flight activities. For example, Rockwell and Air Force testers detected a resonant vibration of the horizontal stabilizer during early taxi tests of the B-1 bomber. It was traced to a landing gear structural mode, and a notch filter was installed to reduce the flight control gain at that frequency. The ground test for resonance is fairly simple, but the structural modes that need to be tested can produce a fairly large matrix of test conditions. External stores and fuel loadings can alter the structural frequencies of the airplane and thus change the control system feedback characteristics.[697] The frequency of the wing torsion mode of the General Dynamics YF-16 Lightweight Fighter (the prototype of the F-16A Fighting Falcon) was dramatically altered when AIM-9 Sidewinder air-to-air missiles were mounted at the wingtip.

Aircraft frequency spectrum for flight control system design. USAF.

The transformed dynamics of the installed missiles induced a serious aileron/wing-twist vibration at 6 Hz (coincident with the wing torsion mode), a motion that could also be classified as flutter, but in this case was obviously driven by the flight control system. Again, the solution was the installation of a notch filter to reduce the aileron response at 6 Hz.[698]

NASA researchers at the Dryden Flight Research Center had an unpleasant encounter with structural mode resonance during the Northrop-NASA HL-10 lifting body flight-test program. After an aborted launch attempt, the NB-52B mother ship was returning with the HL-10 still mounted under the wing pylon. When the HL-10 pilot initiated propellant jettison, the launch airplane immediately experienced a violent vibration of the launch pylon attaching the lifting body to the NB-52B. The pilot stopped jettisoning and turned the flight control system off, whereupon the vibration stopped. The solution to this problem was strictly a change in operational procedure—in the future, the control system was to be disengaged before jettisoning while in captive flight.[699]

Dutch Roll Coupling

Dutch roll coupling is another case of a dynamic loss of control of an airplane because of an unusual combination of lateral-directional static stability characteristics. Dutch roll coupling is a more subtle but nevertheless potentially violent motion, one that (again quoting Day) is a "dynamic lateral-directional stability of the stability axis. This coupling of body axis yaw and roll moments with sideslip can produce lateral-directional instability or PIO."[744] A typical airplane design includes "static directional stability" produced by a vertical fin, and a small amount of "dihedral effect" (roll produced by sideslip). Dihedral effect is created by designing the wing with actual dihedral (wingtips higher than the wing root), wing sweep (wingtips aft of the wing root), or some combination of the two. Generally, static directional stability and normal dihedral effect are both stabilizing, and both contribute to a stable Dutch roll mode (first named for the lateral-directional motions of smooth-bottom Dutch coastal craft, which tend to roll and yaw in disturbed seas). When the interactive effects of other surfaces of an airplane are introduced, there can be potential regions of the flight envelope where these two contributions to Dutch roll stability are not stabilizing (i.e., regions of negative static directional stability or negative dihedral effect). In these regions, if the negative effect is smaller than the positive influence of the other, then the airplane will exhibit an oscillatory roll-yaw motion. (If both effects are negative, the airplane will show a static divergence in both the roll and yaw axes.) All aircraft that are statically stable exhibit some amount of Dutch roll motion. Most are well damped, and the Dutch roll only becomes apparent in turbulent conditions.
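The roll-yaw interplay can be made concrete with a textbook-style sketch: a linearized lateral-directional model (states of sideslip, roll rate, yaw rate, and bank angle) is assembled and its eigenvalues examined for stabilizing and destabilizing combinations of directional stability and dihedral effect. All of the stability derivatives below are invented, representative numbers rather than values for any particular airplane; the point is only that the character of the Dutch roll mode follows from the signs and relative sizes of these two contributions.

```python
# Minimal sketch: eigenvalues of a simplified linearized lateral-directional
# model.  A complex-conjugate eigenvalue pair is the Dutch roll mode; its real
# part indicates whether the mode is damped.  All derivatives are illustrative.
import numpy as np

def lateral_modes(N_beta, L_beta):
    """Eigenvalues for given directional stability (N_beta) and dihedral effect (L_beta)."""
    V = 200.0                 # airspeed, m/s (assumed)
    g = 9.81
    Y_beta = -30.0            # side-force derivative (assumed)
    L_p, L_r = -2.0, 0.5      # roll damping / cross derivative (assumed)
    N_p, N_r = -0.1, -0.4     # yaw cross derivative / yaw damping (assumed)

    # State vector: [beta (sideslip), p (roll rate), r (yaw rate), phi (bank angle)]
    A = np.array([
        [Y_beta / V, 0.0, -1.0, g / V],   # sideslip equation
        [L_beta,     L_p,  L_r, 0.0  ],   # rolling-moment equation
        [N_beta,     N_p,  N_r, 0.0  ],   # yawing-moment equation
        [0.0,        1.0,  0.0, 0.0  ],   # bank angle kinematics
    ])
    return np.linalg.eigvals(A)

# Stable directional stability with normal (stabilizing, negative) dihedral
# effect, then the same airplane with the sign of the dihedral effect reversed.
for label, L_beta in [("normal dihedral effect", -8.0),
                      ("negative dihedral effect", +8.0)]:
    print(label, np.round(lateral_modes(N_beta=4.0, L_beta=L_beta), 3))
```

Whether the reversed-dihedral case remains a damped oscillation or turns divergent depends on the relative magnitudes of the two derivatives, which is exactly the balance the X-15 team had to weigh in the episode described below.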

The Douglas DC-3 airliner (equivalent to the military C-47 airlifter) had a persistent Dutch roll that could be discerned by passengers watching the wingtips as they described a slow horizontal "figure eight" with respect to the horizon.

Even the dart-like X-15 manifested Dutch roll characteristics. The very large vertical tail configuration of the X-15 was established by the need to control the airplane near engine burnout if the rocket engine was misaligned, a "lesson learned" from tests of earlier rocket-powered aircraft such as the X-1, X-2, and D-558-2. This led to a large symmetrical vertical tail with large rudder surfaces both above and below the airplane centerline. (The rocket engine mechanics and engineers at Edwards later devised a method for accurately aligning the engine, so that the large rudder control surfaces were no longer needed.) The X-15 simulator accurately predicted a strange Dutch roll characteristic in the Mach 3-4 region at angles of attack above 8 degrees with the roll and yaw dampers off. This Dutch roll mode was oscillatory and stable without pilot inputs but would rapidly diverge into an uncontrollable pilot-induced oscillation when pilot control inputs were introduced.

During wind tunnel tests after the airplane was constructed, it was discovered that the lower segment of the vertical tail, which was operating in a high compression flow field at hypersonic speeds, was highly effective at reentry angles of attack. The resulting rolling motions produced by the lower fin and rudder were contributing a large negative dihedral effect. Fortunately, this destabilizing influence was not enough to overpower the high directional stability produced by the very large vertical tail area, so the Dutch roll mode remained oscillatory and stable. The airplane motions associated with this stable oscillation were completely foreign to the test pilots, however. Whereas a normal Dutch roll is described as "like a marble rolling inside a barrel," NASA test pilot Joe Walker described the X-15 Dutch roll as "like a marble rolling on the outside of the barrel" because the phase relationship between rolling and yawing was reversed. Normal pilot aileron inputs to maintain the wings level were out of phase and actually drove the oscillation to larger magnitudes rather quickly. The roll damper, operating at high gain, was fairly effective at damping the oscillation, thus minimizing the pilot's need to actively control the motion when the roll damper was on.[745]

Because the X-15 roll damper was a single-string system (fail-safe), a roll damper failure above about 200,000 feet altitude would have caused the entry to be uncontrollable by the pilot. The X-15 envelope expansion to altitudes above 200,000 feet was delayed until this problem could be resolved. The flight control team proposed installing a backup roll damper, while members of the aerodynamic team proposed removing the lower ventral rudder. Removing the rudder was expected to reduce the directional stability but also would cause the dihedral effect to be stable, making the overall Dutch roll stability more like that of a normal airplane. The Air Force-NASA team pursued both options. Installation of the backup roll damper allowed the altitude envelope to be expanded to the design value of 250,000 feet. The removal of the lower rudder, however, solved the PIO problem completely, and all subsequent flights, after the initial ventral-off demonstration flights, were conducted without the lower rudder.[746]

The incident described above was unique to the X-15 configuration, but the analysis and resolution of the problem are instructive in that they offer a prudent caution to designers and engineers to avoid designs that exhibit negative dihedral effect.[747]

Navier-Stokes CFD Solutions

As described earlier in this article, the Navier-Stokes equations are the full equations that govern a viscous flow. Solutions of the Navier-Stokes equations are the ultimate in fluid dynamics. To date, no general analytical solutions of these highly nonlinear equations have been obtained. Yet they are the equations that reflect the real world of fluid dynamics. The only way to obtain useful solutions for the Navier-Stokes equations is by means of CFD. And even here such solutions have been slow in coming. The problem has been the very fine grids that are necessary to define certain regions of a viscous flow (in boundary layers, shear layers, separated flows, etc.), thus demanding huge numbers of grid points in the flow field. Practical solutions of the Navier-Stokes equations had to wait for supercomputers such as the Cray X-MP and Cyber 205 to come on the scene. NASA became a recognized and emulated leader in CFD solutions of Navier-Stokes equations, its professionalism evident by its having established the Institute for Computer Applications in Science and Engineering (ICASE) at Langley Research Center, though other Centers as well, particularly Ames, shared this interest in burgeoning CFD.[778] In particular, NASA researcher Robert MacCormack was responsible for the development of a Navier-Stokes CFD code that, by far, became the most popular and most widely used Navier-Stokes CFD algorithm in the last quarter of the 20th century. MacCormack, an applied mathematician at NASA Ames (and now a professor at Stanford), conceived a straightforward algorithm for the solution of the Navier-Stokes equations, simply identified everywhere as "MacCormack's method."

To understand the significance of MacCormack's method, one must understand the concept of numerical accuracy. Whenever the derivatives in a partial differential equation are replaced by algebraic difference quotients, there is always a truncation error that introduces a degree of inaccuracy in the numerical calculations. The simplest finite differences, usually involving only two distinct grid points in their formulation, are identified as "first-order" accurate (the least accurate formulation). The next step up, using a more sophisticated finite difference reaching to three grid points, is identified as second-order accurate. For the numerical solution of most fluid flow problems, first-order accuracy is not sufficient; not only is the accuracy compromised, but such algorithms frequently blow up on the computer. (The author's experience, however, has shown that second-order accuracy is usually sufficient for many types of flows.) On the other hand, some of the early second-order algorithms required a large computational effort to obtain this second-order accuracy, requiring many pages of paper to write the algorithm and a lot of computations to execute the solution. MacCormack developed a predictor-corrector two-step scheme that was second-order accurate but required much less effort to program and many fewer calculations to execute. He introduced this scheme in an imaginative paper on hypervelocity impact cratering published in 1969.[779]
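The flavor of the predictor-corrector idea is easy to convey with a toy example. The sketch below applies MacCormack's two-step scheme to the one-dimensional linear advection equation, a stand-in for the far more elaborate Navier-Stokes systems the method was built for; the grid size, wave speed, and initial profile are arbitrary illustrative choices.

```python
# Minimal sketch of MacCormack's predictor-corrector scheme applied to the
# 1-D linear advection equation u_t + a*u_x = 0 with periodic boundaries.
# The scheme is second-order accurate; all numerical parameters here are
# illustrative, not taken from any NASA application.
import numpy as np

a = 1.0                               # advection speed
nx = 200
dx = 1.0 / nx
dt = 0.5 * dx / a                     # CFL number of 0.5
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.5) ** 2)   # smooth initial pulse

def flux(u):
    return a * u

for _ in range(400):                  # march forward in time
    f = flux(u)
    # Predictor: forward difference of the flux
    u_star = u - dt / dx * (np.roll(f, -1) - f)
    f_star = flux(u_star)
    # Corrector: backward difference of the predicted flux, then average
    u = 0.5 * (u + u_star - dt / dx * (f_star - np.roll(f_star, 1)))

print("peak of advected pulse now near x =", x[np.argmax(u)])
```

For a linear flux the two sweeps reduce to a Lax-Wendroff-type result; the appeal of the method for the Navier-Stokes equations was that the same simple forward and backward sweeps carry over directly to the full nonlinear, viscous flux terms.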

MacCormack's method broke open the field of Navier-Stokes solutions, allowing calculation of myriad viscous flow problems, beginning in the 1970s and continuing to the present time, and it was as well (in this author's opinion) the most "graduate-student friendly" CFD scheme in existence. Many graduate students have cut their CFD teeth on this method and have been able to solve many viscous flow problems that they otherwise could not have attempted. Today, MacCormack's method has been supplanted by several very sophisticated modern CFD algorithms, but even so, MacCormack's method goes down in history as one of NASA's finest contributions to the aeronautical sciences.

Goddard Space Flight Center

Goddard Space Flight Center was established in 1959, absorbing the U. S. Navy Vanguard satellite project and, with it, the mission of developing, launching, and tracking unpiloted satellites. Since that time, its roles and responsibilities have expanded to encompass space science, Earth observation from space, and unpiloted satellite systems more broadly.

Structural analysis problems studied at Goddard included definition of operating environments and loads applicable to vehicles, subsystems, and payloads; modeling and analysis of complete launch vehicle/payload systems (generic and for specific planned missions); thermally induced loads and deformation; and problems associated with lightweight, deployable structures such as antennas. Control-structural interactions and multibody dynamics are other related areas of interest.

Goddard's greatest contribution to computer structural analysis was, of course, the NASTRAN program. With public release of NASTRAN, management responsibility shifted to Langley. However, Goddard remained extremely active in the early application of NASTRAN to practical problems, in the evaluation of NASTRAN, and in the ongoing improvement and addition of new capabilities to NASTRAN: thermal analysis (part of a larger Structural-Thermal-Optical [STOP] program, which is discussed below), hydroelastic analysis, automated cyclic symmetry, and substructuring techniques, to name a few.[885]

Structural-Thermal-Optical analysis predicts the impact on the performance of a (typically satellite-based) sensor system due to the deformation of the sensors and their supporting structure(s) under thermal and mechanical loads. After NASTRAN was developed, a major effort began at GSFC to achieve better integration of the thermal and optical analysis components with NASTRAN as the structural analysis component. The first major product of this effort was the NASTRAN Thermal Analyzer. The program was based on NASTRAN and thereby inherited a great deal of modeling capability and flexibility. But, most importantly, the resulting inputs and outputs were fully compatible with NASTRAN: "Prior to the existence of the NASTRAN Thermal Analyzer, available general purpose thermal analysis computer programs were designed on the basis of the lumped-node thermal balance method. . . . They were not only limited in capacity but seriously handicapped by incompatibilities arising from the model representations [lumped-node versus finite-element]. The intermodal transfer of temperature data was found to necessitate extensive interpolation and extrapolation. This extra work proved not only a tedious and time-consuming process but also resulted in compromised solution accuracy. To minimize such an interface obstacle, the STOP project undertook the development of a general purpose finite-element heat transfer computer program."[886] The capability was developed by the MacNeal Schwendler Corporation under subcontract from Bell Aerospace. "It must be stressed, however, that a cooperative financial and technical effort between [Goddard and Langley] made possible the emergence of this capability."[887]

Another element of the STOP effort was the computation of "view factors" for radiation between elements: "In an in-house STOP project effort, GSFC has developed an IBM-360 program named 'VIEW' which computes the view factors and the required exchange coefficients between radiating boundary elements."[888] VIEW was based on an earlier view factor program, RAVFAC, but was modified principally for compatibility with NASTRAN and eventual incorporation as a subroutine in NASTRAN.[889] STOP is still an important part of the analysis of many of the satellite packages that Goddard manages, and work continues toward better performance with complex models, multidisciplinary design, and optimization capability, as well as analysis.
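As a simplified illustration of what a view factor computation involves, the sketch below uses Monte Carlo ray casting to estimate the fraction of diffuse radiation leaving one unit square that arrives at a parallel unit square directly above it. This is only a toy stand-in for the kind of geometry VIEW handled; the surface arrangement and sample count are arbitrary assumptions, and the actual VIEW/RAVFAC algorithms are not reproduced here.

```python
# Minimal Monte Carlo sketch of a radiation view factor: the fraction of
# diffusely emitted rays from surface 1 (unit square at z = 0) that strike
# surface 2 (an identical unit square at z = h, directly above it).
# Geometry and sample count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_rays = 200_000
h = 1.0                      # separation between the two parallel squares

# Emission points, uniform over surface 1
x0 = rng.random(n_rays)
y0 = rng.random(n_rays)

# Cosine-weighted (diffuse, Lambertian) emission directions over the hemisphere
u1, u2 = rng.random(n_rays), rng.random(n_rays)
r = np.sqrt(u1)
phi = 2.0 * np.pi * u2
dx, dy, dz = r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)

# Where each ray pierces the plane z = h
t = h / dz
xh = x0 + t * dx
yh = y0 + t * dy

hits = (xh >= 0.0) & (xh <= 1.0) & (yh >= 0.0) & (yh <= 1.0)
print("Estimated view factor F_1->2 ~", hits.mean())
```

In a real spacecraft geometry, the need for tight accuracy, checks for blockage by intervening surfaces, and the requirement to feed radiation exchange coefficients back into the NASTRAN thermal model are what made a dedicated, NASTRAN-compatible program worthwhile.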

COmposite Blade STRuctural ANalyzer (COBSTRAN, Glenn, 1989)

COBSTRAN was a preprocessor for NASTRAN, designed to generate finite element models of composite blades. While developed specifically for advanced turboprop blades under the Advanced Turboprop (ATP) project, it was subsequently applied to compressor blades and turbine blades. It could be used with both COSMIC NASTRAN and MSC/NASTRAN, and was subsequently extended to work as a preprocessor for the MARC nonlinear finite element code.[984]

BLAde SIMulation (BLASIM), 1992

BLASIM calculates dynamic characteristics of engine blades before and after an ice impact event. BLASIM could accept input geometry in the form of airfoil coordinates or as a NASTRAN-format finite element model. BLASIM could also utilize the ICAN program (discussed separately) to generate ply properties of composite blades.[985] "The ice impacts the leading edge of the blade causing severe local damage. The local structural response of the blade due to the ice impact is predicted via a transient response analysis by modeling only a local patch around the impact region. After ice impact, the global geometry of the blade is updated using deformations of the local patch and a free vibration analysis is performed. The effects of ice impact location, ice size and ice velocity on the blade mode shapes and natural frequencies are investigated."[986]

Blade Fabrication

The fabrication of turbine blades represents a related topic. No blade has indefinite life, for blades are highly stressed and must resist creep while operating under continuous high temperatures. Table 3 is taken from the journal Metallurgia and summarizes the stress to cause rupture in both wrought and investment-cast nickel-base superalloys.[1092]

TABLE 3: STRESS TO CAUSE FAILURE OF VARIOUS ALLOYS

Type of alloy              Stress to cause failure after:
                           100 hours at 1400 °F, MPa    50 hours at 1750 °F, MPa

Wrought alloys:
  Nimonic 80               340                          48
  Nimonic 105              494                          127
  Nimonic 115              571                          201

Investment-cast alloys:
  IN 100                   648                          278
  B1914                    756                          262
  Mar-M246                 756                          309

An important development involved the introduction of directionally solidified (d. s.) castings. Their advent, into military engines in 1969 and commercial engines in 1974, brought significant increases in allowable metal temperatures and rotor speeds. D. s. blades and vanes were fabricated by pouring molten superalloy into a ceramic mold seated on a water-cooled copper chill plate. Grains nucleate on the chill surface and grow in a columnar manner parallel to a temperature gradient. These columnar grains fill the mold and solidify to form the casting.[1093]

A further development involved single-crystal blades. More was required here than development of a solidification technique; it was necessary to consider as well the entire superalloy. It was to achieve a high melting temperature by containing no grain boundary-strengthening elements such as boron, carbon, hafnium, and zirconium. It would achieve high creep strength with a high gamma-prime temperature. A high temperature for solution heat treatment would also provide improved properties.

The specialized Alloy 454 had the best properties. It showed a complete absence of all grain boundary-strengthening elements and made significant use of tantalum, which suppressed a serious casting defect known as "freckling." Chromium and aluminum were included to protect against oxidation and hot corrosion. It had a composition of 12Ta+4W+10Cr+5Al+1.5Ti+5Co, balance Ni.

Single-crystal blades were fabricated using a variant of the cited d. s. arrangement. Instead of having the ceramic mold rest directly on the chill plate, it was separated from this plate by a helical single-crystal selector. A number of grains nucleated at the bottom of the selector, but most of them had their growth cut off by its walls, and only one grain emerged at the top. This grain was then allowed to grow and fill the entire mold cavity.

A hypersonic scramjet configuration developed by Langley experts in the 1970s. The sharply swept double-delta layout set the stage for the National Aero-Space Plane program. NASA.

Creep-rupture tests showed that Alloy 454 had a temperature advantage of 75 to 125 °F over d. s. MAR-M200 + Hf, the strongest production-blade alloy. A 75 °F improvement in metal temperature capability corresponds to a threefold improvement in life. Single-crystal Alloy 454 thus was chosen as the material for the first-stage turbine blades of the JT9D-7R4 series of engines that were to power the Boeing 767 and the Airbus A310 aircraft, with engine certification and initial production shipments occurring in July 1980.[1094]

DFBW F-8: An Appreciation

The NASA DFBW F-8 had conclusively proven that a highly redundant digital flight control system could be successfully implemented and all aspects of its design validated.[1165] During the course of the program, the DFBW F-8 demonstrated the ability to be upgraded to take advantage of emerging state-of-the-art technologies or to meet evolving operational requirements. It proved that digital fly-by-wire flight control systems could be adapted to the new design and employment concepts that were evolving in both the military and industry at the time. Perhaps the best testimony to the unique accomplishments of the F-8 DFBW aircraft and its NASA flight-test team is encapsulated in the following observations of former NASA Dryden director Ken Szalai:

DFBW systems are 'old hat' today, but in 1972, only Apollo astronauts had put their life and missions into the hands of software engineers. We considered the F-8 DFBW a very high risk in 1972. That fact was driven home to us in the control room when we asked the EAFB [Edwards Air Force Base] tower to close the airfield, as was preplanned with the USAF, for first flight. It was the first time this 30-year-old FCS [Flight Control System] engineer had heard that particular radio call. . . . The project was both a pioneering effort for the technology and a key enabler for extraordinary leaps in aircraft performance, survivability, and superiority. The basic architecture has been used in numerous production systems, and many of the F-8 fault detection and fault handling/recovery technology elements have become 'standard equipment.' . . . In the total flight program, no software error/fault ever occurred in the operational software, synchronization was never lost in hundreds of millions of sync cycles, it was never required to transfer to the analog FBW backup system, there were zero nuisance channel failures in all the years of flying, and many NASA and visiting guest pilots easily flew the aircraft, including Phil Oestricher before the first YF-16 flight.[1166]

In retrospect, the NASA DFBW F-8C is of exceptional interest in the history of aeronautics. It was the first aircraft to fly with a digital fly-by-wire flight control system, and it was also the first aircraft to fly without any mechanical backup flight controls. Flown by Ed Schneider, the DFBW F-8 made its last flight December 16, 1985, completing 211 flights. The aircraft is now on display at the NASA Dryden Flight Research Center. Its sustained record of success over a 13-year period provided a high degree of confidence in the use of digital computers in fly-by-wire flight control systems. The DFBW F-8C also paved the way for a number of other significant NASA, Air Force, and foreign research programs that would further explore and expand the application of digital computers to modern flight control systems, providing greatly improved aircraft performance and enhanced flight safety.

French Mirage NG

Although not intended purely as a research aircraft, the French Dassault Mirage 3NG (Nouvelle Generation) was a greatly modified Mirage IIIE single-engine jet fighter that was used to demonstrate the improved air combat performance advantages made possible using relaxed static stability and fly-by-wire. One prototype was built by Dassault; modifications included destabilizing canards, extended wing root leading edges, an analog-computer-controlled fly-by-wire flight control system (based on that used in the production Mirage 2000 fighter), and the improved Atar 9K-50 engine. The Mirage 3NG first flew in December 1982, demonstrating significant performance improvements over the standard operational Mirage IIIE. These were claimed to include a 20-25-percent reduction in takeoff distance, a 40-percent improvement in time to reach combat altitude, a nearly 10,000-foot increase in supersonic ceiling, and similarly impressive gains in acceleration, instantaneous turn rate, and combat air patrol time.

Noise Pollution Forces Engine Improvements

Fast-forward a few years, to a time when Americans embraced the promise that technology would solve the world's problems, raced the Soviet Union to the Moon, and looked forward to owning personal family hovercraft, just like they saw on the TV show The Jetsons. And during that same decade of the 1960s, the American public became more and more comfortable flying aboard commercial airliners equipped with the modern marvel of turbojet engines. Boeing 707s and McDonnell-Douglas DC-8s, each with four engines bolted to their wings, were not only a common sight in the skies over major cities, but their presence could also easily be heard by anyone living next to or near where the planes took off and landed. Boeing 727s and 737s soon followed. At the same time that commercial aviation exploded, people moved away from the metropolis to embrace the suburban lifestyle. Neighborhoods began to spring up immediately adjacent to airports that originally were built far from the city, and the new neighbors didn't like the sound of what they were hearing.[1295]

By 1966, the problem of aircraft noise pollution had grown to the point of attracting the attention of President Lyndon Johnson, who then directed the U. S. Office of Science and Technology to set a new national policy that said:

The FAA and/or NASA, using qualified contractors as necessary, (should) establish and fund. . . an urgent program for conducting the physical, psycho-acoustical, sociological, and other research results needed to provide the basis for quantitative noise evaluation techniques which can be used. . . for hardware and operational specifications.[1296]

As a result, NASA began dedicating resources to aggressively address aircraft noise and sought to contract much of the work to industry, with the goals of advancing technology and conducting research to provide lawmakers with the information they needed to make informed regulatory decisions.[1297]

During 1968, the Federal Aviation Administration (FAA) was given authority to implement aircraft noise standards for the airline industry. Within a year, the new standards were adopted and called for all new designs of subsonic jet aircraft to meet certain criteria. Aircraft that met these standards were called Stage 2 aircraft, while the older planes that did not meet the standards were called Stage 1 aircraft. Stage 1 aircraft over 75,000 pounds were banned from flying to or from U. S. airports as of January 1, 1985. The cycle repeated itself with the establishment of Stage 3 aircraft in 1977, with Stage 2 aircraft needing to be phased out by the end of 1999. (Some of the Stage 2 aircraft engines were modified to meet Stage 3 aircraft standards.) In 2005, the FAA adopted an even stricter noise standard, which is Stage 4. All new aircraft designs submitted to the FAA on or after July 5, 2005, must meet Stage 4 requirements. As of this writing, there is no timetable for the mandatory phaseout of Stage 3 aircraft.[1298]

With every new set of regulations, the airline industry required upgrades to its jet engines, if not wholesale new designs. So having already helped establish reliable working versions of each of the major types of jet engines—i.e., turboprop, turbojet, and turbofan—NASA and its industry partners began what has turned out to be a continuing 50-year-long challenge to constantly improve the design of jet engines to prolong their life, make them more fuel efficient, and reduce their environmental impact in terms of air and noise pollution. With this new direction, NASA set in motion three initial programs.9

NASA's first major new program was the Acoustically Treated Nacelle program, managed by the Langley Research Center. Engines flying on Douglas DC-8 and Boeing 707 aircraft were outfitted with experimental mufflers, which reduced noise during approach and landing but had negligible effect on noise pollution during takeoff, according to program results reported during a 1969 conference at Langley.10

The second was the Quiet Engine program, which was managed by the Lewis Research Center in Cleveland (Lewis became the Glenn Research Center on March 1, 1999). Attention here focused on the interior design of turbojet and turbofan engines to make them quieter by as much as 20 decibels. General Electric (GE) was the key industry partner in this program, which showed that noise reduction was possible by several methods, including changing the rotational speed of the fan, increasing the fan bypass ratio, and adjusting the spacing of rotating and stationary parts.11

The third was the Steep Approach program, which was jointly managed by Langley and the Ames Research Center/Dryden Flight Research Facility, both in California. This program did not result in new engine technology but instead focused on minimizing noise on the ground by developing techniques for pilots to use in flying steeper and faster approaches to airports.12 [1299] [1300]