Category AERONAUTICS

NASA’s Experimental (Mod-0) 100-Kilowatt Wind Turbine Generator (1975-1987)

Between 1974 and 1988, NASA Lewis led the U.S. program for large wind turbine development, which included the design and installation of 13 power-utility-size turbines. The 13 wind turbines included an initial testbed turbine designated the Mod-0 and 3 generations of followup wind turbines designated Mod-0A/Mod-1, Mod-2, and Mod-5. As noted in the Project Independence task force report, the initial 100-kilowatt wind turbine project and related supporting research was to be performed in-house by NASA Lewis, while the remaining 100-kilowatt systems, megawatt systems, and large-scale multiunit systems subprograms were to be performed by contractors under NASA Lewis direction. Each successive generation of technology increased reliability and efficiency while reducing the cost of electricity. These advances were made by gaining a better understanding of the system-design drivers, improving the analytical design tools, verifying design methods with operating field data, and incorporating new technology and innovative designs. However, before these systems could be fabricated and installed, NASA Lewis needed to design and construct an experimental testbed wind turbine generator.

NASA’s first experimental wind turbine (the Mod-0) was constructed at Plum Brook Station in Sandusky, OH, and first achieved rated speed and power in December 1975. The initial design of the Mod-0 drew upon some of the previous information from the Smith-Putnam and Hutter-Allgaier turbines. The primary objectives of the Mod-0 wind turbine generator were to provide engineering data for future use as a base for the entire Federal Wind Energy Program and to serve as a testbed for the various components and subsystems, including the testing of different design concepts for blades, hubs, pitch-change mechanisms, system controls, and generators. Also, a very important function of the Mod-0 was to validate a number of computer models, codes, tools, and control algorithms.

The Mod-0 was an experimental 100-kilowatt wind turbine generator that, at a wind speed of 18 miles per hour (mph), was expected to generate 180,000 kilowatt-hours of electricity per year in the form of 440-volt, 3-phase, 60-cycle alternating current output. The initial testbed system, which included two metal blades, each 62 feet long from hub to blade tip, located downwind of the tower, was mounted on a 100-foot, four-legged steel lattice (pinned truss design) tower. The drive train and rotor were housed in a nacelle with a centerline 100 feet above ground.
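Taken together, the 100-kilowatt rating and the 180,000-kilowatt-hour annual estimate imply a capacity factor of roughly 20 percent. A minimal sketch of that arithmetic, using only the two figures quoted above (everything else is unit conversion):

```python
# Capacity factor implied by the Mod-0 figures quoted above.
rated_power_kw = 100.0          # rated output at an 18-mph wind
annual_energy_kwh = 180_000.0   # expected annual generation

hours_per_year = 8_760          # 365 days x 24 hours
max_possible_kwh = rated_power_kw * hours_per_year  # 876,000 kWh if run flat out

capacity_factor = annual_energy_kwh / max_possible_kwh
print(f"Capacity factor: {capacity_factor:.1%}")  # -> about 20.5%
```

A figure near 20 percent is plausible for an experimental machine at a moderate-wind site, since the turbine spends much of the year below its rated wind speed.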

The blades, which were built by Lockheed and were based on NASA’s and Lockheed’s experience with airplane wing designs, were capable of pitch change (rotation of the blade about its long axis) and full feathering (turning the blade so that wind resistance is minimized). The hub was of the rigid type, meaning that the blades were bolted to the main shaft. A yaw control (rotation of the nacelle about the vertical tower axis) aligned the wind turbine with the wind direction, and pitch control was used for startup, shutdown, and power control functions. When the wind turbine was in a shutdown mode, the blades were feathered and free to rotate slowly. The system was linked to a public utility power network through an automatic synchronizer that converted direct current to alternating current.[1500]

A number of lessons were learned from the Mod-0 testbed. One of the first problems involved the detection of larger than expected blade bending incidents that would have eventually caused early fatigue failure of the blades. The blade bending occurred for both the flatwise (out-of-plane) and edgewise (in-plane) blade positions. Followup study of this problem determined that the high blade loads that bent the blades were caused by impulses applied to each blade as it passed through the wake of the tower. Basically, the pinned truss design of the tower was blocking the airflow to a much greater degree than anticipated. The cause of this problem, which related to the flatwise load factors, was confirmed by site wind measurements and wind tunnel tower model tests. The initial measure taken to reduce the blocking effect was to remove the stairway from the tower. Eventually, however, NASA developed the soft tube style tower that later became the standard construction method for most wind turbine towers. Followup study of the edgewise blade loads determined that the problem was caused by excessive nacelle yawing (side-to-side) motion. This problem was addressed by replacing a single yaw drive, which aligns the rotor with the wind direction, with a dual yaw drive, and by adding three brakes to the yaw system to provide additional stiffness.[1501]

Both of the above measures reduced the bending problems below the predicted level. Detection of these problems on the testbed Mod-0 resulted in reevaluation of the analytical tools and the subsequent redesign of the wind turbine that proved extremely important in the design of NASA’s subsequent horizontal-axis large wind turbines. In regard to other operational testing of the Mod-0 system, NASA engineers determined that the wind turbine controls for speed, power, and yaw worked satisfactorily and that synchronization to the power utility network was successfully demonstrated. Also, the startup, utility operation, shutdown, and standby subsystems worked in a satisfactory manner. Finally, the Mod-0 was used to check out remote operation that was planned for future power utility systems. In summary, the Mod-0 project satisfied its primary objective of providing the entire Federal Wind Energy Program with early operations and performance data and with continued experience in testing new concepts and components. While NASA Lewis was ready to move forward with fabrication of its next-level Mod-0A and Mod-1 wind turbines, the Mod-0 testbed continued to provide valuable testing of new configurations, components, and concepts for more than 11 additional years.

The Path to Area Rule

Conventional high-speed aircraft design emulated Ernst Mach’s finding that bullet shapes produced less drag. Aircraft designers started with a pointed nose and gradually thickened the fuselage to increase its cross-sectional area, added wings and a tail, and then decreased the diameter of the fuselage. The rule of thumb for an ideal streamlined body for supersonic flight was a function of the diameter of the fuselage. Understanding how the wing and tail, which were added for practical purposes because airplanes need them to fly, fit into Mach’s ideal high-speed shape soon became the focus of Whitcomb’s investigation.[152]

The 8-foot HST team at Langley began a series of tests on various wing and body combinations in November 1951. The wind tunnel models featured swept, straight, and delta wings, and fuselages with varying amounts of curvature. The objective was to evaluate the amount of drag generated by the interference of the two shapes at transonic speeds. The tests resulted in two important realizations for Whitcomb. First, variations in fuselage shape led to marked changes in wing drag. Second, and most importantly, he learned that fuselage and wing drag had to be considered together as a synergistic aerodynamic system rather than separately, as they had been before.[153]

While Whitcomb was performing his tests, he took a break to attend a Langley technical symposium, where swept wing pioneer Adolf Busemann presented a helpful concept for imagining transonic flow. Busemann asserted that wind tunnel researchers should emulate aerodynamicists and theoretical scientists in visualizing airflow as analogous to plumbing. In Busemann’s mind, an object surrounded by streamlines constituted a single stream tube. Visualizing “uniform pipes going over the surface of the configuration” assisted wind tunnel researchers in determining the nature of transonic flow.[154]

Whitcomb contemplated his findings in the 8-foot HST and Busemann’s analogy during one of his daily thinking sessions in December 1951. Since his days at Worcester, he had dedicated a specific part of his day to thinking. At the core of Whitcomb’s success in solving efficiency problems aerodynamically was the fact that, in the words of one NASA historian, he was the kind of “rare genius who can see things no one else can.”[155] He relied upon his mind’s eye—the nonverbal thinking necessary for engineering—to visualize the aerodynamic process, specifically transonic airflow.[156] Whitcomb’s ability to apply his findings to the design of aircraft was a clear indication that using his mind through intuitive reasoning was as much an analytical aerodynamic tool as a research airplane, wind tunnel, or slide rule.

With his feet propped up on his desk in his office, a flash of inspiration—a “Eureka” moment, in the mythic tradition of his hero, Edison—led him to the solution for reducing transonic drag. Whitcomb realized that the total cross-sectional area of a fuselage, wing, and tail caused transonic drag or, in his words: “transonic drag is a function of the longitudinal development of the cross-sectional areas of the entire airplane.”[157] The drag was not simply the result of shock waves forming at the nose of the airplane; drag-inducing shock waves also formed just behind the wings. Whitcomb visualized in his mind’s eye that if a designer narrowed the fuselage, reducing its cross section where the wings attached, and enlarged the fuselage again at the trailing edge, then the fuselage would facilitate a smoother transition from subsonic to supersonic speeds. Pinching the fuselage to resemble a wasp’s waist allowed for smoother flow of the streamlines as they traveled from the nose and over the fuselage, wings, and tail. Even though the fuselage was shaped differently, the total cross-sectional area of the fuselage and wings together remained the same along the length of the airplane. Without the pinch, the streamlines would bunch and form shock waves, which created the high energy losses that prevented supersonic flight.[158] The removal at the wing of those “aerodynamic anchors,” as historians Donald Baals and William Corliss called them, and the recognition of the sensitive balance between fuselage and wing volume were the key.[159]

Verification of the new idea involved the comparison of the data compiled in the 8-foot HST, all other available NACA-gathered transonic data, and Busemann’s plumbing concept. Whitcomb was convinced that his area rule made sense of the questions he had been investigating. Interestingly enough, Whitcomb’s colleagues in the 8-foot HST, including John Stack, were skeptical of his findings. He presented his findings to the Langley community at its in-house technical seminar.[160] After Whitcomb’s 20-minute talk, Busemann remarked: “Some people come up with half-baked ideas and call them theories. Whitcomb comes up with a brilliant idea and calls it a rule of thumb.”[161] The name “area rule” came from the combination of “cross-sectional area” with “rule of thumb.”[162]

With Busemann’s endorsement, Whitcomb set out to validate the rule through wind tunnel testing in the 8-foot HST. His models featured fuselages narrowed at the waist. He had enough data by April 1952 indicating that pinching the fuselage resulted in a significant reduction in transonic drag. The resultant research memorandum, “A Study of the Zero Lift Drag Characteristics of Wing-Body Combinations near the Speed of Sound,” appeared the following September. The NACA immediately distributed it secretly to industry.[163]

The area rule provided a transonic solution to aircraft designers in four steps. First, the designer plotted the cross sections of the aircraft fuselage along its length. Second, a comparison was made between the design’s actual area distribution, which reflected outside considerations such as engine diameter and the overall size dictated by an aircraft carrier’s elevator deck, and the ideal area distribution that originated in previous NACA mathematical studies. The third step involved the reconciliation of the actual area distribution with the ideal area distribution. Once again, practical design considerations shaped this step. Finally, the designer converted the new area distribution back into cross sections, which resulted in the narrowed fuselage that took into account the overall area of the fuselage and wing combination.[164] A designer who followed those four steps would produce a successful design with minimum transonic drag.
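The four steps lend themselves to a simple numerical sketch. The fragment below is illustrative only: the wing shape is notional, and the Sears-Haack body (a known minimum-wave-drag shape, used here as an assumption) stands in for the “ideal area distribution” that the NACA studies supplied. It shows the reconciliation of step three: the fuselage at each station is narrowed so that the combined fuselage-plus-wing area follows the ideal curve.

```python
# Illustrative area-rule reconciliation. All shapes are notional; the
# Sears-Haack body stands in for the NACA "ideal" area distribution.

LENGTH = 30.0       # body length, arbitrary units
MAX_AREA = 4.0      # peak cross-sectional area of the ideal body

def ideal_area(x):
    """Sears-Haack area distribution: S(x) = S_max * [4(x/l)(1 - x/l)]^(3/2)."""
    t = x / LENGTH
    return MAX_AREA * (4.0 * t * (1.0 - t)) ** 1.5

def wing_area(x):
    """Notional wing cross-section contribution, peaking at mid-body."""
    return max(0.0, 1.5 - abs(x - LENGTH / 2.0) * 0.5)

# The fuselage at each station is whatever area remains after the wing's
# contribution is subtracted from the ideal total -- this deficit at the
# wing root is the "pinch" of the wasp waist.
for x in range(0, 31, 5):
    fuselage = max(0.0, ideal_area(x) - wing_area(x))
    print(f"x = {x:4.1f}  ideal = {ideal_area(x):5.2f}  "
          f"wing = {wing_area(x):4.2f}  fuselage = {fuselage:5.2f}")
```

At mid-body, where the notional wing contributes its full area, the computed fuselage area drops well below the ideal total, reproducing in miniature the narrowing Whitcomb built into his wind tunnel models.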

Supersonic Flight Tests and Surveys

The systematic sonic boom testing that NASA began in 1958 would exponentially expand the heretofore largely theoretical and anecdotal knowledge about sonic booms with a vast amount of “real world” data. The new information would make possible increasingly sophisticated experiments and provide feedback for checking and refining theories and mathematical models. Because of the priority bestowed on sonic boom research by the SST program and the numerous types of aircraft then available for creating booms (including some faster than anything flying today), the data and findings from the tests conducted in the 1960s are still of significant value in the 21st century.[351]

The Langley Research Center (often referred to as NASA Langley) served as the Agency’s “team leader” for supersonic research. Langley’s acoustics specialists conducted NASA’s initial sonic boom tests in 1958 and 1959 at the Wallops Island Station on Virginia’s isolated Delmarva Peninsula. During the first year, they used six sorties by NASA F-100 and F-101 fighters, flying at speeds between Mach 1.1 and 1.4 and altitudes from 25,000 to 45,000 feet, to make the first good ground recordings and measurements of sonic booms for steady, level flights (the kind of profile a future airliner would fly). Observers judged some of the booms above 1.0 psf to be objectionable, likening them to nearby thunder, and a sample plate glass window was cracked by one plane flying at 25,000 feet. The 1959 test measured shock waves from 26 flights of a Chance Vought F8U-3 (a highly advanced prototype based on the Navy’s Crusader fighter) at speeds up to Mach 2 and altitudes up to 60,000 feet. A B-58 from Edwards AFB also made two supersonic passes at 41,000 feet. Boom intensities from these higher altitudes seemed tolerable to observers, with negligible increases in measured overpressures between Mach 1.4 and 2.0. These results were, however, very preliminary.[352]

In July 1960, NASA and the Air Force conducted Project Little Boom at a bombing range north of Nellis AFB, NV, to measure the effects of extremely powerful sonic booms on structures and people. F-104 and F-105 fighters flew slightly over the speed of sound (Mach 1.09 to 1.2) at altitudes as low as 50 feet above ground level. There were more than 50 incidents of sample windows being broken at 20 to 100 psf, but only a few possible breakages below 20 psf, and no physical or psychological harm to volunteers exposed to overpressures as high as 120 psf.[353] At Indian Springs, Air Force fighters flew supersonically over an instrumented C-47 transport from Edwards, both in the process of landing and on the ground. Despite 120 psf overpressures, there was only very minor damage when on the ground and no problems in flight.[354] Air Force fighters once again would test powerful sonic booms in 1965 in support of Joint Task Force 2 at Tonopah, NV. The strongest sonic boom ever recorded, 144 psf, was generated by an Air Force F-4E Phantom II flying Mach 1.26 at 95 feet.[355]

In late 1960 and early 1961, NASA and AFFTC followed up on Little Boom with Project Big Boom. B-58 bombers made 16 passes flying Mach 1.5 at altitudes of 30,000 to 50,000 feet over arrays of sensors, which measured a maximum overpressure of 2.1 psf. Varying the bomber’s weight from 82,000 to 120,000 pounds provided the first hard data on how an aircraft’s weight and related lift produced higher overpressures than existing theories based on volume alone would indicate.[356]

Throughout the 1960s, Edwards Air Force Base—with its unequaled combination of Air Force and NASA expertise, facilities, instrumentation, airspace, emergency landing space, and types of aircraft—hosted the largest number of sonic boom tests. NASA researchers from Langley’s Acoustics Division spent much of their time there working with the Flight Research Center in a wide variety of flight experiments. The Air Force Flight Test Center usually participated as well.

In an early test in 1961, Gareth Jordan of the FRC led an effort to collect measurements from F-104s and B-58s flying at speeds of Mach 1.2 to 2.0 over sensors located along Edwards AFB’s supersonic corridor and at Air Force Plant 42 in Palmdale, about 20 miles south. Most of the Palmdale measurements were under 1.0 psf, which the vast majority of people surveyed there and in Lancaster (where overpressures tended to be somewhat higher) considered no worse than distant thunder. But there were some exceptions.[357]

Other experiments at Edwards in 1961, conducted by Langley personnel with support from the FRC and AFFTC, contributed a variety of new information. With help from a tethered balloon, they made the first good measurements of atmospheric effects, showing that air turbulence in the lower atmosphere (known as the boundary layer) significantly affected wave shape and overpressure. They also gathered the first data on booms from very high altitudes. Using an aggressive flight profile, AFFTC’s B-58 crew managed to zoom up to 75,000 feet—25,000 feet higher than the bomber’s normal cruising altitude and 15,000 feet over its design limit! The overpressures measured from this high altitude proved stronger than predicted (not a promising result for the planned SST). Much lower down, fighter aircraft performed accelerating and turning maneuvers to generate the kind of acoustical rays that amplified shock waves and produced multiple booms and super booms. The various experiments showed that a combination of atmospheric conditions, altitude, speed, flight path, aircraft configuration, and sensor location determined the shape of the pressure signatures.[358]

Of major significance for future boom minimization efforts, NASA also began making in-flight shock wave measurements. The first of these, at Edwards in 1960, had used an F-100 with a sensor probe to measure supersonic shock waves from the sides of an F-100, F-104, and B-58, as well as from F-100s speeding past with only 100 feet of separation. The data confirmed Whitham’s overall theory, with some discrepancies. In early 1963, an F-106 equipped with a sophisticated new sensor probe designed at Langley flew seven sorties both above and below a B-58 at speeds of Mach 1.42 to 1.69 and altitudes of approximately 40,000 to 50,000 feet. The data gathered confirmed Walkden’s theory that lift as well as volume increases peak shock wave pressures. As indicated by Figure 3, analysis of the readings also found that the bow and tail shock waves spread farther apart as they flowed from the B-58 and showed how the multiple or “saw tooth” shock waves produced by the rest of an airplane’s structure (e.g., fuselage, canopy, wings, engines, nacelles, etc.) merged with the stronger bow and tail waves until—at a distance of between 50 and 90 body lengths—they began to coalesce into the classic N-shaped signature.[359] This marked a major milestone in sonic boom research.

One of the most publicized and extended flight test programs at Edwards had begun in 1959 with the first launch from a B-52 of the fastest aircraft ever flown: the rocket-propelled X-15. Three of these legendary aerospace vehicles expanded the envelope and gathered data on supersonic and hypersonic flight for the next 8 years. Although the X-15 was not specifically dedicated to sonic boom tests, the Flight Research Center did begin placing microphones and tape recorders under the X-15s’ flight tracks in the fall of 1961 to gather boom data. FRC researchers much later reported on the measurements of these sonic booms, made at speeds of Mach 3.5 and Mach 4.8.[360]

For the first few years, NASA’s sonic boom tests occurred in relative isolation within military airspace in the desert Southwest or over Virginia’s rural Eastern Shore. A future SST, however, would have to fly over heavily populated areas. Thus, from July 1961 through January 1962, NASA, the FAA, and the Air Force carried out the Community and Structural Response Program at St. Louis, MO. In Operation Bongo, the Air Force sent B-58 bombers on 76 supersonic training flights over the city at altitudes from 31,000 to 41,000 feet, announcing them as routine SAC radar bomb-scoring missions. F-106 interceptors flew 11 additional flights at 41,000 feet. Langley personnel installed sensors on the ground, which measured overpressures up to 3.1 psf. Investigators from Scott AFB, IL, or for a short time, a NASA-contracted engineering firm, responded to damage claims, finding some possibly legitimate minor damage in about 20 percent of the cases. Repeated interviews with more than 1,000 residents found 90 percent were at least somewhat affected by the booms and about 35 percent were annoyed. Scott AFB (a long-distance phone call from St. Louis) received about 3,000 complaints during the test and another 2,000 in response to 74 sonic booms in the following 3 months. The Air Force eventually approved 825 claims for $58,648. These results served as a warning that repeated sonic booms could pose an issue for SST operations.[361]

To obtain more definitive data on structural damage, NASA in December 1962 resumed tests at Wallops Island using various sample buildings. Air Force F-104s and B-58s and Navy F4H Phantom IIs flew at altitudes from 32,000 to 62,000 feet, creating overpressures up to 3 psf. Results indicated that the cracks to plaster, tile, and other brittle materials triggered by sonic booms occurred in spots where the materials were already under stress (a finding that would be repeated in later, more comprehensive tests).[362]

In February 1963, NASA, the FAA, and the USAF conducted Project Littleman at Edwards AFB to see what happened when two specially instrumented light aircraft were subjected to sonic booms. F-104s made 23 supersonic passes at distances as near as 560 feet from a little Piper Colt and a 2-engine Beech C-45, creating overpressures up to 16 psf. Their responses were "so small as to be insignificant”—dismissing one possible concern about SST operations.[363]

The St. Louis survey had left many unanswered questions on public opinion. To learn more, the FAA’s Supersonic Transport Development Office, with support from NASA Langley and the USAF (including Tinker AFB), next conducted the Oklahoma City Public Reaction Study from February through July 1964. This was a much more intensive and systematic test. In an operation named Bongo II, B-58s, F-104s, F-101s, and F-106s were called upon to deliver sonic booms between 1.0 and 2.0 psf, 8 times per day, 7 days a week, for 26 weeks, with another 13 weeks of followup activities. The aircraft flew a total of 1,253 supersonic flights at Mach 1.2 to 2.0 and altitudes between 21,000 and 50,000 feet.

The FAA (which had a major field organization in Oklahoma City) instrumented nine control houses scattered throughout the metropolitan area with various sensors to measure structural effects, while experts from Langley instrumented three houses and set up additional sensors throughout the area to record overpressures, wave patterns, and meteorological conditions. The National Opinion Research Center at the University of Chicago interviewed a sample of 3,000 adults three times during the study. By the end of the test, 73 percent of those surveyed felt that they could live with the number and strength of the booms experienced, and 27 percent would not accept indefinite booms at the level tested. Forty percent believed that the booms caused some structural damage (even though the control houses showed no significant effects). Analysis of the shock wave patterns by NASA Langley showed that a small number of overpressure measurements were significantly higher than expected, indicating probable atmospheric influences, including heat rising from urban landscapes. One possible result was the breakage of almost 150 windows in the city’s two tallest buildings early in the test.[364]

The Oklahoma City study added to the growing knowledge about sonic booms and their acceptance by the public, at the cost of negative publicity for the FAA. In view of the reactions to the St. Louis and Oklahoma City tests by much of the public and some politicians, plans for another extended sonic boom test over a different city, including flights at night, never materialized.[365]

The FAA and Air Force conducted the next series of tests from November 1964 to February 1965 in a much less populated place: the remote Oscura camp in the vast White Sands Missile Range of New Mexico, where 21 structures of various types and ages with a variety of plaster, windows, and furnishings were studied for possible damage. F-104s from nearby Holloman AFB and B-58s from Edwards generated 1,494 booms producing overpressures from 1.6 to 19 psf. The 680 sonic booms at 5.0 psf caused no real problems, but those above 7.9 psf revealed varying degrees of damage to glass, plaster, tile, and stucco already in vulnerable condition. A parallel study of several thousand incubated chicken eggs showed no reduction in hatchability, and audiology tests on 20 personnel subjected daily to the booms showed no hearing impairment.[366]

Before the White Sands test ended, NASA Langley personnel began collecting boom data from a highly urbanized setting in winter weather. During February and March 1965, they recorded data at five ground stations as B-58 bombers flew 22 training missions in a corridor over downtown Chicago at speeds of Mach 1.2 to 1.66 and altitudes from 38,000 to 48,000 feet. The results showed how amplitude and wave shape varied widely depending upon atmospheric conditions. These 22 flights and 27 others resulted in the Air Force approving 1,442 of 2,964 damage claims for $114,763.[367]

Also in March 1965, the FAA and NASA, in cooperation with the U.S. Forest Service, studied the effect on hazardous mountain snow packs in the Colorado Rockies of Air Force fighters creating boom overpressures up to 5.0 psf. Because of stable snow conditions, none of these created an avalanche. Interestingly enough, in the early 1960s the National Park Service had tried to use newly deployed F-106s at Geiger Field, WA, to create controlled avalanches in Glacier National Park (Project “Safe Slide”), but presumably found traditional artillery fire more suitable.[368]

From the beginning of the SST program, the aircraft most desired for experiments was, of course, the North American XB-70 Valkyrie. The first of the giant testbeds (XB-70-1) arrived at Edwards AFB in September 1964, and the better performing and better instrumented second aircraft (XB-70-2) arrived in July 1965. With a length of 186 feet, a wingspan of 105 feet, and a gross weight of about 500,000 pounds, the six-engine giant was less than two-thirds as long as some of the later SST concepts, but it was the best real-life surrogate available.

Even during the initial flight envelope expansion by contractor and AFFTC test pilots, the Flight Research Center began gathering sonic boom data, including direct comparisons of its shock waves with those of a B-58 flying only 800 feet behind.[369] Using an array of microphones and recording equipment at several ground stations, NASA researchers eventually built a database of boom signatures from 39 flights made by the XB-70s (10 with B-58 chase planes), from March 1965 through May 1966.[370] Because “the XB-70 is capable of duplicating the SST flight profiles and environment in almost every respect,” the FRC was looking forward to beginning its own experimental research program using the second Valkyrie on June 15, 1966, with sonic boom testing listed as the first priority.[371]

On June 8, however, XB-70-2 crashed on its 47th flight as the result of an infamous midair collision that killed two pilots and gravely injured a third. Despite this tragic setback to the test program, the less capable XB-70-1 (which underwent modifications until November) eventually proved useful for many purposes. After 6 months of joint AFFTC/FRC operations, including the boom testing described below, the plane was turned over full time to NASA in April 1967 after 60 Air Force flights. The FRC, with a more limited budget, then used the Valkyrie for 23 more test missions until February 1969, when the unique aircraft was retired to the USAF Museum in Dayton, OH.[372] All told, NASA acquired sonic boom measurements from 51 of the 129 total flights made by the XB-70s, using two ground stations on Edwards AFB, one at nearby Boron, and two in Nevada.[373] This data proved to be of great value in the future.

The loss of one XB-70 and the retirement of the other from supersonic testing were made somewhat less painful by the availability of another smaller but even faster product of advanced aviation technology: the Lockheed YF-12 and its cousin, the SR-71—both nicknamed Blackbirds. On May 1, 1965, shortly after arriving at Edwards, a YF-12A set nine world records, including a closed-course speed of 2,070 mph (Mach 3.14) and a sustained altitude of 80,257 feet. Four of that day’s five flights also yielded sonic boom measurements. At speeds of Mach 2.6 to 3.1 and altitudes of 60,000 to 76,500 feet above ground level, overpressures varied from 1.2 to 1.7 psf depending on distance from the flight path. During another series of flight tests at slower speeds and lower altitudes, overpressures up to 5.0 psf were measured during accelerations after having slowed to refuel. These early results proved consistent with previous B-58 data.[374] Data gathered over the years from ground arrays measuring the sonic signatures from YF-12s, XB-70s, B-58s, and smaller aircraft flying at various altitudes also showed that the lateral spread of a boom carpet (without the influence of atmospheric variables) could be roughly equated to 1 mile for every 1,000 feet of altitude, with the N-signatures becoming more rounded with distance until degenerating into the approximate shape of a sine wave.[375]
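The carpet-width rule of thumb quoted above reduces to a one-line estimate. A hedged sketch, using altitudes flown in the tests described in this section (the 1-mile-per-1,000-feet figure is from the text, read here as the spread to each side of the ground track; real carpets vary with atmosphere, speed, and aircraft):

```python
def carpet_spread_miles(altitude_ft):
    """NASA rule of thumb: lateral spread of a boom carpet is roughly
    1 mile per 1,000 feet of altitude, ignoring atmospheric variables."""
    return altitude_ft / 1000.0

# Example altitudes drawn from flights described in this section.
for alt in (41_000, 60_000, 76_500):
    print(f"{alt:6d} ft -> carpet extends roughly "
          f"{carpet_spread_miles(alt):.1f} miles to each side of the track")
```

The sine-wave degeneration of the N-signature near the carpet edge, noted above, is why the boom heard at these lateral extremes is a dull rumble rather than a sharp double crack.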

Although grateful to benefit from the flights of AFFTC’s Blackbirds, the FRC wanted its own YF-12 or SR-71 for supersonic research. It finally gained the use of two YF-12s through a NASA-USAF memorandum of understanding signed in June 1969, paying for operations with funding left over from termination of the X-15 and XB-70 programs.[376]

In the fall of 1965, with public acceptance of sonic booms becoming a significant public and political issue, the White House Office of Science and Technology established the National Sonic Boom Evaluation Office (NSBEO) under an interagency Coordinating Committee on Sonic Boom Studies. The new organization, which was attached to Air Force Headquarters for administrative purposes, planned a comprehensive series of tests known as the National Sonic Boom Evaluation Program, to be conducted primarily at Edwards AFB. NASA (in particular the Flight Research and Langley Centers) would be responsible for test operations and data collection, with the Stanford Research Institute hired to help analyze the findings.[377] After careful preparations (including specially built structures and extensive sensor and recording arrays), the National Sonic Boom Evaluation began in June 1966. Its main objectives were to address the many issues left unresolved from previous tests. Unfortunately, the loss of XB-70-2 on June 8 forced a 4-month break in the test schedule, with the limited events completed in June designated Phase I. The second phase began in November, when XB-70-1 returned to flight status, and lasted into January 1967. A total of 367 supersonic missions were flown by XB-70s, B-58s, YF-12s, SR-71s, F-104s, and F-106s during the two phases. These were supplemented by 256 subsonic flights by KC-135s, WC-135Bs, C-131Bs, and Cessna 150s. In addition, the Goodyear blimp Mayflower was used in the June phase to measure sonic booms at 2,000 feet.[378]

By the end of testing, the National Sonic Boom Evaluation had obtained new and highly detailed acoustic and seismic signatures from all the different supersonic aircraft in various flight profiles during a variety of atmospheric conditions. The data from 20 XB-70 flights at speeds from Mach 1.38 to 2.94 were to be of particular long-term interest. For example, Langley's sophisticated nose probe, used for the pioneering in-flight flow-field measurements of the B-58 in 1963, was installed on one of the FRC's F-104s to do the same for the XB-70. Comparison of data between blimp and ground sensors and variations between the summer and winter tests confirmed the significant influence that atmospheric conditions, such as turbulence and convective heating near the surface, have on boom propagation.[379] Also, the evaluation provided an opportunity to gather data on more than 1,500 sonic boom signatures created during 35 flights by the recently available SR-71s and YF-12s at speeds up to Mach 3.0 and altitudes up to 80,000 feet.[380]

Some of the findings portended serious problems for planned SST operations. The program obtained responses from several hundred volunteers, both outdoors and in houses, to sonic booms of different intensities produced by each of the supersonic aircraft. The time between the peak overpressure of the bow and tail waves for aircraft at high altitudes was about 0.1 second for the F-104, 0.2 second for the B-58, and 0.3 second for the XB-70. The respondents also compared sonic booms to the jet engine noise of subsonic aircraft. Although data varied for each of the criteria measured, significant minorities tended to find the booms either "just acceptable" or unacceptable, with the "sharper" N-wave signature from the lower flying F-104 more annoying outdoors than the more rounded signatures from the larger aircraft, which had to fly at higher altitudes to create the same overpressure. Other factors included the frequency, time of day or night, and type of boom signature. Correlating how the subjects responded to jet noise (measured in decibels) and sonic booms (normally measured in psf), the SRI researchers used the perceived noise decibel (PNdB) level to assess how loud booms seem to human ears.[381]

Employing sophisticated sensors, civil engineers measured the physical effects of booms of varying intensity, created by F-104s, B-58s, and the XB-70, on houses and a building with a large interior space (the base bowling alley). Of special concern for the SST, they found the XB-70's elongated N-wave created more of the low frequencies that cause indoor vibrations, such as rattling windows (although it was less bothersome to observers outdoors). And although no significant harm was detected to the instrumented structures, 57 complaints of damage were received from residents in the surrounding area, and three windows were broken on base. Finally, monitoring by the Department of Agriculture detected no ill effects on farm animals in the area, although avian species (chickens, turkeys, etc.) reacted more than livestock did.[382] The National Sonic Boom Evaluation remains the most comprehensive such test program yet conducted.

Later, in 1967, the opportunity for collecting additional survey data presented itself when the FAA and NASA learned that SAC was starting an extensive training program for its growing fleet of SR-71s. TRACOR, Inc., of Austin, TX, which was already under contract to NASA doing surveys on airport noise, had its contract expanded in May 1967 to include public responses to the SR-71s' sonic booms in Dallas, Los Angeles, Denver, Atlanta, Chicago, and Minneapolis. Between July 3 and October 2, Air Force SR-71s made 220 flights over these cities at high altitude, ranging from 5 over Atlanta to 60 over Dallas. The minority of sonic booms that were measured were almost all N-waves with overpressures from slightly less than 1.0 psf to 2.0 psf. Although the data from this impromptu test program were less than definitive, the overall findings (based on 6,375 interviews) were fairly consistent with the previous human response surveys. For example, after an initial dropoff, the level of annoyance with the booms tended to increase over time, and almost all those who complained were worried about damage. Among 15 different adjectives supplied to describe the booms (e.g., disturbing, annoying, irritating), the word "startling" was chosen much more than any other.[383]

Although the FRC and AFFTC continued their missions of supersonic flight-testing and experimentation at Edwards, what might be called the heroic era of sonic boom testing was drawing to a close. The FAA and the Environmental Science Services Administration (a precursor of the National Oceanic and Atmospheric Administration) did some sophisticated testing of meteorological effects at Pendleton, OR, from September 1968 until May 1970, using a dense grid of recently invented unattended recording equipment to measure random booms from SR-71s. On the other side of the continent, NASA and the Navy studied sonic booms during Apollo missions in 1970 and 1971.[384]

The most significant NASA testing in 1970 took place from August through October at the Atomic Energy Commission's Jackass Flats test site in Nevada. In conjunction with the FAA and the National Oceanic and Atmospheric Administration (NOAA), NASA took advantage of the 1,527-foot-tall BREN Tower (named for its original purpose, the "Bare Reactor Experiment Nevada" in 1962) to install a vertical array of 15 microphones as well as meteorological sensors. (Until then, a 250-foot tower at Wallops Island had been the highest used in sonic boom testing.) During the summer and fall of 1970, the FRC's F-104s from Edwards made 121 boom-generating flights to provide measurements of several still poorly understood aspects of the sonic boom, especially the places, known mathematically as caustics, where nonlinear focusing of acoustical rays occurs. Frequently caused by speeds very near Mach 1 or by acceleration, they can result in U-shaped signatures with bow and tail wave overpressures strong enough to create super booms. The BREN Tower allowed such measurements to be made in the vertical dimension for the first time. This test resulted in definitive data on the formation and nature of caustics, information that would be valuable in helping pilots to avoid making focused booms.[385]

For all intents and purposes, the results of earlier testing and surveys had already helped to seal the fate of the SST before the reports on this latest test began coming in. Yet the data gathered from 1958 through 1970 during the SST program contributed tremendously to the international aeronautical and scientific communities' understanding of one of the most baffling and complicated aspects of supersonic flight. As Harry Carlson told the Nation's top sonic boom scientists and engineers on the very same day of the last F-104 mission over Jackass Flats: "The importance of flight-test programs cannot be overemphasized. These tests have provided an impressive amount of high-quality data, which has been of great value in the verification of theoretical methods for the prediction of nominal overpressures and in the estimation from a statistical standpoint of the modifying influence of unpredictable atmospheric nonuniformities."[386]

Physical Problems, Challenges, and Pragmatic Solutions

Robert G. Hoey

The advent of the supersonic and hypersonic era introduced a wide range of operational challenges that required creative insight by the flight research community. Among these were phenomena such as inertial (roll) coupling, transonic pitch-up, panel flutter, structural resonances, pilot-induced oscillations, and aerothermodynamic heating. Researchers had to incorporate a variety of solutions and refine simulation techniques to better predict the realities of flight. The efforts of the NACA and NASA, in partnership with other organizations, including the military, enabled development and refinement of reliable aerospace vehicle systems.

THE HISTORY OF AVIATION is replete with challenges and difficulties overcome by creative scientists and engineers whose insight, coupled with often-pragmatic solutions, broke through what had appeared to be barriers to future flight. At the dawn of aviation, the problems were largely evident to all: for example, simply developing a winged vehicle that could take off, sustain itself in the air, fly in a controlled fashion, and then land. As aviation progressed, the problems and challenges became more subtle but no less demanding. The National Advisory Committee for Aeronautics (NACA) had been created in 1915 to pursue the "scientific study of the problems of flight, with a view to their practical solution," and that spirit carried over into the aeronautics programs of the National Aeronautics and Space Administration (NASA), which succeeded the NACA on October 1, 1958, not quite a year after Sputnik had electrified the world. The role of the NACA, and later NASA, is mentioned often in the following discussion. Both have been instrumental in the discovery and solution of many of these problems.

As aircraft flight speeds moved from the firmly subsonic through the transonic and into the supersonic and even hypersonic regimes, the continuing challenge of addressing unexpected interactions and problems followed right along. Since an airplane is an integrated system, many of these problems crossed multiple discipline areas and affected multiple aspects of an aircraft's performance or flight safety. Numerous examples could be selected, but the author has chosen to examine a representative sampling from several areas: experience with flight control systems and their design; structures and their aeroelastic manifestations; flight simulation; flight dynamics (the motions and experience of the airplane in flight); and aerothermodynamics, the demanding environment of aerodynamic heating that affects a vehicle and its structure at higher velocities.

Moving Base Cockpit and Centrifuge Simulators

As the computational capability to accurately model the handling qualities of an airplane improved, there was recognition that the lack of motion cues detracted from the realism of the simulation. An early attempt to simulate motion for the pilot consisted of mounting the entire simulator cockpit on a set of large hydraulic actuators. These actuators would generate a small positive or negative bump to simulate g onset, while any steady-state acceleration was washed out over time (i.e., back to 1 g). The actuators could also tilt the simulator cockpit to produce a side force, or fore and aft force, on the crew. When correlated with a horizon on a visual screen, the result was a quite realistic sensation of motion. These moving-base cockpit systems were rather expensive and difficult to maintain compared with a simple fixed-base cockpit. Since both the magnitude of the g vector and the rotational motion required were false, the systems were not widely accepted in the flight-testing community, where the goal is to evaluate the pilot's response and capabilities in a true flight environment. They found ready acceptance as airline procedures trainers, where maneuvers are slow and g forces are typically small, and proved a source of entertainment in amusement parks, aerospace museums, and science centers.
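The washout behavior described above can be sketched as a first-order high-pass filter: the onset of an acceleration cue passes through to the motion platform, and a sustained input bleeds back toward neutral so the actuators recenter. The time constant and sample interval below are illustrative assumptions, not values from any actual simulator:

```python
# Minimal sketch of a motion-cue "washout": a discrete first-order
# high-pass filter. Transient inputs (g onset) pass through; a
# steady-state input decays toward zero, returning the platform to
# neutral (1 g). tau and dt are arbitrary illustrative values.

def washout(samples, tau=2.0, dt=0.1):
    """First-order high-pass: y[n] = a * (y[n-1] + u[n] - u[n-1])."""
    a = tau / (tau + dt)
    out, y, u_prev = [], 0.0, 0.0
    for u in samples:
        y = a * (y + u - u_prev)  # pass the change, decay the hold
        u_prev = u
        out.append(y)
    return out

# A sustained 1-g step input: the commanded cue jumps at onset,
# then decays toward zero as the steady acceleration is washed out.
cues = washout([1.0] * 100)
print(round(cues[0], 3), round(cues[-1], 3))
```

The same idea, with more elaborate shaping and tilt coordination, underlies modern motion-platform drive algorithms.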

In the 1950s, the Naval Air Development Center (NADC) at Johnsville, PA, developed a large, powerful centrifuge to explore human tolerance to high g forces. The centrifuge consisted of a 182-ton electric DC motor turning a 50-foot arm with a gondola at the end of the arm. The motor could generate g forces at the gondola as high as 40 g’s. The gondola was mounted with two controllable gimbals that allowed the g vector to be oriented in different directions for the gondola occupant.[728]


Test pilot entering the centrifuge gondola at the Naval Air Development Center (NADC) in Johnsville, PA. NASA.

Many detailed studies defining human tolerance to g forces were performed on the centrifuge using programmed g profiles. NADC devised a method for installing a cockpit in the gondola, connecting it to a large analog computer, and allowing the pilot to control the computer simulation, which in turn controlled the centrifuge rotation rate and gimbal angles. This allowed the pilot in the gondola not only to see the pilot displays of the simulated flight, but also to feel the associated translational g levels in all three axes. Although the translational g forces were correctly simulated, the gimbal rotations necessary to properly align the total g vector with the cockpit were artificial and were not representative of a flight environment.

One of the first applications of this closed-loop, moving base simulation was in support of the X-15 program in 1958. There were two prime objectives of the X-15 centrifuge program associated with the high g exit and entry: assessment and validation of the crew station (side arm controller, head and arm restraints, displays, etc.), and evaluation of the handling qualities with and without the Stability Augmentation System. The g environment during exit consisted of a forward acceleration (eyeballs-in) increasing from 2 to 4 g, combined with a 2 g pullup (eyeballs-down). The entry g environment was more severe, consisting of a deceleration (eyeballs-out) of 3 g combined with a simultaneous pullout acceleration of 6 g (eyeballs-down).

The results of the X-15 centrifuge program were quite useful to the X-15's overall development; however, the pilots felt that the centrifuge did not provide a very realistic simulation of an aircraft flight environment. The false rotational movement of the gondola was apparent to the pilots and was a distraction to the piloting task during entry. The exit phase of an X-15 flight was a fairly steady acceleration with little rotational motion, and the pilots judged the simulation a good representation of that environment.[729]

The NADC centrifuge was also used in support of the launch phase of the Mercury, Gemini, and Apollo space programs. These simulations provided valuable information regarding the physiological condition of the astronauts and the crew station design but generally did not include closed-loop piloting tasks with the pilot controlling the simulated vehicle and trajectory.

A second closed-loop centrifuge simulation was performed in support of the Boeing X-20 Dyna-Soar program. Dyna-Soar constituted an ambitious but feasible Air Force effort to develop a hypersonic lofted boost-glider capable of orbital flight. Unfortunately, it was prematurely canceled in 1963 by then-Secretary of Defense Robert S. McNamara. The Dyna-Soar centrifuge study effort was similar to the X-15 centrifuge program, but the acceleration lasted considerably longer and peaked at 6 g (eyeballs-in) at burnout of the Titan III booster. The pilots were "flying" the vehicle in all three axes during these centrifuge runs, and valuable data were obtained relative to the pilot's ability to function effectively during long periods of acceleration. Some of the piloting demonstrations included alleviating wind spikes during the early ascent phase and successfully guiding the booster to an accurate orbital insertion using simple backup guidance concepts in the event of a booster guidance failure.[730] The Mercury and Gemini programs used automatic guidance during the ascent phase, and the only piloting task during boost was to initiate an abort by firing the escape rockets. The Apollo program included a backup piloting mode during the boost based on the results of the X-20 and other centrifuge programs.

Computational Fluid Dynamics: What It Is, What It Does

What constitutes computational fluid dynamics? The basic equations of fluid dynamics, the Navier-Stokes equations, are expressions of three fundamental principles: (1) conservation of mass (the continuity equation), (2) Newton's second law (the momentum equation), and (3) conservation of energy (the first law of thermodynamics). Moreover, these equations in their most general form are either partial differential equations (as we have discussed) or integral equations (an alternate form we have not discussed involving integrals from calculus).

The partial differential equations are those exhibited at the NASM. Computational fluid dynamics is the art and science of replacing the partial derivatives (or integrals, as the case may be) in these equations with discrete algebraic forms, which in turn are solved to obtain numbers for the flow-field values (pressure, density, velocity, etc.) at discrete points in time and/or space.[765] At these selected points in the flow, called grid points, each of the derivatives in each of the equations is simply replaced with numbers that are advanced in time or space to obtain a solution for the flow. In this fashion, the partial differential equations are replaced by a large number of algebraic equations, which can then be solved simultaneously for the flow variables at all the grid points.
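As a concrete illustration of replacing derivatives with discrete algebraic forms at grid points, here is a minimal finite-difference sketch for the one-dimensional heat equation, a simple stand-in for the full Navier-Stokes system; the grid size and coefficients are arbitrary choices for illustration:

```python
# Sketch of the discretization step described above, applied to the
# 1-D heat equation du/dt = alpha * d2u/dx2. The time derivative is
# replaced by a forward difference and the space derivative by a
# central difference, so the PDE becomes algebra at each grid point.

alpha, dx, dt = 1.0, 0.1, 0.0025   # alpha*dt/dx^2 = 0.25 <= 0.5 for stability
n = 11                             # number of grid points
u = [0.0] * n
u[n // 2] = 1.0                    # initial spike of heat at the middle point

for _ in range(100):               # march the solution forward in time
    new = u[:]
    for i in range(1, n - 1):      # interior grid points; ends held at 0
        # d2u/dx2 ~ (u[i+1] - 2*u[i] + u[i-1]) / dx^2
        new[i] = u[i] + alpha * dt / dx**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u = new

# The spike has diffused into a smooth, symmetric profile, known
# only as numbers at the 11 grid points -- the "pointillist" answer.
print([round(v, 4) for v in u])
```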

The end product of the CFD process is thus a collection of numbers, in contrast to a closed-form analytical solution (equations). However, in the long run, the objective of most engineering analyses, closed-form or otherwise, is a quantitative description of the problem: that is, numbers. Along these lines, in 1856, the famous British scientist James Clerk Maxwell wrote: "All the mathematical sciences are founded on relations between physical laws and laws of numbers, so that the aim of exact science is to reduce the problems of nature to the determination of quantities by operations with numbers."[766] Well over a century later, it is worth noting how well Maxwell captured the essence of CFD: operations with numbers.

Note that computational fluid dynamics results in solutions for the flow only at the distinct points in the flow called grid points, which were identified earlier. In a CFD solution, grid points are either initially distributed throughout the flow and/or generated during the course of the solution (called an "adaptive grid"). This is in theoretical contrast with a closed-form analytical solution for the flow, where the solution is in the form of equations that allow the calculation of the flow variables at any point of one's choosing; that is, an analytical solution is like a continuous answer spread over the whole flow field. Closed-form analytical solutions may be likened to a traditionalist Dutch master's painting consisting of continuous brush strokes, while a CFD solution is akin to a French pointillist's painting consisting of multicolored dots made with a brush tip.

Generating a grid is an essential part of the art of CFD. The spacing between grid points and the geometric ways in which they are arrayed are critical to obtaining an accurate numerical CFD solution. Poor grids almost always ensure poor CFD solutions. Though good grids do not guarantee good CFD solutions, they are essential for useful solutions. Grid generation is a discipline all by itself, a subspecialty of CFD. And grid generation can become very labor-intensive—for some flows over complex three-dimensional configurations, it may take months to generate a proper grid.

To summarize, the Navier-Stokes equations, the governing equations of fluid dynamics, have been in existence for more than 160 years, their creation a triumph of derivative insight. But few knew how to analytically solve them except for a few simple cases. Because of their complexity, they thus could not serve as a practical, widely employed tool in the engineer's arsenal. It took the invention of the computer to make that possible. And because it did so, it likewise permitted the advent of computational fluid dynamics. So how did the idea of numerical solutions to the Navier-Stokes equations evolve?

Early Use and Continuing Development of NASTRAN

The first components of NASTRAN became operational at Goddard in May 1968. Distribution to other Centers, training, and a debugging period followed through 1969 and into 1970.[816] With completion of the initial development, "the management of NASTRAN was transferred to the Langley Research Center. The NASTRAN Systems Management Office (NSMO) was established in the Structures Division at Langley October 4, 1970."[817] Initial public release followed just 1 month later, in November 1970.

NSMO responsibilities included:[818]

• Centralized program development (advisory committees).

• Coordinating user experiences (bimonthly NASTRAN Bulletin and annual Users’ Colloquia).

• System maintenance (error correction and essential improvements).

• Development and addition of new capability.

• NASTRAN-focused research and development (R&D).

The actual distribution of NASTRAN to the public was handled by the Computer Software Management and Information Center (COSMIC), NASA's clearinghouse for software distribution (which is described in a subsequent section of this paper). The price at initial release was $1,700, "which covers reproducing and supplying the necessary system tapes and documentation."[819] Documentation was published in four volumes, each with a distinct purpose: one for users, one for programmers who would be involved in maintenance and subsequent development, a theory manual, and finally a volume of demonstration problems. (The 900-page user's manual could be obtained from COSMIC for $10, if purchased separately from the program itself. The author assumes that the other volumes were similarly priced.)[820]

[Figure: NASTRAN user community profile in 1974. The 2,272 total users spanned NASA Centers, other Government, aerospace and aircraft companies, computer corporations, manufacturing, engineering consultants, automotive firms, universities, and others. NASA.]

Things were happening quickly. Within the first year after public release, NASTRAN was installed on over 60 machines across the United States. There were urgent needs requiring immediate attention. "When NSMO was established in October 1970, there existed a dire need for maintenance of the NASTRAN system. With the cooperation of Goddard Space Flight Center, an interim maintenance contract was negotiated with Computer Sciences Corporation through a contract in effect at GSFC. This contract provided for the essential function of error correction until a contract for full time maintenance could be negotiated through an open competition. The interim maintenance activity was restricted to the correction of over 75 errors reported to the NSMO, together with all associated documentation changes. New thermal bending and hydroelastic elements previously developed by the MacNeal-Schwendler Corporation under contract to GSFC were also installed. Levels 13 and 14 were created for government testing and evaluation. The next version of NASTRAN to be released to the public. . . will be built upon the results of this interim maintenance activity and will be designated Level 15," according to a status report to the user community in 1971.[821]

In June 1971, the contract for full-time maintenance was awarded to the MacNeal-Schwendler Corporation, which then opened an office near Langley. A bug reporting and correction system was established. Bell Aerospace Company received a contract to develop new elements and a thermal analysis capability. Other efforts were underway to improve efficiency and execution time. A prioritized list of future upgrades was started, with input from all of the NASA Centers. However, for the time being, the pace of adding new capability would be limited by the need to also keep up with essential maintenance.[822]

By 1975, NASTRAN was installed on 269 computers. The estimated composition of the user community (based on a survey taken by the NSMO) is illustrated here.

By this time, the NSMO was feeling the pressure of trying to keep up with maintenance, bug fixes, and requested upgrades from a large and rapidly growing user community. There was also a need to keep up with changing hardware technology. Measures under consideration included improvements to the Error Correction Information System (ECIS); more user involvement in the development of improvements (although this would also require effort to enforce coding standards and interface requirements, and to develop procedures for verification and implementation); and a price increase to help support the NSMO's maintenance costs and also possibly recoup some of the NASTRAN development costs. COSMIC eventually changed its terms for all software distribution to help offset the costs of maintenance.

An annual NASTRAN Users’ Colloquium was initiated, the first of which occurred approximately 1 year after initial public release. Each Colloquium usually began with an overview from the NSMO on NASTRAN status, including usage trends, what to expect in the next release, and planned changes in NASTRAN management or terms of distribution. Other papers covered experiences and lessons learned in deployment, installation, and training; technical presentations on new types of elements or new solution capabilities that had recently been, were being, or could be, implemented; evaluation and comparison of NASTRAN with test data or other analysis methods; and user experiences and applications. (The early NASTRAN Users’ Colloquia proceedings were available from the National Technical Information Service for $6.)

The first Colloquia were held at Langley and hosted by the NSMO staff. As the routine became more established and the user community grew, the Colloquia were moved to different Centers and cochaired, usually by the current NSMO Manager and a representative from the hosting Center. There were 21 Users' Colloquia, at which 429 papers were presented. The breakdown of papers by contributing organization is shown here. (Note: collaborative papers are counted under each contributing organization, so the sum of the subtotals exceeds the overall total.)

ORGANIZATIONS PRESENTING PAPERS AT NASTRAN USERS' COLLOQUIA

TOTAL PAPERS: 429

NASA SUBTOTAL: 91
    Goddard: 33
    Langley: 35
    Other NASA: 23

INDUSTRY SUBTOTAL: 274
    Computer and software companies: 104
    Aircraft and spacecraft industry: 116
    Nonaerospace industry: 54

UNIVERSITIES: 26

OTHER GOVERNMENT SUBTOTAL: 91
    Air Force: 10
    Army: 15
    Navy: 61
    National Laboratories: 5

Computing companies were typically involved in theory, modeling technique, resolution of operational issues, and capability improvements (sometimes on contracts to NASA or other agencies), but also collaborated with "user" organizations, assisting with NASTRAN application to problems of interest. All participants were actively involved in the improvement of NASTRAN, as well as its application.

Major aircraft companies—Boeing, General Dynamics, Grumman, Lockheed, McDonnell-Douglas, Northrop, Rockwell, and Vought—were frequent participants, presenting a total of 70 papers. Smaller aerospace companies also began to use NASTRAN. Gates Learjet modeled the Lear 35/36 wing as a test case in 1976 and then used NASTRAN in the design phase of the Lear 28/29 and Lear 55 business jets.[823] Beechcraft used NASTRAN in the design of the Super King Air 200 twin turboprop and the T-34C Mentor military trainer.[824] Dynamic Engineering, Inc. (DEI) began using NASTRAN in the design and analysis of wind tunnel models in the 1980s.[825]

Nonaerospace applications appeared almost immediately. By 1972, NASTRAN was being used in the automotive industry, in architectural engineering, by the Department of Transportation, and by the Atomic Energy Commission. The NSMO had "received expressions of interest in NASTRAN from firms in nearly every West European country, Japan, and Israel.”[826] That same year, "NASTRAN was chosen as the principal analytical tool” in the design and construction of the 40-story Illinois Center Plaza Hotel building.[827]

Other nonaerospace applications included:

• Nuclear power plants.

• Automotive industry, including tires as well as primary structure.

• Ships and submarines.

• Munitions.

• Acoustic and electromagnetic applications.

• Chemical processing plants.

• Steam turbines and gas turbines.

• Marine structures.

• Electronic circuitry.

B. F. Goodrich, General Motors, Tennessee Eastman, and Texas Instruments were common presenters at the Colloquia. Frequent Government participants, apart from the NASA Centers, included the David Taylor Naval Ship Research & Development Center, the Naval Underwater Systems Center, the U.S. Army Armament Research & Development Command, and several U.S. Army arsenals and laboratories.[828]

Technical improvements, too numerous to describe in full, were continually being made. At introduction (Level 12), NASTRAN offered linear static and dynamic analysis. There were two main classes of new capability: analysis routines and structural elements. Developments were often tried out on an experimental basis by users and reported on at the Colloquia before being incorporated into standard NASTRAN. Evaluations and further improvements to the capability would typically follow. In addition, of course, there were bug fixes and operational improvements. A few key developments are identified below. Where dates are given, they represent the initial introduction of a capability into standard NASTRAN, not to imply full maturity:

• Thermal analysis: initial capability introduced at Level 15 (1973).

• Pre- and post-processing: continuous.

• Performance improvements: continuous.

• Adaptation to new platforms and operating systems: continuous. (The earliest mention the author has found of NASTRAN running on a PC is 1992.[829])

• New elements: continuous. Level 15 included a dummy structural element to facilitate user experimentation.

• Substructuring: the decomposition of a larger model into smaller models that could be constructed, manipulated, and/or analyzed independently. It was identified as an important need when NASTRAN was first introduced. Initial substructuring capability was introduced at Level 15 in 1973.

• Aeroelastics and flutter: studies were conducted in the early 1970s. Initial capability was introduced in Level 16 by 1976. NASTRAN aeroelastic, flutter, and gust load analysis uses a doublet-lattice aerodynamic method, which approximates the wing as an initially flat surface for the aerodynamic calculation (it does not include camber or thickness). The calculation is much simpler than full-fledged computational fluid dynamics (CFD) analysis but neglects many real flow effects as well as configuration geometry details. Accuracy is provided by using correction factors to match the static characteristics of the doublet-lattice model to higher fidelity data from flight test, wind tunnel test, and/or CFD. One of the classic references on correcting lifting surface predictions is a paper by J. P. Giesing, T. P. Kalman, and W. P. Rodden of McDonnell-Douglas, on contract to NASA, in 1976.[830]

• Automated design and analysis: automated fully stressed design was introduced in Level 16 (1976). Design automation is a much broader field than this, however, and most attempts to further automate design, analysis, and/or optimization have taken the form of applications outside of, and interfacing with, NASTRAN. In many industries, automated design has become routine; in others, the status of automated design remains largely experimental, primarily because of the inherent complexity of design problems.[831]

• Nonlinear problems: geometric and/or material. Geometric nonlinearity is introduced, for example, when displacements are large enough to change the geometric configuration of the structure in significant ways. Material nonlinearity occurs when local stresses exceed the linear elastic limit. Applications of nonlinear analysis include engine hot section parts experiencing regions of local plasticity, automotive and aircraft crash simulation, and lightweight space structures that may experience large elastic deformations—to name a few. Studies and experimental implementations were made during the 1970s. There are many different classes of nonlinear problems encompassed in this category, requiring a variety of solutions, many of which were added to standard NASTRAN through the 1980s.
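The substructuring entry above can be illustrated with static condensation (often called Guyan reduction), the classic way interior degrees of freedom are eliminated so a component is represented only by its boundary stiffness. The tiny spring-chain example below is a sketch of the idea, not NASTRAN's actual implementation:

```python
# Sketch of static condensation for substructuring: partition the
# stiffness matrix into boundary (b) and interior (i) blocks and
# eliminate the interior unknowns. Here, a 3-node chain of two springs
# (stiffness k each), with the middle node condensed out.

def condense_interior(k=10.0):
    # Full stiffness for nodes [u1, u2, u3], springs u1-u2 and u2-u3:
    #   K = k * [[1, -1, 0], [-1, 2, -1], [0, -1, 1]]
    # Partition with u2 as the interior degree of freedom.
    K_bb = [[k, 0.0], [0.0, k]]   # boundary-boundary block (u1, u3)
    K_bi = [-k, -k]               # boundary-interior coupling column
    K_ii = 2.0 * k                # interior-interior block (a scalar here)
    # Reduced boundary stiffness: K_bb - K_bi * K_ii^-1 * K_ib
    return [[K_bb[r][c] - K_bi[r] * K_bi[c] / K_ii for c in range(2)]
            for r in range(2)]

# Two springs of stiffness k in series condense to a single effective
# spring of stiffness k/2 acting between the two boundary nodes.
print(condense_interior(10.0))  # [[5.0, -5.0], [-5.0, 5.0]]
```

The reduced matrix is exact for static loads applied at the boundary, which is what makes independently built substructures composable into a larger model.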

NASTRAN Users’ Colloquia, 1971-1993

Note: This appendix includes a list of the dates and locations of the NASTRAN Users’ Colloquia and NASTRAN applications presented at the Colloquia by “nontraditional” users, i.e., industry other than aerospace, Government agencies other than NASA, and universities. Not every paper from these sources is listed, only those that represent applications. Many other papers were presented on modeling techniques, capability improvements, etc., which are not listed.

NASTRAN USERS’ COLLOQUIA DATES AND LOCATIONS

#

YEAR

DATE

LOCATION

CHAIRPERSON(S) / OTHER NOTES

1 st

1971

Sept. 13-15

NASA Langley

J. Philip Raney (NASTRAN SMO)

2nd

1972

Sept. 11-12

NASA Langley

J. Philip Raney

3rd

1973

Sept. 11-12

NASA Langley

<not available>

4th

1975

Sept. 9-11

NASA Langley

Deene J. Weidman

5th

1976

Oct. 5-6

NASA Ames

Deene J. Weidman

6th

1977

Oct. 4-6

NASA Lewis

Deene J. Weidman (Langley) and Christos Chamis (Lewis)

7th

1978

Oct. 4-6

NASA Marshall

Deene J. Weidman (Langley) and Robert L. McComas (Marshall)

8th

1979

Oct. 30-31

NASA Goddard

Robert L. Brugh (COSMIC) Reginal

9th

1980

Oct. 22-23

NASA Kennedy

Robert L. Brugh (COSMIC) and Henry Harris (KSC)

Note: From this point on, locations were no longer NASA Centers, individual chairs/co-chairs are not identified in the proceedings, and the NASA Scientific & Technical Information (STI) Branch (or program) is listed in the proceedings as the responsible organization.

10th

1982

May 13-14

New Orleans, LA

Co-chairs not identified.

11th

1983

May 2-6

San Francisco, CA

12th

1984

May 7-11

Orlando, FL

13th

1985

May 6-10

Boston, MA

14th

1986

May 5-9

San Diego, CA

15th

1987

May 4-8

Kansas City, MO

16th

1988

Apr. 25-29

Arlington, VA

17th

1989

Apr. 24-28

San Antonio, TX

18th

1990

Apr. 23-27

Portland, OR

COSMIC, under the STI Branch.

19th

1991

Apr. 22-26

Williamsburg, VA

20th

1992

Apr. 27-May 1

Colorado Springs, CO

21st

1993

Apr. 26-30

Tampa, FL


NONAEROSPACE INDUSTRY APPLICATIONS OF NASTRAN PRESENTED AT USERS’ COLLOQUIA

YEAR

COMPANY

DESCRIPTION

1972

Westenhoff and Novick

Analysis and design of on-grade railroad track support.

General Motors

NASTRAN and in-house code for automo­tive structures.

Westinghouse (Hanford)

Fuel handling machinery for reactors.

Kleber-Colombes

Tires.

Control Data Corp (CDC)

Structural analysis of 40-story building.

Computer Sciences Corporation (CSC)

Structural dynamic and thermal analysis of a nuclear reactor vessel support system.

1975

B. F. Goodrich

Tires.

Exxon

Petroleum processing machinery.

Littleton Rsch & Eng, with CDC

Propeller-induced ship vibration.

Westinghouse (Hanford)

Seismic analysis of nuclear reactor structures.

Reactor Centrum Nederland & Hazameyer B. V.

Electromagnetic field problems.

General Motors

Modeling and analysis of acoustic cavities.

1976

Sargent & Lundy (2 papers, 1 with CSC)

Deformations of thick cylinders (power plants); seismic analysis of nuclear power plant control panel.

EBASCO Services, with Universal Analytics

Concrete cracking.

1977

Sperry Marine with Univ VA

Analysis of pressure vessels.

1978

Tennessee Eastman Co.

NASTRAN uses in petrochemical industry.

EBASCO Services with Grumman (2)

Tokamak Fusion Test Reactor toroidal field coil and vacuum vessel structures.

B. F. Goodrich

Rubber sonar dome window.

1979

B. F. Goodrich

Belt tensioning.

1980

Ontario Hydro

Seismic analysis.

NKF Engineering

Problems involving enforced boundary motion.

Tennessee Eastman

Analysis of heat-transfer fluid fill pipe failures.

1982

B. F. Goodrich

Bead area contact load at tire-wheel interface.

1984

Tennessee Eastman

Support system for large compressor.

Hughes Offshore

Bolted marine riser structure.

1985

John Deere

Use of COSMIC NASTRAN in design department.

1986

Texas Instruments

Nonlinear magnetic circuits.

1987

Texas Instruments

Forces on magnetized bodies.

NKF Eng.

HVAC duct hanger systems.

1988

Tiernay Turbines

Stress and vibration analysis of gas turbine components.

Texas Instruments

Magnetostatic nonlinear model of printhead.

1989

Deutsch Metal Components

General product line improvement (hydraulics, pneumatics, other power system components).

Intergraph

NASTRAN in an integrated conceptual design environment.

Dynacs Eng.

Flexible multibody dynamics and control (NASTRAN with TREETOPS).

Texas Instruments

Micromechanical deformable mirror.

1990

Analex Corp., with NASA Lewis

Low velocity impact analysis.

1991

Tennessee Eastman

Distillation tray structures.

1993

Butler Analyses

Seismic analysis.

OTHER GOVERNMENT AGENCY NASTRAN APPLICATIONS PRESENTED AT USERS’ COLLOQUIA

YEAR

GOVERNMENT

AGENCY

DESCRIPTION

1971

Naval Air Dev Ctr

F-14A boron horizontal stabilizer static and dynamic.

U. S. Army Air Mobility R&D Lab (USAAMRDL) with NASA Langley

NASTRAN in structural design optimization.

1975

Naval Weapons Center

Modeling and analysis of damaged wings.

Naval Underwater Systems Center (NUSC)

Transient analysis of bodies with moving boundaries.

(David Taylor) Naval Ship Rsch & Dev Ctr (DTNSRDC)

Dynamic analysis of submerged structures.

Argonne Nat Lab

Fluid-coupled concentric cylinders (nuclear reactors).

1976

DTNSRDC

Underwater shock response.

NUSC

Fluid-structure interactions.

DTNSRDC

Submerged structures.

USAAMRDL with Boeing Vertol

Thermal and structural analysis of helicopter transmission.

U. S. Army, Watervliet

Crack problems.

1977

DTNSRDC

Finite element solutions for free surface flows.

NUSC

Analysis of magnetic fields.

U. S. Army, Watervliet

Large-deformation analysis of fiber-wrapped shells.

1978

Wright-Patterson AFB

Ceramic structures.

DTNSRDC

Magnetostatic field problems.

1979

U. S. Army Armament Rsch & Dev Command (USAARDC) (2)

Stress concentrations in screw heads; elastic-plastic analysis.

NUSC

Dynamically loaded periodic structures.

1980

NUSC, with A. O. Smith

Ring element dynamic stresses.

USAARDC (2)

Simulated damage to UH-1B tailboom; elastic-plastic analysis.

1982

DTNSRDC

Magnetic field problems.

NUSC

Axisymmetric fluid structure interaction problems.

USAARDC

Analysis of overloaded breech ring.

1983

DTNSRDC

Fluid-filled elastic piping systems.

NUSC (2)

Wave propagation through plates (2).

U. S. Army Benet Lab

Elastic-plastic analysis of annular plates.

1984

Dept. of the Navy

Acoustic scattering from submerged structures.

1985

WPAFB with Rockwell

NASTRAN in a computer aided design system.

Naval Wpns Ctr

Missile inertia loads.

WPAFB

Simulation of nuclear overpressures.

DTNSRDC

Loss factors, frequency-dependent damping treatment.

U. S. Army (Harry Diamond Lab) with Advanced Tech & Rsch

Transient analysis of fuze assembly.

DTNSRDC

Magnetic heat pump.

1986

DTNSRDC (3)

Multidisciplinary design; acoustics (2).

Naval Ocean Sys Ctr (2)

Stress concentrations; flutter of low aspect ratio wings.

NUSC

Surface impedance analysis.

1987

DTNSRDC (2)

Computer animation of modal and transient vibrations; analysis of ship structures to under­water explosion shocks.

DTNSRDC & NRL

Acoustic scattering.

NUSC

Patrol boat subject to planing loads.

1988

David Taylor Rsch Ctr (DTRC — renamed)

Static preload effects in acoustic radiation and scattering.

1989

DTRC

Low frequency resonances of submerged structures.

1990

DTRC

Scattering from fluid-filled structures.

1991

Los Alamos Nat Lab

Computer animation of displacements.

DTRC (2)

Transient fluid-structure interactions.

1992

Naval Surf. Warfare Ctr

Vibration and shock of laminated composite plates.

DTRC

Acoustics of axisymmetric fluid regions.

1993

U. S. Air Force Wright Lab

Design optimization studies.

UNIVERSITY APPLICATIONS OF NASTRAN PRESENTED AT USERS’ COLLOQUIA

Year

University

Description

1971

Old Dominion Univ

Space Shuttle dynamics model.

1972

Old Dominion Univ

Vibrations of cross-stiffened ship’s deck.

Louisiana Tech Univ

NASTRAN as a teaching aid.

1975

Univ of MD

NASTRAN for simultaneous para­bolic equations.

Univ NB & Mayo Graduate School of Medicine, with IBM

Stress analysis of left ventricle of the heart.

Univ MD, with Army, Frankford Arsenal

Nonlinear analysis of cartridge case neck separation malfunction.

1977

Univ VA

(with Sperry Marine, listed in the “Other Industry” table)

1978

Univ MO Rolla

NASTRAN in education and research.

1982

Air Force Inst Tech

Elastic aircraft airloads.

1985

Univ of GA

Agricultural engineering teaching and research.

Clemson Univ

Plated bone fracture gap motion.

Univ MO

Fillet weld stress.

1987

Univ of Naples, with NASA Langley

NASTRAN for prediction of aircraft interior noise.

1989

GWU, with DTRC

Electromagnetic fields and waves.

NASTRAN Reference Sources

At time of writing, these are available from the NASA Technical Reports Server at http://ntrs.nasa.gov:

NASTRAN: Users’ Experiences, NASA TM-X-2378, 1971.

NASTRAN: Users’ Experiences (2nd), NASA TM-X-2637, 1972.

NASTRAN: Users’ Experiences (4th), NASA TM-X-3278, 1975.

NASTRAN: Users’ Experiences (5th), NASA TM-X-3428, 1976.

Sixth NASTRAN Users’ Colloquium, NASA CP-2018, 1977.

Seventh NASTRAN Users’ Colloquium, NASA CP-2062, 1978.

Eighth NASTRAN Users’ Colloquium, NASA CP-2131, 1979.

Ninth NASTRAN Users’ Colloquium, NASA CP-2151, 1980.

Tenth NASTRAN Users’ Colloquium, NASA CP-2249, 1982.

Eleventh NASTRAN Users’ Colloquium, NASA CP-2284, 1983.

Twelfth NASTRAN Users’ Colloquium, NASA CP-2328, 1984.

Thirteenth NASTRAN Users’ Colloquium, NASA CP-2373, 1985.

Fourteenth NASTRAN Users’ Colloquium, NASA CP-2419, 1986.

Fifteenth NASTRAN Users’ Colloquium, NASA CP-2481, 1987.

Sixteenth NASTRAN Users’ Colloquium, NASA CP-2505, 1988.

Seventeenth NASTRAN Users’ Colloquium, NASA CP-3029, 1989.

Eighteenth NASTRAN Users’ Colloquium, NASA CP-3069, 1990.

Nineteenth NASTRAN Users’ Colloquium, NASA CP-3111, 1991.

Twentieth NASTRAN Users’ Colloquium, NASA CP-3145, 1992.

Twenty-First NASTRAN Users’ Colloquium, NASA CP-3203, 1993.

Appendix B:

Ablative and Radiative Structures

Atmosphere entry of satellites takes place above Mach 20, only slightly faster than the reentry speed of an ICBM nose cone. The two phenomena nevertheless are quite different. A nose cone slams back at a sharp angle, decelerating rapidly and encountering heating that is brief but very severe. Entry of a satellite is far easier, taking place over a number of minutes.

To learn more about nose cone reentry, one begins by considering the shape of a nose cone. Such a vehicle initially has high kinetic energy because of its speed. Following entry, as it approaches the ground, its kinetic energy is very low. Where has it gone? It has turned into heat, which has been transferred both into the nose cone and into the air that has been disturbed by passage of the nose cone. It is obviously of interest to transfer as much heat as possible into the surrounding air. During reentry, the nose cone interacts with this air through its bow shock. For effective heat transfer into the air, the shock must be very strong. Hence the nose cone cannot be sharp like a church steeple, for that would substantially weaken the shock. Instead, it must be blunt, as H. Julian Allen of the National Advisory Committee for Aeronautics (NACA) first recognized in 1951.[1030]
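The energy argument above can be made concrete with a rough calculation showing why most of the heat must be dumped into the air: the kinetic energy of a reentering body dwarfs what its own structure could absorb. The mass, speed, and temperature rise below are illustrative assumptions, not historical figures.

```python
# Rough energy bookkeeping for a reentering nose cone.
# Mass, speed, and allowable temperature rise are assumed for illustration.
mass_kg = 1000.0          # assumed nose cone mass
velocity_ms = 7000.0      # roughly Mach 20+ at altitude

kinetic_energy_j = 0.5 * mass_kg * velocity_ms**2    # 2.45e10 J

# Heat that an equal mass of copper (a heat-sink shield) could absorb
# over an assumed ~1300 K temperature rise.
c_copper = 385.0          # J/(kg*K), specific heat of copper
delta_t = 1300.0          # K, assumed allowable rise
absorbable_j = mass_kg * c_copper * delta_t          # ~5.0e8 J

ratio = kinetic_energy_j / absorbable_j
print(f"kinetic energy:            {kinetic_energy_j:.2e} J")
print(f"copper sink could absorb:  {absorbable_j:.2e} J")
print(f"ratio: {ratio:.0f}x")      # roughly 49x for these assumed numbers
```

However the numbers are chosen, the gap is one to two orders of magnitude, which is what drives the blunt-body strategy of shedding heat into the airflow.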

Now that we have this basic shape, we can consider methods for cooling. At the outset of the Atlas ICBM program, in 1953, the simplest method of cooling was the heat sink, with a thick copper shield absorbing the heat of reentry. An alternative approach, the hot structure, called for an outer covering of heat-resistant shingles that were to radiate away the heat. A layer of insulation, inside the shingles, was to protect the primary structure. The shingles, in turn, overlapped and could expand freely.

A third approach, transpiration cooling, sought to take advantage of the light weight and high heat capacity of boiling water. The nose cone was to be filled with this liquid; strong g-forces during deceleration in the atmosphere were to press the water against the hot inner skin. The skin was to be porous, with internal steam pressure forcing the fluid through the pores and into the boundary layer. Once injected, steam was to carry away heat. It would also thicken the boundary layer, reducing its temperature gradient and hence its rate of heat transfer. In effect, the nose cone was to stay cool by sweating.

An Atlas ICBM with a low-drag ablatively cooled nose cone. USAF.

Still, each of these approaches held difficulties. Transpiration cooling was poorly understood as a topic for design. The hot-structure concept raised questions of suitably refractory metals along with the prospect of losing the entire nose cone if a shingle came off. Heat sinks appeared to promise high weight. But they seemed the most feasible way to proceed, and early Atlas designs specified use of a heat-sink nose cone.[1031]

Atlas was an Air Force program. A separate set of investigations was underway within the Army, which supported hot structures but raised problems with both heat sink and transpiration. This work anticipated the independent studies of General Electric’s George Sutton, with both efforts introducing an important new method of cooling: ablation. Ablation amounted to having a nose cone lose mass by flaking off when hot. Such a heat shield could absorb energy through latent heat, when melting or evaporating, and through sensible heat, with its temperature rise. In addition, an outward flow of ablating volatiles thickened the boundary layer, which diminished the heat flow. Ablation promised all the advantages of transpiration cooling, within a system that could be considerably lighter and yet more capable, and that used no fluid.[1032]

Though ablation proved to offer a key to nose cone reentry, experiments showed that little if any ablation was to be expected under the relatively mild conditions of satellite entry. But satellite entry involved high total heat input, while its prolonged duration imposed a new requirement for good materials properties as insulators. The materials also had to stay cool through radiation. It thus became possible to critique the usefulness of ICBM nose cone ablators for the new role of satellite entry.[1033]

Heat of ablation, in British thermal units (BTU) per pound, had been a standard figure of merit. Water, for instance, absorbs nearly 1,000 BTU/lb when it vaporizes as steam at 212 °F. But for satellite entry, with little energy being carried away by ablation, heat of ablation could be irrelevant. Phenolic glass, a fine ICBM material with a measured heat of 9,600 BTU/lb, was unusable for a satellite because it had an unacceptably high thermal conductivity. This meant that the prolonged thermal soak of a satellite entry could have enough time to fry a spacecraft.

Teflon, by contrast, had a measured heat only one-third as large. It nevertheless made a superb candidate because of its excellent properties as an insulator.[1034]
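The figures of merit just quoted are easier to compare in SI units. A small sketch converting them (the Teflon value is inferred from the text's "one-third as large"; the conversion factor is the standard 1 BTU/lb = 2,326 J/kg):

```python
# Convert the text's heats of ablation from BTU/lb to SI.
BTU_PER_LB_TO_J_PER_KG = 2326.0   # 1 BTU/lb = 2,326 J/kg

materials = {
    "water (vaporization)": 1000.0,   # BTU/lb, rounded value from the text
    "phenolic glass": 9600.0,         # measured ICBM ablator value
    "Teflon": 9600.0 / 3.0,           # "one-third as large" per the text
}

for name, q_btu_lb in materials.items():
    q_mj_kg = q_btu_lb * BTU_PER_LB_TO_J_PER_KG / 1e6
    print(f"{name:>22}: {q_btu_lb:7.0f} BTU/lb = {q_mj_kg:5.1f} MJ/kg")
```

The conversion makes the text's point stark: phenolic glass absorbs roughly 22 MJ/kg to Teflon's 7, yet for a slow satellite entry the deciding property is thermal conductivity, not heat of ablation.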

Hence it became possible to treat the satellite problem as an extension of the ICBM problem. With appropriate caveats, the experience and research techniques of the ICBM program could carry over to this new realm. The Central Intelligence Agency was preparing to recover satellite spacecraft at the same time that the Air Force was preparing to fly full-size Atlas nose cones, with both being achieved in April 1959.

The Army flew a subscale nose cone to intermediate range in August 1958, which President Dwight Eisenhower displayed during a November news conference. The Air Force became the first to fly a nose cone to intercontinental range, in July 1958. Both flights carried a mouse, and both mice survived their reentry, but neither was recovered. Better success came the following April, when an Atlas launched the full-size RVX-1 nose cone, and the Discoverer II reconnaissance spacecraft returned safely through the atmosphere, though it fell into Russian, not American, hands.[1035]

European FBW Research Efforts

By the late 1960s, several European research aircraft using partial fly-by-wire flight control systems were in development. In Germany, the supersonic VJ-101 experimental Vertical Take-Off and Landing (VTOL) fighter technology demonstrator, with its swiveling wingtip-mounted afterburning turbojet engines, and the Dornier Do-31 VTOL jet transport used analog computer-controlled partial fly-by-wire flight control systems. American test pilots were intimately involved with both programs. George W. Bright flew the VJ-101 on its first flight in 1963, and NASA test pilot Drury W. Wood, Jr., headed the cooperative U. S.-German Do-31 flight-test program that included representatives from NASA Langley and NASA Ames. Wood flew the Do-31 on its first flight in February 1967. He received the Society of Experimental Test Pilots’ Iven C. Kincheloe Award in 1968 for his role on the Do-31 program.[1142] By that time, NASA test pilot Robert Innis was chief test pilot on the Do-31 program. The German VAK-191B VTOL fighter technology flight demonstrator flew in 1971. Its triply redundant analog flight control system assisted the pilot in operating its flight control surfaces, engines, and reaction control nozzles, but the aircraft retained a mechanical backup capability. Later in its flight-test program, the VAK-191B was used to support development of the partial fly-by-wire flight control system used in the multinational Tornado multirole combat aircraft that first flew in August 1974.[1143]

In the U. K., a Hawker Hunter T.12 two-seat jet trainer was converted into a fly-by-wire testbed by the Royal Aircraft Establishment. It incorporated a three-axis, quadruplex analog Integrated Flight Control System (IFCS) and a “sidearm” controller. The mechanical backup flight control system was retained.[1144] First flown in April 1972, the Hunter was eventually lost in a takeoff accident.

In the USSR, a Sukhoi Su-7U two-seat jet fighter trainer was modified with forward destabilizing canards as the Projekt 100LDU fly-by-wire testbed. It first flew in 1968 in support of the Sukhoi T-4 supersonic bomber development effort. Fitted with a quadruple-redundant fly-by-wire flight control system with a mechanical backup capability, the four-engine Soviet Sukhoi T-4 prototype first flew in August 1972. Reportedly, the fly-by-wire flight control system provided much better handling qualities than the T-4’s mechanical backup system. Four T-4 prototypes were built, but only the first aircraft ever flew. Designed for Mach 3.0, the T-4 never reached Mach 2.0 before the program was canceled after only 10 test flights and about 10 hours of flying time.[1145] In 1973-1974, the Projekt 100LDU testbed was used to support development of the fly-by-wire flight control system for the Sukhoi T-10 supersonic fighter prototype program. The T-10 was the first pure Soviet fly-by-wire aircraft with no mechanical backup; it first flew on May 27, 1977. On July 7, 1978, the T-10-2 (second prototype) entered a rapidly divergent pitch oscillation at supersonic speed. Yevgeny Solovyev, distinguished test pilot and Hero of the Soviet Union, had no chance to eject before the aircraft disintegrated.[1146] In addition to a design problem in the flight control system, the T-10’s aerodynamic configuration was found to be incapable of providing the required longitudinal, lateral, and directional stability under all flight conditions. After major redesign, the T-10 evolved into the highly capable Sukhoi Su-27 family of supersonic fighters and attack aircraft.[1147]