
Ablative and Radiative Structures

Atmosphere entry of satellites takes place above Mach 20, only slightly faster than the speed of reentry of an ICBM nose cone. The two phenomena nevertheless are quite different. A nose cone slams back at a sharp angle, decelerating rapidly and encountering heating that is brief but very severe. Entry of a satellite is far easier, taking place over a number of minutes.

To learn more about nose cone reentry, one begins by considering the shape of a nose cone. Such a vehicle initially has high kinetic energy because of its speed. Following entry, as it approaches the ground, its kinetic energy is very low. Where has it gone? It has turned into heat, which has been transferred both into the nose cone and into the air that has been disturbed by passage of the nose cone. It is obviously of interest to transfer as much heat as possible into the surrounding air. During reentry, the nose cone interacts with this air through its bow shock. For effective heat transfer into the air, the shock must be very strong. Hence the nose cone cannot be sharp like a church steeple, for that would substantially weaken the shock. Instead, it must be blunt, as H. Julian Allen of the National Advisory Committee for Aeronautics (NACA) first recognized in 1951.[1030]
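
To make the energy argument concrete, the sketch below computes the specific kinetic energy of a body at a representative reentry speed and compares it with the heat that a copper heat sink could plausibly absorb. The speed, copper properties, and allowable temperature rise are illustrative assumptions, not figures from the text.

```python
# Back-of-the-envelope: why reentry heat cannot all go into the vehicle.
# All numbers are representative assumptions for illustration.

v = 6700.0               # m/s, roughly Mach 20 reentry speed (assumed)
ke_per_kg = 0.5 * v**2   # specific kinetic energy, J/kg
print(f"Kinetic energy per kg of vehicle: {ke_per_kg/1e6:.1f} MJ/kg")
# ~22 MJ/kg

# Heat a copper heat sink can absorb before approaching its melting point:
c_copper = 385.0          # J/(kg*K), specific heat of copper
delta_T = 1000.0          # K, assumed allowable temperature rise
q_sink = c_copper * delta_T
print(f"Copper heat sink capacity: {q_sink/1e6:.2f} MJ/kg")
# ~0.4 MJ/kg, nearly two orders of magnitude less than the kinetic energy.

# The conclusion mirrors the text: the great majority of the energy must be
# dumped into the surrounding air via a strong bow shock, which is why
# Allen's blunt-body shape works.
frac = q_sink / ke_per_kg
print(f"Fraction a kilogram of heat sink could absorb per kilogram of "
      f"vehicle: {frac:.1%}")
```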

Now that we have this basic shape, we can consider methods for cooling. At the outset of the Atlas ICBM program, in 1953, the simplest method of cooling was the heat sink, with a thick copper shield absorbing the heat of reentry. An alternative approach, the hot structure, called for an outer covering of heat-resistant shingles that were to radiate away the heat. A layer of insulation, inside the shingles, was to protect the primary structure. The shingles, in turn, overlapped and could expand freely.

A third approach, transpiration cooling, sought to take advantage of the light weight and high heat capacity of boiling water. The nose cone was to be filled with this liquid; strong g-forces during deceleration in the atmosphere were to press the water against the hot inner skin. The skin was to be porous, with internal steam pressure forcing the fluid


An Atlas ICBM with a low-drag ablatively cooled nose cone. USAF.

 


 

through the pores and into the boundary layer. Once injected, steam was to carry away heat. It would also thicken the boundary layer, reducing its temperature gradient and hence its rate of heat transfer. In effect, the nose cone was to stay cool by sweating.

Still, each of these approaches held difficulties. Transpiration cooling was poorly understood as a topic for design. The hot-structure concept raised questions of suitably refractory metals, along with the prospect of losing the entire nose cone if a shingle came off. Heat sinks promised to be heavy. But they seemed the most feasible way to proceed, and early Atlas designs specified use of a heat-sink nose cone.[1031]

Atlas was an Air Force program. A separate set of investigations was underway within the Army, which supported hot structures but raised problems with both heat sink and transpiration. This work anticipated the independent studies of General Electric's George Sutton, with both efforts introducing an important new method of cooling: ablation. Ablation amounted to having a nose cone lose mass by flaking off when hot. Such a heat shield could absorb energy through latent heat, when melting or evaporating, and through sensible heat, with its temperature rise. In addition, an outward flow of ablating volatiles thickened the boundary layer, which diminished the heat flow. Ablation promised all the advantages of transpiration cooling, within a system that could be considerably lighter and yet more capable, and that used no fluid.[1032]

Though ablation proved to offer a key to nose cone reentry, experiments showed that little if any ablation was to be expected under the relatively mild conditions of satellite entry. But satellite entry involved high total heat input, while its prolonged duration imposed a new requirement for good materials properties as insulators. They also had to stay cool through radiation. It thus became possible to critique the usefulness of ICBM nose cone ablators for the new role of satellite entry.[1033]

Heat of ablation, in British thermal units (BTU) per pound, had been a standard figure of merit. Water, for instance, absorbs nearly 1,000 BTU/lb when it vaporizes as steam at 212 °F. But for satellite entry, with little energy being carried away by ablation, heat of ablation could be irrelevant. Phenolic glass, a fine ICBM material with a measured heat of 9,600 BTU/lb, was unusable for a satellite because it had an unacceptably high thermal conductivity. This meant that the prolonged thermal soak of a satellite entry could have enough time to fry a spacecraft.

Teflon, by contrast, had a measured heat only one-third as large. It nevertheless made a superb candidate because of its excellent properties as an insulator.[1034]
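
The figures quoted above invite a quick comparison. The sketch below contrasts the energy absorbed through ablation alone for the materials named in the text; the shield mass is a hypothetical value, and the decisive factor for satellites, thermal conductivity, is noted in comments rather than modeled.

```python
# Comparing candidate heat shields by heat of ablation alone, using the
# figures quoted in the text (BTU/lb). The shield mass is a hypothetical
# value chosen only to make the comparison concrete.

BTU_PER_LB_TO_J_PER_KG = 2326.0   # unit conversion

materials = {
    "phenolic glass": 9600,            # BTU/lb, from the text
    "Teflon": 3200,                    # "one-third as large," per the text
    "water (steam at 212 F)": 1000,    # approximate, from the text
}

shield_mass_kg = 50.0  # hypothetical shield mass

for name, h_abl in materials.items():
    energy_J = h_abl * BTU_PER_LB_TO_J_PER_KG * shield_mass_kg
    print(f"{name:>24}: {energy_J/1e9:.2f} GJ absorbed by ablation")

# For ICBM entry, the higher the heat of ablation, the better. For
# satellite entry, little ablation occurs, so this figure of merit can be
# irrelevant: phenolic glass loses to Teflon because its high thermal
# conductivity lets the prolonged heat soak reach the spacecraft.
```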

Hence it became possible to treat the satellite problem as an extension of the ICBM problem. With appropriate caveats, the experience and research techniques of the ICBM program could carry over to this new realm. The Central Intelligence Agency was preparing to recover satellite spacecraft at the same time that the Air Force was preparing to fly full-size Atlas nose cones, with both being achieved in April 1959.

The Army flew a subscale nose cone to intermediate range in August 1957, which President Dwight Eisenhower displayed during a November news conference. The Air Force became the first to fly nose cones to intercontinental range, with two flights in July 1958. Both flights carried a mouse, and both mice survived their reentry, but neither was recovered. Better success came the following April, when an Atlas launched the full-size RVX-1 nose cone, and the Discoverer II reconnaissance spacecraft returned safely through the atmosphere—though it fell into Russian, not American, hands.[1035]

European FBW Research Efforts

By the late 1960s, several European research aircraft using partial fly-by-wire flight control systems were in development. In Germany, the supersonic VJ-101 experimental Vertical Take-Off and Landing (VTOL) fighter technology demonstrator, with its swiveling wingtip-mounted afterburning turbojet engines, and the Dornier Do-31 VTOL jet transport used analog computer-controlled partial fly-by-wire flight control systems. American test pilots were intimately involved with both programs. George W. Bright flew the VJ-101 on its first flight in 1963, and NASA test pilot Drury W. Wood, Jr., headed the cooperative U.S.-German Do-31 flight-test program that included representatives from NASA Langley and NASA Ames. Wood flew the Do-31 on its first flight in February 1967. He received the Society of Experimental Test Pilots' Iven C. Kincheloe Award in 1968 for his role on the Do-31 program.[1142] By that time, NASA test pilot Robert Innis was chief test pilot on the Do-31 program. The German VAK-191B VTOL fighter technology flight demonstrator flew in 1971. Its triply redundant analog flight control system assisted the pilot in operating its flight control surfaces, engines, and reaction control nozzles, but the aircraft retained a mechanical backup capability. Later in its flight-test program, the VAK-191B was used to support development of the partial fly-by-wire flight control system that was used in the multinational Tornado multirole combat aircraft that first flew in August 1974.[1143]

In the U.K., a Hawker Hunter T.12 two-seat jet trainer was converted into a fly-by-wire testbed by the Royal Aircraft Establishment. It incorporated a three-axis, quadruplex analog Integrated Flight Control System (IFCS) and a "sidearm" controller. The mechanical backup flight control system was retained.[1144] First flown in April 1972, the Hunter was eventually lost in a takeoff accident.

In the USSR, a Sukhoi Su-7U two-seat jet fighter trainer was modified with forward destabilizing canards as the Projekt 100LDU fly-by-wire testbed. It first flew in 1968 in support of the Sukhoi T-4 supersonic bomber development effort. Fitted with a quadruple-redundant fly-by-wire flight control system with a mechanical backup capability, the four-engine Soviet Sukhoi T-4 prototype first flew in August 1972. Reportedly, the fly-by-wire flight control system provided much better handling qualities than the T-4's mechanical backup system. Four T-4 prototypes were built, but only the first aircraft ever flew. Designed for Mach 3.0, the T-4 never reached Mach 2.0 before the program was canceled after only 10 test flights and about 10 hours of flying time.[1145] In 1973-1974, the Projekt 100LDU testbed was used to support development of the fly-by-wire flight control system for the Sukhoi T-10 supersonic fighter prototype program. The T-10 was the first pure Soviet fly-by-wire aircraft with no mechanical backup; it first flew on May 27, 1977. On July 7, 1978, the T-10-2 (second prototype) entered a rapidly divergent pitch oscillation at supersonic speed. Yevgeny Solovyev, distinguished test pilot and Hero of the Soviet Union, had no chance to eject before the aircraft disintegrated.[1146] In addition to a design problem in the flight control system, the T-10's aerodynamic configuration was found to be incapable of providing required longitudinal, lateral, and directional stability under all flight conditions. After major redesign, the T-10 evolved into the highly capable Sukhoi Su-27 family of supersonic fighters and attack aircraft.[1147]

NASA Observations

NASA observations on some of the more serious issues encountered in early testing of the AFTI/F-16 asynchronous digital flight control system are worthy of note. For example, an unknown failure in the Stores Management System on flight No. 15 caused it to request DFCS mode changes at a rate of 50 times per second. The DFCS could not keep up and responded at a rate of 5 mode changes per second. The pilot reported that the aircraft felt like it was in severe turbulence. The flight was aborted, and the aircraft landed safely. Subsequent analysis showed that if the aircraft had been maneuvering at the time, the DFCS would have failed. A subsequent software modification improved the DFCS's immunity to this failure mode.[1202]

A highly significant flight control law anomaly was encountered on AFTI/F-16 flight No. 36. Following a planned maximum rudder "step and hold" input by the pilot, a 3-second departure from controlled flight occurred. Sideslip angle exceeded 20 degrees, normal acceleration fluctuated from -4 g to +7 g, angle of attack varied between -10 and +20 degrees, and the aircraft rolled 360 degrees. Severe structural loads were encountered, with the vertical tail fin exceeding its design load. During the out-of-control situation, all control surfaces were operating at rate limits, and failure indications were received from the hydraulics and canard actuators. The failures were transient and reset after the pilot regained control. The problem was traced to a fault in the programmed flight control laws. It was determined that the aerodynamic model used to develop the control laws did not accurately model the nonlinear nature of yaw stability variations as a function of higher sideslip angles. The same inaccurate control laws were also used in the real-time AFTI/F-16 ground flight simulator. An additional complication was caused when the side fuselage-mounted air-data probes were blanked by the canard at the high angles of attack and sideslip encountered. This resulted in incorrect air data values being passed to the DFCS. Operating asynchronously, the different flight control system channels took different paths through the flight control laws. Analysis showed these faults could have caused complete failure of the DFCS and reversion to analog backup.[1203] Subsequently, the canards were removed from the command path to prevent the AFTI/F-16 from reaching such high yaw angles.
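
As a conceptual illustration of the asynchrony problem described above (not the actual AFTI/F-16 control laws), the sketch below shows how two redundant channels sampling the same rapidly changing signal at slightly offset times can command measurably different outputs; all signals, gains, and timing values are invented for the example.

```python
import math

# Two redundant flight-control channels run the same control law but are
# not synchronized: channel B samples a fixed 5 ms after channel A.
# With a rapidly changing input, their outputs diverge even though
# neither channel is faulty. All values are invented for illustration.

GAIN = 2.0          # hypothetical control-law gain
SKEW = 0.005        # 5 ms sampling skew between channels (assumed)

def sensor(t):
    """Hypothetical rapidly varying sideslip signal, degrees."""
    return 20.0 * math.sin(2.0 * math.pi * 10.0 * t)   # 10 Hz content

def control_law(beta):
    """Trivial stand-in for a control law: surface command, degrees."""
    return GAIN * beta

for i in range(5):
    t = i * 0.02                      # channel A's 50 Hz frame times
    cmd_a = control_law(sensor(t))
    cmd_b = control_law(sensor(t + SKEW))
    print(f"t={t*1000:5.1f} ms  A={cmd_a:+7.2f}  B={cmd_b:+7.2f}  "
          f"disagreement={abs(cmd_a - cmd_b):5.2f} deg")

# A voter comparing A and B sees a disagreement that depends on *when*
# each channel happened to sample: the "random, unpredictable
# characteristic" Mackall describes, which made exhaustive testing of
# every possible time relationship impossible.
```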

AFTI/F-16 flight-testing revealed numerous other flight control problems of a similar nature. These prompted NASA engineer Dale Mackall to report: "The asynchronous design of the [AFTI/F-16] DFCS introduced a random, unpredictable characteristic into the system. The system became untestable in that testing for each of the possible time relationships between the computers was impossible. This random time relationship was a major contributor to the flight test anomalies. Adversely affecting testability and having only postulated benefits, asynchronous operation of the DFCS demonstrated the need to avoid random, unpredictable, and uncompensated design characteristics." Mackall also provided additional observations that would prove to be highly valuable in developing, validating, and certifying future software-intensive digital fly-by-wire flight control system designs. Urging more formal approaches and rigorous control over the flight control system software design and development process, Mackall reported:

The criticality and number of anomalies discovered in flight and ground tests owing to design oversights are more significant than those anomalies caused by actual hardware failures or software errors. . . . As the operational requirements of avionics systems increase, complexity increases. . . . If the complexity is required, a method to make system designs more understandable, more visible, is needed. . . qualification of such a complex system as this, to some given level of reliability, is difficult. . . the number of test conditions becomes so large that conventional testing methods would require a decade for completion. The fault-tolerant design can also affect overall system reliability by being made too complex and by adding characteristics which are random in nature, creating an untestable design.[1204]

NF-15B Advanced Control Technology: Air Force S/MTD

NASA Dryden used an NF-15B research aircraft on various research projects from 1993 through early 2009. Originally designated the TF-15, it was the first two-seat F-15 Eagle built by McDonnell Douglas, the sixth F-15 off the assembly line, and the oldest F-15 flying up to its retirement. First flown in July 1973, the aircraft was initially used for F-15 developmental testing and evaluation as part of the F-15 combined test force at Edwards AFB in the 1970s. In the 1980s, the aircraft was extensively modified for the Air Force's Short Takeoff and Landing/Maneuver Technology Demonstrator (S/MTD) program. Modifications included the integration of a digital fly-by-wire control system, canards mounted on the engine inlets ahead of the wings,[1273] and two-dimensional thrust-vectoring, thrust-reversing nozzles. The vectoring nozzles redirected engine exhaust either up or down, giving greater pitch control and additional aerodynamic braking capability. Designated NF-15B to reflect its status as a highly modified research aircraft, the aircraft was used in the S/MTD program from 1988 until 1993. During Air Force S/MTD testing, a 25-percent reduction in takeoff roll was demonstrated. With thrust-reversing, the aircraft could stop in just 1,650 feet. Takeoffs using thrust-vectoring produced nose rotation speeds as low as 40 knots, resulting in greatly reduced takeoff distances. Additionally, thrust-reversing produced extremely rapid in-flight decelerations, a feature valuable during close-in combat.[1274]

NASA Researchers Work to Reduce Noise in Future Aircraft Design

It's a noisy world out there, especially around the Nation's busiest airports, so NASA is pioneering new technologies and aircraft designs that could help quiet things down a bit. Every source of aircraft noise, from takeoff to touchdown, is being studied for ways to reduce the racket, which is expected to get worse as officials predict that air traffic will double in the next decade or so.

"It’s always too noisy. You have to always work on making it quieter,” said Edmane Envia, an aerospace engineer at NASA’s Glenn Research Center in Cleveland. "You always have to stay a step ahead to fulfill the needs and demands of the next generation of air travel.”[1366]

Noise reduction research is part of a broader effort by NASA's Aeronautics Research Mission Directorate in Washington to lay a technological foundation for a new generation of airplanes that are not as noisy, fly farther on less fuel, and may operate out of airports with much shorter runways than exist today. There are no clear solutions yet to these tough challenges, but neither is there a shortage of ideas from NASA researchers, who are confident positive results eventually will come.[1367]

"Our goal is to have the technologies researched and ready, but ulti­mately it’s the aircraft industry, driven by the market, that makes the deci­sion when to introduce a particular generation of aircraft,” Envia said.

NASA organized its research to look three generations into the future, with conceptual aircraft designs that could be introduced 10, 20, or 30 years from now. The generations are called N+1, N+2, and N+3. Each generation represents a design intended to be flown a decade or so later than the one before it and is to feature increasingly sophisticated methods for delivering quieter aircraft and jet engines.[1368]

"Think of the Boeing 787 Dreamliner as N and the N+1 as the next generation aircraft after that," Envia said.

The N+1 is an aircraft with familiar parts, including a conventional tube-shaped body, wings, and a tail. Its jet engines still are attached to the wings, as with an N aircraft, but those engines might be on top of the wings, not underneath. Conceptual N+2 designs throw out convention and basically begin with a blank computer screen, with design engineers blending the line between the body, wing, and engines into a more seamless, hybrid look. What an N+3 aircraft might look like is anyone's guess right now. But with its debut still 30 years away, NASA is sponsoring research that will produce a host of ideas for consideration. The Federal Aviation Administration's current guidelines for overall aircraft noise footprints constitute the design baseline for all of NASA's N aircraft concepts. That footprint summarizes in a single number, expressed as a decibel, the noise heard on the ground as an airplane lands, takes off, and then cuts back on power for noise abatement. The noise footprint extends ahead and behind the aircraft and to a certain distance on either side. NASA's design goal is to make each new aircraft generation quieter than today's airplanes by a set number of decibels. The N+1 goal is 32 decibels quieter than a fully noise-compliant Boeing 737, while the N+2 goal is 42 decibels quieter than a Boeing 777. So far, the decibel goal for the N+1 aircraft has been elusive.[1369]
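
Because the noise goals are stated as decibel reductions, a short calculation shows how large they are in linear terms. The sketch below converts the quoted 32 and 42 dB goals into ratios of acoustic power; the conversion formula is standard acoustics, while the goals themselves are the ones quoted above.

```python
# Converting NASA's noise-reduction goals from decibels to linear ratios
# of acoustic power. The dB figures come from the text; the formula is
# the standard definition of the decibel.

goals = {
    "N+1 (vs. noise-compliant Boeing 737)": 32,
    "N+2 (vs. Boeing 777)": 42,
}

for label, dB in goals.items():
    power_ratio = 10 ** (dB / 10)     # ratio of acoustic power
    print(f"{label}: {dB} dB quieter -> "
          f"~1/{power_ratio:,.0f} of the baseline acoustic energy")

# 32 dB is roughly a 1,585-fold reduction in acoustic energy, and 42 dB
# roughly a 15,850-fold reduction, which is one way to see why the text
# calls the N+1 goal elusive.
```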

"What makes our job very hard is that we are asked to reduce noise but in ways that do not adversely impact how high, far or fast an air­plane is capable of flying,” Envia said.

NASA researchers have studied changes in the operation, shape, or materials from which key noise contributors are made. The known suspects include the airframe, wing flaps, and slats, along with components of the jet engine, such as the fan, turbine, and exhaust nozzle. While some reductions in noise can be realized with design changes in these components, the overall impact still falls short of the N+1 goal by about 6 decibels. Envia said that additional work with design and operation of the jet engine's core may make up the difference, but that a lot more work needs to be done in the years to come. Meanwhile, reaching the N+2 goals may or may not prove easier to achieve.[1370]

"We're starting from a different aircraft configuration, from a clean sheet, that gives you the promise of achieving even more aggressive goals," said Russell Thomas, an aerospace engineer at Langley Research Center. "But it also means that a lot of your prior experience is not directly applicable, so the problem gets a lot harder from that point of view. You may have to investigate new areas that have not been researched heavily in the past."[1371]

Efforts to reduce noise in the N+2 aircraft have focused on the airframe, which blends the wing and fuselage together, greatly reducing the number of parts that extend into the airflow to cause noise. Also, according to Thomas, the early thinking on the N+2 aircraft is that the jet engines will be on top of the vehicle, using the airplane body to shield most of the noise from reaching the ground.

"We’re on course to do much more thorough research to get higher quality numbers, better experiments, and better prediction methods so we can really understand the acoustics of this new aircraft configura­tion,” Thomas said.

As for the N+3 aircraft, it remains too early to say how NASA researchers will use technology not yet invented to reduce noise levels to their lowest ever.

"Clearly significant progress has been made over the years and air­planes are much quieter than they were 20 years ago,” Envia said, not­ing that further reductions in noise will require whole new approaches to aircraft design. "It is a complicated problem and so it is a worthy challenge to rise up to.”

First Generation DOE-NASA Wind Turbine Systems (Mod-0A and Mod-1) (1977-1982)

The Mod-0 testbed wind turbine system was upgraded from 100 kilowatts to a 200-kilowatt system that became the Mod-0A. Installation of the first Mod-0A system was completed in November 1977, with one additional machine installed each year through 1980 at four locations: Clayton, NM; Culebra, PR; Block Island, RI; and Oahu, HI. This first generation of wind turbines completed its planned experimental operations in 1982 and was removed from service.

The basic components and systems of the Mod-0A consisted of the rotor and pitch-change mechanism, drive train, nacelle equipment, yaw drive mechanism and brake, tower and foundation, electrical system and components, and control systems. The rotor consisted of the blades, hub, pitch-change mechanism, and hydraulic system. The drive train included the low-speed shaft, speed increaser, high-speed shaft, belt drive, fluid coupling, and rotor blades. The electrical system and components were the generator, switchgear, transformer, utility connection, and slip rings. The control systems were the blade pitch, yaw, generator control, and safety system.[1502]

Similar to the Mod-0 testbed, the Mod-0A horizontal-axis machines had a 125-foot-diameter downwind rotor mounted on a 100-foot rigid pinned truss tower. However, this more powerful first generation of turbines had a rated power of 200 kilowatts at a wind speed of 18 miles per hour and made 40 revolutions per minute. The turbine had two aluminum blades that were each 59.9 feet long. The Westinghouse Electric Corporation was selected, by competitive bidding, as the contractor for building the Mod-0A, and Lockheed was selected to design and build the blades. NASA and Westinghouse personnel were involved in the installation, site tests, and checkout of the wind turbine systems.
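
The rotor figures quoted above imply a substantial blade tip speed, which a one-line calculation makes explicit; the numbers are those given in the text, and the formula is simple kinematics.

```python
import math

# Blade tip speed of the Mod-0A rotor, from the figures in the text:
# 125-ft rotor diameter turning at 40 revolutions per minute.

diameter_ft = 125.0
rpm = 40.0

tip_speed_fps = math.pi * diameter_ft * rpm / 60.0
tip_speed_mph = tip_speed_fps * 3600.0 / 5280.0

print(f"Tip speed: {tip_speed_fps:.0f} ft/s (~{tip_speed_mph:.0f} mph)")
# ~262 ft/s, or roughly 178 mph: an order of magnitude faster than the
# 18 mph rated wind speed, which helps explain why blade structure and
# fatigue dominated the engineering problems noted later in this section.
```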

The primary goal of the Mod-0A wind turbine was to gain experience and obtain early operational performance data with horizontal-axis wind turbines in power utility environments, including resolving issues relating to power generation quality and safety, and establishing procedures for system startup, synchronization, and shutdown. This goal included demonstrating automatic operation of the turbine and assessing machine compatibility with utility power systems, as well as determining reliability and maintenance requirements. To accomplish this primary goal, small power utility companies or remote location sites were selected in order to study problems that might result from a significant percentage of power input into a power grid. NASA engineers also wanted to determine the reaction of the public and power utility companies to the operation of the turbines. The Mod-0A systems were online collectively for over 38,000 hours, generating over 3,600 megawatt-hours of electricity into power utility networks. NASA determined that while some early reliability and rotor-blade life problems needed to be corrected, overall the Mod-0A wind turbine systems accomplished the engineering and research objectives of this phase of the program and made significant contributions to the second- and third-generation machines that were to follow the Mod-0A and Mod-1 projects. Interface of the Mod-0A with the power utilities demonstrated satisfactory operating results during the initial tests from November 1977 to March 1978. The wind turbine was successfully synchronized to the utility network in an unattended mode. Also, dynamic blade loads during the initial operating period were in good agreement with the calculations using the MOSTAB computer code. Finally, successful testing on the Mod-0 provided the database that led the way for private development of a wide range of small wind turbines that were placed in use during the late 1980s.[1503]
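
Those fleet totals translate into an average output that is easy to check; the hours and energy figures are the ones quoted above, and the comparison against the 200-kilowatt rating is illustrative.

```python
# Average output of the Mod-0A fleet, from the totals in the text:
# over 38,000 hours online, over 3,600 megawatt-hours generated.

hours_online = 38_000.0
energy_mwh = 3_600.0
rated_kw = 200.0

avg_kw = energy_mwh * 1000.0 / hours_online
print(f"Average output while online: ~{avg_kw:.0f} kW")
print(f"Fraction of the 200-kW rating: ~{avg_kw / rated_kw:.0%}")
# ~95 kW, or roughly half of rated power: a reminder that wind turbines
# spend much of their online time below the rated wind speed of 18 mph.
```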

Closely related to the Mod-0A turbine was the Mod-1 project, for which planning started in 1976, with installation of the machine taking place in May 1979. In addition to noise level and television interference testing (see below), the primary objective of the Mod-1 program was to demonstrate the feasibility of remote utility wind turbine control. Three technical assessments were planned to evaluate machine performance, interface with the power utility, and examine the effects on the environment. This system was a one-of-a-kind prototype that was much larger than the Mod-0A, with a rated power of 2,000 kilowatts (later reduced to 1,350) and a blade swept diameter of 200 feet. The Mod-1 was the largest wind turbine constructed up to that time. Considerable testing was done on the Mod-1 because the last experience with megawatt-size wind turbines was nearly 40 years earlier with the Smith-Putnam 1.25-megawatt machine, a very different design. Full-span blade pitch was used to control the rotor speed at a constant 35 revolutions per minute (later reduced to 23 rpm). The machine was mounted on a steel tubular truss tower that was 12 feet square at the top and 48 feet square at the bottom. General Electric was the prime contractor for designing, fabricating, and installing the Mod-1. The two steel blades were manufactured by the Boeing Engineering and Construction Company. There was also a set of composite rotor blades manufactured by the Kaman Aerospace Corporation that was fully compatible for testing on the Mod-1 system. The wind turbine, which was in Boone, NC, was tested with the Blue Ridge Electrical Membership Corporation from July 1979 to January 1981. The machine, operating in fully automatic synchronized mode, fed into the power network within utility standards.[1504]

One of the testing objectives of this first-generation prototype was to determine noise levels and any potential electromagnetic interference with microwave relay, radio, and television associated with mountainous terrain. These potential problems were among those identified by an initial study undertaken by NASA Lewis, General Electric, and the Solar Energy Research Institute. An analytical model of acoustic emissions from the rotor, developed at NASA Lewis, led to a recommendation that the rotor speed be reduced from 35 to 23 revolutions per minute, and the 2,000-kilowatt generator was replaced with a 1,350-kilowatt, 1,200-rpm generator. This change to the power train made a significant reduction in measured rotor noise. During the noise testing, however, the Mod-1, like the Mod-0A, experienced a failure in the low-speed shaft of the drive train and, because NASA engineers determined that both machines had accomplished their purposes, they were removed from the utility sites. Lessons learned from the engineering studies and testing of the first-generation wind turbine systems indicated the need for technological improvements to make the machines more acceptable for large utility applications. These lessons proved valuable in the design, construction, and operation of the next generation of DOE-NASA wind turbines. Other contributions from the Mod-1 program included low-cost wind turbine design concepts and metal and composite blade design and fabrication. Also, computer codes were verified for dynamic and loads analysis.
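
A quick calculation, using the rotor diameter and speeds given above, shows how much the noise-driven change reduced blade tip speed; the aeroacoustic benefit is stated only qualitatively in the text, and the strong sensitivity of rotor noise to tip speed noted in the comment is a general rule of thumb, not a figure from the source.

```python
import math

# Effect of the Mod-1 noise fix: rotor speed cut from 35 to 23 rpm
# on the 200-ft-diameter rotor (figures from the text).

diameter_ft = 200.0

def tip_speed_fps(rpm):
    return math.pi * diameter_ft * rpm / 60.0

before = tip_speed_fps(35.0)
after = tip_speed_fps(23.0)

print(f"Tip speed at 35 rpm: {before:.0f} ft/s")
print(f"Tip speed at 23 rpm: {after:.0f} ft/s ({after/before:.0%} of before)")
# ~367 ft/s down to ~241 ft/s. Rotor aerodynamic noise rises steeply with
# tip speed (a common rule of thumb is roughly the 5th power), so a ~34%
# tip-speed cut plausibly explains the "significant reduction in measured
# rotor noise" reported in the text.
```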

Although the Mod-1 was a one-of-a-kind prototype, there was a conceptual design that was designated the Mod-1A. The conceptual design incorporated improvements identified during the Mod-1 project that, because of schedule and budget constraints, could not be used in fabrication of the Mod-1 machine. One of the improvements involved ideas to lessen the weight of the wind turbine. Also, one of the proposed configurations made use of a teetered hub and upwind blades with partial span control. Although the Mod-1A was not built, many of the ideas were incorporated into the second- and third-generation DOE-NASA wind turbines.

Validation in Flight

As Whitcomb was discovering the area rule, Convair in San Diego, CA, was finalizing its design of a new supersonic all-weather fighter-interceptor, begun in 1951, for a substantial Air Force contract. The YF-102 Delta Dagger combined Mach's ideal high-speed bullet-shaped fuselage and the delta wings pioneered on the Air Force's Convair XF-92A research airplane with the new Pratt & Whitney J57 turbojet, the world's most powerful at 10,000 pounds thrust. Armed entirely with air-to-air and forward-firing missiles, the YF-102 was to be the prototype for America's first piloted air defense weapons system.[165] Convair heard of the NACA's transonic research at Langley and feared that its investment in the YF-102 and the payoff with the Air Force would come to naught if the new airplane could not fly supersonic.[166] Convair's reputation and a considerable Department of Defense contract were at stake.

A delegation of Convair engineers visited Langley in mid-August 1952, where the engineers witnessed a disappointing test of a YF-102 model in the 8-foot HST. The data indicated, according to the NACA at least, that the YF-102 was unable to reach Mach 1 in level flight. The transonic drag exhibited near Mach 1 simply counteracted the ability of the J57 to push the YF-102 through the sound barrier. They asked Whitcomb what could be done, and he unveiled his new rule of thumb for the design of supersonic aircraft. The data, Whitcomb's solution, and what was perceived as continued skepticism on the part of his boss, John Stack, left the Convair engineers unconvinced as they went back to San Diego with their model.[167] They did not yet see the area rule as the solution to their perceived problem.

Nevertheless, Whitcomb worked with Convair's aerodynamicists to incorporate the area rule into the YF-102. New wind tunnel evaluations in May 1953 revealed a nominal decrease in transonic drag. He traveled to San Diego in August to assist Convair in reshaping the YF-102 fuselage. In October, the NACA notified Convair that the modified design, soon to be designated the YF-102A, was capable of supersonic flight.[168]

Despite the fruitful collaboration with Whitcomb, Convair hedged its bets, continuing production of the prototype YF-102 in the hope that it was a supersonic airplane. The new delta wing fighter with a straight fuselage was unable to reach its designed supersonic speeds during its full-scale flight evaluation and tests by the Air Force in January 1954. The disappointing performance of the YF-102, which reached only Mach 0.98 in level flight, confirmed the NACA's wind tunnel findings and validated Whitcomb's research that led to his area rule. The Air Force realistically shifted the focus toward production of the YF-102A after NACA Director Hugh Dryden assured Chief of Staff of the Air Force Gen. Nathan F. Twining that the NACA had developed a solution to the problem and that the information had been made available to Convair and the rest of the aviation industry. The Air Force ordered Convair to stop production of the YF-102 and retool to manufacture the improved area rule design.[169]

It took Convair only 7 months to prepare the prototype YF-102A, thanks to the collaboration with Whitcomb. Overall, the new fighter-interceptor was much more refined than its predecessor, with sharper features at the redesigned nose and canopy. An even more powerful version of the J57 turbojet engine produced 17,000 pounds thrust with afterburner. The primary difference was the contoured fuselage that resembled a wasp's waist and the obvious fairings that expanded the circumference of the tail. With an area rule fuselage, the newly redesigned YF-102A easily went supersonic. Convair test pilot Pete Everest undertook the second flight test on December 21, 1954, during which the YF-102A climbed away from Lindbergh Field, San Diego, and "slipped easily past the sound barrier and kept right on going." More importantly, the YF-102A's top speed was 25 percent faster, at Mach 1.2.[170]

The Air Force resumed the contract with Convair, and the manufacturer delivered 975 production F-102A air defense interceptors, with the first entering active service in mid-1956. The fighter-interceptors equipped Air Defense Command and United States Air Forces in Europe squadrons during the critical period of the late 1950s and 1960s. The increase in performance was dramatic. The F-102A could cruise at 1,000 mph and at a ceiling of over 50,000 feet. It replaced three subsonic interceptor aircraft in the Air Force inventory—the North American F-86D Sabre, the Northrop F-89 Scorpion, and the Lockheed F-94 Starfire—which were 600-650 mph aircraft with a 45,000-foot ceiling. Besides speed and altitude, the F-102A was better equipped to face the Soviet Myasishchev Bison, Tupolev Bear, and Tupolev Badger nuclear-armed bombers with a full complement of Hughes Falcon guided missiles and Mighty Mouse rockets. Convair incorporated the F-102A's armament in a drag-reducing internal weapons bay.

When the F-102A entered operational service, the media made much of the fact that the F-102 "almost ended up in the discard heap" because of its "difficulties wriggling its way through the sound barrier." With an area rule fuselage, the F-102A "swept past the sonic problem." The downside to the F-102A's supersonic capability was the noise from its J57 turbojet. The Air Force regularly courted civic leaders from areas near Air Force bases through familiarization flights so that they would understand the mission and role of the F-102A.[171]


The Air Force’s F-102 got a whole new look after implementing Richard Whitcomb’s area rule. At left is the YF-102 without the area rule, and at right is the new YF-102A version. NASA.

Convair produced the follow-on version, the F-106 Delta Dart, from 1956 to 1960. The Dart was capable of twice the speed of the Dagger with its Pratt & Whitney J75 engine.[172] The F-106 was the primary air defense interceptor defending the continental United States up to the early 1980s. Convair built upon its success with the F-102A and the F-106, two cornerstone aircraft in the Air Force's Century series of aircraft, and introduced more area rule aircraft: the XF2Y-1 Sea Dart and the B-58 Hustler.[173]

The YF-102/YF-102A exercise was valuable in demonstrating the importance of the area rule and of the NACA to the aviation industry and the military, especially when a major contract was at stake.[174] Whitcomb's revolutionary and intuitive idea enabled a new generation of supersonic military aircraft, and it spread throughout the industry. Like Convair, Chance Vought redesigned its F8U Crusader carrier-based interceptor with an area rule fuselage. The first production aircraft appeared in September 1956, and deliveries began in March 1957. Four months later, in July 1957, Marine Maj. John H. Glenn, Jr., as part of Project Bullet, made a record-breaking supersonic transcontinental flight from Los Angeles to New York in 3 hours 23 minutes. Crusaders served in Navy and Marine fighter and reconnaissance squadrons throughout the 1960s and 1970s, with the last airframes leaving operational service in 1987.[175]

Grumman was the first to design and manufacture an area rule airplane from the ground up. Under contract to produce a carrier-based supersonic fighter, the F9F-9 Tiger, for the Navy, Grumman sent a team of engineers to Langley, just 2 weeks after receiving Whitcomb's pivotal September 1952 report, to learn more about transonic drag. Whitcomb traveled to Bethpage, NY, in February 1953 to evaluate the design before wind tunnel and rocket-model tests were to be conducted by the NACA. The tests revealed that the new fighter was capable of supersonic speeds in level flight with no appreciable transonic drag. Grumman constructed the prototype, and in August 1954, with company test pilot C. H. "Corky" Meyer at the controls, the F9F-9 achieved Mach 1 in level flight without the assistance of an afterburner, a good 4 months before the supersonic flight of the F-102A.[176] The Tiger, later designated the F11F-1, served with the fleet as a frontline carrier fighter from 1957 to 1961 and with the Navy's demonstration team, the Blue Angels.[177]

Another aircraft designed from the ground up with an area rule fuselage represented the next step in military aircraft performance in the late 1950s. The legendary Lockheed "Skunk Works" introduced the F-104 Starfighter, "the missile with a man in it," in 1954. Characterized by its short, stubby wings and needle nose, the production prototype F-104, powered by a General Electric J79 turbojet, was the first jet to exceed Mach 2 (1,320 mph) in flight, on April 24, 1956. Starfighters joined operational Air Force units in 1958. An international manufacturing scheme and sales to 14 countries in Europe, Asia, and the Middle East ensured that the Starfighter was in frontline use through the rest of the 20th century.[178]


The area rule profile of the Grumman Tiger. National Air and Space Museum.

The area rule opened the way for the further refinement of supersonic aircraft, which allowed for concentration on other areas within the synergistic system of the airplane. Whitcomb and his colleagues continued to issue reports refining the concept and giving designers more options to design aircraft with higher performance. Working alone and with researcher Thomas L. Fischetti, Whitcomb continued to refine high-speed aircraft, especially the Chance Vought F8U-1 Crusader, which evolved into one of the finest fighters of the postwar era.[179]

Spurred on by the success of the F-104, NACA researchers at the Lewis Flight Propulsion Laboratory in Cleveland, OH, estimated that innovations in jet engine design would increase aircraft speeds upward of 2,600 mph, or Mach 4, based on advanced metallurgy and the sophisticated aerodynamic design of engine inlets, including variable-geometry inlets and exhaust nozzles.[180] One thing was for certain: supersonic aircraft of the 1950s and 1960s would have an area rule fuselage.

The area rule gave the American defense establishment breathing room in the tense 1950s, when the Cold War and the constant need to possess the technological edge, real or perceived, were crucial to the survival of the free world. The design concept was a state secret at a time when no jets were known to be capable of reaching supersonic speeds, due to transonic drag. The aviation press had known about it since January 1954 and kept the secret for national security purposes. The NACA intended to make a public announcement when the first aircraft incorporating the design element entered production. Aero Digest unofficially broke the story a week early in its September 1955 issue, when it proclaimed, "The SOUND BARRIER has been broken for good," and declared the area rule the "first major aerodynamic breakthrough in the past decade." In describing the area rule and the Grumman XF9F-9 Tiger, Aero Digest stressed the bottom line for the innovation: the area rule provided the same performance with less power.[181]

The official announcement followed. Secretary of the Air Force Donald A. Quarles remarked on the CBS Sunday morning television news program Face the Nation on September 11, 1955, that the area rule was "the kind of breakthrough that makes fundamental research so very important."[182] Aviation Week declared it "one of the most significant military scientific breakthroughs since the atomic bomb."[183] These statements highlighted the crucial importance of the NACA to American aeronautics.

The news of the area rule spread out to the American public. The media likened the shape of an area rule fuselage to a "Coke bottle," a "wasp waist," an "hourglass," or the figure of actress Marilyn Monroe.[184] While the Coke bottle description of the area rule is commonplace today, the NACA contended that Dietrich Kuchemann's Coke bottle and Whitcomb's area rule were not the same and lamented the use of the term. Kuchemann's 1944 design concept pertained only to swept wings and tailored the specific flow of streamlines. Whitcomb's rule applied to any shape and contoured a fuselage to maintain an area equivalent to the entire stream tube.[185] Whitcomb actually preferred "indented."[186] One learned writer explained to readers of the Christian Science Monitor that an aircraft with an area rule slipped through the transonic barrier due to the "Huckleberry Finn technique," which the character used to suck in his stomach to squeeze through a hole in Aunt Polly's fence.[187]
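
The distinction drawn here can be made concrete. The sketch below implements the transonic area rule in its textbook form: it sums the cross-sectional areas of fuselage and wing at each station and indents the fuselage so that the total distribution stays close to a smooth low-drag profile. The body and wing geometries are invented for illustration, and the smooth target used is the Sears-Haack distribution commonly cited for minimum wave drag; none of these numbers come from the aircraft discussed in this section.

```python
import math

# Textbook illustration of Whitcomb's area rule: keep the *total*
# cross-sectional area distribution (fuselage + wing) smooth by
# indenting the fuselage where the wing adds area. Geometry is invented.

L = 60.0          # body length, ft (assumed)
S_MAX = 28.0      # peak total cross-sectional area, sq ft (assumed)

def sears_haack_area(x):
    """Smooth minimum-wave-drag target area distribution."""
    t = x / L
    return S_MAX * (4.0 * t * (1.0 - t)) ** 1.5

def wing_area(x):
    """Assumed wing cross-sectional area at station x (triangular lump)."""
    if 25.0 <= x <= 40.0:
        return 8.0 * (1.0 - abs(x - 32.5) / 7.5)
    return 0.0

for x in range(0, 61, 5):
    total_target = sears_haack_area(x)
    # Area rule: the fuselage gives up whatever area the wing contributes,
    # producing the characteristic "wasp waist" over the wing root.
    fuselage = max(total_target - wing_area(x), 0.0)
    radius = math.sqrt(fuselage / math.pi)
    print(f"x={x:3d} ft  target={total_target:5.1f}  wing={wing_area(x):4.1f}"
          f"  fuselage={fuselage:5.1f} sq ft  (r={radius:4.2f} ft)")
```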

Whitcomb quickly received just recognition from the aeronautical community for his 3-year development of the area rule. The National Aeronautics Association awarded him the Collier Trophy for 1954 for his creation of "a powerful, simple, and useful method" of reducing transonic drag and the power needed to overcome it.[188] Moreover, the award citation designated the area rule as "a contribution to basic knowledge" that increased aircraft speed and range while reducing drag and using the same power.[189] As Vice President Richard M. Nixon presented him the award at the ceremony, Whitcomb joined the other key figures in aviation history, including Orville Wright, Glenn Curtiss, and his boss, John Stack, in the pantheon of individuals crucial to the growth of American aeronautics.[190]

Besides the Collier, Whitcomb received the Exceptional Service Medal of the U.S. Air Force in 1955 and the inaugural NACA Distinguished Service Medal in 1956.[191] At the age of 35, he accepted an honorary doctor of engineering degree from his alma mater, Worcester Polytechnic Institute, in 1956.[192] Whitcomb also rose within the ranks at Langley, where he became head of the Transonic Aerodynamics Branch in 1958.

Whitcomb's achievement was part of a highly innovative period for Langley and the rest of the NACA, all of which contributed to the success of the second aeronautical revolution. Besides John Stack's involvement in the X-1 program, the NACA worked with the Air Force, Navy, and the aerospace industry on the resultant high-speed X-aircraft programs. Robert T. Jones developed his swept wing theory. Other NACA researchers generated design data on different aircraft configurations, such as variable-sweep wings, for high-speed aircraft. Whitcomb was directly involved in two of these major innovations: the slotted tunnel and the area rule.[193]

Laboratory Experiments and Sonic Boom Theory

The rapid progress made in understanding the nature and significance of sonic booms during the 1960s resulted from the synergy among flight testing, wind tunnel experiments, psychoacoustical studies, theoretical refinements, and new computing capabilities. Vital to this process was the largely free exchange of information by NASA, the FAA, the USAF, the airplane manufacturers, academia, and professional organizations such as the American Institute of Aeronautics and Astronautics (AIAA) and the Acoustical Society of America (ASA). The sharing of information even extended to potential rivals in Europe, where the Anglo-French Concorde supersonic airliner got off to a head start on the more ambitious American program.

Designing commercial aircraft has always required tradeoffs between speed, range, capacity, weight, durability, safety, and, of course, costs—both for manufacturing and operations. Balancing such factors was especially challenging with an aircraft as revolutionary as the SST. Unlike with the supersonic military aircraft of the 1950s, NASA's scientists and engineers and their partners in industry also had to increasingly consider the environmental impacts of their designs. At the Agency's aeronautical Centers, especially Langley, this meant that aerodynamicists incorporated the growing knowledge about sonic booms in their equations, models, and wind tunnel experiments.

Harry Carlson of the Langley Center had conducted the first wind tunnel experiment on sonic boom generation in 1959. As reported in December, he tested seven models of various geometrical and airplane-like shapes at differing angles of attack in Langley's original 4- by 4-foot supersonic wind tunnel at a speed of Mach 2.01. The tunnel's relatively limited interior space mandated the use of very small models to obtain sonic boom signatures: about 2 inches in length for measuring shock waves at 8 body lengths distance and only about three-quarters of an inch for trying to measure them at 32 body lengths (as close as possible to the "far field," a distance where multiple shock waves coalesce into the typical N-wave signature). Although far-field data were problematic, the overall results correlated with existing theory, such as Whitham's formulas on volume-induced overpressures and Walkden's on those caused by lift.[387] Carlson's attempt to design one of the models to alleviate the strength of the bow shock was unsuccessful, but this might be considered NASA's first attempt at boom minimization.

The small size and extreme precision needed for the models, the disruptive effects of the assemblies needed to hold them, and the extra sensitivity required of pressure-sensing devices all limited a wind tunnel's ability to measure the type of shock waves that would reach the ground from a full-sized aircraft. Even so, substantial progress continued, and the data served as a useful cross-check on flight test data and mathematical formulas.[388] For example, in 1962 Carlson used a 1-inch model of a B-58 to make the first correlation of flight test data with wind tunnel data and sonic boom theory. Results proved that wind tunnel readings, with appropriate extrapolations, could be used with some confidence to estimate sonic boom signatures.[389]

Exactly 5 years after publishing results of the first wind tunnel sonic boom experiment, Harry Carlson was able to report, "In recent years, intensive research efforts treating all phases of the problem have served to provide a basic understanding of this phenomenon. The theoretical studies [of Whitham and Walkden] have resulted in the correlations with the wind tunnel data . . . and with the flight data."[390] As for minimization, wind tunnel tests of SCAT models had revealed that some configurations (e.g., the "arrow wing") produced lower overpressures.[391] Such possibilities were soon being explored by aerodynamicists in industry, academia, and NASA. They included Langley's long-time supersonic specialist, F. Edward McLean, who had discovered extended near-field effects that might permit designing airframes for lower overpressures.[392] Of major significance (and even more potential in the future), improved data reduction methods and numerical evaluations of sonic boom theory were being adapted for high-speed processing with new computer codes and hardware, such as Langley's massive IBM 704. Using these new capabilities, Carlson, McLean, and others eventually designed the SCAT-15F, an improved SST concept optimized for highly efficient cruise.[393]
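
The Whitham- and Walkden-based estimates that Carlson validated lead, in simplified form, to well-known scaling trends: far-field N-wave overpressure grows with aircraft length but falls off with altitude roughly as the 3/4 power. The sketch below encodes those trends only; the constant of proportionality and the aircraft values are placeholders, not figures from the reports cited here.

```python
# Simplified far-field sonic boom scaling in the spirit of Whitham-based
# estimates: overpressure ~ K * (M^2 - 1)^(1/8) * l^(3/4) / h^(3/4),
# where l is aircraft length and h is altitude. K bundles shape, lift,
# and atmospheric factors and is a placeholder here; only the *trends*
# with Mach and altitude are meaningful in this sketch.

K = 40.0  # hypothetical constant chosen so outputs land near 1-2 psf

def boom_overpressure_psf(mach, length_ft, altitude_ft):
    return K * (mach**2 - 1.0) ** 0.125 * length_ft**0.75 / altitude_ft**0.75

for alt in (40_000, 50_000, 60_000, 70_000):
    dp = boom_overpressure_psf(mach=2.7, length_ft=300.0, altitude_ft=alt)
    print(f"Mach 2.7, 300-ft SST at {alt:,} ft: ~{dp:.2f} psf")

# With a 3/4-power altitude law, doubling altitude cuts the boom by only
# ~40% (2**-0.75 ~ 0.59), one reason that simply flying higher could not
# by itself solve the overland boom problem discussed below.
```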

In addition to reports and articles, NASA researchers presented findings from the growing knowledge about sonic booms in various meetings and professional symposia. One of the earliest took place September 17-19, 1963, when NASA Headquarters sponsored an SST feasibility studies review at the Langley Center—attended by Government, contractor, and airline personnel—that examined every aspect of the planned airplane. In a session on noise, Harry Carlson warned that "sonic boom considerations alone may dictate allowable minimum altitudes along most of the flight path and have indicated that in many cases the airframe sizing and engine selection depend directly on sonic boom."[394] On top of that, Harvey Hubbard and Domenic Maglieri discussed how atmospheric effects and community response to building vibrations might pose problems with the current SST sonic boom objectives (2 psf during acceleration and 1.5 psf during cruise).[395]

The conferees discussed various other technological challenges for the planned American SST, some indirectly related to the sonic boom issue. For example, because of frictional heating, an airframe covered largely with stainless steel (such as the XB-70) or titanium (such as the then-top secret A-12/YF-12) would be needed to cruise at Mach 2.7+ and over 60,000 feet, an altitude that many hoped would allow the sonic boom to weaken by the time it reached the surface. Manufacturing such a plane, however, would be much more expensive than building a Mach 2.2 SST with aluminum skin, such as the Concorde.

Despite such concerns, the FAA had already released the SST request for proposals (RFP) on August 15, 1963. Thereafter, as explained by Ed McLean, "NASA's role changed from one of having its own concepts evaluated by the airplane industry to one of evaluating the SST concepts of the airplane industry."[396] By January 1964, Boeing, Lockheed, North American, and their jet engine partners had submitted initial proposals. In retrospect, advocates of the SST were obviously hoping that technology would catch up with requirements before it went into production.

Although the SST program was now well underway, a growing awareness of the public response to booms became one factor among many that the triagency (FAA-NASA-DOD) groups of the mid-1960s, including the PAC chaired by Robert McNamara, considered in evaluating the proposed SST designs. The sonic boom issue also became the focus of a special committee of the National Academy of Sciences and attracted increasing attention from the academic and scientific community at large.

The Acoustical Society of America, made up of professionals of all fields involving sound (ranging from music to noise to vibrations), sponsored the first Sonic Boom Symposium on November 3, 1965, during its 70th meeting in—appropriately enough—St. Louis. McLean, Hubbard, Carlson, Maglieri, and other Langley experts presented papers on the background of sonic boom research and their latest findings.[397] The paper by McLean and Barrett L. Shrout included details on a breakthrough in using near-field shock waves to evaluate wind tunnel models for boom minimization, in this case a reduction in maximum overpressure in a climb profile from 2.2 to 1.1 psf. This technique also allowed the use of 4-inch models, which were easier to fabricate to the close tolerances required for accurate measurements.[398]

In addition to the scientists and engineers employed by the aircraft manufacturers, eminent researchers in academia took on the challenge of discovering ways to minimize the sonic boom, usually with support from NASA. These included the team of Albert George and A. Richard Seebass of Cornell University, which had one of the Nation's premier aeronautical laboratories. Seebass edited the proceedings of NASA's first sonic boom research conference, held on April 12, 1967. The meeting was chaired by another pioneer of minimization, Wallace D. Hayes of Princeton University, and attended by more than 60 other Government, industry, and university experts. Boeing had been selected as the SST contractor less than 4 months earlier, but the sonic boom was becoming recognized far and wide as a possibly fatal flaw for its future production, or at least for allowing it to fly supersonically over land.[399] The two most obvious theoretical ways to reduce sonic booms during supersonic cruise—flying much higher with no increase in weight or building an airframe 50 percent longer at half the weight—were not considered practical.[400] Furthermore, as apparent from a presentation by Domenic Maglieri on flight test findings, such an airplane would still have to deal with the problem of booms caused by maneuvering and accelerating, and from atmospheric conditions.[401]

The stated purpose of this conference was "to determine whether or not all possible aerodynamic means of reducing sonic boom overpressure were being explored."[402] In that regard, Harry Carlson showed how various computer programs then being used at Langley for aerodynamic analyses (e.g., lift and drag) were also proving to be a useful tool for bow wave predictions, complementing improved wind tunnel experiments for examining boom minimization concepts.[403] After presentations by representatives from NASA, Boeing, and Princeton, and follow-on discussions by other experts, some of the attendees thought more avenues of research could be explored. But many were still concerned whether low enough sonic booms were possible using contemporary technologies. Accordingly, NASA's Office of Advanced Research and Technology, which hosted the conference, established specialized research programs on seven aspects of sonic boom theory and applications at five American universities and the Aeronautical Research Institute of Sweden.[404] This mobilization of aeronautical brainpower almost immediately began to pay dividends.

Seebass and Hayes cochaired NASA's second sonic boom conference on May 9-10, 1968. It included 19 papers on the latest boom-related testing, research, experimentation, and theory by specialists from NASA and the universities. The advances made in one year were impressive. In the area of theory, for example, the straightforward linear technique for predicting the propagation of sonic booms from slender airplanes such as the SST had proven reliable, even for calculating some nonlinear (mathematically complex and highly erratic) aspects of their signatures.

Additional field testing had improved understanding of the geometrical acoustics caused by atmospheric conditions. Computational capabilities needed to deal with such complexities continued to accelerate. Aeronautical Research Associates of Princeton (ARAP), under a NASA contract, had developed a computer program to calculate overpressure signatures for supersonic aircraft in a horizontally stratified atmosphere. Offering another preview of the digital future, researchers at Ames had begun using a computer with graphic displays to perform flow-field analyses and to experiment with a dozen diverse aircraft configurations for lower boom signatures. Several other papers by academic experts, such as Antonio Ferri of New York University (a notable prewar Italian aerodynamicist who had worked at the NACA's Langley Laboratory after escaping to the United States in 1944), dealt with progress in the aerodynamic techniques to reduce sonic booms.[405]

Nevertheless, several important theoretical problems remained, such as the prediction of sonic boom signatures near a caustic (an objective of the previously described Jackass Flats testing in 1970), the diffraction of shock waves into "shadow zones" beyond the primary sonic boom carpet, nonlinear shock wave behavior near an aircraft, and the still mystifying effects of turbulence. Ira R. Schwartz of NASA's Office of Advanced Research and Technology summed up the state of sonic boom minimization as follows: "It is yet too early to predict whether any of these design techniques will lead the way to development of a domestic SST that will be allowed to fly supersonic over land as well as over water."[406]

Rather than conduct another meeting the following year, NASA deferred to a conference by NATO's Advisory Group for Aerospace Research & Development (AGARD) on aircraft engine noise and sonic boom, held in Paris during May 1969. Experts from the United States and five other nations attended this forum, which consisted of seven sessions. Three of the sessions, plus a roundtable, dealt with the status of boom research and the challenges ahead.[407] As reflected by these conferences, the three-way partnership between NASA, Boeing, and the academic aeronautical community during the late 1960s continued to yield new knowledge about sonic booms as well as technological advances in exploring ways to deal with them. In addition to more flight test data and improved theoretical constructs, much of this progress was the result of various experimental apparatuses.

The use of wind tunnels (especially Langley's 4- by 4-foot supersonic wind tunnels and the 9- by 7-foot and 8- by 7-foot supersonic test sections of Ames's Unitary Wind Tunnel complex) continued to advance the understanding of shock wave generation and of aircraft configurations that could minimize the sonic boom.[408] As two of Langley's sonic boom experts reported in 1970, the many challenges caused by nonuniform tunnel conditions, model and probe vibrations, boundary layer effects, and the precision needed for small models "have been met with general success."[409]

Also during the latter half of the 1960s, NASA and its contractors developed several new types of simulators that proved useful in studying the physical and psychoacoustic effects of sonic booms. The smallest (and least expensive) was a spark discharge system. The Langley Center and other laboratories used these "bench-type" devices for basic research into the physics of pressure waves. Langley's system created miniature sonic booms by using parabolic or two-dimensional mirrors to focus the shock waves caused by discharging high-voltage bolts of electricity between tungsten electrodes toward precisely placed microphones. Such experiments were used to verify laws of geometrical acoustics. The system's ability to produce shock waves that spread out spherically proved useful for investigating how the cone-shaped waves generated by aircraft interact with buildings.[410]

For studying the effect of temperature gradients on boom propagation, Langley used a ballistic range consisting of a helium gas launcher that shot miniature projectiles at constant Mach numbers through a partially enclosed chamber. The inside could be heated to ensure a stable atmosphere for accuracy in boom measurements. Innovative NASA-sponsored simulators included Ling-Temco-Vought's shock-expansion tube, basically a mobile 13-foot-diameter conical horn mounted on a trailer, and General American Research Division's explosive gas-filled envelopes suspended above sensors at Langley's sonic boom simulation range.[411] NASA also contracted with Stanford Research Institute for simulator experiments that showed how sonic booms could interfere with sleep, especially for older people.[412]

Other simulators were devised to handle both human and structural response to sonic booms. (The need to better understand effects on people had been highlighted in a report released in June 1968 by the National Academy of Sciences.)[413] Unlike the previously described studies using actual sonic booms created by aircraft, these devices had the advantages of a controlled laboratory environment. They allowed researchers to produce multiple boom signatures of varying shapes, pressures, and durations as often as needed at a relatively low cost.[414] The Langley Center's Low-Frequency Noise Facility—built earlier in the 1960s to generate the intense chest-pounding sounds of giant Saturn boosters during Apollo launches—also performed informative sonic boom simulation experiments. Consisting of a cylindrical test chamber 24 feet in diameter and 21 feet long, it could accommodate people, small structures, and materials for testing. Its electrohydraulically operated 14-foot piston was capable of producing sound waves from 1 to 50 hertz (in effect, a super subwoofer) and sonic boom N-waves from 0.5 to 20 psf at durations from 100 to 500 milliseconds.[415]

To provide an even more versatile system designed specifically for sonic boom research, NASA contracted with General Applied Science Laboratories (GASL) of Long Island, NY, to develop an ideal simulator using a quick-action valve and shock tube design. (Antonio Ferri was the president of GASL, which he had cofounded with the illustrious aeronautical scientist Theodore von Kármán in 1956.) Completed in 1969, this new simulator consisted of a high-speed flow valve that sent pressure wave bursts through a heavily reinforced 100-foot-long conical duct that expanded into an 8- by 8-foot test section with an instrumentation and model room. It could generate overpressures up to 10 psf with durations from 50 to 500 milliseconds. Able to operate at intervals of less than 1 minute between bursts, it produced sonic boom signatures that were very accurate and easy to control.[416] In the opinion of Ira Schwartz, "the GASL/NASA facility represents the most advanced state of the art in sonic boom simulation."[417]

While NASA and its partners were learning more about the nature of sonic booms, the SST was becoming mired in controversy. Many in the public, the press, and the political arena were concerned about the noise SSTs would create, with a growing number expressing hostility to the entire SST program. As one of the more reputable critics wrote in 1966, alongside a map showing a dense network of future boom carpets crossing the United States, "the introduction of supersonic flight, as it is at present conceived, would mean that hundreds of millions of people would not only be seriously disturbed by the sonic booms. . . they would also have to pay out of their own pockets (through subsidies) to keep the noise-creating activity alive."[418]

Opposition to the SST grew rapidly in the late 1960s, becoming a cause célèbre for the burgeoning environmental movement as well as a target for small-Government conservatives opposed to Federal subsidies.[419] Typical of the growing trend among opinion makers, the New York Times published its first strongly anti-sonic-boom editorial in June 1968, linking the SST's potential sounds with an embarrassing incident the week before, when an F-105 flyover shattered 200 windows at the Air Force Academy, injuring a dozen people.[420] The next 2 years brought a growing crescendo of complaints about the supersonic transport, both for its expense and for the problems it could cause—even as research on controlling sonic booms began to bear some fruit.

By the time 150 scientists and engineers gathered in Washington, DC, for NASA's third sonic boom research conference on October 29-30, 1970, the American supersonic transport program was less than 6 months away from cancellation. Thus the 29 papers presented at the conference, and others at the ASA's second sonic boom symposium in Houston the following month, might be considered, in their entirety, a final status report on sonic boom research during the SST decade.[421] Of future if not near-term significance, considerable progress was being made in understanding how to design airplanes that could fly faster than the speed of sound while leaving behind a gentler sonic footprint.

As summarized by Ira Schwartz: "In the area of boom minimization, the NASA program has utilized the combined talents of Messrs. E. McLean, H. L. Runyan, and H. R. Henderson at NASA Langley Research Center, Dr. W. D. Hayes at Princeton University, Drs. R. Seebass and A. R. George at Cornell University, and Dr. A. Ferri at New York University to determine the optimum equivalent bodies of rotation [a technique for relating airframe shapes to standard aerodynamic rules governing simple projectiles with round cross sections] that minimize the overpressure, shock pressure rise, and impulse for given aircraft weight, length, Mach number, and altitude of operation. Simultaneously, research efforts of NASA and those of Dr. A. Ferri at New York University have provided indications of how real aircraft can be designed to provide values approaching these optimums. . . . This research must be continued or even expanded if practical supersonic transports with minimum and acceptable sonic boom characteristics are to be built."[422]

Any consensus among the attendees about the progress they were making was no doubt tempered by their awareness of the financial problems now plaguing the Boeing Company and the political difficulties facing the administration of President Richard Nixon in continuing to subsidize the American SST. From a technological standpoint, many of them also seemed resigned that Boeing's final 2707-300 design (despite its 306-foot length and 64,000-foot cruising altitude) would not pass the overland sonic boom test. Richard Seebass, who was in the vanguard of minimization research, admitted that "the first few generations of supersonic transport (SST) aircraft, if they are built at all, will be limited to supersonic flight over oceanic and polar regions."[423] In view of such concerns, some of the attendees were even looking toward hypersonic aerospace vehicles, in case they might cruise high enough to leave an acceptable boom carpet.

As for the more immediate prospects of a domestic supersonic transport, Lynn Hunton of the Ames Research Center warned that "with regard to experimental problems in sonic boom research, it is essential that the techniques and assumptions used be continuously questioned as a requisite for assuring the maximum in reliability."[424] Harry Carlson probably expressed the general opinion of Langley's aerodynamicists when he cautioned that "the problem of sonic boom minimization through airplane shaping is inseparable from the problems of optimization of aerodynamic efficiency, propulsion efficiency, and structural weight. . . . In fact, if great care is not taken in the application of sonic boom design principles, the whole purpose can be defeated by performance degradation, weight penalties, and a myriad of other practical considerations."[425]

After both the House and Senate voted in March 1971 to eliminate SST funding, a joint conference committee confirmed its termination in May.[426] This and related cuts in supersonic research inevitably slowed momentum in dealing with sonic booms. Even so, researchers in NASA, as well as in academia and the aerospace industry, would keep alive the possibility of civilian supersonic flight in a more constrained and less technologically ambitious era. Fortunately for them, the ill-fated SST program left behind a wealth of data and discoveries about sonic booms. As evidence, the Langley Research Center produced or sponsored more than 200 technical publications on the subject over 19 years, most related to the SST program. (Many of those published in the early 1970s were based on previous research and testing.) This literature, depicted in Figure 4, would be a legacy of enduring value in the future.[427]

Keeping Hopes Alive: Supersonic Cruise Research

"The number one technological tragedy of our time.” That was how President Nixon characterized the votes by the Congress to stop fund­ing an American supersonic transport.[428] Despite its cancellation, the White House, the Department of Transportation (DOT), and NASA—as well as some in Congress—did not allow the progress in supersonic tech­nologies the SST had engendered to completely dissipate. During 1971 and 1972, the DOT and NASA allocated funds for completing some of the tests and experiments that were underway when the program was terminated. The administration then added line-item funding to NASA’s fiscal year (FY) 1973 budget for scaled-down supersonic research, espe­cially as related to environmental problems. In response, NASA estab­lished the Advanced Supersonic Technology (AST) program in July 1972.

To more clearly indicate the exploratory nature of this effort and allay fears that it might be a potential follow-on to the SST, the AST program was renamed Supersonic Cruise Aircraft Research (SCAR) in 1974. When the term aircraft in its title continued to raise suspicion in some quarters that the goal might be some sort of prototype, NASA shortened the program's name to Supersonic Cruise Research (SCR) in 1979.[429] For the sake of simplicity, the latter name is often applied to all 9 years of the program's existence. For NASA, the principal purpose of AST, SCAR, and SCR was to conduct and support focused research into the problems of supersonic flight while advancing related technologies. NASA's aeronautical Centers, most of the major airframe manufacturers, and many research organizations and universities participated. From Washington, NASA's Office of Aeronautics and Space Technology (OAST) provided overall supervision but delegated day-to-day management to the Langley Research Center, which established an AST Project Office in its Directorate of Aeronautics, soon placed under a new Aeronautical Systems Division. The AST program was organized into four major elements—propulsion, structures and materials, stability and control, and aerodynamic performance—plus airframe-propulsion integration. (NASA spun off the propulsion work on a variable cycle engine [VCE] as a separate program in 1976.) Sonic boom research was one of 16 subelements.[430]

At the Aeronautical Systems Division, Cornelius "Neil" Driver, who headed the Vehicle Integration Branch, and Ed McLean, as chief of the AST Project Office, were key officials in planning and managing the AST/SCAR effort. After McLean retired in 1978, the AST Project Office passed to a fellow aerodynamicist, Vincent R. Mascitti, while Driver took over the Aeronautical Systems Division. One year later, Domenic Maglieri replaced Mascitti in the AST Project Office.[431] Despite Maglieri's sonic boom expertise, the goal of minimizing the AST's sonic boom for overland cruise had long since ceased to be an SCR objective. As later explained by McLean: "The basic approach of the SCR program. . . was to search for the solution of supersonic problems through disciplinary research. Most of these problems were well known, but no satisfactory solution had been found. When the new SCR research suggested a potential solution. . . the applicability of the suggested solution was assessed by determining if it could be integrated into a practical commercial supersonic airplane and mission. . . . If the potential solution could not be integrated, it was discarded."[432]

To meet the practicality standard for integration into a supersonic airplane, solving the sonic boom problem had to clear a new and almost insurmountable hurdle. In April 1973, responding to years of political pressure, the FAA announced a new rule that banned commercial or civil aircraft from supersonic flight over the land mass or territorial waters of the United States if measurable overpressure would reach the surface.[433] One of the initial objectives of the AST's sonic boom research had been to establish a metric of public acceptability of sonic boom signatures for use in the aerodynamic design process. The FAA's stringent new regulation seemed to rule out any such flexibility.

Figure 4. Reports produced or sponsored by NASA Langley, 1958-1976. NASA.

As a result, when Congress cut FY 1974 funding for the AST program from $40 million to about $10 million, the subelement for sonic boom research went on NASA's chopping block. The design criteria for the SCAR/SCR program became a 300-foot-long, 270-passenger airplane that could fly as effectively as possible over land at subsonic speeds yet still cruise efficiently at 60,000 feet and Mach 2.2 over water. To meet these criteria, Langley aerodynamicists modified their SCAT-15F design from the late 1960s into a notional concept with better low-speed performance (but higher sonic boom potential) called the ATF-100. This served as a baseline for three industry teams in coming up with their own designs.[434]

When the AST program began, however, prospects for a significant quieting of its sonic footprint appeared possible. Sonic boom theory had advanced significantly during the 1960s, and some promising if not yet practical ideas for reducing boom signatures had begun to emerge. As indicated by Figure 4, some findings based on that research continued to come out in print during the early 1970s.

As far back as 1965, NASA's Ed McLean had discovered that the sonic boom signature from a very long supersonic aircraft flying at the proper altitude could be nonasymptotic (i.e., it need not reach the ground in the form of an N-wave). This confirmed the possibility of tailoring an airplane's shape into something more acceptable.[435] Some subsequent theoretical suggestions, such as various ways of projecting heat fields to create a longer "phantom" fuselage, are still decidedly futuristic, while others, such as adding a long spike to the nose of an SST to slow the rise of the bow shock wave, would (as described later) eventually prove more realistic.[436] Meanwhile, researchers under contract to NASA kept advancing the state of the art in more conventional directions. For example, Antonio Ferri of New York University, in partnership with Hans Sorensen of the Aeronautical Research Institute of Sweden, used new 3-D measuring techniques in Sweden's trisonic wind tunnel to correlate near-field effects with linear theory more accurately. Tests of NYU's model of a 300-foot-long SST cruising at Mach 2.7 at 60,000 feet showed the possibility of sonic booms of less than 1.0 psf.[437] Ferri's early death in 1975 left a void in supersonic aerodynamics, not least in sonic boom research.[438]

By the end of the SST program, Albert George and Richard Seebass had formulated a mathematical foundation for many of the previous theories. They also devised a near-field boom-minimization theory, applicable in an isothermal atmosphere, for reducing the overpressures of flattop and ramp-type signatures. It applied to both front and rear shock waves, along with their parametric correlation to airframe lift and area. In a number of seminal papers and articles in the early 1970s, they explained the theory along with some ideas on possible aerodynamic shaping (e.g., slightly blunting an aircraft's nose) and on the optimum cruise altitude (lower than previously thought) for reducing boom signatures.[439]

Theoretical refinements and new computer modeling techniques continued to appear in the early 1970s. For example, in June 1972, Charles Thomas of the Ames Research Center explained a mathematical procedure using new algorithms for waveform parameters to extrapolate the formation of far-field N-waves. This was an alternative to using F-function effects (the pattern of near-field shock waves emanating from an airframe), which were the basis of the previously discussed program developed by Wallace Hayes and colleagues at ARAP. Although both methods accounted for acoustical ray tracing and could arrive at similar results, Thomas's program allowed easier input of flight information (speed, altitude, atmospheric conditions, etc.) for automated data processing.[440]
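Because the F-function recurs throughout this research, its standard linear-theory form is worth recalling. Restated here from the general literature (rather than from either the Hayes or the Thomas program's documentation), Whitham's formulation derives the F-function from the aircraft's equivalent cross-sectional area distribution $A_e(x)$, which combines volume and lift effects:

$$F(y) = \frac{1}{2\pi}\int_{0}^{y}\frac{A_e''(\xi)}{\sqrt{y-\xi}}\,d\xi,$$

and the first-order overpressure at radial distance $r$ from the flightpath follows as

$$\frac{\Delta p}{p_\infty} = \frac{\gamma M^{2} F(y)}{\sqrt{2\beta r}}, \qquad \beta = \sqrt{M^{2}-1},$$

where $y$ is the characteristic coordinate locating a point on the signature. Reshaping the airframe so as to reshape $F(y)$, and hence the far-field signature, is precisely the lever that the minimization studies described here were manipulating.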

In June 1973, at the end of the AST program's first year, NASA Langley's Harry Carlson, Raymond Barger, and Robert Mack published a study on the applicability of sonic boom minimization concepts to overland supersonic transport designs. They examined four reduced-boom concepts for a commercially viable Mach 2.7 SST with a range of 2,500 nautical miles (i.e., coast to coast in the United States). Using experimentally verified minimization concepts of George, Seebass, Hayes, Ferri, Barger, and the English researcher L. B. Jones, along with computational techniques developed at Langley during the SST program, Carlson's team examined ways to manipulate the F-function to project a flatter far-field sonic boom signature. In doing this, the team was handicapped by the continuing lack of established signature characteristics (the combinations of initial peak overpressure, maximum shock strength, rise time, and duration) that people would best tolerate, both outdoors and especially indoors. Also, the complexity of aft aircraft geometry made measuring effects on tail shocks difficult.[441]

Even so, their study confirmed the advantages of designs with highly swept wings, positioned toward the rear of the fuselage and incorporating twist and camber, for sonic boom shaping. It also found that the use of canards (small airfoils used as horizontal stabilizers near the nose of rear-winged aircraft) could optimize lift distribution for sonic boom benefits. Although two designs showed bow shocks of less than 1.0 psf, their report noted "that there can be no assurance at this time that [their] shock-strength values. . . if attainable, would permit unrestricted overland operations of supersonic transports."[442] Ironically, these words were written just before the new FAA rule rendered them largely irrelevant.

In October 1973, Edward J. Kane of Boeing, who had been a key sonic boom expert during the SST program, released the results of a similar NASA-sponsored study on the feasibility of a commercially viable low-boom transport using technologies projected to be available in 1985. Based on the latest theories, Boeing explored two longer-range concepts: a high-speed (Mach 2.7) design that would produce a sonic boom of 1.0 psf or less, and a medium-speed (Mach 1.5) design with a signature of 0.5 psf or less.[443] In retrospect, this study, which reported mixed results, represented industry's perspective on the prospects for boom minimization just as the AST program dropped plans for supersonic cruise over land.

Obviously, the virtual ban on civilian supersonic flight in the United States dampened private industry's enthusiasm for continuing to invest much capital in sonic boom research. Within NASA, some of those with experience in sonic boom research also redirected their efforts into other areas of expertise. Of the approximately 1,000 technical reports, conference papers, and articles by NASA and its contractors listed in bibliographies of the SCR program from 1972 to 1980, only 8 dealt directly with the sonic boom.[444]

Even so, progress in understanding sonic booms did not come to a complete standstill. In 1972, Christine M. Darden, a Langley mathematician in an engineering position, had developed a computer code to adapt Seebass and George's minimization theory, which was based on an isothermal (uniform) atmosphere, into a program that applied to a standard (stratified) atmosphere. It also allowed more design flexibility than previous low-boom configuration theory did, such as better aerodynamics in the nose area.[445]

Using this new computer program, Darden and Robert Mack followed up on the previously described study by Carlson's team by designing wing-body models with low-boom characteristics: one for cruise at Mach 1.5 and two for cruise at Mach 2.7. At 6 inches in length, these were the largest yet tested for sonic boom propagation in a 4- by 4-foot supersonic wind tunnel—an improvement made possible by continued progress in measuring and extrapolating near-field effects to signatures in the far field. The specially shaped models (all arrow-wing configurations, which distributed lift effects to the rear) showed significantly lower overpressures and flatter signatures than standard designs did, especially at Mach 1.5, at which both the bow and tail shocks were softened. Because of funding limitations, this promising research could not be sustained long enough to develop definitive boom minimization techniques.[446] It was apparently the last significant experimentation on sonic boom minimization for more than a decade.

While this work was underway, Darden and Mack presented a paper on current sonic boom research at the first SCAR conference, held at Langley on November 9-12, 1976 (the only paper on that subject among the 47 presentations). "Contrary to earlier beliefs," they explained, "it has been found that improved efficiency and lower sonic boom characteristics do not always go hand in hand." As for the acceptability of sonic booms, they reported that the only such research in North America was being done at the University of Toronto.[447] Another NASA contribution to understanding sonic booms came in early 1978 with Harry Carlson's publication of "Simplified Sonic-Boom Prediction," a how-to guide on a relatively quick and easy method for determining sonic boom characteristics. It could be applied to a wide variety of supersonic aircraft configurations, as well as to spacecraft at altitudes up to 76 kilometers. Although his clever series of graphs and equations would not provide the accuracy needed to predict booms from maneuvering aircraft or to design airframe configurations, Carlson explained that "for many purposes (including the conduct of preliminary engineering studies or environmental impact statements), sonic-boom predictions of sufficient accuracy can be obtained by using a simplified method that does not require a wind tunnel or elaborate computing equipment. Computational requirements can in fact be met by hand-held scientific calculators, or even slide rules."[448]

The month after publication of this study, NASA released its final environmental impact statement (EIS) for the Space Shuttle program, which benefited greatly from the Agency's previous research on sonic booms, including that with the X-15 and Apollo missions, and from adaptations of Charles Thomas's waveform-based computer program.[449] For the ascent phase, the EIS estimated maximum overpressures of 6 psf (possibly up to 30 psf with focusing effects) about 40 miles downrange over open water, caused by both the vehicle's long exhaust plume and its curving flight profile while accelerating toward orbit. For reentry of the manned vehicle, the sonic boom was estimated at a more modest 2.1 psf, which would affect about 500,000 people as the orbiter crossed the Florida peninsula or 50,000 when it landed at Edwards.[450] In following decades, as populations in those areas boomed, millions more would be hearing the sonic signatures of returning Shuttles, more than 120 of which would be monitored for their sonic booms.[451]

Some other limited experimental and theoretical work on sonic booms continued in the late 1970s. Richard Seebass at Cornell and Kenneth Plotkin of Wyle Research, for example, delved deeper into the challenging phenomena of caustics and focused booms.[452] At the end of the decade, Langley's Raymond Barger published a study on the relationship of caustics to the shape and curvature of acoustical wave fronts caused by actual aircraft maneuvers. To display these effects graphically, he programmed a computer to draw simulated three-dimensional line plots of the acoustical rays in the wave fronts. Figure 5 shows how even a simple decelerating turn, in this case from Mach 2.4 to Mach 1.5 in a radius of 23 kilometers (14.3 miles), can produce the kind of caustic that might result in a super boom.[453]

Unlike in the 1960s, there was little if any NASA sonic boom flight testing during the 1970s. As a case in point, NASA’s YF-12 Blackbirds at Edwards (where the Flight Research Center was renamed the Dryden Flight Research Center in 1976) flew numerous supersonic missions in support of the AST/SCAR/SCR program, but none of them were dedicated to sonic boom issues.[454] On the other hand, operations of the Concorde began providing a good deal of empirical data on sonic booms.

One discovery about secondary booms came after British Airways and Air France began Concorde service to the United States in May 1976. Although the Concordes slowed to subsonic speeds while well offshore, residents along the Atlantic seaboard began hearing what were called the "East Coast Mystery Booms." These were detected all the way from Nova Scotia to South Carolina, some measurable on seismographs.[455] Although a significant number of the sounds defied explanation, studies by the Naval Research Laboratory, the Federation of American Scientists, a committee of the Jason DOD scientific advisory group, and the FAA eventually determined that most of the low rumbles heard in Nova Scotia and New England were secondary booms from Concordes still about 75 to 150 miles offshore, reaching land after being bent or reflected by temperature variations high in the thermosphere. In July 1978, the FAA issued new rules prohibiting the Concorde from creating sonic booms that could be heard in the United States. The new FAA rules did not address the issue of secondary booms because of their low intensity; nevertheless, after Concorde secondary booms were heard by coastal communities, the Agency became even more sensitive to the sonic boom potential inherent in AST designs.[456]

Figure 5. Acoustic wave front above a maneuvering aircraft. NASA.

The second conference on Supersonic Cruise Research, held at NASA Langley in November 1979, was the first and last under the program's new name. More than 140 people from NASA, other Government agencies, and the aerospace industry attended. This time there were no presentations on the sonic boom, but a representative from North American Rockwell did describe the concept of a Mach 2.7 business jet for 8-10 passengers that would generate a sonic boom of only 0.5 psf.[457] It would take another 20 years for ideas about low-boom supersonic business jets to result in more than just paper studies.

Despite SCR’s relatively modest cost versus its significant techno­logical accomplishments, the program suffered a premature death in 1981. Reasons for this included the Concorde’s economic woes, opposi­tion to civilian R&D spending by key officials in the new administration of President Ronald Reagan, and a growing federal deficit. These fac­tors, combined with cost overruns for the Space Shuttle, forced NASA to abruptly cancel Supersonic Cruise Research without even funding completion of many final reports.[458] As regards sonic boom research, an exception to this was a compilation of charts for estimating minimum sonic boom levels published by Christine Darden in May 1981. She and Robert Mack also published results of their previous experimentation that would be influential when efforts to soften the sonic boom resumed.[459]

Flight Control Systems and Their Design

During the Second World War, multiple documented incidents, including several fatalities, occurred when fighter pilots dove their propeller-driven airplanes at speeds approaching the speed of sound. Pilots reported increasing levels of buffet and loss of control at these speeds. Wind tunnels at that time were incapable of producing reliable, meaningful data in the transonic speed range because the local shock waves were reflected off the wind tunnel walls, thus invalidating the measurements. The NACA and the Department of Defense (DOD) created a new research airplane program to obtain a better understanding of transonic phenomena through flight-testing. The first of the resulting aircraft was the Bell XS-1 (later X-1) rocket-powered research airplane.

On NACA advice, Bell had designed the X-1 with a horizontal tail configuration consisting of an adjustable horizontal stabilizer with a hinged elevator at the rear for pitch control, at a time when a fixed horizontal stabilizer and hinged elevator constituted the standard pitch control configuration.[674] The X-1 incorporated this as an emergency means to increase its longitudinal (pitch) control authority at transonic speeds. It proved a wise precaution because, during the early buildup flights, the X-1 encountered buffet and loss of control similar to that reported by the earlier fighters. Analysis showed that local shock waves were forming on the tail surface, eventually migrating to the elevator hinge line. When they reached the hinge line, the effectiveness of the elevator was significantly reduced, thus causing the loss of control. The NACA-U.S. Air Force (USAF) X-1 test team determined to go ahead, thanks to the X-1 having an adjustable horizontal tail. They subsequently validated that the airplane could be controlled in the transonic region by moving the horizontal stabilizer and the elevator together as a single unit. This discovery allowed Capt. Charles E. Yeager to exceed the speed of sound in controlled flight with the X-1 on October 14, 1947.[675]

An extensive program of transonic testing was subsequently undertaken at the NACA High-Speed Flight Station (HSFS; later the Dryden Flight Research Center) to evaluate aircraft handling qualities using the conventional elevator and then the elevator with adjustable stabilizer.[676] As a result, subsequent transonic airplanes were all designed to use a one-piece, all-flying horizontal stabilizer, which solved the control problem and was incorporated on the prototypes of the first supersonic American jet fighters, the North American YF-100A and the Vought XF8U-1 Crusader, flown in 1953 and 1954. Today, the all-moving tail is a standard design element of virtually all high-speed aircraft developed around the globe.[677]

Variable Stability Airplanes

Although the centrifuge was effective in simulating relatively steady high-g accelerations, it lacked realism with respect to normal aircraft motions. There was even concern that some amount of negative training might be occurring in a centrifuge. One possible method of improving the fidelity of motion simulation was to install the entire simulation (computational mathematical model, cockpit displays, and controls) in an airplane and then force the airplane to reproduce the flight motions of the simulated airplane, thus exposing the simulator pilot to the correct motion environment. An airplane so equipped is usually referred to as a "variable stability aircraft."

Since their invention, variable stability aircraft have played a significant role in advancing flight technology. Beginning in 1948, the Cornell Aeronautical Laboratory (now Calspan) undertook pioneering work on variable stability using conventional aircraft modified in such a fashion that their dynamic characteristics reasonably approximated those of different kinds of designs. Waldemar Breuhaus supervised modification of a Vought F4U-5 Corsair fighter as a variable stability testbed. From this sprang a wide range of subsequent "v-stab" testbeds. NACA Ames researchers modified another Navy fighter, a Grumman F6F-5 Hellcat, so that it could fly as if its wing were set at a variety of dihedral angles; this research, and that of a later North American F-86 Sabre jet fighter likewise modified for v-stab research, was applied to the design of early Century series fighters, among them the Lockheed F-104 Starfighter, a design with pronounced anhedral (negative wing dihedral).[731]

As analog simulation capability evolved, Cornell researchers developed the concept of installing a simulator in one cockpit of a two-seat Lockheed NT-33A Shooting Star. By carefully measuring the stability and controllability characteristics of the "T-Bird" and then subtracting those characteristics from the simulated mathematical model, the researchers could program the airplane with a completely different dataset that would effectively represent a different airplane.[732] Initially, the variable stability feature was used to perform general research tests by changing various controlled variables and evaluating their effect on pilot performance. Eventually, mathematical models were introduced that represented the complete predicted aerodynamic and control system characteristics of new designs. The NT-33A became the most recognized variable-stability testbed in the world, having "modeled" aircraft as diverse as the X-15, the B-1 bomber, and the Rockwell Space Shuttle orbiter, and flying from the early 1950s until retirement after the end of the Cold War. Thanks to its contributions and those of other v-stab testbeds developed subsequently,[733] engineers and pilots have had a greater understanding of the anticipated flying qualities and performance of new aircraft before the crucial first flight.[734] In particular, variable stability aircraft did not exhibit the false rotation cues associated with centrifuge simulation and were thus more realistic in simulating rapid aircraft-like maneuvers. Several YF-22 control law variations were tested using the Calspan NT-33A prior to that aircraft's first flight, and before the first flight of the F-22, its control laws were tested on the Calspan VISTA. Today it is inconceivable that a new aircraft would fly before researchers had first evaluated its anticipated handling qualities via variable-stability research.
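The "measure the host, subtract it, and impose the model" idea behind such testbeds can be sketched in a few lines. The fragment below is a simplified response-feedback illustration, not Calspan's implementation: given short-period pitch dynamics for the host airplane and for the airplane to be simulated, it computes feedback and feedforward gains so that the host's closed-loop response approximates the model's. All matrices are illustrative numbers.

```python
import numpy as np

# Short-period pitch dynamics x = [alpha, q]^T with a single elevator input u.
# Host testbed's measured stability and control derivatives (illustrative).
A_host = np.array([[-0.9,  1.0],
                   [-4.0, -1.2]])
B_host = np.array([[-0.1],
                   [-6.0]])

# Target model to be simulated in flight: a less-damped airplane (illustrative).
A_tgt = np.array([[-0.5,  1.0],
                  [-1.5, -0.4]])
B_tgt = np.array([[-0.05],
                  [-3.0]])

# Choose feedback K so that A_host + B_host @ K ~= A_tgt ("subtracting" the
# host dynamics and substituting the model's), and feedforward F so that
# B_host @ F ~= B_tgt (rescaling the pilot's command). Least squares gives
# the exact answer whenever the match is attainable with one control surface.
K = np.linalg.lstsq(B_host, A_tgt - A_host, rcond=None)[0]
F = np.linalg.lstsq(B_host, B_tgt, rcond=None)[0]

def xdot(x, u_pilot):
    """Closed-loop testbed: the pilot's stick now drives the simulated model."""
    u = F @ u_pilot + K @ x          # response-feedback control law
    return A_host @ x + B_host @ u

# Sanity check: the closed-loop eigenvalues should approximate the model's.
print(np.linalg.eigvals(A_host + B_host @ K))   # achieved dynamics
print(np.linalg.eigvals(A_tgt))                 # desired model dynamics
```

With a single elevator, least squares yields only the best achievable match; operational testbeds typically add effectors and more elaborate model-following logic to tighten the correspondence across all axes.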