
The Evolution of Fluid Dynamics from da Vinci to Navier-Stokes

Fluid flow has fascinated humans since antiquity. The Phoenicians and Greeks built ships that glided over the water, creating bow waves and leaving turbulent wakes behind. Leonardo da Vinci made detailed sketches of the complex flow fields over objects in a flowing stream, showing even the smallest vortexes created in the flow. He observed that the force exerted by the water flow over the bodies was proportional to the cross-sectional area of the bodies. But nobody at that time had a clue about the physical laws that governed such flows. This prompted some substantive experimental fluid dynamics in the 17th and 18th centuries. In the early 1600s, Galileo observed from the falling of bodies through the air that the resistance force (drag) on the body was proportional to the air density. In 1673, the French scientist Edme Mariotte published the first experiments proving the important fact that the aerodynamic force on an object in a flow varied as the square of the flow velocity, not directly with the velocity itself as believed by da Vinci and Galileo before him.[758] Seventeen years later, the Dutch scientist Christiaan Huygens published the same result from his own experiments. Clearly, by this time, fluid dynamics was of intense interest, yet the only way to learn about it was by experiment, that is, empiricism.[759]

This situation began to change with the onset of the scientific revolution in the 17th century, spearheaded by the theoretical work of the British polymath Isaac Newton. Newton was interested in the flow of fluids, devoting the whole of Book II of his Principia to the subject of fluid dynamics. He conjured up a theoretical picture of fluid flow as a stream of particles in straight-line, rectilinear motion that, upon impact with an object, instantly changed their motion to follow the surface of the object. This picture of fluid flow proved totally wrong, as Newton himself suspected, and it led to Newton's "sine-squared law" for the force on an object immersed in a flow, which famously misled many early aeronautical pioneers. But though quantitatively incorrect, it was nevertheless the first theoretical attempt to explain why the aerodynamic force varied directly with the square of the flow velocity.[760]
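Stated in modern notation (not Newton's own), the impact model gives a force on a flat surface of area S inclined at angle θ to a stream of density ρ and velocity V of

$$ N = \rho V^2 S \sin^2\theta, $$

equivalent to a pressure coefficient of 2 sin²θ. The sin²θ dependence, which badly underpredicts the force at the small angles relevant to wings, is what misled later designers.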

Newton, through his second law, contributed indirectly to the breakthroughs in theoretical fluid dynamics that occurred in the 18th century. Newton's second law states that the force exerted on a moving object is directly proportional to the time rate of change of momentum of that object. (It is more commonly known as "force equals mass times acceleration," but that formulation is not found in the Principia.) Applying Newton's second law to an infinitesimally small fluid element moving as part of a fluid flow that is actually a continuum material, Leonhard Euler constructed an equation for the motion of the fluid as dictated by Newton's second law. Euler, arguably the greatest scientist and mathematician of the 18th century, modeled a fluid as a continuous collection of infinitesimally small fluid elements moving with the flow, where each fluid element can continually change its size and shape as it moves with the flow but, at the same time, all the fluid elements taken as a whole constitute an overall picture of the flow as a continuum. That was somewhat in contrast to the individual and distinct particles in Newton's impact theory model mentioned previously. To his infinitesimally small fluid element, Euler applied Newton's second law in a form that used differential calculus, leading to a differential equation relating the variation of velocity and pressure throughout the flow. This equation, labeled the "momentum equation," came to be known simply as Euler's equation. In the 18th century, it constituted a bombshell that launched the field of theoretical fluid dynamics, and in the 20th century it became a pivotal equation in CFD, a testament to Euler's insight and its application.

There is a second fundamental principle that underlies all of fluid dynamics, namely that mass is conserved. Euler applied this principle as well to his model of an infinitesimally small moving fluid element, constructing another differential equation labeled the "continuity equation." These two equations, the continuity equation and the momentum equation, were published in 1753 in what is considered one of his finest works. Moreover, 200 years later, these two equations were to become the physical foundations of the early work in computational fluid dynamics.[761]
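In modern vector notation (a later form, not Euler's original presentation), the two equations for an inviscid flow of density ρ, velocity V, and pressure p, neglecting body forces, read

$$ \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{V}) = 0 \qquad \text{(continuity)} $$

$$ \rho \frac{D\mathbf{V}}{Dt} = -\nabla p \qquad \text{(momentum)} $$

where D/Dt denotes the rate of change following the moving fluid element.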

After Euler's publication, for the next century all serious efforts to calculate the details of a fluid flow theoretically centered on efforts to solve these Euler equations. There were two problems, however. The first was mathematical: Euler's equations are nonlinear partial differential equations. In general, nonlinear partial differential equations are not easy to solve. (Indeed, to this day there exists no general analytical solution to the Euler equations.) When faced with the need to solve a practical problem, such as the airflow over an airplane wing, in most cases an exact solution of the Euler equations is unachievable. Only by simplifying the fluid dynamic problem and allowing certain terms in the equations to be either dropped or modified in such a fashion as to make the equations linear rather than nonlinear can these equations be solved in a useful manner. But a penalty usually must be paid for this simplification because, in the process, at least some of the physical or geometrical accuracy of the flow is lost.

The second problem is physical: when applying Newton's second law to his moving fluid element, Euler did not account for the effects of friction in the flow, that is, the force due to the frictional shear stresses rubbing on the surfaces of the fluid element as it moves in the flow. Some fluid dynamic problems are reasonably characterized by ignoring the effects of friction, but the 18th and 19th century theoretical fluid dynamicists were not sure, and they always worried about what role friction plays in a flow. However, a myriad of other problems are dominated by the effect of friction in the flow, and such problems could not even be addressed by applying the Euler equations. This physical problem was exacerbated by controversy as to what happens to the flow moving along a solid surface. We know today that the effect of friction between a fluid flow and a solid surface (such as the surface of an airplane wing) is to cause the flow velocity right at the surface to be zero (relative to the surface). This is called the no-slip condition in modern terminology, and in aerodynamic theory it represents a "boundary condition" that must be accounted for in conjunction with the solution of the governing flow equations. The no-slip condition is fully understood in modern fluid dynamics, but it was by no means clear to 19th century scientists. The debate over whether there was a finite relative velocity between a solid surface and the flow immediately adjacent to the surface continued into the second decade of the 20th century.[762] In short, the world of theoretical fluid dynamics in the 18th and 19th centuries was hopelessly cast adrift from many desired practical applications.
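In present-day notation, the no-slip condition is simply the statement that the fluid in contact with a surface moves with that surface:

$$ \mathbf{V}_{\text{fluid}}\big|_{\text{surface}} = \mathbf{V}_{\text{surface}}, $$

which reduces to zero velocity at a stationary wall.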

The second problem, that of properly accounting for the effects of friction in the flow, was dealt with by two mathematicians in the mid-19th century: France's Claude-Louis-Marie-Henri Navier and Britain's Sir George Gabriel Stokes. Navier, an instructor at the famed École nationale des ponts et chaussées, changed the pedagogical style of teaching civil engineering from one based mainly on cut-and-try empiricism to a program emphasizing physics and mathematical analysis. In 1822, he gave a paper to the Academy of Sciences that contained the first accurate representation of the effects of friction in the general partial differential momentum equation for fluid flow.[763] Although Navier's equations were in the correct form, his theoretical reasoning was greatly flawed, and it was almost a fluke that he arrived at the correct terms. Moreover, he did not fully understand the physical significance of what he had derived. Later, quite independently of Navier, Stokes, a professor who occupied the Lucasian Chair at Cambridge University (the same chair Newton had occupied a century and a half earlier), took up the derivation of the momentum equation including the effects of friction. He began with the concept of internal shear stress caused by friction in the fluid and derived the governing momentum equation much as it would be derived today in a fluid dynamics class, publishing it in 1845.[764] Working independently, then, Navier and Stokes derived the basic equations that describe fluid flows and contain terms to account for friction. They remain today the fundamental equations that fluid dynamicists employ for analyzing frictional flows.

Finally, in addition to the continuity and momentum equations, a third fundamental physical principle is required for any flow that involves high speeds and in which the density of the flow changes from one point to another. This is the principle of conservation of energy, which holds that energy cannot be created or destroyed; it can only change its form. The origin of this principle, in the form of the first law of thermodynamics, lies in the history of the development of thermodynamics in the late 19th century. When applied to a moving fluid element in Euler's model, and including frictional dissipation and heat transfer by thermal conduction, this principle leads to the energy equation for fluid flow.

So there it is, the origin of the three Navier-Stokes equations exhibited so prominently at the National Air and Space Museum. They are horribly nonlinear partial differential equations. They are also fully coupled together because the variables of pressure, density, and velocity that appear in these equations are all dependent on each other. Obtaining a general analytical solution of the Navier-Stokes equations is much more daunting than obtaining a general analytical solution of the Euler equations, for they are far more complex. There is today no general analytical solution of the Navier-Stokes equations (as is likewise true in the case of the Euler equations). Yet almost all of modern computational fluid dynamics is based on the Navier-Stokes equations, and all of the modern solutions of the Navier-Stokes equations are based on computational fluid dynamics.
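For the simplest case of a constant-density, constant-viscosity flow (the compressible equations carry additional terms), the Navier-Stokes momentum equation differs from Euler's only by the viscous term on the right:

$$ \rho\left(\frac{\partial \mathbf{V}}{\partial t} + \mathbf{V}\cdot\nabla\mathbf{V}\right) = -\nabla p + \mu\,\nabla^2 \mathbf{V}, $$

with μ the viscosity and with continuity reducing to ∇ · V = 0.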

Origins of NASTRAN

In the early 1960s, structures researchers from the various NASA Centers were gathering annually at Headquarters in Washington, DC, to exchange ideas and coordinate their efforts. They began to realize that many organizations—NASA Centers and industry—were independently developing computer programs to solve similar types of structural problems. There were several drawbacks to this situation. Effort was being duplicated needlessly. There was no compatibility of input and output formats, or consistency of naming conventions. The programs were only as versatile as the developers cared to make them; the inherent versatility of the finite element method was not being exploited. More benefit might be achieved by pooling resources and developing a truly general-purpose program. Thomas G. Butler of the Goddard Space Flight Center (GSFC), who led the team that developed NASTRAN between 1965 and 1970, recalled in 1982:

NASA's Office of Advanced Research and Technology (OART) under Dr. Raymond Bisplinghoff sponsored a considerable amount of research in the area of flight structures through its operating centers. Representatives from the centers who managed research in structures convened annually to exchange ideas. I was one of the representatives from Goddard Space Flight Center at the meeting in January 1964. . . . Center after center described research programs to improve analysis of structures. Shells of different kinds were logical for NASA to analyze at the time because rockets are shell-like. Each research concentrated on a different aspect of shells. Some were closed with discontinuous boundaries. Other shells had cutouts. Others were noncircular. Others were partial spans of less than 360°. This all seemed quite worthwhile if the products of the research resulted in exact closed-form solutions. However, all of them were geared toward making some simplifying assumption that made it possible to write a computer program to give numerical solutions for their behavior. . . .

Each of these computer programs required data organization different from every other. . . . Each was intended for exploring localized conditions rather than complete shell-like structures, such as a whole rocket. My reaction to these programs was that. . . technology was currently available to give engineering solutions to not just localized shells but to whole, highly varied structures. The method was finite elements.[806]

Doug Michel led the meetings at NASA Headquarters. Butler, Harry Runyan of Langley Research Center, and probably others proposed that NASA develop its own finite element program if a suitable one could not be found already in existence. "The group thought this was a good idea, and Doug followed up with forming the Ad Hoc Group for Structural Analysis, which was headed by Tom Butler of Goddard," recalled C. Thomas Modlin, Jr., who was one of the representatives from what is now Johnson Space Center.[807] The committee included representatives from all of the NASA Centers that had any significant activity in structural analysis methods at the time, plus an adjunct member from the U.S. Air Force at Wright-Patterson Air Force Base, as listed in the accompanying table.[808]

CENTER | REPRESENTATIVE(S)
Ames | Richard M. Beam and Perry P. Polentz
Flight Research (now Dryden) | Richard J. Rosecrans
Goddard | Thomas G. Butler (Chair) and Peter A. Smidinger
Jet Propulsion Laboratory | Marshall E. Alper and Robert M. Bamford
Langley | Herbert J. Cunningham
Lewis | William C. Scott and James D. McAleese
Manned Spacecraft (now Johnson) | C. Thomas Modlin, Jr., and William W. Renegar
Marshall | Robert L. McComas
Wright-Patterson AFB | James Johnson (adjunct member)

After visiting several aerospace companies, all of which were "extremely cooperative and candid," and reviewing the existing methods, the committee recommended to Headquarters that NASA sponsor the development of its own finite element program "to update the analytical capability of the whole aerospace community. The program should incorporate the best of the state of the arts, which were currently splintered."[809]

The effort was launched, under the management of Butler at the Goddard Space Flight Center, to define and implement the General Purpose Structural Analysis program. Requirements were collected from the information brought from the various Centers, from the industry visits, and from a conference on "Matrix Methods in Structural Mechanics" held at Wright-Patterson Air Force Base.[810] Key requirements included the following:[811]

• General-purpose. The system must allow different analysis types—static, transient, thermal, etc.—to be performed on the same structural model without alteration.

• Problem size. At least 2,000 degrees of freedom for static and dynamic analyses alike. (Prior state of the art was approximately 100 degrees of freedom for dynamic mode analysis and 100 to 600 for static analysis.)

• Modular. Parts of the program could be changed without disrupting other parts.

• Open-ended. New types of elements, new analysis modules, and new formats could be added.

• Maintainable and capable of being updated.

• Machine-independent. Capable of operating on the IBM 360, CDC 6000 Series, and UNIVAC 1108 (the only three commercially available computers capable of performing such analysis at the time), and on future generations of computers.

After an initial design phase, the implementation contract was awarded to a team led by Computer Sciences Corporation (CSC), with the MacNeal-Schwendler Corporation and Martin Baltimore as subcontractors. Coding began in July 1966. Dr. Paul R. Peabody was the principal architect of the overall system design. Dr. Richard H. MacNeal (MacNeal-Schwendler) designed the solution structure, taking each type of solution from physics, to math, to programming, assisted by David Harting. Keith Redner was the implementation team lead and head programmer, assisted by Steven D. Wall and Richard S. Pyle. Frank J. Douglas coded the element routines and wrote the programmer's manual. Caleb W. McCormick was the author of the user's manual and supervised NASTRAN installation and training. Other members of the development team included Stanley Kaufman (Martin Baltimore), Thomas L. Clark, David B. Hall, Carl Hennrich, and Howard Dielmann. The project staff at Goddard included Richard D. McConnell, William R. Case, James B. Mason, William L. Cook, and Edward F. Puccinelli.[812]

NASTRAN embodied many technically advanced features that are beyond the scope of this paper (and, admittedly, beyond the scope of this author's understanding), which provided the inherent capability to handle large problems accurately and efficiently. It was referred to as a "system" rather than just a program by its developers, and for good reasons. It had its own internal control language, called Digital Matrix Abstraction Programming (DMAP), which gave flexibility in the use of its different modules. There were 151,000 FORTRAN statements, equating to more than 1 million machine language statements. Twelve prepackaged "rigid formats" permitted multiple types of analysis on the same structural model, including statics, steady-state frequency response, transient response, etc.[813]
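To give a concrete sense of the kind of computation at the heart of a static solution sequence, the following minimal sketch, written here in Python with NumPy purely for exposition (it is not NASTRAN code, DMAP, or any of NASTRAN's actual routines), assembles a global stiffness matrix for a chain of one-dimensional bar elements and solves K u = f for the nodal displacements:

import numpy as np

def assemble_bar_chain(n_elems, E=70e9, A=1e-4, length=1.0):
    """Assemble the global stiffness matrix for a chain of equal axial
    bar elements joined end to end (a deliberately tiny toy model)."""
    k = E * A / (length / n_elems)      # axial stiffness of one element
    n_nodes = n_elems + 1
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elems):
        # each 2-node bar contributes k * [[1, -1], [-1, 1]]
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

def solve_static(K, f, fixed_dofs):
    """Apply displacement constraints, then solve K u = f for the free DOF."""
    free = [i for i in range(K.shape[0]) if i not in fixed_dofs]
    u = np.zeros(K.shape[0])
    u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
    return u

K = assemble_bar_chain(10)              # 10 elements -> 11 nodal DOF
f = np.zeros(11)
f[-1] = 1000.0                          # 1 kN axial load at the free end
u = solve_static(K, f, fixed_dofs={0})  # node 0 clamped
print("tip displacement (m):", u[-1])

A production code differs mainly in scale and bookkeeping: many element types, thousands to hundreds of thousands of degrees of freedom, sparse solvers, and the input, output, and restart machinery described above.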

The initial development of NASTRAN was not without setbacks and delays, and at introduction it did not have all of the intended capabilities. But the team stayed focused on the essentials, choosing which features to defer until later and which characteristics absolutely had to be maintained to keep NASTRAN true to its intent.[814] According to Butler: "One thing that must be mentioned about the project, that is remarkable, pertains to the spirit that infused it everywhere. Every man thought that he was the key man on the whole project. As it turned out, every man was key because for the whole to mesh no effort was inconsequential. The marvelous thing was that every man felt it inside. There was a feeling of destiny on the project."[815]

That the developers adhered to the original principles to make NASTRAN modular, open-ended, and general-purpose—with common formats and interfaces among its different routines—proved to be more important in the long term than how many elements and analysis capabilities were available at introduction. Preserving the intended architecture ensured that the details could be filled in later.

Computational Methods, Industrial Transfer, and the Way Ahead

Having surveyed the development of computational structural analysis within NASA, the contributions of various Centers, and key flight projects that tested and validated structural design and analysis methods in their ultimate application, we turn to the current state of affairs as of 2010 and future challenges.

Overall, even a cursory historical examination clearly indicates that the last four decades have witnessed revolutionary improvements in all of the following areas:

• Analysis capability.

• Complexity of structures that can be analyzed.

• Number of nodes.

• Types of elements.

• Complexity of processes simulated.

• Nonlinearity.

• Buckling.

• Other geometric nonlinearity.

• Material nonlinearity.

• Time-dependent properties.

• Yield or ultimate failure of some members.

• Statistical/nondeterministic processes.

• Thermal effects.

• Control system interactions.

• Usability.

• Execution time.

• Hardware improvements.

• Efficiency of algorithms.

• Adequate but not excessive model complexity.

• Robustness, diagnostics, and restart capability.

• Computing environment.

• Pre – and post-processing.

Before NASTRAN, capabilities generally available (i.e., not counting proprietary programs at the large aerospace companies) were limited to a few hundred nodes. In 1970, NASTRAN made it possible to analyze models with over 2,000 nodes. Currently, models with hundreds of thousands of nodes are routinely analyzed. The computing environment has changed just as dramatically, or more so: the computer used to be a shared resource among many users—sometimes an entire company, or it was located at a data center used by many companies—with punch cards for input and reams of paper for output. Now, there is a PC (or two) at every engineer's desk. NASTRAN can run on a PC, although some users prefer to run it on UNIX machines or other platforms.

Technology has thus come full circle: NASA now makes extensive use of commercial structural analysis codes that have their roots in NASA technology. Commercial versions of NASTRAN have essentially superseded NASA's COSMIC NASTRAN. That is appropriate, in this author's opinion, because it is not NASA's role to provide commercially competitive performance, user interfaces, etc. The existence and widespread use of these commercial codes indicates successful technology transition.

At the time of this writing, basic capability is relatively mature. Advances are still being made, but it is now possible to analyze the vast majority of macroscopic structural problems that are of practical interest in aeronautics and many other industries.

Improvements in the "usability" category are of greater interest to most engineers. Execution speed has improved by orders of magnitude, but this has been partially offset by corresponding orders-of-magnitude increases in model size. Engineers build models with hundreds of thousands of nodes, because they can.

Pre- and post-processing challenges remain. Building the model and interpreting the results typically take longer than actually running the analysis. It is by no means a trivial task to build a finite element model of a complex structure such as a complete airframe, or a major portion thereof. Some commercial software can generate finite element models automatically from CAD geometry. However, many practitioners in the aircraft industry prefer to have more involvement in the modeling process, because of the complexity of the analysis and the safety-critical nature of the task. The fundamental challenge is to make the modeling job easier, while providing the user with control when required and the ability to thoroughly check the resulting model.[972]

In 1982, Thomas Butler wrote, "I would compare the state of graphics pre- and post-processors today with the state that finite elements were in before NASTRAN came on the scene in 1964. Many good features exist. There is much to be desired in each available package."[973] Industry practitioners interviewed today have expressed similar sentiments. There is no single pre- or post-processing product that meets every need. Some users deliberately switch between different pre- and post-processing programs, utilizing the strengths of each for different phases of the modeling task (such as creating components, manipulating them, and visualizing and interrogating the finished model). A reasonable number of distinct pre- and post-processing systems maintain commercial competition, which many users consider to be important.[974]

As basic analysis capability has become well established, researchers have stepped back to look at the bigger picture. Integration, optimization, and uncertainty modeling are common themes at many of the NASA Centers. This includes integration of design and analysis, of analysis and testing, and of structural analysis with analysis in other disciplines. NASA Glenn Research Center is heavily involved in nondeterministic analysis methods, life prediction, modeling of failure mechanisms, and modeling of composite materials, including high-temperature material systems for propulsion applications. Research at Langley spans many fields, including multidisciplinary analysis and optimization of aircraft and spacecraft, analysis and test correlation, uncertainty modeling and "fuzzy structures," and failure analysis.

In many projects, finite element analysis is being applied at the microscale to gain a better understanding of material behaviors. The ability to perform such analysis is a noteworthy benefit coming from advances in structural analysis methods at the macroscopic level. Very real benefits to industry could result. The weight savings predicted from composite materials have been slow in coming, partly because of limitations on allowable stresses. In the civil aviation industry especially, such limitations are not necessarily based on the inherent characteristics of the material but on the limited knowledge of those characteristics. Analysis that gives insight into material behaviors near failure, documented and backed up by test results, may help to achieve the full potential of composite materials in airframe structures.

Applications of true optimization—such as rigorously finding the mathematical minimum of a "cost function"—are still relatively limited in the aircraft industry. The necessary computational tools exist. However, the combination of practical difficulties in automating complex analyses and a certain amount of cultural resistance has somewhat limited the application of true optimization in the aircraft industry up to the present time. There is untapped potential in this area. The path to reaching it is not necessarily in the development of better computer programs but, rather, in the development and demonstration of processes for the effective and practical use of capabilities that already exist. The DAMVIBS program (discussed previously in the section on the NASA Langley Research Center) might provide a model for how this kind of technology transfer can happen. In that program, industry teams essentially demonstrated to themselves that existing finite element programs could be useful in predicting and improving the vibration characteristics of helicopters—when coupled with some necessary improvements in modeling technique. All of the participants subsequently embraced the use of such methods in the design processes of their respective organizations. A comparable program could, perhaps, be envisioned in the field of structural and/or multidisciplinary optimization in aircraft design.[975]

Considering structural analysis as a stand-alone discipline, however, it can be stated without question that computational methods have been adopted throughout the aircraft industry. Specific processes vary between companies. Some companies perform more upfront optimization than others; some still test exhaustively, while others test minimally. But the aircraft industry as a whole has embraced computational structural analysis and benefited greatly from it.

The benefits of computational structural analysis may not be adequately captured in one concise list, but they include the following:

• Improved productivity of analysis.

• Ability to analyze a more complete range of load cases.

• Ability to analyze a structure more thoroughly than was previously practical.

• Ability to correct and update analyses as designs and requirements mature.

• Improved quality and consistency of analysis.

• Improved performance of the end product. Designs can be improved through more cycles of design/analysis in the early stages of a project, and earlier identification of structural issues, than previously practical.

• Improved capabilities in related disciplines: thermal modeling and acoustic modeling, for example. Some aircraft companies utilize finite element models in the design stage of an aircraft to develop effective noise reduction strategies.

• Ability to analyze structures that could not be practically analyzed before. For example, composite and metallic airframes are different. Metal structures typically have more discrete load paths. Composite structures, such as honeycomb-core panels, have less distinct load paths and are less amenable to analysis by hand using classical methods. Therefore, finite element analysis enables airplanes to be built in ways that would not be possible (or, at least, not verifiable) otherwise.

• Reduced cost and increased utility of testing. Analysis does not replace all testing, but it can greatly enhance the amount of knowledge gained from a test. For example, modeling performed ahead of a test series can help identify the appropriate locations for strain gages, accelerometers, and other instrumentation, and aid in the interpretation of the resulting test data. The most difficult or costly types of testing can certainly be reduced. In a greatly simplified sense, the old paradigm is that testing was the proof of the structure; now, testing validates the model, and the model proves the structure. Practically speaking, most aircraft companies practice something in between these two extremes.

NASA's contributions have included not only the development of the tools but also the development and dissemination of techniques to apply the tools to practical problems and the provision of opportunities—through unique test facilities and, ultimately, flight research projects—to prove, validate, and improve the tools.

In other industries also, there is now widespread use of computerized structural analysis for almost every conceivable kind of part that must operate under conditions of high mechanical and/or thermal stress. NASTRAN is used to analyze buildings, bridges, towers, ships, wind tunnels and other specialized test facilities, nuclear power plants, steam turbines, wind turbines, chemical processing plants, microelectronics, robotic systems, tools, sports equipment, cars, trucks, buses, trains, engines, transmissions, and tires. It is used for geophysical and seismic analysis, and for medical applications.

In conclusion, finite element analysis would have developed with or without NASA's involvement. However, by creating NASTRAN, NASA provided a centerpiece: a point of reference for all other development and an open-ended framework into which new capabilities could be inserted. This framework gradually collected the best or nearly best methods in every area. If NASTRAN had not been developed, different advances would have occurred only within proprietary codes used internally by different industrial companies or marketed by different software companies. There would have been little hope of consolidating all the important capabilities into one code or of making such capabilities available to the general user. NASTRAN brought high-powered finite element analysis within reach of many users much sooner than would have otherwise been the case. At the same time, the job of predicting every aspect of structural performance was by no means finished with the initial release of NASTRAN—nor is it finished yet. NASA has been and continues to be involved in the development of many new capabilities—developing programs and new ways to apply existing programs—and making the resulting tools and methods available to users in the aerospace industry and in many other sectors of the U.S. economy.

Appendix A:

Metals, Ceramics, and Composites

Solid-state materials exist in one of these forms and may be reviewed separately. Metals and alloys, the latter being particularly common, are usually employed as superalloys. These are defined as exhibiting excellent mechanical strength and creep resistance at high temperatures, good surface stability, and resistance to corrosion and oxidation. The base alloying element of a superalloy is usually nickel, cobalt, or nickel-iron. These three elements are compared in Table 1 with titanium.[1024]

TABLE 1: COMPARISON OF TITANIUM WITH SELECTED SUPERALLOYS

ELEMENT | ATOMIC NUMBER | MELTING POINT (K)
Titanium | 22 | 1,941
Iron | 26 | 1,810
Cobalt | 27 | 1,768
Nickel | 28 | 1,726

Superalloys generally are used at temperatures above 1,000 °F, or 810 K. They have been used in cast, rolled, extruded, forged, and powder-processed forms. Shapes produced have included sheet, bar, plate, tubing, airfoils, disks, and pressure vessels. These metals have been used in aircraft, industrial and marine gas turbines, nuclear reactors, aircraft skins, spacecraft structures, petrochemical production, and environmental-protection applications. Although developed for use at high temperatures, some are used at cryogenic temperatures. Applications continue to expand, but aerospace uses continue to predominate.

Superalloys consist of an austenitic face-centered-cubic matrix plus a number of secondary phases. The principal secondary phases are the carbides MC, M6C, M23C6, and the rare M7C3, which are found in all superalloy types, and the intermetallic compound Ni3(Al, Ti), known as gamma-prime, in nickel- and iron-nickel-base superalloys. The most important classes of iron-nickel-base and nickel-base superalloys are strengthened by precipitation of intermetallic compounds within a matrix. Cobalt-base superalloys are invariably strengthened by a combination of carbides and solid solution hardeners. No intermetallic compound possessing the same degree of utility as the gamma-prime precipitate—in nickel-base and iron-nickel-base superalloys—has been found to be operative in cobalt-base systems.

The superalloys derive their strength from solid solution hardeners and precipitating phases. In addition to the elements that promote solid solution hardening and the formation of carbides and intermetallics, elements including boron, zirconium, hafnium, and cerium are added to enhance mechanical or chemical properties.

TABLE 2: SELECTED ALLOYING ADDITIONS AND THEIR EFFECTS

ELEMENT | IRON-NICKEL- AND NICKEL-BASE (%) | COBALT-BASE (%) | EFFECT
Chromium | 5-25 | 19-30 | Oxidation and hot corrosion resistance; solution hardening; carbides
Molybdenum, Tungsten | 0-12 | 0-11 | Solution hardening; carbides
Aluminum | 0-6 | 0-4.5 | Precipitation hardening; oxidation resistance
Titanium | 0-6 | 0-4 | Precipitation hardening; carbides
Cobalt | 0-20 | N/A | Affects amount of precipitate
Nickel | N/A | 0-22 | Stabilizes austenite; forms hardening precipitates
Niobium | 0-5 | 0-4 | Carbides; solution hardening; precipitation hardening (nickel-, iron-nickel-base)
Tantalum | 0-12 | 0-9 | Carbides; solution hardening; oxidation resistance

Table 2 presents a selection of alloying additions, together with their effects.13 The superalloys generally react with oxygen, oxidation being the prime environmental effect on these alloys. General oxidation is not a major problem up to about 1,600 °F, but at higher temperatures, commercial nickel- and cobalt-base superalloys are attacked by oxygen. Below about 1,800 °F, oxidation resistance depends on chromium content, with Cr2O3 forming as a protective oxide; at higher temperatures, chromium and aluminum contribute in an interactive fashion to oxidation protection, with aluminum forming the protective Al2O3. Because the level of aluminum is often insufficient to provide long-term protection, protective coatings are often applied. Cobalt-base superalloys are readily welded using gas-metal-arc (GMA) or gas-tungsten-arc (GTA) techniques. Nickel- and iron-nickel-base superalloys are considerably less weldable, for they are susceptible to hot cracking, postweld heat treatment cracking, and strain-age cracking. However, they have been successfully welded using GMA, GTA, electron-beam, laser, and plasma arc methods. Superalloys are difficult to weld when they contain more than a few percentage points of titanium and aluminum, but superalloys with limited amounts of these alloying elements are readily welded.[1025]

So much for alloys. A specific type of fiber, carbon, deserves discussion in its own right because of its versatility. It extends the temperature resistance of metals by having the unparalleled melting temperature of 6,700 °F. Indeed, it actually gains strength with temperature, being up to 50 percent stronger at 3,000 °F than at room temperature. It also has a density of only 1.50 grams per cubic centimeter (g/cm3). These properties allowed carbon fiber to serve in two path-breaking vehicles of recent decades. The Voyager aircraft, which flew around the world in 1986 on a single load of fuel, had some 90 percent of its structure made of carbon fibers in a lightweight matrix. The Space Shuttle also relies on carbon for thermal protection of the nose and wing leading edges.[1026]

These areas needed particularly capable thermal protection, and carbon was the obvious candidate. It was lighter than aluminum and could be protected against oxidation with a coating. Graphite was initially the standard form, but it had failed to enter the aerospace mainstream. It was brittle and easily damaged, and it did not lend itself to use with thin-walled structures.

The development of a better carbon began in 1958 with Vought Missiles and Space Company (later LTV Aerospace) in the forefront. The work went forward with support from the Dyna-Soar and Apollo programs and brought the advent of an all-carbon composite consisting of graphite fibers in a carbon matrix. Existing composites had names such as carbon-phenolic and graphite-epoxy; this one was carbon-carbon.

It retained the desirable properties of graphite in bulk: light weight, temperature resistance, and resistance to oxidation when coated. It had a very low coefficient of thermal expansion, which reduced thermal stress. It also had better damage tolerance than graphite.

Carbon-carbon was a composite. As with other composites, Vought engineers fabricated parts of this material by forming them as layups. Carbon cloth gave a point of departure, being produced by oxygen-free pyrolysis of a woven organic fiber such as rayon. Sheets of this fabric, impregnated with phenolic resin, were stacked in a mold to form the layup and then cured in an autoclave. This produced a shape made of laminated carbon cloth phenolic. Further pyrolysis converted the resin to its basic carbon, yielding an all-carbon piece that was highly porous because of the loss of volatiles. It therefore needed densification, which was achieved through multiple cycles of reimpregnation under pressure with an alcohol, followed by further pyrolysis. These cycles continued until the part had its specified density and strength.

The Shuttle’s design specified carbon-carbon for the nose cap and leading edges, and developmental testing was conducted with care. Structural tests exercised their methods of attachment by simulating flight loads up to design limits, with design temperature gradients. Other tests, conducted within an arc-heated facility, determined the thermal responses and hot-gas leakage characteristics of interfaces between the carbon-carbon and the rest of the vehicle.

Additional tests used articles that represented substantial portions of the orbiter. An important test item, evaluated at NASA Johnson, reproduced a wing leading edge and measured 5 by 8 feet. It had two leading-edge panels of carbon-carbon set side by side, a section of wing structure that included its main spars, and aluminum skin covered with thermal-protection tiles. It had insulated attachments, internal insulation, and internal seals between the carbon-carbon and the tiles. It withstood simulated air loads, launch acoustics, and mission temperature-pressure environments—not once but many times.[1027]

There was no doubt that, left to themselves, the panels of carbon-carbon that protected the leading edges would have continued to do so. Unfortunately, they were not left to themselves. During the ascent of the Shuttle Columbia, on January 16, 2003, a large piece of insulating foam detached itself from a strut that joined the external tank to the front of the orbiter. The vehicle at that moment was slightly more than 80 seconds into the flight, traveling at nearly Mach 2.5. This foam struck a carbon-carbon panel and delivered what proved to be a fatal wound. In the words of the accident report:

Columbia re-entered Earth's atmosphere with a preexisting breach in the leading edge of its left wing. This breach, caused by the foam strike on ascent, was of sufficient size to allow superheated air (probably exceeding 5,000 degrees Fahrenheit) to penetrate the cavity behind the RCC panel. The breach widened, destroying the insulation protecting the wing's leading edge support structure, and the superheated air eventually melted the thin aluminum wing spar. Once in the interior, the superheated air began to destroy the left wing. Finally, over Texas, the increasing aerodynamic forces the Orbiter experienced in the denser levels of the atmosphere overcame the catastrophically damaged left wing, causing the Orbiter to fall out of control.[1028]

Three years of effort succeeded in securing the foam on future flights, and the Shuttle returned to flight in July 2006 with foam that stayed put. In contrast with the high tech of the Shuttle, carbon fibers also are finding use in such low-tech applications as automobiles. As with the Voyager round-the-world aircraft, what counts is carbon's light weight, which promotes fuel economy. The Graphite Car employs carbon fiber epoxy-matrix composites for body panels, structural members, bumpers, wheels, drive shafts, engine components, and suspension systems. A standard steel auto would weigh 4,000 pounds, but this car weighs only 2,750 pounds, for a saving in weight of nearly one-third.[1029]

Superalloys thus represent the mainstream in aerospace materials, with composites such as carbon fiber extending their areas of use. There also are ceramics, but these are highly specialized. They cannot compete with the temperature resistance of carbon or with its light weight. They nevertheless come into play as insulators on turbine blades that protect the underlying superalloy. This topic will be discussed separately.

The Precision Aircraft Control Technology YF-4E Program

In order to evaluate the use of computer-controlled fly-by-wire systems to actively control a relaxed stability aircraft, the SFCS YF-4E was further modified under the Air Force Precision Aircraft Control Technology (PACT) program. Movable canard surfaces were mounted
ahead of the wing and above the YF-4E's inlets. A dual-channel electronic fly-by-wire system with two hydraulic systems directed the canard actuators. The canards, along with the capability to manage internal fuel to move the center of gravity of the aircraft aft, effectively reduced the static stability margin to as low as -7.5 percent, that is, fully unstable in the pitch axis. Relaxing static stability by moving the center of gravity aft reduces trim drag and decreases the downward force that the horizontal tail needs to produce to trim the aircraft. However, as the center of gravity moves aft, an aircraft becomes less and less stable in the pitch axis, leading to a need to provide artificial stability augmentation. A negative static margin means that the aircraft is unstable: it cannot maintain stable flight and will be uncontrollable without artificial stability augmentation. During its test program, the PACT aircraft was flown 34 times, primarily by McDonnell-Douglas test pilots. The PACT aircraft demonstrated significant performance gains that included an increase in the 4 g maneuvering ceiling of over 4,000 feet (to 50,000 feet) along with an increase in turning radius.[1140] The approach used by the PACT YF-4E to create a relaxed stability research aircraft was soon mirrored by several foreign flight research programs that are discussed in a separate section. In January 1979, the PACT YF-4E aircraft was delivered by Army helicopter from the McDonnell-Douglas factory in St. Louis to the Air Force Museum at Wright-Patterson AFB, OH, for permanent display.[1141] By this time, the Air Force had initiated the Control Configured Vehicle (CCV) F-16 project. This was followed by the Advanced Fighter Technology Integration (AFTI) F-16 program. Those programs would carry forward the fly-by-wire explorations initiated by the PACT YF-4E.
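For readers unfamiliar with the term, static margin is conventionally expressed as the distance of the center of gravity ahead of the neutral point, as a fraction of the mean aerodynamic chord:

$$ \text{SM} = \frac{x_{np} - x_{cg}}{\bar{c}} $$

so shifting the center of gravity aft of the neutral point drives SM negative, as with the -7.5 percent figure quoted above, and the bare airframe will diverge in pitch unless the flight control system supplies artificial stability.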

Technology Transfer and Lessons Learned

Flight-test results from the AFTI/F-16 program were exceptionally well-documented by NASA and widely published in technical papers,
memorandums, and presentations.[1197] These provide invaluable insights into the problems, issues, and achievements encountered in this relatively early attempt to integrate an asynchronous digital flight control system into a high-performance military jet fighter. As the definition implies, in an asynchronous flight control system design, the redundant channels run autonomously. Each computer samples sensors and evaluates flight control laws independently. Each separately sends command signals to an averaging or selection device that is used to drive the flight control actuators. In this DFCS implementation, the unsynchronized individual computers can sample the sensors at slightly different times. Thus, they can obtain readings that may differ quite appreciably from one another, especially if the aircraft is maneuvering aggressively. Flight control law gains can further amplify these input differences, causing even larger differences between the results that are submitted to the output selection algorithm.[1198]
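The mechanism is easy to picture with a deliberately simplified sketch (generic Python, assuming a made-up sensor waveform, gain, and monitor threshold; this is not the AFTI/F-16 software or its actual selection logic): three unsynchronized channels sample the same signal a few milliseconds apart, the common control-law gain magnifies the skew, and a mid-value selector picks the command that reaches the actuator.

import math

GAIN = 8.0          # illustrative control-law gain (hypothetical value)
TRIP_LEVEL = 2.0    # illustrative channel-disagreement threshold (hypothetical)

def pitch_rate(t):
    """Stand-in sensor signal during an aggressive maneuver (made-up waveform)."""
    return 30.0 * math.sin(12.0 * t)

def channel_command(t_sample):
    """One redundant channel: sample the sensor, apply the control-law gain."""
    return GAIN * pitch_rate(t_sample)

def mid_value_select(commands):
    """Output selection device: pass the middle of the three channel commands."""
    return sorted(commands)[1]

# Asynchronous frames: each channel samples a few milliseconds after the last.
t = 0.100
commands = [channel_command(t + skew) for skew in (0.000, 0.004, 0.008)]
spread = max(commands) - min(commands)

print("channel commands:", [round(c, 2) for c in commands])
print("selected command:", round(mid_value_select(commands), 2))
print("spread:", round(spread, 2), "-> monitor trips:", spread > TRIP_LEVEL)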

During ground qualification of the AFTI/F-16, it was found that these differences sometimes resulted in a channel being declared failed when no real failure had occurred.[1199] An even more serious shortcoming of asynchronous flight control systems can occur when the control laws contain decision points. Sensor noise and sampling variations may cause independent channels within the DFCS to take different paths at the decision points and to produce widely divergent outputs.[1200] This occurred on AFTI/F-16 flight No. 44. Two channels in the DFCS declared each other failed; the analog backup was not selected because simultaneous failure of two DFCS channels had not been anticipated. The pilot could not reset the system, and the aircraft was flown home on the single remaining DFCS channel. In this case, all protective redundancy had been lost, yet an actual hardware failure had not occurred. Several other difficulties and failure indications were observed during the flight-test program that were traced to asynchronous operation, allowing different channels to take different paths at certain selection points. The software was subsequently modified to introduce voting at some of these software selection points.[1201]

Propulsion Controlled Aircraft System

Initiated in 1989, the Propulsion Controlled Aircraft (PCA) system was developed and flight-tested at NASA Dryden, with the goal of helping pilots land safely in the event that flight control components were disabled. PCA automatically provides computer-controlled variations in engine thrust that give pilots adequate pitch, yaw, and roll authority to fly their aircraft. The PCA system was tested and initially demonstrated on the HIDEC F-15. In simulator studies, NASA demonstrated the PCA concept on more than a dozen other commercial and military aircraft. The PCA system integrates the aircraft flight control and engine control computers to manage engine thrust and ensure adequate aircraft control. When the PCA system is activated, moving the control column aft causes the engine thrust to be automatically increased, and the aircraft begins to climb. Forward movement of the control column results in reduced thrust, and descent begins. Right or left movements of the control column produce differential engine thrust, resulting in the aircraft yawing in the direction of the desired turn. Flight-testing with the HIDEC F-15 was carried out at landing approach speeds of 150 knots with the flaps down and between 170 and 190 knots with the flaps retracted. At the conclusion of testing, the HIDEC F-15 accomplished a successful landing using the PCA system on April 21, 1993, after a flight in which the pilot used only engine power to turn, climb, and descend for approach to the runway.[1269]
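The control strategy can be sketched in a few lines (a simplified, hypothetical mapping with invented gain values, not NASA's or Honeywell's actual control laws): longitudinal column motion commands a symmetric thrust change on both engines, while lateral column motion commands a thrust split between them.

def pca_thrust_commands(pitch_cmd, roll_cmd, trim_thrust=0.6,
                        pitch_gain=0.2, yaw_gain=0.1):
    """Map column inputs in the range -1..1 to left/right throttle settings (0..1).

    Aft column (positive pitch_cmd) raises thrust on both engines to climb;
    right column (positive roll_cmd) raises left-engine thrust relative to the
    right so the aircraft yaws, then rolls, into a right turn. All gains here
    are illustrative placeholders.
    """
    symmetric = trim_thrust + pitch_gain * pitch_cmd
    differential = yaw_gain * roll_cmd
    left = min(max(symmetric + differential, 0.0), 1.0)
    right = min(max(symmetric - differential, 0.0), 1.0)
    return left, right

# Example: a gentle climb command combined with a slight right turn.
print(pca_thrust_commands(pitch_cmd=0.3, roll_cmd=0.2))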

The NASA Dryden F-15A HIDEC testbed had originally been obtained from the Air Force in January 1976. During its career with NASA, it was involved in more than 25 advanced research projects involving aerodynamics, performance, propulsion control, systems integration, instrumentation development, human factors, and flight-test techniques before its last flight at Dryden, in October 1993.[1270]

A similar propulsion controlled aircraft approach was later evaluated and publicly demonstrated using a modified three-engine McDonnell-Douglas MD-11 jet airliner in a cooperative program between NASA, McDonnell-Douglas, Pratt & Whitney, and Honeywell. Pratt & Whitney modified the engine control software, and Honeywell designed the software for the MD-11 flight control computer. Standard autopilot controls already present in the aircraft were used along with the Honeywell PCA software in the reprogrammed MD-11 flight control computers. NASA Ames performed computer simulations in support of the PCA program. On August 29, 1995, NASA Ames test pilot Gordon Fullerton successfully landed the PCA-modified MD-11 at Edwards AFB with an engine out after activating the aircraft's auto-land system.[1271] Simulator testing of a PCA system continued using a NASA Ames-FAA motion-based Boeing 747 simulator, with pilots making about 50 landings in the simulator. Additional simulation research by Ames resulted in further tests of the PCA system on B-747, B-757, MD-11, and C-17 aircraft. NASA Dryden test pilots flew simulated tests of the system in August 1998 in the NASA Ames Advanced Concepts Simulator. Ten pilots were involved in these tests, with 20 out of 20 attempted landings successfully accomplished. PCA technology can be used on current or future aircraft equipped with digital flight control systems.[1272]

Conclusion and a Look Ahead

For more than 50 years now, NASA has methodically and, for the most part, quietly advanced the state of the art of propulsion technology. With the basic design of the jet engine unchanged since it was invented during World War II, modern jet engines incorporate every lesson learned during NASA's past five decades of research. As a result, jet engines are now quieter, safer, more fuel-efficient, less expensive to operate, and less polluting, while being easier to maintain. And thanks to advancements in computers and simulations, new engines can be tested for thousands of hours at a time without ever bending one piece of aluminum or braiding a square yard of composite material.

So what's in store for propulsion technology during the next few decades? Improvements in every possible variable of engine operation are still possible, with future advances more closely linked to new aircraft designs, such as the blended wing and body, in which the engines may be more fully integrated into the structure of the aircraft.

In a feature story written in April 2009 for NASA's Aeronautics Research Mission Directorate Web site, this author interviewed several key Agency officials who are considering what the future holds for engine development and making plans for what the Agency's approach will be for managing the effort. Here is that look ahead.

NASA’s Experimental (Mod-0) 100-Kilowatt Wind Turbine Generator (1975-1987)

Between 1974 and 1988, NASA Lewis led the U.S. program for large wind turbine development, which included the design and installation of 13 power-utility-size turbines. The 13 wind turbines included an initial testbed turbine designated the Mod-0 and three generations of followup wind turbines designated Mod-0A/Mod-1, Mod-2, and Mod-5. As noted in the Project Independence task force report, the initial 100-kilowatt wind turbine project and related supporting research was to be performed in-house by NASA Lewis, while the remaining 100-kilowatt systems, megawatt systems, and large-scale multiunit systems subprograms were to be performed by contractors under NASA Lewis direction. Each successive generation of technology increased reliability and efficiency while reducing the cost of electricity. These advances were made by gaining a better understanding of the system-design drivers, improving the analytical design tools, verifying design methods with operating field data, and incorporating new technology and innovative designs. However, before these systems could be fabricated and installed, NASA Lewis needed to design and construct an experimental testbed wind turbine generator.

NASA's first experimental wind turbine (the Mod-0) was constructed at Plum Brook Station in Sandusky, OH, and first achieved rated speed and power in December 1975. The initial design of the Mod-0 drew upon some of the previous information from the Smith-Putnam and Hutter-Allgaier turbines. The primary objectives of the Mod-0 wind turbine generator were to provide engineering data for future use as a base for the entire Federal Wind Energy Program and to serve as a testbed for the various components and subsystems, including the testing of different design concepts for blades, hubs, pitch-change mechanisms, system controls, and generators. Also, a very important function of the Mod-0 was to validate a number of computer models, codes, tools, and control algorithms.

The Mod-0 was an experimental 100-kilowatt wind turbine generator that, at a wind speed of 18 miles per hour (mph), was expected to generate 180,000 kilowatt-hours of electricity per year in the form of 440-volt, 3-phase, 60-cycle alternating current output. The initial testbed system, which included two metal blades that were each 62 feet long from hub to blade tip and located downwind of the tower, was mounted on a 100-foot, four-legged steel lattice (pinned truss design) tower. The drive train and rotor were in a nacelle with a centerline 100 feet above ground.
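Those two figures imply a modest capacity factor, as a quick back-of-the-envelope check shows (the calculation is inferred from the numbers above, not a figure quoted in the project documentation):

$$ \frac{180{,}000\ \text{kWh}}{100\ \text{kW} \times 8{,}760\ \text{h}} \approx 0.21, $$

that is, the machine was expected to deliver roughly a fifth of the energy it would produce if it ran continuously at rated power.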

The blades, which were built by Lockheed and were based on NASA's and Lockheed's experience with airplane wing designs, were capable of pitch change (up and down movement) and full feather (angle of the blade change so that wind resistance is minimized). The hub was of the rigid type, meaning that the blades were bolted to the main shaft. A yaw (deviation from a straight path) control aligned the wind turbine with the wind direction, and pitch control was used for startup, shutdown, and power control functions. When the wind turbine was in a shutdown mode, the blades were feathered and free to slowly rotate. The system was linked to a public utility power network through an automatic synchronizer that converted direct current to alternating current.[1500]

A number of lessons were learned from the Mod-0 testbed. One of the first problems involved the detection of larger than expected blade bending incidents that would have eventually caused early fatigue failure of the blades. The blade bending occurred for both the flatwise (out-of-plane) and edgewise (in-plane) blade positions. Followup study of this problem determined that the high blade loads that resulted in the bending of the blades were caused by impulses applied to the blade each time it passed through the wake of the tower. Basically, the pinned truss design of the tower was blocking the airflow to a much greater degree than anticipated. The cause of this problem, which related to the flatwise load factors, was confirmed by site wind measurements and wind tunnel tower model tests. The initial measure taken to reduce the blocking effect was to remove the stairway from the tower. Eventually, however, NASA developed the soft tube style tower that later became the standard construction method for most wind turbine towers. Followup study of the edgewise blade loads determined that the problem was caused by excessive nacelle yawing (side-to-side) motion. This problem was addressed by replacing a single yaw drive, which aligns the rotor with the wind direction, with a dual yaw drive, and by adding three brakes to the yaw system to provide additional stiffness.[1501]

Both of these measures reduced the blade bending loads to below predicted levels. Detection of these problems on the testbed Mod-0 prompted a reevaluation of the analytical tools and a subsequent redesign of the wind turbine that proved extremely important in the design of NASA's later large horizontal-axis wind turbines. In other operational testing of the Mod-0 system, NASA engineers determined that the controls for speed, power, and yaw worked satisfactorily and that synchronization to the power utility network was successfully demonstrated. The startup, utility operation, shutdown, and standby subsystems also worked satisfactorily. Finally, the Mod-0 was used to check out the remote operation planned for future power utility systems. In summary, the Mod-0 project satisfied its primary objective of providing the entire Federal Wind Energy Program with early operations and performance data, as well as continued experience in testing new concepts and components. Even as NASA Lewis moved forward with fabrication of its next-generation Mod-0A and Mod-1 wind turbines, the Mod-0 testbed continued to provide valuable testing of new configurations, components, and concepts for more than 11 additional years.

The Path to the Area Rule

Conventional high-speed aircraft design emulated Ernst Mach's finding that bullet shapes produced less drag. Aircraft designers started with a pointed nose, gradually thickened the fuselage to increase its cross-sectional area, added wings and a tail, and then tapered the fuselage diameter back down toward the tail. The prevailing rule of thumb defined the ideal streamlined body for supersonic flight as a function of the diameter of the fuselage. Understanding how to incorporate the wing and tail, which were added for practical purposes because airplanes need them to fly, into Mach's ideal high-speed shape soon became the focus of Whitcomb's investigation.[152]

The 8-foot HST team at Langley began a series of tests on various wing and body combinations in November 1951. The wind tunnel models featured swept, straight, and delta wings, and fuselages with varying amounts of curvature. The objective was to evaluate the amount of drag generated by the interference between the two shapes at transonic speeds. The tests led Whitcomb to two important realizations. First, variations in fuselage shape produced marked changes in wing drag. Second, and most important, he learned that fuselage and wing drag had to be considered together, as a single interacting aerodynamic system, rather than separately, as they had been before.[153]

While Whitcomb was performing his tests, he took a break to attend a Langley technical symposium, where swept wing pioneer Adolf Busemann presented a helpful concept for imagining transonic flow. Busemann asserted that wind tunnel researchers should emulate aerodynamicists and theoretical scientists in visualizing airflow as analogous to plumbing. In Busemann's mind, an object surrounded by streamlines constituted a single stream tube. Visualizing "uniform pipes going over the surface of the configuration" assisted wind tunnel researchers in determining the nature of transonic flow.[154]

Whitcomb contemplated his findings in the 8-foot HST and Busemann's analogy during one of his daily thinking sessions in December 1951. Since his days at Worcester, he had dedicated a specific part of each day to thinking. At the core of Whitcomb's success in solving efficiency problems aerodynamically was the fact that, in the words of one NASA historian, he was the kind of "rare genius who can see things no one else can."[155] He relied upon his mind's eye, the nonverbal thinking necessary for engineering, to visualize the aerodynamic process, specifically transonic airflow.[156] Whitcomb's ability to apply his findings to the design of aircraft was a clear indication that using his mind through intuitive reasoning was as much an analytical aerodynamic tool as a research airplane, wind tunnel, or slide rule.

With his feet propped up on his desk in his office, a flash of inspiration (a "Eureka" moment in the mythic tradition of his hero, Edison) led him to the solution for reducing transonic drag. Whitcomb realized that the total cross-sectional area of the fuselage, wing, and tail caused transonic drag or, in his words: "transonic drag is a function of the longitudinal development of the cross-sectional areas of the entire airplane."[157] The drag was not simply the result of shock waves forming at the nose of the airplane; drag-inducing shock waves also formed just behind the wings. Whitcomb visualized in his mind's eye that if a designer narrowed the fuselage, reducing its cross section where the wings attached, and enlarged the fuselage again at the trailing edge, then the airplane would make a smoother transition from subsonic to supersonic speeds. Pinching the fuselage to resemble a wasp's waist allowed smoother flow of the streamlines as they traveled from the nose over the fuselage, wings, and tail. Even though the fuselage was shaped differently, the combined cross-sectional area of fuselage and wing varied along the length of the airplane just as it would for a smooth, unwinged body. Without the pinch, the streamlines would bunch and form shock waves, which created the high energy losses that prevented supersonic flight.[158] The removal at the wing of those "aerodynamic anchors," as historians Donald Baals and William Corliss called them, and the recognition of the sensitive balance between fuselage and wing volume were the key.[159]
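Whitcomb's statement has a precise counterpart in slender-body theory, offered here only as context; the formula does not appear in his memorandum. In the classical result, the zero-lift wave drag of a slender configuration of length L depends only on the longitudinal distribution of its total cross-sectional area S(x):

D_{\text{wave}} = -\frac{\rho_\infty U_\infty^{2}}{4\pi}\int_{0}^{L}\!\int_{0}^{L} S''(x_1)\,S''(x_2)\,\ln\lvert x_1 - x_2\rvert \, dx_1\, dx_2

Because only S(x), the sum of the fuselage, wing, and tail cross sections at each station, enters the integral, a wasp-waisted fuselage that keeps the combined distribution smooth can have the same low wave drag as an ideal body of revolution. That is the quantitative content of Whitcomb's insight.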

Verification of the new idea involved the comparison of the data compiled in the 8-foot HST, all other available NACA-gathered transonic data, and Busemann's plumbing concept. Whitcomb was convinced that his area rule made sense of the questions he had been investigating. Interestingly enough, Whitcomb's colleagues in the 8-foot HST, including John Stack, were skeptical of his findings. He presented his findings to the Langley community at its in-house technical seminar.[160] After Whitcomb's 20-minute talk, Busemann remarked: "Some people come up with half-baked ideas and call them theories. Whitcomb comes up with a brilliant idea and calls it a rule of thumb."[161] The name "area rule" came from the combination of "cross-sectional area" with "rule of thumb."[162]

With Busemann's endorsement, Whitcomb set out to validate the rule through wind tunnel testing in the 8-foot HST. His models featured fuselages narrowed at the waist. By April 1952, he had enough data to show that pinching the fuselage resulted in a significant reduction in transonic drag. The resultant research memorandum, "A Study of the Zero Lift Drag Characteristics of Wing-Body Combinations near the Speed of Sound," appeared the following September. The NACA immediately distributed it secretly to industry.[163]

The area rule provided aircraft designers with a transonic solution in four steps. First, the designer plotted the cross sections of the aircraft fuselage along its length. Second, the design's actual area distribution, which reflected outside considerations such as engine diameter or the overall size dictated by an aircraft carrier's elevator deck, was compared with the ideal area distribution derived from previous NACA mathematical studies. The third step involved reconciling the actual area distribution with the ideal one; once again, practical design considerations shaped this step. Finally, the designer converted the new area distribution back into cross sections, which resulted in the narrowed fuselage that took into account the combined area of the fuselage and wing.[164] A designer who followed those four steps would produce a successful design with minimum transonic drag.
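The four steps lend themselves to a short numerical sketch. The fragment below follows the procedure in the paragraph above but is only an illustration: the Sears-Haack minimum-wave-drag body stands in for the NACA-derived ideal distributions mentioned in the text, and every dimension, function name, and clamp value is invented for this example.

```python
import math

# Minimal sketch of the four-step area-rule procedure described above.
# The Sears-Haack body stands in for the NACA-derived ideal area
# distributions cited in the text; all dimensions and names are invented.

N_STATIONS = 50
LENGTH = 30.0   # hypothetical airplane length in meters

def sears_haack_area(x, length, max_area):
    """Cross-sectional area of a Sears-Haack minimum-wave-drag body at station x."""
    xi = x / length
    return max_area * (4.0 * xi * (1.0 - xi)) ** 1.5

def reconcile(fuselage, wing_and_tail, ideal_total):
    """Steps 3 and 4: reconcile the actual total area with the ideal, then
    recover the re-contoured ('wasp-waisted') fuselage cross sections."""
    pinched = []
    for fus, other, want in zip(fuselage, wing_and_tail, ideal_total):
        practical_minimum = 0.6 * fus   # stand-in for engine/cockpit constraints
        pinched.append(max(want - other, practical_minimum))
    return pinched

# Step 1: tabulate the design's cross sections along its length.
stations = [i * LENGTH / (N_STATIONS - 1) for i in range(N_STATIONS)]
fuselage = [sears_haack_area(x, LENGTH, 3.0) for x in stations]
wing_tail = [1.2 * math.exp(-((x - 0.55 * LENGTH) / 3.0) ** 2) for x in stations]

# Step 2: an ideal total-area distribution for the same length.
ideal_total = [sears_haack_area(x, LENGTH, 3.6) for x in stations]

# Steps 3 and 4: reconcile, then read off the narrowed fuselage sections.
pinched_fuselage = reconcile(fuselage, wing_tail, ideal_total)
deepest_pinch = max(f - p for f, p in zip(fuselage, pinched_fuselage))
print(f"deepest fuselage indentation: {deepest_pinch:.2f} m^2 of cross section")
```

The practical_minimum clamp in the sketch stands in for the outside considerations cited above, such as engine diameter or a carrier elevator's limits, that keep the reconciled fuselage from being narrowed arbitrarily.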