Recent Advances in Fluid Mechanics

The methods of this field include ground test, flight test, and CFD. Ground-test facilities continue to show their limitations, with no improvements presently in view that would advance the realism of tests beyond Mach 10. A recently announced Air Force project, Mariah, merely underscores the point. This installation, to be built at AEDC, is to produce flows of up to Mach 15 that run for as long as 10 seconds, in contrast to the milliseconds of shock tunnels. Mariah calls for a powerful electron beam to create an electrically charged airflow that can be accelerated with magnets, but the required beam, rated at 200 megawatts, is well beyond the state of the art. Even with support from a planned research program, Mariah is not expected to enter service until 2015.72

Similar slow progress is evident in CFD, where the flow codes of recent projects have amounted merely to updates of those used in NASP. For the design of the X-43A, the most important such code was the General Aerodynamic Simulation Program (GASP). NASP had used version 2.0; the X-43A used version 3.0, which continued to incorporate turbulence models. Results from the codes often showed good agreement with test data, but this was because the codes had been benchmarked extensively against wind-tunnel measurements; it did not reflect reliance on first principles at higher Mach.

Engine studies for the X-43A used their own codes, which again amounted to those of NASP. GASP 3.0 had the relatively recent date of 1996, but other pertinent literature showed nothing more recent than 1993, with some papers dating to the 1970s.73

The 2002 design of ISTAR, a rocket-based combined-cycle engine, showed that specialists had adopted codes of more recent vintage. Studies of the forebody and inlet used OVERFLOW, dating from 1999, while analysis of the combustor used VULCAN version 4.3, whose users' manual was published in March 2002. OVERFLOW used equilibrium chemistry while VULCAN included finite-rate chemistry, but both solved the Navier-Stokes equations with a two-equation turbulence model. In method, this was no more than had been done during NASP, more than a decade earlier.74
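To make the phrase concrete: a two-equation model of the general kind cited here, taking the widely used k-epsilon family as an illustration rather than as the specific closure in OVERFLOW or VULCAN, solves transport equations for the turbulent kinetic energy k and its dissipation rate epsilon, and combines them into an eddy viscosity that stands in for the unresolved turbulence:

\[
\nu_t = C_\mu \frac{k^{2}}{\varepsilon}, \qquad C_\mu \approx 0.09 .
\]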

The reason for this lack of progress can be understood with reference to Karl Marx, who wrote that people's thoughts are constrained by their tools of production. The tools of CFD have been supercomputers, and during the NASP era the best of them had been rated in gigaflops, billions of floating-point operations per second.75 Such computations required the use of turbulence models. But recent years have seen the advent of teraflop machines. A list of the world's 500 most powerful machines is available on the Internet, and the accompanying table gives specifics for the top 10 of November 2004, along with number 500.

One should not view this list as having any staying power. Rather, it gives a snapshot of a technology that is advancing with extraordinary rapidity. Thus, in 1980 NASA was hoping to build the Numerical Aerodynamic Simulator and to have it online in 1986. It was to be the world's fastest supercomputer, with a speed of one gigaflop (0.001 teraflop), yet it would have fallen below number 500 as early as 1994. Number 500 of 2004, rated at 850 gigaflops, would have been number one as recently as 1996. In 2002 Japan's Earth Simulator was five times faster than its nearest rivals; in 2004 it had fallen to third place.76

Today's advances in speed are being accomplished both by increasing the number of processors and by multiplying the speed of each such unit. The ancestral Illiac 4, for instance, had 64 processors and was rated at 35 megaflops.77 In 2004 IBM's BlueGene was two million times more powerful. This happened both because it had 512 times as many processors (32,768 rather than 64) and because each individual processor had 4,000 times more power. Put another way, a single BlueGene processor could do the work of two of the Numerical Aerodynamic Simulators conceived in 1980.
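The arithmetic bears this out: 70,720 gigaflops is roughly two million times the Illiac's 35 megaflops; dividing that factor by the 512-fold growth in processor count leaves a per-processor gain of about 4,000; and spreading 70,720 gigaflops over 32,768 processors gives about 2.2 gigaflops apiece, or roughly two of the one-gigaflop machines planned in 1980:

\[
\frac{70{,}720\ \text{Gflops}}{0.035\ \text{Gflops}} \approx 2.0\times10^{6},\qquad
\frac{2.0\times10^{6}}{512} \approx 3{,}900,\qquad
\frac{70{,}720\ \text{Gflops}}{32{,}768} \approx 2.2\ \text{Gflops}.
\]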

Analysts are using this power. The NASA-Ames aerodynamicist Christian Stemmer, who has worked with a four-teraflop machine, notes that it achieved this speed by using vectors, strings of 256 numbers, but that much of its capability went unused when his vector held only five numbers, representing five chemical species. The computation also slowed when finding the value of a single constant or when taking square roots, which is essential when calculating the speed of sound. Still, he adds, "people are happy if they get 50 percent" of a computer's rated performance. "I do get 50 percent, so I'm happy."78

THE WORLD'S FASTEST SUPERCOMPUTERS (November 2004; the list is revised semiannually)

Rank  Name                             Manufacturer                    Location                                          Year  Rated speed (gigaflops)  Processors
1     BlueGene                         IBM                             Rochester, NY                                     2004  70,720                   32,768
2     Numerical Aerodynamic Simulator  Silicon Graphics                NASA-Ames                                         2004  51,870                   10,160
3     Earth Simulator                  Nippon Electric                 Yokohama, Japan                                   2002  35,860                   5,120
4     Mare Nostrum                     IBM                             Barcelona, Spain                                  2004  20,530                   3,564
5     Thunder                          California Digital Corporation  Lawrence Livermore National Laboratory            2004  19,940                   4,096
6     ASCI Q                           Hewlett-Packard                 Los Alamos National Laboratory                    2002  13,880                   8,192
7     System X                         Self-made                       Virginia Tech                                     2004  12,250                   2,200
8     BlueGene (prototype)             IBM, Livermore                  Rochester, NY                                     2004  11,680                   8,192
9     eServer pSeries 655              IBM                             Naval Oceanographic Office                        2004  10,310                   2,944
10    Tungsten                         Dell                            National Center for Supercomputing Applications  2003  9,819                    2,500
500   Superdome 875                    Hewlett-Packard                 SBC Service, Inc.                                 2004  850.6                    416

Source: http://www.top500.org/list/2004/11

Teraflop ratings, representing a thousand-fold advance over the gigaflops of NASP and subsequent projects, are required because the most demanding problems in CFD are four-dimensional, including three physical dimensions as well as time. William Cabot, who uses the big Livermore machines, notes that "to get an increase in resolution by a factor of two, you need 16" as the increase in computational speed, because the time step must also be reduced. "When someone says, 'I have a new computer that's an order of magnitude better,'" Cabot continues, "that's about a factor of 1.8. That doesn't impress people who do turbulence."79
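Cabot's figures follow from a standard scaling argument: the work of a time-accurate computation grows with the number of grid points in each of the three spatial directions and with the number of time steps, and a stability limit of the CFL type ties the time step to the grid spacing. Halving the spacing therefore multiplies the cost by 2^4 = 16, while a tenfold gain in speed buys only a factor of 10^(1/4), about 1.8, in resolution:

\[
\text{cost} \;\propto\; N_x N_y N_z N_t \;\propto\; (\Delta x)^{-4},
\qquad 2^{4} = 16, \qquad 10^{1/4} \approx 1.78 .
\]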

But the new teraflop machines increase the achievable resolution by roughly a factor of 10. This opens the door to two new topics in CFD: Large-Eddy Simulation (LES) and Direct Numerical Simulation (DNS).

One approaches the pertinent issues by examining the structure of turbulence within a flow. The overall flowfield has a mean velocity at every point. Within it are turbulent eddies that span a very broad range of sizes. The largest carry most of the turbulent energy and accomplish most of the turbulent mixing, as in a combustor. The smaller eddies form a cascade, in which those of different sizes are intermingled. Energy flows down this cascade, from the larger eddies to the smaller ones, and while turbulence is often treated as a phenomenon that involves viscosity, the transfer of energy along the cascade takes place through inviscid processes. Viscosity becomes important only at the level of the smallest eddies, which were studied by Andrei Kolmogorov in the Soviet Union and hence define what is called the Kolmogorov scale of turbulence. At this scale, viscosity, an intermolecular effect, dissipates the energy of the cascade into heat. The British meteorologist Lewis Richardson, who introduced the concept of the cascade in 1922, summarized the matter in a memorable sendup of a poem by England's Jonathan Swift:

Big whorls have little whorls
Which feed on their velocity;
And little whorls have lesser whorls,
And so on to viscosity.80
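The Kolmogorov scale invoked here has a compact definition, which fixes the size of the smallest eddies in terms of the kinematic viscosity nu and the rate epsilon at which the cascade delivers energy for dissipation:

\[
\eta = \left(\frac{\nu^{3}}{\varepsilon}\right)^{1/4}.
\]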

In studying a turbulent flow, DNS computes activity at the Kolmogorov scale and may proceed into the lower levels of the cascade. It cannot go far, because the sizes of the turbulent eddies span several orders of magnitude, a range that cannot be captured on computational grids of realistic size. Still, DNS is the method of choice for studies of transition to turbulence, for which it can predict the onset. Such simulations directly reproduce the small disturbances within a laminar flow that grow to produce turbulence, capturing them when they first appear and making it possible to follow their growth. DNS is very computationally intensive and remains far from ready for use with engineering problems. Even so, it stands today as an active topic for research.
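The grid-size difficulty can be made quantitative with the standard estimate: the ratio of the largest eddies, of size L, to the Kolmogorov scale grows as the three-quarters power of the Reynolds number, so a three-dimensional grid that resolves the full cascade needs a number of points growing as the 9/4 power of Re, and the accompanying time steps push the total work toward Re cubed:

\[
\frac{L}{\eta} \sim Re^{3/4}, \qquad
N_{\text{grid}} \sim \left(Re^{3/4}\right)^{3} = Re^{9/4}.
\]

At flight Reynolds numbers this estimate alone explains why DNS remains a research tool rather than an engineering one.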

LES is farther along in development. It directly simulates the large energy-bearing eddies and goes onward into the upper levels of the cascade. Because its computations do not capture the complete physics of turbulence, LES continues to rely on turbulence models to treat the energy flow down the cascade along with the Kolmogorov-scale dissipation. But in contrast to the turbulence models of present-day codes, those of LES have a simple character that applies across a broad range of flows. In addition, their errors have limited consequence for a flow as a whole, in an inlet or combustor under study, because LES accurately captures the physics of the large eddies and therefore removes errors in their modeling at the outset.81
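The simple character of these models can be seen in the Smagorinsky closure, the oldest subgrid-scale model in LES. The sketch below is offered only as a representative illustration, not as the particular model used in the work described in this chapter; the constant Cs and the uniform grid are assumptions made for the example. The model forms an eddy viscosity from the resolved strain rate and the grid (filter) width:

```python
# A minimal sketch of the Smagorinsky subgrid-scale model, given as a
# representative example of an LES closure; the constant Cs and the uniform
# grid spacing are illustrative assumptions, not details taken from the
# codes discussed in the text.
import numpy as np

def smagorinsky_eddy_viscosity(u, v, w, dx, Cs=0.17):
    """Eddy viscosity nu_t = (Cs * dx)**2 * |S| on a uniform grid.

    u, v, w : 3-D arrays holding the resolved velocity components
    dx      : grid spacing, used as the filter width
    Cs      : Smagorinsky constant (values near 0.1-0.2 are typical)
    """
    # Gradients of each velocity component along each axis (centered differences).
    grads = [np.gradient(comp, dx) for comp in (u, v, w)]  # grads[i][j] = du_i/dx_j

    # |S| = sqrt(2 S_ij S_ij), with S_ij the resolved strain-rate tensor.
    s2 = np.zeros_like(u)
    for i in range(3):
        for j in range(3):
            s_ij = 0.5 * (grads[i][j] + grads[j][i])
            s2 += 2.0 * s_ij * s_ij
    return (Cs * dx) ** 2 * np.sqrt(s2)

if __name__ == "__main__":
    # Toy usage: a random "resolved" field on a 32^3 grid with 1 cm spacing.
    rng = np.random.default_rng(0)
    u, v, w = (rng.standard_normal((32, 32, 32)) for _ in range(3))
    nu_t = smagorinsky_eddy_viscosity(u, v, w, dx=0.01)
    print("mean eddy viscosity:", nu_t.mean())
```

Modern LES codes often go a step further and compute the constant dynamically from the resolved field, but the closure retains this same compact form.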

The first LES computations were published in 1970 by James Deardorff of the National Center for Atmospheric Research.82 Dean Chapman, Director of Astronautics at NASA-Ames, gave a detailed review of CFD in the 1979 AIAA Dryden Lectureship in Research, taking note of the accomplishments and prospects of LES.83 However, the limits of computers restricted the development of this field. More than a decade later Luigi Martinelli of Princeton University, a colleague of Antony Jameson, who had established himself as a leading writer of flow codes, declared that "it would be very nice if we could run a large-eddy simulation on a full three-dimensional configuration, even a wing." Large eddies were being simulated only for simple cases such as flow in channels and over flat plates, and even then the computations were taking as long as 100 hours on a Cray supercomputer.84

Since 1995, however, the Center for Turbulence Research has come to the forefront as a major institution where LES is being developed for use as an engineering tool. It is part of Stanford University and maintains close ties both with NASA-Ames and with Lawrence Livermore National Laboratory. At this center, Kenneth Jansen published LES studies of flow over a wing in 1995 and 1996, treating a NACA 4412 airfoil at maximum lift.85 More recent work has used LES to study reacting flows within the combustor of an existing jet engine of Pratt & Whitney's PW6000 series. The LES computation found a mean pressure drop across the injector of 4,588 pascals, which differs by only two percent from the observed value of 4,500 pascals. This compares with a value of 5,660 pascals calculated using a Reynolds-averaged Navier-Stokes code, an error of 26 percent, roughly an order of magnitude larger.86
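The quoted percentages follow directly from the reported pressure drops:

\[
\frac{4{,}588-4{,}500}{4{,}500} \approx 2\,\%, \qquad
\frac{5{,}660-4{,}500}{4{,}500} \approx 26\,\%.
\]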

Because LES computes turbulence from first principles, by solving the Navier-Stokes equations on a very fine computational grid, it holds high promise as a means for overcoming the limits of ground testing in shock tunnels at high Mach. The advent of LES suggests that it indeed may become possible to compute one's way to orbit, obtaining accurate results even for such demanding problems as flow in a scramjet that is flying at Mach 17.

Parviz Moin, director of the Stanford center, cautions that such flows introduce shock waves, which do not appear in subsonic engines such as the PW6000 series and are difficult to treat with currently available methods of LES. But his colleague Heinz Pitsch anticipates rapid progress. He predicted in 2003 that LES would first be applied to scramjets in university research, perhaps as early as 2005, and that by 2010 "LES will become the state of the art and will become the method of choice" for engineering problems, as it emerges from universities and enters the mainstream of CFD.87