A Time of Turbulence

This dread and darkness of the mind cannot be dispelled by sunbeams, the shining shafts of day, but only by an understanding of the outward form and inner working of nature…

First, then, the reason why the blue expanses of heaven are shaken by thunder…

As for lightning, it is caused when many seeds of fire have been squeezed out…

The formation of clouds is due to the sudden coalescence…

—Lucretius, On the Nature of Things

Lucretius sought rational, deterministic explanations for the weather.

These turned out to be wrong, but one suspects that the Roman philosopher may have guessed this for himself. He wrote that it was better to venture on an incorrect rational explanation than to submit to superstition: no sacrifices for him to propitiate the gods. And no sacrifices, except of time and effort, for those who during the past hundred years or so have wrestled to turn meteorology into a science.

For most people—farmers, sailors, or those of us going about our ordinary business—meteorology means and has always meant the weather forecast: the difference between heading for the golf course or curling up at home with a good book, between planting crops or waiting, and ultimately for some the difference between life and death. Those forecasts, dispensed in a few minutes on nightly news broadcasts, rest on the integration of a staggering amount of mathematics, physics, engineering, and computer science. In the first century B.C.E., while blending his own ideas with the philosophy of Epicurus and turning the whole into verse, Lucretius was at a considerable disadvantage.

Only in the nineteenth century did the modern era of weather forecasting begin. The introduction of the telegraph allowed observers to communicate to those at distant points what weather was coming their way. Such timely reporting also allowed meteorologists to plot weather maps and to develop the concept of storm fronts and cyclones. From the 1920s, balloon-borne radiosondes collected readings of temperature, wind speed, pressure, and moisture content, improving knowledge of conditions at altitudes in the lower atmosphere. Later, in the 1950s and 1960s, scientists took the important step of incorporating knowledge of the upper atmosphere into their understanding of meteorological conditions in the lower atmosphere; that is, they explored how the upper atmosphere affects weather at the surface.

But until the middle of the twentieth century, meteorology was only slowly breaking free of its ancient reliance on folklore and superstition. It was still more of an art than a science. Then came computers, mathematical modeling of atmospheric behavior, and weather predictions based on computer models. Gradually, it became possible to combine and manipulate observations from many different sources—from ocean buoys to Doppler radar and satellites.

Weather satellites inserted themselves into this history as best they could—not always felicitously. They were a technology in which some in the 1950s intuitively saw promise because of the unique bird’s-eye view from space, but it was only in the early 1980s that the advocates of satellite meteorology succeeded in winning widespread acceptance from the meteorological community.

In the very earliest days of satellite meteorology, a few names stand out in what was a tiny, intertwined community. The first are William Kellogg and Stanley Greenfield, who in 1951, while at the RAND Corporation (consultants to the Air Force), published the first feasibility study on weather satellites. Then came Bill Stroud and Verner Suomi, who competed to have their experiments launched on one of the satellites of the International Geophysical Year. Each, after vicissitudes, flew an experiment. Stroud’s failed because the Vanguard satellite that carried it into space was precessing wildly. Stroud went on to head NASA’s early meteorological work at the Goddard Space Flight Center and to argue the case for satellite meteorology at congressional hearings. Suomi’s satellite produced data, and he remained in the trenches of science and engineering, making frequent forthright forays into the policy world both nationally and internationally.

There were also Harry Wexler and Sig Fritz from what was then called the Weather Bureau. Wexler, who died in the 1960s, is someone whose name in this context is often forgotten, but as chief scientist of the Weather Bureau and an active participant in the committees planning the IGY, he was an important supporter of satellite meteorology. He was one of the scientists arguing persuasively in the face of Merle Tuve’s doubts that the IGY should include a satellite program. And Wexler was a staunch ally of a belated attempt by Verner Suomi to participate in the IGY, drumming up support for Suomi from eminent meteorologists like Kellogg at RAND.

Fritz worked for Wexler. When the Weather Bureau set up a satellite service, Fritz was its first employee. He was assigned office space in a cleaned-out broom cupboard. There, undaunted by the Vanguard failures and the modesty of his office space, Fritz worked with NASA on the first American weather satellite—TIROS. Both Wexler and Fritz were consultants for Verner Suomi’s IGY experiment.

Fritz recruited Dave Johnson,[9] who, like Suomi, became an outspoken proponent of satellite meteorology. Johnson eventually headed the satellite division of what, after several bureaucratic incarnations, was to become the National Oceanic and Atmospheric Administration.

Except for Kellogg and Greenfield, these men worked in the civilian world but also made forays into the “black” world of defense projects, namely the Air Force’s Defense Meteorological Satellite Program. The Air Force was an important player in the history of satellite meteorology, developing both engineering and analytical methods for interpreting satellite imagery. And the participation of people like Johnson in both worlds provided a conduit, albeit of limited capacity, for technology transfer from military to civilian satellites. The story of this important part of the history of satellite meteorology—the way that the defense and civilian worlds intermixed—will have to wait until all the relevant documents are declassified.

Despite the limitations imposed by not having a full understanding of the interplay between civilian and defense projects, some broad aspects of the history of satellite meteorology are clear. It is a more complicated story than that of satellite navigation, mainly because it is the story of a technology being developed for a field that was still transforming itself from art to science.

One of the most outspoken and energetic participants in the field’s history was Verner Suomi, of the University of Wisconsin in Madison. Some have called him the father of satellite meteorology.

In 1992, Dave Johnson, then working for the National Research Council of the National Academy of Sciences, recalled a meeting of the world’s leading meteorologists in 1967 when they were planning an international effort, known as the Global Atmosphere Research Program, to study the atmosphere. GARP eventually got underway in the late 1970s. Suomi’s task was to summarize the specifications that weather satellites would have to meet in order to fulfill GARP’s research goals. Johnson said: “We threatened to lock Vern in a room and not to let him have food or drink until he’d written everything down. We didn’t, of course, but he hated writing, and we had to keep an eye on him.”

Suomi’s colleagues were wise to put pressure on him. During late 1963 and early 1964, when Suomi spent a year in Washington, D.C., as chief scientist of the Weather Bureau, he claims to have written only four memos—which may be the all-time minimalist record for a bureaucrat.

One of GARP’s roles was to set research priorities given what were then the comparatively new technologies of high-speed computing, mathematical modeling, and satellites. Those priorities give a sense of the immensity of the task facing meteorologists.

The priorities were:

• Atmospheric composition and structure;

• Solar and other external influences on the earth’s atmosphere;

• Interaction between the upper and lower atmosphere;

• Interaction between the earth’s surface and the atmosphere;

• General circulation and budgets of energy, momentum, and water vapor;

• Cloud and precipitation physics;

• Atmospheric pollution;

• Weather prediction;

• Modification of weather and climate (no longer popular);

• Research in sensors and measuring techniques.

A study of these topics would need the “observation heaped on observation” that Sir Oliver Lodge spoke of in his lecture about Johannes Kepler: some observations were to be made by radar, others by airline pilots, weather balloons, and ground-based instruments. And some, of course, would be recorded by satellites.

Despite the vibrancy of meteorological research typified by plans for GARP, it was clear by 1967 that persuading the wider meteorological community—both line forecasters and many research meteorologists—to accept data from satellites would be an arduous task.

Many of the important steps to acceptance were choreographed, in part at least, by Suomi or Johnson and the groups that they headed. Neither man was shy in his advocacy of the technology. Johnson, in fact, threatened on one occasion to “blow his stack” with his boss, who Johnson felt was hostile to satellite data. None of the advocates of satellites could afford too many niceties. The money spent on weather satellites prompted resentment from many. And there were reservations and criticisms about satellite meteorology.

Part of the opposition lay, as always, in suspicion of a new technology. But part of it was due to the technology’s acknowledged limitations, which were (and are) imposed by the nature of satellite observations. Satellites do not directly measure the meteorological parameters—temperatures, pressures, wind speeds, and moisture contents at as many latitudes, longitudes, and altitudes as possible—that are essential for computer models and any quantitative predictive understanding of the atmosphere’s behavior. Instead, satellites “see” visible and infrared radiation welling up from the earth. Meteorologists thus have either images or radiometric measurements as their raw data, and from these they must infer quantitative meteorological parameters. The inferences are not easy to draw. They call for considerable knowledge of atmospheric physics and chemistry and rely on clever mathematical manipulations of the equations describing atmospheric behavior.

Images rather than radiometric measurements came first in the history of meteorology satellites. Kellogg’s and Greenfield’s study of satellites for “weather reconnaissance,” which was carried out before numerical weather prediction had become central to the future of weather forecasting, envisaged that spacecraft would carry still cameras aloft. These would photograph cloud cover, and meteorologists would then study the cloud types and distribution in a qualitative attempt to gain insight into atmospheric behavior and thus improve weather forecasting. In the course of their study, Kellogg and Greenfield posed some of the important questions that would preoccupy early satellite meteorologists. These were:

• How could you tell which bit of the earth the camera was looking at and thus where the cloud cover was?

• How could you tell what type of clouds you were looking at and what their altitudes and thicknesses were, and thus what significance they had to a developing weather system?

• How could you get the information to line forecasters in a timely fashion? It would not be much use telling a ship that there had been an eighty percent chance of a storm yesterday.

The launch of the first weather satellite—TIROS I (Television Infrared Observation Satellite)—in April 1960 confirmed that these were all tough and legitimate concerns.

Nevertheless, TIROS showed for the first time what global weather patterns looked like. The promise inherent in the technology was there for all to see in grainy black and white. But it convinced only those who already believed. Succeeding satellites in the TIROS and improved TIROS series carried gradually more sophisticated instruments, each of which slowly took satellite meteorology closer to wide acceptance.

One such class of instruments—known as sounders—was first developed by Johnson’s group in the 1960s. Sounders measure temperature and, more recently, the moisture content of the atmosphere at different altitudes and in places where direct measurements with, say, a thermometer are not possible—over oceans, for example, where much of the weather develops. They are important for near-term predictions of severe weather such as thunderstorms.

The sounder relies on inferences made from radiometric readings at different frequency ranges in the infrared portion of the spectrum and on its operators’ detailed knowledge of atmospheric chemistry and physics. Inevitably, there is greater inaccuracy in the values of temperature and moisture content taken from satellite sounders than from direct measurements of the same parameters. And so modelers have, for the most part, not liked to rely on data from satellite sounders. A notable exception is the European Centre for Medium-Range Weather Forecasts, which has taken the lead in finding ways to extract from satellites the information that is needed for computer models. By the early 1990s, the center was saying that satellite soundings had extended useful predictions from five and a half to seven days in the Northern Hemisphere and from three and a half to five days in the Southern Hemisphere.

While Johnson’s group developed the first sounder, Suomi came up with the idea for the spin-scan camera, which flew for the first time in 1966. Although this class of camera was to become a crucial meteorological instrument, Suomi was told by a colleague ten years after it first flew that, if submitted as part of a Ph.D. thesis, it would not merit a doctorate.

Thus satellites were not entirely welcome participants in meteorology. Far more welcome were the new high-speed computers and John von Neumann’s conviction that with sufficient computational power one could model the atmosphere’s behavior and predict the weather.

The idea for such numerical weather prediction was first proposed in 1922 by Lewis Richardson. He tested his idea by feeding meteorological data that had been collected at the beginning of International Balloon Day in May 1910 into mathematical models describing atmospheric behavior. He compared his numerical predictions with the data collected during the day and found no agreement. Discouraged, Richardson concluded that to predict the weather numerically one would need 64,000 mathematicians who would not be able to predict weather conditions for more than seconds ahead; they would, in effect, be “calculating the weather as it happened.”

In the thirty years following Richardson’s depressing experience, much changed, including improved understanding of the physics of the atmosphere and mathematical analysis of its behavior. Thus, when the technology of computing emerged, modelers set to work, weaving the basic physical laws into models mimicking the behavior of the atmosphere. And the computers took over the calculations. Initially, the models represented only surface events in small regions. Subsequently, modelers incorporated the influence of the upper atmosphere on weather at the surface.
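The kernel of such a model can be sketched in a few lines. The toy below is purely illustrative and comes from no production forecast system: it advances the one-dimensional advection equation, a standard building block of atmospheric models, with a simple upwind finite-difference step. The grid size, wind speed, and time step are arbitrary choices for the sketch; the stability limit on the time step (the CFL condition) is the same kind of constraint that doomed Richardson’s hand calculation to “calculating the weather as it happened.”

```python
# Toy 1-D advection model: du/dt + c * du/dx = 0 on a periodic domain,
# stepped with the first-order upwind finite-difference scheme.
# All numbers are illustrative, not drawn from any real forecast model.

def step(u, c, dt, dx):
    """Advance the field one time step with the upwind scheme."""
    cfl = c * dt / dx
    assert 0 <= cfl <= 1, "CFL condition violated: reduce dt"
    # u_new[i] = u[i] - cfl * (u[i] - u[i-1]); index -1 wraps (periodic).
    return [u[i] - cfl * (u[i] - u[i - 1]) for i in range(len(u))]

def run(u, c, dt, dx, nsteps):
    """March the model forward nsteps time steps."""
    for _ in range(nsteps):
        u = step(u, c, dt, dx)
    return u

if __name__ == "__main__":
    n, dx = 100, 1.0          # 100 grid points, one unit apart
    c, dt = 1.0, 0.5          # wind speed and time step (CFL = 0.5)
    u0 = [1.0 if 10 <= i < 20 else 0.0 for i in range(n)]  # a "weather blob"
    u = run(u0, c, dt, dx, 200)
    # The blob drifts downwind; the scheme conserves the total exactly.
    print(round(sum(u), 6))
```

Real models couple thousands of such equations in three dimensions, which is why the arrival of high-speed computers, not new physics alone, made numerical prediction practical.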

There are now many models—global, hemispheric, regional. Some are mathematical behemoths constructed from thousands of equations. Some give short-term weather predictions, while others look up to two weeks ahead—so-called medium-term forecasts. Yet others make forecasts, extremely controversial ones, far into the future as climatologists explore climatic change.

All, however, devour numbers—values of temperature, pressure, etc. And because the early satellites did not supply the quantitative data that the models required, there was tension between computer modelers and satellite advocates. Both groups, after all, were seeking scarce public funds for expensive technologies.

In 1969, ten years after the first meteorological payload was launched, the National Academy of Sciences wrote, “… numerical weather prediction techniques demand quantitative inputs, and until weather satellites are able to generate these, their use in modern meteorology will be at best supplementary.”

Nearly thirty years later, the technologies have become more compatible and weather satellites have obtained a secure place in meteorology. The Air Force, NOAA, NASA, and academic groups like Suomi’s at Wisconsin have done what they can to extract meteorological values from unprepossessing streams of satellite data and, importantly, to make this information compatible with observations from weather balloons, radar, and surface instruments. Yet, says Johnson, considerably more information could be extracted from the meteorological satellite data.

Weather satellites gather their data—images and soundings—from two different types of orbit: polar and geostationary. Like Transit, a weather satellite in polar orbit follows a path that takes it over the poles on each orbit, while the earth turns through a certain number of degrees of longitude in the time it takes the satellite to complete one orbit. Thus polar-orbiting satellites, if they have a wide enough field of view to either side of the subsatellite point, provide global coverage. Their altitude, and thus how long they take to complete an orbit, is chosen so that the satellite will “see” all parts of the earth once every twelve hours.

To be truly useful, however, weather satellites need to occupy a special kind of polar orbit, known as sun-synchronous. Sun-synchronous orbits are chosen so that the satellite maintains the same angular relationship to the sun, which means that the satellite will be above the same subsatellite point at a given time of day. Its readings are then consistent from day to day. The timing of the orbit is chosen so that the satellite readings are available for the computer prediction models, which are run twice a day.

If the orbit is to maintain the same angular relationship to the sun throughout the year, its plane cannot remain fixed in space; it must rotate about the earth’s axis by roughly one degree each day. Orbits are not, of course, fixed. They respond to the earth’s gravitational anomalies. Mission planners achieve sun-synchronous orbits by exploiting the known effects of the earth’s gravitational field. They select inclinations and altitudes that result in the orbit moving in such a way that the satellite’s sun-synchronous position is maintained. The consequences of the natural world that the Transit team had to understand and to compensate for can thus be exploited usefully by those planning the orbits of weather satellites.
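The selection of inclination and altitude can be illustrated with the standard first-order formula for the nodal precession caused by the earth’s equatorial bulge (the J2 term of the gravity field). The sketch below is not from the mission planners’ actual tools; it simply solves, for an assumed circular orbit, for the inclination at which the bulge swings the orbital plane around the earth exactly once a year:

```python
import math

# Sun-synchronous inclination for a circular orbit, from the first-order
# J2 nodal-precession formula: dOmega/dt = -1.5 * J2 * (Re/a)^2 * n * cos(i).
MU = 398_600.4418   # km^3/s^2, earth's gravitational parameter
RE = 6378.137       # km, earth's equatorial radius
J2 = 1.08263e-3     # dimensionless measure of the equatorial bulge

def sun_sync_inclination(altitude_km):
    """Inclination (degrees) at which J2 precession carries the orbital
    plane once around the earth per year, keeping it sun-synchronous."""
    a = RE + altitude_km                 # semi-major axis of a circular orbit
    n = math.sqrt(MU / a**3)             # mean motion, rad/s
    year = 365.2422 * 86400.0            # seconds in a year
    required = 2 * math.pi / year        # precession rate needed, rad/s
    cos_i = -required / (1.5 * J2 * (RE / a)**2 * n)
    return math.degrees(math.acos(cos_i))

# A typical polar weather-satellite altitude of about 800 km calls for an
# inclination just under 99 degrees: slightly retrograde, passing near
# the poles, exactly as the polar orbits described above.
print(round(sun_sync_inclination(800.0), 1))
```

Note the negative cosine: the required inclination is always greater than 90 degrees, because only a retrograde orbit precesses eastward, the direction the earth moves around the sun.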

The laws of physics result, too, in the existence of the extremely useful geostationary orbit. A satellite at an altitude of about 36,000 kilometers takes twenty-four hours to complete an orbit. If the orbit has an inclination of zero degrees, that is, if the plane of its orbit coincides (more or less) with the plane of the equator, then the satellite remains above the same spot on the earth. Thus the satellite is, with respect to the earth, for all practical purposes stationary and can view the same third of the earth’s surface while the weather moves underneath it. Suomi’s spin-scan cameras were designed for this orbit. Geostationary orbits were also to prove of critical importance to communication satellites, and Suomi’s spin-scan camera was first launched aboard a satellite designed by one of the fathers of communication satellites—Harold Rosen.
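That altitude is not arbitrary; it falls straight out of Kepler’s third law. The short sketch below (illustrative only, not from the original text) solves for the orbit whose period matches one sidereal day, the earth’s true rotation period relative to the stars:

```python
import math

# Geostationary altitude from Kepler's third law: T^2 = 4*pi^2 * a^3 / mu,
# rearranged to a = (mu * T^2 / (4*pi^2)) ** (1/3).
MU = 398_600.4418        # km^3/s^2, earth's gravitational parameter
RE = 6378.137            # km, earth's equatorial radius
SIDEREAL_DAY = 86_164.1  # s, one earth rotation relative to the stars

a = (MU * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
altitude = a - RE        # height above the surface, km
print(round(altitude))   # about 35,800 km -- the chapter's "about 36,000"
```

Using the sidereal day rather than the 24-hour solar day matters: a satellite with a 24-hour period would drift about a degree a day relative to the ground.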

While they are crucial to the beginnings of satellite meteorology, the issues mentioned so far scarcely scratch the surface of the history of weather satellites. There was also an important battle in the early 1960s between NASA and NOAA’s forerunner about the technology of the satellites to replace TIROS and about who would pay for operational satellites. Finally, an improved version of TIROS was selected, and NASA developed the alternative proposal, a more experimental satellite series dubbed Nimbus.

White recalls, “On the same day I was sworn in as chief of the Weather Bureau, Herbert Hollomon, the assistant secretary for science and technology, took me to one side and said we have to make a decision about Nimbus. The issue was would we be willing to use Nimbus as our operational satellite. The cost would have been two to three times the cost of using TIROS. This was important to the weather satellite program. If we had followed Nimbus, the cost would have skyrocketed, and maybe we wouldn’t have got the money from Congress. We decided on the basis of cost to go with TIROS. I think that was the right step.”

Even from a technical standpoint, the history is not straightforward. There was no single event, such as Guier and Weiffenbach’s tuning into Sputnik’s signal, from which the story unfolds. Nor was there one clearly defined technical goal such as that of the Transit program—locate position with a CEP of one tenth of a mile. All of the physics and engineering that went into the Transit program were harnessed to meet that goal and were refined to enable the subsequent improvements in the system. In the field of meteorology, satellites were just one tool wielded to learn more about the atmosphere, and no one really knew what needed to be learned, as is apparent from the breadth of the Global Atmosphere Research Program’s aims. It is, therefore, not surprising that meteorology satellites took longer than navigation spacecraft to find acceptance.

In further contrast to Transit, there was no single group, like the Navy’s Special Projects Office, that wanted weather satellite technology. Even the Weather Bureau, outside of Johnson’s group, was unenthusiastic. Further, no single group, like the Applied Physics Laboratory’s Transit team, was central to the development of weather satellites. True, the Air Force, backed by sundry laboratories and consultants such as the RAND Corporation, was interested from early days, but once the IGY’s satellite program was announced, more scientists became involved, including Verner Suomi and Bill Stroud. After the launch of Sputnik I, the Advanced Research Projects Agency sponsored the TIROS program, which NASA took over when that agency opened its doors in October 1958. Industry, including companies like RCA, took a hand, and, of course, so did the Weather Bureau.

If the professionals were slow to accept meteorology satellites, the lay audience was intrigued by the potential of a spacecraft’s global view, and popular articles appeared in the newspapers of the 1950s speculating on the importance of satellites for weather forecasting. They pointed out that only satellites would be able to provide comprehensive and frequent readings over the approximately seventy-five percent of the earth’s surface that is covered by ocean.

Since the first TIROS went into orbit, the United States has launched more than one hundred meteorology satellites. Now the countries of the former Soviet Union, Europe, Japan, the People’s Republic of China, and India maintain meteorology satellites. All contribute to the global economy by improving forecasts for agriculture and transport, and to safety by monitoring severe weather such as hurricanes and allowing more timely and accurate predictions of where they will make landfall. It is unlikely, in the U.S. at least, that a hurricane will ever kill more than 6,000 people, as did the hurricane that struck Galveston, Texas, in 1900. It has taken more than three decades, but weather satellites are now living up to the popular expectations of the 1950s.