Dreams of Other Worlds

A Long and Bumpy Road

In 1946, Yale astronomy professor Lyman Spitzer wrote a paper detailing the advantages of an Earth-orbiting telescope for deep observations of the universe.4 The concept had been floated even earlier, in 1923, by Hermann Oberth, one of the pioneers of modern rocketry. In 1962, the U.S. National Academy of Sciences gave its imprimatur to the idea, and a few years later Spitzer was appointed chair of a committee to flesh out the scientific motivation for a space observatory. The young space agency NASA was to provide the launch vehicle and support for the mission. NASA cut its teeth with the Orbiting Astronomical Observatory missions from 1966 to 1972.5 They demonstrated the great potential of space astronomy, but also the risks—two of the four missions failed. We've already encountered Spitzer, since he gave his name to NASA's infrared Great Observatory. Spitzer worked diligently to convince his colleagues around the country of the benefits of such a risky and expensive undertaking as an orbiting telescope.

After the National Academy of Sciences reiterated its support for a 3-meter telescope in space in 1969, NASA started design studies. But the estimated cost was $400–500 million, and Congress balked, denying funding in 1975. Astronomers regrouped, NASA enlisted the European Space Agency as a partner, and the telescope shrank to 2.4 meters. With these changes, and a price tag of $200 million, Congress approved funding in 1977, and the launch was set for 1983. More delays followed. Making the primary mirror was very challenging, and the entire optical assembly wasn't put together until 1984, by which time launch had been pushed back to 1986. The whole project was thrown into limbo by the tragic loss of the Space Shuttle Challenger in January 1986. When shuttle flights finally resumed, there was a logjam of missions, so another couple of years slipped by.6

Hubble was launched on April 24, 1990, by the shuttle Discovery. A few weeks after the systems went live and were checked out, euphoria turned to dismay as scientists examined the first images and saw they were slightly blurred. The telescope could still do science, but some of the original goals were compromised. Instead of being focused into a sharp point, some of the light was smeared into a large and ugly halo. This symptom indicated spherical aberration, and further in-flight tests confirmed that the primary mirror had an incorrect shape. It was too flat near the edges by a tiny amount, about one-fiftieth of the width of a human hair. Such was the intended precision of Hubble's optics that this tiny flaw made for poor images.7 Hubble's mirror was still the most precise mirror ever made, but it was precisely wrong.

The spherical aberration problem may be ancient history now, but at the time it was a public relations nightmare for NASA. Its flagship mission could only take blurry images. Commentators and talk show hosts lampooned the telescope, and David Letterman presented a Top Ten list of "excuses" for the problem on his late-night show.8 More seriously, the episode became fodder for case studies in business schools around the country. The fundamental error was the result of poor management, not poor engineering. The Space Telescope project had two primary contractors: Perkin-Elmer, who built the optical telescope assembly, and Lockheed, who built the support systems for the telescope. There was also a network of two dozen secondary contractors from the aerospace industry. The mission was jointly executed by Marshall Space Flight Center and Goddard Space Flight Center, whose relationship involved rivalry and was not always harmonious, with overall supervision from NASA Headquarters. Complexity of this degree can be a recipe for disaster without tight and transparent management and clear communication among the best technical experts.

When the primary mirror was being ground and polished in the lab by Perkin-Elmer, a small optical device was used to test the shape of the mirror. Because two of the elements in this device were mispositioned by 1.3 millimeters, the mirror was made with the wrong shape. This mistake was then compounded. Two additional tests carried out by Perkin-Elmer gave an indication of the problem, but those results were discounted as being flawed! No completely independent test of the primary mirror was required by NASA, and the entire assembled telescope was not tested before launch, because the project was under budget pressure. Also, NASA managers didn't have their best optical scientists and engineers looking at the test results as they were collected. The agency was embarrassed and humbled by the failure. Its official investigation put it succinctly: "Reliance on a single test method was a process which was clearly vulnerable to simple error."9 In this way a multi-billion-dollar mission was hamstrung by a millimeter-level mistake and the failure to do some relatively cheap tests. In the old English idiom: penny wise, pound foolish. The propagation of a small problem into a huge one recalls another English aphorism, in which a lost horseshoe stops the transmission of a message and the result decides a critical battle: for want of a nail, the kingdom was lost.


Awareness of the size and age of the universe is hard-won knowledge that has taxed scientists for the past 2,500 years. To ancient cultures, the sky was a proximate canopy that circled overhead, and there was no sense of the vast distance to the stars, let alone the idea that something might lie beyond those pinpoints of light. The ancient Greeks were the first civilization to spawn a class of philosopher-scientists, who applied logic and mathematics to their observations of the sky.

Cosmology has its root in the Greek idea of "cosmos," an orderly and harmonious system. In the Greek view, the antithetical concept of "chaos" referred to the initial state of the universe, which was darkness or an abyss.1 Thus, order emerged from disorder when the universe was born. Pythagoras is believed to be the first to use the term cosmos, and the first to say that the universe was based on mathematics and numbers, although in truth so little is known about Pythagoras and his followers that direct attribution of these ideas is impossible. Pythagoras is also credited with the "harmony of the spheres," a semi-mystical, semi-mathematical idea that simple numerical relationships, or harmonics, were manifested by celestial bodies, with the overall result having commonality with music. Pythagoreans didn't think the music of the spheres was literally audible.2

Aristotle's geocentric cosmology dominated Western thought for nearly two millennia, but Aristarchus developed a heliocentric cosmology that implied large distances to the stars, since no parallax shift is observed from one season to another. Mapping the stars in the third dimension didn't become possible until parallax was measured in the nineteenth century, giving William Herschel an inkling of the extent of the system of stars that we inhabit. Twin foundational discoveries by Edwin Hubble early in the twentieth century—the distances to the nebulae, or galaxies, and the universal recession velocities of galaxies—set the stage for modern cosmology. By the mid-twentieth century, the universe was known to be billions of years old and billions of light-years in extent.

A Gentle Astrophysical Giant

In 1923, Wilhelm Röntgen died as one of the most celebrated physicists of his time. A modest man, he had declined to take out a patent on his discovery, wanting X-rays to be used for the benefit of humankind. He also refused to name them after himself and donated the money he won from his Nobel Prize to his university.

That same year Subrahmanyan Chandrasekhar started high school in a small town in southern India. Chandra, as he was universally known, would grow into another great but modest scientific figure. Chandra means "moon" or "luminous" in Sanskrit. He was one of nine children, and although his family had modest means, they valued education; he was schooled at home because the local public schools were inferior and his parents couldn't afford a private school. His family hoped Chandra would follow his father into government service, but he was inspired by science, and his mother supported his goal. In addition, he had a notable role model in his uncle, C. V. Raman, who went on to win the Nobel Prize in Physics in 1930 for the discovery of the inelastic scattering of light by molecules.18

Chandra's family moved to Madras, and he started at the university there, but he was offered a scholarship to study at the University of Cambridge in England, which he accepted. He would never live in India again. At Cambridge he studied with, and held his own among, some of the great scholars of the day. As a young man he fell into a controversy with Arthur Eddington, at the time the foremost stellar theorist in the world, which upset him greatly. Eddington refused to accept Chandra's calculations on how a star might continue to collapse when it had no nuclear reactions to keep it puffed up. Chandra was a gentle man, unfailingly polite and courteous. In part due to this conflict, he chose to accept a position at the University of Chicago, where he spent the bulk of his career.

His name is associated with the Chandrasekhar limit, the theoretical upper bound on the mass of a white dwarf star, about 1.4 times the Sun's mass. Above this limit, the force of gravity overcomes all resistance due to inter-particle forces, and a stellar corpse will collapse into an extraordinary dark object: either a neutron star or, if there's enough mass, a black hole. In the 1970s, he spent several years developing the detailed theory of black holes. Chandra was amazingly productive and diverse in his scientific interests. He wrote more than four hundred research papers and ten books, each on a different topic in astrophysics. For nineteen years he was the editor-in-chief of The Astrophysical Journal, guiding it from a modest local publication to the international flagship journal in astronomy. He mentored more than fifty graduate students while at the University of Chicago, many of whom went on to be leaders in their fields. In 1983, he was honored with the Nobel Prize in Physics for his work on the theory of stellar structure and evolution.
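The text quotes only the numerical value of the limit. As a rough sketch (standard textbook scaling, not taken from this book), the limiting mass comes from balancing gravity against the pressure of relativistic degenerate electrons, and it depends only on fundamental constants:

```latex
% Chandrasekhar mass, up to a dimensionless prefactor of order unity.
% Standard notation, quoted from textbook physics rather than from this book:
% \mu_e = mean molecular weight per electron (about 2 for helium, carbon, oxygen),
% m_H = mass of the hydrogen atom.
M_{\mathrm{Ch}} \;\sim\; \left( \frac{\hbar c}{G} \right)^{3/2} \frac{1}{(\mu_e m_H)^2}
\;\approx\; 1.4\, M_{\odot} \quad \text{for } \mu_e \approx 2 .
```

Because the combination $(\hbar c / G)^{3/2} / m_H^2$ involves only constants of nature, the limit is universal: every white dwarf of a given composition has the same maximum mass.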

Through his long career, Chandra watched the infant field of X-ray astronomy mature. The telescopes flown in sounding rockets in the early 1960s were no bigger than Galileo's spyglass. Just before its 1999 launch, NASA's Advanced X-ray Astrophysics Facility was renamed the Chandra X-ray Observatory (CXO) in honor of this giant of twentieth-century astrophysics.19 In a span of less than four decades, Chandra improved on the sensitivity of the early sounding rockets by a factor of 100 million.20 The same sensitivity gain for optical telescopes, from Galileo's best device to the Hubble Space Telescope, took four hundred years—ten times longer!

Probing the Very Early Universe

The landscape in cosmology has shifted. Scientists no longer worry about the validity of the big bang model. It rests on a sturdy tripod of evidence. One leg is the expansion of the universe, as traced by the recession of galaxies. Another leg is the cosmic abundance of light elements, forged in the first three minutes when the temperature was about a billion degrees. The third is the microwave background radiation. WMAP has advanced the diagnostic power of this third piece of evidence to an unprecedented level. The big bang model has so far passed all tests with flying colors.

The frontier of cosmology involves gaining a better physical understanding of dark matter and dark energy, and pushing tests of the big bang to earlier eras. WMAP has "weighed" dark matter with better accuracy than ever before, and it has shown that dark energy is an inherent property of space-time itself (as with Einstein's "cosmological constant") rather than a particle or field existing in space-time. The current frontier is the epoch of inflation, an incredible trillion trillion trillionth of a second after the big bang. Inflation was motivated by the unexpected flatness and smoothness of the universe, plus the lack of space-time glitches, like monopoles and strings, surviving to the present day.

Inflation got early support from WMAP data showing that the strength of temperature variations was independent of scale. But in current theory, "inflation" is not one thing; it's an umbrella for a bewildering array of ideas about the infant universe. Since it refers to a time when all forces of nature except gravity were unified, inflation models involve speculation as to the fundamental nature of matter. Superstrings are one such concept for matter, but almost all the theories have to incorporate gravity in some way, so the theory of the very early universe is proximate to the search for a "theory of everything."

In 2003, the WMAP team announced a new measurement resulting from the polarization measurements and increasingly refined temperature maps. The temperature variations are roughly equal in strength on all scales, but as the data improved, it became clear that they are not exactly so. The sense of this deviation was exactly as predicted by the favored inflation models, and different from the predictions of rival theories for the early expansion (exotic "cold big bang" models were also ruled out). In 2010, using WMAP's seven-year data, the team even managed to confirm the helium abundance from the big bang, and they put constraints on the number of neutrino species and other exotic particles.62 Most gratifyingly, the data confirmed the fundamental correctness of the model of temperature variations as resulting from acoustic oscillations. The piper's tune is better understood than ever before (plate 22).
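In standard cosmological notation (not used in the text), the "strength of variations versus scale" is encoded in the primordial power spectrum and its spectral index:

```latex
% Primordial power spectrum and spectral tilt. n_s = 1 corresponds to an
% exactly scale-invariant spectrum; simple inflation models predict n_s
% slightly below 1. The numerical value is the approximate WMAP estimate,
% quoted from memory rather than from the text.
P(k) \propto k^{\,n_s}, \qquad n_s = 1 \;\;\text{(scale-invariant)}, \qquad
n_s \approx 0.96 \;\;\text{(WMAP)} .
```

The measured tilt just below 1 is the "sense of the deviation" described above: slightly more power on large scales than in a perfectly scale-invariant spectrum, as the favored inflation models predict.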

Inflation is required by the data, and cosmologists are homing in on the correct model. Part of the inflation landscape is the fact that the physical universe—all that there is—is much larger than the visible universe—all we can see. Inflation also motivates the idea of the multiverse: parallel universes with wildly different properties that emerge from the quantum substrate that preceded the big bang.63 Our dreams of other worlds should now be expanded to encompass other universes filled with worlds. The ambition and scope of these theories is extraordinary, and it would undoubtedly have amazed Pythagoras to know how far we've taken his ideas of a universe based on mathematics and harmony.

A Telescope Rejuvenated

In fact, NASA had lost a big battle, but they weren’t yet ready to concede the war. With Hubble working at part strength, the agency immediately began planning to diagnose and correct the problem with the optics. Although a backup mirror was available, the cost of bringing the telescope down to Earth and re-launching it would have been exorbitant. It was just as well that the telescope had been built to be visited by the Space Shuttle and serviced by astronauts. The instruments sitting behind the telescope fit snugly into bays like dresser drawers and they could be pulled out and replaced with others of the same size.

It also helped that the mirror flaw was profound but relatively simple, and the challenge was reduced to designing components with exactly the same mistake but in the opposite sense, essentially giving the telescope prescription eyeglasses. The system designed to correct the optics was the brainchild of electrical engineer James Crocker, and it was called COSTAR, or Corrective Optics Space Telescope Axial Replacement.10 COSTAR was a delicate and complicated apparatus with more than five thousand parts that had to be positioned in the optical path to within a hair's breadth. Installing COSTAR raised the difficulty level of an already challenging first servicing mission, which was planned for late 1993. Seven astronauts spent thousands of hours training for the mission, learning to use nearly a hundred tools that had never been used in space before. They did a record five back-to-back space walks, each one grueling and dangerous, during which they replaced two instruments, installed new solar arrays, and replaced four gyros. This last fix was needed because of the disconcerting tendency of Hubble's gyros to fail at a high rate, in a way never seen in lab testing. Without working gyros, the telescope could not point at or lock onto a target. Before leaving, the crew boosted Hubble's altitude, since three years of drag in Earth's tenuous upper atmosphere had started to degrade the orbit.

The first servicing mission was a stunning success. It restored the imaging capability to the design spec and added new capabilities with a more modern camera. It also played significantly into the vigorous debate in the astronomy and space science community over the role of humans in space. NASA had always placed a strong bet that the public would be engaged by the idea of space as a place for us to work and eventually live. But after the success of Apollo, public interest and enthusiasm waned. (It even waned during the program; the final three planned Moon landings were scrapped.) Also, most scientists thought that it was cheaper and less dangerous to fly automated or robotic missions than to have astronauts service them. NASA's amazingly successful planetary missions from the 1970s to the 1990s were seen as evidence of the primacy of robotic spacecraft. Hubble was of course designed to be serviced by astronauts, and the often-unanticipated problems they were able to solve in orbit, coupled with the positive public response (and high TV ratings during the space walks, at least the early ones), persuaded many that the human presence was essential and inspirational.

More servicing missions followed. Each one rejuvenated the facility and kept it near the cutting edge of astronomy research. The second mission, in 1997, installed a sensitive new spectrograph and Hubble's first instrument designed to work in the infrared. Astronauts also upgraded the archaic onboard computers. The third mission, in 1999, was moved forward to deal with the vexing problem of failing gyros. Before the mission flew, the telescope lost a fourth gyro, essentially leaving it dead in the water.11 All six gyros were replaced, and the computer was upgraded to one with a blistering speed of 25 MHz and a capacious two megabytes of RAM (fifty times slower, and thousands of times less storage, than the average smartphone). The fourth mission, in 2002, replaced the last of the original instruments and also removed COSTAR, since it was no longer needed—each subsequent instrument has had its optics designed to compensate for the aberration of the primary mirror. The infrared instrument was revived, having run out of coolant two years early. New and better solar arrays were installed, along with a new power system, which caused some anxiety since installing it meant the telescope had to be completely powered down for the first time since launch. Except for its mirror, Hubble is reminiscent of the ship of Theseus, a story from antiquity in which every plank and piece of wood of a ship is replaced as it plies the seas.

The fifth and last servicing mission almost didn’t happen. Once again, the mission was affected by a Shuttle disaster, in this case the catastrophic loss of Columbia and crew in February 2003. NASA Administrator Sean O’Keefe decided that human repairs of the telescope were too risky and future Shuttle flights could only go to the safe harbor of the International Space Station. He also studied engineering reports and concluded that a robotic servicing mission was so difficult that it would most likely fail. At that point, Hubble was destined to die a natural death as gyros and data links failed.

When it was announced in January 2004 that the Hubble Space Telescope would be scrapped, the public's response was overwhelmingly in support of refurbishing the instrument in orbit. David DeVorkin and Robert Smith report that a "'Save the Hubble' movement sprang into life."12 Robert Zimmerman similarly comments: "The public response . . . was, to put it mildly, loud. Very soon, NASA was getting four hundred emails a day in protest, and over the next few weeks editorials and op-eds in dozens of newspapers across the United States came out against the decision."13 Zimmerman details the history, development, and deployment of the telescope as well as its more significant findings. He suggests that Hubble, more than any other telescope or major scientific instrument, has allowed humankind to explore what lies at the furthest depths of space, and that this is one reason for the widespread public sense of ownership of what some newspapers have called the "people's telescope."14 Not since Edwin Hubble did his pioneering work on cosmology with the 100-inch reflector at Mount Wilson Observatory has public interest in a particular instrument been so intense or pervasive.15

O'Keefe and other NASA officials were taken aback by the public response. They'd expected astronomers to lobby hard for another servicing mission, and they weren't surprised that astronauts weighed in, saying they'd signed up knowing the risks and they wanted to service the telescope. But it turned out that a large number of people felt attached to Hubble through its spectacular pictures and newsworthy discoveries. They felt clear affection for the facility, and since the taxpayer had indeed paid for it, the agency took notice; O'Keefe first reevaluated and then reversed his decision.16 The fifth servicing mission went off without a hitch in May 2009, leaving the telescope in better shape overall and with certain capabilities a hundred times more effective than in its original configuration.17 Two new instruments were installed, two others were fixed, and many other repairs were carried out, since it was agreed by everyone that this was the last time the telescope would be serviced (figure 11.1).


Figure 11.1. The Hubble Space Telescope is still in high demand as astronomy’s flagship facility, over two decades after it was launched. In large part, this is due to five servicing missions with the Space Shuttle, where astronauts performed extremely challenging space walks to replace and upgrade vital components, and install new state-of-the-art instruments (NASA/Johnson Space Center).

Hubble's continual rejuvenation is a major part of its scientific impact. The instruments built for the telescope are state-of-the-art, and competition for time on the telescope has consistently been so intense that only one in eight proposals is approved. All this comes with a hefty price tag. Estimating the cost of Hubble is difficult because of uncertainty over how much of the cost of the Shuttle launches and astronaut activities to assign to it, but a twenty-two-year price tag of $6 billion is probably not far from the mark. For reference, the budget was $400 million when construction started, and the cost at launch was $2.5 billion. Compared to the slightly larger 4-meter telescopes on the ground, Hubble generates fifteen times as many scientific citations (one crude measure of impact on a field) but costs one hundred times as much to operate and maintain.18 Regardless of its cost, the facility sets a very high bar for any subsequent space telescope. As Malcolm Longair, Emeritus Professor of Physics at the University of Cambridge and former chairman of the Space Telescope Science Institute Council, has observed: "The Hubble Space Telescope has undoubtedly had a greater public impact than any other space astronomy mission ever." He also notes that this small telescope's "images are not only beautiful, but are full of spectacular new science," much of which was unimagined by the astronomers who conceived and launched the instrument.19

A Beginning for the Universe

It was only in the most recent seconds of human existence, comparatively speaking, that the universe began to take shape in the human imagination. For the vast majority of the 200,000 years since the emergence of Homo sapiens, the depths of space were unfathomable. In just the last hundred years, humans began to discover the universe and develop a basic understanding of its mass and age. One key to unfolding our current view was the detection and mapping of the cosmic microwave background, the remnant light and heat of the big bang. The temperature of the vacuum of space is 2.725 K, a trace above absolute cold, exactly what the universe should have cooled to if it had expanded to its current size from a hot and dense initial state 13.8 billion years ago. NASA's COBE and WMAP spacecraft have mapped this signature of the moment when the abyss of space and everything in it came into being.

Prior to and throughout the first third of the twentieth century, most people, even most astronomers, simply assumed that the universe had always existed. Science historian Helge Kragh comments, "The notion of a universe of finite age was rarely considered and never seriously advocated."3 Astronomers knew very little about the depths of space before the 100-inch telescope at Mount Wilson Observatory near Pasadena became operational. At that time, they intensely debated whether the Milky Way comprised the entire universe, or whether the nebulae might lie far beyond our galaxy. On an October night in 1923, American astronomer Edwin Hubble, working with the 100-inch, then the largest telescope in the world, observed a variable star in M31, the Andromeda Nebula, which ultimately confirmed that it lay millions of light-years beyond the Milky Way. The announcement of Hubble's result in 1925 radically altered the scientific and public understanding of our place in space. Just a few years later, in 1929, Hubble and his assistant Milton Humason again rocked the world by reporting that remote galaxies were racing away from the Milky Way at 700 miles per second or faster. Their observations indicated that the universe was expanding, a seemingly preposterous idea that Albert Einstein himself initially refused to believe.

Given that relativity theory recognized space and time as inseparable, the Belgian astronomer and Jesuit priest Georges Lemaître interpreted Hubble and Humason's findings of the "runaway" galaxies as meaning only one thing. The universe itself was expanding, which in turn suggested that the universe must have been smaller, denser, and hotter in the distant past. Among Lemaître's many contributions to cosmology, three of his most simple and yet profound ideas were that the universe had a beginning, that both relativity and quantum theory were needed to explain this origin in terms of expanding space-time, and that Edwin Hubble's receding galaxies were evidence of this cosmic expansion.

Lemaître was the first to suggest that Hubble and Humason's redshifted nebulae indicated the expansion of space-time itself.4 In 1931, in the journal Nature, Lemaître offered a short proposal for what English astronomer Fred Hoyle later derogatorily dubbed the big bang. In that article, Lemaître postulated "the beginning of the universe in the form of a unique atom, the atomic weight of which is the total mass of the universe."5 He theorized the expansion of the cosmos from a "primeval nebula" or "primeval atom" of dense matter. Working from Einstein's theory of general relativity as well as emerging theory in the quantum mechanics of elementary atomic particles, Lemaître proposed that the universe expanded from a dense, highly compacted soup of subatomic particles that at a moment of quantum instability resulted in the unfolding of space. "What's remarkable about his Nature letter," writes John Farrell, "is that—apart from discussing the idea of a temporal beginning of the cosmos—it marks the first time that a physicist directly tied the notion of the origin of the cosmos to quantum processes."6 Even before the neutron had been discovered, Lemaître understood that the beginnings of the universe could be explained in part via quantum theory and argued that "all the energy of the universe [was] packed in a few or even in a unique quantum." By Lemaître's estimation, space and time, or space-time, could "only begin to have a sensible meaning when the original quantum had been divided into a sufficient number of quanta. If this suggestion is correct, the beginning of the [universe] happened a little before the beginning of space and time."7 Lemaître depicted the early universe as analogous to a "conic cup," the bottom of which represents "the first instant at the bottom of space-time, the now which has no yesterday because, yesterday, there was no space."8

In 1934, in "The Evolution of the Universe," Lemaître outlined his "fireworks theory of evolution," in which the stars and galaxies, having evolved over billions of years, were merely "ashes and smoke of bright but very rapid fireworks."9 He described our situated view from Earth, scanning the night skies, as we look back toward the primordial past: "Standing on a well-chilled cinder, we see the slow fading of the suns, and try to recall the vanished brilliance of the origin of the worlds."10 Lemaître additionally intuited that a fossil light would be the signature of the universe's beginning. As James Peebles, Lyman Page, and Bruce Partridge point out: "One learns from fossils what the world used to be like. The fossil microwave background radiation is no exception."11 Lemaître expected that evidence of a fossil radiation from the early stages of the universe could be detected. Thinking in 1945 that cosmic rays were the signatures of this fossil light, Lemaître supposed that these "ultra-penetrating rays" would reveal the "primeval activity of the cosmos" and were "evidence of the super-radioactive age, indeed they are a sort of fossil rays which tell us what happened when the stars first appeared."12 Just weeks before his death in 1966, Lemaître celebrated learning of the discovery of the cosmic microwave background, the fossil light he had anticipated. Throughout his career, he debated with Einstein whether or not Einstein's cosmological constant was a repulsive force that "could be understood as a vacuum energy density."13 Cosmologists now regard the cosmological constant as indicative of the effects of dark energy contributing to the universe's expansion (figure 12.1).

A Beginning for the Universe


Figure 12.1. The recession of galaxies implies the universe is expanding; using general relativity, the expansion history can be calculated. The curves show the past and future expansion of the universe for different values of the matter content (Ωm) and dark energy content (ΩΛ). Observations agree with the upwards-curving dashed line. The expansion history of the universe was dominated initially by dark matter and more recently by dark energy (Wikimedia Commons/BenRG).

The Chandra X-ray Observatory

Until the Chandra X-ray Observatory was launched, the best X-ray telescopes were only as capable as Galileo's best optical telescope, with limited collecting area and very poor angular resolution. With Chandra, X-ray astronomers gained several orders of magnitude of sensitivity, and the ability to make images as sharp as those of a medium-size optical telescope. Chandra was the third of NASA's four "Great Observatories." The others are the Hubble Space Telescope, launched in 1990 and still doing frontier science; the Compton Gamma Ray Observatory, launched in 1991 and deorbited in 2000 after a successful mission; and the Spitzer Space Telescope, launched in 2003 and currently in the final "warm" phase of its mission since its liquid helium coolant ran out in 2009 (plate 17).

It took a while to open the X-ray window on the universe because the Earth’s atmosphere is completely opaque to X-rays. In the 1920s, scientists first proposed using versions of Robert Goddard’s rocket to explore the upper atmosphere and peer into space. However, this idea wasn’t realized until 1948, when a repurposed V-2 rocket was used to detect X-rays from the Sun.21 The next few decades saw the development of imaging capabilities for X-rays and new detector technologies, and X-ray astronomy tracked the maturation of the space program. The first X-ray source beyond the Solar System, Scorpius X-1, in the constellation of Scorpius, was detected by physicist Riccardo Giacconi in 1962.22 This intense source of high-energy radiation is a neutron star, the end result of the evolution of a massive star in which gravity crushes the remnant to a state as dense as nuclear matter. The intense X-rays result from gas being drawn onto the neutron star from a companion and heated violently enough to emit high-energy radiation.

Giacconi is another “giant” in astrophysics—as leading scientist for X-ray observatories from Uhuru in the 1970s to Chandra in the 1990s, first director of the Space Telescope Science Institute, and winner of the Nobel Prize in Physics in 2002. This last and ultimate accolade fittingly came almost exactly a century after the first Nobel Prize in Physics was awarded to Wilhelm Röntgen for the discovery of X-rays. Giacconi was born in Genoa, Italy, and his early life was disrupted by the Second World War; as a high school student, he had to leave Milan during Allied bombing raids. He returned to complete his degree and started his life as a scientist in the lab, working on nuclear reactions in cloud chambers. With a Fulbright Fellowship he moved to the United States and forged his career there. He had a hand in all the pivotal discoveries of X-ray astronomy: the identification of the first X-ray sources, the characterization of black holes and close binary systems, the high-energy emission that emerges from the heart of some galaxies, and the nature of the diffuse X-rays that seem to come from all directions in the sky.23

The growth in the number of celestial X-ray sources gives a sense of how each new mission has advanced the capabilities: 160 sources in the final catalog of the Uhuru satellite in 1974, 840 in the HEAO A-1 catalog in 1984, nearly 8,000 from the combined Einstein and EXOSAT catalogs of 1990, and about 220,000 from the ROSAT catalog of 2000. Chandra has had a wider-field but lower-resolution counterpart in the European XMM-Newton mission, also launched in 1999. These two X-ray satellites have detected a total of over a million X-ray sources.

Chandra was launched by the Space Shuttle Columbia into a highly elliptical orbit that takes it a third of the way to the Moon. At five tons, it was the most massive payload the Space Shuttle had launched up to that time. The elongated orbit gives it lots of “hang time” in the vacuum of deep space and lets science be done for 55 hours of the 64-hour orbit. This came at the expense of the spacecraft being unserviceable by the Shuttle while it was still flying, so the facility has to work perfectly. In fact, the only technical problem came soon after launch, when the imaging camera suffered radiation hits during passage through the Van Allen radiation belts; it is now stowed as the spacecraft passes through those regions. The spacecraft had a nominal five-year mission at the time of launch, but it’s producing good science well into its second decade and is expected to last at least fifteen years.24

One of two imaging instruments can be the target of incoming X-rays at any given time. The High Resolution Camera uses a vacuum and a strong electric field to convert each X-ray into an electron and then amplify it into a cloud of electrons. The camera can make measurements as quickly as 100,000 times per second, allowing it to detect flares or monitor rapid variations. Chandra’s workhorse instrument is the Advanced CCD Imaging Spectrometer. With 10 CCDs, it has one hundred times better imaging capability than any previous X-ray instrument. Either of these cameras can have one of two gratings inserted in front of it, to enable high- and low-resolution spectroscopy. Spectroscopy at X-ray wavelengths is a bit different from optical spectroscopy. The spectral lines seen by Chandra are usually very high excitation lines of heavy elements like neon and iron, coming from gas that’s kept highly agitated by high-energy radiation or violent atomic collisions.25

There are three major differences between optical and X-ray detection of sources in the sky. The first is the way the radiation is gathered. X-rays falling directly on silvered glass have such high energy that they penetrate the surface and are absorbed, like tiny bullets. X-ray telescopes therefore use a shallow angle of incidence, so that the photons bounce off the mirror like a stone skimming off water. Chandra uses a set of four concentric mirrors, six feet long and very slightly tapered, so they look almost like nested cylinders. This method of gathering radiation makes it difficult to achieve a large collecting area. The second difference is the much higher energy of X-rays. Chandra measures photons in a range of energy from 0.1 to 10 keV, or 100 to 10,000 electron volts, the electron volt being a standard unit of energy for photons. Two numbers that bracket this range on the electron volt scale are 13.6 eV, the modest energy required to liberate an electron from a hydrogen atom, and 511 keV, the rest mass energy of an electron. For reference, photons of visible light have wavelengths 10,000 times longer and energies 10,000 times lower.
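The energy scales quoted in this paragraph can be checked with the standard relation E = hc/λ. A minimal sketch, using CODATA constants; the 500 nm “visible” wavelength is an assumption for green light:

```python
# Back-of-the-envelope check of the photon energy scales discussed above.
# Uses E = h*c / wavelength; constants in SI, energies reported in eV.

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron volt

def photon_energy_ev(wavelength_m):
    """Photon energy in eV for a given wavelength in meters."""
    return H * C / wavelength_m / EV

def photon_wavelength_m(energy_ev):
    """Inverse: wavelength in meters for a photon of given energy in eV."""
    return H * C / (energy_ev * EV)

visible = photon_energy_ev(500e-9)       # green light, ~500 nm -> ~2.5 eV
soft_xray = photon_wavelength_m(100)     # 0.1 keV -> ~12 nm
hard_xray = photon_wavelength_m(10_000)  # 10 keV  -> ~0.12 nm

print(f"500 nm visible photon: {visible:.2f} eV")
print(f"Chandra band: {soft_xray*1e9:.1f} nm down to {hard_xray*1e9:.2f} nm")
```

The hard end of Chandra’s band sits roughly four thousand times above the energy of a visible photon, consistent with the “10,000 times” order-of-magnitude statement in the text.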

With each photon packing such a punch, a typical astronomical source emits far fewer X-ray photons than visible photons, so each one is very valuable. The goal in X-ray astronomy is to detect every photon individually. Very few photons are required to detect a source; this is helped by the fact that the “background rate” is low, since X-rays are not created by miscellaneous or competing sources, so the X-ray sky is sparsely populated. In some of the deepest observations made by Chandra, two or three photons collected over a two-week period are enough evidence to declare that a source has been detected. There are papers in the research literature with more authors than photons!

Chandra unlocks the violent universe because celestial X-rays have high energy and can only be produced by extreme physical processes.26 The Sun and all normal stars are very weak X-ray sources because their cool outer regions produce thermal radiation peaking at visible wavelengths. It takes a gas at hundreds of thousands or millions of degrees to emit copious X-rays; diffuse gas with this very high temperature is distributed between galaxies. X-rays can also be made when particles are accelerated to extremely high energies; they release the energy in a smooth spectrum that extends to X-rays and even gamma rays.27 Despite the million-plus X-ray sources that have been cataloged, there are thousands of times more optical sources, so the X-ray sky is relatively quiet. But many of those X-ray sources are extremely interesting because they mark situations where matter has been subjected to extreme violence.


At the beginning of this book we encountered the Greek philosophers who let their imaginations roam beyond the visible, everyday world. One was Democritus, who was forty years younger than Anaxagoras; apparently, they knew each other. Democritus was known as the “laughing philosopher” for his habit of seeing the lighter side of life and mocking human frailties. He developed an original idea of Leucippus into the atomic theory. According to Democritus, the physical world was made of microscopic, indivisible entities called atoms.1 The atoms were in constant motion and they could take up an infinite number of different arrangements; sensory attributes like hot, smooth, bitter, and acrid emerged from assemblages of atoms but were not properties of the atoms themselves.2 It’s a strikingly modern view of matter.3 By analogy, a beach looks smooth from a distance, but close up we see it’s actually composed of particles ranging from tiny sand grains to pebbles and boulders.

Democritus speculated in a similar way about the Milky Way, whose smoothness conceals the fact that it is composed of the combined light of myriad stars. Several centuries later, Archytas made a logical argument that there was space beyond the whirling crystalline spheres.4 Arriving at the edge of heaven, he imagined, and extending your arm or a staff, either it meets resistance and a physical boundary, or it extends beyond the edge. If the universe is contained, it must be contained within something larger. If we can reach beyond the edge, we define a new edge, and that must continue without limit. In this way he argued that space must be infinite.5 The Greek imagination leapt downward to the invisibly small and upward to the unknowably large. By the late twentieth century, scientists knew that the span of scales from the size of a proton to the size of the observable universe was 42 powers of ten.
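The “42 powers of ten” figure is one line of arithmetic. A sketch, assuming a proton diameter of roughly 1.7×10⁻¹⁵ m and an observable-universe diameter of roughly 8.8×10²⁶ m (both rounded textbook values):

```python
import math

PROTON_SIZE_M = 1.7e-15    # ~ proton diameter (rounded assumption)
UNIVERSE_SIZE_M = 8.8e26   # ~ diameter of the observable universe (rounded)

# Number of factors of ten separating the two scales:
decades = math.log10(UNIVERSE_SIZE_M / PROTON_SIZE_M)
print(f"span: about {decades:.0f} powers of ten")
```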

Human imagination, however, is not so neatly bounded. We can easily imagine what is—a phenomenon that’s allowed by the laws of nature—as well as what isn’t—a phenomenon that violates the laws of physics as we know them. There’s even a third, limbo category—a situation that doesn’t violate any laws of nature but which doesn’t occur in our universe.6 In the early twenty-first century, scientists are once again pushing the boundaries of physical explanation, small and large. So particle theorists dream of nine-dimensional vibrating strings deep within every subatomic particle—worlds within the world—and cosmologists dream of an ensemble of space-times with wildly different properties, of which our universe is just one example—worlds beyond the world.

On Earth, the horizon is the farthest you can see because of the curvature of the surface of the planet. The distance is surprisingly small; standing in a small boat on a large body of water, you can only see about three miles in any direction. Sailors in antiquity were familiar with the slow disappearance of a ship as it sailed away, until only the tip of the mast was visible.7 This led them to speculate about and search for exotic lands that might lie “beyond the horizon.” Astronomy also has the concept of a horizon as a limit to our view or to our knowledge. The event horizon emerges from the general theory of relativity. Einstein’s formulation of gravity is geometric; instead of the linear and absolute space and time of Newton’s theory, Einstein made an explicit connection between space and time and posited that mass curves space. A compact enough object will distort space-time sufficiently to form a black hole, and the event horizon is the surface that seals a black hole off from the rest of the universe.8 It’s not a physical barrier. Rather, it’s an information membrane; radiation and matter can pass inward but not outward. Black holes are enigmatic because the region inside the event horizon lies beyond the scrutiny of observations. Cosmology has its own version of this idea, which is complicated by the expansion of the universe. The “particle” horizon for the universe divides events into those we can or can’t see at the present time. The universe had an origin, so there are regions from which light has not had time to reach us in the 13.8 billion years since the big bang. If we’re patient, light from ever more distant regions will reach us, so the observable universe grows with time. Meanwhile, the “event” horizon of the universe is the boundary between events that are visible at some time or another and events that aren’t visible at any time.9 Standard big bang cosmology predicts the existence of space-time beyond our horizon, possibly containing many star systems in addition to the 10²³ stars visible through our telescopes. There are many worlds beyond view, perhaps more than we can imagine.
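The “three miles” figure follows from the geometry of a sphere: for an eye height h far smaller than the Earth’s radius R, the horizon lies at a distance d ≈ √(2Rh). A sketch; the 2 m eye height is an assumption for someone standing in a small boat:

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean radius of the Earth
METERS_PER_MILE = 1609.344

def horizon_distance_m(eye_height_m):
    """Distance to the horizon for an observer at a given eye height,
    using the small-height approximation d = sqrt(2*R*h)."""
    return math.sqrt(2 * EARTH_RADIUS_M * eye_height_m)

d = horizon_distance_m(2.0)  # eyes ~2 m above the water
print(f"horizon at about {d / METERS_PER_MILE:.1f} miles")
```

For a 2 m eye height this gives roughly three miles, matching the text; a lookout at the top of a mast sees proportionally farther, as the square root of the height.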


Observational astronomy is a young activity. After tens of thousands of years of naked-eye observing, the telescope is just four hundred years old. Space astronomy is only fifty years into its development, and the cost of launching a big piece of glass into orbit means that the Hubble Space Telescope isn’t even among the fifty largest optical telescopes.10 In what follows we summarize what the future might hold for space science and astronomy, in terms of the missions recently launched or those on the near horizon, a few years from launch. The mid-horizon is five or ten years from now, where the uncertain funding landscape and the high cost of missions make the view very indistinct. The far horizon can depend on technologies not yet developed or perfected and is beyond view. Any projection of more than a few decades is in the realm of speculation and imagination. To close this book, we consider three pairs of missions that promise to advance our knowledge of other worlds—two within the Solar System, two looking at nearby stellar systems, and two studying the distant universe.

The heir to Viking and the Mars Exploration Rovers is the Mars Science Laboratory, or MSL.11 Following the tradition established with Spirit and Opportunity, NASA asked students to submit essays to give MSL a name. The winner, from among over nine thousand entries, was sixth grader Clara Ma, who wrote: “Curiosity is an everlasting flame that burns in everyone’s mind. It makes me get out of bed in the morning and wonder what surprises life will throw at me that day.”12 And so the most sophisticated robot ever built was named Curiosity. Clara Ma got to inscribe her name on the metallic skin of the rover before it was bundled up for launch. Over a million people worldwide are part of the mission in a smaller way, having had their names added digitally to a microchip onboard. The public is clearly engaged in this Mars rover. In the months leading up to launch, Curiosity had over 30,000 followers on Twitter (plate 23).

The Mars Science Laboratory is a mission with a high degree of difficulty. Technical problems forced it to slip from a launch window in 2009 to the next one in late 2011, and meanwhile the budget ballooned to over $2.5 billion by the time of launch on November 26, 2011. Choosing a single place to land on Mars and answer profound questions about the entire planet was difficult; mission planners held a series of five workshops to whittle sixty possible sites down to four, and they let the decision float as long as they could before selecting Gale Crater. It’s like playing roulette and “betting the farm” on a single number and a single spin of the wheel. The stakes are enormous—imagine trying to identify a single place on Earth that would be representative of all terrestrial geology and biology. Site selection was made much easier by maps from the Mars Reconnaissance Orbiter that show features and surface rocks as small as a sofa. The selection hinged on engineering and safety constraints like having navigable terrain and avoiding high latitudes, where less energy is available to run the rover. Beyond that, the key landing site requirement was habitability: evidence for the presence of surface water in the past.13 It’s hoped that Gale Crater will show evidence of once having been a shallow lake bed.

Pathfinder, which crawled over Mars in 1997, was little bigger than a child’s radio-controlled toy car. Spirit and Opportunity were the size of golf carts. Curiosity is like a small SUV. It’s ten feet long, nine feet wide, seven feet tall, and it weighs a ton.14 With that large a mass, NASA couldn’t use the “bouncing airbag” landing method of previous rovers, where retro rockets slowed the lander and the payload sprouted airbags on all sides to protect it as it bounced to a lazy halt in the gentle Martian gravity. MSL used a procedure challenging enough to have flight engineers reaching for the Tums when the spacecraft started its final maneuvers 150 million miles from Earth, with no real-time control possible. It steered through a series of S-shaped curves to lose speed, similar to those used by astronauts in landing the Space Shuttle. The target landing area was twelve miles across; that sounds large, but it’s five times smaller than for any previous lander. A parachute slowed the descent for three minutes, then the spacecraft jettisoned its heat shield, leaving the rover exposed inside an aeroshell, attached to a “sky crane” mechanism. Retro rockets on the upper rim of the aeroshell further slowed the descent, and then the aeroshell and parachute were jettisoned. At that point, the sky crane became the descent vehicle, with its own retro rockets guiding it toward the surface. About a hundred meters above the surface, the sky crane slowed to a hover and lowered the rover to the surface on three cables and an electrical umbilical cord. The rover made a soft “wheels-down” landing and the connections were released (and would have been severed as a fallback if the release mechanism had failed). The sky crane then flew off to crash land at a safe distance.15 Landing was set for August 6, 2012, after which the rover would ready itself for two years of exploring the red planet. Whew.

That was the plan. And the outcome: everything worked flawlessly. All the scary outcomes and potential disasters were avoided, and the spacecraft touched down gently near the edge of Gale Crater on August 6, 2012. Hundreds of engineers cheered, and the millions who watched online shared the pride of the team in their technical feat. In fact, the landing worked so well that NASA has decided to use the technology again to launch another rover in 2020, with an entirely different set of scientific instruments onboard.

Curiosity has a science payload five times heavier than that of any previous Mars mission. There are ten instruments; eight have U.S. investigators, and one each comes from Russia and Spain. Curiosity has a titanium robotic arm with two joints at the shoulder, one at the elbow, and two at the wrist. The arm can extend seven feet from the rover, and it’s powerful enough to pulverize rocks while being delicate enough to drop an aspirin tablet into a thimble. The arm is versatile.16 It has a hand lens imager that can see details smaller than the width of a human hair, a spectrometer that can identify elements, a geologist’s rock brush, and tools for scooping, sieving, and delivering rock and soil samples to instruments within the rover. A set of three instruments called “Sample Analysis at Mars” will analyze the atmosphere and samples collected by the arm, with an emphasis on the detection of organic molecules and isotopes that trace the history of water in the rocks.17 The arm will deliver samples to another instrument that uses X-ray spectroscopy and fluorescence methods to analyze the complex mixture of minerals in a typical rock or sample of soil. Engineers took instruments that would fill a living room on Earth and squeezed them into a space the size of a microwave oven on Curiosity.

Film director James Cameron lobbied hard for a high-resolution 3D zoom camera to be included in the mission. The option for 3D imaging had been cut for budgetary reasons in 2007, but NASA was savvy enough to recognize a compelling public relations angle, so Cameron was appointed as a Mastcam co-investigator, and informally as a “public engagement co-investigator,” and the option was reinstated. However, in early 2011 the idea was shelved because there wasn’t enough time to fully test the cameras before launch.18 Curiosity lost the potential for cinematic 3D imaging with a director’s eye, but retained its fixed-focal-length cameras, which should return crisp 3D images for the duration of the mission. The mast cameras will view Mars from eye level, giving the public a sense of roaming on a distant world themselves. Nearly compensating for the loss of Cameron’s cinematography is a high-tech instrument called ChemCam, which uses laser pulses to vaporize rocks at a distance of up to seven meters.19 It then performs a chemical analysis of the vapor, returning results within seconds. A “Star Wars” capability like that will undoubtedly capture the public imagination.

If Curiosity’s landing site is likened to a crime scene where damage was caused by the action of water, Curiosity is trying to identify the culprit long after it has left the scene. It will tell us beyond any reasonable doubt whether at least one part of this small, arid world was once wet and hospitable for life. Curiosity isn’t designed to find life. It’s not designed to detect DNA or other essential biological molecules, and it can’t look for fossils in the rocks it studies.20 Rather, it’s designed to detect organic compounds and provide an inventory of life’s essential elements—carbon, hydrogen, nitrogen, oxygen, sulfur, and phosphorus. Viking’s tantalizing verdict on life of “not proven” resonates among planetary scientists, so they’re cautious in setting expectations for Curiosity, but the improvements will be dramatic. Unlike the Vikings, Curiosity will be at a site almost certain to have been wet in the past. It can roam widely for its samples; the Vikings could only gather soil their arms could reach. Curiosity will analyze the interiors of rocks, and it will be extraordinarily sensitive, able to detect organic compounds at the level of one part in a billion. It will test the hypothesis that perchlorate can mask the detection of organics. It will decide whether low levels of Martian methane are more likely to be geological or biological in origin. It will even study the pattern of carbon atoms in any organic material it detects: dominance of even or odd numbers of carbon atoms, as opposed to random mixtures of each, would be evidence of the repetitious subunits seen in biological assembly.21 Early results indicate that the Martian soil is chemically complex and potentially life-bearing.

And then we’ll have to wait. The middle horizon for Solar System exploration is cloudy.22 As impressive as Curiosity is, there’s only so much that can be learned from a small payload of instruments shipped to the Martian surface. If chunks of Mars could be brought back to Earth, they could be analyzed in exhaustive detail, molecule by molecule. So sample return has long been the “Holy Grail” of Mars exploration.23 Unfortunately it will be difficult and expensive. The current plan is for two rovers to go to the same site on Mars and drill down as much as six feet for samples, then “cache” them in a sealed storage unit. A rocket would then be required to boost the samples into Mars orbit, and finally a third mission would ferry them to Earth. Just the first part of this procedure will cost at least $2.5 billion, and we wouldn’t have the samples in our hands until 2022 or even later.

The alternative is equally exciting: a study of “water worlds” in the outer Solar System. NASA and ESA combined forces to draw up plans for a flagship mission to interesting targets far from the Earth. There were two concepts: an orbiter to study the Jovian moons Europa and Ganymede, and another to study Saturn’s moon Titan. After a “shootout” in 2009, it was announced that the Jupiter mission would go first. The Europa Jupiter System Mission,24 or Laplace, would launch no earlier than 2020 and reach the Jovian system in 2028, and it would consist of two orbiters: one built by NASA to study Europa and Io, and another built by ESA to study Ganymede and Callisto. NASA’s part of this project alone has a price tag of $4.7 billion, so funding is not yet guaranteed.25 If all goes well, 420 years after Galileo first noted Jupiter’s largest moons, they will be observed in unprecedented detail by robotic spacecraft.

These missions could cement the idea that “habitable worlds” include not just Earth-like planets but also large moons of giant planets. Mars lies beyond the edge of the traditional habitable zone, where water can be liquid on the surface of a planet. The outer Solar System planets are miniature versions of the Sun in chemical composition, and so uninteresting for potential biology. However, large moons like Europa and Ganymede almost certainly have watery oceans under crusts of ice and rock. Tidal and geological heating provide a localized energy source, so all the ingredients for biology are there. NASA’s orbiter will measure the three-dimensional distribution of ice and water on Europa and determine whether the icy crust has compounds relevant to prebiotic chemistry.26 ESA’s orbiter will do the same for the enigmatic moon Ganymede, the largest in the Solar System.27 The biological “real estate” of the cosmos could easily be dominated by moons like these, languishing far from their stars’ warming rays.

The second pair of missions is exploring new worlds farther from home, across swaths of the Milky Way galaxy. When the first planet beyond the Solar System was discovered in 1995, it marked the dramatic end of decades of searching and frustration. A planet like Jupiter reflects hundreds of millions of times less light than is emitted by its parent star, and from afar it will appear very close to the star, like a firefly lost in a floodlight’s glare. Astronomers therefore used stealth to discover exoplanets. They monitored the spectra of stars like the Sun and looked for Doppler shifts in the spectra caused by giant planets tugging on the star.28 This method has been an unqualified success; the number of exoplanets doubles every eighteen months, and more than three thousand have been discovered.29 It’s a major step in the continuing Copernican Revolution—planets form throughout the galaxy as a natural consequence of star formation.

Timing is everything in science. It’s often fruitless to be too far ahead of your time, but being slow to innovate means being left in the dust. Bill Borucki hit the sweet spot with his idea of a different way to find exoplanets. Working at NASA Ames Research Center, he realized that a planet transiting across its star would dim it slightly, by an amount equal to the ratio of the area of the planet to the area of the star. It was a statistical method, since even if all stars had planets, only a small fraction would be oriented so that the planet passed in front of the star. He realized that with the stability of the space environment, it would be possible to detect not only the 1 percent dimming caused by a Jupiter transit but also the 0.01 percent dimming of an Earth transit (figure 13.1). This was the chance to find other worlds like ours in distant space.30
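The area ratio Borucki relied on is easy to reproduce. A minimal sketch; the equatorial radii below are rounded reference values, not mission parameters:

```python
# Transit depth = (planet radius / star radius)^2: the fraction of the
# stellar disk blocked when the planet crosses in front of it.

SUN_RADIUS_KM = 695_700.0
JUPITER_RADIUS_KM = 71_492.0
EARTH_RADIUS_KM = 6_378.0

def transit_depth(planet_radius_km, star_radius_km=SUN_RADIUS_KM):
    """Fractional dimming during a central transit (areas as disks)."""
    return (planet_radius_km / star_radius_km) ** 2

print(f"Jupiter transit: {transit_depth(JUPITER_RADIUS_KM):.2%}")  # ~1%
print(f"Earth transit:   {transit_depth(EARTH_RADIUS_KM):.4%}")    # ~0.01%
```

The two results reproduce the 1 percent and 0.01 percent dimmings quoted in the text, which is why detecting an Earth demands the photometric stability of space.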

Borucki and his team pitched the project to NASA Headquarters in 1992, but it was rejected as being technically too difficult. In 1994 they tried again, but this time it was rejected as being too expensive. In 1996, and then again in 1998, the proposal was rejected on technical grounds, even though lab work had been done on the proof of concept. By the time the project was finally given the go-ahead as a NASA Discovery-class mission in 2001, exoplanets had been discovered and the first transits had been detected from the ground.31 Persistence had paid off.

Kepler was launched in March 2009 and almost immediately showed it had the sensitivity to detect Earths. It is using a one-meter mirror to “stare” at about 150,000 stars in the direction of the Cygnus constellation. Designed to gather data for 3.5 years, its mission was extended in early 2012 through 2016, which is long enough for it to succeed in its core mission: conducting a census of Earth-like planets transiting Sun-like stars. Early in 2013, the Kepler team announced results from the first two years of data. The results were spectacular: over 2,700 high-probability candidates (plate 24), more than quadrupling the number of known exoplanets.32 Nearly 350 of these were similar to the Earth in size, and fifty-four were in the habitable zones of their stars, ten of which were Earth-sized. The team projected that 15 percent of stars would host Earths and 43 percent would host multiple planets. The estimated number of habitable worlds in the Milky Way is a hundred million. The imagination struggles to contemplate what might exist on so many potentially living worlds.

Kepler is looking at stars within a few hundred light-years, our celestial backyard. Meanwhile, astronomers have their sights set on mapping larger swaths of our stellar system. Due for launch by the European Space Agency in 2013, Gaia is the heir to Hipparcos, measuring all stars in the Milky Way down to a level a million times fainter than the naked eye can see. The heart of Gaia is a pair of telescopes and a two-ton optical camera, feeding a focal plane tiled with over a hundred electronic detectors. Kepler has a camera with 100 million pixels; Gaia’s camera has a billion.33 Hipparcos studied 100,000 stars; Gaia will chart the positions, colors, and brightness levels of a billion stars. Not only that, it will measure each star a hundred times, for a total of a hundred billion observations. Like WMAP, Gaia will be at the L2 Lagrange point a million miles from Earth, gently spinning to scan the sky during its five-year mission.34 The result will be a stereoscopic 3D map of our corner of the Milky Way galaxy.

Gaia will detect planets, although that’s not its primary objective. It will carry out a full census of Jupiters within 150 light-years, not by looking for a Doppler shift or an eclipse of the parent star but by actually seeing the star wobble as it is tugged to and fro by the planet. This is a truly minuscule effect—the Sun wobbles about a point just above its own surface due to the influence of Jupiter, and Gaia will be looking for a similar motion in stars hundreds of trillions of miles away. It’s like trying to see a penny pirouetting about its edge on the surface of the Moon. As many as 50,000 exoplanets may be discovered this way, compared to the current census of a few thousand candidates. Gaia will also detect tens of thousands of dim and warm worlds called brown dwarfs, which represent the “twilight” state of stars not massive enough to initiate nuclear fusion. It will also be able to detect dwarf planets like Pluto in the remote reaches of the Solar System. Such planets are distinctive and interesting geological worlds in their own right.
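The penny analogy can be checked with an order-of-magnitude calculation. The sketch below uses rounded assumptions (Jupiter at 5.2 AU, a Jupiter-to-Sun mass ratio of about 1/1047, a 19 mm penny); the numbers are illustrations, not Gaia specifications:

```python
# Order-of-magnitude check of the "penny on the Moon" analogy.

SUN_JUPITER_AU = 5.2
JUPITER_MASS_FRACTION = 1 / 1047   # M_Jupiter / M_Sun
AU_M = 1.496e11
LIGHT_YEAR_M = 9.461e15
MICROARCSEC_PER_RAD = 180 / 3.141592653589793 * 3600 * 1e6

# Amplitude of the Sun's wobble about the Sun-Jupiter barycenter,
# roughly the radius of the Sun itself (~7e8 m):
wobble_m = SUN_JUPITER_AU * AU_M * JUPITER_MASS_FRACTION

# Angular size of that wobble seen from 150 light-years away:
angle_uas = wobble_m / (150 * LIGHT_YEAR_M) * MICROARCSEC_PER_RAD
print(f"solar wobble at 150 ly: ~{angle_uas:.0f} microarcseconds")

# For comparison: a 19 mm penny at the Moon's distance (~384,400 km):
penny_uas = 0.019 / 3.844e8 * MICROARCSEC_PER_RAD
print(f"penny on the Moon:      ~{penny_uas:.0f} microarcseconds")
```

Both answers come out in the tens-to-hundreds of microarcseconds, which is why the analogy works: the wobble Gaia hunts for is comparable in angular scale to a coin seen at lunar distance.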

Gaia represents our “coming of age” in the Milky Way. It also provides a bridge to the last pair of missions we will consider, since the motions of the oldest stars in our galaxy were imparted at birth, so those motions represent a “frozen record” of how the galaxy formed. Current theories suggest that large galaxies were assembled from mergers of smaller galaxies. Most of these mergers occurred billions of years ago, but traces remain in the form of streams of stars threading the halo of the galaxy like strands of spaghetti.35 This type of galactic archaeology is a vital part of understanding how the universe turned from a smooth, hot gas into a cold void sprinkled with “island universes” of stars.

Cosmology is a vibrant field, spurred by new observational capabilities, computer simulations, and increasingly well-tested theories of gravity and the big bang origin of the universe. The mid-horizon in cosmology is defined by the James Webb Space Telescope (JWST), an ambitious successor to the Hubble Space Telescope.36 Hubble’s mirror is 2.4 meters in diameter, limited by the size of payload the Space Shuttle could heft into low Earth orbit. JWST will gather seven times more light with its 6.5-meter mirror, even though it will be launched on an Ariane 5 rocket, which has a maximum payload diameter of 4 meters


Figure 13.2. The James Webb Space Telescope is the successor to the Hubble Space Telescope, due for launch in 2018. It will be located a million miles from the Earth and will unfold in orbit to become the largest telescope ever deployed in space. Its primary goal is to detect “first light” in the universe, when the first stars and galaxies formed, over 13 billion years ago (NASA/JWST).

(figure 13.2). This clever trick is accomplished by a design in which the segments of the telescope unfold after it is deployed, like flower petals, adding greatly to the engineering challenge. JWST will also be located at L2, the increasingly crowded "watering hole" of many astrophysics missions. The telescope and its instruments will be too far away to be serviced by astronauts, so the stakes are high and everyone involved in the project feels the pressure. JWST's cost has ballooned to over $8 billion and the launch date has slipped several times, with a current estimate of 2018. Although the project has broad support from the astronomical community, the high cost and repeated delays have brought it close to cancellation. Just as NASA can afford only one flagship planetary science mission like MSL at a time, it can afford only one flagship astronomy mission at a time. Until JWST is operating, no other big concept can get traction and a funding start. Astronomers struggle to maintain solidarity under such conditions.
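The "seven times more light" comparison follows directly from the ratio of mirror areas. The quick check below treats both mirrors as simple filled circles; JWST's mirror is actually a segmented hexagon, so this is only an approximation.

```python
# Compare the light-gathering power of two telescopes by mirror area,
# approximating each primary mirror as a filled circular aperture.
import math

def collecting_area(diameter_m):
    """Area of a circular aperture of the given diameter, in square meters."""
    return math.pi * (diameter_m / 2) ** 2

hubble = collecting_area(2.4)  # Hubble: 2.4-meter mirror
jwst = collecting_area(6.5)    # JWST: 6.5-meter mirror

ratio = jwst / hubble
print(round(ratio, 1))  # roughly 7, as the text states
```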

Since JWST is a general-purpose telescope, it will contribute to the study of exoplanets as well as the more remote universe. One of its instruments can take spectra of the feeble light reflected by a planet next to a remote star. By observing in the infrared, the exoplanet will be ten to twenty times brighter relative to the star than it would be at visible wavelengths. Astronomers will take the most Earth-like planets and search for spectral features due to oxygen and ozone. These two gases are telltale indicators of life on Earth, and if they're seen in abundance in the atmospheres of exoplanets, it will be evidence of photosynthetic organisms elsewhere. The observations are extremely challenging, requiring a hundred hours or more per target.37 But if they succeed, the detection of biomarkers will be a dramatic step in our quest to find a world like our own.

However, the core mission of JWST is to detect "first light." First light is the time in the youthful universe when gravity had finally had time to congeal the first structures from the rapidly expanding and cooling gas. The big bang was unimaginably hot, a cauldron intense enough to melt the four forces of nature into one super-force. By 400,000 years after the big bang, the temperature had dropped to 3000 K and the universe was glowing dull red as the fog lifted and the photons of the radiation from the big bang began to travel freely through space. The "Dark Ages" began.38 Theorists are not sure of the exact timing, but they think that it took another hundred million years or so for small variations in density to grow enough for pockets of gas to collapse. This would have been when the universe was at room temperature, a hundred times smaller, and a million times denser than it is today. At this point the Dark Ages ended. It's likely that stars formed first and went through a number of cycles of birth and death before they agglomerated into galaxies. The first stars were probably massive, 80-100 times the mass of the Sun, and they died quickly and violently, leaving behind black holes. In this strange universe no worlds like the Earth existed. The universe was made of hydrogen and helium; it would take generations of stars living and dying before enough heavy elements were created for planets and biology to be possible.

Even JWST will probably not be able to detect the first stars, but it should be able to detect the first galaxies. Its superior capabilities are required because the Hubble Space Telescope has almost exhausted its light grasp as it looks for first light. For epochs less than 500 million years after the big bang, light travels so far and for so long to reach us that cosmic expansion stretches visible photons to near-infrared wavelengths. Distant light slides off the red end of the spectrum and becomes invisible. In a sense, JWST is really looking for "first heat."
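The stretching of light by cosmic expansion can be written as a simple relation between emitted and observed wavelengths, where $z$ is the redshift:

```latex
\lambda_{\mathrm{obs}} = (1 + z)\,\lambda_{\mathrm{emit}}
```

As an illustrative example (the redshift value here is a rough figure for an epoch about 500 million years after the big bang, not a number from the text): green light emitted at $\lambda_{\mathrm{emit}} \approx 0.5\,\mu\mathrm{m}$ from a galaxy at $z \approx 10$ arrives at $\lambda_{\mathrm{obs}} \approx 5.5\,\mu\mathrm{m}$, far beyond what the eye, or Hubble, can see.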

To take us beyond the horizon, we look to a mission that is currently taking data. Planck, the successor to WMAP, was launched by the European Space Agency in 2009. It has a primary mirror about the size of a dining room table, made of carbon fiber reinforced plastic coated with a thin layer of aluminum and weighing only twenty-eight pounds. The detectors are cooled to just above absolute zero. Planck improves on WMAP by a factor of three in angular resolution, a factor of ten in wavelength coverage, and a factor of ten in sensitivity. All of this means Planck will extract fifteen times more information from the cosmic microwave background than WMAP. Nobody doubts the big bang anymore, so Planck is taking the theory to its limits, testing inflation, the notion that the universe underwent a phase of exponential expansion an amazing 10⁻³⁵ seconds after the big bang.39 By making measurements with a precision of a millionth of a degree, the Planck mission may be able to determine how long inflation lasted and detect the signature of primordial gravitational waves imprinted on the microwaves. Or it may not provide any support for inflation, driving scientists back to the drawing board.

If inflation occurred, quantum fluctuations from the dawn of time became the seeds for galaxy formation. If inflation occurred, there are regions of space and time far beyond the reach of our best telescopes. If inflation occurred, one quantum fluctuation grew to become the cold and old universe around us. The initial conditions of the big bang can accommodate many quantum fluctuations, each of which gives rise to a universe with randomly different physical properties. In most of those universes the properties would be unlikely to support long-lived stars and the formation of heavy elements and life. Inflation therefore supports the "multiverse" concept, in which there are myriad other universes with different properties, unobservable by us, and we happen to live in one of the rare universes hospitable to life and intelligent observers.40

In fact, the multiverse offers so many possibilities that everything that could possibly happen actually will happen somewhere. There might not just be clones of Earth beyond the horizon, but clones of us as well. Quantum genesis and the multiverse take us to a point where science becomes as wild and bold as any science fiction—a vision of worlds without end.


The People's Telescope

The Hubble Space Telescope has contributed to the identification of exoplanets, the dark energy that permeates the universe, and massive black holes that lurk in nearby galaxies. Probably no other science facility has left its mark in so many homes or done more to advance the general public's understanding of the structure, age, and size of the universe. Breathtaking photographs of regions of star formation, stunning spiral galaxies, exploding planetary nebulae, and the most distant galaxies in the visible universe spill out of coffee table books, adorn the walls of children's bedrooms, and serve as computer screensavers. Why is that? The answer is not simply that the Hubble images pervade popular culture, though of course they do. There are multiple factors that might account for the telescope's tremendous popularity and an increasing public awareness of, and affection for, the telescope.

While there are telescopes on Earth with twenty-five times the light-gathering power, Hubble remains the premier tool of astronomers due to its exquisite sensitivity, and the ubiquity of the Hubble photographs has been unprecedented. From newspapers and magazine covers to planetarium and museum programs and displays, popular science books, posters, calendars, and postage stamps, Hubble images pervade popular culture. Moreover, the emergence of the Internet has afforded global access to the telescope's photos and scientific results. In July 1994, when Comet Shoemaker-Levy 9 slammed into Jupiter with an estimated explosive force of six hundred times humanity's entire nuclear arsenal, Hubble offered up-close views of the devastating planetary impacts. Astronomer David Levy, co-discoverer of the comet, reported that millions of people around the world watched on television or via the Internet as the comet's line of fragments bombarded the massive planet.20

Art historian Elizabeth Kessler offers several less obvious reasons why the Hubble Space Telescope is so cherished worldwide. She contends that Hubble's spectacular images have become "interwoven into our larger visual culture" in part because of their very deliberate construction along the lines of sublime art, which often seeks to evoke grandeur, great height and breadth of field, and an overwhelming awe of the power of nature (plate 19). She points out that many of the Hubble images released by the Space Telescope Science Institute (STScI) and the associated Hubble Heritage Project reflect the aesthetics of nineteenth-century paintings of the American West. "Light streams from above" in the Hubble images as in landscape paintings, explains Kessler,21 despite the fact that there is no up or down in space. Through such framing strategies Hubble's photos are often composed like landscape paintings or photographs.

Kessler argues that what also endears Hubble to so many is that its photos provide a means for public audiences "to imagine the possibility of seeing such spacescapes with our own eyes."22 But Kessler is careful to clarify that all published Hubble photos are interpretations, usually composites of multiple images captured at various wavelengths of light. Even if we could travel across the galaxy at many times the speed of light to observe astrophysical objects up close, we would likely be unable to see color in faint light sources, because of how the rods and cones in the human eye work. The cones allow us to see color, but they need lots of light to do so; the rods require less light, but do not pick up color. That's why photographs of the Milky Way arced across the night sky often include color that we cannot see with the naked eye.23

Kessler points out that the Hubble Heritage images are extensively processed and represent not what the human eye would see, "but a careful series of steps that translate numeric data into picture."24 Hubble collects data in visible light but also supports a suite of instruments that "see" at wavelengths not visible to the human eye, such as infrared and ultraviolet light, which can reveal additional details. In actuality, HST's electronic detectors see only intensity, which can be represented in grayscale, a range of shades of black, white, and gray. A set of filters made of colored glass, such as red, green, and blue, is rotated in front of the detector as images are captured.25 With spectroscopy, different wavelengths of light are dispersed with a grating and linearly arrayed on the detector. Once the data have been collected, color is added in processing by combining the images taken through different filters with the appropriate weights to reproduce "true color"26 that can be used to distinguish gases within a nebula or their temperatures, or to distinguish young, hot, newly formed stars from older, cooler stars.
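The compositing procedure just described can be sketched in a few lines: grayscale exposures taken through separate filters are weighted, normalized, and stacked into one color image. This is a minimal illustration of the general idea, not the actual STScI processing pipeline; the toy arrays and weights are invented for the example.

```python
# Minimal sketch of broadband color compositing: three grayscale frames,
# one per colored-glass filter, are combined into a single RGB image.
import numpy as np

def composite(red, green, blue, weights=(1.0, 1.0, 1.0)):
    """Weight each grayscale frame, normalize it to 0..1, and stack
    the three results into an RGB image of shape (ny, nx, 3)."""
    channels = []
    for frame, w in zip((red, green, blue), weights):
        scaled = frame.astype(float) * w
        peak = scaled.max()
        if peak > 0:
            scaled = scaled / peak  # normalize this channel to 0..1
        channels.append(np.clip(scaled, 0.0, 1.0))
    return np.stack(channels, axis=-1)

# Toy 2x2 "exposures" standing in for filtered detector readouts.
r = np.array([[10, 0], [0, 5]])
g = np.array([[0, 8], [0, 4]])
b = np.array([[0, 0], [6, 3]])
rgb = composite(r, g, b)
print(rgb.shape)  # one color image built from three intensity maps
```

The choice of weights is where the science enters: different weightings emphasize different gases or stellar populations, which is why the same raw data can yield visibly different published images.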

The Hot Big Bang

Pivotal roles in science are often taken by outsiders. The "father" of the big bang model was a part-time lecturer at the Catholic University of Leuven when he published a paper in a journal that was little read outside Belgium. Georges Lemaître was a Catholic priest who had moved from civil engineering into physics and astronomy. Einstein was initially skeptical, but at a seminar Lemaître gave in Princeton in 1935, he expressed his admiration for the beauty of the theory.

The other intellectual parent of the scientific theory of creation was a Russian émigré called George Gamow. This physicist was elected to the Academy of Sciences of the U.S.S.R. at the age of twenty-eight, the youngest person ever to gain that honor. With his student Ralph Alpher, he showed that the early hot universe could have produced the amount of helium we see, which is too much to have been produced by stars. Gamow jokingly had his colleague Hans Bethe listed on the paper so it would read Alpher-Bethe-Gamow, in a pun on the first three letters of the Greek alphabet; Bethe had no other role in the paper.14 In the same year, 1948, Gamow and Robert Herman predicted that the afterglow should have cooled down after billions of years, filling the universe with microwave radiation at a temperature of five degrees above absolute zero, or 5 K.15 However, nobody pursued the prediction, due in part to a lack of widespread awareness of the theory, and in part to the primitive state of microwave technology at the time.

The new theory was given its catchy name by Fred Hoyle in a BBC radio broadcast in 1949.16 Even though he advocated the rival “steady state” theory, which didn’t involve a hot and dense early phase for the universe, he claimed the label was descriptive and not pejorative. As Hoyle noted, with typically sardonic wit, the big bang was an audacious theory: the entire universe, holding enough matter to yield more than a trillion trillion stars in many billions of galaxies, somehow emerged instantaneously and without any precedent from an iota of space-time! Steady state theory called for the gradual creation of matter in the vacuum of space between the receding galaxies. Although spontaneous creation of matter was ad hoc physics, it seemed like a more modest proposition.

For fifteen years, the status of the theory remained tentative. The universal recession of galaxies certainly pointed to a time when the universe was smaller, denser, and hotter. The explanation of the fact that the universe is a quarter helium by mass (and 10 percent by number of atoms) was a success, but the difficulty of measuring cosmic abundances of other light elements meant no further progress could be made in testing this idea, called big bang nucleosynthesis. Counts of extragalactic radio sources indicated that the population was evolving, which argued against the steady state theory. The missing ingredient was a decisive observation that favored the hot big bang. It came by accident in 1964, when two engineers working at Bell Labs in Holmdel, New Jersey, detected a microwave signal of equal intensity in every direction in the sky (figure 12.2). NASA's Wilkinson Microwave Anisotropy Probe, or WMAP, is the illustrious descendant of this pioneering experiment.



Figure 12.2. Theorists working with the expanding universe model predicted that the universe should be filled with relic radiation from the big bang, diluted and cooled by the expansion to just under 3 K. NASA's COBE satellite measured the spectrum and showed that the radiation had exactly the predicted temperature; the data fit the model so well that the error bars are smaller than the thickness of the curve (NASA/COBE/FIRAS Science Team).