
X-rays in Popular Culture

News of Röntgen’s discovery of X-rays rippled around the world. “When Röntgen discovered X-rays in November of 1895 by placing his hand between a Crookes Tube and a fluorescent screen, the medical applications of the technology were immediately apparent.”4 By January 1896, Röntgen had issued a report regarding his experiments to media in Europe and the United States. The public was immediately fascinated by the possibility of seeing through seemingly solid objects or inside the human body, and the topic made headlines in newspapers and weeklies.


Figure 10.1. At the end of the nineteenth century, X-rays were harnessed for medical imaging, where dense material like bone casts a shadow in X-rays. In the astronomical realm, a very hot gas or plasma emits X-rays, as do regions near compact objects like black holes. The X-rays may also be absorbed by intervening material, and they can only be detected by a satellite above the Earth’s atmosphere (NASA/CXC/M. Weiss).

Apparently, X-ray “mania” swept through the public sphere but at a level far surpassing a simple cultural fad, explains Nancy Knight, who notes that “X-rays appeared in advertising, songs, and cartoons.”5 John Lienhard similarly reports, “Seldom has anything taken hold of the public imagination so powerfully and completely. . . . Right away, magazine cartoons celebrated the idea. A typical one showed a man using an X-ray viewer to see through a lady’s hat at the theatre.”6

Edwin Gerson has investigated the speed with which X-rays were incorporated into the branding of products in the United States, and he explores why so many marketers used the term X-ray to appeal to consumers. A medical doctor, Gerson began collecting household products dating back to the 1890s, none of which have anything whatsoever to do with X-rays. One could purchase Sniteman’s X-Ray Liniment for horses and cattle, and Patt’s X-Ray Liniment for people. X-Ray Golf Balls, ostensibly to improve the accuracy of one’s game, were sold by John Wanamaker of New York. There was the D-cell X-Ray flashlight battery, X-Ray Stove Polish, X-Ray Cream Furniture Polish manufactured in New York, and X-Ray Soap made in Port Huron, Michigan. X-Ray Blue Double Edge razor blades were touted as “the finest blade known to science,” while drinking fountains for chicken houses were marketed by the X-Ray Incubator Company in Wayne, Nebraska. Housewives could purchase an X-Ray Coffee Grinder or the X-Ray lemon squeezer, and the X-Ray Raisin Seeder was available as early as November 1896. A company from Baltimore, Maryland, sold boxes of X-Ray headache tablets, with eight pills for ten cents promising relief in just fifteen minutes.7 Clearly, advertising by means of non sequitur is not just a phenomenon of the modern age.

The popular fascination with X-rays continued for decades. By 1924, an X-ray shoe fitter was deployed by shoe vendors to image both feet inside a new pair of shoes. David Lapp reports that in the United States “there were approximately 10,000 of these fluoroscopes in use, being made by companies such as Adrian X-Ray Shoe Fitter,” and that by the early 1950s most shoe stores were using the devices.8 As Gerson points out, “The public was simply astonished with X rays, and advertisers played off this spellbound attention by adding the name to almost any type of product. . . . Not only did the image of the X ray convey a sense of cutting-edge technology, it also functioned as a metaphor for ‘powerful unseen truth and strength.’”9

Researchers and medical personnel working with X-rays were frequently exposed to dangerous or fatal levels of radiation, including Elizabeth Fleischmann, who at age 28 read a news article on how to build an X-ray device and by 1900 had become the best medical radiologist in the United States. But within ten years, Fleischmann had been so badly overexposed to radiation while imaging her patients and developing the films that she did not survive.10 At the time, the effect of exposure to radiation was poorly understood, and this ignorance played out in dangerous exposure for both medical practitioners and those working with the new technology for purposes of entertainment. Nancy Knight, for instance, indicates that “papers reported daily” on Thomas Edison’s efforts to X-ray a person’s brain in order to figure out how the brain functions.11

Allen Grove reports that some speculated at the time that the new technology might afford a kind of X-ray vision, making it possible to capture on film the images of ghosts, apparently a popular notion in England. He also comments that within months of Röntgen’s announcement, theaters were headlining performances inspired by the popularity of X-rays.12 Piano sheet music for American composer Harry L. Tyler’s “X Ray Waltzes” from 1896 depicted on its cover a man holding his hand under an X-ray tube. Beneath the man’s hand appears a skeletal X-ray image. In fact, in 2010, Tyler’s waltzes were dusted off and featured in a BBC broadcast titled “Images that Changed the World.” And any discussion of X-rays in popular culture must recall that in the 1930s, when writer Jerry Siegel and artist Joe Shuster invented the prototypical superhero, it was inevitable that Superman would be gifted with X-ray vision. Though Superman didn’t have such ability at the outset, in later adventures he is able to see in multiple wavelengths of the electromagnetic spectrum, including radio waves, infrared, ultraviolet, and eventually X-rays.13

Röntgen’s discovery had a deep shaping effect on the arts. This was because X-rays were an entrée into a hidden world, the “world within the world” represented by the interior structure of atoms and molecules. In 1912, less than twenty years after the initial discovery, physicist Max von Laue showed that X-rays were a form of electromagnetic radiation because they diffracted when interacting with matter. A year later, the father and son team of William Henry and William Lawrence Bragg described mathematically how X-rays scattered within a crystal, winning the Nobel Prize in Physics for their work in 1915. This opened the door to X-ray crystallography, a powerful tool for measuring the spacing and arrangement of atoms within a solid. The impact of this discovery was particularly strong in the visual arts, which were undergoing their own revolution with the onset of Fauvism, Cubism, and Futurism. The X-ray was liberating because to artists it “proved that the external is not valid, it’s just a false layer.”14

Art historian Linda Dalrymple Henderson contends that X-rays suggested the possibility of stepping into the fourth dimension. She writes, “From the 1880’s to the 1920’s, popular fascination with an invisible, higher dimension of space—of which our familiar world might be only a section or shadow—is readily apparent in the vast number of articles and the books. . . published on the topic.” Henderson has demonstrated that modern artists readily embraced X-rays, and their ability to expose realities and perspectives not perceptible to the human eye, as suggestive of four-dimensional space.15 Cubist paintings, with their simultaneous depiction of all sides of a seemingly transparent object, and their suggestion of interior planes, explains Henderson, “are testaments to the new paradigm of reality ushered in by the discovery of X-rays and interest in the fourth dimension. Such paintings are new kinds of ‘windows’—in this case, into the complex, invisible reality or higher dimensional world as imagined by the artist.”16 The inspiration was sometimes very direct. According to historian Arthur Miller, Picasso’s Standing Female Nude (1910) was inspired by the power of X-rays to glimpse beyond the visible: what you see is not what you get. In this case, the inspiration was X-ray photographs taken to diagnose the illness of Picasso’s mistress, Fernande Olivier. “Superposed on a background of planes, her body lies open to reveal pelvic hip bones made up of geometrical shapes: forms reduced to geometry—the aesthetic of Cubism, inspired by modern science.”17 From the arts, to medicine, and particularly through astronomy and space exploration, X-rays have profoundly altered human culture.

Dark Force

The Hubble Space Telescope has touched every area of astronomy. But some of its key contributions have profoundly shaped our view of the universe. One of the original hopes for the Hubble was to extend the Key Project, which measured the local or current expansion rate, and measure the entire expansion history of the universe over cosmic time. Hubble’s sensitivity and resolution allow it to observe supernovae in distant galaxies. A Type Ia supernova is a white dwarf star that detonates as a result of mass steadily siphoned onto it from a massive companion star. When the white dwarf exceeds the Chandrasekhar limit, which we encountered in the last chapter, it collapses and then explodes as a supernova. This well-regulated process means it’s a “standard bomb” with an intrinsic brightness that doesn’t vary from one supernova to another by more than 15 percent. At peak brightness, the supernova rivals its parent galaxy in brightness (figure 11.4), so these explosions can be seen to distances of 10 billion light-years or more.39
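Because the intrinsic brightness is known, the distance follows from the inverse-square law. Here is a minimal sketch of that reasoning, with an assumed peak luminosity and a hypothetical measured flux (neither number comes from this chapter), and ignoring the cosmological subtleties in defining distance:

```python
import math

# Inverse-square law: flux = L / (4 * pi * d^2), so d = sqrt(L / (4 * pi * flux)).
# Both numbers below are illustrative assumptions: a Type Ia peak luminosity of
# roughly 5 billion Suns is a rough textbook value, and the flux is hypothetical.
L_SUN = 3.828e26                  # solar luminosity, watts
LIGHT_YEAR = 9.461e15             # meters

peak_luminosity = 5e9 * L_SUN     # assumed "standard bomb" brightness
measured_flux = 7e-17             # hypothetical measured flux, W/m^2

distance = math.sqrt(peak_luminosity / (4 * math.pi * measured_flux))
print(f"distance ≈ {distance / LIGHT_YEAR / 1e9:.1f} billion light-years")
```

With these made-up numbers the supernova sits about 5 billion light-years away; a fainter measured flux pushes the inferred distance farther out.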

In the mid-1990s, two groups studying supernovae saw something utterly unexpected.40 In an expanding universe, the effect of normal matter and dark matter is to slow down the expansion rate. By looking back in time with more and more distant supernovae, these researchers had expected to see supernovae appearing slightly brighter than expected for a constant expansion rate. The reasoning is that deceleration reduces the distance between


Figure 11.4. Although the Hubble Space Telescope is relatively small by modern standards, its sharp imaging and sensitive instruments allow it to see supernovas at distances of billions of light-years. When they die, these stars rival the brightness of their surrounding galaxies, enabling the distances to those galaxies to be measured (NASA/STScI/P. Garnavich).

us and a supernova relative to constant expansion, making it appear brighter. Instead, they saw the opposite effect: the distant supernovae were dimmer than expected. The interpretation was that the distance to the dying star was larger than expected. Rather than decelerating, the universe has been accelerating! Cosmic acceleration is a very puzzling effect, because it implies a force acting in opposition to gravity. No such force is known in physics, so the cause of the phenomenon was called “dark energy,” where that phrase is really just a placeholder for ignorance. Dark energy seems to have the character of the cosmological constant, the term that Einstein added to the solutions of his equations of general relativity to suppress natural expansion (and which he later called the greatest blunder of his life). It’s new and fundamental physics.

The discovery of cosmic acceleration was based on images and spectra taken with ground-based telescopes. The Hubble Space Telescope didn’t make the initial discovery. But confirming the result, extending the measurements to higher redshift, and putting constraints on the nature of dark energy—all of those have been essential contributions. Hubble’s depth has been used to trace the expansion history back over two thirds of the age of the universe. The data show that, looking back more than 5 billion years, acceleration gives way to deceleration.41 Astronomers can now apportion the two major components of the universe: dark matter and dark energy. Dark energy accounts for 68 percent, dark matter accounts for 27 percent, diffuse and hot gas in intergalactic space is about 4.5 percent, and all the stars in all of the galaxies in the observable universe amount to only 0.4 percent of the cosmic “pie.” Dark forces govern the universe.

Imagine the expanding universe with a brake and an accelerator. The “driver” isn’t very competent, so they press both pedals at the same time. Dark matter is the brake because its gravity slows the expansion. Dark energy is the accelerator. In the first two thirds of the expansion history, dark matter dominates and the expansion slows with time. But the effect of dark matter weakens, because its density, and hence its gravitational pull, goes down as the universe expands, while dark energy has a constant strength in both time and space. So the pressure on the brake eases while the accelerator is pressed the same amount, and about 5 billion years ago all galaxies started to separate at ever-increasing rates. Some cosmologists consider it an unexplained coincidence that dark energy and dark matter—two mysterious entities with fundamentally different behavior—happen to have roughly the same strength and crossed paths relatively recently in cosmic time. This is the only time in cosmic history they are close to equal strength; for most of the early history of the universe, dark matter was utterly dominant and forever into the future dark energy will dominate. This coincidence only sharpens the enigma.
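The crossover epoch can be estimated in a few lines. Matter density dilutes as the cube of the scale factor while dark energy stays constant, and acceleration begins once the matter density drops below twice the dark energy density. The sketch below uses assumed round density parameters and an assumed Hubble constant, so the answer is only indicative; it lands at roughly six billion years ago, in the same ballpark as the figure quoted above.

```python
import numpy as np

# In a flat universe, matter density dilutes as a**-3 while dark energy stays
# constant; acceleration begins when rho_matter drops below 2 * rho_dark_energy.
# The density parameters and Hubble constant are assumed round numbers.
omega_m, omega_de = 0.3, 0.7
H0 = 70.0 / 978.0                       # 70 km/s/Mpc expressed in Gyr^-1

a_acc = (omega_m / (2 * omega_de)) ** (1 / 3)   # scale factor at onset
z_acc = 1 / a_acc - 1

# Lookback time: integrate dt = da / (a * H(a)) from the onset to today (a = 1).
da = (1.0 - a_acc) / 1_000_000
a = np.arange(a_acc + da / 2, 1.0, da)          # integration midpoints
H = H0 * np.sqrt(omega_m / a**3 + omega_de)
lookback = np.sum(da / (a * H))

print(f"acceleration began at z ≈ {z_acc:.2f}, about {lookback:.1f} billion years ago")
```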

Precision Cosmology

WMAP has not only put us “in tune” with the cosmos; it has refined and sharpened our view of the extraordinary event that created all matter and energy 13.8 billion years ago.

WMAP has taken quantities that were poorly known or only hinted at and turned them into well-measured cosmological parameters.58 The temperature is measured to a precision of a thousandth of a degree. Since space can be curved according to general relativity, the universe can act as a gigantic lens. To do this vast optics experiment we look at the microwaves that have been traveling across space for billions of years. The fundamental harmonic of the microwave radiation sets the size of the “spot.” Radiation from that typical spot size travels through space and the angular size can be magnified or de-magnified depending on whether space has positive or negative curvature, which is like the universe acting as either a convex or a concave lens. WMAP has shown that the spot size doesn’t change, so the universe is behaving like a smooth sheet of glass. The inference is that space is not curved; the universe is flat to a precision of 1 percent.59 This is just as expected from inflation.

WMAP has measured the mass and energy of the universe with unprecedented precision. The ratio of fundamental to first harmonic powers depends on the baryonic or normal matter content of the universe. Ordinary atoms only make up 4.6 percent of the universe, to within 0.1 percent. The strength of the second harmonic is sensitive to dark matter, and shows that dark matter is 23 percent of the universe, to within 1 percent.60 Knowing the mass and energy content of the universe, general relativity can be used to calculate the current age. It’s 13.73 billion years with a precision of 1 percent, or about 120 million years, though that precision depends on assuming that space is exactly flat (which is a good assumption). These numbers have been refined and slightly altered by Planck. The cosmic “pie” has dark slices and just a sliver of visible stuff (figure 12.4).

We’ve also been able to learn about the epoch when the first stars and galaxies formed using the WMAP data. If stars formed soon enough after recombination, they would have ionized the still-diffuse gas. That would have recreated the conditions where photons bounce off electrons, as was routinely the case before recombination. This late scattering imprints polarization on the microwave radiation, analogous to sunlight being polarized when it bounces off a water surface. Polarization requires a special direction, and that can only arise if photons travel relatively freely before interacting. Before recombination no polarization is expected. WMAP saw a polarization indicating that 20 percent of the photons were scattered by a sparse fog of ionized gas a couple of hundred million years after the big bang. This is surprising because astronomers didn’t expect the first stars to form that quickly.61

The sum of all WMAP’s improvements leaves little doubt that the universe began with a hot big bang. The model is described quite precisely and any competing idea would have to clear some very high bars of evidence to be viable. It’s extraordinary that we can know some attributes of the overall universe better than we know attributes of the Earth.


Figure 12.4. Observations of the microwave background radiation combined with ground-based observations determine the contents of the universe. Most of the universe is in the form of enigmatic dark matter and dark energy, with just a small component of the normal matter that comprises all stars, planets, and people (NASA/WMAP Science Team).

A Gentle Astrophysical Giant

In 1923, Wilhelm Röntgen died as one of the most celebrated physicists of his time. A modest man, he had declined to take out a patent on his discovery, wanting X-rays to be used for the benefit of humankind. He also refused to name them after himself and donated the money he won from his Nobel Prize to his university.

That same year Subrahmanyan Chandrasekhar started high school in a small town in southern India. Chandra, as he was universally known, would grow into another great but modest scientific figure. Chandra means “moon” or “luminous” in Sanskrit. He was one of nine children, and although his family had modest means, they valued education; he was home schooled since the local public schools were inferior and his parents couldn’t afford a private school. His family hoped Chandra would follow his father into government service, but he was inspired by science and his mother supported his goal. In addition, he had a notable role model in his uncle, C. V. Raman, who went on to win the Nobel Prize in Physics in 1930 for the discovery of the inelastic scattering of light by molecules.18

Chandra’s family moved to Madras and he started at the university there, but was offered a scholarship to study at the University of Cambridge in England, which he accepted. He would never live in India again. At Cambridge he studied, and held his own, with some of the great scholars of the day. As a young man he fell into a controversy with Arthur Eddington, at the time the foremost stellar theorist in the world, which upset him greatly. Eddington refused to accept Chandra’s calculations on how a star might continue to collapse when it had no nuclear reactions to keep it puffed up. Chandra was a gentle man, unfailingly polite and courteous. In part due to this conflict, he chose to accept a position at the University of Chicago, where he spent the bulk of his career.

His name is associated with the Chandrasekhar limit, the theoretical upper bound on the mass of a white dwarf star, or about 1.4 times the Sun’s mass. Above this limit, the force of gravity overcomes all resistance due to inter-particle forces, and a stellar corpse will collapse into an extraordinary dark object, either a neutron star, or if there’s enough mass, a black hole. In the 1970s, he spent several years developing the detailed theory of black holes. Chandra was amazingly productive and diverse in his scientific interests. He wrote more than four hundred research papers and ten books, each on a different topic in astrophysics. For nineteen years, he was the editor-in-chief of The Astrophysical Journal, guiding it from a modest local publication to the international flagship journal in astronomy. He mentored more than fifty graduate students while at the University of Chicago, many of whom went on to be the leaders in their fields. In 1983, he was honored with the Nobel Prize in Physics for his work on the theory of stellar structure and evolution.
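The limit itself reduces to a compact formula. A minimal sketch, using the standard textbook coefficient rather than anything from this chapter:

```python
# The Chandrasekhar limit in its standard textbook form, M_Ch ≈ 5.83 / mu_e**2
# solar masses, where mu_e is the mean molecular weight per electron
# (mu_e = 2 for the helium, carbon, or oxygen inside a typical white dwarf).
# The coefficient 5.83 comes from stellar-structure texts, not from this book.
def chandrasekhar_mass(mu_e: float = 2.0) -> float:
    """Upper bound on a white dwarf's mass, in solar masses."""
    return 5.83 / mu_e**2

print(f"M_Ch ≈ {chandrasekhar_mass():.2f} solar masses")   # ≈ 1.46
```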

Through his long career, Chandra watched the infant field of X-ray astronomy mature. The telescopes flown in sounding rockets in the early 1960s were no bigger than Galileo’s spyglass. Just before its 1999 launch, NASA’s Advanced X-ray Astrophysics Facility was renamed the Chandra X-ray Observatory (CXO) in honor of this giant of twentieth-century astrophysics.19 In a span of less than four decades, Chandra improved on the sensitivity of the early sounding rockets by a factor of 100 million.20 The same sensitivity gain for optical telescopes, from Galileo’s best device to the Hubble Space Telescope, took four hundred years—ten times longer!

Dark Force Redux

Hubble has weighed in on the other major ingredient of the universe: dark matter. The existence of dark matter was first indicated in an observation of the Coma Cluster of galaxies by the maverick Caltech astronomer Fritz Zwicky. Zwicky measured redshifts, or radial velocities, for galaxies in the cluster and saw velocities that were much higher than anticipated. The galaxies were buzzing around like angry bees at over 500 kilometers per second, over a million miles per hour.42 The mass indicated by the stars in all the galaxies wasn’t enough to keep those galaxies bound in one region of space. In fact, visible mass was insufficient by a factor of ten. The Coma Cluster should be flying apart. But it’s not, so Zwicky hypothesized an invisible form of matter to hold it together. It had to be a form of matter that exerted gravity but didn’t radiate light or even interact with radiation—dark matter.

This observation was so odd and so unexpected that most astronomers simply set it aside. (It didn’t help that Zwicky was brilliant but extremely cantankerous, and he made a lot of enemies in the profession with his blunt and often rude comments.) But in the 1970s the rotation of spiral galaxies also indicated unseen forms of matter extending far beyond the visible stars. Astronomers revisited Zwicky’s observation and found that it was correct, and that other clusters of galaxies showed the same effect. Dark matter was a ubiquitous feature of the universe, on galactic scales and beyond galaxies in the space between them.

The Hubble Space Telescope has cemented the measurement of dark matter in a very elegant way. Einstein’s theory of general relativity says that mass bends light, and this prediction was confirmed in 1919 with starlight bending around the limb of the Sun during a solar eclipse. We’ve seen that both Cassini and Hipparcos had a hand in showing that relativity was correct. Zwicky realized that a galaxy could bend light by a detectable amount and he urged astronomers to search for the effect. Lensing was finally seen for the first time five years after Zwicky died, when the twin-image mirage of a single quasar was observed in 1979. In the mid-1980s, the phenomenon of lensed arcs was discovered with 4-meter telescopes on the ground. Each lensed arc is an image of a galaxy behind a rich cluster, magnified and distorted by the cluster. Cluster lensing has a very particular signature because the arcs are fragments of concentric circles centered on the cluster core. The beautiful part of the effect is that the gravitational deflection is sensitive to all mass, light or dark, so it’s a reliable way to weigh a cluster.

Hubble’s exquisite imaging has been used to study lensed arcs in dozens of clusters.43 Each background galaxy that gets imaged is a little experiment in gravitational optics (and a confirmation


Figure 11.5. The cluster of galaxies Abell 2218 acts as a gravitational lens, where its visible and dark matter distort and amplify the light of more distant, background galaxies. They are seen as tiny arcs in this Hubble image, many of which are concentric with the center of the cluster. The lensing effect is an affirmation of general relativity, where mass causes space-time curvature and the universe acts like a gigantic optics experiment (NASA/ESA/SM4 ERO Team).

of general relativity). In some clusters there are hundreds of little arcs, so the mass measurement of the cluster is very reliable—it’s like having an optics experiment with hundreds of light rays (figure 11.5). The observations clinch the fact that dark matter exceeds normal matter by a factor of six, or the ratio of the 27 percent to 4.5 percent contributions mentioned earlier. They also allow the dark matter to be mapped in the cluster, and the spatial distribution is critical information in helping decide the physical nature of this universal but mysterious substance.
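To see how arcs weigh a cluster, note that arcs of angular radius θ (the Einstein radius) imply a mass inside that radius. A minimal sketch of the standard relation, with purely illustrative distances and arc size (none of these numbers are from the book):

```python
import math

# A cluster that bends a background galaxy into arcs of angular radius theta_E
# has mass M = theta_E^2 * c^2 * D_l * D_s / (4 * G * D_ls) inside that radius.
# D_l, D_s, D_ls are angular-diameter distances to the lens, to the source,
# and between the two; all values below are illustrative assumptions.
G, C = 6.674e-11, 2.998e8            # SI units
M_SUN, GPC = 1.989e30, 3.086e25      # kg, meters
ARCSEC = 4.848e-6                    # radians

theta_E = 30 * ARCSEC                # hypothetical 30-arcsecond arcs
D_l, D_s, D_ls = 1.0 * GPC, 2.0 * GPC, 1.3 * GPC

mass = theta_E**2 * C**2 * D_l * D_s / (4 * G * D_ls)
print(f"mass inside the arcs ≈ {mass / M_SUN:.1e} solar masses")   # ~1e14
```

A mass of order one hundred trillion Suns for a cluster core, far more than its starlight accounts for, is exactly the dark matter discrepancy described above.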

Hubble has also played a pivotal role in locating a large type of dark object: massive black holes. Since the discovery of quasars in the 1960s, it has been clear that only a gravitational “engine” could generate so much energy from such a small volume. The energy source for quasars is thought to be a supermassive black hole, billions of times more massive than the Sun. As we saw in the last chapter, black holes aren’t always black. Nothing can escape from the event horizon, but a black hole will gather hot gas into a rotating disk around it. The disk siphons material into the black hole while the poles of the spinning black hole act as giant particle accelerators. The accretion disk emits huge amounts of ultraviolet and X-ray radiation while the jets emit radio waves. For a long time, astronomers thought that the galaxies surrounding the quasars were special because they housed such a black hole. Over the last fifteen years, Hubble has used its spectrographs to study stellar velocities near the centers of apparently normal galaxies near the Milky Way. Often, the data showed a sharp rise in star velocities near the nucleus of the galaxy.44 Calculating the density of matter that would generate such high velocities in such a small volume, the only possible explanation was an efficient gravitational engine—a black hole.
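The mass estimate behind that argument is simple Newtonian dynamics: stars orbiting at speed v at radius r require an enclosed mass of roughly v²r/G. A minimal sketch with hypothetical but representative numbers:

```python
# Order-of-magnitude "gravitational engine" estimate: if stars orbit at speed v
# at radius r from the nucleus, the enclosed mass is roughly M ≈ v**2 * r / G.
# The velocity and radius are hypothetical values chosen for illustration.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
PARSEC = 3.086e16      # m

v = 200e3              # stellar speeds of 200 km/s ...
r = 10 * PARSEC        # ... observed 10 parsecs from the galaxy's center

mass = v**2 * r / G
print(f"enclosed mass ≈ {mass / M_SUN:.1e} solar masses")   # ~1e8
```

Roughly a hundred million solar masses packed into a region a few tens of parsecs across: squarely in the range quoted in the next paragraph, and far too dense to be a star cluster.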

But these black holes were surprising in two ways. First, they weren’t as massive as the black holes that powered quasars. They ranged from 10 million to a few hundred million times the mass of the Sun. Second, the galaxies otherwise looked completely normal. Apparently, these black holes weren’t active even though there was plenty of “food,” or gaseous fuel in the center of these galaxies. As the data accumulated, it became clear that every galaxy has a black hole, with mass proportional to the mass of old stars in the galaxy, but those black holes must only be active a small fraction of the time and inert the rest of the time.45

This result has led to a paradigm shift in extragalactic astronomy. There’s no division between “normal” and “active” galaxies—all galaxies are active at some level but not all the time. Black holes are a standard component of a galaxy. There are a few intermediate mass black holes residing in globular clusters and in dwarf galaxies, filling in the mass range from a thousand to a million solar masses. Nature knows how to make black holes spanning a factor of a billion in mass! The low end of the range is collapsed stellar corpses the size of a city and the high end is behemoths in galaxies ten times larger than the Milky Way. Moreover, in their active phases black holes eject mass and quench star formation and generally alter the properties of the surrounding galaxy. The co-evolution of galaxies and black holes is now a major field of research; with its exquisite resolution and sensitivity Hubble is playing a big role. Some time in the first hundred million years or so after the big bang, the first stars and galaxies formed, and black holes began to grow at the same time.

Probing the Very Early Universe

The landscape in cosmology has shifted. Scientists no longer worry about the validity of the big bang model. It rests on a sturdy tripod of evidence. One leg is the expansion of the universe as traced by the recession of galaxies. Another leg is the cosmic abundance of light elements forged in the first three minutes, when the temperature was about a billion degrees. The third is the microwave background radiation. WMAP has advanced the diagnostic power of the third piece of evidence to a level that is unprecedented. The big bang model has so far passed all tests with flying colors.

The frontier of cosmology involves gaining better physical understanding of dark matter and dark energy, and pushing tests of the big bang to earlier eras. WMAP has “weighed” dark matter with better accuracy than ever before, and it has shown that dark energy is an inherent property of space-time itself (as with Einstein’s “cosmological constant”) rather than being a particle or field existing in space-time. The current frontier is the epoch of inflation, an incredible trillion trillion trillionth of a second after the big bang. Inflation was motivated by the unexpected flatness and smoothness of the universe, plus the lack of space-time glitches surviving to the present day, like monopoles and strings.

Inflation got early support from WMAP data showing that the strength of temperature variations was independent of scale. But in current theory, “inflation” is not one thing; it’s an umbrella for a bewildering array of ideas about the infant universe. Since it refers to a time when all forces of nature except gravity were unified, inflation models involve speculation as to the fundamental nature of matter. Superstrings are one such concept for matter, but almost all the theories have to incorporate gravity in some way, so the theory of the very early universe is proximate to the search for a “theory of everything.”

In 2003, the WMAP team announced a new measurement resulting from the polarization measurements and increasingly refined temperature maps. The temperature variations are generally equal in strength on all scales, but as data improved, it became clear they were not exactly equal on all scales. The sense of this deviation was exactly as predicted by the favored inflation models, and different from predictions by rival theories for the early expansion (exotic “cold big bang” models were also ruled out). In 2010, using WMAP’s seven-year data, the team even managed to use the data to confirm the helium abundance from the big bang, and they put constraints on the number of neutrino species and other exotic particles.62 Most gratifyingly, the data confirmed the fundamental correctness of the model of temperature variations as resulting from acoustic oscillations. The piper’s tune is better understood than ever before (plate 22).

Inflation is required by the data, and cosmologists are homing in on the correct model. Part of the inflation landscape is the fact that the physical universe—all that there is—is much larger than the visible universe—all we can see. Inflation also motivates the idea of the multiverse: parallel universes with wildly different properties that emerge from the quantum substrate that preceded the big bang.63 Our dreams of other worlds should now be expanded to encompass other universes filled with worlds. The ambition and scope of these theories is extraordinary, and it undoubtedly would have amazed Pythagoras to know how far we’ve taken his ideas of a universe based on mathematics and harmony.

The Chandra X-ray Observatory

Until the Chandra X-ray Observatory was launched, the best X-ray telescopes were only as capable as Galileo’s best optical telescope, with limited collecting area and very poor angular resolution. With Chandra, X-ray astronomers gained several orders of magnitude of sensitivity, and the ability to make images as sharp as a medium-size optical telescope. Chandra was the third of NASA’s four “Great Observatories.” The others are the Hubble Space Telescope, launched in 1990 and still doing frontier science, the Compton Gamma Ray Observatory, launched in 1991 and deorbited in 2000 after a successful mission, and the Spitzer Space Telescope, launched in 2003 and currently in the final “warm” phase of its mission since its liquid helium coolant ran out in 2009 (plate 17).

It took a while to open the X-ray window on the universe because the Earth’s atmosphere is completely opaque to X-rays. In the 1920s, scientists first proposed using versions of Robert Goddard’s rocket to explore the upper atmosphere and peer into space. However, this idea wasn’t realized until 1948, when a re-purposed V2 rocket was used to detect X-rays from the Sun.21 The next few decades saw the development of imaging capabilities for X-rays and new detector technologies, and X-ray astronomy tracked the maturation of the space program. The first X-ray source beyond the Solar System, Scorpius X-1 in the constellation of Scorpius, was detected by physicist Riccardo Giacconi in 1962.22 This intense source of high-energy radiation is a neutron star, the end result of the evolution of a massive star where gravity crushes the remnant to a state as dense as nuclear matter. The intense X-rays result from gas being drawn onto the neutron star from a companion and being heated violently enough to emit high-energy radiation.

Giacconi is another “giant” in astrophysics—as leading scientist for X-ray observatories from Uhuru in the 1970s to Chandra in the 1990s, first director of the Space Telescope Science Institute, and winner of the Nobel Prize in Physics in 2002. This last and ultimate accolade fittingly came almost exactly a century after the first Nobel Prize in Physics was awarded to Wilhelm Röntgen for the discovery of X-rays. Born in Genoa, Italy, Giacconi had his early life disrupted by the Second World War; as a high school student, he had to leave Milan during allied bombing raids. He returned to complete his degree and started his life as a scientist in the lab, working on nuclear reactions in cloud chambers. With a Fulbright Fellowship he moved to the United States and forged his career there. He had his hand in all the pivotal discoveries of X-ray astronomy: the identification of the first X-ray sources, characterization of black holes and close binary systems, the high-energy emission that emerges from the heart of some galaxies, and the nature of the diffuse X-rays that seem to come from all directions in the sky.23

The growth in the number of celestial X-ray sources gives a sense of how each new mission has advanced the capabilities: 160 sources in the final catalog of the Uhuru satellite in 1974, 840 in the HEAO A-1 catalog in 1984, nearly 8,000 from the combined Einstein and EXOSAT catalogs of 1990, and about 220,000 from the ROSAT catalog of 2000. Chandra has had a wider-field but lower sensitivity counterpart in the European XMM-Newton mission, also launched in 1999. These two X-ray satellites have detected a total of over a million X-ray sources.

Chandra was launched by the Space Shuttle Columbia into a highly elliptical orbit that takes it a third of the way to the Moon. At five tons, it was the most massive payload launched up to that time by the Space Shuttle. The elongated orbit gives it lots of “hang time” in the perfect vacuum of deep space and lets science be done for 55 hours of the 64-hour orbit. The price is that the spacecraft could not be serviced by the Shuttle even while the Shuttle was still flying, so the facility has to work perfectly. In fact, the only technical problem came soon after launch, when the imaging camera suffered radiation hits during passage through the Van Allen radiation belts; it is now stowed as the spacecraft passes through those regions. The spacecraft had a nominal five-year mission at time of launch, but it’s producing good science well into its second decade and is expected to last at least fifteen years.24

One of two different imaging instruments can be the target of incoming X-rays at any given time. The High Resolution Camera uses a vacuum and a strong electric field to convert each X-ray into an electron and then amplify each one into a cloud of electrons. The camera can make measurements as quickly as 100,000 times per second, allowing it to detect flares or monitor rapid variations. Chandra’s workhorse instrument is the Advanced CCD Imaging Spectrometer. With 10 CCDs, it has one hundred times better imaging capability than any previous X-ray instrument. Either of these cameras can have one of two gratings inserted in front of it, to enable high- and low-resolution spectroscopy. Spectroscopy at X-ray wavelengths is a bit different from optical spectroscopy. The spectral lines seen by Chandra are usually very high excitation lines of heavy elements like neon and iron, coming from gas that’s kept highly agitated by high-energy radiation or violent atomic collisions.25

There are three major differences between optical and X-ray detection of sources in the sky. The first is the way the radiation is gathered. X-rays falling directly on silvered glass have such high energy that they penetrate the surface and are absorbed, like tiny bullets. X-ray telescopes use a shallow angle of incidence, so that the photons bounce off the mirror like a stone skimming off water. Chandra uses a set of four concentric mirrors, six feet long and very slightly tapered so they almost look like nested cylinders. This method of gathering radiation makes it difficult to achieve a large collecting area. The second difference is the much higher energy of X-rays. Chandra measures photons in a range of energy from 0.1 to 10 keV, or 100 to 10,000 electron volts, which is a standard unit of measure for photons. On the electron volt energy scale, two numbers that bracket this range are 13.6 eV, the modest energy required to liberate an electron from a hydrogen atom and 511 keV, the rest mass energy of an electron. For reference, photons of visible light have wavelengths 10,000 times longer and energies 10,000 times lower.
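The energy-to-wavelength comparison is easy to check with the standard physics relation E = hc/λ, which works out numerically to a wavelength in nanometers of 1239.84 divided by the energy in electron volts. A minimal sketch (the constants are textbook values, not numbers from this book):

```python
# Converting photon energy to wavelength with E = h*c / lambda; numerically,
# wavelength in nanometers = 1239.84 / (energy in eV). These are textbook
# physical constants, not numbers from the book.
def wavelength_nm(energy_ev: float) -> float:
    """Photon wavelength in nanometers for a given energy in electron volts."""
    return 1239.84 / energy_ev

for e_kev in (0.1, 1.0, 10.0):        # Chandra's 0.1-10 keV band
    print(f"{e_kev:5.1f} keV -> {wavelength_nm(e_kev * 1000):7.3f} nm")
print(f"visible light, ~2.5 eV -> {wavelength_nm(2.5):.0f} nm")
```

Chandra’s band spans wavelengths of about 12 nanometers down to about a tenth of a nanometer, thousands of times shorter than the roughly 500-nanometer wavelength of visible light.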

With each photon packing such a punch, a typical astronomical source emits far fewer X-ray photons than visible photons, so each is very valuable. The goal in X-ray astronomy is to detect every photon individually. Very few photons are required to detect a source (this is helped by the fact that the “background rate” is low; X-rays are not created by miscellaneous or competing sources, so the X-ray sky is sparsely populated). In some of the deepest observations made by Chandra, two or three photons collected over a two-week period is enough evidence to declare that a source has been detected. There are papers in the research literature with more authors than photons!
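Why do a few photons suffice? Because the background is so low, the only question is whether chance alone could produce the counts. A minimal Poisson sketch, with a hypothetical background rate chosen purely for illustration:

```python
import math

# For a Poisson background with mean mu expected counts, the chance that the
# background alone produces k or more counts is 1 - sum_{i<k} e^-mu * mu^i / i!.
# The background rate below is a hypothetical value for illustration.
def p_at_least(k: int, mu: float) -> float:
    return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k))

mu_background = 0.05   # assumed background counts in the aperture over two weeks
for k in (1, 2, 3):
    print(f"P(>= {k} counts from background) = {p_at_least(k, mu_background):.2e}")
```

With an expected background of 0.05 counts, three detected photons have only about a two-in-a-hundred-thousand chance of being a fluke, which is why so few photons can count as a detection.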

Chandra unlocks the violent universe because celestial X-rays have high energy and can only be produced by extreme physical processes.26 The Sun and all normal stars are very weak X-ray sources because their cool outer regions produce thermal radiation peaking at visible wavelengths. It would take a gas at hundreds of thousands or millions of degrees to emit copious X-rays; diffuse gas with this very high temperature is distributed between galaxies. Another way X-rays can be made is when particles are accelerated to extremely high energies; they release the energy in a smooth spectrum that extends to X-rays and even gamma rays.27 Despite the million-plus X-ray sources that have been cataloged, there are thousands of times more optical sources, so the X-ray sky is relatively quiet. But many of those X-ray sources are extremely interesting because they’re situations where matter has been subject to extreme violence.

The Deepest Picture Ever Taken

To learn how galaxies formed, astronomers took their best facility and pushed it to the limit. The result was the deepest picture of the sky ever made. The Hubble Ultra Deep Field had its genesis in an earlier project called the Hubble Deep Field, and a bold decision by the second director of the Space Telescope Science Institute, Bob Williams. In 1995, Williams devoted the 10 percent of the observing time that he had at his discretion as director to a very deep multi-color image of a single patch of sky. To see why this was bold, let’s take a brief excursion into the culture and sociology of research astronomy.

For astronomers, the Hubble Space Telescope is the best game in town. The Hubble has made more discoveries and generated more papers than any other research facility. Every year, astronomers craft proposals for time, and there are many times more proposals than can get on the telescope. There’s a natural tendency to spread the bets. Directors had used their discretionary time similarly, giving most of it to small proposals to ease the over-subscription. Williams decided to put all his eggs in one basket by devoting 150 orbits—a huge allocation of time—to a single deep image. His decision changed the culture of astronomy. He let the research community decide where the telescope should be pointed for those 140 hours, and what color of filters should be used, but he insisted that the data be processed and made public immediately for any astronomer to use. The tiny region chosen in Ursa Major—1/28,000,000 of the sky—contained over three thousand galaxies, and the data paper for the Hubble Deep Field has been cited more than eight hundred times by other research papers.46 Also, this large investment of a scarce resource in one field persuaded infrared, radio, and X-ray astronomers to follow suit. Other leading telescopes put copious time into complementing the optical images with data across the electromagnetic spectrum, and in many cases those data were also made available quickly. An intriguing mix of competition and altruism spurred the research forward.

But what if the one field you pick isn’t typical for some reason? A premise of cosmology is that our location isn’t special or unusual. This assumption is called the cosmological principle; in practice it means showing that the universe is homogeneous and isotropic. Homogeneous means roughly the same at all locations, which is hard to prove since we can’t travel beyond the galaxy in a spaceship. Isotropic means the same in all directions. While there’s been no indication that we see different numbers and types of galaxies looking in one direction in the universe compared to any other, astronomers were nervous, so Bob Williams committed additional Hubble time to a small, deep field in the southern sky in 1998. Since then, deep fields have sprouted like mushrooms. When a new sensitive camera was installed during the fourth servicing mission in 2002, the Space Telescope Science Institute director at the time, Steve Beckwith, upped the ante by putting four hundred orbits, a million seconds of observing time distributed in four colors, into a tiny patch of sky in the direction of the Fornax constellation. That’s the Hubble Ultra Deep Field.47 The most distant light in this image has taken 95 percent of the age of the universe to reach us, so it comes from close to the “dawn” of light. To get a sense of this incredible image, hold a pin out at arm’s length; the head of the pin covers as much sky as the image produced by Hubble’s CCD camera. Astronomers harvested 10,000 galaxies from this minuscule bit of the sky. The faintest are five billion times fainter than the eye can see, and Hubble can only collect one photon per minute from them—think of trying to see a firefly on the Moon. Surveying the entire sky to this depth would take a million years of uninterrupted observing.

The numbers are staggering, and they can be used to derive some important information about the contents of the universe. Since the Ultra Deep Field covers 1/13,000,000 of the sky, the projected total number of galaxies in all directions is 130 billion. Each galaxy will on average contain 400 billion stars, so there are about 10²³ stars in the visible universe, or a hundred thousand billion billion. That’s a mind-bending number, but the real excitement comes from the implications for life. We’ve learned that planets are ubiquitous around Sun-like stars and expect to know soon about the abundance of habitable and Earth-like planets. The number of potential biological experiments in the universe may not be very different from the number of stars. What odds would you put on us being alone? When you look at the faint galaxies littering these deep fields, mere smudges of ancient light, it’s irresistible to imagine that in many of them or even all of them someone or something is looking across the canyons of time and space back at you.
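The bookkeeping behind those totals is a two-line calculation, using the numbers quoted in the text:

```python
# Scale the Ultra Deep Field's galaxy count up to the whole sky, then multiply
# by the average star count per galaxy (both figures as quoted in the text).
udf_galaxies = 10_000
sky_patches = 13_000_000          # the UDF covers 1/13,000,000 of the sky
stars_per_galaxy = 400e9

total_galaxies = udf_galaxies * sky_patches
total_stars = total_galaxies * stars_per_galaxy
print(f"projected galaxies: {total_galaxies:.1e}")   # 1.3e11, i.e., 130 billion
print(f"projected stars:    {total_stars:.1e}")      # ≈ 5e22, of order 10^23
```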

NEW HORIZONS, NEW WORLDS

At the beginning of this book we encountered the Greek philosophers who let their imaginations roam beyond the visible, everyday world. One was Democritus, who was forty years younger than Anaxagoras; apparently, they knew each other. Democritus was known as the “laughing philosopher” for his habit of seeing the lighter side of life and mocking human frailties. He developed an original idea of Leucippus into the atomic theory. According to Democritus, the physical world was made of microscopic, indivisible entities called atoms.1 The atoms were in constant motion and they could take up an infinite number of different arrangements; sensory attributes like hot, smooth, bitter, and acrid emerged from assemblages of atoms but were not properties of the atoms themselves.2 It’s a strikingly modern view of matter.3 By analogy, a beach looks smooth from a distance, but close up we see it’s actually composed of particles ranging from tiny sand grains to pebbles and boulders.

Democritus speculated in a similar way about the Milky Way, whose smoothness conceals the fact that it is composed of the combined light of myriad stars. Later, Archytas made a logical argument that there was space beyond the whirling crystalline spheres.4 Arriving at the edge of heaven, he imagined, and extending your arm or a staff, either it meets resistance and a physical boundary, or it extends beyond the edge. If the universe is contained, it must be contained within something larger. If we can reach beyond the edge, we define a new edge, and that must continue without limit. In this way he argued that space must be infinite.5 The Greek imagination leapt downward to the invisibly small and upward to the unknowably large. By the late twentieth century, scientists knew that the span of scales from the size of a proton to the size of the observable universe was 42 powers of ten.

Human imagination, however, is not so neatly bounded. We can easily imagine what is—a phenomenon that’s allowed by the laws of nature—as well as what isn’t—a phenomenon that violates laws of physics as we know them. There’s even a third, in-between category—a situation that doesn’t violate any laws of nature but which doesn’t occur in our universe.6 In the early twenty-first century, scientists are once again pushing the boundaries of physical explanation, small and large. So particle theorists dream of nine-dimensional vibrating strings deep within every subatomic particle—worlds within the world—and cosmologists dream of an ensemble of space-times with wildly different properties, of which our universe is just one example—worlds beyond the world.

On Earth, the horizon is the farthest you can see because of the curvature of the surface of the planet. The distance is surprisingly small; standing in a small boat on a large body of water you could only see three miles in any direction. Sailors in antiquity were familiar with the slow disappearance of a ship as it sailed away, until only the tip of the mast was visible.7 This led them to speculate and search for exotic lands that might lie “beyond the horizon.” Astronomy also has the concept of a horizon as a limit to our view or to our knowledge. The event horizon emerges from the general theory of relativity. Einstein’s formulation of gravity is geometric; instead of the linear and absolute space and time of Newton’s theory, Einstein made an explicit connection between space and time and posited that mass curves space. A compact enough object will distort space-time sufficiently to form a black hole, and the event horizon is the surface that seals a black hole off from the rest of the universe.8 It’s not a physical barrier. Rather, it’s an information membrane; radiation and matter can pass inward but not outward. Black holes are enigmatic because the region inside the event horizon lies beyond the scrutiny of observations. Cosmology has its own version of this idea, which is complicated by the expansion of the universe. The “particle” horizon for the universe divides events into those we can or can’t see at the present time. The universe had an origin, so there are regions from which light has not had time to reach us in the 13.8 billion years since the big bang. If we’re patient, light from ever more distant regions will reach us, so the observable universe grows with time. Meanwhile, the “event” horizon of the universe is the boundary between events that are visible at some time or another and events that aren’t visible at any time.9 Standard big bang cosmology predicts the existence of space-time beyond our horizon, possibly containing many star systems in addition to the 10²³ visible through our telescopes. There are many worlds beyond view, perhaps more than we can imagine.
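The three-mile figure follows from simple geometry: for an observer at height h above a sphere of radius R, the horizon lies at a distance of about the square root of 2Rh. A minimal sketch, where the eye height is an assumed value:

```python
import math

# Distance to the horizon for an observer whose eyes are height h above a
# sphere of radius R: d ≈ sqrt(2 * R * h) when h is much smaller than R.
# The eye height is an assumed value for someone standing in a small boat.
R_EARTH = 6.371e6     # meters
eye_height = 2.0      # meters above the waterline

d = math.sqrt(2 * R_EARTH * eye_height)
print(f"horizon ≈ {d / 1000:.1f} km ≈ {d / 1609.34:.1f} miles")   # ~3 miles
```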


Observational astronomy is a young activity. After tens of thousands of years of naked-eye observing, the telescope is just four hundred years old. Space astronomy is only fifty years into development, and the cost of launching a big glass into orbit means that the Hubble Space Telescope isn’t even among the fifty largest optical telescopes.10 In what follows we summarize what the future might hold for space science and astronomy, in terms of the missions recently launched or those on the near horizon, a few years from launch. The mid-horizon is five or ten years from now, where the uncertain funding landscape and the high cost of missions make the view very indistinct. The far horizon can depend on technologies not yet developed or perfected and is beyond view. Any projection of more than a few decades is in the realm of speculation and imagination. To close this book, we consider three pairs of missions that promise to advance our knowledge of other worlds—two within the Solar System, two looking at nearby stellar systems, and two studying the distant universe.

The heir to Viking and the Mars Exploration Rovers is the Mars Science Laboratory, or MSL.11 Following the tradition established with Spirit and Opportunity, NASA asked students to submit essays to give MSL a name. The winner, from among over nine thousand entries, was sixth grader Clara Ma, who wrote: “Curiosity is an everlasting flame that burns in everyone’s mind. It makes me get out of bed in the morning and wonder what surprises life will throw at me that day.”12 And so the most sophisticated robot ever built was named Curiosity. Clara Ma got to inscribe her name on the metallic skin of the rover before it was bundled up for launch. Over a million people worldwide are part of the mission in a smaller way, by having their names added digitally to a microchip onboard. The public is clearly engaged in this Mars rover. In the months leading up to launch, Curiosity had over 30,000 followers on Twitter (plate 23).

The Mars Science Laboratory is a mission with a high degree of difficulty. Technical problems forced it to slip from a launch window in 2009 to the next one in late 2011, and meanwhile the budget ballooned to over $2.5 billion by the time of launch on November 26, 2011. Choosing a single place to land on Mars and answer profound questions about the entire planet was difficult; mission planners held a series of five workshops to whittle sixty possible sites down to four, and they let the decision float as long as they could until selecting Gale Crater. It’s like playing roulette and “betting the farm” on a single number and a single spin of the wheel. The stakes are enormous—imagine trying to identify a single place on Earth that would be representative of all terrestrial geology and biology. Site selection was made a lot easier by maps from the Mars Reconnaissance Orbiter that show features and surface rocks as small as a sofa. It hinged on engineering and safety constraints like having a navigable terrain and avoiding high latitudes where less energy is available to run the rover. Beyond that, the key landing site requirement was habitability: evidence for the presence of surface water in the past.13 It’s hoped that Gale Crater will show evidence of once having been a shallow lake bed.

Pathfinder, which crawled over Mars in 1997, was little bigger than a child’s radio-controlled toy car. Spirit and Opportunity were the size of golf carts. Curiosity is like a small SUV. It’s ten feet long, nine feet wide, seven feet tall, and it weighs a ton.14 With that large mass, NASA couldn’t use the “bouncing airbag” landing method of previous rovers, where retro rockets slowed the lander and the payload sprouted airbags on all sides to protect it as it bounced to a lazy halt in the gentle Martian gravity. MSL used a procedure challenging enough to have flight engineers reaching for the Tums when the spacecraft started its final maneuvers 150 million miles from Earth with no real-time control possible. It steered through a series of S-shaped curves to lose speed, similar to those used by astronauts in landing the Space Shuttle. The target landing area is twelve miles across; that sounds large but it’s five times smaller than for any previous lander. A parachute slowed the descent for three minutes, then the spacecraft jettisoned its heat shield, leaving the rover exposed inside an aeroshell, attached to a “sky crane” mechanism. Retro rockets on the upper rim of the aeroshell further slowed the descent and then the aeroshell and parachute were jettisoned. At that point, the sky crane became the descent vehicle, with its own retro rockets guiding it toward the surface. About a hundred meters above the surface the sky crane slowed to a hover, and it lowered the rover to the surface on three cables and an electrical umbilical cord. The rover made a soft “wheels-down” landing and the connections were released (and would have been severed as a fallback if the release mechanism had failed). The sky crane flew off to crash land at a safe distance.15 Landing was set for August 6, 2012. The rover then readied itself for two years of exploring the red planet. Whew.

That was the plan. And the outcome: everything worked flawlessly. All the scary outcomes and potential disasters were avoided and the spacecraft touched down gently near the edge of Gale Crater on August 6, 2012. Hundreds of engineers cheered, and the millions who watched online shared the pride of the team in their technical feat. In fact, the landing worked so well that NASA has decided to use the technology again to launch another rover in 2020, with an entirely different set of scientific instruments onboard.

Curiosity has a science payload five times heavier than any previous Mars mission. There are ten instruments; eight have U.S. investigators, and one each comes from Russia and Spain. Curiosity has a titanium robotic arm with two joints at the shoulder, one at the elbow, and two at the wrist. The arm can extend seven feet from the rover, and it’s powerful enough to pulverize rocks while being delicate enough to drop an aspirin tablet into a thimble. The arm is versatile.16 It has a hand lens imager that can see details smaller than the width of a human hair, it has a spectrometer that can identify elements, it has a geologist’s rock brush, and it has tools for scooping, sieving, and delivering rock and soil samples to instruments within the rover. A set of three instruments called “Sample Analysis at Mars” will analyze the atmosphere and samples collected by the arm, with an emphasis on the detection of organic molecules and isotopes that trace the history of water in the rocks.17 The arm will deliver samples to another instrument that uses X-ray spectroscopy and fluorescence methods to analyze the complex mixture of minerals in a typical rock or sample of soil. Engineers took instruments that would fill a living room on Earth and squeezed them into a space the size of a microwave oven on Curiosity.

Film director James Cameron lobbied hard for a high-resolution 3D zoom camera to be included in the mission. The option for 3D imaging had been cut for budgetary reasons in 2007, but NASA was savvy enough to recognize a compelling public relations angle, so Cameron was appointed as a Mastcam co-investigator and informally as a “public engagement co-investigator” and the option was reinstated. However, in early 2011 the idea was shelved because there wasn’t enough time to fully test the cameras before launch.18 Curiosity lost the potential for cinematic 3D imaging with a director’s eye, but retained its fixed-focal-length cameras, which should return crisp 3D images for the duration of the mission. The mast cameras will view Mars from eye level, giving the public a sense of roaming on a distant world themselves. Nearly compensating for the loss of Cameron’s cinematography is a high-tech instrument called ChemCam, which uses laser pulses to vaporize rocks at a distance of up to seven meters.19 It then performs a chemical analysis of the vapor, returning results within seconds. A “Star Wars” capability like that will undoubtedly capture the public imagination.

If Curiosity’s landing site is likened to a crime scene where damage was caused by the action of water, Curiosity is trying to identify the culprit long after it has left the scene. It will tell us beyond any reasonable doubt whether at least one part of this small, arid world was once wet and hospitable for life. Curiosity isn’t designed to find life. It’s not designed to detect DNA or other essential biological molecules, and it can’t look for fossils in the rocks it studies.20 Rather, it’s designed to detect organic compounds and provide an inventory of life’s essential elements: carbon, hydrogen, nitrogen, oxygen, sulfur, and phosphorus. Viking’s tantalizing verdict on life of “not proven” resonates among planetary scientists, so they’re cautious in setting expectations for Curiosity, but the improvements will be dramatic. Unlike the Vikings, Curiosity will be at a site almost certain to have been wet in the past. It can roam widely for its samples; the Vikings could only gather soil their arms could reach. Curiosity will analyze the interiors of rocks, and it will be extraordinarily sensitive, able to detect organic compounds at a level of one part in a billion. It will test the hypothesis that perchlorate can mask the detection of organics. It will decide whether low levels of Martian methane are more likely to be geological or biological in origin. It will even study the pattern of carbon atoms in any organic material it detects. Dominance of even or odd numbers of carbon atoms, as opposed to random mixtures of each, would be evidence of the repetitious subunits seen in biological assembly.21 Early results indicate that the Martian soil is chemically complex and potentially life-bearing.

And then we’ll have to wait. The middle horizon for Solar System exploration is cloudy.22 As impressive as Curiosity is, there’s only so much that can be learned from a small payload of instruments shipped to the Martian surface. If chunks of Mars could be brought back to Earth, they could be analyzed in exhaustive detail, molecule by molecule. So sample return has long been the “Holy Grail” of Mars exploration.23 Unfortunately, it will be difficult and expensive. The current plan is for two rovers to go to the same site on Mars and drill down as much as six feet for samples, then “cache” them in a sealed storage unit. A rocket would then be required to boost the samples into Mars orbit, and finally a third mission would ferry them to Earth. Just the first part of this procedure will cost at least $2.5 billion, and we wouldn’t have the samples in our hands until 2022 or even later.

The alternative is equally exciting: a study of “water worlds” in the outer Solar System. NASA and ESA combined forces to draw up plans for a flagship mission to interesting targets far from the Earth. There were two concepts: an orbiter to study the Jovian moons Europa and Ganymede, and another to study Saturn’s moon Titan. After a “shootout” in 2009, it was announced that the Jupiter mission would go first. The Europa Jupiter System Mission,24 or Laplace, would launch no earlier than 2020, reach the Jovian system in 2028, and consist of two orbiters: one built by NASA to study Europa and Io and another built by ESA to study Ganymede and Callisto. Just NASA’s part of this project has a price tag of $4.7 billion, so funding is not yet guaranteed.25 If all goes well, 420 years after Galileo first noted Jupiter’s largest moons, they will be observed in unprecedented detail by robotic spacecraft.

These missions could cement the idea that “habitable worlds” include not just Earth-like planets but also the large moons of giant planets. Mars lies beyond the edge of the traditional habitable zone, where water can be liquid on the surface of a planet. The outer Solar System planets are miniature versions of the Sun in chemical composition, and so uninteresting for potential biology. However, large moons like Europa and Ganymede almost certainly have watery oceans under crusts of ice and rock. Tidal and geological heating provide a localized energy source, so all the ingredients for biology are there. NASA’s orbiter will measure the three-dimensional distribution of ice and water on Europa and determine whether the icy crust has compounds relevant to prebiotic chemistry.26 ESA’s orbiter will do the same for the enigmatic moon Ganymede, the largest in the Solar System.27 The biological “real estate” of the cosmos could easily be dominated by moons like these, languishing far from their stars’ warming rays.

The second pair of missions is exploring new worlds farther from home, across swaths of the Milky Way galaxy. When the first planet beyond the Solar System was discovered in 1995, it marked the dramatic end of decades of searching and frustration. A planet like Jupiter shines hundreds of millions of times fainter in reflected light than its parent star, and from afar it appears very close to the star, like a firefly lost in a floodlight’s glare. Astronomers therefore used stealth to discover exoplanets. They monitored the spectra of stars like the Sun and looked for Doppler shifts caused by giant planets tugging on the star.28 This method has been an unqualified success; the number of known exoplanets doubles every eighteen months, and more than three thousand have been discovered.29 It’s a major step in the continuing Copernican Revolution: planets form throughout the galaxy as a natural consequence of star formation.

Timing is everything in science. It’s often fruitless to be too far ahead of your time, but being slow to innovate means being left in the dust. Bill Borucki hit the sweet spot with his idea of a different way to find exoplanets. Working at NASA Ames Research Center, he realized that a planet transiting across its star would dim it slightly, by an amount equal to the ratio of the area of the planet to the area of the star. It was a statistical method, since even if all stars had planets, only a small fraction of them would be oriented so that the planet passed in front of the star. He realized that with the stability of the space environment, it would be possible not only to detect the 1 percent dimming caused by a Jupiter transit but to detect the 0.01 percent dimming of an Earth transit (figure 13.1). This was the chance to find other worlds like ours in distant space.30
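
To see where Borucki’s numbers come from, here is a back-of-the-envelope sketch (using round values for the radii of Jupiter and the Earth relative to the Sun; the geometric probability is a standard result not spelled out in the text). The transit depth is the ratio of areas, and the chance that a randomly oriented orbit produces a transit is roughly the stellar radius divided by the orbital radius:

\[
\delta = \left(\frac{R_p}{R_\star}\right)^2: \qquad
\left(\frac{R_{\rm Jup}}{R_\odot}\right)^2 \approx (0.10)^2 \approx 1\%, \qquad
\left(\frac{R_\oplus}{R_\odot}\right)^2 \approx \left(\frac{1}{109}\right)^2 \approx 0.01\%;
\]
\[
p_{\rm transit} \approx \frac{R_\star}{a} \approx \frac{0.005\ {\rm AU}}{1\ {\rm AU}} \approx 0.5\% \quad \text{for an Earth analog},
\]

which is why a transit survey must monitor very large numbers of stars to build a meaningful census.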

Borucki and his team pitched the project to NASA Headquarters in 1992, but it was rejected as being technically too difficult. In 1994 they tried again, but this time it was rejected as being too expensive. In 1996, and then again in 1998, the proposal was rejected on technical grounds, even though lab work had been done on the proof of concept. By the time the project was finally given the go-ahead as a NASA Discovery class mission in 2001, exoplanets had been discovered and the first transits had been detected from the ground.31 Persistence had paid off.

Kepler was launched in March 2009 and almost immediately showed it had the sensitivity to detect Earths. It is using a one-meter mirror to “stare” at about 150,000 stars in the direction of the constellation Cygnus. Designed to gather data for 3.5 years, its mission was extended in early 2012 through 2016, long enough for it to succeed in its core mission: conducting a census of Earth-like planets transiting Sun-like stars. Early in 2013, the Kepler team announced results from the first two years of data. The results were spectacular: over 2,700 high-probability candidates (plate 24), more than quadrupling the number of known exoplanets.32 Nearly 350 of these were similar to the Earth in size, and fifty-four were in the habitable zones of their stars, ten of which were Earth-sized. The team projected that 15 percent of stars would host Earths and 43 percent would host multiple planets. The estimated number of habitable worlds in the Milky Way is a hundred million. The imagination struggles to contemplate what might exist on so many potentially living worlds.

Kepler is looking at stars within a few hundred light-years, our celestial backyard. Meanwhile, astronomers have their sights set on mapping larger swaths of our stellar system. Due for launch by the European Space Agency in 2013, Gaia is the heir to Hipparcos, measuring all stars in the Milky Way down to a level a million times fainter than the naked eye can see. The heart of Gaia is a pair of telescopes and a two-ton optical camera, feeding a focal plane tiled with over a hundred electronic detectors. Kepler has a camera with 100 million pixels; Gaia’s camera has a billion.33 Hipparcos studied 100,000 stars; Gaia will chart the positions, colors, and brightness levels of a billion stars. Not only that, it will measure each star a hundred times, for a total of a hundred billion observations. Like WMAP, Gaia will be at the L2 Lagrange point a million miles from Earth, gently spinning to scan the sky during its five-year mission.34 The result will be a stereoscopic 3D map of our corner of the Milky Way galaxy.

Gaia will detect planets, although that’s not its primary objective. It will carry out a full census of Jupiters within 150 light-years, not by looking for a Doppler shift or an eclipse of the parent star but by actually seeing the star wobble as it is tugged to and fro by the planet. This is a truly minuscule effect: the Sun pivots around a point near its own edge due to the influence of Jupiter, and Gaia will be looking for similar motions in stars hundreds of trillions of miles away. It’s like trying to see a penny pirouetting about its edge on the surface of the Moon. As many as 50,000 exoplanets may be discovered this way, compared to the current census of a few thousand candidates. Gaia will also detect tens of thousands of dim, warm worlds called brown dwarfs, which represent the “twilight” state of stars not massive enough to initiate nuclear fusion. It will also be able to detect dwarf planets like Pluto in the remote reaches of the Solar System. Such planets are distinctive and interesting geological worlds in their own right.
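
As a rough check on the scale of the effect (taking Jupiter’s mass as roughly 1/1,000 that of the Sun and its orbit as 5.2 AU): the star circles the center of mass of the system, so its wobble is the planet’s orbit shrunk by the mass ratio, and the angle to be measured is that wobble divided by the star’s distance:

\[
a_\star = \frac{M_p}{M_\star}\,a_p \approx \frac{5.2\ {\rm AU}}{1047} \approx 0.005\ {\rm AU} \approx 1\,R_\odot;
\]
\[
\theta \approx \frac{a_\star}{d} \approx \frac{0.005\ {\rm AU}}{46\ {\rm pc}} \approx 10^{-4}\ \text{arcseconds} \quad (150\ \text{light-years} \approx 46\ {\rm pc}),
\]

about a hundred microarcseconds, within reach of Gaia’s microarcsecond-class astrometry.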

Gaia represents our “coming of age” in the Milky Way. It also provides a bridge to the last pair of missions we will consider: since the motions of the oldest stars in our galaxy were imparted at birth, they represent a “frozen record” of how the galaxy formed. Current theories suggest that large galaxies were assembled from the mergers of smaller galaxies. Most of these mergers occurred billions of years ago, but traces remain in the form of streams of stars threading the halo of the galaxy like strands of spaghetti.35 This type of galactic archaeology is a vital part of understanding how the universe turned from a smooth, hot gas into a cold void sprinkled with “island universes” of stars.

Cosmology is a vibrant field, spurred by new observational capabilities, computer simulations, and increasingly well-tested theories of gravity and the big bang origin of the universe. The mid-horizon in cosmology is defined by the James Webb Space Telescope (JWST), an ambitious successor to the Hubble Space Telescope.36 Hubble’s mirror is 2.4 meters in diameter, limited by the size of payload the Space Shuttle could heft into low Earth orbit. JWST will gather seven times more light with its 6.5-meter mirror, even though it will be launched on an Ariane 5 rocket, which has a maximum payload diameter of 4 meters (figure 13.2). This clever trick is accomplished by a design in which the segments of the telescope unfold after it is deployed, like flower petals, adding greatly to the engineering challenge. JWST will also be located at L2, the increasingly crowded “watering hole” of many astrophysics missions. The telescope and its instruments will be too far away to be serviced by astronauts, so the stakes are high and everyone involved in the project feels the pressure. JWST’s cost has ballooned to over $8 billion and the launch date has slipped several times, with a current estimate of 2018. Although the project has broad support from the astronomical community, the high cost and repeated delays have brought it close to cancellation. NASA can only afford one flagship planetary science mission like MSL at a time, and it can only afford one flagship astronomy mission at a time. Until JWST is operating, no other big concept can get traction and a funding start. Astronomers struggle to maintain solidarity under such conditions.

Figure 13.2. The James Webb Space Telescope is the successor to the Hubble Space Telescope, due for launch in 2018. It will be located a million miles from the Earth and will unfold in orbit to become the largest telescope ever deployed in space. Its primary goal is to detect “first light” in the universe, when the first stars and galaxies formed, over 13 billion years ago (NASA/JWST).
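
The factor of seven follows directly from the ratio of collecting areas (a quick check using the mirror diameters quoted above, ignoring the small losses from the gaps between JWST’s hexagonal segments):

\[
\frac{A_{\rm JWST}}{A_{\rm HST}} = \left(\frac{6.5\ {\rm m}}{2.4\ {\rm m}}\right)^2 \approx 7.3.
\]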

Since JWST is a general-purpose telescope, it will contribute to the study of exoplanets as well as the more remote universe. One of its instruments can take spectra of the feeble light reflected by a planet next to a remote star. In the infrared, the exoplanet will be ten to twenty times brighter relative to the star than at visible wavelengths. Astronomers will take the most Earth-like planets and search for spectral features due to oxygen and ozone. These two gases are telltale indicators of life on Earth, and if they’re seen in abundance in the atmospheres of exoplanets, it will be evidence of photosynthetic organisms elsewhere. The observations are extremely challenging, requiring a hundred hours or more per target.37 But if they succeed, the detection of biomarkers will be a dramatic step in our quest to find a world like our own.

However, the core mission of JWST is to detect “first light.” First light is the time in the youthful universe when gravity has had time to congeal the first structures from the rapidly expanding and cooling gas. The big bang was unimaginably hot: a cauldron intense enough to melt the four forces of nature into one super-force. By 400,000 years after the big bang, the temperature had dropped to 3000 K and the universe was glowing dull red as the fog lifted and the photons of the radiation from the big bang began to travel freely through space. The “Dark Ages” began.38 Theorists are not sure of the exact timing, but they think that it took another hundred million years or so for small variations in density to grow enough for pockets of gas to collapse. This would have been when the universe was at room temperature, a hundred times smaller, and a million times denser than it is today. At this point the Dark Ages ended. It’s likely that stars formed first and went through a number of cycles of birth and death before they agglomerated into galaxies. The first stars were probably massive, 80 to 100 times the mass of the Sun, and they died quickly and violently, leaving behind black holes. In this strange universe no worlds like the Earth existed. The universe was made of hydrogen and helium; it would take generations of stars to live and die before enough heavy elements were created for planets and biology to be possible.
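
Those numbers hang together through simple scaling relations (a sketch, anchored to today’s microwave background temperature of 2.7 K): the radiation temperature rises in proportion to how much smaller the universe was, and the matter density rises as the cube of that factor:

\[
T = T_0\,(1+z), \qquad \rho = \rho_0\,(1+z)^3;
\]
\[
1+z \approx 1100: \quad T \approx 3000\ {\rm K} \quad (\text{the fog lifts, 400,000 years in});
\]
\[
1+z \approx 100: \quad T \approx 270\ {\rm K}\ (\text{room temperature}), \qquad \rho \approx 10^6\,\rho_0.
\]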

Even JWST will probably not be able to detect the first stars, but it should be able to detect the first galaxies. Its superior capabilities are required because the Hubble Space Telescope has almost exhausted its light grasp as it looks for the first light. Less than 500 million years after the big bang, light travels so far and for so long to reach us that cosmic expansion stretches visible photons to near infrared wavelengths. Distant light slides off the red end of the spectrum and becomes invisible. In a sense, JWST is really looking for “first heat.”
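
The stretching follows from the definition of cosmological redshift (a sketch; z ≈ 10 corresponds to light emitted roughly 450 to 500 million years after the big bang):

\[
\lambda_{\rm obs} = (1+z)\,\lambda_{\rm emit}: \qquad
121.6\ {\rm nm} \times 11 \approx 1.3\ \mu{\rm m}, \qquad
400\ {\rm nm} \times 11 \approx 4.4\ \mu{\rm m},
\]

so even ultraviolet light from that era arrives in the near infrared, and visible light lands at several microns: beyond Hubble’s reach, but squarely in JWST’s territory.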

To take us beyond the horizon, we look to a mission that is currently taking data. Planck is the successor to WMAP, launched by the European Space Agency in 2009. It has a primary mirror about the size of a dining room table, made of carbon fiber reinforced plastic coated with a thin layer of aluminum and weighing only twenty-eight pounds. The detectors are cooled to just above absolute zero. Planck improves on WMAP by a factor of three in angular resolution, a factor of ten in wavelength coverage, and a factor of ten in sensitivity. All of this means Planck will extract fifteen times more information from the cosmic microwave background than WMAP. Nobody doubts the big bang anymore, so Planck is taking the theory to its limits, testing inflation, the notion that the universe underwent a phase of exponential expansion an amazing 10⁻³⁵ seconds after the big bang.39 By making measurements with a precision of a millionth of a degree, the Planck mission may be able to determine how long inflation lasted and detect the signature of primordial gravitational waves imprinted on the microwaves. Or it may not provide any support for inflation, driving scientists back to the drawing board.

If inflation occurred, quantum fluctuations from the dawn of time became the seeds for galaxy formation. If inflation occurred, there are regions of space and time far beyond the reach of our best telescopes. If inflation occurred, one quantum fluctuation grew to become the cold and old universe around us. The initial conditions of the big bang can accommodate many quantum fluctuations, each of which gives rise to a universe with randomly different physical properties. In most of those universes the properties would be unlikely to support long-lived stars and the formation of heavy elements and life. Inflation therefore supports the “multiverse” concept, where there are myriad other universes with different properties, unobservable by us, and we happen to live in one of the rare universes hospitable for life and intelligent observers.40

In fact, the multiverse offers so many possibilities that everything that could possibly happen actually will happen somewhere. There might not just be clones of Earth beyond the horizon, but clones of us as well. Quantum genesis and the multiverse take us to a point where science becomes as wild and bold as any science fiction—a vision of worlds without end.

Heart of Darkness

Black holes are black. That seems like a self-evident statement. Nothing can escape from inside the event horizon of a black hole, yet the horizon isn’t a physical barrier or boundary; it’s an information membrane, defining the region from which no particle or radiation can escape. Black holes are the ultimate expressions of general relativity, where mass curves space so much that a region is pinched off from view.28
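
The size of the pinched-off region comes from a single formula of general relativity (standard, though not spelled out in the text): the Schwarzschild radius, about 3 kilometers per solar mass:

\[
R_s = \frac{2GM}{c^2} \approx 3\ {\rm km} \times \frac{M}{M_\odot},
\]

so the collapsed core of a twenty-solar-mass star, of the kind discussed below, has an event horizon only about 60 kilometers in radius.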

Black holes are the final states of massive stars. Every star is in a lifelong battle between the forces of light and darkness. The light comes from fusion reactions creating pressure that pushes outward, while the dark is the implacable force of gravity pulling inward. In the end gravity always wins. When a star twenty or more times the Sun’s mass exhausts its nuclear fuel, the core collapses into a state so dense that nothing can escape, not even light. An isolated black hole would be black and undetectable. However, more than half of all stars are in binary or multiple systems, and that’s also true of the most massive stars, the ones that collapse catastrophically to form black holes. The rotation of the star is amplified in its newly compact state, so the black hole spins very fast. As material from a companion star is pulled onto the black hole, it forms a disk of gas, like water swirling into the drain of a bathtub.29 The disk is very hot, tens of thousands of degrees, and it glows in ultraviolet radiation and X-rays. Some hot plasma is accelerated along the poles of the spinning black hole, where it emits X-rays and gamma rays. So while a black hole is black, gas from a companion can be heated into pyrotechnic activity as it falls into the black hole (figure 10.2). The accretion process is well enough understood theoretically that X-ray signatures can be used to identify black holes. Some of the radiation comes from no more than 100 kilometers from the event horizon. The Chandra Observatory has played a vital role in this work.30
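
A rough way to see why infalling gas shines in ultraviolet and X-rays is Wien’s displacement law, which links a thermal glow’s peak wavelength to its temperature (a sketch; the inner-disk temperature is a typical value from accretion theory, not from the text):

\[
\lambda_{\rm peak} = \frac{2.9\times 10^{-3}\ {\rm m\,K}}{T}: \qquad
T \approx 5\times 10^4\ {\rm K} \Rightarrow \lambda_{\rm peak} \approx 60\ {\rm nm}\ (\text{ultraviolet}); \qquad
T \approx 10^7\ {\rm K} \Rightarrow \lambda_{\rm peak} \approx 0.3\ {\rm nm}\ (\text{X-rays}).
\]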

Chandra has the sensitivity to detect stellar black holes hundreds of light-years away. Only about twenty binary systems have masses measured well enough to be sure the dark companion is a black hole, but X-ray observations can be used to identify black holes with fairly high reliability. The examples studied with X-ray telescopes are the brightest representatives of a population of about 100 million black holes in the Milky Way.31

Figure 10.2. Black holes do not emit any energy or particles, but when a black hole is part of a binary system, gas is drawn from the companion onto an accretion disk that glows in X-rays. The binary orbit gives the mass of the black hole. The image is an artist’s impression, while the inset shows an X-ray spectrum which diagnoses the temperature of the plasma near the black hole and gives clues to the black hole’s properties (NASA/CXC/M. Weiss/J. Miller).

X-ray observations have also pushed the limits of our understanding of black holes. In 2007, a research team used Chandra to discover a black hole in M33, a nearby spiral galaxy. The black hole was sixteen times the mass of the Sun, making it the most massive stellar black hole known.32 Moreover, it was in a binary orbit with a huge star seventy times the Sun’s mass. The formation mechanism that placed the black hole in such a tight embrace with its companion is unknown. This is the first black hole in a binary system that shows eclipses, which provides unusually accurate measurements of its mass and other properties. The massive companion will also die as a black hole, so future astronomers will be able to gaze on a binary black hole in which energy is lost as gravitational radiation and the two black holes dance a death spiral as they coalesce into a single beast.33