In 1911, a British scientist named C. T. R. Wilson was studying cloud formations by tramping regularly to the summit of Ben Nevis, a famously damp Scottish mountain, when it occurred to him that there must be an easier way. Back in the Cavendish Lab in Cambridge he built an artificial cloud chamber—a simple device in which he could cool and moisten the air, creating a reasonable model of a cloud in laboratory conditions.

The device worked very well, but had an additional, unexpected benefit. When he accelerated an alpha particle through the chamber to seed his make-believe clouds, it left a visible trail—like the contrails of a passing airliner. He had just invented the particle detector. It provided convincing evidence that subatomic particles did indeed exist.

Eventually two other Cavendish scientists invented a more powerful proton-beam device, while in California Ernest Lawrence at Berkeley produced his famous and impressive cyclotron, or atom-smasher as such devices were long excitingly known. All of these contraptions worked— and indeed still work—on more or less the same principle, the idea being to accelerate a proton or other charged particle to an extremely high speed along a track (sometimes circular, sometimes linear), then bang it into another particle and see what flies off. That’s why they were called atom-smashers. It wasn’t science at its subtlest, but it was generally effective.

As physicists built bigger and more ambitious machines, they began to find or postulate particles or particle families seemingly without number: muons, pions, hyperons, mesons, K-mesons, Higgs bosons, intermediate vector bosons, baryons, tachyons. Even physicists began to grow a little uncomfortable. “Young man,” Enrico Fermi replied when a student asked him the name of a particular particle, “if I could remember the names of these particles, I would have been a botanist.”

Today accelerators have names that sound like something Flash Gordon would use in battle: the Super Proton Synchrotron, the Large Electron-Positron Collider, the Large Hadron Collider, the Relativistic Heavy Ion Collider. Using huge amounts of energy (some operate only at night so that people in neighbouring towns don’t have to witness their lights fading when the apparatus is fired up), they can whip particles into such a state of liveliness that a single electron can do 47,000 laps around a 7-kilometre tunnel in little more than a second. Fears have been raised that in their enthusiasm scientists might inadvertently create a black hole or even something called “strange quarks,” which could, theoretically, interact with other subatomic particles and propagate uncontrollably. If you are reading this, that hasn’t happened.

An interior view of a cloud chamber, taken in 1932, providing early confirmation of the existence of neutrons. The white line shows the track of a proton recoiling after being struck by a neutron. The neutron, lacking an electric charge, leaves no track itself. (credit 11.2)

Finding particles takes a certain amount of concentration. They are not just tiny and swift but often also tantalizingly evanescent. Particles can come into being and be gone again in as little as 0.000000000000000000000001 of a second (10⁻²⁴ seconds). Even the most sluggish of unstable particles hang around for no more than 0.0000001 of a second (10⁻⁷ seconds).

Some particles are almost ludicrously slippery. Every second the Earth is visited by ten thousand trillion trillion tiny, all-but-massless neutrinos (mostly shot out by the nuclear broilings of the Sun) and virtually all of them pass right through the planet and everything that is on it, including you and me, as if it weren’t there. To trap just a few of them, scientists need tanks holding up to 57,000 cubic metres of heavy water (that is, water with a relative abundance of deuterium in it) in underground chambers (old mines usually) where they can’t be interfered with by other types of radiation.

Very occasionally, a passing neutrino will bang into one of the atomic nuclei in the water and produce a little puff of energy. Scientists count the puffs and by such means take us very slightly closer to understanding the fundamental properties of the universe. In 1998, Japanese observers reported that neutrinos do have mass, but not a great deal—about one ten-millionth that of an electron.

(credit 11.3)

What it really takes to find particles these days is money and lots of it. There is a curious inverse relationship in modern physics between the tininess of the thing being sought and the scale of the facilities required to do the searching. CERN, the European Organization for Nuclear Research, is like a little city. Straddling the border of France and Switzerland, it employs three thousand people and occupies a site that is measured in square kilometres. CERN boasts a string of magnets that weigh more than the Eiffel Tower and an underground tunnel some 26 kilometres around.

Breaking up atoms, as James Trefil has noted, is easy; you do it each time you switch on a fluorescent light. Breaking up atomic nuclei, however, requires quite a lot of money and a generous supply of electricity. Getting down to the level of quarks—the particles that make up particles—requires still more: trillions of electron volts of energy and the budget of a small Central American state. CERN’s new Large Hadron Collider, scheduled to begin operations in 2005, will achieve 14 trillion electron volts of energy and cost something over $1.5 billion to construct.1

But these numbers are as nothing compared with what could have been achieved by, and spent upon, the vast and now unfortunately never-to-be Superconducting Supercollider, which began construction near Waxahachie, Texas, in the 1980s, before experiencing a supercollision of its own with the United States Congress. The intention of the collider was to let scientists probe “the ultimate nature of matter,” as it is always put, by recreating as nearly as possible the conditions in the universe during its first ten thousand billionths of a second. The plan was to fling particles through a tunnel 84 kilometres long, achieving a truly staggering 99 trillion electron volts of energy. It was a grand scheme, but would have cost $8 billion to build (a figure that eventually rose to $10 billion) and hundreds of millions of dollars a year to run.

In perhaps the finest example in history of pouring money into a hole in the ground, Congress spent $2 billion on the project, then cancelled it in 1993 after 22 kilometres of tunnel had been dug. So Texas now boasts the most expensive hole in the universe. The site is, I am told by my friend Jeff Guinn of the Fort Worth Star-Telegram, “essentially a vast, cleared field dotted along the circumference by a series of disappointed small towns.”

Since the supercollider debacle, particle physicists have set their sights a little lower, but even comparatively modest projects can be quite breathtakingly costly when compared with, well, almost anything. A proposed neutrino observatory at the old Homestake Mine in Lead, South Dakota, would cost $500 million to build—this in a mine that is already dug—before even looking at the annual running costs. There would also be $281 million of “general conversion costs.” A particle accelerator at Fermilab in Illinois, meanwhile, cost $260 million merely to refit.

Particle physics, in short, is a hugely expensive enterprise—but it is a productive one. Today the particle count is well over 150, with a further 100 or so suspected, but unfortunately, in the words of Richard Feynman, “it is very difficult to understand the relationships of all these particles, and what nature wants them for, or what the connections are from one to another.” Inevitably, each time we manage to unlock a box, we find that there is another locked box inside. Some people think there are particles called tachyons, which can travel faster than the speed of light. Others long to find gravitons—the seat of gravity. At what point we reach the irreducible bottom is not easy to say. Carl Sagan in Cosmos raised the possibility that if you travelled downwards into an electron, you might find that it contained a universe of its own, recalling all those science-fiction stories of the 1950s. “Within it, organized into the local equivalent of galaxies and smaller structures, are an immense number of other, much tinier elementary particles, which are themselves universes at the next level and so on forever—an infinite downward regression, universes within universes, endlessly. And upward as well.”

A photograph, taken at CERN in 1986 in a piece of apparatus known as a streamer chamber, showing 220 charged subatomic particles bursting out when a high-energy oxygen nucleus collides with a nucleus in a lead target. (credit 11.4)

For most of us it is a world that surpasses understanding. To read even an elementary guide to particle physics nowadays you must find your way through lexical thickets such as this: “The charged pion and antipion decay respectively into a muon plus antineutrino and an antimuon plus neutrino with an average lifetime of 2.603 × 10⁻⁸ seconds, the neutral pion decays into two photons with an average lifetime of about 0.8 × 10⁻¹⁶ seconds, and the muon and antimuon decay respectively into…” And so it runs on—and this from a book for the general reader by one of the (normally) most lucid of interpreters, Steven Weinberg.
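The quoted lifetimes are at least easy to unpack: an “average lifetime” means the particle population decays exponentially, so after one lifetime only 1/e of it (about 37 per cent) remains. A minimal sketch, using the charged-pion lifetime from the passage above (the function name is just illustrative):

```python
# Exponential decay: N(t) = N0 * exp(-t / tau), where tau is the
# "average lifetime" quoted in the text for the charged pion.
from math import exp

TAU_CHARGED_PION = 2.603e-8  # seconds, from the quoted passage

def fraction_surviving(t, tau=TAU_CHARGED_PION):
    """Fraction of an initial pion population still undecayed at time t."""
    return exp(-t / tau)

# After exactly one average lifetime, 1/e of the pions remain.
print(round(fraction_surviving(TAU_CHARGED_PION), 3))  # -> 0.368
```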

Construction work in 1975 on CERN’s Super Proton Synchrotron accelerator, running for seven kilometres in a circle under French and Swiss countryside. (credit 11.5)

In the 1960s, in an attempt to bring just a little simplicity to matters, the Caltech physicist Murray Gell-Mann invented a new class of particles, essentially, in the words of Steven Weinberg, “to restore some economy to the multitude of hadrons”—a collective term used by physicists for protons, neutrons and other particles governed by the strong nuclear force. Gell-Mann’s theory was that all hadrons were made up of still smaller, even more fundamental particles. His colleague Richard Feynman wanted to call these new basic particles partons, as in Dolly, but was over-ruled. Instead they became known as quarks.

Explaining the unexplainable: the American theoretical physicist and Nobel laureate Richard Feynman lecturing on the vastly complex theory of quarks, the fundamental constituents of all the particles that are governed by the strong nuclear force. (credit 11.6)

Gell-Mann took the name from a line in Finnegans Wake: “Three quarks for Muster Mark!” (Discriminating physicists rhyme the word with storks, not larks, even though the latter is almost certainly the pronunciation Joyce had in mind.) The fundamental simplicity of quarks was not long-lived. As they became better understood it was necessary to introduce subdivisions. Although quarks are much too small to have colour or taste or any other physical characteristics we would recognize, they became clumped into six categories—up, down, strange, charm, top and bottom—which physicists oddly refer to as their “flavours,” and these are further divided into the colours red, green and blue. (One suspects that it was not altogether coincidental that these terms were first applied in California during the age of psychedelia.)

Eventually out of all this emerged what is called the Standard Model, which is essentially a sort of parts kit for the subatomic world. The Standard Model consists of six quarks, six leptons, five known bosons and a postulated sixth, the Higgs boson (named for a Scottish scientist, Peter Higgs), plus three of the four physical forces: the strong and weak nuclear forces and electromagnetism.

The arrangement essentially is that among the basic building blocks of matter are quarks; these are held together by particles called gluons; and together quarks and gluons form protons and neutrons, the stuff of the atom’s nucleus. The lepton family includes electrons and neutrinos. Quarks and leptons together are called fermions. Bosons (named for the Indian physicist S. N. Bose) are particles that produce and carry forces, and include photons and gluons. The Higgs boson may or may not actually exist; it was invented simply as a way of endowing particles with mass.
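The “parts kit” can be set down quite literally as plain data. The groupings below follow the text; the individual lepton and boson names beyond electrons, neutrinos, photons and gluons are standard physics rather than anything stated above, so treat this as an illustrative inventory, not a physics library:

```python
# The Standard Model sketched as a literal parts kit. Counts match the
# text: six quarks, six leptons, five known bosons plus the postulated
# Higgs, and three of the four forces.
standard_model = {
    "quarks": ["up", "down", "strange", "charm", "top", "bottom"],
    "leptons": ["electron", "electron neutrino", "muon", "muon neutrino",
                "tau", "tau neutrino"],
    "bosons": ["photon", "gluon", "W+", "W-", "Z", "Higgs (postulated)"],
    "forces": ["strong nuclear", "weak nuclear", "electromagnetism"],
}

# Quarks and leptons together are the fermions, the matter particles.
fermions = standard_model["quarks"] + standard_model["leptons"]
print(len(standard_model["quarks"]), len(standard_model["leptons"]))  # -> 6 6
```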

It is all, as you can see, just a little unwieldy, but it is the simplest model that can explain all that happens in the world of particles. Most particle physicists feel, as Leon Lederman remarked in a 1985 television documentary, that the Standard Model lacks elegance and simplicity. “It is too complicated. It has too many arbitrary parameters,” Lederman said. “We don’t really see the creator twiddling twenty knobs to set twenty parameters to create the universe as we know it.” Physics is really nothing more than a search for ultimate simplicity, but so far all we have is a kind of elegant messiness—or as Lederman put it: “There is a deep feeling that the picture is not beautiful.”

Three of the twentieth century’s greatest physicists. Left: Murray Gell-Mann (credit 11.7a) Middle: Satyendra Bose (credit 11.7b) Right: Leon Lederman (credit 11.7c)

The Standard Model is not only ungainly but incomplete. For one thing, it has nothing at all to say about gravity. Search through the Standard Model as you will and you won’t find anything to explain why when you place a hat on a table it doesn’t float up to the ceiling. Nor, as we’ve just noted, can it explain mass. In order to give particles any mass at all we have to introduce the notional Higgs boson; whether it actually exists is a matter for twenty-first century physics. As Feynman cheerfully observed: “So we are stuck with a theory, and we do not know whether it is right or wrong, but we do know that it is a little wrong, or at least incomplete.”

In an attempt to draw everything together, physicists have come up with something called superstring theory. This postulates that all those little things like quarks and leptons that we had previously thought of as particles are actually “strings”—vibrating strands of energy that oscillate in eleven dimensions, consisting of the three we know already plus time and seven other dimensions that are, well, unknowable to us. The strings are very tiny—tiny enough to pass for point particles.

The Standard Model, the simplest scheme yet devised to convey the decidedly unsimple world of particles. The model constitutes “a kind of elegant messiness.” (credit 11.8)

By introducing extra dimensions, superstring theory enables physicists to pull together quantum laws and gravitational ones into one comparatively tidy package; but it also means that anything scientists say about the theory begins to sound worryingly like the sort of thoughts that would make you edge away if conveyed to you by a stranger on a park bench. Here, for example, is the physicist Michio Kaku explaining the structure of the universe from a superstring perspective:

The heterotic string consists of a closed string that has two types of vibrations, clockwise and counterclockwise, which are treated differently. The clockwise vibrations live in a ten-dimensional space. The counterclockwise live in a 26-dimensional space, of which 16 dimensions have been compactified. (We recall that in Kaluza’s original five-dimensional theory, the fifth dimension was compactified by being wrapped up into a circle.)

And so it goes, for some 350 pages.

String theory has further spawned something called M theory, which incorporates surfaces known as membranes—or simply branes to the hipper souls of the world of physics. This, I’m afraid, is the stop on the knowledge highway where most of us must get off. Here is a sentence from the New York Times, explaining this as simply as possible to a general audience:

The ekpyrotic process begins far in the indefinite past with a pair of flat empty branes sitting parallel to each other in a warped five-dimensional space … The two branes, which form the walls of the fifth dimension, could have popped out of nothingness as a quantum fluctuation in the even more distant past and then drifted apart.

No arguing with that. No understanding it either. Ekpyrotic, incidentally, comes from the Greek word for conflagration.

Andrew Strominger and Cumrun Vafa of Harvard jocularly demonstrate string theory, which holds that quarks resemble strings of energy that exist in multiple dimensions, many of them beyond the comprehension of humans. (credit 11.9)

Matters in physics have now reached such a pitch that, as Paul Davies noted in Nature, it is “almost impossible for the non-scientist to discriminate between the legitimately weird and the outright crackpot.” The question came interestingly to a head in the autumn of 2002 when two French physicists, twin brothers Igor and Grichka Bogdanov, produced a theory of ambitious density involving such concepts as “imaginary time” and the “Kubo–Martin–Schwinger condition,” and purporting to describe the nothingness that was the universe before the Big Bang—a period that was always assumed to be unknowable (since it predated the birth of physics and its properties).

Almost at once the Bogdanov theory excited debate among physicists as to whether it was twaddle, a work of genius or a hoax. “Scientifically, it’s clearly more or less complete nonsense,” Columbia University physicist Peter Woit told the New York Times, “but these days that doesn’t much distinguish it from a lot of the rest of the literature.”

Karl Popper, whom Steven Weinberg has called “the dean of modern philosophers of science,” once suggested that there may not in fact be an ultimate theory for physics—that, rather, every explanation may require a further explanation, producing “an infinite chain of more and more fundamental principles.” A rival possibility is that such knowledge may simply be beyond us. “So far, fortunately,” writes Weinberg in Dreams of a Final Theory, “we do not seem to be coming to the end of our intellectual resources.”

Saul Steinberg (credit 11.10)

Almost certainly this is an area that will see further developments of thought, and almost certainly again these thoughts will be beyond most of us. While physicists in the middle decades of the twentieth century were looking perplexedly into the world of the very small, astronomers were finding no less arresting an incompleteness of understanding in the universe at large.

When we last met Edwin Hubble, he had determined that nearly all the galaxies in our field of view are flying away from us, and that the speed and distance of this retreat are neatly proportional: the further away the galaxy, the faster it is moving. Hubble realized that this could be expressed with a simple equation, H₀ = v/d (where H₀ is the constant, v is the recessional velocity of a flying galaxy and d its distance away from us). H₀ has been known ever since as the Hubble constant and the whole as Hubble’s Law. Using his formula, Hubble calculated that the universe was about two billion years old, which was a little awkward because even by the late 1920s it was increasingly evident that many things within the universe—including, probably, the Earth itself—were older than that. Refining this figure has been an ongoing preoccupation of cosmology.
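The arithmetic behind the awkward two-billion-year figure is simple enough to sketch. If the expansion rate has always been the same, then everything was piled together roughly 1/H₀ ago. Hubble's own early estimate of the constant, about 500 kilometres per second per megaparsec, is an assumption not given in the text, but it is what produces his answer:

```python
# A naive expansion age from Hubble's Law: v = H0 * d, so running the
# film backwards, everything coincided about 1/H0 ago (assuming a
# constant expansion rate, which real cosmology does not).
KM_PER_MPC = 3.086e19        # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7   # seconds in one year

def age_in_billions_of_years(h0_km_s_mpc):
    """Convert a Hubble constant (km/s/Mpc) into a naive age 1/H0, in Gyr."""
    seconds = KM_PER_MPC / h0_km_s_mpc
    return seconds / SECONDS_PER_YEAR / 1e9

# Hubble's early value of ~500 km/s/Mpc gives the troublesome figure.
print(round(age_in_billions_of_years(500), 1))  # -> 2.0 billion years
```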

Edwin Hubble photographed shortly before his death in 1953. By measuring the speed at which galaxies are receding, Hubble in his later years came up with a formula known as Hubble’s Law, suggesting that the universe was about two billion years old. The figure is universally agreed to be wrong, but by how much is still undecided. (credit 11.11)

Almost the only thing constant about the Hubble constant has been the amount of disagreement over what value to give it. In 1956, astronomers discovered that Cepheid variables were more variable than they had thought; they came in two varieties, not one. This allowed them to rework their calculations and come up with a new age for the universe of between seven billion and twenty billion years—not terribly precise, but at least old enough, at last, to embrace the formation of the Earth.

In the years that followed there erupted a dispute that would run and run, between Allan Sandage, heir to Hubble at Mount Wilson, and Gérard de Vaucouleurs, a French-born astronomer based at the University of Texas. Sandage, after years of careful calculations, arrived at a value for the Hubble constant of 50, giving the universe an age of twenty billion years. De Vaucouleurs was equally certain that the Hubble constant was 100.2 This would mean that the universe was only half the size and age that Sandage believed—ten billion years. Matters took a further lurch into uncertainty when in 1994 a team from the Carnegie Observatories in California, using measurements from the Hubble Space Telescope, suggested that the universe could be as little as eight billion years old—an age even they conceded was younger than some of the stars within the universe. In February 2003, a team from NASA and the Goddard Space Flight Center in Maryland, using a new, far-reaching type of satellite called the Wilkinson Microwave Anisotropy Probe, announced with some confidence that the age of the universe is 13.7 billion years, give or take a hundred million years or so. There matters rest, at least for the moment.

The difficulty in making final determinations is that there are often acres of room for interpretation. Imagine standing in a field at night and trying to decide how far away two distant electric lights are. Using fairly straightforward tools of astronomy you can easily enough determine that the bulbs are of equal brightness and that one is, say, 50 per cent more distant than the other. But what you can’t be certain of is whether the nearer light is, let us say, a 58-watt bulb that is 37 metres away or a 61-watt light that is 36.5 metres away. On top of that you must make allowances for distortions caused by variations in the Earth’s atmosphere, by intergalactic dust, by contaminating light from foreground stars and many other factors. The upshot is that your computations are necessarily based on a series of nested assumptions, any of which could be a source of contention. There is also the problem that access to telescopes is always at a premium and historically measuring red shifts has been notably costly in telescope time. It could take all night to get a single exposure. In consequence, astronomers have sometimes been compelled (or willing) to base conclusions on notably scanty evidence. In cosmology, as the journalist Geoffrey Carr has suggested, we have “a mountain of theory built on a molehill of evidence.” Or as Martin Rees has put it: “Our present satisfaction [with our state of understanding] may reflect the paucity of the data rather than the excellence of the theory.”
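The two-bulbs ambiguity rests on the inverse-square law: apparent brightness falls off as the square of distance, so a slightly stronger bulb slightly further away looks almost the same. A quick sketch with the text's illustrative figures (which are examples, not real measurements) shows how close the two cases are:

```python
# Inverse-square law: the apparent brightness (flux) of a light source
# is its power spread over a sphere, flux = P / (4 * pi * d**2).
from math import pi

def flux(watts, metres):
    """Apparent brightness, in watts per square metre, at a distance."""
    return watts / (4 * pi * metres ** 2)

near = flux(58, 37.0)   # a 58-watt bulb at 37 metres
far = flux(61, 36.5)    # a 61-watt bulb at 36.5 metres

# The two fluxes agree to within about 10 per cent -- a difference
# easily swamped by atmospheric distortion and foreground contamination.
print(abs(near / far - 1) < 0.1)  # -> True
```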

This uncertainty applies, incidentally, to relatively nearby things as much as to the distant edges of the universe. As Donald Goldsmith notes, when astronomers say that the galaxy M87 is 60 million light years away, what they really mean (“but do not often stress to the general public”) is that it is somewhere between 40 million and 90 million light years away—not quite the same thing. For the universe at large, matters are naturally magnified. For all the éclat surrounding the latest pronouncements, we remain a long way from unanimity.

One interesting theory recently suggested is that the universe is not nearly as big as we thought; that when we peer into the distance some of the galaxies we see may simply be reflections, ghost images created by rebounded light.

A necessarily fanciful artist’s rendering of “dark matter,” which is invisible to us and yet is believed to account for 90 per cent, or more, of all the matter in the universe. Dark matter was first theorized in the 1930s by Fritz Zwicky, but was widely dismissed during his lifetime. (credit 11.12)

The fact is, there is a great deal, even at quite a fundamental level, that we don’t know—not least what the universe is made of. When scientists calculate the amount of matter needed to hold things together, they always come up desperately short. It appears that at least 90 per cent of the universe, and perhaps as much as 99 per cent, is composed of Fritz Zwicky’s “dark matter”—stuff that is by its nature invisible to us. It is slightly galling to think that we live in a universe that for the most part we can’t even see, but there you are. At least the names for the two main possible culprits are entertaining: they are said to be either WIMPs (for Weakly Interacting Massive Particles, which is to say specks of invisible matter left over from the Big Bang) or MACHOs (for MAssive Compact Halo Objects—really just another name for black holes, brown dwarfs and other very dim stars).

Dark objects known as MACHOs surround the Milky Way galaxy in one theorized version of the depths of space. Such objects would consist of brown dwarf stars, neutron stars, black holes and possibly other light-shy objects, and together would account for the missing mass of the universe. (credit 11.13)

Particle physicists have tended to favour the particle explanation of WIMPs, astrophysicists the stellar explanation of MACHOs. For a time MACHOs had the upper hand, but not nearly enough of them were detected, so sentiment swung back towards WIMPs—with the problem that no WIMP has ever been found. Because they are weakly interacting, they are (assuming they even exist) very hard to identify. Cosmic rays would cause too much interference, so scientists must go deep underground: one kilometre down, cosmic bombardment would be one-millionth what it is at the surface. But even when both candidates are added in, “two-thirds of the universe is still missing from the balance sheet,” as one commentator has put it. For the moment we might very well call them DUNNOS (for Dark Unknown Nonreflective Nondetectable Objects Somewhere).

Recent evidence suggests not only that the galaxies of the universe are racing away from us, but that they are doing so at a rate that is accelerating. This is counter to all expectations. It appears that the universe may be filled not only with dark matter, but with dark energy. Scientists sometimes also call it vacuum energy or quintessence. Whatever it is, it seems to be driving an expansion that no-one can altogether account for. The theory is that empty space isn’t so empty after all—that there are particles of matter and antimatter popping into existence and popping out again—and that these are pushing the universe outwards at an accelerating rate. Improbably enough, the one thing that resolves all this is Einstein’s cosmological constant—the little piece of maths he dropped into the General Theory of Relativity to stop the universe’s presumed expansion and that he called “the biggest blunder of my life.” It now appears that he may have got things right after all.

The upshot of all this is that we live in a universe whose age we can’t quite compute, surrounded by stars whose distances from us and each other we don’t altogether know, filled with matter we can’t identify, operating in conformance with physical laws whose properties we don’t truly understand.

(credit 11.14)

And on that rather unsettling note, let’s return to Planet Earth and consider something that we do understand—though by now you perhaps won’t be surprised to hear that we don’t understand it completely and what we do understand we haven’t understood for long.

1 There are practical side-effects to all this costly effort. The World Wide Web is a CERN offshoot. It was invented by a CERN scientist, Tim Berners-Lee, in 1989.

2 You are of course entitled to wonder what is meant exactly by “a constant of 50” or “a constant of 100.” The answer lies in astronomical units of measure. Except conversationally, astronomers don’t use light years. They use a distance called the parsec (a contraction of parallax and second), based on a universal measure called the stellar parallax and equivalent to 3.26 light years. Really big measures, like the size of a universe, are measured in megaparsecs: 1 megaparsec = 1 million parsecs. The constant is expressed in terms of kilometres per second per megaparsec. Thus when astronomers refer to a Hubble constant of 50, what they really mean is “50 kilometres per second per megaparsec.” For most of us that is of course an utterly meaningless measure; but then, with astronomical measures most distances are so huge as to be utterly meaningless.
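The footnote's "utterly meaningless measure" can at least be made concrete: a Hubble constant in kilometres per second per megaparsec converts directly into a naive expansion age of 1/H₀, which is exactly how Sandage's 50 became twenty billion years and de Vaucouleurs's 100 became ten. A minimal sketch of the conversion:

```python
# Converting a Hubble constant (km/s per megaparsec) into a naive
# expansion age 1/H0, assuming a constant expansion rate.
KM_PER_MPC = 3.086e19        # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7   # seconds in one year

def naive_age_gyr(h0_km_s_mpc):
    """Naive age of the universe, in billions of years, for a given H0."""
    return KM_PER_MPC / h0_km_s_mpc / SECONDS_PER_YEAR / 1e9

print(round(naive_age_gyr(50)))   # Sandage's value        -> 20
print(round(naive_age_gyr(100)))  # de Vaucouleurs's value -> 10
```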

Running for more than 800 miles, the San Andreas Fault in California is both the most famous and in places most visible of tectonic scars, the result of earlier lateral movements. Seen here as it crosses the Carrizo Plain near San Luis Obispo, it marks the point where two tectonic plates meet. For an outline of the Earth’s principal plates, see here. (credit 12.1)
