
A Short History of Nearly Everything (English Edition), by Bill Bryson

9    THE MIGHTY ATOM


WHILE EINSTEIN AND Hubble were productively unraveling the large-scale structure of the cosmos, others were struggling to understand something closer to hand but in its way just as remote: the tiny and ever-mysterious atom.

The great Caltech physicist Richard Feynman once observed that if you had to reduce scientific history to one important statement it would be “All things are made of atoms.” They are everywhere and they constitute every thing. Look around you. It is all atoms. Not just the solid things like walls and tables and sofas, but the air in between. And they are there in numbers that you really cannot conceive.

The basic working arrangement of atoms is the molecule (from the Latin for “little mass”).

A molecule is simply two or more atoms working together in a more or less stable arrangement: add two atoms of hydrogen to one of oxygen and you have a molecule of water.

Chemists tend to think in terms of molecules rather than elements in much the way that writers tend to think in terms of words and not letters, so it is molecules they count, and these are numerous to say the least. At sea level, at a temperature of 32 degrees Fahrenheit, one cubic centimeter of air (that is, a space about the size of a sugar cube) will contain 45 billion billion molecules. And they are in every single cubic centimeter you see around you. Think how many cubic centimeters there are in the world outside your window—how many sugar cubes it would take to fill that view. Then think how many it would take to build a universe.
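That billion-billion figure can be roughly checked with the ideal gas law, which gives the number of molecules per unit volume as n = P/(kT). Below is a minimal sketch, assuming standard sea-level pressure (101,325 Pa) and 32 degrees Fahrenheit; it lands on the same order of magnitude as the figure quoted above, tens of billion billion molecules in every sugar cube of air.

# Rough order-of-magnitude check of the molecules-per-sugar-cube claim,
# using the ideal gas law n = P / (k * T). The pressure value is an assumption.

k_B = 1.380649e-23   # Boltzmann constant, J/K
P = 101325.0         # assumed standard sea-level pressure, Pa
T = 273.15           # 32 degrees Fahrenheit, in kelvins

n_per_m3 = P / (k_B * T)      # molecules per cubic meter
n_per_cm3 = n_per_m3 * 1e-6   # molecules per cubic centimeter

print(f"{n_per_cm3:.1e} molecules per cubic centimeter")   # a few times 10**19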

Atoms, in short, are very abundant.

They are also fantastically durable. Because they are so long lived, atoms really get around.

Every atom you possess has almost certainly passed through several stars and been part of millions of organisms on its way to becoming you. We are each so atomically numerous and so vigorously recycled at death that a significant number of our atoms—up to a billion for each of us, it has been suggested—probably once belonged to Shakespeare. A billion more each came from Buddha and Genghis Khan and Beethoven, and any other historical figure you care to name. (The personages have to be historical, apparently, as it takes the atoms some decades to become thoroughly redistributed; however much you may wish it, you are not yet one with Elvis Presley.)

So we are all reincarnations—though short-lived ones. When we die our atoms will disassemble and move off to find new uses elsewhere—as part of a leaf or other human being or drop of dew. Atoms, however, go on practically forever. Nobody actually knows how long an atom can survive, but according to Martin Rees it is probably about 10³⁵ years—a number so big that even I am happy to express it in notation.

Above all, atoms are tiny—very tiny indeed. Half a million of them lined up shoulder to shoulder could hide behind a human hair. On such a scale an individual atom is essentially impossible to imagine, but we can of course try.

Start with a millimeter, which is a line this long: -. Now imagine that line divided into a thousand equal widths. Each of those widths is a micron. This is the scale of microorganisms.

A typical paramecium, for instance, is about two microns wide, 0.002 millimeters, which is really very small. If you wanted to see with your naked eye a paramecium swimming in a drop of water, you would have to enlarge the drop until it was some forty feet across.

However, if you wanted to see the atoms in the same drop, you would have to make the drop fifteen miles across.

Atoms, in other words, exist on a scale of minuteness of another order altogether. To get down to the scale of atoms, you would need to take each one of those micron slices and shave it into ten thousand finer widths. That’s the scale of an atom: one ten-millionth of a millimeter. It is a degree of slenderness way beyond the capacity of our imaginations, but you can get some idea of the proportions if you bear in mind that one atom is to the width of a millimeter line as the thickness of a sheet of paper is to the height of the Empire State Building.
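These comparisons can be sanity-checked with a few lines of arithmetic. The sketch below uses the text's one-ten-millionth-of-a-millimeter atom together with some assumed round figures of my own (a 50-micron hair, a 0.1 mm sheet of paper, a 443 m Empire State Building); the point is only that the ratios come out on the same order of magnitude.

# Order-of-magnitude check of the scale comparisons above. The hair width,
# paper thickness and building height are assumed typical values, not from the book.

atom_mm = 1e-7             # atom diameter: one ten-millionth of a millimeter
hair_mm = 0.05             # human hair width, ~50 microns (assumed)
paper_mm = 0.1             # sheet of paper thickness (assumed)
empire_state_mm = 443_000  # Empire State Building height, ~443 m (assumed)

print(hair_mm / atom_mm)            # ~500,000 atoms hide behind a hair
print(atom_mm / 1.0)                # atom vs. a one-millimeter line -> 1e-7
print(paper_mm / empire_state_mm)   # paper vs. the building         -> ~2e-7, same order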

It is of course the abundance and extreme durability of atoms that makes them so useful, and the tininess that makes them so hard to detect and understand. The realization that atoms are these three things—small, numerous, practically indestructible—and that all things are made from them first occurred not to Antoine-Laurent Lavoisier, as you might expect, or even to Henry Cavendish or Humphry Davy, but rather to a spare and lightly educated English Quaker named John Dalton, whom we first encountered in the chapter on chemistry.

Dalton was born in 1766 on the edge of the Lake District near Cockermouth to a family of poor but devout Quaker weavers. (Four years later the poet William Wordsworth would also join the world at Cockermouth.) He was an exceptionally bright student—so very bright indeed that at the improbably youthful age of twelve he was put in charge of the local Quaker school. This perhaps says as much about the school as about Dalton’s precocity, but perhaps not: we know from his diaries that at about this time he was reading Newton’s Principia in the original Latin and other works of a similarly challenging nature. At fifteen, still schoolmastering, he took a job in the nearby town of Kendal, and a decade after that he moved to Manchester, scarcely stirring from there for the remaining fifty years of his life. In Manchester he became something of an intellectual whirlwind, producing books and papers on subjects ranging from meteorology to grammar. Color blindness, a condition from which he suffered, was for a long time called Daltonism because of his studies. But it was a plump book called A New System of Chemical Philosophy, published in 1808, that established his reputation.

There, in a short chapter of just five pages (out of the book’s more than nine hundred), people of learning first encountered atoms in something approaching their modern conception. Dalton’s simple insight was that at the root of all matter are exceedingly tiny, irreducible particles. “We might as well attempt to introduce a new planet into the solar system or annihilate one already in existence, as to create or destroy a particle of hydrogen,” he wrote.

Neither the idea of atoms nor the term itself was exactly new. Both had been developed by the ancient Greeks. Dalton’s contribution was to consider the relative sizes and characters of these atoms and how they fit together. He knew, for instance, that hydrogen was the lightest element, so he gave it an atomic weight of one. He believed also that water consisted of seven parts of oxygen to one of hydrogen, and so he gave oxygen an atomic weight of seven. By such means was he able to arrive at the relative weights of the known elements. He wasn’t always terribly accurate—oxygen’s atomic weight is actually sixteen, not seven—but the principle was sound and formed the basis for all of modern chemistry and much of the rest of modern science.
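The logic here is a simple ratio: measure how much oxygen combines with a given amount of hydrogen, assume a formula for water, and the relative weight falls out. A small sketch, assuming (as Dalton did) a one-atom-to-one-atom formula for water, and then redoing the sum with the modern H2O formula and the roughly 8:1 mass ratio that measurement actually gives; the modern figures are standard chemistry, not from the book.

# Relative atomic weight = measured mass ratio, scaled by the assumed formula.
# Hydrogen is taken as weight 1 throughout, as Dalton took it.

def oxygen_weight(mass_ratio_o_to_h, hydrogens_per_oxygen):
    # total oxygen mass divided by number of oxygen atoms, with each hydrogen weighing 1
    return mass_ratio_o_to_h * hydrogens_per_oxygen

print(oxygen_weight(7, 1))   # Dalton: 7:1 mass ratio, water assumed to be HO   -> 7
print(oxygen_weight(8, 2))   # modern: ~8:1 mass ratio, water known to be H2O   -> 16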

The work made Dalton famous—albeit in a low-key, English Quaker sort of way. In 1826, the French chemist P. J. Pelletier traveled to Manchester to meet the atomic hero. Pelletier expected to find him attached to some grand institution, so he was astounded to discover him teaching elementary arithmetic to boys in a small school on a back street. According to the scientific historian E. J. Holmyard, a confused Pelletier, upon beholding the great man, stammered:

“Est-ce que j’ai l’honneur de m’addresser à Monsieur Dalton?” for he could hardly believe his eyes that this was the chemist of European fame, teaching a boy his first four rules. “Yes,” said the matter-of-fact Quaker. “Wilt thou sit down whilst I put this lad right about his arithmetic?”

Although Dalton tried to avoid all honors, he was elected to the Royal Society against his wishes, showered with medals, and given a handsome government pension. When he died in 1844, forty thousand people viewed the coffin, and the funeral cortege stretched for two miles. His entry in the Dictionary of National Biography is one of the longest, rivaled in length only by those of Darwin and Lyell among nineteenth-century men of science.

For a century after Dalton made his proposal, it remained entirely hypothetical, and a few eminent scientists—notably the Viennese physicist Ernst Mach, for whom is named the speed of sound—doubted the existence of atoms at all. “Atoms cannot be perceived by the senses . . . they are things of thought,” he wrote. The existence of atoms was so doubtfully held in the German-speaking world in particular that it was said to have played a part in the suicide of the great theoretical physicist, and atomic enthusiast, Ludwig Boltzmann in 1906.

It was Einstein who provided the first incontrovertible evidence of atoms’ existence with his paper on Brownian motion in 1905, but this attracted little attention and in any case Einstein was soon to become consumed with his work on general relativity. So the first real hero of the atomic age, if not the first personage on the scene, was Ernest Rutherford.

Rutherford was born in 1871 in the “back blocks” of New Zealand to parents who had emigrated from Scotland to raise a little flax and a lot of children (to paraphrase Steven Weinberg). Growing up in a remote part of a remote country, he was about as far from the mainstream of science as it was possible to be, but in 1895 he won a scholarship that took him to the Cavendish Laboratory at Cambridge University, which was about to become the hottest place in the world to do physics.

Physicists are notoriously scornful of scientists from other fields. When the wife of the great Austrian physicist Wolfgang Pauli left him for a chemist, he was staggered with disbelief. “Had she taken a bullfighter I would have understood,” he remarked in wonder to a friend. “But a chemist . . .”

It was a feeling Rutherford would have understood. “All science is either physics or stamp collecting,” he once said, in a line that has been used many times since. There is a certain engaging irony therefore that when he won the Nobel Prize in 1908, it was in chemistry, not physics.

Rutherford was a lucky man—lucky to be a genius, but even luckier to live at a time when physics and chemistry were so exciting and so compatible (his own sentiments notwithstanding). Never again would they quite so comfortably overlap.

For all his success, Rutherford was not an especially brilliant man and was actually pretty terrible at mathematics. Often during lectures he would get so lost in his own equations that he would give up halfway through and tell the students to work it out for themselves.

According to his longtime colleague James Chadwick, discoverer of the neutron, he wasn’t even particularly clever at experimentation. He was simply tenacious and open-minded. For brilliance he substituted shrewdness and a kind of daring. His mind, in the words of one biographer, was “always operating out towards the frontiers, as far as he could see, and that was a great deal further than most other men.” Confronted with an intractable problem, he was prepared to work at it harder and longer than most people and to be more receptive to unorthodox explanations. His greatest breakthrough came because he was prepared to spend immensely tedious hours sitting at a screen counting alpha particle scintillations, as they were known—the sort of work that would normally have been farmed out. He was one of the first to see—possibly the very first—that the power inherent in the atom could, if harnessed, make bombs powerful enough to “make this old world vanish in smoke.”

Physically he was big and booming, with a voice that made the timid shrink. Once when told that Rutherford was about to make a radio broadcast across the Atlantic, a colleague drily asked: “Why use radio?” He also had a huge amount of good-natured confidence. When someone remarked to him that he seemed always to be at the crest of a wave, he responded, “Well, after all, I made the wave, didn’t I?” C. P. Snow recalled how once in a Cambridge tailor’s he overheard Rutherford remark: “Every day I grow in girth. And in mentality.”

But both girth and fame were far ahead of him in 1895 when he fetched up at the Cavendish.[1] It was a singularly eventful period in science. In the year of his arrival in Cambridge, Wilhelm Roentgen discovered X rays at the University of Würzburg in Germany, and the next year Henri Becquerel discovered radioactivity. And the Cavendish itself was about to embark on a long period of greatness. In 1897, J. J. Thomson and colleagues would discover the electron there, in 1911 C. T. R. Wilson would produce the first particle detector there (as we shall see), and in 1932 James Chadwick would discover the neutron there. Further still in the future, James Watson and Francis Crick would discover the structure of DNA at the Cavendish in 1953.

In the beginning Rutherford worked on radio waves, and with some distinction—he managed to transmit a crisp signal more than a mile, a very reasonable achievement for the time—but gave it up when he was persuaded by a senior colleague that radio had little future.

On the whole, however, Rutherford didn’t thrive at the Cavendish. After three years there, feeling he was going nowhere, he took a post at McGill University in Montreal, and there he began his long and steady rise to greatness. By the time he received his Nobel Prize (for “investigations into the disintegration of the elements, and the chemistry of radioactive substances,” according to the official citation) he had moved on to Manchester University, and it was there, in fact, that he would do his most important work in determining the structure and nature of the atom.

[1] The name comes from the same Cavendishes who produced Henry. This one was William Cavendish, seventh Duke of Devonshire, who was a gifted mathematician and steel baron in Victorian England. In 1870, he gave the university £6,300 to build an experimental lab.

By the early twentieth century it was known that atoms were made of parts—Thomson’s discovery of the electron had established that—but it wasn’t known how many parts there were or how they fit together or what shape they took. Some physicists thought that atoms might be cube shaped, because cubes can be packed together so neatly without any wasted space. The more general view, however, was that an atom was more like a currant bun or a plum pudding: a dense, solid object that carried a positive charge but that was studded with negatively charged electrons, like the currants in a currant bun.

In 1910, Rutherford (assisted by his student Hans Geiger, who would later invent the radiation detector that bears his name) fired ionized helium atoms, or alpha particles, at a sheet of gold foil.[2] To Rutherford’s astonishment, some of the particles bounced back. It was as if, he said, he had fired a fifteen-inch shell at a sheet of paper and it rebounded into his lap.

This was just not supposed to happen. After considerable reflection he realized there could be only one possible explanation: the particles that bounced back were striking something small and dense at the heart of the atom, while the other particles sailed through unimpeded. An atom, Rutherford realized, was mostly empty space, with a very dense nucleus at the center. This was a most gratifying discovery, but it presented one immediate problem. By all the laws of conventional physics, atoms shouldn’t therefore exist.

Let us pause for a moment and consider the structure of the atom as we know it now. Every atom is made from three kinds of elementary particles: protons, which have a positive electrical charge; electrons, which have a negative electrical charge; and neutrons, which have no charge. Protons and neutrons are packed into the nucleus, while electrons spin around outside. The number of protons is what gives an atom its chemical identity. An atom with one proton is an atom of hydrogen, one with two protons is helium, with three protons is lithium, and so on up the scale. Each time you add a proton you get a new element. (Because the number of protons in an atom is always balanced by an equal number of electrons, you will sometimes see it written that it is the number of electrons that defines an element; it comes to the same thing. The way it was explained to me is that protons give an atom its identity, electrons its personality.)

Neutrons don’t influence an atom’s identity, but they do add to its mass. The number of neutrons is generally about the same as the number of protons, but they can vary up and down slightly. Add a neutron or two and you get an isotope. The terms you hear in reference to dating techniques in archeology refer to isotopes—carbon-14, for instance, which is an atom of carbon with six protons and eight neutrons (the fourteen being the sum of the two).
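Put in code, the bookkeeping is very simple: the proton count names the element, and the proton-plus-neutron count gives the mass number of the isotope. A minimal sketch; the little element table is just for illustration, not a full periodic table.

# Proton count fixes an atom's identity; neutrons only change its mass number.

ELEMENTS = {1: "hydrogen", 2: "helium", 3: "lithium", 6: "carbon", 8: "oxygen"}

def describe(protons, neutrons):
    name = ELEMENTS.get(protons, f"element {protons}")
    mass_number = protons + neutrons      # e.g. carbon-14 = 6 protons + 8 neutrons
    return f"{name}-{mass_number}"

print(describe(6, 6))   # carbon-12, the common form
print(describe(6, 8))   # carbon-14, the isotope used in archeological dating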

Neutrons and protons occupy the atom’s nucleus. The nucleus of an atom is tiny—only one millionth of a billionth of the full volume of the atom—but fantastically dense, since it contains virtually all the atom’s mass. As Cropper has put it, if an atom were expanded to the size of a cathedral, the nucleus would be only about the size of a fly—but a fly many thousands of times heavier than the cathedral. It was this spaciousness—this resounding, unexpected roominess—that had Rutherford scratching his head in 1910.
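The "millionth of a billionth" figure follows from the radii alone, since volume scales with the cube of the radius. A quick sketch, assuming a typical nuclear radius of about 1e-15 m and an atomic radius of about 1e-10 m (round figures of my own, not from the book):

# Why the nucleus occupies only about a millionth of a billionth of the atom's volume.
# The radii below are assumed round figures; volume goes as the radius cubed.

r_nucleus = 1e-15   # meters (assumed typical value)
r_atom = 1e-10      # meters (assumed typical value)

volume_fraction = (r_nucleus / r_atom) ** 3
print(volume_fraction)   # 1e-15, i.e. one millionth of one billionth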

It is still a fairly astounding notion to consider that atoms are mostly empty space, and that the solidity we experience all around us is an illusion. When two objects come together in the real world—billiard balls are most often used for illustration—they don’t actually strike each other. “Rather,” as Timothy Ferris explains, “the negatively charged fields of the two balls repel each other . . . were it not for their electrical charges they could, like galaxies, pass right through each other unscathed.” When you sit in a chair, you are not actually sitting there, but levitating above it at a height of one angstrom (a hundred millionth of a centimeter), your electrons and its electrons implacably opposed to any closer intimacy.

[2] Geiger would also later become a loyal Nazi, unhesitatingly betraying Jewish colleagues, including many who had helped him.

The picture that nearly everybody has in mind of an atom is of an electron or two flying around a nucleus, like planets orbiting a sun. This image was created in 1904, based on little more than clever guesswork, by a Japanese physicist named Hantaro Nagaoka. It is completely wrong, but durable just the same. As Isaac Asimov liked to note, it inspired generations of science fiction writers to create stories of worlds within worlds, in which atoms become tiny inhabited solar systems or our solar system turns out to be merely a mote in some much larger scheme. Even now CERN, the European Organization for Nuclear Research, uses Nagaoka’s image as a logo on its website. In fact, as physicists were soon to realize, electrons are not like orbiting planets at all, but more like the blades of a spinning fan, managing to fill every bit of space in their orbits simultaneously (but with the crucial difference that the blades of a fan only seem to be everywhere at once; electrons are).

Needless to say, very little of this was understood in 1910 or for many years afterward.

Rutherford’s finding presented some large and immediate problems, not least that no electron should be able to orbit a nucleus without crashing. Conventional electrodynamic theory demanded that a flying electron should very quickly run out of energy—in only an instant or so—and spiral into the nucleus, with disastrous consequences for both. There was also the problem of how protons with their positive charges could bundle together inside the nucleus without blowing themselves and the rest of the atom apart. Clearly whatever was going on down there in the world of the very small was not governed by the laws that applied in the macro world where our expectations reside.

As physicists began to delve into this subatomic realm, they realized that it wasn’t merely different from anything we knew, but different from anything ever imagined. “Because atomic behavior is so unlike ordinary experience,” Richard Feynman once observed, “it is very difficult to get used to and it appears peculiar and mysterious to everyone, both to the novice and to the experienced physicist.” When Feynman made that comment, physicists had had half a century to adjust to the strangeness of atomic behavior. So think how it must have felt to Rutherford and his colleagues in the early 1910s when it was all brand new.

One of the people working with Rutherford was a mild and affable young Dane named Niels Bohr. In 1913, while puzzling over the structure of the atom, Bohr had an idea so exciting that he postponed his honeymoon to write what became a landmark paper. Because physicists couldn’t see anything so small as an atom, they had to try to work out its structure from how it behaved when they did things to it, as Rutherford had done by firing alpha particles at foil. Sometimes, not surprisingly, the results of these experiments were puzzling.

One puzzle that had been around for a long time had to do with spectrum readings of the wavelengths of hydrogen. These produced patterns showing that hydrogen atoms emitted energy at certain wavelengths but not others. It was rather as if someone under surveillance kept turning up at particular locations but was never observed traveling between them. No one could understand why this should be.

It was while puzzling over this problem that Bohr was struck by a solution and dashed off his famous paper. Called “On the Constitution of Atoms and Molecules,” the paper explained how electrons could keep from falling into the nucleus by suggesting that they could occupy only certain well-defined orbits. According to the new theory, an electron moving between orbits would disappear from one and reappear instantaneously in another without visiting the space between. This idea—the famous “quantum leap”—is of course utterly strange, but it was too good not to be true. It not only kept electrons from spiraling catastrophically into the nucleus; it also explained hydrogen’s bewildering wavelengths. The electrons only appeared in certain orbits because they only existed in certain orbits. It was a dazzling insight, and it won Bohr the 1922 Nobel Prize in physics, the year after Einstein received his.
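The arithmetic behind this is compact enough to show. In Bohr's model a hydrogen electron may sit only at energies of -13.6 eV divided by the square of a whole number, and light is emitted only when the electron drops from one allowed level to another, so only certain wavelengths can ever appear. The sketch below computes the visible (Balmer) lines from that rule; it is the standard textbook calculation rather than anything taken from Bohr's paper itself.

# Bohr's rule: allowed hydrogen energies are E_n = -13.6 eV / n**2.
# A drop from level n to level m emits a photon of energy E_n - E_m, whose
# wavelength is h*c divided by that energy. Only these wavelengths occur,
# which is why hydrogen shows sharp spectral lines instead of a smear.

RYDBERG_EV = 13.6      # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.84     # Planck constant times speed of light, in eV*nm

def energy(n):
    return -RYDBERG_EV / n**2

def wavelength_nm(n_upper, n_lower):
    return HC_EV_NM / (energy(n_upper) - energy(n_lower))

for n in (3, 4, 5, 6):   # the visible (Balmer) series: drops down to orbit 2
    print(f"n={n} -> 2: {wavelength_nm(n, 2):.0f} nm")
# prints roughly 656, 486, 434 and 410 nm -- the observed hydrogen lines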

Meanwhile the tireless Rutherford, now back at Cambridge as J. J. Thomson’s successor as head of the Cavendish Laboratory, came up with a model that explained why the nuclei didn’t blow up. He saw that they must be offset by some type of neutralizing particles, which he called neutrons. The idea was simple and appealing, but not easy to prove. Rutherford’s associate, James Chadwick, devoted eleven intensive years to hunting for neutrons before finally succeeding in 1932. He, too, was awarded a Nobel Prize in physics, in 1935. As Boorse and his colleagues point out in their history of the subject, the delay in discovery was probably a very good thing as mastery of the neutron was essential to the development of the atomic bomb. (Because neutrons have no charge, they aren’t repelled by the electrical fields at the heart of an atom and thus could be fired like tiny torpedoes into an atomic nucleus, setting off the destructive process known as fission.) Had the neutron been isolated in the 1920s, they note, it is “very likely the atomic bomb would have been developed first in Europe, undoubtedly by the Germans.”

As it was, the Europeans had their hands full trying to understand the strange behavior of the electron. The principal problem they faced was that the electron sometimes behaved like a particle and sometimes like a wave. This impossible duality drove physicists nearly mad. For the next decade all across Europe they furiously thought and scribbled and offered competing hypotheses. In France, Prince Louis-Victor de Broglie, the scion of a ducal family, found that certain anomalies in the behavior of electrons disappeared when one regarded them as waves.

The observation excited the attention of the Austrian Erwin Schrödinger, who made some deft refinements and devised a handy system called wave mechanics. At almost the same time the German physicist Werner Heisenberg came up with a competing theory called matrix mechanics. This was so mathematically complex that hardly anyone really understood it, including Heisenberg himself (“I do not even know what a matrix is,” Heisenberg despaired to a friend at one point), but it did seem to solve certain problems that Schrödinger’s waves failed to explain. The upshot is that physics had two theories, based on conflicting premises, that produced the same results. It was an impossible situation.

Finally, in 1926, Heisenberg came up with a celebrated compromise, producing a new discipline that came to be known as quantum mechanics. At the heart of it was Heisenberg’s Uncertainty Principle, which states that the electron is a particle but a particle that can be described in terms of waves. The uncertainty around which the theory is built is that we can know the path an electron takes as it moves through a space or we can know where it is at a given instant, but we cannot know both.[3] Any attempt to measure one will unavoidably disturb the other. This isn’t a matter of simply needing more precise instruments; it is an immutable property of the universe.

[3] There is a little uncertainty about the use of the word uncertainty in regard to Heisenberg’s principle. Michael Frayn, in an afterword to his play Copenhagen, notes that several words in German—Unsicherheit, Unschärfe, Unbestimmtheit—have been used by various translators, but that none quite equates to the English uncertainty. Frayn suggests that indeterminacy would be a better word for the principle and indeterminability would be better still.
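In its usual quantitative form the principle says that the uncertainty in position times the uncertainty in momentum can never be smaller than ħ/2. A short sketch with numbers of my own choosing: pin an electron down to roughly the width of an atom and its velocity becomes uncertain by hundreds of kilometers per second.

# Heisenberg's relation as a number: delta_x * delta_p >= hbar / 2.
# Confining an electron to about one atom's width (an assumed 1e-10 m)
# forces a huge uncertainty in its momentum, and hence its velocity.

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837e-31      # electron mass, kg

delta_x = 1e-10                   # assumed position uncertainty, meters
delta_p = hbar / (2 * delta_x)    # minimum momentum uncertainty
delta_v = delta_p / m_e           # corresponding velocity uncertainty

print(f"{delta_v:.1e} m/s")       # about 6e5 m/s: pinning down where it is ruins knowing where it goes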

What this means in practice is that you can never predict where an electron will be at any given moment. You can only list its probability of being there. In a sense, as Dennis Overbye has put it, an electron doesn’t exist until it is observed. Or, put slightly differently, until it is observed an electron must be regarded as being “at once everywhere and nowhere.”

If this seems confusing, you may take some comfort in knowing that it was confusing to physicists, too. Overbye notes: “Bohr once commented that a person who wasn’t outraged on first hearing about quantum theory didn’t understand what had been said.” Heisenberg, when asked how one could envision an atom, replied: “Don’t try.”

So the atom turned out to be quite unlike the image that most people had created. The electron doesn’t fly around the nucleus like a planet around its sun, but instead takes on the more amorphous aspect of a cloud. The “shell” of an atom isn’t some hard shiny casing, as illustrations sometimes encourage us to suppose, but simply the outermost of these fuzzy electron clouds. The cloud itself is essentially just a zone of statistical probability marking the area beyond which the electron only very seldom strays. Thus an atom, if you could see it, would look more like a very fuzzy tennis ball than a hard-edged metallic sphere (but not much like either or, indeed, like anything you’ve ever seen; we are, after all, dealing here with a world very different from the one we see around us).

It seemed as if there was no end of strangeness. For the first time, as James Trefil has put it, scientists had encountered “an area of the universe that our brains just aren’t wired to understand.” Or as Feynman expressed it, “things on a small scale behave nothing like things on a large scale.” As physicists delved deeper, they realized they had found a world where not only could electrons jump from one orbit to another without traveling across any intervening space, but matter could pop into existence from nothing at all—“provided,” in the words of Alan Lightman of MIT, “it disappears again with sufficient haste.”

Perhaps the most arresting of quantum improbabilities is the idea, arising from Wolfgang Pauli’s Exclusion Principle of 1925, that the subatomic particles in certain pairs, even when separated by the most considerable distances, can each instantly “know” what the other is doing. Particles have a quality known as spin and, according to quantum theory, the moment you determine the spin of one particle, its sister particle, no matter how distant, will immediately begin spinning in the opposite direction and at the same rate.

It is as if, in the words of the science writer Lawrence Joseph, you had two identical pool balls, one in Ohio and the other in Fiji, and the instant you sent one spinning the other would immediately spin in a contrary direction at precisely the same speed. Remarkably, the phenomenon was proved in 1997 when physicists at the University of Geneva sent photons seven miles in opposite directions and demonstrated that interfering with one provoked an instantaneous response in the other.

Things reached such a pitch that at one conference Bohr remarked of a new theory that the question was not whether it was crazy, but whether it was crazy enough. To illustrate the nonintuitive nature of the quantum world, Schrödinger offered a famous thought experiment in which a hypothetical cat was placed in a box with one atom of a radioactive substance attached to a vial of hydrocyanic acid. If the particle degraded within an hour, it would trigger a mechanism that would break the vial and poison the cat. If not, the cat would live. But we could not know which was the case, so there was no choice, scientifically, but to regard the cat as 100 percent alive and 100 percent dead at the same time. This means, as Stephen Hawking has observed with a touch of understandable excitement, that one cannot “predict future events exactly if one cannot even measure the present state of the universe precisely!”

Because of its oddities, many physicists disliked quantum theory, or at least certain aspects of it, and none more so than Einstein. This was more than a little ironic since it was he, in his annus mirabilis of 1905, who had so persuasively explained how photons of light could sometimes behave like particles and sometimes like waves—the notion at the very heart of the new physics. “Quantum theory is very worthy of regard,” he observed politely, but he really didn’t like it. “God doesn’t play dice,” he said.[4]

Einstein couldn’t bear the notion that God could create a universe in which some things were forever unknowable. Moreover, the idea of action at a distance—that one particle could instantaneously influence another trillions of miles away—was a stark violation of the special theory of relativity. This expressly decreed that nothing could outrace the speed of light and yet here were physicists insisting that, somehow, at the subatomic level, information could.

(No one, incidentally, has ever explained how the particles achieve this feat. Scientists have dealt with this problem, according to the physicist Yakir Aharanov, “by not thinking about it.”)

Above all, there was the problem that quantum physics introduced a level of untidiness that hadn’t previously existed. Suddenly you needed two sets of laws to explain the behavior of the universe—quantum theory for the world of the very small and relativity for the larger universe beyond. The gravity of relativity theory was brilliant at explaining why planets orbited suns or why galaxies tended to cluster, but turned out to have no influence at all at the particle level. To explain what kept atoms together, other forces were needed, and in the 1930s two were discovered: the strong nuclear force and weak nuclear force. The strong force binds atoms together; it’s what allows protons to bed down together in the nucleus. The weak force engages in more miscellaneous tasks, mostly to do with controlling the rates of certain sorts of radioactive decay.

The weak nuclear force, despite its name, is ten billion billion billion times stronger than gravity, and the strong nuclear force is more powerful still—vastly so, in fact—but their influence extends to only the tiniest distances. The grip of the strong force reaches out only to about 1/100,000 of the diameter of an atom. That’s why the nuclei of atoms are so compacted and dense and why elements with big, crowded nuclei tend to be so unstable: the strong force just can’t hold on to all the protons.

The upshot of all this is that physics ended up with two bodies of laws—one for the world of the very small, one for the universe at large—leading quite separate lives. Einstein disliked that, too. He devoted the rest of his life to searching for a way to tie up these loose ends by finding a grand unified theory, and always failed. From time to time he thought he had it, but it always unraveled on him in the end. As time passed he became increasingly marginalized and even a little pitied. Almost without exception, wrote Snow, “his colleagues thought, and still think, that he wasted the second half of his life.”

[4] Or at least that is how it is nearly always rendered. The actual quote was: “It seems hard to sneak a look at God’s cards. But that He plays dice and uses ‘telepathic’ methods . . . is something that I cannot believe for a single moment.”

Elsewhere, however, real progress was being made. By the mid-1940s scientists had reached a point where they understood the atom at an extremely profound level—as they all too effectively demonstrated in August 1945 by exploding a pair of atomic bombs over Japan.

By this point physicists could be excused for thinking that they had just about conquered the atom. In fact, everything in particle physics was about to get a whole lot more complicated. But before we take up that slightly exhausting story, we must bring another straw of our history up to date by considering an important and salutary tale of avarice, deceit, bad science, several needless deaths, and the final determination of the age of the Earth.
