Jacques Laskar of the Bureau des Longitudes in Paris was a pioneer in the study of planetary chaos. He found many fascinating effects, including the possibility that Mercury may one day collide with Venus, and he drew special attention to chaotic influences on the orientations of the planets. The giant planets are scarcely affected, but the tilt of Mars for example, which at present is similar to the Earth’s, can vary between 0 and 60 degrees. With a large tilt, summers on Mars would be much warmer than now, but the winters desperately cold. Some high-latitude gullies on that planet have been interpreted as the products of slurries of melt-water similar to those seen on Greenland in summer.
‘All of the inner planets must have known a powerfully chaotic episode in the course of their history,’ Laskar said. ‘In the absence of the Moon, the orientation of the Earth would have been very unstable, which without doubt would have strongly frustrated the evolution of life.’
Also of relevance to the Earth’s origin are Comets and asteroids and Minerals in space. For more on life-threatening events, see Chaos, Impacts, Extinctions and Flood basalts. Geophysical processes figure in Plate motions, Earthquakes and Continents and supercontinents. For surface processes and climate change, see the cross-references in Earth system.
Ships that leave Tokyo Bay crammed with exports pass between two peninsulas: Izu to starboard and Boso to port. The cliffs of their headlands are terraced, like giant staircases. The flat part of each terrace is a former beach, carved by the sea when the land was lower. The vertical rise from terrace to terrace tells of an upward jerk of the land during a great earthquake. Sailors wishing for a happy return ought to cross their fingers and hope that the landmarks will be no taller when they get back.
On Boso, the first step up from sea level is about four metres, and corresponds with the uplifts in earthquakes afflicting the Tokyo region in 1703 and 1923. The interval between those two was too brief for a beach to form. The second step, five metres higher, dates from about 800 BC. Greater rises in the next two steps happened around 2100 BC and 4200 BC. The present elevations understate the rises, because of subsidence between quakes.
Only 20 kilometres offshore from Boso, three moving plates of the Earth’s outer shell meet at a triple junction. The Eurasian Plate with Japan standing on it has the ocean floor of both the Pacific Plate and the Philippine Plate diving to destruction under its rim, east and west of Boso, respectively. The latter two have a quarrel of their own, with the Pacific Plate ducking under the Philippine Plate. All of which makes Japan an active zone. Friction of the descending plates creates Mount Fuji and other volcanoes. Small earthquakes are so commonplace that the Japanese may not even pause in their conversations during a jolt that sends tourists rushing for the street. And there, in a nutshell, is why the next big earthquake is unpredictable.
Too many false alarms
As a young geophysicist, Hiroo Kanamori was one of the first in Japan to embrace the theory of plate tectonics as an explanation for geological action. He was co-author of the earliest popular book on the subject, Debate about the Earth (1970). For him, the terraces of Izu and Boso were ample proof of an unstoppable process at work, such that the earthquake that devastated Tokyo and Yokohama in 1923, and killed 100,000 people, is certain to be repeated some day.
First at Tokyo University and then at Caltech, Kanamori devoted his career to fundamental research on earthquakes, especially the big ones. His special skill lay in extracting the fullest possible information about what happened in an earthquake, from the recordings of ground movements by seismometers lying in different directions from the scene. Kanamori developed the picture of a subducted tectonic plate pushing into the Earth with enormous force, becoming temporarily locked in its descent at its interface with the overriding plate, and then suddenly breaking the lock.
Looking back at the records of a big earthquake in Chile in 1960, for example, he figured out that a slab of rock 800 by 200 kilometres suddenly slipped by 21 metres, past the immediately adjacent rock. He could deduce this even though the fault line was hidden deep under the surface. That, by the way, was the largest earthquake that has been recorded since seismometers were invented. Its magnitude was 9.5.
When you hear the strength of an earthquake quoted as a figure on the Richter scale, it is really Kanamori’s moment magnitude, which he introduced in 1977. He was careful to match it as closely as possible to the scale pioneered in the 1930s by Charles Richter of Caltech and others, so the old name sticks. The Kanamori scale is more directly related to the release of energy.
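To make that connection concrete, here is a minimal sketch of the standard moment-magnitude calculation, applied to the Chile 1960 figures quoted above. The rigidity is an assumed textbook value for crustal rock, not a number from the text, so this is only a rough check:

```python
import math

# Seismic moment M0 = rigidity x fault area x slip, then the standard
# Hanks-Kanamori conversion Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m.
rigidity = 3.0e10            # Pa; assumed typical value for crustal rock
fault_area = 800e3 * 200e3   # m^2: the 800 x 200 km slab described above
slip = 21.0                  # m

M0 = rigidity * fault_area * slip
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)
print(f"Mw = {Mw:.1f}")      # ~9.3; the catalogue value of 9.5 implies a
                             # somewhat larger moment than these round numbers
```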
Despite great scientific progress, the human toll of earthquakes continued, aggravated by population growth and urbanization. In Tangshan in China in 1976, a quarter of a million died. Earthquake prediction to save lives therefore became a major goal for the experts. The most concerted efforts were in Japan, and also in California, where the coastal strip slides north-westward on the Pacific Plate, along the San Andreas Fault and a swarm of related faults.
Prediction was intended to mean not just a general declaration that a region is earthquake-prone, but a practical early warning valid for the coming minutes or hours. For quite a while, it looked as if diligence and patience might give the answers.
Scatter seismometers across the land and the seabed to record even the smallest tremors. Watch for foreshocks that may precede big earthquakes. Check especially the portions of fault lines that seem to be ominously locked, without any small, stress-relieving earthquakes. The scientists pore over the seismic charts like investors trying to second-guess the stock markets.
Other possible signs of an impending earthquake include electrical changes in the rocks, and motions and tilts of the ground detectable by laser beams or navigational satellites. Alterations in water levels in wells, and leaks of radon and other gases, speak of deep cracks developing. And as a last resort, you can observe animals, which supposedly have a sixth sense about earthquakes.
Despite all their hard work, the forecasters failed to give any warning of the Kobe earthquake in Japan in 1995, which caused more than 5000 deaths. That event seemed to many experts to draw a line under 30 years of effort in prediction. Kanamori regretfully pointed out that the task might be impossible. Micro-earthquakes, where the rock slippage or creep in a fault is measured in millimetres, rank at magnitude 2. They are imperceptible either by people or by distant seismometers. And yet, Kanamori reasoned, many of them may have the potential to grow into a very big one, ranked at magnitude 7–9, with slippages of metres or tens of metres over long distances.
The outcome depends on the length of the eventual crack in the rocks. Crack prediction is a notoriously difficult problem in materials science, with the uncertainties of chaos theory coming into play. In most micro-earthquakes the rupture is halted in a short distance, so the scope for false alarms is unlimited.
‘As there are 100,000 times more earthquakes of magnitude 2 than of magnitude 7, a short-term prediction is bound to be very uncertain,’ Kanamori concluded in 1997. ‘It might be useful where false alarms can be tolerated. However, in modern highly industrialized urban areas with complex lifelines, communication systems and financial networks, such uncertain predictions might damage local and global economies.’
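Kanamori’s 100,000-fold figure is what the empirical Gutenberg–Richter frequency law gives, as this one-line check shows (a b-value of 1 is an assumed round number; real values vary by region):

```python
# Gutenberg-Richter law: log10 N(M) = a - b*M, so each whole step up in
# magnitude is roughly ten times rarer when b is close to 1.
b = 1.0
print(10 ** (b * (7 - 2)))   # 100000.0 magnitude-2 events per magnitude-7 event
```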
Earthquake control?
During the Cold War a geophysicist at UC Los Angeles, Gordon MacDonald, speculated about the use of earthquakes as a weapon. It would operate by the explosion of bombs in small faults, intended to trigger movement in a major fault. ‘For example,’ he explained, ‘the San Andreas fault zone, passing near Los Angeles and San Francisco, is part of the great earthquake belt surrounding the Pacific. Good knowledge of the strain within this belt might permit the setting off of the San Andreas zone by timed explosions in the China Sea and the Philippine Sea.’
In 1969, soon after MacDonald wrote those words, Canada and Japan lodged protests against a US series of nuclear weapons tests at Amchitka in the Aleutian Islands, on the grounds that they might trigger a major natural earthquake. They didn’t, and the question of whether a natural earthquake or an explosion, volcanic or man-made, can provoke another earthquake far away is still debated.
If there is any such effect it is probably not quick, in the sense envisaged here. MacDonald’s idea nevertheless drew on his knowledge of actual man-made earthquakes that happened by accident. An underground H-bomb test in Nevada in 1968 caused many small earthquakes over a period of three weeks, along an ancient fault nearby. And there was a longer history of earthquakes associated with the creation of lakes behind high dams, in various parts of the world.

Most thought-provoking was a series of small earthquakes in Denver, from 1963 to 1968, which were traced to an operation at the nearby Rocky Mountain Arsenal. Water contaminated with nerve gas was disposed of by pumping it down a borehole 3 kilometres deep. The first earthquake occurred six weeks after the pumping began, and activity more or less ceased two years after the operation ended.
Evidently human beings could switch earthquakes on or off by using water under pressure to reactivate and lubricate faults within reach of a borehole. This was confirmed by experiments in 1970–71 at an oilfield at Rangely, Colorado. They were conducted by scientists from the US National Center for Earthquake Research, where laboratory tests on dry and wet rocks under pressure showed that jerks along fractures become more frequent but much weaker in the presence of water.
From this research emerged a formal proposal to save San Francisco from its next big earthquake by stage-managing a lot of small ones. These would gently relieve the strain that had built up since 1906, when the last big one happened. About 500 boreholes 4000 metres deep, distributed along California’s fault lines, would be needed. Everything was to be done in a controlled fashion, by pumping water out of two wells to lock the fault on either side of a third well where the quake-provoking water would be pumped in.
The idea was politically impossible. Since every earthquake in California would be blamed on the manipulators, whether they were really responsible or not, litigation against the government would continue for centuries. And it was all too credible that a small man-made earthquake might trigger exactly the major event that the scheme was intended to prevent. By the end of the century Kanamori’s conclusion, that the growth of a small earthquake into a big one might be inherently unpredictable, carried the additional message: you’d better not pull the tiger’s tail.
Outpacing the earthquake waves
Research efforts switched from prediction and prevention to mitigating the effects when an earthquake occurs. Japan leads the world in this respect, and a large part of the task is preparation, as if for a war. It begins with town planning, the design of earthquake-resistant buildings and bridges, reinforcements of hillsides against landslips, and improvements of sea defences against tsunamis—the great ‘tidal waves’ that often accompany earthquakes.
City by city, district by district, experts calculate the risks of damage and casualties from shaking, fire, landslides and tsunamis. The entire Japanese population learns from infancy what to do in the event of an earthquake, and there are nationwide drills every 1 September, the anniversary of the 1923 Tokyo–Yokohama earthquake. Operations rooms like military bunkers stand ready to take charge of search and rescue, firefighting, traffic control and other emergency services, equipped with all the resources of information technology. Rooftops are painted with numbers, so that helicopter pilots will know where they are when streets are filled with rubble.
The challenge to earthquake scientists is now to feed real-time information about a big earthquake to societies ready and able to use it. A terrible irony in Kobe in 1995 was that the seismic networks and communications systems were themselves the first victims of the earthquake. The national government in Tokyo was unaware of the scale of the disaster until many hours after the event.

The provision for Japan’s bullet trains is the epitome of what is needed. As soon as a strong quake begins to be felt in a region where they are running, the trains slow down or stop. They respond automatically to radio signals generated by a computer that processes data from seismometers near the epicentre. When tracks twist and bridges tumble, the life–death margin is reckoned in seconds. So the system’s designers use the speed of light and radio waves to outpace the earthquake waves.
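A rough sketch of the margin such systems work with: radio alerts arrive essentially instantly, while destructive shear waves cross the ground at a few kilometres per second. The wave speed here is an assumed typical value, not a figure from the text:

```python
# Warning time = S-wave travel time minus (negligible) radio travel time.
S_WAVE_KM_S = 3.5          # assumed typical crustal shear-wave speed
LIGHT_KM_S = 300_000.0     # radio signals, effectively instantaneous

for distance_km in (50, 100, 200):
    warning_s = distance_km / S_WAVE_KM_S - distance_km / LIGHT_KM_S
    print(f"{distance_km:>3} km from the epicentre: ~{warning_s:.0f} s of warning")
```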
Similar systems in use or under development, in Japan and California, alert the general public and close down power stations, supercomputers and the like.
Especially valuable is the real-time warning of aftershocks, which endanger rescue and repair teams. A complication is that, in a very large earthquake, the idea of an epicentre is scarcely valid, because the great crack can run for a hundred or a thousand kilometres.
Squeezing out the water
There is much to learn about what happens underground at the sites of earthquakes. Simple theories about the sliding of one rock mass past another, and the radiation of shock waves, have now to take more complex processes into account. Especially enigmatic are very deep earthquakes, like one of magnitude 8 in Bolivia in 1994. It was located 600 kilometres below the surface, and Kanamori figured out that nearly all of the energy released in the event was in the form of heat rather than seismic waves. It caused frictional melting of the rocks along the fault and absorbed energy.
In a way, it is surprising that deep earthquakes should occur at all, seeing that rocks are usually plastic rather than brittle under high temperatures and pressures. But the earthquakes are associated with pieces of tectonic plates that are descending at plate boundaries. Their diving is a crucial part of the process by which old oceanic basins are destroyed, while new ones grow, to operate the entire geological cycle of plate tectonics.
A possible explanation for deep earthquakes is that the descending rocks are made more rigid by changes in composition as temperatures and pressures increase. Olivine, a major constituent of the Earth, converts into serpentine by hydration if exposed to water near the surface. When carried back into the Earth on a descending tectonic plate, the serpentine could revert to olivine by having the water squeezed out of its crystals. Then it would suddenly become brittle. Although this behaviour of serpentine might explain earthquakes to a depth of 200 kilometres, dehydration of other minerals would be needed to account for others, deeper still.
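Taking the idealized magnesium endmember of serpentine (an assumption; natural serpentines are more varied), the dehydration the paragraph describes can be written as a balanced reaction:

$$\mathrm{Mg_3Si_2O_5(OH)_4} \;\longrightarrow\; \mathrm{Mg_2SiO_4}\ (\text{olivine}) \;+\; \mathrm{MgSiO_3}\ (\text{enstatite}) \;+\; 2\,\mathrm{H_2O}$$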
A giant press at Universität Bayreuth enabled scientists from University College London to demonstrate the dehydration of serpentine under enormous pressure. In the process, they generated miniature earthquakes inside the apparatus. David Dobson commented, ‘Understanding these deep earthquakes could be the key to unlocking the remaining secrets of plate tectonics.’
The changes at a glance
After an earthquake, experts traditionally tour the region to measure ground movements revealed by miniature scarps or crooked roads. Nowadays they can use satellites to do the job comprehensively, simply by comparing radar pictures obtained before and after an earthquake. The information contained within an image generated by synthetic-aperture radar is so precise that changes in relative positions by only a centimetre are detectable.
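The arithmetic behind that centimetre-level precision is simple: each full cycle of phase difference between the two radar images corresponds to half a radar wavelength of motion along the line of sight. A minimal sketch, using the C-band wavelength of satellites like ERS-2 and an illustrative phase shift:

```python
import math

# Two-way radar path: line-of-sight displacement = (phase / 4*pi) * wavelength.
wavelength_cm = 5.66             # C-band radar wavelength, ~5.66 cm
phase_shift_rad = math.pi / 2    # illustrative quarter-cycle phase change

displacement_cm = (phase_shift_rad / (4 * math.pi)) * wavelength_cm
print(f"{displacement_cm:.2f} cm along the line of sight")   # ~0.71 cm
```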
The technique was put to the test in 1999, when the Izmit earthquake occurred on Turkey’s equivalent of California’s San Andreas Fault. Along the North Anatolian Fault, the Anatolian Plate inches westwards relative to the Eurasian Plate, represented by the southern shoreline of the Black Sea. The quake killed 18,000 people. Europe’s ERS-2 satellite had obtained a radar image of the Izmit region just a few days before the event, and within a few weeks it grabbed another.
When scientists at the Delft University of Technology compared the images by an interference technique, they concluded that the northern shore of Izmit Gulf had moved at least 1.95 metres away from the satellite, compared with the southern shore of the Black Sea. Among many other details perceptible was an ominous absence of change along the fault line west of Izmit.
‘At that location there is no relative motion between the plates,’ said Ramon Hanssen, who led the analysis. ‘A large part of the strain is still apparent, which could indicate an increased risk for a future earthquake in the next section of the fault, which is close to the city of Istanbul.’
For the driving force of plate motions and the use of earthquake waves as a means of probing the Earth’s interior, see Plate motions and Hotspots.
Among Leonardo da Vinci’s many scientific intuitions that have stood the test of half a millennium is his suggestion that the Moon is lit by the Earth, as well as by the Sun. That was how he accounted for the faint glow visible from the dark portion of a crescent Moon.
‘Some have believed that the Moon has some light of its own,’ the artist noted in his distinctive back-to-front writing, ‘but this opinion is false, for they have based it upon that glimmer visible in the middle between the horns of the new Moon.’ With neat diagrams depicting relative positions of Sun, Earth and Moon, Leonardo reasoned that our planet ‘performs the same office for the dark side of the Moon as the Moon when at the Full does for us’.
The Florentine polymath was wrong in one respect. Overimpressed by the glistening western sea at sunset, he thought that the earthshine falling on the Moon came mainly from sunlight returned into space by the Earth’s oceans. In fact, seen from space, the oceans look quite dark. The brightest features are cloud tops, which modern air travellers know well but Leonardo did not.
If you could stand on the Moon’s dark side and behold the Full Earth it would be a splendid sight, almost four times wider than the Full Moon seen from the Earth, and 50 times more luminous. The whole side of the Earth turned towards the Moon contributes to the lighting of each patch of the lunar surface, to varying degrees. Ice, snow, deserts and airborne dust appear bright. But the angles of illumination from Sun to Earth to Moon have a big effect too, so that the most important brightness is in the garland of cloud tops in the tropics.

From your lunar vantage point you’d see the Earth rotating, which is a pleasure denied to Moon watchers, who only ever see one face. In the monsoon season, East and South Asia are covered with dense rain clouds. So when dawn breaks in Shanghai, and Asia swings out of darkness and into the sunlight, the earthshine can increase by as much as ten per cent from one hour to the next.
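A back-of-envelope check on those two comparisons, with assumed round values for the radii and reflectivities (the quoted factor of 50 also reflects the Moon’s rough, non-uniform way of scattering light, which this crude estimate ignores):

```python
# Apparent width scales with radius; brightness with area times reflectivity.
r_earth_km, r_moon_km = 6371.0, 1737.0
albedo_earth, albedo_moon = 0.30, 0.11       # assumed round values

width_ratio = r_earth_km / r_moon_km
brightness_ratio = width_ratio**2 * (albedo_earth / albedo_moon)
print(f"~{width_ratio:.1f} times wider")          # ~3.7, 'almost four times'
print(f"~{brightness_ratio:.0f} times brighter")  # a few tens; the text quotes 50
```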
In the 21st century a network of stations in California, China, the Crimea and Tenerife is to measure Leonardo’s earthshine routinely, as a way of monitoring climate change on our planet. Astronomers can detect small variations in the Earth’s brightness. These relate directly to warming or cooling, because the 30 per cent or so of sunlight that the Earth reflects can play no part in keeping the planet warm. The rejected fraction is called the albedo, and the brighter the Earth is, the cooler it must be.
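The ‘brighter means cooler’ logic is zeroth-order energy balance: averaged over the globe, the absorbed sunlight S(1−A)/4 must match what the planet radiates away. A minimal sketch, giving effective temperature only (the greenhouse effect is deliberately left out):

```python
# Radiative balance: S * (1 - A) / 4 = sigma * T^4, solved for T.
S = 1361.0        # solar constant, W/m^2
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def t_effective(albedo):
    return (S * (1 - albedo) / (4 * SIGMA)) ** 0.25

for A in (0.28, 0.30, 0.32):
    print(f"albedo {A:.2f}: {t_effective(A):.1f} K")
# Each two-point shift in albedo moves the effective temperature by ~2 K.
```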
What’s more, the variations seen on the dark side of the Moon occur mainly because of changes in the Earth’s cloudiness. Weather satellites observe the clouds, region by region, and with due diligence NASA scientists combine data from all the world’s satellites to build up global maps of cloudiness, month by month. But if you are interested in the total cloud cover, it is easier, cheaper and more reliably consistent just to look at the Moon. And you are then well on the way to testing theories about why the cloud cover changes.
The ‘awfully clever’ Frenchmen
The pioneer of earthshine measurements, beginning in 1925, was André-Louis Danjon of the Observatoire de Strasbourg, who later became director of the Observatoire de Paris. Danjon used a prism to put two simultaneous images of the Moon side by side. With a diaphragm like a camera stop he then made one image fainter until a selected patch on its sunlit part looked no brighter than a selected earthlit patch. By the stoppage needed, he could tell that the earthshine’s intensity was only one five-thousandth of the sunshine’s.
Danjon found that the intensity varied a lot, from hour to hour, season to season, and year to year. His student J. E. Dubois used the technique in systematic observations from 1940 to 1960 and came to suspect that changes in the intensity of earthshine were linked to the activity of the Sun in its 11-year sunspot cycle, with the strongest earthshine when the sunspots were fewest. But the measurements were not quite accurate enough for any firm conclusions to be drawn, in that respect.
‘You realize that those old guys were awfully clever,’ said Steven Koonin of Caltech in 1994. ‘They didn’t have the technology but they invented ways of getting around without it.’ Koonin was a nuclear physicist who became concerned about global warming and saw in earthshine a way of using the Moon as a mirror in the sky, for tracking climate change. He re-examined the French theories for interpreting earthshine results, and improved on them.
A reconstruction of Danjon’s instrument had been made at the University of Arizona, but Koonin wanted modern electronic light detectors to do the job thoroughly. He persuaded astronomers at Caltech’s Big Bear Solar Observatory to begin measurements of earthshine in 1993. And by 2000 Philip Goode from Big Bear was able to report to a meeting on climate in Tenerife that the earthshine in that year was about two per cent fainter than it had been in 1994–95.
As nothing else had changed, to affect the Earth’s brightness so much, there must have been an overall reduction of the cloud cover. Goode noted two possible explanations. One had to do with the cycle of El Niño, affecting sea temperatures in the eastern Pacific, which were at a minimum in 1994 and a maximum in 1998. Conceivably that affected the cloud cover.
The other explanation echoed Dubois in noting a possible link with the Sun’s behaviour. Its activity was close to a minimum in 1994–95, as judged by the sunspot counts, and at maximum in 2000. Referring to a Danish idea, Goode declared: ‘Our result is consistent with the hypothesis, based on cloud cover data, that the Earth’s reflectance decreases with increasing solar activity.’
Clouds and cosmic rays
Two centuries earlier, when reading Adam Smith’s The Wealth of Nations, the celebrated astronomer William Herschel of Slough noticed that dates given for high prices of wheat in England were also times when he knew there was a lack of dark sunspots on the Sun’s bright face. ‘It seems probable,’ Herschel wrote in 1801, ‘that some temporary scarcity or defect of vegetation has generally taken place, when the Sun has been without those appearances which we surmise to be symptoms of a copious emission of light and heat.’
Thereafter solar variations always seemed to be a likely explanation of persistent cooling or warming of the Earth, from decade to decade and century to century, as seen throughout climate history since the end of the last ice age. They still are. There was, though, a lapse of a few years in the early 1990s, when there was no satisfactory explanation for how the Sun could exert a significant effect on climate.
The usual assumption, following Herschel, was that changes in the average intensity of the Sun’s radiation would be responsible. Accurate gauging of sunshine became possible only with instruments on satellites, but by 1990 the space measurements covered a whole solar cycle, from spottiest to least spotty to spottiest again. Herschel was right in thinking that the spotty, active Sun was brightest, but the measured variations in solar radiation seemed far too small to account for important climatic changes.
During the 1990s other ways were suggested, whereby the Sun may exert a stronger influence on climate. One of them involved a direct effect on cloudiness, and therefore on earthshine. This was the Danish hypothesis to which Goode of Big Bear alluded. It arose before there was any accurate series of earthshine measurements; however, the compilations of global cloud cover from satellite observations spanned enough years for the variations in global cloudiness to be compared with possible causes.
Henrik Svensmark, a physicist at the Danmarks Meteorologiske Institut in Copenhagen, shared the general puzzlement about the solar effect on climate. Despite much historical evidence for it, there was no clear mechanism. He knew that the Sun’s activity during a sunspot cycle affects the influx of cosmic rays. These energetic atomic particles rain down on the Earth from exploded stars in the Galaxy. They are fewest when the Sun is in an active state, with many sunspots and a strong solar wind that blows away many of the cosmic rays by its magnetic effect.
Cosmic rays make telltale radioactive materials in the Earth’s atmosphere, including the radiocarbon used in archaeological dating. Strong links are known between changes in their rate of production and climate changes in the past, such that high rates went with chilly weather and low rates with warmth. Most climate scientists had regarded the cosmic-ray variations, revealed in radiocarbon production rates, merely as indicators of the changes in the Sun’s general mood, which might affect its brightness. Like a few others before him, Svensmark suspected that the link to climate could be more direct.
What if cosmic rays help clouds to form? Then a high intensity of cosmic rays, corresponding with a lazy Sun, would make the Earth shinier with extra clouds. It would reject more of the warming sunshine and become cooler. Conversely, a low count of cosmic rays would mean fewer clouds and a warmer world.

Working in his spare time, during the Christmas holiday in 1995, Svensmark surfed the Internet until he found the link he was looking for. By comparing cloud data from the International Satellite Cloud Climatology Project with counts of cosmic rays from Chicago’s station in the mountains of Colorado, he saw the cloud cover increasing a little between 1982 and 1986, in lockstep with increasing cosmic rays. Then it diminished as the cosmic-ray intensity went down, between 1987 and 1991.
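The kind of comparison involved is easy to sketch: two monthly time series, one of low-cloud cover and one of neutron-monitor cosmic-ray counts, tested for co-variation. The numbers below are synthetic stand-ins, not the ISCCP or Colorado records themselves:

```python
import numpy as np

rng = np.random.default_rng(42)
months = np.arange(120)                                   # ten years, monthly

# Synthetic stand-ins: a slow solar-cycle swing in cosmic rays, and cloud
# cover built to co-vary with it plus noise (by construction, not from data).
cosmic_rays = 100 + 10 * np.sin(2 * np.pi * months / 132)
cloud_cover = 28 + 0.08 * (cosmic_rays - 100) + rng.normal(0, 0.3, 120)

r = np.corrcoef(cosmic_rays, cloud_cover)[0, 1]
print(f"correlation r = {r:.2f}")    # strongly positive, as built in
```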
Svensmark found a receptive listener in the head of his institute’s solar–terrestrial physics division, Eigil Friis-Christensen, who had published evidence of a strong solar influence on climate during the 20th century. Friis-Christensen was looking for a physical explanation, and he saw at once that Svensmark might have found the missing link. The two of them worked together on the apparent relationship between cosmic rays and clouds, and announced their preliminary results at a space conference in Birmingham, England, in the summer of 1996.
Friis-Christensen then went off to become Denmark’s chief space scientist, as head of the Dansk Rumforskningsinstitut. Svensmark soon followed him there, to continue his investigations. By 2000, in collaboration with Nigel Marsh and using new cloud data from the International Satellite Cloud Climatology Project, Svensmark had identified which clouds were most affected by cosmic-ray variations. They were clouds at low altitudes and low latitudes, most noticeably over the tropical oceans.
At first sight this result was surprising. The cosmic rays, focused by the Earth’s magnetism and absorbed by the air, are strongest near the Poles and at high altitudes. But that means that plenty are always available for making clouds in those regions, even at times when the counts are relatively low. Like rain showers in a desert, increases in cosmic rays have most effect in the regions where they are normally scarce.
A reduction in average low-cloud cover during the 20th century, because of fewer cosmic rays, should have resulted in less solar energy being rejected into space as earthshine. The result would be a warming of the planet. Marsh and Svensmark concluded that: ‘Crude estimates of changes in cloud radiative forcing over the past century, when the solar magnetic flux more than doubled, indicates that a galactic-cosmic-ray/cloud mechanism could have contributed about 1.4 watts per square metre to the observed global warming. These observations provide compelling evidence to warrant further study of the effect of galactic cosmic rays on clouds.’
That was fighting talk, because the Intergovernmental Panel on Climate Change gave a very similar figure, 1.5 watts per square metre, for the warming effect of all the carbon dioxide added to the air by human activity. As the world’s official caretaker of the hypothesis that global warming was due mainly to carbon dioxide and other greenhouse gases, the panel was reluctant to give credence to such a big effect of the Sun. In 2001 its scientific report declared: ‘The evidence for a cosmic ray impact on cloudiness remains unproven.’
Chemicals in the air
Particle physicists and atmospheric chemists came into the story of the gleaming cloud tops, in an effort to pin down exactly how the cosmic rays might help to make clouds. Jasper Kirkby at CERN, Europe’s particle physics lab in Geneva, saw a special chance for his rather esoteric branch of science to shine in environmental research, by investigating a possible cause of climate change. ‘We propose to test experimentally the link between cosmic rays and clouds and, if confirmed, to uncover the microphysical mechanism,’ he declared.
Starting in 1998, Kirkby floated the idea of an experiment called CLOUD, in which a particle accelerator should shoot a beam, simulating the cosmic rays, through a chamber representing the chilled, moist atmosphere where clouds form. By 2000 Kirkby had recruited more than 50 scientists from 17 institutes in Europe, Russia and the USA, who made a joint approach to CERN.
Despite this strong support, the proposal was delayed by criticisms. By the time these had been fully answered, CERN had run out of money for new research projects. In 2003, the Stanford Linear Accelerator Center in California was considering whether to accommodate the experiment, with more US participation.

The aim of the CLOUD team is to see how a particle beam’s creation of ions in the experimental chamber—charged atoms and electrons—might stimulate cloud formation. Much would depend on the presence of traces of chemicals such as sulphuric acid and ammonia, known to be available in the air, and seeing how they behave with and without the presence of ions. It is from such chemicals, in the real atmosphere, that Mother Nature makes microscopic airborne grains called cloud condensation nuclei, on which water droplets form from air supersaturated with moisture.
Advances in atmospheric chemistry favoured the idea of a role for cosmic rays in promoting the formation of cloud condensation nuclei, especially in the clean air over the oceans, where Svensmark said the effect was greatest. Fangqun Yu and Richard Turco of UC Los Angeles had studied the contrails that aircraft leave behind them across the sky. They found that the necessary grains for condensation formed in an aircraft’s wake far more rapidly than expected by the traditional theory. Ions produced by the burning fuel evidently helped the grains to form and grow.
It was a short step for Yu and Turco then to acknowledge, in 2000, that cosmic rays could assist in making cloud condensation nuclei, and therefore in making clouds. The picture is of sulphuric acid and water molecules that collide and coalesce to form minute embryonic clusters. Electric charges, provided by the ions that appear in the wake of passing cosmic rays, help the clusters to survive, to grow and then quickly to coagulate into grains large enough to act as cloud condensation nuclei.
The reconvergence of the sciences in the 21st century has no more striking example than the proposition that the glimmer of the dark side of the Moon, diagnosed by Leonardo and measured by astronomers, may vary according to chemical processes in the Earth’s atmosphere that are influenced by the Sun’s interaction with the Galaxy—and are best investigated by the methods of particle physics.
For more about climate, see Climate change. For evidence of a powerful solar effect on climate, see Ice-rafting events. The control of cosmic rays by the Sun’s behaviour is explained more fully in Solar wind.
In the mid-1980s Francis Bretherton, a British-born fluid dynamicist, was chairman of NASA’s Earth System Science Committee. He produced a diagram to show what he meant by the Earth system. It had boxes and interconnecting lines, like a circuit diagram, to indicate the actions and reactions in the physics, chemistry and biochemistry of the fluid outer regions of our planet. On the left of the diagram were the Sun and volcanoes as external natural agents of change, and on the right were human beings, affecting the Earth system and being affected by it.
In simpler terms, the Earth system consists of rocks, soil, water, ice, air, life and people. The Sun and the heat of the Earth’s interior provide the power for the terrestrial machinery. The various parts interact in complicated, often obscure ways. But new powers of satellites and global networks of surface stations to monitor changes, and of computers to model complex interactions, as in weather forecasting, encouraged hopes of taming the complexity.
In 1986 the International Council of Scientific Unions instituted the International Geosphere–Biosphere Programme, sometimes called Global Change for short. A busy schedule of launches of Earth-observing satellites followed, computers galore came into use, and by 2001 $18 billion had gone into research on global change in the USA alone. In 2002 the Yokohama Institute for Earth Sciences began operating the world’s most powerful supercomputer as the Earth Simulator.
Difficulties were emerging by then. The leading computer models concerned climate predictions, and as more and more factors in the complex Earth system were added to try to make the models more realistic, they brought with them scope for added errors and conflicting results—see Climate change. Man-made carbon dioxide figured prominently in the models, but uncertainties surrounded the disappearance of about half of it into unidentified sinks—see Carbon cycle. The most spectacular observational successes came with the study of vegetation using satellites, but again there were problems with modelling—see Biosphere from space. Doubts still attend other components of the Earth system. The monitoring and modelling of the world’s ice give answers that sometimes seem contradictory—see Cryosphere. The key role of the oceans, as a central heating system for the planet, still poses a fundamental question about what drives the circulation—see Ocean currents.
Forecasting the intermittent changes in the Eastern Pacific Ocean that have global effects has also proved to be awkward—see El Niño. From the not-so-solid Earth come volcanoes, both as a vital source of trace elements needed for nutrients, and as unpredictable factors in the climate system—see Volcanic explosions.
Another source of headaches is the non-stop discovery of new linkages. An early example in the era of Earth system science was the role of marine algae as a source of sulphate grains in the air—see Global enzymes. Later came a suggestion that cosmic rays from the Galaxy are somehow involved in cloud formation—see Earthshine. The huge effort in the global-change programmes in the USA, Europe, Japan and elsewhere is undoubtedly enlarging knowledge of the Earth system, but a comprehensive and reliable description of it, in a single computer model, remains a distant dream.
How blue should the Baltic be? As recently as the 1940s, northern Europe’s almost-landlocked sea was noted for its limpid water. Thereafter it became more and more murky as a direct result of man-made pollution. Environmentalists said, ‘Look, the Baltic Sea is dying.’
It was just the opposite. The opacity was due to the unwonted prosperity of algae, the microscopic plants that indirectly sustain the fishes, shellfish, seals, porpoises and seabirds. By the early 21st century, the effluent of nine countries surrounding the Baltic, especially the sewage and agricultural run-off from Poland, had doubled the nutrients available to the algae. Herring and mussels thrived as never before. Unwittingly the populations of its shores had turned the Baltic Sea into a fish farm.
There were some adverse results. Blooms of poisonous algae occurred more frequently, in overfertilized water. Seals were vulnerable. Cod and porpoises disliked the murk, and there was a danger that particular species might move out, or die out, reducing biodiversity. On top of that, dying sea eagles and infertile seals told of the harm done by toxic materials like PCB and DDT, which were brought partially under control after the 1970s.
The issue of quantity versus quality of life in the polluted Baltic illustrated an urgent need for fresh thinking about ecology—the interactions of species with their physical and chemical environments, and with other species including human beings. Until recently, most scientific ecologists shared with conservationists a static view of the living world. All change was abhorrent. Only gradually did the scientists wake up to the fact that forests and other ecosystems are in continual flux, even without human interference. Climates change too, and just 10,000 years ago the Baltic was a lump of ice.
Recent man-made effects on the Baltic have to be understood, not just as an insult to Mother Nature, but as an environmental change to which various species will adapt well or badly, as their ancestors did for billions of years. In the process the species themselves will change a little. They will evolve to suit the new circumstances.
A symptom of new scientific attitudes, towards the end of the 20th century, was the connecting of ecology with evolutionary biology, evident in the titles of more and more university departments. Some of the academic teams were deeply into molecular biology, of which the founding fathers of ecology knew very little. These trends made onlookers hopeful that, in the 21st century, ecology might become an exact science at last. Then it could play a more effective role in minimizing human damage to the environment and wildlife.

Jon Norberg of Stockholm was one who sought to recast ecological theory in evolutionary terms. He was familiar with the algae of the Baltic, the zooplankton that prey on them, and the fishes that prey on the zooplankton. So it was natural for him to use them as an example to think about. While at Princeton, he simulated by computer a system in which 100 species of algae are exposed to seasonal depredation.
The algal community thrives better, as gauged by its total mass, when there is variability within each species. You might not expect that, because some individuals are bound to be, intrinsically, less productive than others. But the highly variable species score because they are better able to cope with the changing predation from season to season.
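A toy illustration of that result, not Norberg’s actual model: predation sweeps across a trait axis season by season, and a species whose individuals are spread out in trait space presents a blurred target, so it never loses most of its stock in any one season. Under multiplicative growth, avoiding the occasional catastrophic loss matters more than the average loss:

```python
import numpy as np

rng = np.random.default_rng(1)
TRAITS = rng.uniform(0, 1, 100)      # mean trait of each of 100 algal species

def final_biomass(sigma, seasons=200, w=0.05):
    """Total community biomass after seasonal predation sweeping the trait
    axis. sigma is the within-species trait spread; convolving the predation
    kernel with the trait distribution caps the worst-case seasonal loss."""
    biomass = np.ones(100)
    for s in range(seasons):
        target = (s / 20.0) % 1.0            # predation optimum, sweeping 0..1
        var = w**2 + sigma**2
        eaten = 0.9 * (w / np.sqrt(var)) * np.exp(-(TRAITS - target)**2 / (2 * var))
        biomass = np.clip(1.2 * biomass * (1 - eaten), 0, 10)  # grow, lose, saturate
    return biomass.sum()

print(final_biomass(sigma=0.0))   # uniform species: repeatedly decimated
print(final_biomass(sigma=0.1))   # variable species: far larger total mass
```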
In 2001 Norberg published, together with American and Italian colleagues at Princeton, the first mathematical theory of a community of species in which the species themselves may be changing. Some of the maths was borrowed from theories of the evolutionists, about how different versions of genes behave in a population of plants or animals. The message is that communities of species are continually adapting to ever-changing circumstances, from season to season.
In an ecosystem, just as in long-term evolution, what counts is the diversity of individuals that gives each species its flexibility. The variability between individuals within a species makes it more efficient in finding a role in the environment, if need be by evolving in a new direction. Meanwhile, an ensemble of different species improves the use of the resources available in their ecosystem.
As often happens, these biologists found that Charles Darwin had been there before them. He reported in 1859, ‘It has been experimentally proved, that if a plot of ground be sown with several distinct genera of grasses, a greater number of plants and a greater weight of dry herbage can thus be raised.’
The new mathematical theory carried a warning for environmental policy-makers, not to be complacent when a system such as the Baltic Sea seems to be coping with man-made pollution fairly well. If the environment changes too rapidly, some species may fail to adapt fast enough. ‘Biologically speaking, this means that an abrupt transition of species occurs in the community within a very short time,’ Norberg and his colleagues commented. ‘This phenomenon deserves further attention, because ongoing global changes may very well cause accelerating environmental changes.’
Ecology in a test-tube
Bacteria were put through their evolutionary and ecological paces in a series of experiments initiated by Paul Rainey at Oxford in the late 1990s. Pseudomonas fluorescens is an unusually versatile bug. It mutates to suit its circumstances, to produce what are virtually different species, easily distinguishable by eye. A week in the life of bacteria, growing in a nutritious broth in a squat glass culture tube, is equivalent to years or decades in an ecosystem of larger species in a lake.

Within the bulk of the broth the bacteria retain their basic form, which is called ‘smooth’. The submerged glass surface, equivalent to the bed of a lake, comes to be occupied by the form called ‘fuzzy spreaders’. The surface of the broth, best provided with oxygen, becomes home for another type of Pseudomonas fluorescens, ‘wrinkly spreaders’ that create distinctive mats of cellulose. When the Oxford experimenters shook the culture continually, so that individual bacteria no longer had distinctive niches to call their own, the diversity of bacterial types disappeared. So a prime requirement for biodiversity is a choice of habitats.
In more subtle experiments, small traces of either wrinkly or fuzzy forms of the bacteria were introduced in competition with the predominant smooth form. Five times out of six the invader successfully competed with the incumbent form, and established itself well. In an early report on this work, Rainey and Michael Travisano stressed the evolutionary aspect of the events in their ecological microcosm.
They likened them to the radiation of novel species into newly available habitats. ‘The driving force for this radiation was competition,’ they wrote. ‘We can attribute the evolution and proliferation of new designs directly to mutation and natural selection.’
In later research, ecologists from McGill University in Montreal teamed up with Rainey’s group to test other theoretical propositions about the factors governing the diversity of species. Experiments with increasing amounts of nutrients in the broth showed, as expected, that diversity peaked at an intermediate level of the total mass of the bacteria. With too much nutrition, the number of types of Pseudomonas fluorescens eventually declined. That was because the benefits of high nutrition are not equal in all niches, and the type living in the most favoured niche swamped the others.
Disturbing a culture occasionally, by eliminating most of it and restarting the survivors in a fresh broth, also encouraged diversity. This result accorded with the ‘intermediate disturbance hypothesis’ proposed a quarter of a century earlier to explain the diversity of species found among English wild flowers, in rain forests and around coral reefs. Occasional disturbances of ecosystems give opportunities for rare species to come forward, which are otherwise overshadowed by the commonest ones.
‘The laboratory systems are unimpressive to look at—small glass tubes filled with a cloudy liquid,’ commented Graham Bell of McGill, about these British–Canadian experiments with bacteria. ‘It is only by creating such simple microcosms, however, that we can build a sound foundation for understanding the much larger and richer canvas of biodiversity in natural communities.’
Biodiversity in the genes
Simple counts of species and their surviving numbers are a poor guide to their prospects. The capacity for survival, adaptation and evolution is represented in the variant genes within each species. The molecules of heredity underpin biodiversity.
So molecular genetics, by reading and comparing the genes written in deoxyribonucleic acid, DNA, gives another new pointer for ecologists. According to William Martin of the Heinrich-Heine-Universität Düsseldorf and Francesco Salamini of the Max-Planck-Institut für Züchtungsforschung in Cologne, even a community of plants, animals and microbes that seems poor in species may have more genetic diversity than you’d think.
Martin and Salamini offered a Gedankenexperiment—a thought experiment—for trying to understand life afresh. Imagine that you can make a complete genetic analysis of every individual organism, population and living community on Earth. You see the similarities and variations of genes that create distinctness at all levels, between individuals, between subspecies and between species.
With a gigantic computer you relate the genes and their frequencies to population compositions, to the geography, climate and history of habitats, to evolutionary rates of change, and to human influences. Then you have the grandest possible view of the biological past, of its preservation within existing organisms, and of evolution at work today, right down to the differences between brothers and sisters. And you can redefine species by finding the genetic distinctness associated with the shapes, features and other traits that field biologists use to distinguish one plant, animal or microbe from another.
Technically, that’s far-fetched at present. Yet Martin and Salamini affirmed that the right sampling principles and automated DNA analysers could start to sketch the genetics of selected ecosystems. These techniques could also home in on endangered plants and animals, to identify the endangered genes. In a small but important way they have already identified, in wild relatives of crop plants, genes that remain unused in domesticated varieties.
‘Measures of genetic distinctness within and between species hold the key to understanding how Nature has generated and preserved biological diversity,’ Martin and Salamini concluded in a manifesto for 21st-century natural history. ‘But if there are no field biologists who know their flora and fauna, geneticists will neither have material to work on, nor will they know the biology of the organisms they are studying. In this sense, there is a natural predisposition towards a symbiosis between genetics and biodiversity—a union that current progress in DNA technology is forging.’
For further theories and discoveries about genetic diversity and survival, see Plant diseases and Cloning. For other impressions of how ecological science is evolving, see Biodiversity, Biosphere from space and Predators. For a long-term molecular perspective on ecology and evolution, see Global enzymes.
A privilege of being a science reporter is the chance to learn the latest ideas directly from people engaged in making discoveries. In 1976, after visiting a hundred experts across Europe and the USA, a reporter scripting a television documentary on particle physics knew in detail what the crucial experiments were, and what the theorists were predicting. But he still didn’t understand the ideas deeply enough to explain them in simple terms to a TV audience. The reporter therefore begged an urgent tutorial from Abdus Salam, a Pakistani theorist then in London.
In the senior common room at Imperial College, a promised hour became three hours as the conversation went around the subject again and again. It kept coming back to photons, which are particles of light, to electrons, which are dinky, negatively charged particles, and to positrons, which are anti-electrons with positive charge. The relationship between those particles was central to Salam’s own idea about how the electric force might be united to another force in the cosmos, the weak force that changes one form of matter into another.
The turning point came when the reporter started to comment, ‘Yes, you say that a photon can turn into an electron and a positron ...’ and Salam interrupted him. ‘No! I say a photon is an electron and a positron.’ The scales dropped from the reporter’s eyes and he saw the splendour of the new physics, which was soon to evolve into what came to be called the Standard Model.
In the 19th century, matter was one thing, whilst the forces acting on it—gravity, electricity, magnetism—were as different from matter as the wind is from the waves of the sea. During the early decades of the 20th century, two other cosmic forces became apparent. One, already mentioned, was the weak force, the cosmic alchemist best known in radioactivity. The other novelty was the strong nuclear force that binds together the constituents of the nuclei of atoms.
As the nature of subatomic particles and their various interactions became plainer, the distinction between matter and forces began to disappear. Both consisted of particles—either matter particles or force carriers. The first force to be accounted for in terms of particles was the electric force, which had already been unified with magnetism by James Clerk Maxwell in London in 1864. Maxwell knew nothing about the particles, yet he established an intimate link between electromagnetism and light.
By the 1930s physicists suspected that light in the form of particles—the photons—act as carriers of the electric force. The photons are not to be seen, as visible or even invisible light in the ordinary sense. Instead they are virtual particles that swarm in a cloud around particles of matter, making them iridescent in an abstract sense. The virtual photons exist very briefly by permission of the uncertainty of quantum theory. They can come into existence and disappear before Mother Nature has time to notice.
Charged particles of matter, such as electrons, can then exert a mutual electric force by exchanging virtual photons. At first this idea seemed crazy, or at best unmanageable, because the calculations gave electrons an infinite mass and infinite charge. Nevertheless, in 1947 slight discrepancies in the wavelengths of light emitted by hydrogen atoms established the reality of the cloud of virtual photons.
Sin-Itiro Tomonaga in Tokyo, Julian Schwinger at Harvard and Richard Feynman at Cornell then tamed the very tricky mathematics. The result was the theory of the electric force called quantum electrodynamics, or QED. It became the most precisely verified theory in the history of physics.
Now hark back to Salam’s assurance that a photon consists of an electron and an anti-electron. The latter term is here used in preference to positron, to emphasize the persistent role of antimatter in the story of cosmic forces. The evidence for the photon’s composition is that if it possesses sufficient energy, in a shower of cosmic rays for example, a photon can actually break up into tangible particles, electron and anti-electron.
The electric force carrier is thus made of components of the same kind as the matter on which it acts. A big idea that emerged in the early 1960s was that other combinations of particles and antiparticles create the carriers of other cosmic forces.
The star breaker and rock warmer
The weak force is by no means as boring or ineffectual as it sounds. On the contrary, it plays an indispensable part in the nuclear reactions by which the Sun and the stars burn. And it can blow a star to smithereens in the nuclear cataclysm of a supernova, when vast numbers of neutrinos are created—electrons without an electric charge.
Neutrinos can react with other matter only by the weak force, and as a result they are shy, ghostly particles that pass almost unnoticed through the Earth. But the neutrinos released in a supernova are so numerous that, however slight the chance of interaction by any particular neutrino, the stuff of the doomed star is blasted out into space.
Nearer home, the weak force contributes to the warming of the Earth’s interior—and hence to volcanoes, earthquakes and the motions of continents—by the form of radioactivity called beta-decay occurring in atoms in the rocks. Here the weak force operates in the nuclei of atoms, which are made of positively charged protons and neutral neutrons. The weak force can change a neutron into a proton, with an electron and an antineutrino as the by-products. Alternatively it converts a proton into a neutron, with the emission from the nucleus of an anti-electron and a neutrino.
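Written out as reactions, the two conversions just described are:

$$ n \;\longrightarrow\; p + e^{-} + \bar{\nu}_{e} \qquad\qquad p \;\longrightarrow\; n + e^{+} + \nu_{e} $$

(The second, proton-to-neutron conversion costs energy, which is why it happens only inside certain nuclei, never to a free proton.)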
Just as the cosmic rays report that the electric force carrier consists of an electron and anti-electron, so the emissions of radioactive atoms suggest that the weak force carrier is made of an electron and a neutrino—one or other being normal and its companion, anti. The only difference from the photon is that one of the electrons is uncharged. While the charges of the electron and anti-electron cancel out in the photon, the weak force carrier called W is left with a positive or negative electric charge.
Another, controversial possibility that Salam and others had in mind in the 1960s concerned an uncharged weak-force carrier called Z. It would be like a photon, the carrier of the electric force, except that it would have the ability to interact with neutrinos. Ordinary photons can’t do so, because neutrinos have no electric charge.
Such a Z particle would enable a neutrino to act like a cannon ball, setting particles of matter in motion while remaining unchanged itself. This would be quite different from the well-known manifestations of the weak force. In fact it would be a force of an entirely novel kind.
The first theoretical glimpse of the Z came when Sheldon Glashow, a young Harvard postdoc working in Copenhagen in 1958–60, was wondering whether the electric and weak forces might be united, as Maxwell had united electricity and magnetism a century earlier. He used mathematics developed in 1954 by Chen Ning Yang and Robert Mills, who happened to be sharing a room at the Brookhaven National Laboratory on Long Island. The logic of the theory—the symmetry, as physicists call it—required a neutral Z as well as the Ws of positive and negative charge.
There was a big snag. Although the W and Z particles of the weak force were supposedly similar to the photons of the electric force, they operated only over a very short range, on a scale smaller than an atomic nucleus. A principle of quantum theory relates a particle’s sphere of influence inversely to its mass, so the W and Z had to be very heavy.
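That inverse relation is the particle’s reduced Compton wavelength, R ≈ ħ/mc. A quick sketch with the measured W mass (a later experimental value, not the prediction quoted below):

```python
# Range of a force carried by a particle of mass m: R ~ hbar / (m*c),
# conveniently computed as (hbar*c) / (m*c^2).
HBAR_C_MEV_FM = 197.327      # hbar*c in MeV*femtometres
m_W_MeV = 80_400.0           # measured W boson mass, ~80.4 GeV
m_proton_MeV = 938.3

range_fm = HBAR_C_MEV_FM / m_W_MeV
print(f"range ~ {range_fm:.1e} fm ({range_fm * 1e-15:.1e} m)")  # ~2.5e-3 fm,
print(f"W mass ~ {m_W_MeV / m_proton_MeV:.0f} proton masses")   # far smaller
                                                                # than a nucleus
```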
A breakthrough in Utrecht
There was at first no explanation of where the mass might come from. Then Peter Higgs in Edinburgh postulated the existence of a heavy particle that could interact with other particles and give them mass. In 1967–68, Abdus Salam in London and Steven Weinberg at Harvard independently seized on the Higgs particle as the means of giving mass to the carriers of the weak force.
W and Z particles would feel the dead weight of the Higgs particle, like antelopes plodding ponderously through a bog, while the photons would skitter like insects, unaffected by the quagmire. In very hot conditions, such as may have prevailed at the start of the Universe, the W and Z would skitter too. Then the present gross differences between the electric force and the weak force would be absent, and there would be just one force instead of two.
This completed a sketch of a unified electroweak force. As with the earlier analysis of the electric force, the experts could not at first do the sums to get sensible answers about the details of the theory. To illustrate the problem, Salam cited the 17th-century Badshahi Mosque in Lahore in his homeland, renowned for the symmetry of its central dome framed by two smaller domes.
‘The task of the [architectural] theory would be in this case to determine therelative sizes of the three domes to give us the most perfect symmetrical
pattern,’ Salam said ‘Likewise the task of the [electroweak] theory will be togive us the relative sizes—the relative masses—of the particles of light and theparticles of the weak force.’
A 24-year-old Dutch graduate student, Gerard ’t Hooft of Utrecht, cracked the problem in 1971. His professor, Martinus Veltman, had developed a way of doing complicated algebra by computer, in the belief that it would make the calculations of particles and forces more manageable. In those days computers were monsters that had to be fed with cards punched by hand. The student’s success with Veltman’s technique surpassed all expectation, in a field where top experts had floundered for decades.

‘I came fresh into all that and just managed to combine the right ideas,’ ’t Hooft said. ‘Veltman’s theory for instance just needed one extra particle. So I walked into his office one day and suggested to him to try this particle. He was very sceptical at first but then his computer told him point blank that I was right.’

The achievements in Utrecht were to influence the whole of theory-making about particles and forces. In the short run they made the electroweak theory respectable, mathematically speaking. The masses of the W and Z particles carrying the weak force were predicted to be about 100 times the mass of the proton. That was far beyond the particle-making powers of accelerators at the time.
I Evidence among the bubbles
‘The Z particle wasn’t just a possibility—it was a necessity,’ ’t Hooft recalled later. ‘The theory of the electroweak force wouldn’t work without it. Many people didn’t like the Z, because it appeared to be coming from the blue, an artefact invented to fix a problem. To take it for real required some courage, some confidence that we were seeing things really right.’

Although there was no early prospect of making Z particles with available machines, their reality might be confirmed by seeing them in action, in particle collisions. This was accomplished within a few years of ’t Hooft’s theoretical results, in an experiment at CERN, Europe’s particle physics laboratory in Geneva. It used a large French-built particle detector called a bubble chamber. Filled with 18 tonnes of freon, it took its name from the gluttonous giantess Gargamelle in François Rabelais’ Gargantua et Pantagruel.

The CERN experimenters shot pulses of neutrinos into Gargamelle. When one of the neutrinos condescended to interact with other matter, by means of the weak force, trails of bubbles in the freon revealed charged products of the interactions. Analysts in Aachen, Brussels, Paris, Milan, London, Orsay, Oxford and at CERN itself patiently examined Gargamelle photos of hundreds of thousands of neutrino pulses, and measured the bubble-tracks.
About one picture in a thousand showed particles set in motion by an impacting neutrino that did not change its own identity in the interaction. It behaved in the predicted cannon-ball fashion. The first example turned up in Helmut Faissner’s laboratory in Aachen in 1972. An electron had suddenly started moving as if driven by an invisible agency—by a neutrino that, being neutral, left no track in the bubble chamber.

In other cases, a spray of particles showed the debris from an atomic nucleus in the freon of the bubble chamber, hit by a neutrino. In normal weak interactions, the track of a heavy electron (muon) also appeared, made by the transformation of the neutrino. But in a few cases an unmodified neutrino went on its way unseen. This was precisely the behaviour expected of the uncharged Z carrier of the weak force in the electroweak theory.
The first that Salam heard about it was when he was carrying his luggage through Aix-en-Provence on his way to a scientific meeting, and a car pulled up beside him. ‘Get in,’ said Paul Musset of the Gargamelle Collaboration. ‘We have found neutral currents.’

Oh dear, the jargon! When announced in 1973, the discovery by the European team might have caused more of a stir if the physicists had not kept calling it the weak interaction via the neutral current. It sounded not just vague and complicated, but insipid. In reality the bubble chamber had revealed a new kind of force in the Universe, Weak Force Mark II.
I An expensive gamble pays off
To confirm the story, the physicists had no option but to try to make W and Z particles with a new accelerator. Since 1945, Western Europe’s particle physicists and the governments funding them had barely kept up with their American colleagues in the race to build ever-larger accelerators and discover new particles with them. That was despite a great pooling of effort in the creation of CERN. Could the physicists of Europe turn the tables for once, and be first to produce the weak-force particles?
CERN’s biggest machine was then the Super Proton Synchrotron. It took in positively charged protons, the nuclei of hydrogen, and accelerated them to high energies while whirling them around in a ring of magnets 7 kilometres in circumference. A Dutch scientist, Simon van der Meer, developed a technique for storing and accumulating antiprotons. If introduced into the Super Proton Synchrotron, the negatively charged antiprotons would naturally circle in the opposite sense around the ring of magnets.

Pulses of protons and antiprotons, travelling in contrary directions, might then be accelerated simultaneously in the machine. When they achieved their maximum energy of motion, they could be allowed to collide head-on. The debris from the colliding beams might include free-range examples of the W and Z particles predicted by the electroweak theory.
Carlo Rubbia, an Italian physicist at CERN, was the most vocal in calling for such a collider to be built. It would mean halting existing research programmes. There was no guarantee that the collider would work properly, still less that it would make the intended discoveries.

‘Would cautious administrators and committees entrusted with public funds from many countries overcome their inhibitions and take a very expensive gamble?’ Rubbia wrote later. ‘Europe passed the test magnificently.’ CERN’s research board made the decision to proceed with the collider in 1978. Large multinational teams assembled two big arrays of particle detectors to record the products of billions of collisions of protons and antiprotons.
The wiring of the detectors looked like spaghetti factories. The problem was to find a handful of rare events among the confusing junk of tracks left by well-known particles flung out from the impacts. The signature of a W particle would be its decay into a single very energetic electron, accompanied by an unseen neutrino. For a Z particle, an energetic electron and anti-electron pair would be a detectable product.
Success came in 1983 with the detection first of ten Ws, and then of five Zs. In line with ’t Hooft’s expectations, the Ws weighed in at 85 proton masses, and the Zs at 97. The discoveries were historic in more than a technical sense. Half a century after Einstein and many other physicists began fleeing Europe, first because of Hitler, then of war, and finally of post-war penury that crippled research, the old continent was back in its traditional place at the cutting edge of fundamental physics.
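In modern units (a conversion added here for orientation, not in the original), the proton mass is close to 0.94 GeV/c², so those figures correspond to

$$
m_{W} \approx 85\,m_{p} \approx 80\ \mathrm{GeV}/c^{2}, \qquad m_{Z} \approx 97\,m_{p} \approx 91\ \mathrm{GeV}/c^{2}
$$

in good agreement with the values quoted today, about 80.4 and 91.2 GeV/c².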
CERN was on a roll, and the multinational organization went on to build a 27-kilometre ring accelerator, the Large Electron–Positron Collider, or LEP. It could be tuned to make millions of Ws or Zs to order, so that their behaviour could be examined in great detail. This was done to such effect that the once-elusive carriers of the weak force are now entirely familiar to physicists and the electroweak theory is comprehensively confirmed.
Just before LEP’s shutdown in 2000, experimenters at CERN thought they might also have glimpsed the mass-giving Higgs particle required by the theory. Their hopes were dashed, and the search for the Higgs resumed at Fermilab near Chicago. But before the first decade of the 21st century was out, CERN was expected to return to a leading position with its next collider for protons, the Large Hadron Collider, occupying LEP’s huge tunnel.
E For more about the Standard Model and other cosmic forces, see Particle families and Higgs bosons.
Although it is as common as copper in the Earth’s crust, cerium is often considered an esoteric metal. It belongs to what the Italian writer Primo Levi called ‘the equivocal and heretical rare-earth group’. Cerium was the subject of the most poignant of all the anecdotes that Levi told in Il sistema periodico (1975) concerning the chemical elements—the tools of his profession of industrial chemistry.
As a Jew in the Auschwitz extermination camp, he evaded death by making himself useful in a laboratory. He seemed nevertheless doomed to perish from hunger before the Red Army arrived to liberate the camp in 1945. In the lab Levi found unlabelled grey rods that sparked when scraped with a penknife. He identified them as cerium alloy, the stuff of cigarette-lighter flints.
His friend Alberto told him that one flint was worth a day’s ration of bread on the camp’s black market. During an air raid Levi stole the cerium rods, and the two prisoners spent their nights whittling them under a blanket till they fitted through a flint-sized hole in a metal plate. Alberto was marched away before the Russians came, never to be seen again, but Levi survived.
Mother Nature went to great lengths to create, from the nuclear particles protons and neutrons, her cornucopia of elements that makes the world not just habitable but beautiful. The manufacture of Levi’s life-saving cerium required dying stars much more massive than the Sun, and their products had to be stockpiled for making future stars and planets. From our point of view the wondrous assembly of stars that we call the Milky Way Galaxy is just a 12-billion-year-old chemical cauldron for making the stuff we need.
Primordial hydrogen and helium, tainted with a little lithium, are thought to be products of the Big Bang. Otherwise, the atomic nuclei of all of our elements were synthesized in stars that grew old and expired before the Sun and the Earth came into existence. They divide naturally into lighter and heavier elements, made mainly by lighter and heavier stars, respectively. The nuclear fusion that powers the Sun and the stars creates from the hydrogen additional helium, plus a couple of dozen other elements of increasing atomic mass, up to iron. These are therefore the commonplace elements.
Most favoured are those whose masses are multiples of the ubiquitous helium-4. They include carbon-12 and oxygen-16, convenient for life, and silicon-28, which is handy for making planets as well as microchips. Stars of moderate size puffed out many such light atoms as they withered for lack of hydrogen fuel. Iron-56, which colours blood, magnetizes the Earth and builds automobiles, came most generously from stars a little bigger than the Sun that evolved into exploding white dwarfs.
Beyond iron, the nuclear reactions needed for building heavier atomic nuclei absorb energy instead of releasing it. The most productive factories for the elements were big stars where, towards the end of brilliant but short lives, temperatures soared to billions of degrees and drenched everything with neutrons, providing us with another 66 elements. From the largest stars, the culminating explosions prodigally scattered zinc and bismuth and uranium nuclei into interstellar space, along with even heavier elements, too gross to survive for long. The explosions left behind only the collapsed cores of the parent stars.

Patterns in starlight enable astronomers to distinguish ancient stars, inhabitants of a halo around the Galaxy, that were built when all elements heavier than helium were much scarcer than today. Star-making was more difficult then, because the elements themselves assist in the process by cooling the gas. Without their help, stars may have been larger than now, which would have influenced the proportions of different elements that they made. At any rate, very large stars are the shortest-lived and so would have been the first to explode. By observing large samples of ancient halo stars, astronomers began to see the trends in element-making early in the history of our Galaxy, as the typical exploding stars became less massive.
‘Certain chemical elements don’t form until the stars that make them have had time to evolve,’ noted Catherine Pilachowski of the US National Optical Astronomy Observatory, when reporting in 2000 on a survey of nearly 100 halo stars. ‘Therefore we can read the history of star formation in the compositions of the oldest stars.’ According to the multi-institute team to which Pilachowski belonged, the typical mass of the exploding stars had fallen below ten times the mass of the Sun by some 30–100 million years after star-making began in the Milky Way. That was when the production of Primo Levi’s ‘heretical’ rare earths was particularly intensive.
I All according to Hoyle
‘The grand concept of nucleosynthesis in stars was first definitely established by Hoyle in 1946.’ So William Fowler of Caltech insisted, when he won the 1983 Nobel Physics Prize for this discovery and Fred Hoyle at Cambridge was unaccountably passed over. Hoyle’s contribution can hardly be exaggerated. While puzzling out the processes in stars that synthesize the nuclei of the elements, he taught the nuclear physicists a thing or two. Crucial for our own existence was an excited state of carbon nuclei that he discovered, which favoured the survival of carbon in the stellar furnaces.
In 1957 Hoyle and Fowler, together with Margaret and Geoffrey Burbidge, published a classic paper on element formation in stars that was known ever after as B2FH, for Burbidge, Burbidge, Fowler and Hoyle. As Hoyle commented in a later textbook, ‘The parent stars are by now faint white dwarfs or superdense neutron stars which we have no means of identifying. So here we have the answer to the question in Blake’s poem, The Tyger, “In what furnace was thy brain?”’
The operation of your own brain while you read these words depends primarily on the fact that potassium behaves very like sodium, but has bigger atoms. For every impulse, a nerve cell expels potassium and admits sodium, through molecular ion channels, and then briskly restores the status quo in readiness for the next impulse. The repetition of similar chemical properties in atoms of very different mass and size had enabled chemistry’s Darwin, Dmitri Mendeleev of St Petersburg, to predict in 1871 the existence of undiscovered elements simply from gaps in his periodic table—il sistema periodico in Levi’s tongue.
When atomic physicists rudely invaded chemistry early in the 20th century, they explained the periodic table and chemical bonding as a numbers game. Behaviour depends on the electric charge of the atomic nucleus, which is matched by the count of electrons arranging themselves in orderly shells around the nucleus. The number of electrons in the outermost shell fixes the chemical properties. Thus sodium has 11 electrons, and potassium 19, but in both of them a single electron in the outermost shell is easily detached, making them suitable for playing Box and Cox in and out of your brain cells.
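The shell-counting behind that statement is simple arithmetic (a standard illustration added here): the electrons fill successive shells holding 2, 8 and 8, leaving one left over in each case:

$$
\mathrm{Na{:}}\ 2+8+1=11, \qquad \mathrm{K{:}}\ 2+8+8+1=19
$$

It is that solitary outermost electron which both metals surrender so readily.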
A similar numbers game, with shell-like structures within atomic nuclei, then explained why this nucleus is stable whilst that one will disappear. Many elements, and especially variant isotopes with inappropriate masses, retire from the scene either by throwing off small bits in radioactivity or by breaking in two in fission. As a result, Mendeleev’s table of the elements that survive naturally on the Earth comes to an end at uranium, number 92, with a typical mass 238 times heavier than hydrogen.
Physicists found that they could make heavier, transuranic elements in nuclear reactors or accelerators. Most notorious was plutonium, which in 1945 demonstrated its aptitude for fission by reducing much of Madame Butterfly’s city of Nagasaki to radioactive rubble. Relics of natural plutonium turned up later in tracks bored by its fission products in meteorites. By 1996 physicists in Germany had reached element number 112, which is 277 times heavier than hydrogen, by bombarding lead atoms with zinc nuclei. Three years later a Russian team reported the ‘superheavy’ element 114, made from plutonium bombarded with calcium and having an atomic mass of 289.
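The bookkeeping behind such recipes is plain proton-counting (a gloss added here, using the standard atomic numbers): the atomic numbers of the colliding nuclei add up to that of the new element:

$$
\underbrace{82}_{\mathrm{lead}} + \underbrace{30}_{\mathrm{zinc}} = 112, \qquad \underbrace{94}_{\mathrm{plutonium}} + \underbrace{20}_{\mathrm{calcium}} = 114
$$

The quoted atomic masses come out a little lower than the sum of the ingredients because a few neutrons boil off in the collision.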
The successful accounting, at the nuclear and electronic levels, for all the elements, their isotopes and chemical behaviour—in fine detail excruciating for students—should not blind you to the dumbfounding creativity of inanimate matter. Mother Nature mass-produced rudimentary particles of matter: lightweight electrons and heavy quarks of various flavours. The quarks duly gathered by threes into protons and neutrons, with the option of changing the one into the other by altering the flavour of a quark.
From this ungenerous start, the mindless stuff put together 118 neutrons, 79 protons and 79 electrons and so invented gold. This element’s ingenious loose electrons reflect yellow light brightly while nobly resisting the tarnish of casual chemistry. It was a feat not unlike turning bacteria into dinosaurs, and any natural philosopher should ask, ‘How come gold was implied in the specification for the cosmos?’ To be candid, the astrophysicists aren’t quite sure why so much gold was made, and some have suggested that collisions between stars were required.

Hoyle rekindled a sense of wonder about where our elements came from. Helping to fill out the picture is the analysis of meteorites. In the oldest stones falling from the sky, some microscopic grains pre-date the Solar System. Their elements, idiosyncratic in their proportions of isotopes of variant masses, are signatures of individual stars that contributed to the Earth’s cargo of elements.
By 2002 the most active of stellar genealogists, Ernst Zinner of Washington University in Missouri, had distinguished several different types of parent stars. These were low-to-intermediate mass stars approaching the end of their lives as red giants, novae periodically throwing off their outer layers, and massive stars exploding as supernovae.
‘Ancient grains in meteorites are the most tangible evidence of the Earth’s ancestry in individual stars long defunct, and they confirm that we ourselves are made of stardust,’ Zinner said. ‘My great hope is that one day we’ll find more types of these stellar fossils and new sources of pre-solar grains, perhaps in samples quarried from a comet and brought to Earth for analysis. Then we might be able to paint a more complete picture of our heavenly progenitors.’
Meanwhile astrophysicists and astrochemists can study the element factories, and the processes they use, by identifying materials added much more recently to the cauldron of the Milky Way Galaxy. Newly made elements occur in clouds of gas and dust that are remnants of stars that exploded in the startling events known as supernovae. Although ‘supernova’ means literally some kind of new star, supernovae are actually old stars blowing up. A few cases were recorded in history.
I ‘The greatest miracle’
Gossips at the Chinese court related that the keeper of the calendar prostrated himself before the emperor and announced, ‘I have observed the appearance of a guest star.’ In the summer of 1054, the bright star that we would call Aldebaran in the Taurus constellation had acquired a much brighter neighbour. The guest was visible even in daylight for three weeks, and at night for more than a year.
Half a millennium later in southern Sweden, an agitated young man with a golden nose, which hid a duelling disfigurement, was pestering the passers-by. He pointed at the Cassiopeia constellation and demanded to know how many stars they could see. Although the Danish astronomer Tycho Brahe could hardly believe his eyes, on that November evening in 1572 he had discovered a new star.
As the star faded over the months that followed, his book De Nova Stella made him famous throughout Christendom. Untrammelled by modesty or sinology, Tycho claimed that the event was perhaps ‘the greatest miracle since the world began’. In 1604, his pupil Johannes Kepler of Prague found a new star of his own ‘in the foot of the Serpent Bearer’—in Ophiuchus, we’d say today.
In 1987, in a zinc mine at Mozumi, Japan, the water inside an experimental tank of the Kamioka Observatory flashed with pale blue light. It did so 11 times in an interval of 13 seconds, recording a burst of the ghostly subatomic particles called neutrinos. Simultaneously neutrino detectors in Ohio and Russia picked up the burst. Although counted only by the handful, because of their very reluctant interaction with other matter, zillions of neutrinos had flooded through the Earth from an unusual nuclear cataclysm in the sky.
A few hours later, at Las Campanas in Chile, Ian Shelton from Toronto was observing the Large Magellanic Cloud, the closest galaxy to our own, and he saw a bright new speck of light. It was the first such event since Kepler’s that was visible to the naked eye. Austerely designated as Supernova 1987A, it peaked in brightness 80 days after discovery and then faded over the years that followed.

Astronomers had the time of their lives. They had routinely watched supernovae in distant galaxies, but Supernova 1987A was the first occurring at fairly close range that could be examined with the panoply of modern telescopes. The multinational satellite International Ultraviolet Explorer was the quickest on the case, and its data revealed exactly which star blew up. Sanduleak −69° 202 was about 20 times more massive than the Sun. Contrary to textbook expectations it was not a red star but a blue supergiant. The detection of a shell of gas, puffed off from the precursor star 20,000 years earlier, explained a recent colour change from red to blue.
Telescopes in space and on the ground registered signatures of various chemical elements newly made by the dying star. But most obvious was a dusty cloud blasted outwards from the explosion. Even without analysing it, astronomers knew that dust requires elements heavier than hydrogen and helium for its formation.
The scene was targeted by the Hubble Space Telescope soon after its launch in 1990, and as the years passed it recorded the growth of the dusty cloud. It will evolve into a supernova remnant like those identified in our own Galaxy by astronomers at the sites of the 1054, 1572 and 1604 supernovae.
By a modern interpretation, Tycho’s and Kepler’s Stars may have been supernovae of Type Ia. That means a small, dense white dwarf star, the corpse of a defunct normal star, sucking in gas from a companion star until it becomes just hot enough to burn carbon atoms in a stupendous nuclear explosion, making radioactive nickel that soon decays into stable iron. Certainly the 1987 event and probably the 1054 event were Type II supernovae, meaning massive stars that collapsed internally at the ends of their lives, triggering explosions that are comparable in brilliance but more variable and more complicated.
I Most of our supernovae are missing
The Crab Nebula in Taurus is the classic example of the supernova remnants that litter the Galaxy. Produced by the Chinese guest star of 1054, it shows filaments of hot material, rich in newly made elements, still rushing outwards from the scene of the stellar explosion and interacting with interstellar gas. At its centre is a neutron star, a small but extremely dense object flashing 30 times a second, at every wavelength from radio waves to gamma rays.
Astronomers estimate that supernovae of one kind or another should occur roughly once every 50 years in our Galaxy. Apart from the events of 1054, 1572 and 1604, historians of astronomy scouring the worldwide literature have noted only two other sightings of supernovae during the past millennium. There was an exceptionally bright one in Lupus in 1006, and a less spectacular candidate in Cassiopeia in 1181. Most of the expected events are unaccounted for.
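The size of the shortfall is easily tallied (arithmetic added here, not in the original): one supernova every 50 years implies

$$
\frac{1000\ \mathrm{years}}{50\ \mathrm{years\ per\ supernova}} \approx 20\ \mathrm{expected}, \qquad \mathrm{against\ only\ 5\ recorded}
$$

so roughly three in every four galactic supernovae of the past millennium went unremarked.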
Clouds of dust, such as those you can see as dark islands in the luminous river of the Milky Way, obscure large regions of the Galaxy. First the radio telescopes, and then X-ray, gamma-ray and infrared telescopes in space, revealed many supernova remnants hidden behind the clouds, so that the events that produced them were not seen from the Earth by visible light. The most recent of these occurred around 1680, far away in the Cassiopeia constellation, and it first showed up as an exceptionally loud source of radio waves.
NASA’s Compton gamma-ray observatory, launched in 1991, carried a German instrument that charted evidence of element-making all around the sky. For this purpose, the German astronomers selected the characteristic gamma rays coming from aluminium-26, a radioactive element that decays away, losing half of all its nuclei in a million years. So the chart of the sky showed element-making of the past few million years.
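The million-year horizon follows from the standard decay law (a gloss added here): after a time t, a radioactive sample with half-life t₁/₂ retains a fraction

$$
\frac{N(t)}{N_{0}} = 2^{-t/t_{1/2}}
$$

so aluminium-26 forged 3 million years ago has dwindled to one-eighth of its original amount, and anything much older contributes essentially nothing to the map.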
The greatest concentrations are along the band of the Milky Way, especially towards the centre of the Galaxy where the stars are most concentrated. But even over millions of years the newly made elements have remained close to their points of origin, leaving many quite compact features. ‘We expected a much more even picture,’ said Volker Schönfelder of the Max-Planck-Institut für extraterrestrische Physik. ‘The bright spots were a big surprise.’
At the same institute in Garching, astronomers looked at supernova remnants in closer detail with the German–US–UK X-ray satellite Rosat (1990–99). Bernd Aschenbach studied, in 1996, Rosat images of the Vela constellation, where a large and bright supernova remnant spreads across an area of sky 20 times wider than the Moon. It dates from a relatively close stellar explosion about 11,000 years ago, and it is still a strong source of X-rays.
Aschenbach wondered what the Vela remnant would look like if he viewed it with only the most energetic X-rays. He was stunned to find another supernova remnant otherwise hidden in the surrounding glare. Immediately he saw that it was much nearer and younger than the well-known Vela object.
With the Compton satellite, his colleagues recorded gamma-ray emissions from Aschenbach’s object, due to radioactive titanium-44 made in the stellar explosion. As half of any titanium-44 disappears by decay every 90 years, this confirmed that the remnant was young. Estimated to date from around AD 1300, and to be only about 650 light-years away, it was the nearest supernova to the Earth in the past millennium—indeed in the past 10,000 years. Yet if anyone spotted it, no record survives from any part of the world.
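The dating logic is worth spelling out (an illustrative calculation added here, using the 90-year half-life just quoted): titanium-44 from an explosion around AD 1300, observed in the mid-1990s, has been decaying for roughly 700 years, leaving a fraction

$$
2^{-700/90} \approx 2^{-7.8} \approx 0.004
$$

of the original stock. That so faint a residue remained detectable marks the remnant as young; in a remnant many thousands of years old, the titanium-44 would long since have decayed beyond any hope of detection.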
‘This very close supernova should have appeared brighter than the Full Moon, unless it were less luminous than usual or hidden by interstellar dust,’ Aschenbach commented. ‘X-rays, gamma rays and infrared rays can penetrate the dust, so we have a lot of unfinished business concerning recent supernovae in our Galaxy, which only telescopes in space can complete.’
The launch in 2002 of Europe’s gamma-ray satellite Integral carried the story forward. It was expected to gauge the relative abundances of many kinds of newly made radioactive nuclei, and to measure their motions as a cloud expands. Such measurements will put to a severe test the present theories about how supernovae and other stars manufacture the chemical elements.
I The heyday of element-making
A high priority for astronomers at the start of the 21st century is to trace the history of the elements throughout the Milky Way and in the Universe as a whole. The rate of production of freshly minted elements, whether in our own Galaxy or farther afield, is far less than it used to be. Exceptions prove the rule. An overabundance of newly made elements occurs in certain other galaxies, which are therefore obscured by very thick dust clouds.
In these so-called starburst galaxies, best seen by infrared light from the cool dust, the rates of birth and death of stars can be a hundred times faster than in the Milky Way today. Collisions between galaxies seem to provoke the frenzy of star-making. At any rate the starburst events were commoner long ago, when the galaxies were first assembling themselves in a smaller and more crowded Universe. The delay in light and other radiation reaching the Earth from across the huge volume of the cosmos enables astronomers with the right equipment to inspect galaxies as they were when very young.
Most of the Earth’s cargo of elements may have been made in starbursts in our own Galaxy 10 billion years ago or more. The same heyday of element-making seems to have occurred universally. To confirm, elaborate or revise this elemental history will require all of the 21st century’s powerful new telescopes, in space and on the ground, examining the cosmos by every sort of radiation from radio waves to gamma rays.
As powerful new instruments are brought to bear, surprises will surely come. There was a foretaste in 2002, when Europe’s XMM-Newton X-ray satellite examined a very remote quasar, seen when the Universe was one tenth of its present age. It detected three times as much iron in the quasar’s vicinity as there is in the Milky Way today.
Historians of the elements pin special hopes on two infrared observatories in space. The European Space Agency’s Herschel spacecraft is scheduled to station itself 1,500,000 kilometres out, on the dark side of the Earth, in 2007. It is to be joined in 2010, at the same favoured station, by the James Webb Space Telescope, the NASA–Europe–Canada successor to the Hubble Space Telescope. After a lifelong investigation of the chemical evolution of stars and galaxies, at the Royal Greenwich Observatory and Denmark’s Niels Bohr Institute, Bernard Pagel placed no bet on which of these spacecraft would have more to tell about the earliest history of the elements.
‘Will long infrared waves, to which Herschel is tuned, turn out to be more revealing than the short infrared waves of the James Webb Telescope?’ Pagel wondered. ‘It depends on the sequence and pace of early element-making events, which are hidden from us until now. Perhaps the safest prediction is that neither mission will resolve all the mysteries of cosmic chemical evolution, which may keep astronomers very busy at least until the 2040s and the centenary of Hoyle’s supernova hypothesis.’
E For related subjects, see Stars, Starbursts, Galaxies, Neutron stars, Black holes and Gamma-ray bursts. For a remarkable use of supernovae in cosmology, see Dark energy. For the advent of chemistry in the cosmos, see Molecules in space and Minerals in space.
In January 1972 the sea off Peru turned warm. The usual upwelling of cold, nutrient-rich water at the edge of the continent ceased. The combined effect of overfishing and this climatic blip wiped out the great Peruvian fishery, which was officially closed for a while in April 1973. The anchovies would then take two decades to recover to world-beating numbers, only to be hit again by a similar event in 1997–98.
Aged pescadores contemplating the tied-up boats in Callao, Chimbote and other fishing ports were not in the least surprised by such events, which occurred every few years. They called them El Niño, the Christ Child, because they usually began around Christmas. Before the 1972 crash, particularly severe El Niño events reducing the fish catches occurred in 1941, 1926, 1914, and so on back as far as grandpa could remember.
A celebrated Norwegian meteorologist, Jacob Bjerknes, in semi-retirement at UC Los Angeles, became interested in the phenomenon in the 1960s. He established the modern picture of El Niño as an event with far-flung connections, by linking it to the Southern Oscillation discovered by Gilbert Walker when working in India in the 1920s. As a result, meteorologists, oceanographers and climate scientists often refer to the phenomenon as El Niño/Southern Oscillation, or ENSO for short.
Walker had noticed that the vigour or weakness of monsoon rains in India and other parts of Asia and Africa is related to conditions of atmospheric pressure across the tropical Pacific Ocean. Records from Tahiti and from Darwin in Australia tell the story. In normal circumstances, pressure is high in the east and low in the west, in keeping with the direction of the trade winds. But in some years the system seesaws the other way, with pressure low in the east. Then the monsoon tends to be weak, bringing a risk of hunger to those dependent on its rainfall. Bjerknes realized that Walker’s years of abnormal pressure patterns were also the years of El Niño off western South America.
The trade winds usually push the ocean water away from the shore, allowing cold, fertile water to well up. Not only does this upwelling nourish the Peruvian anchovies, it generates a long current-borne tongue of cool water stretching across the Pacific, following the Equator a quarter of the way around the world. The pressure of the current makes the sea level half a metre higher off Indonesia than off South America, and the sea temperature difference on the two sides of the ocean is a whopping 8°C.
In El Niño years the trade winds slacken and water sloshing eastwards from the warm side of the Pacific overrides the cool, nourishing water. Nowadays satellites measuring sea-surface temperatures show El Niño dramatically, with the Equator turning red hot in the colour-coded images. The events occur at intervals of two to seven years and persist for a year or two. Apart from the effects on marine life, consequences in the region include floods in South America, and droughts and bush fires in Indonesia and Australia.
The disruption of weather patterns goes much wider, as foreshadowed in Walker’s link between the Southern Oscillation and the monsoon in India. Before the end of the century the general public heard El Niño blamed for all sorts of peculiar weather, from droughts in Africa to floods on the Mississippi. And newspaper readers were introduced to the little sister, La Niña, meaning conditions when the eastern Pacific is cooler than usual and the effects are contrary to El Niño’s. La Niña is associated with an increase in the frequency and strength of Atlantic hurricanes, whilst El Niño moderates those storms.
A major El Niño alters jet stream tracks in such a way as to raise the mean temperature at the world’s surface and in its lower atmosphere temporarily, by about half a degree C. That figure is significant because it is almost as great as the whole warming of the world that occurred during the 20th century. Sometimes simultaneous cooling due to a volcano or low solar activity masks the effect. The temperature blip was conspicuous in 1997–98.
Arguments arose among experts about the relationship between El Niño and global warming. An increased frequency of El Niño events, from the 1970s onwards and especially in the 1990s, made an important contribution to the reported rise in global temperatures towards the end of the century, when they were averaged out. So those who blamed man-made emissions of carbon dioxide and other greenhouse gases for an overall increase in global temperature wanted to say that the quickfire events were a symptom rather than a cause of the temperature rise.
‘You have to ask yourself, why is El Niño changing that way?’ said Kevin Trenberth of the US National Center for Atmospheric Research. ‘A direct greenhouse effect would change temperatures locally, but it would also change atmospheric circulation.’ William Nierenberg, a former director of the Scripps Institution of Oceanography in San Diego, retorted that to explain El Niño by global warming, it would be necessary to show that global warming caused the trade winds to stop. ‘And there is no evidence for this,’ he said.
I What the coral had to say about it
Instrumental records of El Niño and the Southern Oscillation, stretching back to the latter part of the 19th century when the world was cooler, show no obvious connection between the prevailing global temperature and the frequency and severity of the climatic blips. To go further back in time, scientists look to other indicators. Climate changes in tropical oceans are well recorded in fast-growing coral. You can count off the years in seasonal growth bands and, in the dead skeletons that build a reef, you can find evidence for hundreds of years of fluctuations in temperature, salinity and sunshine.
New Caledonia has vast amounts of coral, second only to the Great Barrier Reef of nearby Australia, and scientists gained insight into the history of El Niño by boring into a 350-year-old colony of coral beneath the Amédée lighthouse at the island’s south-eastern tip. Where New Caledonia lies, close to the Tropic of Capricorn in the western Pacific, the ocean water typically cools by about 1°C during El Niño. The temperature changes are recorded by fluctuations in the proportions of strontium and uranium taken up from the seawater when the coral was alive.
A team led by Thierry Corrège of France’s Institut de Recherche pour le Développement reconstructed the sea temperatures month by month for the period 1701–61. This included some of the chilliest times in the Little Ice Age, which ran from 1400 to 1850 and reached its coldest point around 1700. The western Pacific at New Caledonia was on the whole about 1°C cooler than now, yet El Niño’s behaviour then was very similar to what it is today.
‘In spite of a decrease in average temperatures,’ Corrège said, ‘neither the strength nor the frequency of El Niño appears to have been affected, even during the very coldest period.’ That put the lid on any simple idea of a link between El Niño and global warming. But the team also saw the intensity of El Niño peaking in 1720, 1730 and 1748, and wondered if some other climate cycle was at work.
In view of the huge impacts on the world’s weather, and the disastrous floods and droughts that the Pacific changes provoke, you might well hope that climate scientists would have a grip on them by now. To help them try to forecast the onset, intensity and duration of El Niño they have supercomputers, satellites and a network of buoys deployed across the Pacific by a US–French–Japanese cooperative effort. But results so far are disappointing.
In 2000 Christopher Landsea and John Knaff of the US National Oceanic and Atmospheric Administration evaluated claims that the 1997–98 El Niño had been well predicted. They gave all the forecasts poor marks, including those from what were supposedly the most sophisticated computer models. Landsea said, ‘When you look at the totality of the event, there wasn’t much skill, if any.’

Regrettably, the scientists still don’t know what provokes El Niño. Back in the 1960s, the pioneer investigator Bjerknes thought that the phenomenon was self-engendered by compensating feedbacks within the weather machine of air and sea, so that El Niño was a delayed reaction to La Niña, and vice versa. The possibility of such a natural seesaw points to a difficulty in distinguishing cause and effect.
Big efforts went into studying the ocean water, its currents and its changing levels, both by investigation from space and at sea, and by computer modelling. A promising discovery was of travelling bumps in the ocean called Kelvin waves, due to huge masses of hot water proceeding from west to east below the surface. The waves can be generated in the western Pacific by brief bursts of strong winds from the west—in a typhoon, for example.
Although they raise the sea surface by only 5–10 centimetres, and take three months to cross the ocean, the Kelvin waves provide a mechanism for transporting warm water eastwards from a permanent warm pool near New Guinea, and for repressing the upwelling of cold water. The US–French team operating the Topex-Poseidon satellite in the 1990s became clever at detecting Kelvin waves, using a radar altimeter to measure the height of the sea surface. But they showed up at least once a year, and certainly were not always followed by a significant El Niño event.
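Those two numbers imply a stately pace (a rough estimate added here, taking the equatorial Pacific to span about 15,000 kilometres):

$$
\frac{15{,}000\ \mathrm{km}}{3\ \mathrm{months}} \approx \frac{1.5\times10^{7}\ \mathrm{m}}{7.8\times10^{6}\ \mathrm{s}} \approx 2\ \mathrm{m/s}
$$

about walking speed, which is in line with what oceanographers expect for such subsurface waves.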
By the end of the century the favourite theoretical scheme for El Niño was the delayed-oscillator model. This was similar to Bjerknes’ natural seesaw, with the special feature, based on observations at sea, that the layer of warm surface water becomes thinner in the western Pacific as it grows thicker near South America, at the onset of El Niño. A thinning at the eastern side begins just before the peak of the ocean warming.
To say that El Niño is associated with a slackening of the trade winds begs the question of why they slacken. Is the slackening really self-engendered by the air–sea system? Or do the trade winds respond to cooling due to other causes, such as volcanic eruptions or a feeble Sun? The great El Niño of 1983 followed so hard on the heels of the great volcanic eruption of El Chichón in Mexico, in 1982, that each tended to mask the global effect of the other. Perhaps El Niño is a rather clumsy thermostat for the world.
E For related subjects, see Ocean currents and Climate change. For the use made of El Niño by Polynesian navigators, see Prehistoric genes.