Francesco Sylos Labini
Science and the Economic Crisis: Impact on Science, Lessons from Science, 1st ed. 2016
Francesco Sylos Labini
Enrico Fermi Center and Institute for Complex Systems (National Research Council), Rome, Italy
ISBN 978-3-319-29527-5 e-ISBN 978-3-319-29528-2
DOI 10.1007/978-3-319-29528-2
Library of Congress Control Number: 2016931354
© Springer International Publishing Switzerland 2016
The book will be published in Italian by another publisher.
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made.
Printed on acid-free paper
This Springer imprint is published by Springer Nature. The registered company is Springer International Publishing AG, Switzerland.
The world is in the grip of the biggest economic crisis for more than 80 years. Nearly all nations are affected, though, of course, some are more affected than others. The key political question of today is: "What should be done to bring this crisis to an end?"
In this book, Francesco Sylos Labini, who is a researcher in physics, takes an unusual approach to the crisis by relating it to the situation in science. How is this economic crisis related to scientific research? A little reflection shows that this link is in fact very close. The neoliberal economic policies, which have dominated for the past 30 or so years, are based on neoclassical economics. This looks very much like a science such as physics, since it consists of equations and mathematical models. But is it really scientific? Should we trust the predictions of neoclassical economics in the same way that we trust those of physics? Sylos Labini gives good reasons for thinking that we should not, and that neoclassical economics is more of a pseudo-science, like astrology, than a genuine science, like astronomy.
Sylos Labini begins his argument by analyzing predictions in the natural sciences. In some areas, such as the future positions of planets and comets, predictions can be made with extraordinary accuracy; but this is not always the case. Predictions of tomorrow's weather, or of when volcanic eruptions or earthquakes will occur, are much less certain. Let us consider meteorology. Here the laws governing the behavior of the atmosphere are precise and well established, but there is a difficulty—the so-called butterfly effect. A small disturbance, such as a butterfly flapping its wings in Brazil, can be magnified and cause a hurricane in the United States. This leads to what is called chaotic behavior—a subject which has been studied mathematically, and in which Sylos Labini is an expert. Despite the difficulties caused by chaos, weather forecasting can be, and has been, improved by better collection of observations, better mathematical models, and the use of more powerful computers.
If we turn from this to neoclassical economics, we see that the situation is completely different. As Sylos Labini points out, we do not know the laws of economic development in the way that we know the laws governing the atmosphere. The butterfly effect seems to apply to the world economy, however, since the failure of a few sub-prime mortgages in a region of the United States led to a worldwide economic recession. Yet neoclassical economists take no account of the mathematics of chaos, whose use is now standard in the natural sciences. Although weather forecasts can be trusted up to a point, little credence should be given to those of neoclassical economics, and yet, as Sylos Labini points out, neoclassical economics has nonetheless achieved a cultural hegemony. In order to explain how this has been possible, Sylos Labini turns to a consideration of the organization of research, and, more generally, of the universities.
What is interesting is that neoliberal policies have the same general effect in the universities as they do in society as a whole. In society, their tendency has been to concentrate wealth in fewer and fewer hands. The richest 1 % has grown richer and richer at the expense not only of the working class but also of the old middle class. Similarly, in the university sector, more and more funding is going to a few privileged universities and their researchers at the expense of the others. This is justified on the grounds that these universities and researchers are better than the others, so that it is more efficient to concentrate funding on them. To find out which universities and researchers are better, regular research assessments are conducted, and they are used to guide the allocation of funds. But how accurate are these research assessments in picking out the researchers who are better from those who are not so good? Sylos Labini gives us good reasons for thinking that these research assessments, far from being accurate, are highly misleading.
One striking result, which he mentions, is known as the Queen's question. Lehman Brothers collapsed in September 2008 and started the great recession. By chance, Queen Elizabeth visited the London School of Economics to inaugurate a new building in November 2008, and here she asked her famous question: "Why did no one see the economic crisis coming?" Of course the neoclassical economists of the London School of Economics not only did not foresee the crisis, but they had been advocating the very neoliberal policies that led to it. In December 2008, the UK's research assessment exercise reported its results. These showed that the field that had obtained the highest score of any in the country was none other than economics, which in the UK had by then become almost exclusively neoclassical economics. If the results of this assessment were to be believed, then economics was the field in which the best research in the UK had been done in the preceding 5 years—better than the research in physics, computer science, or the biomedical sciences. Obviously this shows that something had gone very wrong with research assessment.
Sylos Labini is an active member of Return On Academic ReSearch (Roars.it), an organization that opposes the attempts of the Italian government to introduce into Italy a system of research assessment modeled on that of the UK. His book explains the failings of such research assessment systems. One interesting argument he uses concerns some of the major discoveries in physics and mathematics made in the last few decades. In physics he discusses high-temperature superconductivity, the scanning tunneling microscope, and graphene; and in mathematics Yitang Zhang's proof of an important theorem in prime number theory. All these discoveries were made by unknown individuals working in low-rated institutions, that is to say, by researchers who would have had their research funding cut by a rigorous implementation of research assessment exercises. The point is that scientific discovery is unpredictable, and one has a better chance of increasing important discoveries by spreading funds more evenly rather than by concentrating them in the hands of a small elite.
In the final part of his book, Sylos Labini points out that the same neoliberal push towards inequality is to be found throughout Europe. Research funds are being concentrated more in Northern Europe and less in Southern Europe. Sylos Labini argues not only for a more egalitarian distribution of research funds, but also for an overall increase in the funding for research and development. This is the strategy that will produce innovations capable of revitalizing the economies and putting them once more on a growth path. Sylos Labini makes a very strong case for his point of view. Let us hope that a new generation of politicians will be willing and able to implement his ideas. Meantime his book is to be strongly recommended to anyone seeking to understand the current crisis and its ramifications.
Donald Gillies (Emeritus Professor of Philosophy of Science and Mathematics, University
College London)
July 2015
I am grateful to Angelo Vulpiani, one of my mentors in physics. In addition to our countless interesting discussions on the role of forecasts in science, I thank him for painstakingly commenting on a preliminary version of this work. I am also thankful for his unwavering encouragement.
Several friends and colleagues who have read early versions of this work, or specific chapters, have given me valuable advice and suggestions. In particular I thank Lavinia Azzone, Antonio Banfi, David Benhaiem, Andrea Cavagna, Guido Chiarotti, Francesco Coniglione, Stefano Demichelis, Luca Enriques, Donald Gillies, Grazia Ietto-Gillies, Michael Joyce, Martin Lopez Corredoira, Laura Margottini, Enzo Marinari, Maurizio Paglia, Daniela Palma, Roberto Petrini, Francesco Sinopoli, Giorgio Sirilli, Fabio Speranza, and Marco Viola.
Many ideas presented in this work come from the blog Return On Academic ReSearch (Roars.it), which has given me a privileged observation point on several issues. I am therefore grateful to all its editors for our extensive daily discussions ever since we embarked on the Roars.it adventure in 2011, and for sharing my commitment to be both a researcher and a citizen. Each one of them has taught me a lot and has influenced my ideas on some of the issues raised in this work, especially, but not exclusively, with regard to research and higher education issues. My Roars friends and colleagues include the following: Alberto Baccini, Antonio Banfi, Michele Dantini, Francesco Coniglione, Giuseppe de Nicolao, Paola Galimberti, Daniela Palma, Mario Ricciardi, Vito Velluzzi and Marco Viola.
I thank Luciano Pietronero, Andrea Gabrielli, and Guido Chiarotti for our numerous discussions on many topics touched upon in this work, and specifically for their collaboration in the study on the diversification of national research systems, as well as for sharing with me their results on "economic complexity" that I will discuss in Chaps. 2 and 4. I had fruitful discussions with Giulio Cimini and Matthieu Cristelli on the use of big data in economics. I also thank Mauro Gallegati for pointing out several references that have allowed me to deepen various concepts regarding neoclassical economics.
I am grateful to Donald Gillies and Grazia Ietto-Gillies for many interesting exchanges of views; their comments have helped me to clarify my views on different issues, from the problem of research evaluation (Chap. 3) to the criticism of neoclassical economics (Chap. 2) to the relation between basic research, innovation and technical progress (Chap. 4). In particular, Donald's writings have greatly influenced my outlook on research evaluation and neoclassical economics.
Fabio Cecconi, Massimo Cencini, and Angelo Vulpiani were my co-organizers of the meeting "Can we predict the future? Role and limits of science", which prompted me to investigate the role of forecasts in the different scientific fields discussed in Chap. 1.
I have had the good fortune of sharing with José Mariano Gago—who alas is no longer with us—Amaya Moro Martin, Gilles Mirambeau, Rosario Mauritti, Alain Trauttman, and Varvara Trachana many discussions, arguments and science initiatives in Europe, which have made me think about the central role of research and the dramatic nature of the crisis of the European Union which I discuss in Chap. 4.
Lastly, I am grateful to my wife Valeria, who has stood by my side throughout this endeavor and encouraged me with loving intelligence and patience.
Despite having had the good fortune to receive comments and suggestions from many distinguished friends and colleagues, everything written in this book is my sole responsibility.
November 2015
I have the privilege of devoting most of my time to trying to solve problems of theoretical physics that are quite distant from everyday life. However, living in Italy—a country mired in a series of crises that affect me closely both as a scientist and as a citizen—has prompted me to bring into the public debate a number of issues pertaining to the world of scientific research. I firmly believe that this is a crucial imperative in times like these, when ideology and economic interests not only drive public agendas and government policies, but have also seeped into schools, universities and culture at large.
We are faced with an economic crisis that has brought the world economy to its knees and is combined with an economic crisis pertaining specifically to Italy. This situation overlaps with, and is a consequence of, a political crisis with distinctive characteristics, causes, and developments at the international, European, and Italian levels. First and foremost, however, this is a cultural crisis on a global scale. The grand utopias that dominated our recent and immediate past seem to have vanished. Equality, brotherhood, freedom seem to be words that today have nothing to do with our reality, where inequalities have never been so great, freedom is being reduced gradually in favor of security, and solidarity is overwhelmed by arrogance and indifference. Furthermore, because of insurmountable inequalities, the possibility of any individual's situation changing for the better is currently regressing in many countries, as is the role of higher education as the driving force of social mobility.
Hence, what we are currently facing is essentially a political and cultural crisis that affects our society as a whole, and not merely an economic and social crisis. Scientific research is far from immune: on the contrary, it is particularly hard-hit by this crisis. On the one hand, the scarcity of research funds has become a structural problem in many countries, particularly in Southern Europe, where many young scientists are faced with very limited opportunities for pursuing their research activities on a permanent basis. On the other hand, fierce competition is distressing and distorting the very nature of research work. Scientific research thus risks being taken completely off track as a result of this pressure.
The fact that the economic crisis has been tackled primarily through austerity policies in the very countries exposed to the greatest financial distress has further stifled scientific research and sparked a vicious cycle that prevents scientists from undertaking innovative research projects that could actually contribute to ending the crisis. Indeed, the very intellectual forces capable of producing new ideas and energies have been marginalized and gridlocked in a limbo of uncertainty from which there is no clear exit. Due to the absence of catalysts, subsequent generations may now be isolated and deprived of prospects, both individually and collectively.
Science can provide crucial tools that could be instrumental both in comprehending the problems of our time and in outlining perspectives that might constitute a solid and viable alternative to the rampant law of the jungle—a misconstrued Social Darwinism—that is currently very widespread. The present work ponders the interface between science dissemination and science policy, with some digressions into history and the philosophy of science. It aims to show how the ideas developed over the past century in the natural sciences (both in general and specifically in meteorology, biology, geology, and theoretical physics, much neglected in the public debate) actually play a major role in understanding the seemingly diverse and unrelated problems lying at the heart of the current crisis, and may suggest plausible and original solutions. As we advance on this voyage across modern science, one of the main threads will be finding an answer to this crucial question: what are the practical, economic and cultural benefits of basic research? We will be focusing mostly on the so-called hard sciences, as they have a more immediate impact on technology. Nevertheless, several arguments developed in the course of this work also apply to science in the widest sense, including the social sciences and the humanities. Culture, of which science is a significant albeit small part, is the cornerstone of our society.
1 Forecast
The Scientific Method
Anomalies and Crisis
Paradigms and Epicycles
Experiments and Observations
The Empires of the Times
Determinism and the Butterfly Effect
Probability and Many-Body Systems
Forecasts and Decisions
How Will the Weather Be Tomorrow?
Extreme Weather Events
Climate Changes
Be Prepared for the Unexpected
Spread of Diseases and New Viruses
Recurrences and Big Data
Science, Politics and Forecasts
References
2 Crisis
Realism and Rigor
The Queen’s Question
Which Crisis?
There Will Be Growth in the Spring
The Disappearance of the Time
The Three Pillars of Equilibrium
The Myth of Equilibrium
Efficiency and Unpredictability
The Neoclassical Dictatorship
The Theft of a Successful Brand
The Growth in Inequality
The Techno-evaluation Era
Evaluation and Creativity
The Misunderstanding of Competition
History Teaches but Does not Have Scholars
The Time of the Great Navigators
Physics’ Woodstock
Spaces to Make and Correct Mistakes
Playing with a Sticky Tape
Primes Takeaways
Selecting Pink Diamonds
The Scientific Forger
Tip of the Iceberg?
The Dogma of Excellence
The ‘Harvard Here’ Model
References
4 Politics
The Basic Research at the Roots of Innovation
Micro-Macro Hard Disks
Applied Research and Applications of Basic Research
The Role of the State and Risk in Research
Diversification and Hidden Abilities
Diversification of Nations’ Research Systems
The Four-Speed Europe
The Sacrifice of Young Generations
European Science Policy: Robin Hood in Reverse
Some Ideas for a Change
They Have Chosen Ignorance
References
© Springer International Publishing Switzerland 2016
Francesco Sylos Labini, Science and the Economic Crisis, DOI 10.1007/978-3-319-29528-2_1
1 Forecast
Enrico Fermi Center and Institute for Complex Systems (National Research Council), Via dei Taurini 19, 00185 Rome, Italy
Francesco Sylos Labini
Email: Francesco.syloslabini@roma1.infn.it
The Scientific Method
Richard Feynman, who once referred to himself as a "Nobel physicist, teacher, storyteller, and bongo player", was an original and eccentric character. He is remembered as one of the most famous theoretical physicists of the last century, the unforgettable author of the "Feynman Lectures on Physics",1 among the most studied physics textbooks in the world, and the brilliant speaker who, during a memorable lecture, explained how scientific research works as follows2:
Let me explain how we look for new laws. In general we look for new laws through the following process: first we guess it. Then we calculate the consequences of this guess, to see what this law would imply if it were right. Then, we compare the results of the computation to nature, to experimental experience, to see if it works. If the theoretical results do not agree with experiment, the guess is wrong. In this simple statement is the key of science. It does not matter how beautiful your hypothesis is, it does not matter how smart the person who formulated it is, or what his name is. If it does not agree with the experiments, the hypothesis is wrong. […] In this way we can show that a theory capable of making predictions is wrong. We cannot, however, show that it is correct; we can only show that it is not wrong. This is because in the future there could be a greater availability of experimental data to compare with a larger set of consequences of the theory, so that we may perhaps find that the theory is wrong. We can never be sure that we have the correct theory, only that we do not yet have a wrong one.
In a simple and effective way, Feynman explained the concept of a scientific theory's falsifiability, formulated in a more systematic way by the Austrian-born philosopher and naturalized British citizen Karl Popper.3 According to Popper, experimental observations in favor of a theory can never prove it definitively; observations can only show that a theory is wrong. In fact, a single experiment with contradictory results is enough for its refutation.
Popper’s criterion was however refined by 20th century philosophers of science because, when
Trang 15considering a scientific theory within a mature field in which the observed phenomena are far fromtheoretical predictions, various inferential steps may mediate them, so that the rejection of a singleconjecture may not imply the refutation of the theory itself.4 As physicist and science historian PierreDuhem first noted at beginning of the 20th century, for a very advanced discipline, such as physics,one cannot test a single hypothesis in isolation, because to derive empirical consequences it is
necessary to assume also a number of auxiliary hypotheses For this reason, a very elaborate andhigh-level theory may be overturned only gradually by a series of experimental defeats, rather thanfrom a single wrong experimental prediction.5 A good criterion is the following: a theory is scientific
if, and only if, it is experimentally confirmable—that is, if the theory is able to acquire a degree ofempirical support by comparing its predictions with experiments To be confirmable, a theory must
be expressed in a logical and deductive manner, such as to obtain from a universal statement, in arigidly linked way, one or more particular consequences that are empirically verifiable
Traditionally, therefore, the scientist's work is to guess the theoretical hypotheses, seeking to build a coherent logical framework that is capable of interpreting experimental observations. These propositions are naturally expressed in the "language of nature", mathematics, as Galileo Galilei first claimed in his 1623 book "Il Saggiatore". Precision and mathematical rigor in the theoretical description and accuracy in the experimental measurements are two sides of the same coin. In physics we can, in fact, distinguish correct theories from incorrect ones in a simple way: the two become more and more clearly distinguishable as the experimental accuracy increases. Moreover, as we will see later, as one proceeds to more accurate measurements, one gains access to an increasing amount of information that enables an ever-deeper understanding of the physical phenomena.
Since the laws of nature are by definition universal and unchanging, in other words the same at any place and time, the knowledge of these laws makes it possible to formulate predictions testable with experiments conducted under controlled conditions, in order to eliminate or minimize the effects of external factors not considered by the theory. The result of these experiments is, given the same conditions, universal, i.e., repeatable in any other place or time. The corroboration of a theory through predictions confirmed by reproducible experiments is therefore one of the pillars of the scientific method. A physical theory, through a mathematical formulation, provides the value of some parameters that characterize a given system and that can be measured. If the parameter values derived from the theory agree with the observed ones, within the limits of experimental errors, then the theory provides an explanation of the phenomenon. Let us consider a few historical examples to illustrate the use of predictions as a test of a scientific theory's correctness.
Anomalies and Crisis
Mercury, Venus, Mars, Jupiter and Saturn are the only planets visible to the naked eye in the sky. Until the late 1700s, it was thought that no others existed, but in 1781 the British astronomer William Herschel, during an observational campaign of double stars (that is, stars orbiting around each other), accidentally discovered a body which would later prove to be the planet Uranus. Observations of the body's orbital motion around the Sun revealed anomalies with respect to the predictions of Newton's law of gravity. At that time, these anomalies represented a major scientific problem. In fact, in the 19th century astronomy was a reference science, which aimed to measure with great accuracy the positions of celestial bodies and to interpret the observations through Newton's theory of gravity: these measurements and the corresponding theoretical calculations were at that time more accurate than in any other scientific discipline. Indeed, the regularities of the motions of heavenly bodies had been known since ancient times, but only in the Renaissance, thanks to the work of Tycho Brahe, Johannes Kepler and Galileo Galilei, was a large body of very accurate observations recorded. Isaac Newton used this knowledge to identify the mathematical laws that can precisely explain the different observations. Newton's laws of motion were shown to be so precise that any other observation in any other scientific field that did not prove compatible with them could not be considered correct. Indeed, these laws were also applied to chemistry and engineering problems and provided the rationale for the entire technological progress that has occurred since their discovery. In addition, Newton, thanks to the further hypothesis that the force of gravity weakens in a certain way with distance, was able to find a comprehensive explanation of planetary orbits, comets and tides. In particular, Newton's law of gravitation assumes that the force of gravity decreases as a power law6 of the distance between two bodies: doubling the distance between two bodies weakens the gravitational force between them by a factor of four.
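In formulas, the inverse-square dependence described above can be written as follows (a standard statement of Newton's law, not a formula taken verbatim from the book), with G the gravitational constant, m_1 and m_2 the two masses and r their distance:

\[ F(r) = G\,\frac{m_1 m_2}{r^{2}}, \qquad F(2r) = G\,\frac{m_1 m_2}{(2r)^{2}} = \frac{F(r)}{4}. \]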
To interpret the anomalies in the trajectory of Uranus, rather than questioning the correctness of Newton's law of gravitation, it was hypothesized that they were due to the gravitational effects of an eighth planet that had not yet been observed. This hypothesis corresponded to the introduction, for the first time in astronomy, of "dark matter": dark matter was hypothesized to explain some differences between observations and theoretical predictions through its gravitational effects on the position of an already known planet. The problem was then to find other independent evidence of the existence of this object. In current times, a conceptually similar situation is found in the generally accepted cosmological model: to explain some observations, which would otherwise not be in agreement with the predictions of the model, it is necessary to introduce dark matter (and now also dark energy). We will discuss the role of dark matter in modern astrophysics later; in 1846 the search for an explanation of the anomalous motion of Uranus led to the discovery of the eighth planet, Neptune. In that case, therefore, the hypothesis of the existence of dark matter, introduced through its gravitational effects, was verified by direct observations guided by calculations done in the framework of Newtonian gravity.
The calculations of the mass, distance and other orbital characteristics of the new planet were carried out by the French astronomer Urbain-Jean-Joseph Le Verrier and the British astronomer John C. Adams. Technically, they had to solve for the first time the inverse perturbation problem: instead of calculating the perturbations of the orbit of a given object produced by another planet with known characteristics, the properties of the unknown object had to be inferred from the observed orbital anomalies of Uranus. The planet thus hypothesized, named Neptune, was then observed for the first time less than a degree from the position predicted by Le Verrier: for theoretical astronomy it was a remarkable triumph, as Newton's law of gravitation was spectacularly confirmed.7
A similar, but in a way opposite, situation to that of Uranus occurred again in the 19th century in the case of Mercury. Small irregularities in its trajectory were observed; to interpret them, the existence of another planet inside its orbit was assumed, as had been done for Uranus. This hypothetical planet was named Vulcan, and was held responsible, through its gravitational effects, for the observed anomalies of Mercury's orbit. In this case, however, "dark matter" was found not to be the correct explanation, and Vulcan, in fact, was never observed.8
According to the Kepler’s first law, derived from Newton’s law of gravity, the planets revolvearound the Sun along elliptical orbits with the Sun at one of the two focal points.9 This law is derivedneglecting the gravitational action of the other planets, which, however, are responsible for smallperturbations caused by the planets’ relatively small masses These perturbations generate the
Trang 17precession of the point where the planet is closest to the Sun (perihelion): this means that the planet’strajectory does not lie in a single ellipse In fact the orbit does not close, with the resulting effect thatthe ellipse does not remain the same but “moves”, having as the Sun as one of the foci, and thereforemakes a rosette motion In this way, the perihelion changes position in time During the 19th century,the precession of Mercury’s perihelion was measured as equal to 5600 s of arc for century.10 Themotion of the Mercury’s perihelion was calculated using Newton’s theory, considering the sum of thegravitational effects of the Sun and of the other planets The value derived from the theory, however,was different, although by a small amount, from the observed one.
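The rosette motion described above can be pictured with a standard parametrization of a slowly precessing Kepler orbit (the notation below, with semi-major axis a, eccentricity e and longitude of perihelion ϖ, is the usual one for orbital elements and is not taken from the book):

\[ r(\theta) = \frac{a\,(1-e^{2})}{1 + e\,\cos\bigl(\theta - \varpi(t)\bigr)}, \qquad \varpi(t) = \varpi_{0} + \dot{\varpi}\,t . \]

For an isolated two-body system \(\dot{\varpi} = 0\) and the ellipse stays fixed; planetary perturbations make \(\dot{\varpi}\) small but nonzero, so the ellipse slowly rotates around the Sun, and the tiny part of \(\dot{\varpi}\) left unexplained for Mercury is precisely the anomaly discussed here.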
In 1898 the American astronomer Simon Newcomb gave the value of this difference as 41.24 arc seconds per century,11 with a measurement error of only 2 arc seconds per century. Newcomb considered several possible causes of this anomaly: the fact that the Sun is not perfectly spherical, the presence of a ring or a group of planets inside the orbit of Mercury, a great expanse of diffuse matter similar to that reflecting the zodiacal light, and, finally, a ring of asteroids located between Mercury and Venus. By making the calculations for the different cases, within the same framework of Newton's theory, Newcomb concluded, however, that none of these possible causes could explain the observations.
The hypothesized planet Vulcan was never observed; instead, Albert Einstein explained the anomalies of Mercury in his famous work of 1915, in which he introduced the theory of general relativity. In particular, Einstein presented calculations giving a value for the anomalous precession of Mercury's perihelion of 42.89 arc seconds per century, well within the measurement error reported by Newcomb.12 Mercury's perihelion precession very quickly became one of the three main observational confirmations of general relativity, together with the deflection of light passing close to the Sun and the redshift of the light13 emitted from a type of very compact star called a white dwarf. Einstein's new theory of gravitation completely changed astrophysics and modern cosmology, providing a new conceptual framework for relating the effects of gravity, space and time.
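As a numerical illustration of the size of this effect, the first-order general-relativistic formula for the perihelion advance per orbit, \(\Delta\varphi = 6\pi G M_{\odot} / [c^{2} a (1-e^{2})]\), can be evaluated with commonly quoted values of the constants and of Mercury's orbital elements. The short sketch below is only an illustration under these assumptions; it is not code or data from the book.

```python
import math

# Commonly quoted approximate values (assumed here for illustration).
GM_SUN = 1.32712e20      # gravitational parameter of the Sun, m^3/s^2
C = 2.99792458e8         # speed of light, m/s
A = 5.7909e10            # semi-major axis of Mercury's orbit, m
E = 0.2056               # eccentricity of Mercury's orbit
PERIOD_DAYS = 87.969     # Mercury's orbital period, days

# First-order general-relativistic perihelion advance per orbit (radians).
delta_phi = 6.0 * math.pi * GM_SUN / (C**2 * A * (1.0 - E**2))

# Convert to arc seconds per Julian century.
orbits_per_century = 36525.0 / PERIOD_DAYS
rad_to_arcsec = 180.0 / math.pi * 3600.0
advance = delta_phi * orbits_per_century * rad_to_arcsec

print(f"Relativistic perihelion advance: {advance:.1f} arcsec per century")
# Prints roughly 43 arcsec per century, the order of magnitude of the
# anomalous precession discussed in the text.
```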
In fact, general relativity describes the gravitational force no longer as an action between distant massive bodies taking place in ordinary three-dimensional space, as in the Newtonian theory, but as the effect of a physical law that binds the distribution of mass and energy to the geometry of space-time itself.14 The equations formulated by Einstein to describe the force of gravity are similar to those that characterize the properties of an elastic medium. In this description, gravitational effects are due to the distortion of this medium caused by the presence of a large enough mass—like a star. For example, the Sun locally deforms the elastic medium in which it is embedded, that is, space-time: the force of gravity is thus interpreted as a local curvature of space-time. As a result of this deformation, light rays may not propagate in a straight line: for this reason, the position of a star in the sky when it appears along the line of sight with the Sun (observable during a total solar eclipse) is slightly different from that observed when the Sun is located away from this direction. On the one hand, the measurement of this small deflection made it possible to discriminate between Newton's and Einstein's theories of gravity. On the other hand, general relativity is applied to calculate the characteristics of astrophysical systems with large mass (such as the effects of gravitational lenses in clusters of galaxies) or objects rotating at great speed (such as binary pulsars), which otherwise could not be explained by Newtonian gravitation.
Today, frontier research in physics tries to develop a theoretical framework that can unify the various forces of nature, and in doing so seeks a new formulation of the force of gravity as well. At the moment, however, various possible directions have been pursued from a theoretical point of view, but none has yet been subjected to stringent empirical verification, since only very limited observations are available and the relevant experiments are very difficult to perform.
Paradigms and Epicycles
In his famous 1962 essay "The Structure of Scientific Revolutions",15 the philosopher of science Thomas Kuhn developed a theory of scientific progress, with reference to the hard sciences,16 that quickly became a landmark in the philosophy of science. According to Kuhn, science develops through periods of "normal science", characterized by the predominance of a certain paradigm, which are occasionally interrupted by "revolutions" at the end of which the old paradigm is replaced by a new one.17 For most of the time, science develops in a normal way: this is when all scientists working in a certain field, except maybe a few dissidents, accept the same dominant paradigm. Under this paradigm, scientists make progress steadily, though perhaps a bit slowly. Their work can be seen as that of "puzzle" solvers, i.e., solvers of difficult problems whose solution requires knowledge of the field's state of the art and mastery of its techniques.
From time to time, however, a period of revolution takes place, during which the previously dominant paradigm is criticized by a small number of revolutionary scientists. Although most researchers in a certain scientific field agree with the dominant paradigm, and therefore consider the new revolutionary approach absurd, the small group of revolutionary scientists may develop the new paradigm far enough to persuade their colleagues to accept it.18 In general, a paradigm shift is induced by the refinement of some experimental technique, which in turn occurs as a consequence of a technological innovation. Thus a revolutionary shift from an old to a new paradigm takes place.
The great physicist Ludwig Boltzmann had already developed an analogous idea in the second half of the 19th century19:
The man on the street might think that new notions and explanations of phenomena are gradually added to the bulk of the existing knowledge […] But this is untrue, and theoretical physics has always developed by sudden jumps […] If we look closely into the evolution process of a theory, the first thing we see is that it does not go smoothly at all, as we would expect; rather, it is full of discontinuities, and at least apparently it does not follow the logically simplest path.
Although revolutions occur only occasionally in the development of science, these revolutionary periods correspond to the most interesting times, when a certain scientific field undergoes a major advance. Kuhn's model of scientific development is considered a good interpretive scheme of the history of science, and it applies not only within the natural sciences considered by Kuhn, but also to science in a broader sense, including mathematics and medicine.20
Historically, the most representative example of this situation, i.e., of the general model of scientific revolutions, is surely the Copernican revolution.21 From the days of ancient Greece up to Copernicus, astronomy had been dominated by the Aristotelian-Ptolemaic paradigm, in which the Earth was considered to be stationary at the centre of the universe. The various movements of the heavenly and sublunary bodies were described by the mechanics of Aristotle, and, according to Ptolemy:
The goal that an astronomer must have is this: to show that the phenomena of the sky are described by circular and uniform motions.22
His book, known by its Arabic name "The Almagest" and dated to around 150 AD, was in fact the first systematic mathematical treatise to offer a detailed explanation and quantitative analysis of celestial motions. It remained the primary reference of astronomy for more than a thousand years.
The astronomer had to describe and predict the movements of the Sun, the Moon and the planets as accurately as possible by using the Ptolemaic scheme of epicycles. The epicycle was introduced because it was observed that the planets did not remain at the same distance from the Earth, as would be expected if they followed circular orbits: this is shown by their apparent changes in brightness over time. It was also observed that the apparent motion of the planets against the background stars is not always in the same direction as that of the Sun and the Moon: from time to time, in fact, the movement of a planet in the sky reverses and becomes retrograde. This fact appeared difficult to reconcile with the hypothesis that the planets follow circular and uniform orbits around the Earth. It was therefore hypothesized that a planet rotates on a smaller circular orbit, the epicycle, whose centre moves along the main circular orbit (the deferent) with the Earth at its center. As the observations became more precise, the number of epicycles grew, so that today the word epicycle has become synonymous with "ad hoc hypothesis".
This was the normally accepted science at the time of Copernicus, who, trying to solve the problem of planetary motion that Ptolemy and his successors had not been able to explain satisfactorily, introduced the hypothesis of the Earth's motion. By trying to reform the techniques used in the calculation of the planets' positions, Copernicus therefore challenged the dominant paradigm, suggesting that the Earth rotated on its axis while moving around the Sun. His results were based on mathematical calculations with a level of sophistication and detail equal to those of the Ptolemaic system. These calculations were published in his book "De revolutionibus Orbium Caelestium" in 1543. This publication inaugurated a revolutionary period, constituting perhaps the point of transition from medieval to modern society, during which the old Aristotelian-Ptolemaic paradigm was overthrown and replaced by a new paradigm that was later formulated in detail by Isaac Newton in his "Philosophiae Naturalis Principia Mathematica" (1687).
The triumph of the Newtonian paradigm therefore initiated a new period of normal science in astronomy that lasted from around 1700 to around 1900. During that time, the dominant paradigm was composed of Newtonian mechanics and Newton's law of gravity, and a normal scientist was expected to use these tools to explain the movements of celestial bodies and comets, the perturbations of the orbits of the planets, and so on. The hypothesis of the existence of unobserved bodies responsible for gravitational perturbations, like Neptune and Vulcan, can be seen as analogous to the introduction of epicycles in the Ptolemaic model: in order not to change the paradigm, some ad hoc hypotheses were introduced. However, these assumptions were then compared with the observations, and, as we have seen in the cases of Uranus and Mercury, the outcomes were opposite. On the one hand, Neptune was discovered thanks to the perturbations of Uranus. On the other hand, the Einsteinian revolution, taking place between 1905 and 1920, provided a new explanation for the motion of Mercury. This was not interpreted as the effect of another planet (Vulcan) that, in fact, was never observed, but as the result of a different theory of gravitation, the theories of special and general relativity, which replaced the Newtonian paradigm.
Experiments and Observations
In laboratory physics, which has long been considered the model to follow for the development of scientific knowledge, the backbone is the continuous and exhausting confrontation between theoretical results and experimental tests. The theory sometimes develops faster, thanks to the intuition of a particularly inspired researcher, and, at other times, experiment opens new and unexpected paths for research, posing new, intricate and profound theoretical problems.
This on-going confrontation between theory and experimental work has resulted in the development of a very solid, albeit limited, body of knowledge that has made possible the spectacular technological development we have witnessed over the last century. Certainly one of the reasons why this has been possible derives from the relatively limited and controlled domain of study: the atoms and molecules that form solids, liquids and gases, and the elementary particles that in turn compose more complex matter, are all constituents that obey deterministic, universal and unchanging laws which, though complicated, can be approximately defined and investigated. In addition, in the laboratory one can control the external conditions in which experiments take place, and therefore it is possible, at least in principle, to change a single physical parameter at a time, for example the temperature, and then check what effects this causes.
In those natural sciences that are based on observations rather than on experiments, the situation is different. In astrophysics, for example, but also in many areas of geology, an experimental result is represented by new observational data. In this case, a scientist is faced with a sort of puzzle, where each observation provides a tile and where it is difficult, if not impossible, to perform laboratory experiments that change the conditions in a controlled manner. The synthesis of the various observational pieces into a coherent theoretical framework is the great intellectual challenge for those who are involved in the construction of a theory. Sometimes, in place of the missing pieces, ad hoc hypotheses are introduced, like the epicycles of the Ptolemaic universe, but one should always seek an independent verification of each hypothesis used. Sometimes, however, this practice can be misused, as for instance in the current cosmological model, which posits that 95 % of the content of the universe is made up of unknown particles and energy for which we have no independent evidence, either in the laboratory or in any other way. This situation is one of the symptoms of a problem that is becoming increasingly important in modern cosmology, as noted by the astrophysicists George Ellis and Joe Silk23:
This year, debates in physics circles took a worrying turn. Faced with difficulties in applying fundamental theories to the observed Universe, some researchers called for a change in how theoretical physics is done. They began to argue — explicitly — that if a theory is sufficiently elegant and explanatory, it need not be tested experimentally, breaking with centuries of philosophical tradition of defining scientific knowledge as empirical. We disagree. As the philosopher of science Karl Popper argued: a theory must be falsifiable to be scientific.
Very often, in an area in which experiments are not possible, one can test theories through numerical experiments: these represent a sort of ideal experiment in which, assuming that a certain physical theory is appropriate for the description of a particular problem and considering a given initial state, systems are evolved numerically, according to the laws of the theoretical model, so as to study their behavior. This type of technique is used when one cannot perform direct laboratory experiments, and we will see some examples of it below when we discuss weather forecasting and the spread of epidemics. As a system becomes more and more complex, the prediction of its future behavior becomes more and more difficult.
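As a toy illustration of what a numerical experiment looks like in practice, the sketch below evolves the classic SIR equations for the spread of an epidemic from a given initial state using simple fixed time steps. The model, the parameter values and the integration scheme are chosen here purely for illustration and are not taken from the book.

```python
# Minimal numerical experiment: assume a theoretical model (the SIR
# equations for epidemic spread), fix an initial state, and evolve it
# step by step to study its behavior.

def run_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160, dt=0.1):
    """Evolve susceptible/infected/recovered fractions with Euler steps."""
    s, i, r = s0, i0, 0.0
    history = []
    for step in range(int(days / dt)):
        ds = -beta * s * i             # dS/dt = -beta * S * I
        di = beta * s * i - gamma * i  # dI/dt =  beta * S * I - gamma * I
        dr = gamma * i                 # dR/dt =  gamma * I
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        if step % int(20 / dt) == 0:   # record every 20 simulated days
            history.append((step * dt, s, i, r))
    return history

for t, s, i, r in run_sir():
    print(f"day {t:5.0f}: susceptible={s:.3f} infected={i:.3f} recovered={r:.3f}")
```

Changing the initial state or the parameters and re-running the evolution plays the role of repeating an experiment under different conditions, which is exactly what cannot be done with the real system.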
The Empires of the Times
Historically, predictions were not only used to verify scientific theories; they have also had important social and religious roles. Every known civilization has developed some knowledge of astronomy, since, for example, the appearance of certain constellations and planets in the sky began to be used to regulate agricultural life (sowing, harvesting, etc.). Among these civilizations, it is interesting to recall the Maya,24 one of the greatest pre-Columbian civilizations, in which the practice of astronomy was entrusted to the "ilhuica tlamatilizmatini", that is, the wise man who studies the sky. This sort of priest-astronomer, thanks to his knowledge of the trajectories of the stars in the sky and of the mathematics needed to solve relatively complicated problems, and therefore thanks to his ability to predict the future, occupied a very important position in Mayan society.
Generations of priest-astronomers measured, with considerable accuracy, the trajectories of the main celestial bodies, like the Sun, the Moon and Venus, across the sky and through the seasons. In Mayan cities, many buildings and temples not only had astronomical orientations, but also functioned as observatories: through small slits in their walls, one could measure very accurately the position of the stars in the sky so as to reconstruct their trajectories. The fact that the construction of Mayan cities depended on astronomy testifies to the intense relationship that the Maya had with the sky, while the power of the priest-astronomers was indicative of the essence of their work: if someone can predict the movements of the stars, then that person is in communication with the gods. Astronomy characterized many aspects of Mayan life, from religion to the practical questions of how to measure time and when to prepare for the planting season.
To understand the regularities of the stars' motions, it was necessary to observe their trajectories for a rather long time: for instance, to predict solar and lunar eclipses, about a hundred years. Due to the inclination of the Earth's axis with respect to its orbital plane, the Sun appears in different positions in the sky according to the time of year, and it is thanks to this inclination that there are seasons. The Maya had accurately measured the rising and setting of the Sun and, surprisingly, they had determined the length of the solar year to be 365 days: a tropical year lasts 365.2422 days, so their calculation was quite accurate. Unfortunately, this small error means that the calendar would accumulate an error of about a month every 100 years, that is, several months every 600.25 This is obviously a significant error, but there is considerable evidence that the astronomer-priests continually updated their measurements and forecasts, so that the calendar remained accurate even through the many generations over which it was developed.
The Maya were also very interested in the Moon's movements. Indeed, they kept track of the interval between two successive full moons and, in particular, they calculated that there are exactly 149 moons in a period of 4400 days: this corresponds to an average lunar month of 29.53 days, a value very close to the current measurement. Predicting the positions of the Sun, the Moon or Venus in the sky at a given time was not necessarily hard to do, since several generations of priest-astronomers had carefully taken note of the motion of the stars in the sky over time. Overall, these calculations were carried out simply by counting, for example, the number of revolutions of the Moon in a certain time interval. By accumulating observations for decades, the Maya could therefore come to make very precise estimates.
completely different problem that required the use of relatively sophisticated mathematics, along withthe knowledge derived from astronomical observations A notable example is the ability to predictsolar eclipses which requires more complicated calculations than determining the rising or setting ofthe Sun, because it involves the consideration of the contemporary movements of the Earth, the Sunand the Moon: this is by no means an easy task as solar eclipses, involving a very narrow area of theEarth, are much rarer than lunar eclipses In the Dresden Codex, the oldest book in the Americas,dated around the eleventh century (but perhaps rather a copy of one five hundred years older), thereare tables with forecasts of lunar and solar eclipses, which testify to the considerable knowledgeaccumulated by the Maya
It is interesting to note that the Maya, although they were keen observers, had not formulated a physical and geometrical model to explain the movement of the stars; rather, through the study of astronomical observations carried out over hundreds of years and by making some relatively complicated mathematical calculations, they had been able to make very accurate predictions. From the accumulated observational data, the Maya were able to understand the different and subtle motions of the planets without a physical model of reference. Today we know that the success of a forecast elaborated in the absence of a theoretical model, and thus based only on the study of data, was possible only because the physical problem was relatively simple and well defined. As we will see below, the possibility of making predictions in this case comes from the particular simplicity of the solar system, where the periodicity of the planets' movements is, in fact, easily observable over the course of time. But this happens "only" for the "relatively limited" interval of a few million years: the prediction of an eclipse over a longer time will be affected by considerable errors. For other systems, the characteristic times are much shorter. Let's see why.
Determinism and the Butterfly Effect
There are phenomena, such as eclipses, which can be predicted far in advance and with great accuracy, while for others, e.g., tomorrow's weather, climate changes, earthquakes, or the spread of diseases, the situation is quite different and it is not possible to make as accurate a prediction. To understand the reason for these differences, we need, first of all, to classify the different systems on the basis of their evolutionary laws. The first category includes those systems for which these laws are known: for example, the motion of the planets around the Sun is described by Newton's law of gravitation, while meteorology obeys the equations of fluid dynamics and thermodynamics. The second category includes systems for which we know that laws of evolution exist, but we do not actually know them; for example, earthquakes are certainly described by the laws of elasticity theory, but, not knowing the compositions and conditions of all the materials inside the Earth, one cannot make predictions by solving the corresponding dynamical equations. Finally, there are the systems for which we do not know whether evolutionary laws with a universal and immutable character, and therefore independent of space and time, can be defined at all: this situation concerns mainly social systems, including, as we shall see in the next chapter, economics, where the position in time and space of an occurrence determines what kind of evolution law is at work.
In turn, evolution laws can be divided into two main classes: deterministic and probabilistic.26 By determinism we mean that the status of the system at a certain time uniquely determines its status at any subsequent time: this is what happens, for example, in the case of a stone falling on Earth when one knows its location and its speed at a certain instant of time. The preeminent French scientist of the late eighteenth and early nineteenth centuries, Pierre-Simon de Laplace, is famous for his immortal contributions to mathematics, astronomy and the calculus of probability, as well as for the following apocryphal but plausible exchange with Napoleon himself, who, after reading a copy of his work "Exposition du système du monde", published at the end of the 18th century, asked him27:
"Citizen, I read your book and I do not understand why you did not leave room for the action of the Creator." Laplace replied: "Citizen First Consul, I had no need of this hypothesis." Napoleon said: "Ah, this is an assumption that explains many beautiful things!" And Laplace: "This hypothesis, Sire, explains everything, but it cannot predict anything. As a scholar, my duty is to provide results that allow us to make predictions."
Laplace was indeed a supporter of causal determinism (although he was well aware of the limits
of this description) and so he described the evolution of a deterministic system28:
We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atoms; for such an intellect nothing would be uncertain, and the future, just like the past, would be present before its eyes.
This statement is considered a sort of manifesto of determinism: knowing the conditions of a system today and its laws of motion, we can, in principle, precisely know its status at any future time. Although formally correct, this vision seems to be contradicted by many events which do not show predictable behavior: just think of tomorrow's weather. How does one reconcile Laplace's conception with the irregularity of many phenomena?
In simple terms, we can say that the error that characterizes any actual measurement, and therefore the precision with which we can know the status of a certain system, such as the locations and speeds of the various planets of the solar system, will cause a major difference in the prediction of the positions of these planets a few million years from now. This may seem surprising. One would expect, in fact, that if one knows the laws that determine the dynamics of a system, one can calculate the approximate position of a body by simply solving the equations of motion (which are also known, as they can be deduced from the fundamental laws) starting from the approximate knowledge of the initial conditions, represented by the position and speed at a certain instant of time. However, the situation is not so simple, because a system in which there are "many bodies" in non-linear29 interaction quickly becomes chaotic: a small change in the initial conditions produces a large change in the positions and velocities once the system has evolved for a sufficiently long time. In the case of systems that interact through the law of gravity, it is enough to have three or more bodies (such as the Earth, the Moon and the Sun) to obtain chaotic behavior.
A brief look at the concept of chaos in a deterministic system30 can clarify the difference between predictions within the solar system and those concerning the atmosphere: although both are governed by known deterministic laws, they obviously exhibit significant differences when it comes to predicting their future states. The concept of a chaotic system is, as we shall see, at the basis of whether it is possible to make accurate predictions and to find periodicities or recurrences in a system's dynamical evolution.
The key question is therefore: how does a certain system evolve in time when it starts from two slightly different initial conditions, conditions that differ by less than the precision with which we can measure them experimentally? One can get the intuitive answer right away if one has seen the beautiful film by Peter Howitt, inspired by an idea of the Polish director Krzysztof Kieślowski, entitled "Sliding Doors", where the life of the protagonist, played by the charming Gwyneth Paltrow, splits and follows a completely different course depending on whether or not she manages to catch a certain metro train: small changes, big effects!
The main feature of chaos was discovered by the great French mathematician Henri Poincaré31 in the late 19th century, when the King of Sweden and Norway offered a prize to be awarded to whoever solved a seemingly simple problem: the three-body gravitational problem.32 The two-body problem, such as the Earth-Sun system, had been solved by Newton more than two centuries earlier, and the three-body problem involves the calculation of what happens when another planet or satellite is added to the system of two bodies, as for example the Moon to the Earth-Sun system. Newton himself, as well as Laplace and many other mathematicians during the 18th and 19th centuries, was not able to solve the problem completely. Poincaré, in his 1908 essay “Science and Method”, realized that, because there are chaotic motions, one cannot calculate the solutions in the long run, even approximately; this is his description of the phenomenon:
If we knew exactly the laws of nature and the situation of the universe at the initial moment, we could predict exactly the situation of that same universe at a succeeding moment. But even if it were the case that the natural laws had no longer any secret for us, we could still only know the initial situation approximately. If that enabled us to predict the succeeding situation with the same approximation, which is all we require, then we should say that the phenomenon had been predicted. But it is not always so; it may happen that small differences in the initial conditions produce very great ones in the final phenomena. A small error in the former will produce an enormous error in the latter. Prediction becomes impossible, and we have the fortuitous phenomenon.
One refers to this sensitive dependence on initial conditions when one talks about the “butterfly effect”, which is to say that the flapping of a butterfly in Brazil can cause a tornado in Texas. In other words, a small uncertainty, on scales of a few centimeters, which characterizes our knowledge of the state of the atmosphere at a certain time (the butterfly whose wings do or do not flap, etc.) grows exponentially33 over time, causing, after a certain interval, a very high degree of uncertainty in the predictions of the system on spatial scales that can reach thousands of kilometers. Even under ideal conditions, in the absence of external perturbations and with an exact physical model, that is, with known deterministic laws that determine the dynamical evolution, the error with which we know the initial state of the system is therefore amplified in time because of the chaotic nature that characterizes the majority of non-linear systems. The measurement error of the initial condition, even if infinitesimal, grows exponentially to become significantly relevant to the evolution of the phenomenon, making it impossible to formulate predictions beyond a certain period of time.
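To make this exponential amplification concrete, here is a minimal numerical sketch (not an example from the book: it uses the logistic map, a standard toy chaotic system, in place of the atmosphere or the solar system). Two trajectories whose initial conditions differ by one part in a billion soon become completely different.

# Two trajectories of the chaotic logistic map x -> 4*x*(1-x),
# started from initial conditions differing by one part in a billion.
x, y = 0.4, 0.4 + 1e-9

for step in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        # The separation grows roughly exponentially until it saturates at order one.
        print(f"step {step:2d}: separation = {abs(x - y):.2e}")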
In making a prediction, one must therefore define a threshold of tolerance on the error with which one wants to know a certain phenomenon, such as the position of the Moon. This threshold, in turn, will determine the maximum time scale for which the forecast can be considered to be acceptable, in technical terms, “the horizon of predictability”. The smaller the uncertainty with which we want to know the position of the Moon, the shorter the horizon of predictability. The chaotic dynamics thus poses inherent limitations on our ability to make predictions. These limits vary from system to system: the horizon of predictability for an eclipse is of the order of millions of years, while, for the meteorology underlying weather, it is a few hours or days, depending on the particular conditions and on the specific location. This is because the atmosphere is chaotic, but with a much greater complexity than the solar system: it is a non-linear system composed of N bodies, with N much greater than the number of planets and satellites in the solar system.34
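The logarithmic dependence of the horizon of predictability on the tolerated error can be illustrated with a back-of-the-envelope calculation (the numbers below are purely illustrative assumptions, not values from the text): if an initial error delta0 grows as delta0 · exp(rate · t), the forecast stays below a tolerance only up to T ≈ (1/rate) · ln(tolerance/delta0), so tightening the tolerance shortens T only logarithmically.

import math

def predictability_horizon(delta0, tolerance, growth_rate):
    """Time at which an initial error delta0, growing as delta0*exp(growth_rate*t),
    reaches the tolerated error."""
    return math.log(tolerance / delta0) / growth_rate

# Illustrative numbers only: an error growth rate of 1 per day.
rate = 1.0
for tol in (10.0, 1.0, 0.1):   # tolerated error, in the same arbitrary units as delta0
    T = predictability_horizon(delta0=0.001, tolerance=tol, growth_rate=rate)
    print(f"tolerance {tol:5.1f} -> horizon of predictability ≈ {T:.1f} days")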
In the 1960s, the American meteorologist Edward Lorenz was the first to show the basic characteristics of a chaotic system. To this end, he developed a very simplified nonlinear model of the atmosphere containing the essence of the physics of the problem and studied it with the computer that was available at that time, which was roughly a million times slower than a typical laptop today. The discovery of chaos was an accident, a case of serendipity: Lorenz, in order to save space in the small memory of the computer he was using, restarted the detailed calculation of his model with initial conditions approximated to three decimal digits, instead of using the higher accuracy he had employed previously. He soon became aware that, after a short computation time, the numbers printed by the computer, corresponding to the weather forecast of his simplified model, had nothing to do with those obtained when the precision used was greater. After checking that there were no technical problems due to some malfunction of the computer itself, Lorenz concluded that the reason for the difference was to be found in the approximation used, since, while initially the differences were very small, they grew visibly as the calculation continued. He described his results as follows35:
Two states differing by imperceptible amounts may eventually evolve into two considerably different states […] If, then, there is any error whatever in observing the present state — and in any real system such errors seem inevitable — an acceptable prediction of an instantaneous state in the distant future may well be impossible… In view of the inevitable inaccuracy and incompleteness of weather observations, precise very-long-range forecasting would seem to be non-existent.
In other words, Lorenz concluded that if the Earth’s atmosphere was well approximated by his model, then it would be a chaotic dynamical system for which small errors in the knowledge of the initial state can quickly lead to large errors in the prediction of its future states. For this reason, he introduced the successful metaphor of the butterfly effect, and his famous speech at a scientific conference in 1972 was entitled “Predictability: does the flap of a butterfly’s wings in Brazil set off a tornado in Texas?”.
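Lorenz’s accidental experiment is easy to reproduce today (a sketch only: the equations are the standard Lorenz 1963 system with the usual textbook parameters, and the integration scheme, step and truncation are chosen for illustration rather than taken from his original runs). Restarting the integration from an initial state rounded to three decimal digits soon produces a forecast that has nothing to do with the unrounded one.

# Lorenz 1963 system integrated with a simple fixed-step Runge-Kutta scheme.
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    def add(a, b, c=1.0):
        return tuple(ai + c * bi for ai, bi in zip(a, b))
    k1 = lorenz(state)
    k2 = lorenz(add(state, k1, dt / 2))
    k3 = lorenz(add(state, k2, dt / 2))
    k4 = lorenz(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

dt = 0.01
exact = (1.234567, 2.345678, 20.456789)
rounded = tuple(round(v, 3) for v in exact)   # Lorenz-style truncation to 3 decimals

for step in range(1, 3001):
    exact, rounded = rk4_step(exact, dt), rk4_step(rounded, dt)
    if step % 1000 == 0:
        diff = max(abs(a - b) for a, b in zip(exact, rounded))
        print(f"t = {step * dt:5.1f}: largest difference between the two runs = {diff:.3f}")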
This type of behavior is not pathological: chaos has been shown, in fact, to be the rule rather than the exception in many situations, from geophysics to astronomy, optics, biology, chemistry, etc.36 Chaos is not, in fact, a theory of a particular physical phenomenon. Rather, it represents a paradigm shift that applies to science in general, and that provides a set of concepts and methods to analyze newly observed behaviors that can arise in a variety of disciplines. As summarized in the citation of the prestigious Kyoto Prize, which was awarded to Lorenz in 1991:
He made his boldest scientific achievement in discovering ‘deterministic chaos’, a principle
which has profoundly influenced a wide range of basic sciences and brought about one of the
most dramatic changes in mankind’s view of nature since Sir Isaac Newton.37
Probability and Many-Body Systems
We have seen that when one has to deal with a system with a few bodies, by knowing the evolution laws one can write the equations that describe its dynamical evolution; but how does one describe the evolution of a system with a large or a huge number of bodies? To solve this problem, probability theory has been introduced into the description of the dynamics of these phenomena. Thanks to this, one can perform statistical predictions that characterize a system through the average of a relevant quantity that describes its global properties (such as the temperature) and the corresponding fluctuations.
Maxwell built the necessary equipment to carry out experiments to show that his calculation was correct. He came to the conclusion that39

[…] Having to deal with material bodies as a whole, without sensing single molecules, we are forced to take what I have described as the statistical method, and leave the purely dynamical method, in which we follow and calculate every single motion.
Maxwell had laid the foundation for the explanation of the thermodynamic approach that determines many properties of a system in equilibrium, starting from the knowledge of the type of interaction that takes place on a microscopic scale. The relationship providing the bridge between the macroscopic, thermodynamic description of a system with many particles and the microscopic world, where all particles move following the classical laws of motion, is, in the case of a system in equilibrium, carved on the tomb of another giant of physics, Ludwig Boltzmann, who lived in the second half of the 19th century. He was the first to suggest an equation, considered one of the great achievements of science of all time, tying a thermodynamic quantity measurable in the laboratory, the entropy, to a quantity of a mechanical kind that describes the microscopic world that makes up matter.40
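For completeness, the relation carved on Boltzmann’s tombstone, which the text alludes to without writing it down, reads, in modern notation,

\[ S = k \log W , \]

where S is the entropy measured in the laboratory, W is the number of microscopic configurations compatible with the observed macroscopic state, and k is the constant that now bears Boltzmann’s name.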
Einstein, in one of the four “Annus Mirabilis papers”,41 made a further step in the development of the probabilistic approach to a physical system. He had been looking for experimental evidence that could theoretically be explained by assuming the existence of atoms, which at the time had not been proven to exist. The starting point was represented by the results obtained by the botanist Robert Brown who, already in 1827, had studied the motion of a grain of pollen in a liquid. The grain, much larger (macroscopic) than the typical size of liquid molecules, had a ceaseless and erratic motion, the so-called Brownian motion. In principle, to explain this observation theoretically, one can write all the equations of motion for the molecules of the liquid and the pollen grain. However, this road is not accessible because the equations would be too many, since the number of molecules has an order of magnitude of approximately N = 10²³ (one followed by 23 zeros!). To get an idea of the magnitude of this number, an estimate of the number of grains of sand on all the beaches in the world is 10²², a number roughly equal to the number of atoms in a sugar cube.42 Einstein’s idea was to interpret the motion as due to the impact of the water molecules with the little pollen granules, providing in this way a quantitative description of the phenomenon. So he wrote a single equation for the motion of an average grain, taking into account the effect of collisions with an enormous number of molecules (about N = 10²¹ per second). Moreover, given the large number of scatterings and molecules, it could be assumed that the average force depended on a global property of the fluid, such as its temperature. The equation so derived is no longer deterministic but describes some statistical properties, and it is therefore a probabilistic evolution law.43
Einstein derived in this way the expression for the displacement (deviation) of the pollen particles by using the kinetic theory of fluids, which was still quite controversial, to connect the amplitude of the displacement to some observable physical quantities. This explanation provided the first empirical evidence of the reality of the atom, thanks to the observation in the data of a predicted theoretical quantity, and it also put in a new light statistical mechanics, which at the time was not yet recognized as an indispensable tool for the study of the properties of matter. Statistical mechanics is thus that part of theoretical physics that aims to describe the properties of systems consisting of a large number of particles by means of a probabilistic approach. In this way, it renounces a description of the system à la Laplace, and it passes to a statistical kind of approach. The basic idea is simple: while a particle trajectory cannot be determined, because we have no access to the necessary information, it is expected that the collective motion of many particles generates a regularity that can be described only probabilistically.
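The statistical regularity that emerges from many random collisions can be seen in a toy simulation (a sketch, not Einstein’s actual derivation: each “grain” receives one random unit kick per step, standing in for the net effect of an enormous number of molecular impacts). Individual trajectories are erratic, but the mean squared displacement averaged over many grains grows linearly with time, which is exactly the kind of statement a probabilistic evolution law makes.

import math
import random

random.seed(0)
n_grains, n_steps = 2000, 100

# Each "pollen grain" performs a 2D random walk: one unit kick in a random direction per step.
positions = [(0.0, 0.0)] * n_grains
for t in range(1, n_steps + 1):
    new_positions = []
    for (x, y) in positions:
        a = random.uniform(0.0, 2 * math.pi)
        new_positions.append((x + math.cos(a), y + math.sin(a)))
    positions = new_positions
    if t in (10, 50, 100):
        msd = sum(x * x + y * y for x, y in positions) / n_grains
        # For a random walk the mean squared displacement grows linearly with time.
        print(f"t = {t:3d}: mean squared displacement ≈ {msd:.1f}")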
For this reason, the theoretical description aims to describe average properties (and fluctuations) of a material body as a whole, without following the behavior of single molecules. In other words, one wants to find the probability that a system is in a certain state. This step is possible when, for instance, a system is in a state of thermodynamic equilibrium, as will be discussed in more detail in the next chapter. Deterministic and probabilistic laws apply respectively to systems with few or many bodies (or, more technically, with few or many degrees of freedom). However, even for systems with few degrees of freedom, one must use probabilistic laws and statistical forecasts. As we have discussed, this is due to the chaotic behavior that is intrinsic to non-linear dynamics even in the case of systems with few bodies.
Forecasts and Decisions
As we have discussed so far, forecasts play a fundamental role for science and for the scientific method itself. However, the predictions of the scientific tradition are to be distinguished from predictions aimed at guiding decisions and therefore intended as a service to the community. The confusion between these two levels may undermine the proper communication between science, politics and society and make it difficult to understand the reliability and limitations of predictions. Recently, in fact, a new type of scientific prediction has arisen,44 motivated in part by the needs of policy makers and in part by the availability of new technologies and of big data. Modern technologies enable, in fact, the constant monitoring of atmospheric, geological or social phenomena, with the hope of predicting natural disasters and mitigating them with prevention plans. Similarly, one can monitor the spread of diseases and epidemics to identify effective opportunities for mass vaccinations or other forms of prevention. Thus we are witnessing a growing investment, estimated in billions of dollars per year, to develop tools and technologies that can predict natural and social events. This new discipline of forecasting has implications for social matters, and it tries to predict the behavior both of ordinary phenomena and of complex environmental phenomena such as climate change, earthquakes and extreme weather events, but also of some social and economic phenomena such as the spread of diseases, population trends, the sustainability of pension systems, etc. However, to make use of these predictions in order to develop policies is not at all simple.
This situation requires, therefore, further clarification of the concept of prediction and a discussion of its meaning across completely diverse fields and contexts. For example, unlike the cases that we have considered so far, weather forecasting is not considered a test of the equations of fluid dynamics, and predictions of earthquakes are not a test of the laws of elasticity, in contrast to the prediction of the position of the planets, which was a verification of the interpretation of the nature of the force of gravity. Such forecasts have, in fact, a different role: to ensure a rational basis for decisions in the field of global politics or local civil protection, etc. In practice, it is assumed, given the evidence accumulated in their support, that the laws of fluid dynamics that regulate the atmosphere or the laws of elasticity that explain earthquakes are correct, and then one can calculate their effects on real and open systems. As we shall see in the following paragraphs, this step is however not at all obvious, due to technical complications and chaotic effects in the first case and, primarily, due to our inability to perform direct observations of the system in the second case.
On the other hand, it is clear that if the models used to make, for example, weather forecasts were manifestly wrong, predicting a beautiful spring day when a hurricane actually occurs, or a snowstorm instead of a hot sunny summer day, then this should raise a serious warning about the theoretical foundations of the field. Obviously this does not happen: the weather forecasts are sometimes wrong, but they do not confuse “white with black”. The same thing happens with earthquakes, where seismologists identify earthquake zones although, as we shall see later, they are unable to predict the exact time and place of the next earthquake occurrence. Instead, the problem that could undermine the foundations of geophysics would be if a major earthquake were to take place in an area far away from tectonic plate boundaries that has never been considered seismic. Also in this case, such an event has never occurred.
Since a serious prevention policy to limit catastrophic damage involves potentially great inconvenience and cost to the public, reliable forecasting is of great importance. For example, one must decide whether or not to evacuate a city for the arrival of a hurricane: both choices are expensive and risky, and the decision must be supported by the most reliable forecasts possible. The main difficulty of this type of prediction lies in being confronted with systems of complex phenomena, closely interconnected and interacting with the environment. Moreover, in most cases, one is interested in the prediction of events localized in time and space that are not reproducible at will (thunderstorms, hurricanes, earthquakes, etc.).
Unlike laboratory scientific experiments, where we try to isolate a system to identify the cause and effect relationships, real systems are complex and open and therefore suffer, even in the most favorable situation, from the uncertainty linked to model approximations and to the errors in the knowledge of the initial conditions. In the case in which the studied phenomenon is governed by known deterministic laws, it is possible to take into account all sources of error by repeating the prediction several times, suitably varying the initial conditions and/or the type of approximations used in the numerical model. In this way, it is possible to estimate the probability, within a particular theoretical model, that a certain event happens. As we will see, this technique is currently used for weather forecasting.
However, in many cases, the estimate of the probability of the location in space and time of an event is not very reliable, which is caused both by the high level of approximation with which we know the specific laws that regulate a certain phenomenon and by the practical impossibility of knowing the initial conditions. This is the case for earthquakes, which depend on stresses occurring hundreds of kilometers below the earth’s surface, a region inaccessible to systematic measurements.
Well-established scientific knowledge, therefore, does not translate inevitably into forecasts devoid of uncertainty, even in the best case because of the intrinsic limitations of the phenomena of interest. These limits are not always understood or correctly transmitted to those who must transform forecasts into decisions or into security protocols for the populace. The value of scientific predictions with regard to their use in public policies is therefore a complicated mix of scientific, political and social factors. In particular, since any forecast contains an irreducible element of uncertainty, the implications are often not considered when this concerns phenomena of public interest and when this implies, for policy-makers, one choice rather than another. Typically, there may be two kinds of error: when an event that is predicted does not take place (a false alarm), or when an event occurs but it has not been predicted (a surprise). The problem usually is that by trying to reduce the first error, one increases the second, and vice versa. So the key point is to try to define the uncertainty of the forecast and therefore the threshold of tolerable uncertainty for the political decision. Moreover, the quantification of the uncertainty of forecasts is very difficult in the case of exceptional events, such as a hurricane, while it is easier for weather forecasting in ordinary conditions.
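The trade-off between the two kinds of error can be made explicit with a deliberately simplified numerical sketch (the event frequency and the noisy “precursor signal” below are invented for illustration only): lowering the alert threshold reduces the surprises but multiplies the false alarms, and vice versa.

import random

random.seed(1)

# Invented toy setup: a forecast "signal" is higher, on average, before a real event.
events = [random.random() < 0.1 for _ in range(10000)]            # 10% of days have an event
signals = [random.gauss(1.5 if e else 0.0, 1.0) for e in events]  # noisy signal for each day

for threshold in (0.5, 1.0, 2.0):
    alarms = [s > threshold for s in signals]
    false_alarms = sum(a and not e for a, e in zip(alarms, events))
    surprises = sum(e and not a for a, e in zip(alarms, events))
    print(f"threshold {threshold:3.1f}: false alarms = {false_alarms:4d}, "
          f"missed events (surprises) = {surprises:4d}")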
A part of this uncertainty can be reduced through an understanding of the physical processes at the base of the phenomena in question, or through the collection of more accurate data. In any case, the assessment of the amount of uncertainty remains a crucial discretionary task, which can be done only by experienced scientists. The latter should also have the intellectual honesty to clarify their partial ignorance, inherent to the incomplete nature of scientific knowledge. However, this should be well understood by the policy makers, by the media and by the public, which necessitates an interdisciplinary discussion involving not only specialists. We will illustrate below, with examples, the difficult relationship between science and politics, and between science and public opinion, filtered by the media, in relation to the problem of forecasts. To understand through an example how progress in scientific knowledge and the refinement of observational techniques have led to significant progress in forecasting, we consider next the case of meteorology.
How Will the Weather Be Tomorrow?
At the turn of the last century, the possibility that the laws of physics could be used to predict the weather was completely unexplored. The pioneering Norwegian meteorologist Vilhelm Bjerknes described the general idea of measuring the current state of the atmosphere and then applying the laws of physics to compute its future state. In principle, reliable observational data should be the input for the equations of thermodynamics that bind the changes in atmospheric pressure, temperature, density, humidity and wind speed, with the aim of elaborating a forecast. In practice, however, atmospheric turbulence makes the relationships between these variables so complicated that the relevant equations cannot be solved.
The turning point in weather forecasting occurred thanks to the insights of the physicist Lewis F. Richardson, which straddled the years of World War I.45 Richardson was a Quaker and so a pacifist, and he is an example of those “premature” scientists who are not understood by their contemporaries and whose work is recognized only after a long time. Richardson proposed46 to use the basic equations of fluid dynamics, together with those of thermodynamics, to determine the future state of the atmosphere. Starting from a given initial condition, the weather conditions today, and solving numerically the appropriate differential equations describing the atmosphere, one can determine the weather tomorrow. The innovation introduced by Richardson was therefore to switch from a static to a dynamic approach, considering that the atmosphere evolves in accordance with the equations of hydrodynamics and thermodynamics, forming seven coupled differential equations whose solution provides the weather prediction.
Only differential equations, which describe infinitely small variations in infinitely small time intervals, can grasp the ever-changing atmosphere. Since these equations could not be solved in an exact manner, Richardson reworked the mathematical problem to replace the infinitesimal calculus with discrete measures that describe quantities in time intervals that are sufficiently small but not infinitesimal. The finite-difference equations developed by Richardson represent a kind of sequence of approximate images of the flux of reality, but they have the advantage that, in principle, they can be solved with relatively simple algebraic techniques. Unfortunately, in Richardson’s time it was not possible for a computer to perform these calculations numerically; therefore Richardson was only able to pose the problem in the right way and to define the proper numerical algorithms for the integration of the differential equations.
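Richardson’s replacement of the infinitesimal calculus by finite differences can be illustrated on a much simpler equation than his seven coupled ones (a sketch: the one-dimensional advection equation du/dt = -c du/dx on a small periodic grid, which is not in the book but shows the same idea of advancing a field by small discrete steps).

# Finite-difference (upwind) integration of du/dt = -c du/dx on a periodic grid:
# the continuous derivatives are replaced by differences over small but finite steps.
c, dx, dt = 1.0, 0.1, 0.05          # wave speed, grid spacing, time step (dt < dx/c)
n = 100
u = [1.0 if 20 <= i < 30 else 0.0 for i in range(n)]   # initial "weather" pattern

for step in range(200):
    u = [u[i] - c * dt / dx * (u[i] - u[i - 1]) for i in range(n)]  # u[-1] wraps around

# After 200 steps of 0.05 the pattern has been carried a distance c*t = 10, i.e. 100 grid
# cells: one full loop of the periodic grid, though smeared by the scheme's numerical error.
print("maximum of the advected field:", round(max(u), 3))
print("its position on the grid:", u.index(max(u)))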
The difficulties of obtaining the weather data at a given time and of solving the equations of motion by hand calculations were, in those days, in fact insurmountable. To give an idea of the situation, the weather data (temperature, humidity, barometric pressure, and wind speed) used by Richardson were published by Bjerknes, reporting observations in Central Europe at four in the morning of May 20, 1910, during an international festival of hot-air balloons. On the other hand, it is estimated that it took Richardson more than two years to solve the differential equations, working at least a thousand hours by hand to do the calculations on a rudimentary calculating machine.
Richardson drew a map of the atmosphere above the Central European region, which he divided into 25 cells of equal size, with each side representing about 125 km. Each cell was further divided into five layers with the same mass of air in each layer. Richardson divided the 25 large cells into two types: P cells, for which he had recorded the atmospheric pressure, humidity and temperature, and M cells, for which he calculated the speed and direction of the wind. He alternated P and M cells on the grid, creating a kind of chessboard; in order to calculate the “missing” data for each cell, he used the data of the cells adjacent to it. By placing all of the data available at seven o’clock in the morning into the equations, and patiently solving them for a time period of six hours, he arrived at a forecast for the weather conditions at one o’clock in the afternoon. The resulting six-hour forecast, however, proved very disappointing: the weather recorded for that day showed that Richardson’s prediction was wrong. Many scientists, at that time and even now, would not have published the results of such a striking disappointment. However, when Richardson published his book “Weather Prediction by Numerical Process”47 in 1922, he described his disappointing results in great detail. Richardson correctly realized that “the method is complicated because the atmosphere is complicated”. However, in the conclusions he was cautiously optimistic: “Maybe one day in the near future it will be possible to develop calculations on how the weather proceeds […] But this is a dream.”
Some years ago, Peter Lynch of the Irish Meteorological Service suggested that the problem was simply that in 1910 the methods of data collection were quite rudimentary and introduced errors that were too large.48 The model of Richardson had, in fact, theoretically worked. To realize the dream of Richardson, one would have to await the fifties, when it became possible to have computers to perform numerical calculations, to develop fast algorithms to speed up the numerical calculations and to define “effective equations”, in other words equations that could simplify the problem while not neglecting the essential theoretical aspects. Only with the advent of computers did fast calculations become possible, and the approach of Richardson has become the standard method for making predictions. Today his technique is the basis for weather and climate modeling, thanks to the development of computers that allow one to numerically solve complicated systems of equations and to the observations of weather conditions across a large network of satellites. The ideas of Richardson have therefore become reality, and the quality of weather forecasting has increased steadily over time from the early eighties onwards. For example, it has become possible to obtain reasonably reliable seven-day forecasts only since 2000, while the five-day forecasts of today have the same quality as the three-day forecasts of the early nineties. Let us briefly mention the manner in which this spectacular progress has been made possible.
Technology has played a decisive role not only in making the dream of Richardson a reality, but also in constantly improving the quality of weather forecasts. Only fifty years ago, the primary data (pressure, temperature, humidity, wind, rain, snow, hail, and all possible information on the weather of a certain locality) were collected from weather balloons and ground instruments around the world, linked by radio and telegraph. From these data, it was possible to make reliable predictions for no more than 12–36 h. Today, a vast network of satellites dedicated to the detection of meteorological data, together with the intensive use of increasingly powerful computer networks, has transformed the artisan method of prediction into a real industry. An enormous amount of data is now used as initial conditions for numerical calculations, just as Richardson had dreamed. Satellites and weather stations provide millions of bits of information per hour, which are used to construct a sort of snapshot of the atmosphere around the globe that is constantly updated. This photograph is put into the computers and used to numerically solve the differential equations that describe the evolution laws of the atmosphere. These are the physical laws proposed by Richardson, i.e., the classical ones that govern the dynamics and thermodynamics of fluids. Simultaneously, improved understanding of some technical problems has also led to improved forecasts.
On the one hand, it is necessary to measure the state of the atmosphere as densely as possible in time and space. On the other hand, the volume of the atmosphere that interests us covers the entire globe up to heights of many tens of kilometers, and the complex equations describing this system become mathematically tractable only by employing approximate numerical techniques involving acceptable errors. Since each approximation in the calculation implies, in fact, the introduction of a numerical error, reducing it requires the use of super-computers. In practice, the calculation of the weather conditions is made on a discrete set of points that are placed on a regular three-dimensional lattice. The denser the grid, the more accurate the calculation.
The models currently used by the European Centre for Medium-Range Weather Forecasts,49 which is a world leader in the field of medium-term global forecasting, use cubes of about 15 × 15 km horizontally and a few hundred meters vertically. In these conditions, one has to use about 500 million cubes to fill the entire atmosphere. Since, for each cube, there are a dozen atmospheric variables to be calculated, there are in total about five billion variables to evaluate. Moreover, since the calculations must be redone for about every minute of prediction time, it is evident that the number of elementary operations needed to make a ten-day prediction requires super-computing (so-called high-performance computing). This is the reason why the quality of weather forecasting has progressed along with the increase of the maximum computing power available.
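The orders of magnitude quoted above can be checked with simple arithmetic (a rough estimate based on assumed round numbers, such as a 60 km modeled depth and 300 m layers, rather than on ECMWF documentation):

import math

earth_radius = 6.371e6                          # m
surface_area = 4 * math.pi * earth_radius**2    # ~5.1e14 m^2

cell_area = 15e3 * 15e3        # 15 km x 15 km horizontal cells
depth = 60e3                   # assumed depth of the modeled atmosphere, ~60 km
layer_thickness = 300.0        # assumed "few hundred meters" vertical resolution

columns = surface_area / cell_area
layers = depth / layer_thickness
cells = columns * layers
variables = cells * 10         # roughly a dozen variables per cell

print(f"columns ≈ {columns:.2e}, layers ≈ {layers:.0f}")
print(f"cells ≈ {cells:.2e}  (the text quotes about 500 million)")
print(f"variables ≈ {variables:.2e}  (the text quotes about five billion)")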
The main limitations of this procedure are quickly identifiable: the observational uncertainties due to instrumental errors, the incomplete coverage of the globe, the inability to model some physical phenomena (for example phase transformations between water vapor, liquid water, ice and various types of snow, the effect of small turbulent vortices, etc.), the approximations used in numerical algorithms and the intrinsic limit due to the fact that the Earth’s atmosphere is a chaotic system, that is, the “butterfly effect”. Progress in the fields of super-computing, satellite technologies and numerical weather modeling has enabled, over the past thirty years, the addition of one day of predictive power every ten years.
Over the last twenty years, the strong limitations that arise from the use of individual estimates of the initial conditions for so-called deterministic forecasts, without any assessment of their reliability, have become clearer. To be as realistic as possible, a forecast should always contain, with respect to the type of phenomena for which predictions are intended, a very important feature: its inherent uncertainty. This situation has led to the development of the forecasting techniques that are today commonly used. These are based on multiple numerical integrations, allowing one to predict the most likely situation (a single prediction) as well as the possible alternative scenarios. The basic tool is always the model, but instead of producing a single forecast scenario (the deterministic approach), these systems provide alternative and theoretically equally probable scenarios. All these scenarios should provide a picture of the potential variability of the forecast and, at the same time, an estimate of its uncertainty (or error). In other words, these systems allow one to estimate the probability that certain events will occur, for example, that a storm will take place in a certain location.
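The ensemble idea can be sketched with the same toy chaotic map used earlier (a sketch, not an operational scheme: the “model” is the logistic map and the “event” is an arbitrary threshold). Many integrations are started from slightly perturbed initial conditions, and the fraction of members in which the event occurs is read as its probability.

import random

random.seed(2)

def model(x, steps=30):
    """Toy 'forecast model': iterate the chaotic logistic map."""
    for _ in range(steps):
        x = 4 * x * (1 - x)
    return x

best_guess = 0.412                     # best estimate of the initial state
members = [model(best_guess + random.gauss(0.0, 1e-4)) for _ in range(500)]

# 'Event': the forecast variable ends up above an arbitrary threshold.
probability = sum(m > 0.8 for m in members) / len(members)
print(f"single deterministic forecast: {model(best_guess):.3f}")
print(f"ensemble probability of the event (x > 0.8): {probability:.2f}")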
Extreme Weather Events
Numerical models describe reality in an approximate way, because they have a certain finite resolution in space and time, and because they are not able to consider all the processes and interactions, especially those on small spatial scales. Furthermore, the speed with which two chaotic dynamical systems, which are located at a certain instant in almost identical conditions, evolve into completely different conditions is not always the same; rather, it strongly depends on the initial conditions. The meteorologist Edward Lorenz, on the basis of a series of reasonable clues, hypothesized that, in the case of the atmosphere, small initial differences grow more rapidly the smaller their extension in space. This means that, beyond a certain value, it is useless to further reduce the amplitude and the scale of the errors of the analysis, for example by increasing the number of observational stations, the spatial resolution of the instruments on board the satellites or the resolution of the numerical models used to analyze the data. This is the reason why operational meteorology has seen modest improvements in the quality of weather forecasts on small spatial scales, while those at large spatial scales show no sign of having their continuous improvement interrupted. To predict deterministically, and with an advance of merely twelve hours, the location in space and time of a summer storm or of a tornado is as impossible today as it was forty years ago. On the contrary, our ability to predict the intensity and path of a tropical cyclone, in other words a system of about one thousand kilometers in diameter, kept alive by hundreds of those same storm cells which are impossible to predict individually, has significantly increased in the last twenty years.
However, we are witnessing an increasingly pressing demand for reliable forecasts even on small spatial scales. In fact, in recent years, we have observed an intensification of extreme weather events in the form of storms of short duration but very violent and localized, which the popular media have named “water bombs”. These events have caused sudden flood waves in rivers of small or medium capacity, which in turn have led to hydrogeological disasters in areas where the land has a certain topography (for instance, the infamous cases of Genoa in Italy and of Nice in France). These are phenomena that occur on small spatial scales and over short time intervals and that depend on micro-climatic conditions that are difficult to monitor. To predict the location of these types of events is very difficult, and it requires very detailed meteorological models. Even the refined mathematical models of the terrestrial atmosphere are not able to represent, in an explicit and appropriate way, all the physical and microphysical phenomena important for the evolution of the weather on small spatial scales (of the order of kilometers). The water bombs are, in fact, due to clouds, the cumulonimbus, which have a rapid life cycle both in terms of space, with a radius of a few kilometers, and of time, and thus their evolution can be very variable. Cumulonimbus clouds generally reach maturity within 20–40 min after their origin, and the maximum intensities can be predicted only about thirty minutes in advance.
Given this situation, from a theoretical point of view one tries to derive information on small spatial scales from large-scale atmospheric variables which, as we have seen, are more stable and predictable. However, this lies on the frontier of research in the field, which is currently not able to improve the local forecasts. Because of the difficulty in predicting the spatial position of such extreme events, a possible approach may be to identify them just when they begin to form, and to follow their development step by step through meteorological observations. For forecasts of 1–2 h, technically defined as now-casting (what will happen in a short time), observations play a crucial role and the model is eventually used to provide continuity in space and time, or as an element in the process of checking the observations. Collecting data, transmitting them quickly, and combining them to better extrapolate over time are the essential ingredients.
These short-term forecasts may be used in particular sectors in which security procedures can still be activated, such as the temporary closure of airports, planned interruptions of road and rail networks, and alerts to the local population. It is worth mentioning that the most efficient way to limit the damage from extreme weather events (one might hope this is obvious) remains limiting the devastation of the landscape, such as uncontrolled deforestation, overbuilding near rivers and the disruption of the flow of streams. The debate in the media after the occurrence of a hydrogeological event with disastrous consequences often focuses on the alleged inefficiency in predicting the event “well in advance” and in activating timely planned protection, while the structural causes of the events are usually omitted; these are often rooted in the indiscriminate use that is made of many pieces of territory.
Climate Change
A related, but more difficult, problem concerns the forecasting of global climate change. These effects can only be measured over relatively long time scales, at least on the order of decades, and the main reason for the controversy about their actual occurrence and about the human contribution to them comes from the fact that the data were, until recently, rather poor.50 Even though we can monitor the weather on scales of tens and hundreds of kilometers, and for time intervals on the order of hours, in an “industrial” way, we cannot do the same for spatial scales of thousands of kilometers and for typical time periods of tens to millions of years. Some causes of long-term climate change are well known: changes in the solar radiation received by the Earth, plate tectonics, volcanoes, etc. The crucial question is whether human activities are an important cause of the recent global warming. The most difficult aspect of the modeling is due to the fact that the relevant variables that govern the deep circulations have very long time scales (millennia), while others, typical of the geophysical structures, have a time scale of months, and others, related to turbulence, have a very short time scale (seconds).
To cope with such a difficult problem and to provide science-based assessments of climate change, the Intergovernmental Panel on Climate Change (IPCC) was established in 1988. Since its first report in 1990, the IPCC has published detailed reports about every six years, which have represented a considerable advance in the understanding of the state of the climate and which are intended to provide a comprehensive and credible reference that can serve as a guide to policymakers. In 2007, the IPCC's work was recognized with the award of the Nobel Peace Prize. The fifth IPCC assessment, in addition to the analysis of the current state and the study of the projections for the future, provides a comprehensive analysis of policy choices and their scientific basis for climate negotiations. The IPCC has a crucial role in this process, being the central authority on global warming: this is another important example of the delicate but crucial relationship between science and politics, between forecasts and decisions.
The starting point is represented by historical measurements of the atmosphere obtained from geological evidence such as temperature profiles from boreholes, ice cores, glacial and periglacial processes, the analysis of sediment layers, records of past sea levels, etc. The most recent measurements, combined with climate data from the past, are used in the theoretical approaches of general circulation models in order to obtain future projections, identifying the causes of climate change. The models for the prediction of climate change are now more detailed, and they are able to consider a number of processes that are difficult to model (biological oceanic processes, atmospheric chemistry, etc.). Furthermore, their spatial resolution has greatly increased: in 1990 the cubes into which the globe was divided had a side of 500 km, while today they have a side of 50 km, not much larger than the 15 km cubes now used for ordinary weather forecasting. In addition, it is interesting to note that, apart from the inevitable errors of judgment, three key climate variables that were considered by the IPCC in 1995 are now found to be essentially within the estimated predicted ranges. These climate variables are the carbon concentration, the surface temperature and the rise in sea level. For this reason the central IPCC message, that greenhouse gases are altering the Earth’s climate, is now incontestable.51 It is estimated, therefore, that human influence has caused more than half of the increase in temperature in the period 1951–2010. On the other hand, it cannot yet be predicted, with reasonable reliability, what the warming rate will be in the coming years, but the temperature increase that would be induced by a doubling of the levels of carbon dioxide in the atmosphere should be in the range 1.5–4.5 °C. This estimate, published for the first time in 1990, was confirmed again in 2013.
Enough evidence has thus accumulated to conclude that if emissions of greenhouse gases continue to rise, we will pass the threshold beyond which global warming becomes catastrophic and irreversible. This threshold is estimated as an increase in temperature of two degrees above pre-industrial levels: with the current rates of emissions we are heading towards an increase of 4–5 degrees. This may not seem like a big change, but the temperature difference between the world of today and the last ice age is about five degrees, so that small fluctuations in temperature can mean big differences for the Earth, and above all for its inhabitants.
At the recent United Nations Conference on Climate Change in Paris in December 2015 (COP21), governments were required to agree on the policies to be adopted at a global level for the decade after 2020, when the current commitments on greenhouse-gas emissions will expire. Although the agreement was presented as a political success, several observers have noted that there is nothing decisive in it, because it is based on the voluntary commitments of nations, without providing for sanctions or specific intervention programs for those who fail to meet the emission reductions.
Under present conditions, according to the reference document of the IPCC,52 it is very unlikely that we can contain warming to 1.5 degrees; it is more likely that global warming, even under the most favorable scenario for the growth or decline of carbon dioxide, will be 2 to 3 degrees. This increase will result in an average rise of the sea level of 0.5 to 1 meter: a very worrying situation, therefore, whose severity does not seem to be perceived by the public.
A crucial problem in the issue of climate change is represented by the relationship between science, policy and information. In fact, according to some polls, public opinion does not consider climate change a serious and urgent threat53; on the other hand, it is evident that it has never been more important to make people aware of the seriousness of its consequences. This lack of interest is, in fact, unjustified, because climate change will have a direct impact on the lives of all the inhabitants of the world, affecting their way of life. At the root of the lack of attention that emerges from the polls there are political and economic reasons affecting most countries in the world, certainly with different degrees of responsibility.
The link between the scientists, who must explain the limits and uncertainties of their forecasts, the media,54 which have the duty to report the meaning of scientific results to the public correctly, and the policy makers, who must transform forecasts into intervention protocols, is therefore crucial. This is the central point in the discussion of the problem of climate change.
Be Prepared for the Unexpected
The day after the most powerful earthquake ever recorded in Japan, and the seventh most powerful known globally (magnitude 9.1), which hit the Tohoku region of Japan on March 11, 2011, Robert Geller, a leading expert in seismology, commented in the journal Nature55 on what had happened:
it is time to tell the public frankly that earthquakes cannot be predicted […] All of Japan is at risk from earthquakes, and the present state of seismological science does not allow us to reliably differentiate the risk level in particular geographic areas. We should instead tell the public and the government to ‘prepare for the unexpected’ and do our best to communicate both what we know and what we do not. And future basic research in seismology must be soundly based on physics, impartially reviewed, and be led by Japan’s top scientists rather than by faceless bureaucrats.
The issues touched on by Geller are the three that we have already mentioned: the purely scientific problem of forecasting, the relationship between science and the media (and therefore public opinion), and the relationship between science and policy makers. We focus on the first point, postponing the discussion of the others. The central point of this discussion is really macroscopic: the failure of earthquake prediction was striking in the case of the submarine mega-earthquake of Tohoku, which released an energy equivalent to six hundred atomic bombs of the type that hit Hiroshima. However, it was not a failure of our understanding of the physical laws that govern earthquake dynamics, and so there was no need to reconsider the foundations of the field.
According to the theory of plate tectonics, the plates move slowly with respect to each other, with speeds that vary from place to place, through the effect of the pressure exerted by material coming up from the depths of the Earth. For example, in Italy along the Apennines, the African plate moves relative to the Eurasian plate at a speed of a few millimeters per year, while, in the case of Japan, the Pacific plate moves relative to the Eurasian plate at about ten centimeters per year. From this follows the enhanced seismicity of Japan as compared with Italy. This theory explains most of the phenomena observed in geology, such as the correlation between seismic areas and volcanic zones along the edges of the plates.56 Relative movements, therefore, exert pressure on the plate margins and thus give rise to earthquakes and volcanoes. As in any other case of elastic deformation, when materials go beyond the breaking point they break, and the enormous elastic energy accumulated is released suddenly and almost instantaneously, in the form of elastic waves, i.e., seismic waves, producing the sliding of two large blocks of rock over each other. In the case of the Tohoku earthquake, this is what happened: a section of the ocean floor suddenly slipped under the adjacent seabed.
This modeling of earthquake dynamics implies that earthquakes are cyclic: at the end of the fracture process, the two blocks return to rest in a different position with respect to the one they had before the earthquake. At this point, elastic potential energy begins slowly to build up again, re-creating the conditions for a new earthquake. This process, while localized in specific zones, is not truly periodic, but has uncertain return times that depend on the characteristics of the materials and on a number of other factors that are not measurable. These mechanisms make predicting the next earthquake practically impossible, even though one knows approximately its location. The problem of seismology is therefore the fact that, despite the deterministic laws that determine the dynamics being known, one does not have access to the state of the system and therefore cannot forecast earthquakes: their location, the energy released and the time at which they will happen. This is surely such an important limit that one could ask: what are seismologists good for? In Italy, some politicians have recently proposed to shut down the National Institute of Geophysics and Volcanology (INGV). Is it really an unnecessary institution for the community?
Back in 1997, the same Robert Geller, in an overview of earthquake predictability, had concluded that no forecasting technique attempted by anyone had ever worked57:
Research on earthquake prediction has been conducted for more than 100 years without apparent success. Earthquake alarms have not stood up to the scrutiny of the facts. Extensive research has failed to provide useful background […] it seems, in fact, impossible to isolate reliable signals of impending large earthquakes.
Despite this situation, one of the classic products of medium/long-term earthquake prediction is the probabilistic map of seismic risk, which expresses the probability of exceeding a certain value of ground-motion acceleration in a given time window.58 In this case, the time frame of occurrence is wide, for example fifty years, and therefore its actual use is for the protection of the population, that is, it is useful for policy-making reasons: in earthquake zones one must build in an anti-seismic way (this sounds obvious but, at least in Italy, it is not at all). This map can be constructed by analyzing the historical seismicity: from knowledge about past earthquakes, one can recognize seismic regions that are still active today. In these seismic areas, such as Italy, Japan or California, 24-hours-a-day surveillance is required. In Italy, for example, where there are on average fifty seismic events per day, there are over 400 seismic stations installed across the country that make up the national seismic network. Such continuous monitoring plays a key role in the organization of civil-protection intervention. Indeed, this is one of the institutional responsibilities of the Italian geophysical institute.

One type of prediction that has been attempted for many years is based on recognizing and recording something characteristic, called a precursor signal, which should take place at some time before an earthquake. This precursor signal, associated with the approaching earthquake, need not necessarily be linked to a physical process: the essential philosophy is to employ a method that works, regardless of its scientific basis, so that we can successfully predict the phenomenon. The main precursors that have been considered are: hydrological changes, electromagnetic signals due to abnormal currents, changes in the physical properties of seismic signals and changes in seismicity, abnormal deformation of the crust and abnormal release of gas (radon) or heat. For none of these has a clear correlation with earthquakes been found, and this is one of the reasons for our inability to predict earthquakes. On the other hand, very often before a volcanic eruption one can observe a series of phenomena indicative of an abnormal state of the volcano. These precursory phenomena are signs of a volcanic process already in place: hence the importance of the continuous monitoring of volcanoes for civil protection purposes.
In addition to identifying that a region is seismic, and then continuously monitoring it, geophysics has found two very interesting empirical laws. It is worth mentioning them, for their relevance to what we will discuss in the next chapter. It is well known that there are various types of earthquakes, depending on the depth at which they occur and on the types of rocks underlying the surface. Despite these differences, by the 1950s a law had been discovered that describes the number of earthquakes that occur with a certain intensity, regardless of the details of the underlying rocks, the specific location, etc. This is a power law: the number of earthquakes per amount of energy released is inversely proportional to the square of the energy, so that an earthquake with energy 2X is four times less frequent than an earthquake with energy X. This empirical law, named after the two seismologists who discovered it, Charles Francis Richter and Beno Gutenberg, is surprisingly well verified by observational data. Records reveal that it does not vary significantly from region to region, and therefore it describes well the seismicity of an area, defining how many earthquakes of a certain intensity we can expect in a certain region, without a precise reference to the time scale on which these occur.
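In terms of released energy, the statement that an earthquake with energy 2X is four times less frequent than one with energy X simply expresses a frequency falling as the inverse square of the energy; a short check (illustrative only) makes the scaling explicit.

# Frequency proportional to 1/E^2 (the energy form of the Gutenberg-Richter law):
def relative_frequency(energy, reference_energy=1.0):
    return (reference_energy / energy) ** 2

for factor in (1, 2, 4, 10):
    print(f"energy {factor:2d}X -> frequency {relative_frequency(factor):.4f} "
          f"times that of an earthquake of energy X")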
The second is the law that takes its name from the Japanese seismologist Fusakichi Omori (1890); according to this law, the rate of aftershocks decreases in inverse proportion to the time elapsed since the main shock: for example, the rate of aftershocks one month after the main shock is twice the rate two months after it, and so on. Power laws of this type often appear in physical systems: in the complex structure of the fractured surfaces of brick or rock, in the growth patterns of living organisms, etc. In each of these cases, the power law, as discussed in more detail in the next chapter, indicates the presence of a certain type of regularity, which can give interesting indications about the nature of the underlying dynamical processes.
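The Omori decay can be checked in the same spirit (again purely illustrative, with an arbitrary normalization constant): if the aftershock rate falls as the inverse of the time elapsed since the main shock, the rate one month after is twice the rate two months after, ten times the rate ten months after, and so on.

# Simplest form of the Omori law: the aftershock rate decays as 1/t.
def aftershock_rate(t_months, K=100.0):
    """Rate (arbitrary units) t_months after the main shock."""
    return K / t_months

for t in (1, 2, 3, 10):
    print(f"{t:2d} month(s) after the main shock: rate ≈ {aftershock_rate(t):5.1f}")
# The rate one month after the shock is twice the rate two months after, and so on.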
To conclude the description of earthquakes, it is interesting to note that, in addition to aftershocks, in some cases, but not always, there are foreshocks, i.e., smaller warning shocks that occur before the main event. Such events had occurred for several months before the devastating earthquake in the town of L’Aquila in Italy on April 6, 2009. The quake damaged the town, causing the death of more than three hundred people despite its relatively modest intensity: it had, in fact, a magnitude of 6.1, which corresponds to an energy release nearly 30,000 times smaller than that of the Japanese earthquake of Tohoku.
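The factor of nearly 30,000 follows from the standard relation between magnitude and radiated energy, log10 E ≈ 1.5 M + const. (a standard seismological relation, not spelled out in the text): a difference of three magnitude units corresponds to an energy ratio of about 10 to the power 4.5.

# Energy ratio implied by the magnitude difference between Tohoku (9.1) and L'Aquila (6.1):
# log10(E) = 1.5*M + const  =>  E1/E2 = 10**(1.5*(M1 - M2))
ratio = 10 ** (1.5 * (9.1 - 6.1))
print(f"energy ratio ≈ {ratio:,.0f}")   # about 31,600, i.e. 'nearly 30,000 times'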
According to the prosecution at the trial alleging pre-quake negligence by officials in L’Aquila, some members of the civil defense testified that they had interpreted these foreshock events as a discharge of energy that would have avoided a larger shock.59 This interpretation was reported by the media with the intent of reassuring the population: this was, however, totally inappropriate because, in fact, one cannot predict the unpredictable. The Italian judiciary put on trial all the members of the Major Risks National Committee. Some media presented that trial as a “trial of science”; however, at no point did prosecutors question either the seismologists’ ability to predict earthquakes or the way in which their knowledge was communicated to the public. Rather, the trial centred on the reassurances provided by the Italian Major Risks National Committee, and on whether these had prompted some inhabitants of L’Aquila not to leave the city before the earthquake struck. For instance, a wiretap recorded the Civil Protection chief Guido Bertolaso describing the committee’s meeting as a “media operation”, suggesting that its pronouncements were influenced by factors other than genuine risk assessments. In an initial trial, the entire commission was convicted of spreading unfounded information. On appeal, this sentence was confined to the Civil Protection vice-chief Bernardo de Bernardinis. The final judgment has recently confirmed this sentence.
Spread of Diseases and New Viruses
Another field in which there has recently been an increasing effort to make accurate forecasts is epidemiology. In the spread of diseases, a decisive role is played by individual susceptibility, i.e., how a single person responds to a particular disease, virus, etc. In addition, there are a number of other uncertainty factors depending on the type of disease being considered. For this reason, risk prediction and forecasts of disease spread are necessarily probabilistic and, consequently, they are collectively but not individually reliable. One must, however, distinguish between two scenarios: non-communicable and infectious diseases. While in the first case the prediction at the population level is reasonably reliable, in the second case one has to deal with the fast transmission from person to person of infectious diseases that have a short incubation period and an equally short course, and that are thus much more difficult to monitor and to predict.
In the case of non-communicable diseases, such as cancer, diabetes, and cardiovascular and respiratory diseases, the ability to extrapolate current trends into the future is determined by the knowledge of the risk factors and by the fact that these are slow-onset diseases, with a long latency period (often the cause precedes the onset by decades) and with chronic courses, so their distribution is relatively stable over time. For some cases, such as smoking, the effect is so strong that not only a prediction at the population level but also one at the individual level may be acceptable, if one considers that the risk to a smoker is about twenty-five times higher than that to a non-smoker.
Being able to make reliable medium-to-long-term predictions for this type of disease also allows us to define suitable prevention campaigns. In this regard, the United Nations has launched a program called 25 × 25, which aims to bring down mortality from non-communicable diseases by 25 % by 2025 by reducing exposure to four risk factors: smoking, alcohol, lack of physical activity and salt consumption.
In this century, there have undoubtedly been huge successes in combating infectious diseases through the development of vaccines, but we often face the problem of defining effective intervention strategies for new viruses, which, apparently, have recently been increasing in number.60 Since one cannot perform in vivo research studies on epidemics, it is necessary to model the problem in order to obtain the information needed to establish intervention strategies. In recent years, the reliability of the models used to predict the spread of infectious diseases has considerably improved through the integration of large amounts of data. In fact, it is now possible to keep track of billions of individuals and to make intensive use of numerical simulations of entire populations. These models provide quantitative analyses to support policy makers and are used as predictive real-time tools.61
We thus find in epidemiology elements similar to those that characterized the improvement of weather forecasting: data, numerical calculations and a theoretical understanding of the diffusion process. Diffusion models are phenomenological in nature and must consider the structure of human interactions, mobility and modes of contact between individuals, individual heterogeneity and the multiple time scales involved in the dynamics of an epidemic. In this context, a key role is played by the representation of these phenomena in the form of newly introduced mathematical objects: networks.
Indeed, epidemics, like many other phenomena, propagate in systems with complex geometries called networks.62 These are organized structures made of nodes and of links, or contacts: each node (which, for example, may be an individual in a social network, an airport in the air transport network, or a server on the Internet) is connected, through several links, to a number of other nodes. A network is simple when each node is connected to roughly the same number of nodes, so that the number of links per node is roughly constant. A network is instead complex when it is organized in a hierarchical manner, with some nodes that can act as hubs: these hub-nodes are connected to a number of other nodes that may be orders of magnitude larger than the average number of connections per node. This situation is described, from a mathematical point of view, by the fact that the number of connections per node follows a power law, so that the vast majority of nodes have very few connections, while a few nodes are hyper-linked; an immediately obvious example is given by the airports of a country. A power law, as in the case of the Gutenberg-Richter or Omori laws for earthquakes, which we have previously encountered, is symptomatic of a specific network property. In most cases, no central planner designed a complex network such as the Internet; rather, such networks are the result of self-organization from below (a bottom-up rather than top-down formation). Very often, as we will mention in the next chapter, self-organizing phenomena give rise to complex structures statistically characterized by power-law behavior.
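As a minimal, hedged sketch of this distinction, the following Python snippet (using the networkx library; the specific graph models, sizes and parameters are illustrative assumptions, not taken from the text) compares a homogeneous random network with a hub-dominated one:

# Minimal sketch: degree heterogeneity in a "simple" vs a "complex" network.
# Illustrative only: the model choices (Erdos-Renyi vs Barabasi-Albert) and
# sizes are assumptions, not taken from the book.
import networkx as nx

n = 10_000

# "Simple" network: every node has roughly the same number of links.
homogeneous = nx.erdos_renyi_graph(n, p=6 / n, seed=1)

# "Complex" network: preferential attachment produces hubs and a
# power-law-like distribution of connections per node.
heterogeneous = nx.barabasi_albert_graph(n, m=3, seed=1)

for name, g in [("homogeneous", homogeneous), ("heterogeneous", heterogeneous)]:
    degrees = [d for _, d in g.degree()]
    print(f"{name}: mean degree = {sum(degrees) / n:.1f}, max degree = {max(degrees)}")

The mean number of links per node is similar in the two cases, but the largest hub of the heterogeneous network has a number of connections far above that average, which is the signature of the power-law behavior described above.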
The modeling of the spread of epidemics on a complex network is therefore a frontier theoretical problem: a few nodes (individuals), who are the most connected, can have a huge effect that profoundly changes the evolution and the behavior of an epidemic and of the processes of contagion. Conversely, the vast majority of individuals, having few connections, remain irrelevant to the spread of diseases. Unfortunately, from a theoretical point of view, the general solution of these dynamical processes is difficult to achieve even in the simplest cases. For this reason, intensive research is now focused on the mathematical and computational modeling of epidemic processes and of their spread in networks. The ultimate goal of computational epidemiology is to provide, in real time, forecasts of the spread and timing of infections in order to assess the impact of proposed prevention strategies.
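As a hedged illustration of this kind of computational modeling (not the specific models used by the research groups mentioned here), the following sketch runs a minimal stochastic SIR (susceptible-infected-recovered) epidemic on a synthetic contact network; the network model, population size, and transmission and recovery probabilities are arbitrary assumptions chosen only to show the mechanics:

# Minimal sketch of a discrete-time stochastic SIR epidemic on a network.
# Illustrative assumptions: Barabasi-Albert contact network, per-contact
# infection probability beta, recovery probability gamma per time step.
import random
import networkx as nx

random.seed(0)
g = nx.barabasi_albert_graph(5_000, m=3, seed=0)

beta, gamma = 0.05, 0.10           # assumed transmission / recovery probabilities
state = {node: "S" for node in g}  # S = susceptible, I = infected, R = recovered
for seed_node in random.sample(list(g), 10):
    state[seed_node] = "I"

for t in range(100):
    infected = [n for n, s in state.items() if s == "I"]
    if not infected:
        break
    new_infections, recoveries = [], []
    for i in infected:
        # Each infected node may transmit to its susceptible neighbours ...
        for nb in g.neighbors(i):
            if state[nb] == "S" and random.random() < beta:
                new_infections.append(nb)
        # ... and may recover.
        if random.random() < gamma:
            recoveries.append(i)
    for n in new_infections:
        state[n] = "I"
    for n in recoveries:
        state[n] = "R"
    print(t, len(infected))

Even in this toy version, the hubs tend to be reached early and to infect many others, which is why degree heterogeneity changes the course of an epidemic so strongly.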
On the one hand, there are big data, which in this case consist of personal mobility data, both at the local level (such as within a city), monitored through, e.g., cellular phones, and at the global level among different countries, involving records of airline and rail travel. On the other hand, there is a strong theoretical and numerical effort to study the spread of epidemics in populations that can be modeled as complex networks. A model then considers the data on infected individuals, the modes of infection and the incubation time. In this way, numerical models can provide forecasts for the real world. This kind of technique was used for the first time for the 2009 swine flu pandemic and has recently been applied to the spread of the Ebola virus.63 In the latter case, it was estimated that, at the beginning of the infection, between August and October of 2014, there would be an exponential increase in the number of cases if appropriate countermeasures were not taken in the areas of the outbreaks.64 The outbreak of the virus in West Africa was, in fact, the largest and deadliest recorded in history; the affected countries, such as Sierra Leone, Guinea, Liberia and Nigeria, have however taken measures to contain and mitigate the epidemic. The epidemic has been temporarily contained thanks to the identification and isolation of cases, the quarantine of contacts, and appropriate precautions in hospitals and during other occasions with an increased likelihood of social transmission, such as the funeral ceremonies of the victims. However, the possibility of an international spread in the long term, due to the fact that the epidemic has hit cities with major airports, is still a topical issue. The spread of infectious diseases and the study of preventive measures through predictive models concern not only the infection of humans but also that of many other plant and animal species. For example, we have recently witnessed in southern Italy the spread of a bacterium, Xylella fastidiosa (X.f.), considered to be the cause of the death of olive trees. To contain the spread, the felling of infected trees has been considered necessary. Recently, public prosecutors placed nine researchers from research institutions under investigation. The prosecutors are investigating charges ranging from the negligent spread of a plant disease, environmental negligence, falsehoods in public documents and the dispersion of dangerous substances, to the destruction of natural beauty. Judges also halted the containment measures, which included the felling of infected trees.65
In this case, the evaluation of the risk of contagion, the prediction of the spread of the bacterium and the determination of an intervention protocol to contain the problem seem to be secondary to three points that the prosecutors have highlighted: the absence of a clear causal link between the presence of the bacterium X.f. and the death of the olive trees, the very peculiar geometric pattern characterizing the spread of dead olive trees, and the fact that the main studies on the bacterium were performed almost exclusively by the small group of researchers now under investigation, something that clearly requires the intervention of other independent scientists in the technical studies and in the emergency management.66
Recurrences and Big Data
One may wonder whether, by studying a large amount of data describing the evolution of a system (just as the Mayans did with the Earth-Moon-Sun system), it is possible to derive useful features to predict its state at a future time; i.e., whether such data are useful for making reliable forecasts. The essential idea is to apply to this information the so-called “method of analogues”, which allows one, from knowledge of the state of the system up to a fairly remote time in the past, to infer its future state. In other words, one would like to discover regularities in time-series data, find a past situation “close” to that of today and, from that, infer the evolution of the system tomorrow. That is, if, in the time series that describes a system's past evolution, one finds a situation similar to the current one, one can hope to learn something about the future of the system even in the absence of a theoretical model that captures its dynamics. However, it is far from obvious that this is possible, as the famous Scottish physicist James Clerk Maxwell noticed67 in the mid-19th century:
It is a metaphysical doctrine that from the same antecedents follow the same consequents […] But it is not of much use in a world like this, in which the same antecedents never again concur, and nothing ever happens twice […] The physical axiom which has a somewhat similar aspect is that from like antecedents follow like consequents.
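To make the idea concrete, here is a minimal sketch of the method of analogues for a scalar time series; the window length, the distance measure and the toy signal are illustrative assumptions, not anything prescribed in the text:

# Minimal sketch of the "method of analogues" for a scalar time series.
# Illustrative assumptions: states are compared over a fixed-length window
# with Euclidean distance; the forecast is whatever followed the best analogue.
from math import dist

def analogue_forecast(series, window=5):
    """Predict the next value of `series` from its closest past analogue."""
    current = series[-window:]
    best_next, best_distance = None, float("inf")
    # Scan every past window that is followed by at least one known value.
    for start in range(len(series) - window):
        candidate = series[start:start + window]
        d = dist(candidate, current)
        if d < best_distance:
            best_distance = d
            best_next = series[start + window]
    return best_next

# Toy usage: a noisy, nearly periodic signal, for which good analogues exist.
import math, random
random.seed(0)
signal = [math.sin(0.3 * t) + 0.05 * random.gauss(0, 1) for t in range(500)]
print(analogue_forecast(signal), math.sin(0.3 * 500))

On such a short, nearly periodic signal the closest past analogue is informative; as discussed next, for a chaotic, high-dimensional system such as the atmosphere this strategy fails.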
For example, any attempt to forecast the weather based on the method of analogues was disastrous, as Edward Lorenz, the discoverer of chaos, had already noticed in the 1960s. The reason why, in this case, the method of analogues does not work, despite the huge amount of data in which to search for analogues, was first realized by the physicist Ludwig Boltzmann and finally explained by a result reached by the Polish mathematician Mark Kac in the second half of the 20th century. Kac showed that the length of the time series in which one can find an analogue grows exponentially