Weather and Climate GCMs: A Memoir


Different from the Arakawa scheme, only one quantity is conserved, as opposed to two quantities such as kinetic energy and enstrophy with the application of non-linear viscosity.. Arakaw

Trang 1

George Mason University / COLA

4041 Powder Mill Rd., Suite 302

This essay is a memoir on the development of techniques for making long-range forecasts, in particular, using general circulation models (GCMs).

The United States is an ideal place to observe the historical events of research activities, including GCM weather forecasts. I came to the U.S. in 1961, and settled down as an immigrant in 1965. In particular, my laboratory was the Geophysical Fluid Dynamics Laboratory (GFDL), located at Princeton. Smagorinsky was a unique and innovative director, who had studied numerical weather prediction in its early stage at the Institute for Advanced Study (IAS) at Princeton under Charney. Therefore, my office was a superb observation deck from which to watch progress in GCM activities.

From the beginning, there have been two types of GCMs: the Weather model (e.g., Charney et al. 1950) and the Climate model (e.g., Phillips 1956). By definition, the Weather GCM is used to produce operational forecasts, while the Climate GCM serves as a tool with which scientists investigate fundamental processes. I always felt that the former tends to have a narrow objective, while the latter tends to embrace wide targets.

Bengtsson often mentioned that GCMs tend to become more precise and accurate if they are used for practical forecasts. On the other hand, if GCMs were not exposed to an academic environment, they would not expand their utility and would lose flexibility.

This essay is my recollection. The papers related to Weather GCMs quoted here are basically restricted to the ones prior to 1982, except for other memoirs or historical descriptions. The year 1982 is the starting point of the World Meteorological Organization (WMO) project on long-range forecasts (see WMO 1982). The outstanding 1982/83 El Niño occurred just after this meeting. Climate GCMs have not been my research subject, but I worked at GFDL for 30 years, observing the climate research of my colleagues.

2 Phillips’ GCM experiment

The starting point of GCM long-range forecasts is the research of Phillips (1956), which took place before I emigrated, but I read his paper as a student in Tokyo. Smagorinsky (1983) characterized this study as "a monumental experiment", and he wrote that a new era had been opened by this experiment. Phillips (1956) built a two-layer quasi-geostrophic atmospheric model and carried out a 31-day run of this model. This is the first GCM run in history, and it was done at IAS using the first automatic computer. The model's domain was hemispheric between a pole and the equator, and the east-west extension was chosen sufficiently large to accommodate one large baroclinic wave, with periodic lateral boundary conditions at both ends. The grid sizes were Δx = 375 km and Δy = 625 km, and the west-to-east domain corresponded to 16Δx. Net radiation and latent heat processes were empirically specified by a heating function.

Phillips started his integration with an isothermal atmosphere at rest and ran the model with a time step Δt = 1 day. The net heating gradually built up a latitudinal temperature gradient. At this stage, the meridional circulation consisted of a single weak direct cell. In the second stage, a random number was introduced to give rise to a perturbation in the geopotential field. The time step was then 1 hour. Consequently, disturbances of wavelength ~6000 km were produced, and the flow patterns tilted westward with height, the wave moving eastward. Meanwhile, horizontal transport of zonal momentum was directed toward the mid-latitudes, creating a jet of 80 m s⁻¹ at the 250 mb level. The three-cell structure emerged between the equator and the pole.

Phillips analyzed the result and recognized that the solutions appear to reflect the observational evidence about the atmospheric general circulation. The baroclinic instability theory, which had been established before 1956, is in a linear framework, and therefore, it is difficult to prove its validity with observations. Phillips' experiment mimicked nature so well, and included the non-linearity, that it became possible to investigate the connection between nature's complexity and the linear theories. The experiment indicated that unstable baroclinic waves are generated when the vertical wind shear of the basic flow reaches a critical value, as is expected. The disturbances transport heat in the expected direction. The secondary effect is to transport momentum in such a way as to maintain the jet against frictional dissipation, and thus the westerlies are established in the middle latitudes. All these results completely agree with what the general circulation theory had postulated. As Lewis (1998) documented, this was the first breakthrough in research on climatology.

Charney mentioned (Platzman 1990) that this was one of three remarkable achievements at the IAS at Princeton, the others being the first implementation of a computer for numerical weather prediction, and the realization of cyclone development with a 2½-level model.

It was surprising that Phillips was successful in running the model without any serious computational instability for such a long time. The major reason is that he treated relatively mild weather, and the solutions reached a realistic state after 30 days. He couldn't continue much longer, because truncation errors continued to develop and certain instabilities finally destroyed the calculation.


3 Numerical integration of GCMs

a Finite difference method

The period from 1955 to 1963 represents the struggle of unraveling the cause of computational instability and searching for techniques to obtain stable calculation. For example, I presented a paper at the first international conference on numerical weather prediction in Tokyo, entitled "500 hour forecasts" (Miyakoda 1960; not 500 days) (Fig. 1). The issue was how to reduce the truncation error, because it was thought essential to obtain an accurate solution for a long run. This objective turns out to be wrong. The question is, which is preferred: an accurate solution without stability, or a less accurate solution with stability?

Fig. 1. Evolution of a square checkerboard embedded in the westerlies of the atmosphere. The checkerboard is deformed by the flow and stretched to a filament. When the filament becomes too thin, and the grid cannot resolve the filament any more, a numerical instability occurs. A number of filaments emerge in parallel to the original one. The feature is called the "Spaghetti pattern" (after Welander; see Rossby 1959).

Arakawa (1966) presented a method to obtain a stable solution by using the so-called Arakawa Jacobian. This was really a breakthrough for long time integration of the barotropic vorticity equation. The principle is simple. Energy and enstrophy (squared vorticity) are conserved within the framework of the finite difference equations. These two squared quantities can be conserved for all time. In other words, the integration of the equation is computationally stable, because the two quantities are bounded, and therefore, no quantity can go to infinity. This method was applied to the two-level GCM of the University of California at Los Angeles (UCLA), which was referred to as the Mintz-Arakawa model. Now GCMs could be integrated as long as desired.
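As a concrete illustration of the principle (my sketch, not Arakawa's own code), the nine-point Arakawa Jacobian can be written in a few lines of NumPy; the doubly periodic square grid and the variable names are assumptions made for brevity, and the conservation holds for the spatial discretization, up to time-truncation error.

```python
import numpy as np

def arakawa_jacobian(psi, zeta, d):
    """Nine-point Arakawa (1966) Jacobian J(psi, zeta) on a square,
    doubly periodic grid of spacing d. Averaging the three forms
    J++, J+x, Jx+ conserves both energy and enstrophy in the spatial
    discretization, so neither quantity can grow without bound."""
    def sh(a, i, j):
        # periodic shift: sh(a, i, j)[x, y] == a[x + i, y + j]
        return np.roll(np.roll(a, -i, axis=0), -j, axis=1)

    p, q = psi, zeta
    j_pp = ((sh(p, 1, 0) - sh(p, -1, 0)) * (sh(q, 0, 1) - sh(q, 0, -1))
            - (sh(p, 0, 1) - sh(p, 0, -1)) * (sh(q, 1, 0) - sh(q, -1, 0)))
    j_pq = (sh(p, 1, 0) * (sh(q, 1, 1) - sh(q, 1, -1))
            - sh(p, -1, 0) * (sh(q, -1, 1) - sh(q, -1, -1))
            - sh(p, 0, 1) * (sh(q, 1, 1) - sh(q, -1, 1))
            + sh(p, 0, -1) * (sh(q, 1, -1) - sh(q, -1, -1)))
    j_qp = (sh(q, 0, 1) * (sh(p, 1, 1) - sh(p, -1, 1))
            - sh(q, 0, -1) * (sh(p, 1, -1) - sh(p, -1, -1))
            - sh(q, 1, 0) * (sh(p, 1, 1) - sh(p, 1, -1))
            + sh(q, -1, 0) * (sh(p, -1, 1) - sh(p, -1, -1)))
    return (j_pp + j_pq + j_qp) / (12.0 * d * d)
```

Each of the three component Jacobians alone is a consistent second-order approximation; only their average cancels the spurious sources of energy and enstrophy.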


As mentioned earlier, I was obsessed by the notion of an accurate numerical solution. In fact, accuracy versus stability was a controversial issue among applied mathematicians. If one approximates a differential Jacobian using finite differences, the resulting formula consists of the main term and remaining terms of higher order. The question is whether the remaining terms should be ignored. The Arakawa approach is to prevent production of spurious energy and enstrophy first, and if so desired, the order of accuracy is made higher. In those days, I had the impression that mathematicians at the Courant Institute, New York University, were critical of the Arakawa Jacobian, or at best not enthusiastic. In fact, the famous textbook of Richtmyer and Morton (1967) did not include the Arakawa Jacobian at all. For example, the Lax-Wendroff method (1960) (Courant Institute) lets the remaining term serve as a damping term. Sometimes this term appears as a "non-linear viscosity", and it becomes large when the deformation of the flow pattern is large.

Stimulated by the idea of the Arakawa Jacobian, Bryan (1966) at GFDL developed a scheme which conserves a squared quantity. Different from the Arakawa scheme, only one quantity is conserved, as opposed to two quantities such as kinetic energy and enstrophy, with the application of non-linear viscosity. Conservation of one quantity is sufficient from the standpoint of computational stability. The GCMs at GFDL ran with this scheme (see Smagorinsky et al. 1965; Manabe et al. 1965). The scheme was also applied efficiently to an irregular grid (Kurihara and Holloway 1967).

The Arakawa Jacobian is an excellent scheme, because the solution is not only stable but the resulting patterns are noise-free, compared with, for example, Bryan's scheme. However, the Arakawa scheme is only applicable to the case in which the advection term is of Jacobian form. In other words, the advective velocity V is divided into two components by the Helmholtz theorem, i.e., the rotational component, k × ∇ψ, and the divergent component, ∇χ; if V is represented only by k × ∇ψ, then the Jacobian scheme can be utilized. Therefore, a three-dimensional problem needs an extended Arakawa Jacobian. Arakawa and Lamb (1981) achieved a scheme conserving both energy and potential enstrophy for the general flow, and applied it to the three-dimensional UCLA model (Figs. 2 and 3). In Yugoslavia, Mesinger and Janjic (see Mesinger and Arakawa 1976) developed a similar but different scheme. Their scheme conserves energy and potential enstrophy, but the potential enstrophy conservation only holds with the ψ-component, not with ψ and χ together. The finite difference algorithm had now become very sophisticated and complicated, so it became more difficult for graduate students, for example, to follow this path without deep involvement.


Fig. 2. Arakawa C-grid. (u, v) is the flow vector; h is the depth of the fluid; q is the potential absolute vorticity, q = (ζ + f)/h; ζ and f are the vorticity and the Coriolis parameter.

Fig. 3. Solutions of the shallow water equations over a steep mountain, as represented by flow vectors and streamlines: (upper) by the simpler scheme, i.e., energy and potential enstrophy conserving only for purely horizontal non-divergent flow, and (lower) by the new method, i.e., conserving for general divergent flow (after Arakawa and Lamb 1981).

b Spectral method

We now turn to the spectral method. The Fourier method can guarantee the conservation of kinetic energy and enstrophy. For the global atmosphere, the spherical harmonic functions are more appropriate than Fourier functions (Fig. 4). In fact, it is another merit of the spherical harmonic method that the treatment at the poles is straightforward; for example, the "polar filtering" used in finite difference schemes to increase the time step is not needed. The activity along this line has been


A problem with spectral methods is the representation of discontinuous fields, for example, the distribution of rainfall (Robert 1968). If you apply the Fourier transform to a delta-function, the resulting distribution shows a number of ripples around the main rainfall region, which is called the "Gibbs phenomenon". The reason is that any finite number of terms in a Fourier series is not sufficient to resolve the rainfall profile. In this case, a finite difference scheme may have a slight advantage, because although it produces severe truncation error, the resulting pattern is not so bad compared with the Fourier method. Finite-difference-oriented people are concerned about the fact that the wave produced by local rainfall is spread over the whole world instantly, because of the modal nature of the basis functions.
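A minimal demonstration of the effect (the 256-point grid and the truncation wavenumber are arbitrary choices of mine):

```python
import numpy as np

# Truncate the Fourier series of a spike ("local rainfall") and
# ripples appear on both sides of it: the Gibbs phenomenon.
n = 256
rain = np.zeros(n)
rain[n // 2] = 1.0            # delta-like rainfall at one grid point

spectrum = np.fft.rfft(rain)
spectrum[22:] = 0.0           # keep only the first 21 wavenumbers
truncated = np.fft.irfft(spectrum, n)

print(truncated.min())        # negative "rainfall": the Gibbs ripples
```

The reconstructed field dips below zero around the spike, which is exactly the spurious rainfall pattern described above.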

The real problem in the spectral method is, however, the treatment of non-linear terms, which are products of two or more variables. A product of two variables is reduced to two terms of different wave numbers. For this reason, the scheme which has been discussed so far is referred to as the "Interaction Coefficient method". Orszag (1970) and Eliasen et al. (1970) developed a new method which simplifies the non-linear calculation, i.e., the "Transform method" (see also Machenhauer 1979). In this method, the mathematical process is switched flexibly between the spectral and the grid calculation, depending upon which is convenient. For example, the horizontal advection term in the x-coordinate consists of two variables, i.e., A = u and B = ∂ζ/∂x, where u is the speed of the advective flow and ζ is the vorticity. In the Transform method, the product A·B is obtained at grid points, and this set of values is decomposed into Fourier series.
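In code, the essence of the Transform method can be sketched in one dimension with ordinary Fourier transforms; the real spherical-harmonic machinery and Gaussian grids are omitted, and all names here are illustrative:

```python
import numpy as np

def advection_term(u, zeta, dx):
    """Transform-method evaluation of A*B = u * d(zeta)/dx on a periodic
    line of grid spacing dx: differentiate in spectral space, form the
    product at grid points, then return to wavenumber space."""
    n = u.size
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx)               # wavenumbers
    dzeta_dx = np.fft.irfft(1j * k * np.fft.rfft(zeta), n)   # B on the grid
    product = u * dzeta_dx                                   # grid-point product
    return np.fft.rfft(product)                              # back to spectral space
```

A real transform model would also truncate or pad the grid (e.g., the 2/3 rule) to suppress aliasing from the quadratic product; that step is left out of this sketch.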

Bourke, then a brand-new nuclear physicist in Melbourne, Australia, went to Montreal in 1969, and started to work on the application of the Transform method to a shallow water spherical model under the advice of Robert. Different from the approach of Orszag, the basic dependent variables are vorticity, ζ, and divergence, D (u and v are so-called pseudo-scalars and are not appropriate for expansion in spherical harmonics; Robert 1966). I met Bourke at Princeton in 1971. He asked me my opinion on his work, and I encouraged him to continue it. Looking back, he had already gone quite far at that time. Thus, the shallow water equation model was first constructed by Bourke (1972). This GCM was subsequently distributed to a number of groups around the world. Each group which received it constructed its own baroclinic spectral model within 2~4 years. For example, the Bureau of Meteorology in Australia did so in 1974, GFDL in 1974, the University of Reading in 1975, and the Prediction Unit in Canada in 1976. The Australian and Canadian bureaus of meteorology implemented operational numerical weather prediction systems based on the transform method in early 1976. McAvaney et al. (1978), an Australian research group, conducted a pioneering GCM integration utilizing the spectral method and semi-implicit time integration, implying that this method is indeed inexpensive.

The subsequent development of the atmospheric GCM is relatively straightforward, except for the semi-Lagrangian method, as will be described shortly. The major efforts of GCM development are directed more to the development of physics. The semi-Lagrangian method (Robert 1982; Bates and McDonald 1982) has brought a dramatic change in computational aspects at the European Centre for Medium-Range Weather Forecasts (ECMWF). The model was constructed with a substantial contribution by the Montreal scientist Ritchie (see the story by Staniforth 1997). The advection terms are replaced by the Lagrangian transport, resulting in no more computational restriction on the time step, Δt, in the sense of the Courant-Friedrichs-Lewy restriction. This method also permits the same framework for the treatment of advection in all three dimensions (as opposed to doing a grid calculation in the vertical and a spectral calculation horizontally); no more Gaussian grid restriction for the latitudinal direction; and no concern about aliasing error. As a result, a drastic reduction of grid-points near the poles is possible. With these arrangements, it is no longer certain whether the computational scheme is the spectral or the finite difference version.
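A minimal sketch of the idea, for constant-wind 1-D advection with linear interpolation (operational schemes iterate the trajectory calculation and use cubic interpolation; the setup below is an assumption for brevity):

```python
import numpy as np

def semi_lagrangian_step(q, u, dt, dx):
    """One semi-Lagrangian step for 1-D advection dq/dt = 0 on a periodic
    grid: trace each arrival point back along the (constant) flow u to
    its departure point and interpolate q there. Stable even when the
    Courant number u*dt/dx exceeds 1, which is the method's attraction."""
    n = q.size
    x = np.arange(n) * dx
    x_dep = (x - u * dt) % (n * dx)       # departure points, periodic wrap
    i = np.floor(x_dep / dx).astype(int)  # left neighbour of each departure point
    w = x_dep / dx - i                    # linear interpolation weight
    return (1.0 - w) * q[i % n] + w * q[(i + 1) % n]
```

Because the scheme interpolates rather than fluxes, it is not automatically conservative, which is exactly the weakness discussed below.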

On the other hand, it becomes rather difficult for small research groups to use the semi-Lagrangian model, because the overhead time for model construction is considerable. Proliferation of this method is not as easy as that of the spectral method in 1972~76. It is an open question how many groups among research institutes will go to the semi-Lagrangian method for all variables, i.e., the wind vectors, u, v, the temperature, T, and the moisture, q. In 1982, Robert called me from Montreal, suggesting that we use it, and I declined the offer. In 1998, ECMWF and the Canadian Meteorological Centre are the only operational centers which use this method, together with a small model at the Irish Meteorological Service. Despite the Lagrangian principle, the conservation of advected quantities has not been guaranteed. Namely, the water vapor advected, for example, is not positive definite so far. Recently, however, Machenhauer (1997) has developed a new scheme by which the mass, enthalpy and momentum are conserved. In the future, operational centers will continue to refine the resolution, and therefore, they will use the semi-Lagrangian methods.

c Validity of GCM solutions

Finally, I point out my old concern about the accuracy of the GCM solution. As was mentioned earlier, the GCM calculation produces a large amount of truncation error. The GCM solution is nothing but truncation error after, say, five days. If true, is the GCM solution insignificant, and therefore invalid? If the problem is steady state, such as the flow pattern around an airplane's wing, the solution is significant. However, there are problems with the case of long-range forecasts and climate simulation. Our wish and belief are that the whole solutions of the GCM, including the truncation error, are determined by the governing physics and dynamics of the GCM, and therefore, they should be significant. Besides, it is a common practice to produce multiple realizations of a GCM integration by changing the initial condition and averaging the ensemble of solutions (Leith 1974).
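To make the ensemble idea concrete, here is a toy version of Leith (1974)-style averaging; the Lorenz (1963) equations stand in for a GCM, and the step sizes and perturbation amplitudes are arbitrary choices of mine:

```python
import numpy as np

def lorenz63_run(x0, nsteps=2000, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """Forward-Euler integration of the Lorenz (1963) system, used here
    only as a cheap stand-in for a GCM integration."""
    x = np.array(x0, dtype=float)
    out = np.empty((nsteps, 3))
    for n in range(nsteps):
        dx = np.array([s * (x[1] - x[0]),
                       x[0] * (r - x[2]) - x[1],
                       x[0] * x[1] - b * x[2]])
        x = x + dt * dx
        out[n] = x
    return out

# Perturb the initial condition, integrate each member, average the runs.
rng = np.random.default_rng(0)
members = [lorenz63_run([1.0, 1.0, 1.0] + 1e-3 * rng.standard_normal(3))
           for _ in range(10)]
ensemble_mean = np.mean(members, axis=0)
```

The individual members diverge after the predictability limit, while the ensemble mean retains whatever signal is common to all of them.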

4 Nonlinear viscosity

a Smagorinsky’s non-linear viscosity

In the previous section, the "non-linear viscosity" was mentioned. Smagorinsky (1963) applied this scheme to his GCM. When I joined GFDL, this scheme came to my notice. However, if you read his paper, you cannot find the derivation of this scheme anywhere. I borrowed his personal notes, and made a copy of them. After his retirement, he was invited by a group of turbulence experts to talk about his updated view of subgrid-scale eddy viscosity (Smagorinsky 1993). The formula is such that the coefficient of this viscosity is assumed to be proportional to the resolvable velocity deformation, and the final result is that the eddy viscosity is proportional to the square of the grid-size, (Δs)² (Fig. 5).

Fig. 5. An example of non-linear viscosity, represented by the horizontal diffusion terms in an ocean GCM (Rosati and Miyakoda 1989). The grid size is 1/3° in the meridional direction and 1° in the zonal direction within the equatorial belt between 10°N and 10°S, and 1° × 1° outside of the 10° equatorial belt. The non-linear viscosity is proportional to (def)² and (Δs)², and this figure indicates that the first term is dominant.

Before 1963, Lilly was a member of GFDL. He appeared to be appreciably impressed by this work. After he moved to the National Center for Atmospheric Research (NCAR) in Colorado, he continued this work. At GFDL, this scheme had been used in the GCM until the switch to the spectral method.
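In modern notation, the scheme described above is usually written ν = (c_s Δs)² |def|. A minimal 2-D NumPy sketch, with the value of the constant c_s and the use of np.gradient being my assumptions:

```python
import numpy as np

def smagorinsky_viscosity(u, v, dx, dy, cs=0.2):
    """Smagorinsky-type non-linear eddy viscosity for a 2-D flow:
    nu = (cs * ds)**2 * |def|, where |def| combines the tension and
    shearing parts of the horizontal velocity deformation and ds is
    the grid size. The constant cs is a tunable assumption."""
    du_dy, du_dx = np.gradient(u, dy, dx)   # axis 0 spacing dy, axis 1 spacing dx
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    tension = du_dx - dv_dy
    shearing = dv_dx + du_dy
    deformation = np.sqrt(tension**2 + shearing**2)
    ds2 = dx * dy                           # (grid size)**2
    return (cs**2) * ds2 * deformation
```

The viscosity thus grows where the resolved flow is strongly deformed and vanishes in smooth regions, which is what makes the damping "non-linear".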

b Turbulence closure scheme

In the town of Princeton, there were several scientists who were working on turbulence theories in various research facilities, for example, Mellor, Donaldson, Lewellen, Herring, etc. Besides, Pennsylvania State University, where similar activities were going on (for example, Panofsky, Tennekes, Lumley and Wyngaard), is not far from Princeton. It is not surprising, therefore, that the second-order turbulence closure model was derived by Mellor (1973) and Mellor and Yamada (1974), based on the closure assumptions of Rotta and Kolmogorov. Mellor is a unique, independent and bright professor at Princeton University, who was involved in models and experiments on a neutral, rotationless turbulent boundary layer (Mellor 1985). On the other hand, Lilly continued his work at NCAR, extending the treatment to three-dimensional turbulence, and later presented a paper which shows that Smagorinsky's formulation is exactly what is needed to cascade resolvable-scale turbulence energy to the scale of the inertial subrange (Lilly 1967). In the group of Lilly, Deardorff worked mainly on boundary layer turbulence. He also published his turbulence theory (Deardorff 1973). Ironically, the latter is the direct descendant of Smagorinsky's, or can be traced back even to von Neumann and Richtmyer. Yet the equations derived by Mellor in Princeton and by Deardorff in Colorado are very similar to each other, except that the characteristic parameters for the closure assumption are different. I documented these theories side by side in a German journal (Miyakoda and Sirutis 1977). The conceptual difference in the turbulence treatment is the definition of turbulence, i.e., the ensemble average in the former (Princeton), as opposed to the space average in the latter (Colorado). Therefore, Mellor-Yamada use the turbulent length scale, ℓ, as the characteristic length scale, while Deardorff uses the grid size, Δs. As a consequence, Mellor and Herring had to propose an empirical formula for the length scale, ℓ, in Mellor's system.

These theories are appealing, because the formulation of eddy viscosity in the Earth's planetary boundary layer agrees with the hypothesis on the nocturnal jet proposed by Blackadar (1957). The hypothesis stresses the decrease of vertical mixing inside the boundary layer at night; as a result, the inertial oscillation of wind is not weakened by the vertical mixing, but remains strong. Bonner (1968) mentioned that major wet episodes over the central U.S. in summer occur in association with the strong low-level nocturnal jet (see also Helfand and Schubert 1995; Paegle et al. 1996). The low-level jets bring Gulf Coast moisture efficiently into the central continent (see Fig. 6, by Mo et al. 1997). One of the questions is, however, that the large-scale moisture transport takes place only in a certain synoptic situation. Anyway, if you believe the turbulence closure approach, Smagorinsky's non-linear viscosity should be reasonable, though the details remain to be refined, such as the magnitude of the coefficients and the balance with some neglected terms.


Fig. 6. (left) Composite of rainfall for (a) wet events; contour intervals 1, 2, 4, 6 and 8 mm day⁻¹. (b) Same as (a) but for dry events; contour intervals 0.5, 1, 2 and 3 mm day⁻¹. (right) Composite of the vertical profile of meridional wind at 30°N for (a) wet and (b) dry events; contour interval is 1 m s⁻¹ (after Mo et al. 1997).

One of the merits of the turbulence closure approach is that the formulation and the coefficients are based on the "similarity paradigm", and that the coefficients should be determined by universal constants. Accordingly, the framework should be applicable equally to the atmosphere as well as the ocean, and even to Jupiter's atmosphere, without changing the values of the constants. If a bulk method, as opposed to the turbulence closure method, is to be used for economical reasons, it should be consistent with the original multi-level framework. But this is just my personal feeling; in practice, one could be flexible and adjust the method to the observational evidence and efficiency.

So far as the horizontal eddy coefficients are concerned, the necessity of including the Reynolds eddy term varies, depending on the numerical scheme. Whether it is the Arakawa scheme or the Bryan scheme or the spectral approach, the constants vary, depending upon the degree of production of deformation. In other words, the magnitude of eddy viscosity in the free atmosphere or ocean is not so large in nature, but the deformation in finite difference calculations requires a large amount of damping. For the spectral GCM, therefore, it is a general attitude (or policy) to avoid the non-linear formula for the horizontal Reynolds term, because of efficiency and the absence of a strong reason for the requirement. Linear formulae such as −K₁∇⁴ζ and −K₂∇⁴D are often used, where K₁ and K₂ are constants.
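A sketch of such a linear spectral damping, integrated exactly over one time step (the domain size, wavenumber setup and the magnitude of K₁ are illustrative assumptions):

```python
import numpy as np

# Linear "del-4" (biharmonic) damping applied in spectral space: each
# vorticity coefficient decays as exp(-K1 * k^4 * dt).
n, L = 128, 4.0e7                          # grid points and domain length (m)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
kx, ky = np.meshgrid(k, k)
k4 = (kx**2 + ky**2)**2                    # fourth power of total wavenumber

def damp(zeta_hat, K1, dt):
    """Exact one-step solution of d(zeta_hat)/dt = -K1 * k^4 * zeta_hat."""
    return zeta_hat * np.exp(-K1 * k4 * dt)

zeta_hat = np.fft.fft2(np.random.default_rng(1).standard_normal((n, n)))
zeta_hat = damp(zeta_hat, K1=1.0e16, dt=600.0)   # K1 value is illustrative
```

Because the damping rate scales with k⁴, the smallest resolved scales are removed selectively while the large-scale flow is left nearly untouched.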

c Large Eddy Simulation


One of the interesting developments from Smagorinsky's subgrid-scale eddy viscosity is the Large Eddy Simulation (LES) studies, started by Deardorff (1970) and Lilly (1967). The turbulence approach has a traditional concept of a hierarchy of different scales. The subgrid-scale transfer terms in the planetary boundary layer can be examined by using the turbulence eddies in a model of one-step higher hierarchy. The grid-scale turbulence in the next hierarchy can be investigated by even finer grid resolution. If schemes or coefficients in the lower hierarchy are found to be inadequate, they should be corrected accordingly. This correcting process really happened, but it was successful only to a limited extent. The most crucial subgrid-scale physics in atmospheric GCMs is the "cumulus parameterization". LES research on this process has been carried out (e.g., Yamasaki 1996; Sui et al. 1998), but the final conclusion has not yet been reached. The reason for the difficulty is that computers are not large enough to accommodate several cumuli simultaneously with a grid-size of, say, 1 km.

treatment of clouds

The activity associated with Climate GCMs had spread to several centers around the U.S. in the 1960s, including GFDL, UCLA, NCAR, and GISS (Goddard Institute for Space Studies, New York, NY). In those years, the main objectives for using Climate GCMs were to simulate the observed climatology of the atmosphere (for example, Mintz 1965; Smagorinsky et al. 1965; Manabe et al. 1974), and to explore the "sensitivity" of


Fig. 7. (left) The dashed line shows the vertical distribution of global mean temperature of the atmosphere in radiative-convective equilibrium. The solid line shows the U.S. Standard Atmosphere (after Manabe and Strickler 1964). (right) Vertical distribution of temperature in radiative-convective equilibrium for various values of atmospheric CO₂ concentration in units of ppm by volume (after Manabe and Wetherald 1967).

Fig. 8. A comparison of the cumulative increase in carbon released by fossil fuel with the seasonally adjusted rise in atmospheric CO₂ at the Mauna Loa Observatory, in ppm (after Keeling 1997).

Climate GCMs have started to expand their territories to new frontiers, such as the ocean, the middle atmosphere, planetary atmospheres, paleoclimate, tracer problems, and hydrology. The paleoclimate approach appears very useful for the study of the CO₂ issue, because the Earth's atmosphere has already experienced a sizable amount of natural CO₂ variation. Fortunately, paleoclimatic records of temperature were extracted from the Greenland and Antarctic ice cores, and the ocean sediment cores. They contain evidence of striking changes in the concentration of atmospheric carbon dioxide, in particular, the core record from about 18,000 to 9,000 years ago (see Kutzbach 1985). Based on the data published by CLIMAP Project Members (1976), Gates (1976) and Manabe and Broccoli (1985) performed simulation experiments.


b Construction of ocean GCMs

The first ocean GCMs were built by Bryan (1963) at GFDL, Takano (1975) at the University of Tokyo, and Sarkisyan (1966) in the Soviet Union. In contrast to the acceptance of GCMs by meteorologists, the ocean GCM was not so popular among oceanographers. The main reason may have been the lack of marine observations, for which reason there had been no attempts at ocean forecasting. Another reason is that the leaders in oceanography considered the GCMs a technical exercise, as opposed to a scientific tool. I recall a discussion in 1957 with a well-known oceanography professor at the University of Tokyo. I proposed that he organize a committee to purchase or build a scientific computer at the university. He opposed this proposal, telling me that there were more important issues in oceanography. It is interesting to note that Bryan received his PhD in meteorology.

As mentioned earlier, oceanography had the formidable difficulty of a lack of observational data. The ships-of-opportunity, based on measurements taken by ocean liners and fishing vessels, provide temperature, salinity, and current data from the surface down to only 500 meters depth. Observations at a depth of 3 ~ 4 km are obtained by research vessels, and are only available occasionally.

The first attempt to couple an ocean GCM with an atmospheric GCM was carried out in a highly idealized geometry by Manabe and Bryan (1969). They used different time steps for the integration of the atmosphere and ocean GCMs ("asynchronous integration") for the sake of economy, because they started the integration with the ocean at rest.
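Schematically, asynchronous integration is just two nested time loops that exchange boundary fields at the slower rate; the toy classes and time steps below are invented stand-ins, not the Manabe-Bryan code:

```python
class ToyAtmosphere:
    """Trivial stand-in: relaxes its temperature toward the SST it sees."""
    def __init__(self):
        self.t, self._flux = 10.0, 0.0
    def step(self, dt, sst):
        self._flux = 0.1 * (sst - self.t)         # fake surface heat flux
        self.t += dt / 86400.0 * self._flux
    def mean_surface_flux(self):
        return self._flux

class ToyOcean:
    """Trivial stand-in: large heat capacity, forced by the surface flux."""
    def __init__(self):
        self.t = 15.0
    def surface_temperature(self):
        return self.t
    def step(self, dt, flux):
        self.t -= dt / 86400.0 * 0.01 * flux

dt_atm, dt_ocn = 600.0, 86400.0                   # asynchronous time steps (assumed)
atm, ocn = ToyAtmosphere(), ToyOcean()
for _ in range(30):                               # 30 ocean days
    sst = ocn.surface_temperature()               # ocean -> atmosphere
    for _ in range(int(dt_ocn / dt_atm)):         # many atmosphere steps per ocean step
        atm.step(dt_atm, sst)
    ocn.step(dt_ocn, atm.mean_surface_flux())     # atmosphere -> ocean
```

The economy comes from stepping the slowly evolving ocean far less often than the fast atmosphere while the coupled system spins up.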

At present, the oceanographic model activities are divided into at least three categories. The first category is for El Niño forecasts on the time-scale of 1 ~ 6 years. It requires the 60°N-60°S extension of the Pacific basin and a depth of at most 1 km. The grid resolution is 1° ~ 2° for the entire domain, but at least 1/3° meridionally for the equatorial Pacific basin, to resolve the oceanic Kelvin and Rossby waves. The salinity is not important. This kind of model emerged after 1983 (Philander and Seigel 1984). The second category is for the simulation of the world ocean circulation, including mesoscale eddies, on the time-scale of 10 ~ 30 years, which requires at least 1/6° grid resolution and a world ocean of 2 km depth (Semtner and Chervin 1988). The currents are driven by wind, and therefore, the salinity variability may not be essential. However, the treatment of high-latitude sea-ice could be important. It can be argued that, with an appropriate parameterization of mesoscale eddies (such as Gent and McWilliams 1990), 1/6° grid resolution may not be required. The third category is for the thermohaline circulation on the time-scale of 60 ~ 500 years (Manabe and Stouffer 1988), which requires 1° grid resolution and a whole global ocean, with a depth of at least 1 km, 4 km being desirable. The salinity is essential. See section 7.

6 Long range forecasts for El Niño

a Extended forecasts

After finishing the 10-year project of bi-weekly forecasts in 1975, my group at GFDL started the monthly forecast experiment, which continued until 1985. The ECMWF has been very successful in daily 10-day forecasts, so I thought that GFDL should take a slightly different course and undertake more basic research. In fact, we had already started a study of utilizing the observed sea surface temperature (SST) for forecasts. However, we knew that SST anomalies are only effective after 20 days. This point was quite controversial with Namias, who was a pioneer of long-range weather forecasts (LRF) and had stressed the importance of SST (Namias 1976; 1962). Therefore, it is not surprising that Namias insisted that SSTs influenced the atmosphere even after two days. But, as far as our monthly forecast is concerned, the monthly normal SST, as opposed to the real SST, is used as the lower boundary condition, because the time-varying SST is not necessary. For the GCM, we first used the "Kuri-Grid" model (Kurihara and Holloway 1967), and later switched to the spectral model.

One of the major targets was to forecast "blocking". It had been known that blocking is one of the most outstanding and distinct phenomena in the middle and high latitudes (Rex 1950; Green 1977). Besides, Charney and DeVore (1979) presented a paper which asserted that there are two states of equilibrium in the atmosphere, i.e., zonal flow and blocking. This paper created a sensation at that time, because people had not realized that blocking could be such a fundamental component in the dynamic theory. However, this argument was eventually not well accepted, because the paper used an excessively low-order (few terms) system, and as a result, the solutions did not represent the real atmosphere well. In any event, the importance of blocking has not diminished. Concerning the understanding of the dynamics of blocking, progress has been slow.

Monthly forecasts are beyond the limit of predictability for cyclone-scale weather. Therefore, it is not appropriate to compare the day-to-day weather patterns between the model's result and the observation. Another difference from the daily forecasts is that multiple forecasts are required: Shukla (1981) presented three realizations of one-month GCM runs, concluding that the separation of the runs had not yet reached saturation, and therefore, there is hope for monthly forecasts. My group showed three-member ensemble forecasts for eight January cases (Miyakoda and Sirutis 1985). Unfortunately, our GCM was not good enough in quality to obtain accurate forecasts. However, if the systematic error of the particular GCM is removed after the end of a forecast, the skill of prediction is substantially improved, and the monthly forecasts are not bad.

Concerning the broad view of forecasts, one of the reasonable ways is to compare the teleconnection patterns of the 500 hPa height field between the model and observations. Wallace and Gutzler (1981) identified five categories of teleconnection patterns, such as the PNA (Pacific/North American) pattern (the original finding was by Namias 1969). These teleconnection patterns have an approximate relation with the EOF (Empirical Orthogonal Function) loading function, indicating that these patterns are most recurrent (see Fig. 9). Blackmon et al. (1977) showed that, if the geopotential height distribution is processed by a filter retaining frequencies less than (10 days)⁻¹, there remain stationary and low-frequency planetary and some synoptic-scale waves. This means that no baroclinically unstable wave is left in these patterns, and therefore, the filtered patterns may be predictable.
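As a sketch of such low-pass filtering (Blackmon et al. used a carefully weighted time filter; the sharp spectral cutoff below is only an illustration of the idea, with assumed daily sampling):

```python
import numpy as np

def low_pass(height, dt_days=1.0, cutoff_days=10.0):
    """Crude low-pass filter for a daily 500-hPa height time series:
    zero all Fourier components with periods shorter than the cutoff,
    keeping the stationary and low-frequency part of the flow."""
    spec = np.fft.rfft(height)
    freq = np.fft.rfftfreq(height.size, d=dt_days)   # cycles per day
    spec[freq > 1.0 / cutoff_days] = 0.0
    return np.fft.irfft(spec, height.size)
```

What survives the filter is, by construction, the slowly varying component for which there is some hope of extended-range predictability.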


Fig. 9. Teleconnection 200-mb geopotential height maps for negative PNA (a) and positive PNA (b), with a contour interval of 120 m. (c) The difference of geopotential height between the positive and negative PNA patterns, with a contour interval of 60 m. (d) An example of the teleconnection index at 500 mb in five categories for March 1965.

As was mentioned earlier, there are rather clear distinctions, in the basic framework, between one-month forecasts and seasonal forecasts. For the latter, accurate lower boundary conditions, e.g., the precise SST anomaly distribution, are essential. On the other hand, for the former, an adequate ensemble of initial conditions appears important. This subtlety of initial conditions will not be an important issue for seasonal forecasts (Brankovic and Palmer 1997). One-month forecasts are indeed intermediate between bi-weekly and seasonal forecasts.

b WMO Committee on LRF

The 1960s and 1970s were perhaps the years of lowest ebb in interest and activity for LRF, partly because of the still advancing knowledge and technique of extended-range numerical prediction (too early for LRF), the apathy related to the poor skill of statistical LRF, and the start of enthusiasm about the study of global warming, which attracted able scientists. For example, the long-range forecast section of NOAA in Washington, D.C. changed its name to the Climate Analysis Center. Nevertheless, according to the WMO survey in 1978, 34 countries were carrying out scientific work on LRF; 10 meteorological services were regularly issuing one-month forecasts, while 22 services
