An Application of the Multi-Physics Ensemble Kalman Filter to Typhoon Forecast

CHANH KIEU,1,3 PHAM THI MINH,2 and HOANG THI MAI2

Abstract—This study examines the roles of the multi-physics
approach in accounting for model errors for typhoon forecasts with
the local ensemble transform Kalman filter (LETKF). Experiments
with forecasts of Typhoon Conson (2010) using the weather
research and forecasting (WRF) model show that use of the WRF’s
multiple physical parameterization schemes to represent the model
uncertainties can help the LETKF provide better forecasts of
Typhoon Conson in terms of the forecast errors, the ensemble
spread, the root mean square errors, the cross-correlation between
mass and wind field as well as the coherent structure of the
ensemble spread along the storm center. Sensitivity experiments
with the WRF model show that the optimum number of the
multi-physics ensemble is roughly equal to the number of combinations
of different physics schemes assigned in the multi-physics
ensemble. Additional idealized experiments with the Lorenz
40-variable model to isolate the dual roles of the multi-physics
ensemble in correcting model errors and expanding the local
ensemble space show that the multi-physics approach appears to be
more essential in augmenting the local rank representation of the
LETKF algorithm rather than directly accounting for model errors
during the early cycles. The results in this study suggest that the
multi-physics approach is a good option for short-range forecast
applications with full physics models in which the spinup of the
ensemble Kalman filter may take too long for the ensemble spread
to efficiently capture model errors and cross-correlations among
model variables.
Key words: LETKF, ensemble data assimilation,
multi-physics ensemble.
1 Introduction
Evaluating the impacts of model internal uncertainties in tropical cyclone (TC) forecasting models is a challenging problem. While there are many different sources of errors related to observational errors, insufficient vortex initialization, or spurious correlations (see, e.g., DALEY, 1993; ANDERSON, 2007; BAEK et al., 2006; LI et al., 2009), model deficiencies associated with underrepresented physical processes are perhaps the main cause of the forecast errors in TC models. In particular, fast sub-grid processes such as turbulence forcing or cloud microphysics are so highly variable under the TC extreme wind conditions that inadequate parameterizations of these processes could considerably degrade the TC track and intensity forecast skills. Numerous studies have shown that a simple change of the microphysics scheme or the boundary layer parameterization could result in very different forecasts even with the same initial condition, especially for forecasts of high-impact mesoscale systems such as heavy rainfall or TCs (ZHU, 2005; VICH and ROMERO, 2010; BYUN et al., 2007; LI and PU, 2009; IM et al., 2007; KIEU and ZHANG, 2010; PU, 2011).
As a quick illustration, Fig. 1 shows four 36-h forecasts of the accumulated rainfall over the Indochina Peninsula, using the weather research and forecasting model (WRF-ARW, V3.2; SKAMAROCK et al., 2005). These forecasts are initialized with the same boundary and initial condition valid at 1200 UTC 13 July 2010, at which time Typhoon (TY) Conson (2010) started to develop rapidly. The initial and boundary conditions are taken from the National Center for Environmental Prediction (NCEP) Global Forecast System (GFS) forecast products. All model forecasts have similar model configurations except for four different microphysics schemes: the Lin et al. scheme, the Kessler scheme, the WSM 3-class simple ice scheme, and the WSM 5-class scheme. One can easily notice that both the magnitude and distribution of accumulated precipitation differ substantially in the
1 Laboratory for Weather and Climate Forecasting, Hanoi
College of Science, Vietnam National University, Hanoi 10000,
Vietnam. E-mail: chanhkq@vnu.edu.vn
2 Center for Environmental Fluid Dynamics, Hanoi College
of Science, Vietnam National University, Hanoi 10000, Vietnam.
3 I. M. Systems Group, NOAA/NWS/NCEP/EMC, Camp
Springs, MD 20746, USA.
© 2013 Springer Basel
four forecasts (Fig. 1), especially around and in the eastern part of TY Conson, where both the maximum accumulation and the coverage of the rainfall vary widely; for the Kessler and Lin et al. microphysics schemes, the peak rainfall exceeds 310 mm and is confined mostly along the track of TY Conson, while the rest have a rainfall distribution extending farther to the east of the Indochina Peninsula.
The diverse forecasts due to different physics schemes in the above simple example demonstrate that severe mesoscale weather systems such as this one are highly sensitive to the configuration of forecasting models. As seen in Fig. 1, the errors associated with such a poor microphysical representation could suppress any benefit of the observational information assimilated into the model, no matter how good the initial condition is. This
Figure 1. Thirty-six-hour forecasts of the accumulated rainfall using the WRF model with the same initial and boundary condition but with different microphysics schemes: a the Kessler microphysics scheme; b the WSM 3-class simple ice scheme; c the WSM 5-class scheme; and d the Lin et al. microphysics scheme. All simulations are configured with a single domain of 36-km resolution and initialized at 1200 UTC 12 July 2010 from the GFS global input data, with boundaries updated every 6 h.
poses a real challenge to any TC model, in which a single deterministic forecast could easily provide biased track and intensity information. Model errors are thus essential and have to be considered properly in any TC forecasting system.
The importance of model errors has been recognized and examined extensively in previous studies (e.g., ANDERSON and ANDERSON, 1999; HOUTEKAMER et al., 2005; WHITAKER and HAMILL, 2002; SZUNYOGH et al., 2008; MENG and ZHANG, 2007; LI et al., 2009; EVENSEN, 2009). For unbiased systems, ANDERSON and ANDERSON (1999) proposed an approach in which the a priori covariance matrix is enlarged every cycle by a multiplicative factor λ > 1, the so-called multiplicative covariance inflation technique. This approach is to some extent equivalent to an assumption that the model error is proportional to the background covariance by a factor of (λ − 1), and thus bears all spatial structures of the a priori covariance matrix. In another approach, MITCHELL and HOUTEKAMER (2000) and HAMILL and WHITAKER (2005) suggested that one can add a random distribution to the a posteriori analysis perturbations such that the spread of the ensemble and, consequently, the analysis covariance matrix are augmented. The additive and the multiplicative inflation methods have been tested in various assimilation systems and have proven to give encouraging results (see, e.g., HOUTEKAMER et al., 2005; ANDERSON, 2007; WHITAKER and HAMILL, 2002; SZUNYOGH et al., 2008; HUNT et al., 2005; LI et al., 2009).
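The multiplicative scheme described above can be illustrated with a small sketch: scaling the ensemble perturbations by √λ enlarges the sample covariance by exactly λ while preserving all of its spatial structure, whereas additive inflation injects new, structureless variance. The toy three-variable ensemble below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
k = 20                                    # ensemble size
ens = rng.normal(size=(3, k))             # toy 3-variable ensemble

mean = ens.mean(axis=1, keepdims=True)
pert = ens - mean                         # ensemble perturbations

lam = 1.8                                 # multiplicative inflation factor (lambda > 1)
ens_mult = mean + np.sqrt(lam) * pert     # inflates the covariance by exactly lambda

# additive inflation: add random noise to each member instead
ens_add = ens + rng.normal(scale=0.1, size=ens.shape)

cov_b = np.cov(ens)
cov_m = np.cov(ens_mult)
# the multiplicative scheme preserves the spatial structure of the prior covariance
print(np.allclose(cov_m, lam * cov_b))    # True
```

Because the inflated covariance is simply λ times the background covariance, this scheme is equivalent to assuming a model error proportional to the background covariance by a factor of (λ − 1), as noted in the text.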
A somewhat different approach was introduced by ZHANG et al. (2004), in which modified analysis perturbations are computed by weighting the background and newly obtained analysis perturbations. This technique can be shown to be roughly equivalent to degrading the quality of observations, i.e., to increasing the observation errors, thus inflating the posterior analysis. To further enlarge the ensemble spread, FUJITA et al. (2007), MENG and ZHANG (2007), and KIEU et al. (2012) proposed to employ multiple physical parameterizations, which were demonstrated to help improve the performance of the ensemble Kalman filter (EnKF) for TC forecasts. In addition to the above approaches for unbiased models, a number of techniques that deal with biased models have also been developed (see, e.g., DEE and KALNAY, 2008). A review of different model error correction techniques in the presence of model bias can be found in LI et al. (2009).
Because the early TC development and subsequent intensification are challenging problems due to the sensitivity of the storm track and intensity to forecasting models, the case of TY Conson (2010) is chosen in this study to examine the relative importance of the multiple physics (MP) and the multiplicative inflation (MI) approaches in correcting model errors in the EnKF algorithm. The MI approach is currently considered an effective treatment of model errors in many practical applications of the EnKF in regional models. While there are several other techniques, such as additive inflation or various adaptive versions of the localization, that are shown to be valuable (see, e.g., ANDERSON, 2007; BISHOP and HODYSS, 2009; MIYOSHI, 2011), thorough validation of all these inflation techniques is challenging in the context of regional models with full primitive equations settings. Therefore, this study is limited to examining the performances of the MP and MI approaches for the ease of implementation and comparison. A variant of the EnKF, the so-called Local Ensemble Transform Kalman Filter (LETKF), is adopted and implemented in the WRF-ARW model system for our investigation. Recent studies have demonstrated that the LETKF is a potential candidate for various real-time global and regional applications (HUNT et al., 2005; SZUNYOGH et al., 2008; MIYOSHI and YAMANE, 2007; MIYOSHI and KUNII, 2012). By far, the MI approach and its related adaptive algorithms are the most common methods for accounting for model errors in the LETKF (HUNT et al., 2005; LI et al., 2009; MIYOSHI, 2011). Thus, it is of significance to examine the importance of the MP method against the MI method in forecasting TCs with the LETKF algorithm.

Because a single case study could be exposed to representativeness errors that make it hard to generalize to other situations, a series of complementary idealized experiments with the Lorenz 40-variable model (LORENZ and EMANUEL, 1998), whose model errors are assumed to be represented by a random forcing function, will also be conducted. Note again that the main objective of this study is to examine how the multi-physics ensemble can help the LETKF algorithm improve the TC forecasts relative to the multiplicative covariance inflation technique.
Therefore, potential representativeness errors arising from a single case are expected to be of secondary importance for such a relative comparison.
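The Lorenz 40-variable model used in the idealized experiments can be sketched in a few lines. The forcing F = 8 and the cyclic coupling follow Lorenz and Emanuel (1998); the time step and spin-up length below are illustrative choices, and the random forcing term used in the paper to emulate model error would be added to the tendency.

```python
import numpy as np

def lorenz96_tendency(x, forcing=8.0):
    """Tendency of the Lorenz 40-variable model (Lorenz and Emanuel, 1998):
    dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F, with cyclic indices."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt=0.05, forcing=8.0):
    """Advance one step with fourth-order Runge-Kutta (dt = 0.05 ~ 6 h)."""
    k1 = lorenz96_tendency(x, forcing)
    k2 = lorenz96_tendency(x + 0.5 * dt * k1, forcing)
    k3 = lorenz96_tendency(x + 0.5 * dt * k2, forcing)
    k4 = lorenz96_tendency(x + dt * k3, forcing)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# spin up from a slightly perturbed rest state onto the chaotic attractor
x = 8.0 * np.ones(40)
x[19] += 0.01                  # small perturbation to trigger chaotic growth
for _ in range(500):
    x = rk4_step(x)
```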
In the next section, a quick overview of the LETKF will be presented. Section 3 describes the model experiments and data. Results are discussed in Sect. 4. In Sect. 5, some sensitivity experiments with the idealized Lorenz 40-variable model are presented to provide further insight into the dual role of the multi-physics approach in the EnKF algorithm. Discussions and conclusions are given in the final section.
2 LETKF Algorithm
Recent studies with the LETKF have demonstrated that this ensemble scheme is capable of handling a wide range of scales and observation types (see, e.g., HUNT et al., 2005; SZUNYOGH et al., 2008; LI et al., 2009; KIEU et al., 2012). The main advantage of the LETKF is that it allows for the analysis to be computed locally in the space spanned by the forecast ensemble members at each model grid point, which reduces the computational cost and facilitates efficient parallel computation.

The key idea of the LETKF algorithm is to use the background ensemble matrix as a transformation operator from the model space spanned by the grid points within a selected local patch to the ensemble space spanned by the ensemble members, and to perform the analysis in this ensemble space at each grid point. For a quick summary of the LETKF algorithm,
assume that a background ensemble $\{x^{b(i)}: i = 1, 2, \ldots, k\}$ is given, where $k$ is the number of ensemble members (assuming that we are doing the analysis at one instant of time, so no time index is written explicitly). Following HUNT et al. (2005), an ensemble mean $\bar{x}^b$ and an ensemble perturbation matrix $X^b$ are defined respectively as:

$$\bar{x}^b = \frac{1}{k} \sum_{i=1}^{k} x^{b(i)}, \qquad X^b = \{x^{b(1)} - \bar{x}^b,\; x^{b(2)} - \bar{x}^b,\; \ldots,\; x^{b(k)} - \bar{x}^b\}. \quad (1)$$
Let $x = \bar{x}^b + X^b w$, where $w$ is a local vector in the ensemble space. The local cost function to be minimized in the ensemble space is given by:

$$\tilde{J}(w) = (k-1)\, w^T \{ I - (X^b)^T [X^b (X^b)^T]^{-1} X^b \}\, w + J(\bar{x}^b + X^b w), \quad (2)$$

where $J$ is the usual cost function in the model space. Provided that the minimizer $w^a$ is orthogonal to the null space $N$ of $X^b$ such that the cost function $\tilde{J}(w)$ is minimized, the mean analysis state and its corresponding analysis error covariance matrix in the ensemble space can be found as:

$$\bar{w}^a = \tilde{P}^a (Y^b)^T R^{-1} (y^o - \bar{y}^b), \quad (3)$$

$$\tilde{P}^a = [ (k-1) I + (Y^b)^T R^{-1} Y^b ]^{-1}, \quad (4)$$

where $Y^b$ denotes the background ensemble perturbations mapped into the observation space, $y^o$ is the observation vector, $\bar{y}^b$ is the background ensemble mean in the observation space, and $R$ is the observational error covariance matrix. Since the analysis error covariance matrices in the model space and in the ensemble space have a simple connection $P^a = X^b \tilde{P}^a (X^b)^T$, the analysis ensemble perturbation matrix $X^a$ can be chosen as follows:

$$X^a = X^b [ (k-1) \tilde{P}^a ]^{1/2}. \quad (5)$$

The analysis ensemble $x^a$ is finally obtained as:

$$x^{a(i)} = \bar{x}^b + X^b \{ \bar{w}^a + [ (k-1) \tilde{P}^a ]^{1/2}_{(i)} \}, \quad (6)$$

where the subscript $(i)$ denotes the $i$-th column of the matrix square root. Detailed handling of more general nonlinear and asynchronous observations in the LETKF can be found in HUNT et al. (2005). It should be mentioned that the above formulation is only valid in the absence of model errors. To take into account the model errors, HUNT et al. (2005) suggested that a multiplicative factor should be introduced in Eq. (4) (specifically, applied to the first factor on the rhs of Eq. 4). This simple use of the multiplicative inflation introduces no additional cost to the scheme, and has been shown to be efficient in many applications of the LETKF (e.g., LI et al., 2009; MIYOSHI, 2011; MIYOSHI and KUNII, 2012).
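As a concrete illustration of Eqs. (1)–(6), the analysis step for a single local patch can be sketched as follows. This is a minimal NumPy sketch, not the actual WRF-LETKF implementation: it assumes a linear observation operator and a diagonal R, and the inflation factor ρ is applied by dividing the (k − 1) term in Eq. (4), as described in the text.

```python
import numpy as np

def letkf_analysis(Xb, yo, H, R_diag, rho=1.0):
    """One LETKF analysis step for a single local patch.

    Xb     : (n, k) background ensemble (n model variables, k members)
    yo     : (m,) observation vector
    H      : (m, n) linear observation operator (assumed linear here)
    R_diag : (m,) observation error variances (R assumed diagonal)
    rho    : multiplicative covariance inflation factor (rho >= 1)
    """
    n, k = Xb.shape
    xb_mean = Xb.mean(axis=1)                  # ensemble mean, Eq. (1)
    Xb_pert = Xb - xb_mean[:, None]            # perturbation matrix, Eq. (1)

    Yb = H @ Xb_pert                           # perturbations in observation space
    yb_mean = H @ xb_mean
    Rinv_Yb = Yb / R_diag[:, None]             # R^{-1} Y^b for diagonal R

    # Analysis covariance in ensemble space, Eq. (4); inflation divides
    # the (k-1) factor, which inflates P~a multiplicatively.
    Pa_tilde = np.linalg.inv((k - 1) / rho * np.eye(k) + Yb.T @ Rinv_Yb)

    # Mean weight vector, Eq. (3)
    wa_mean = Pa_tilde @ Rinv_Yb.T @ (yo - yb_mean)

    # Symmetric square root of (k-1) P~a for the perturbation weights, Eq. (5)
    evals, evecs = np.linalg.eigh((k - 1) * Pa_tilde)
    Wa = evecs @ np.diag(np.sqrt(evals)) @ evecs.T

    # Analysis ensemble, Eq. (6)
    return xb_mean[:, None] + Xb_pert @ (wa_mean[:, None] + Wa)
```

In the full algorithm this function would be called independently at every grid point, using only the observations within the local patch, which is what makes the scheme embarrassingly parallel.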
3 Experiment Descriptions
3.1 Overview of Typhoon Conson (2010)
Conson (2010) was the first typhoon of the 2010 typhoon season in the western Pacific (WPAC) basin. It originated from a tropical disturbance east of the Philippines around July 9, 2010. The system reached the tropical depression stage on July 11 and intensified quickly into a severe tropical storm around July 12 as it moved westward over favorable environmental conditions. By July 13, Conson attained the typhoon stage with a maximum wind of ~130 km h−1. It weakened substantially after making landfall over the Philippine archipelago but re-strengthened when it entered the South China Sea. As it approached the Vietnamese coastline, it managed to reach the typhoon stage again, with a maximum wind of ~110 km h−1 right before crossing Hainan Island. Despite its fairly straight westward track, it was somewhat surprising to notice that most model guidance, including the GFS forecasts, had a strong rightward bias. This caused considerable confusion, and the Hydro-Meteorological Service continued issuing advisories that Conson would bear northward until late on July 15, when the consensus forecasts from several guidance models started to converge toward a track that headed to the north of Vietnam. Such persistent rightward biases of the track forecasts appeared to be related to the underestimation of the large-scale steering flow associated with the western Pacific subtropical ridge in the GFS model, which provided the global boundary conditions for most of the regional models. As such, all of the downstream model applications that were driven by the GFS would suffer from the same biases, leading to inaccurate advisories of Conson's track and intensity change.
The impacts of Typhoon Conson were detrimental across several countries, including the Philippines, Vietnam, China, and Laos. In the Philippines, Conson produced very heavy rains that triggered flooding over a widespread area. Seventy-six people were reported to have been killed across the area, and 72 others were listed as missing. Damage was also reported in Vietnam, where several people were killed and 17 others were listed as missing. In China, at least two people were killed due to wind-related incidents. The total damage over all countries was estimated at more than US $100 million. Understanding the source of the bias, as well as the effectiveness of different data assimilation methods in improving the track forecast of Typhoon Conson, is therefore of significance for better preparation and typhoon forecasting in the future.
3.2 Model Configuration

Because of the large number of ensemble experiments conducted, the WRF model is configured with a single domain of 36-km horizontal resolution and initialized with the NCEP/GFS operational analysis. The model domain covers an area of 3,700 km × 3,700 km with 31 vertical levels, and it is centered in the South China Sea, to the east of Vietnam (Fig. 1). The forecast period spans the lifetime of TY Conson from its early depression stage at 1200 UTC 12 July 2010 to its near dissipation at 1200 UTC 15 July 2010 after making landfall over Vietnam.
To establish a general baseline for the performance of different model error correction techniques, three experiments, each with a fixed number of 30 ensemble members, are conducted. In the first control experiment (NO), a specific set of model physics in the WRF model, including (a) the Betts–Miller–Janjic (BMJ) cumulus parameterization scheme (JANJIC, 2000); (b) the Yonsei University (YSU) planetary boundary layer (PBL) parameterization (HONG et al., 2006); (c) the WRF Single-Moment 3-class (WSM3) microphysics scheme (HONG et al., 2004); and (d) the rapid radiative transfer model (RRTM) scheme for both long-wave and short-wave radiation (MLAWER et al., 1997), is applied to all ensemble members, with no multiplicative inflation at any analysis cycle. This NO experiment serves as a reference to evaluate the effectiveness of the MI and MP approaches.
In the second experiment (MP), a spectrum of (1) three microphysics schemes, including the Kessler scheme, the Lin et al. scheme, and the WSM3 scheme; (2) two PBL schemes, including the YSU and the Mellor–Yamada–Janjic (MYJ) schemes; (3) two cumulus parameterization schemes, including the Kain–Fritsch scheme and the Betts–Miller–Janjic scheme; and (4) two long-wave radiative schemes, including the RRTM and the Geophysical Fluid Dynamics Laboratory (GFDL) longwave schemes, is used. A total of 24 different combinations of these physical options are formed and assigned to different ensemble members in a sequence of permutations of the above physical options.1 If the number of ensemble members is larger than the number of combinations, the assignment is repeated for the next members. There is no inflation invoked in the MP experiment, such that the increase of the ensemble spread relative to the NO experiment can be attributed to the use of the multiple physics options.
In the third experiment (MI), a multiplicative inflation factor λ = 1.8 is applied to the transformed analysis covariance matrix P̃a in Eq. (4). The inflation factor is kept constant for all cycles, and the same set of model physics as in the NO experiment is used for all members, such that the effectiveness of the MI approach in correcting model errors can be compared against that of the MP approach in a transparent way. The role of the MI approach is examined further in a number of sensitivity experiments in which the inflation factor varies from 1.0 to 6.5, which appears to be a typical range of the inflation factor for the TC environment, as shown in MIYOSHI and KUNII (2012). These sensitivity experiments are expected to provide some information about the significance of the adaptive multiplicative inflation in the LETKF. Likewise, the effectiveness of the MP approach can be investigated by varying the number of ensemble members from 10 to 50 in several additional sensitivity experiments. See Table 1 for the list of experiments.
Since there are no ensemble backgrounds for the first analysis cycle, cold-start background ensemble members are first initialized by adding random noise with standard deviations of 3 m s−1 for wind, 3 K for temperature, and 3 × 10−3 kg kg−1 for specific humidity to the GFS data 12 h earlier, i.e., at 0000 UTC 12 July. The short 12-h forecasts of the cold-started members from 0000 UTC to 1200 UTC 12 July are then used as the initial background for the first analysis cycle at 1200 UTC 12 July.
3.3 Observation Data
To create observational data, a hypothetical truth is formed by using the NCEP Final Operational Global Analysis (FNL) dataset during the period that encompasses the whole lifecycle of TY Conson (2010). Such FNL data represent roughly the true state of the atmosphere and will be considered in this study as a reference for later comparison. After the truth is obtained, bogus observations are generated every 6 h by adding random noise with a standard deviation of 1 m s−1 for the horizontal wind components, 1 K for temperature, and 10−3 kg kg−1 for the specific humidity to the truth at all of the grid points from the surface up to the level z = 13 km. These bogus observational data points are generated in the form of radiosonde columns in the LITTLE_R format2; any data levels that are below the terrain height are eliminated. As a step to orient the WRF-LETKF system to be consistent with the WRF assimilation system (WRFDA), all of the bogus observations are assigned observational errors that are based on the error statistics imposed by the quality control component of the WRFDA system.
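The bogus-observation procedure can be sketched as below: Gaussian noise with the stated standard deviations is added to a "truth" field at every grid point. The grid dimensions and the random truth fields here are hypothetical stand-ins for the FNL-based truth.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_bogus_obs(truth, sd):
    """Generate bogus observations by adding Gaussian noise with standard
    deviation `sd` to a truth field, mimicking the procedure in the text."""
    return truth + rng.normal(scale=sd, size=truth.shape)

# hypothetical 3-D truth fields on a (nz, ny, nx) model grid
shape = (20, 50, 50)
u_true = rng.normal(10.0, 5.0, size=shape)            # zonal wind (m/s)
t_true = rng.normal(280.0, 10.0, size=shape)          # temperature (K)
q_true = np.abs(rng.normal(0.01, 0.003, size=shape))  # specific humidity (kg/kg)

obs_u = make_bogus_obs(u_true, 1.0)      # 1 m/s wind error
obs_t = make_bogus_obs(t_true, 1.0)      # 1 K temperature error
obs_q = make_bogus_obs(q_true, 1.0e-3)   # 1e-3 kg/kg humidity error
```

In the actual system these columns would then be written out in the LITTLE_R format with WRFDA-consistent observation errors, as described above.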
Of the seven prognostic variables in the WRF-ARW model, including the horizontal wind components, the vertical velocity w, the perturbation potential temperature, the perturbation geopotential, the surface pressure, and the water vapor, the variables that are directly assimilated by the LETKF are the horizontal (u and v) winds, the potential temperature, and the relative humidity. The other three prognostic variables are updated every cycle via cross-correlations with the observed variables. The degree to which such prognostic variables can absorb observational information depends essentially on how the ensemble cross-correlations are represented, and can be used to evaluate the effectiveness of different model error correction methods. To minimize the impacts of the homogeneous covariance localization, the observational network is designed in such a way that observations are given at all model grid points for all cycles. This partly removes the need for a spatially adaptive localization, albeit the scale of the localization still needs to be tuned at the initial time for the best performance. For the LETKF, a local volume of 11 × 11 grid points in the horizontal direction and a vertical extension of 0.2 (in σ-coordinates) is fixed, and the localization scale is chosen to be 700 km and kept constant in time.

1 In the WRF-ARW model, each physical parameterization option has a designated number. So, the 24 combinations of the physical schemes are simply permutations of these options, which are of the form (i, j, k, l), in which i ∈ [1, 2, 3] is for the microphysics scheme, j ∈ [2, 3] for the PBL schemes, k ∈ [1, 2] for the cumulus schemes, and l ∈ [1, 2] for the longwave radiation schemes.

2 LITTLE_R is a legacy data format that was developed earlier to ingest observational data for the MM5 model. This format is adopted in the WRF-ARW as part of continued support for different observational format inputs. More information about the LITTLE_R format can be found at: http://www.mmm.ucar edu/mm5/mm5v3/little_rv3.html
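The text does not state which taper function is used for the 700-km covariance localization; a common choice in ensemble Kalman filters is the Gaspari–Cohn fifth-order, compactly supported correlation function, sketched below as an illustration (the function reaches zero at twice the half-width c).

```python
import numpy as np

def gaspari_cohn(dist, c):
    """Gaspari-Cohn fifth-order compactly supported correlation function,
    commonly used to taper ensemble covariances with distance.
    `c` is the localization half-width; the taper reaches zero at 2c."""
    r = np.abs(np.asarray(dist, dtype=float)) / c
    taper = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r < 2.0)
    ri = r[inner]
    # -r^5/4 + r^4/2 + 5 r^3/8 - 5 r^2/3 + 1 for 0 <= r <= 1
    taper[inner] = (((-0.25 * ri + 0.5) * ri + 0.625) * ri
                    - 5.0 / 3.0) * ri**2 + 1.0
    ro = r[outer]
    # r^5/12 - r^4/2 + 5 r^3/8 + 5 r^2/3 - 5 r + 4 - 2/(3 r) for 1 < r < 2
    taper[outer] = (((((ro / 12.0 - 0.5) * ro + 0.625) * ro
                      + 5.0 / 3.0) * ro - 5.0) * ro + 4.0) - 2.0 / (3.0 * ro)
    return taper
```

With c set near the 700-km scale used here, the resulting weights multiply the ensemble covariances element-wise so that observations beyond roughly 2c have no influence on the local analysis.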
3.4 Boundary Condition
To ensure that each member has its own lateral boundary condition consistent with its updated analysis, the WRFDA boundary routine is used to generate boundaries for each ensemble member after the ensemble analysis step is finished at every cycle. Because the GFS forecasts are output every 6 h, the lateral boundary conditions are updated at the same interval. The role of the lateral boundaries in regional models should be especially highlighted, as our various experiments with different types of boundary conditions show that idealized boundaries tend to be detrimental to the TY development (not shown). For example, use of open boundaries destroys the coherent structure and development of TY Conson after just two or three cycles. This is because the convective instabilities needed to fuel the TY growth appear to be radiated away from the domain. As a result, the storm dissipates, and the ensemble collapses (i.e., the ensemble spread approaches zero) and drifts away from the truth quickly, thus reducing the capability of the ensemble filter to assimilate new observations.
4 Results
4.1 Control Experiments
As a preliminary illustration of the performance of the MI and MP approaches in the forecasts of TY Conson, Fig. 2 shows the track forecast errors averaged from 1200 UTC 12 to 1200 UTC 13 July and the time series of the storm intensity valid at 1200 UTC 12 July. Note that this 12-h period, rather than the entire forecast period, is selected for the averaging shown in Fig. 2 because the TC track and intensity forecasts require tracking the point-like storm centers, for which a systematic track bias can be carried over to the next cycles if it is not corrected at every cycle. Unless a vortex initialization scheme is used to re-correct the vortex center (or the initial condition is continuously updated from the GFS analysis), the storm center will deviate gradually away from the best track, leading to artificially accumulated track and intensity errors in the later cycles. Since we have no bogus vortex component implemented in this work, any analysis of track or intensity forecast errors is limited to the first several cycles, during which the storm centers are located at a similar position to the truth location from the FNL dataset.
Table 1. List of experiments with the WRF-LETKF configuration

Experiment      Description
NO              A single set of model physics; no inflation; 30 ensemble members
MP              A combination of 24 physics options; no inflation; 30 ensemble members
MI              A single set of model physics as in the NO experiment; inflation factor λ = 1.8; 30 ensemble members
MP–MI           A combination of 24 physics options; inflation factor λ = 1.8; 30 ensemble members
MI-sensitivity  A single set of model physics as in the NO experiment; inflation factor λ varying from 1.0 to 6.5; 30 members
MP-sensitivity  A combination of 24 physics options; no inflation; ensemble members varying from 10 to 50
One first sees in Fig. 2 that, although the track errors in the MI experiment do not show much improvement with respect to the NO experiment, the MP experiment exhibits a noticeably better track forecast, with the track errors reduced by about 20 % at the 36-h lead time and beyond during the selected period. This is because the broad spectrum of the physics schemes in the MP experiment helps generate a range of storms with different intensities, which interact differently with the steering environment (Fig. 2b, c).
Figure 2. a Comparison of the track forecast errors between experiments with no model error correction (dark gray), a multiplicative inflation factor of 1.8 (medium gray), and multiple physics (light gray); b time series of the minimum sea level pressure for the MP experiment; and c as in b but for the MI experiment, valid at 1200 UTC 15 July 2010. All experiments have 30 ensemble members.
Unlike the MI ensemble, in which all members show a similar intensity with very minimal spread for the entire forecast period, the MP ensemble possesses a much larger intensity spread, with half of the members showing higher intensity while the other half possesses much weaker intensity. Such a bifurcation of the intensity seen in the MP ensemble is attributed to the fact that the members with the Kain–Fritsch cumulus scheme tend to produce storms with much stronger intensity than those with the BMJ scheme. This is not limited to the selected cycle but is in fact observed very consistently in all experiments conducted thus far with our WRF-LETKF system. Because of their stronger intensity and more well-defined circulation, members with the Kain–Fritsch cumulus scheme are subject to a larger northward bias under the strong influence of the subtropical ridge over Mainland China. In contrast, the ensemble members with the BMJ cumulus scheme have weaker intensity and do not seem to be influenced much by this northward steering, leading to an overall larger ensemble spread and thus smaller ensemble mean track errors, as shown in Fig. 2. Of course, any analysis of the storm intensity at the 36-km resolution should be treated with caution, as this coarse resolution may not capture reliably the magnitude of the maximum surface wind at any stage. However, the mesoscale characteristics are often sufficiently represented in the model for the storms to experience a different response to the large-scale steering flow under different parameterizations. This explains the different performance in the track and intensity errors seen in Fig. 2.

Because the point-like metrics based on the TY track and intensity errors are rather sensitive to the model resolution and could be subject to representativeness errors, Fig. 3 further compares the total errors between the MP and MI experiments. Here, the total errors are defined as the volume-averaged energy root mean squared errors (EME) as follows:

$$\mathrm{EME} = \left\langle \frac{1}{2}\left( U'U' + V'V' + \frac{C_p}{T_r}\, T'T' \right) \right\rangle^{1/2}, \quad (7)$$

where $U$ and $V$ are the zonal and meridional wind components, respectively, $T$ is the temperature (in K), the prime denotes the difference between the analysis and the truth valid at the same time, $C_p$ is the heat capacity at constant pressure, $T_r = 273$ K is the reference temperature, and the angle brackets denote an average over the entire model grid mesh.
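Eq. (7) translates directly into code. The sketch below assumes, consistent with a volume average, that the mean over the grid is taken before the square root; the fields are ordinary NumPy arrays on the model grid.

```python
import numpy as np

CP = 1004.0   # heat capacity at constant pressure (J kg^-1 K^-1)
TR = 273.0    # reference temperature (K)

def eme(u_a, v_a, t_a, u_t, v_t, t_t):
    """Volume-averaged energy root-mean-squared error, Eq. (7).
    Primes are analysis-minus-truth differences; the average is taken
    over all grid points of the supplied arrays."""
    up, vp, tp = u_a - u_t, v_a - v_t, t_a - t_t
    energy = 0.5 * (up * up + vp * vp + (CP / TR) * tp * tp)
    return float(np.sqrt(energy.mean()))
```

Because the temperature term is weighted by Cp/Tr, wind and temperature errors contribute in comparable (energy) units, which is why a single scalar in m s−1 can summarize the total analysis error.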
Consistent with the track errors, the MP experiment has overall better performance during the entire forecast period, with an average EME error of ~0.2 m s−1 as compared to 1.2 and 0.8 m s−1 in the NO and MI experiments, respectively. In particular, the MP EME errors are reduced by ~80 % after just three cycles and maintain a steadily small magnitude afterward. Except for the first cold-started cycle, the smaller EME errors in the MP experiment are observed for a
Figure 3. Comparison of the volume-averaged energy root mean squared errors (EME) with no model error correction (dark gray), a fixed multiplicative inflation factor λ = 1.8 (medium gray), and multiple physics (light gray).
regardless of the domain size or the model resolution (cf. also Fig. 9). The outperformance of the MP method during the entire period, especially for the few early cycles, can be understood if one notes that convective- to meso-scale instabilities often develop rapidly during the incipient phase of model integration (TOTH and KALNAY, 1997). Such development of the uncertainties related to the instabilities at the convective scale depends sensitively on the different parameterization schemes used by the model. With a diverse set of physics schemes, the MP ensemble could generate a large spread quickly after 12 h into the integration, allowing for the model errors associated with physical representations to be captured more efficiently.
To investigate the MP and MI ensemble spreads more explicitly, Figs. 4 and 5 show a series of horizontal distributions of the ensemble spreads, defined as the standard deviation of the wind speed with respect to the ensemble mean, in the MI and MP experiments. Despite its fairly organized structure in both coverage and amplitude, the MI spread is in general small, even near the storm center where uncertainties are supposed to be the most significant at the convective scale. Because of the inflation, it is seen that the analysis increments still manage to capture some of the new observational information at each cycle. Apparently, the inflation is essential to allow for such an analysis update despite the small ensemble spread. As long as the spread does not completely collapse, the inflated covariance always enhances the analysis perturbations such that new observational information is updated over the area where the spread is significant. Similar analyses of the ensemble spread versus the analysis update for the NO experiment confirm that, without the inflation, the analysis increments are indeed very small and virtually negligible over the entire domain after three cycles (not shown).
Although the MI analysis increments exhibit some update of new observational information at every cycle, it should be noted that the percentage of the analysis increments relative to the observational increments diminishes quickly in time in the MI experiment (Fig. 4e–h). Indeed, examining the observational increments shows that these increments keep growing in time because the model state is drifting away from the truth. This is anticipated, as the MI analysis increments could not fully assimilate new observational information at each cycle due to the small ensemble spread, even after being inflated (Fig. 4). As a result, the difference between the analyzed and the true states accumulates every cycle, leading to a growing deviation of the model state from the true atmosphere toward the end of the forecast period.

Unlike the MI spread, the MP spread shows much more signal with time in both the horizontal and vertical cross sections (Figs. 5, 6). Specifically, the MP spread is more organized and could maintain well the structure of the uncertainties along the track of the storm. Inspecting the regions of strong wind convergence shows that the MP spread is most representative in the first five cycles, during which model uncertainties related to convective instabilities develop rapidly, leading to a consistent and coherent structure of the spread. The clustering of the MP spread along the convergent areas seen in Fig. 5 implies that new observations are mostly updated within these large-spread areas but minimally used in other areas where the ensemble spread is otherwise small. The characteristically large concentration of the MP spread along the convergent zones does not seem to depend on the number of ensemble members, as this is seen for all ranges of the number of ensemble members.
The difference in the MP and MI spreads is even more apparent in the vertical cross sections (Fig. 6). While the MI spread is confined mostly to the middle levels, we observe that the MP spread could capture model uncertainties related to the TC physics from the surface up to 200 hPa. Comparison of the analysis increments in the MI and MP experiments shows that the MI method tends to be slightly more efficient at the upper levels, where the magnitudes of the MI and MP ensemble spreads are somewhat similar, but it is less efficient than the MP approach from the surface to ~300 hPa, where the MI spread is not sufficient even after being inflated. Note that the performance of the MP approach is somewhat degraded toward the end of the forecast because the saturation of the ensemble spread affects the effectiveness of the LETKF. As a result, the LETKF is no longer capable of efficiently updating new observations, and the EME errors start to grow afterward (Fig. 3).
Figure 4. Plane views at 900 hPa of a–d the ensemble spread with the inflation factor of 1.8 (shaded, m s−1) and the zonal wind analysis increments (contoured at an interval of 0.5 m s−1); and e–h the rms wind speed errors (shaded) and the observed zonal wind increments (contoured at an interval of 0.5 m s−1), valid at 0000 UTC 13, 1200 UTC 13, 1200 UTC 14, and 1200 UTC 15 July. Superimposed are storm-relative flows.
Figure 5. Similar to Fig. 4 but for the multi-physics approach with no inflation.