
Recent Progress in Numerical Atmospheric Modeling in China


Rucong YU1,*,
Yi ZHANG1,
Jianjie WANG2,
Jian LI1,
Haoming CHEN1,
Jiandong GONG2,
Jing CHEN2

Corresponding author: Rucong YU, yrc@cma.gov.cn
1.State Key Laboratory of Severe Weather, Chinese Academy of Meteorological Sciences, China Meteorological Administration, Beijing 100081, China
2.National Meteorological Center, China Meteorological Administration, Beijing 100081, China
Manuscript received: 2018-10-15
Manuscript revised: 2019-05-02
Manuscript accepted: 2019-06-10
Abstract: This review summarizes the scientific and technical progress in atmospheric modeling in China since 2011, including the dynamical core, model physics, data assimilation, ensemble forecasting, and model evaluation strategies. In terms of the dynamical core, important efforts have been made in the improvement of the existing model formulations and in exploring new modeling approaches that can better adapt to massively parallel computers and global multiscale modeling. With regard to model physics, various achievements in physical representations have been made, especially a trend toward scale-aware parameterization for accommodating the increase of model resolution. In the field of data assimilation, a 4D-Var system has been developed and is operationally used by the National Meteorological Center of China, and its performance is promising. Furthermore, ensemble forecasting has played a more important role in operational forecast systems and progressed in many fundamental techniques. Model evaluation strategies, including key performance metrics and standardized experimental protocols, have been proposed and widely applied to better understand the strengths and weaknesses of the systems, offering key routes for model improvement. The paper concludes with a concise summary of the status quo and a brief outlook in terms of future development.
Keywords: numerical modeling, atmospheric modeling, weather and climate modeling





1. Introduction
Numerical atmospheric/meteorological models have become fundamental tools for operational weather forecasting, climate prediction and projection, and research modeling. The performance of atmospheric models reflects the core strength of meteorological technology. With the ongoing enhancement of computational techniques and resources, the role of numerical models will certainly become increasingly important in both research and operational applications.
As numerical modeling has gradually advanced, the development of atmospheric models has been pushed forward along two complementary routes: the increase in model resolution and the increase in model completeness. These two routes loosely correspond to two major applications of atmospheric models: numerical weather prediction (NWP) and climate modeling. The need for highly accurate forecasts in the NWP community leads to pursuing higher model resolutions, which are beneficial for resolving more fine-scale structures associated with atmospheric motion. Global nonhydrostatic modeling will become a routine rather than an exceptional choice as the horizontal resolution of NWP models approaches the typical nonhydrostatic regime (Bauer et al., 2015). For earth/climate system modeling as a whole, it is more important to obtain an adequate statistical state that covers a relatively long time duration. This approach typically requires a detailed description of different component processes, which increases the model complexity. While the resolution of global climate models has also increased substantially in the past few decades (e.g., Yu et al., 2016), atmospheric modeling applications will still be largely operated in the hydrostatic regime, especially when it comes to a fully coupled system. However, at the regional scale, convection-permitting climate modeling has already become popular (e.g., Prein et al., 2017). Although these applications differ somewhat, the underlying science and techniques associated with these modeling strategies are similar. The ultimate solutions to these common issues may be shared by various modeling groups with different purposes.
The development of numerical atmospheric modeling in China dates back to the 1950s (e.g., Gu et al., 1957; Liao, 1958; Chou et al., 1963; Zeng, 1979; Zeng et al., 1985; Qian and Zhong, 1986; Zhang, 1990; Li, 1994; Zhou and Wang, 1996; Wu et al., 1996). Some earlier reviews have been presented by Tao et al. (2003), Xue (2004), and Xue and Liu (2007). Building on these past achievements, weather and climate modeling activities have prospered in China. In the field of weather forecasting, the Global/Regional Assimilation and Prediction System (GRAPES) has been operationally used for delivering global and regional forecasts (e.g., Shen et al., 2017). The performance of this operational system has approached that of some leading global forecast systems. Figures 1a and b present the forecasting skill in terms of the root-mean-square error of sea level pressure over Asia, and the anomaly correlation coefficient of geopotential height at 500 hPa over the Northern Hemisphere, respectively. For the Asia region, GRAPES-GFS (the global version) already shows performance competitive with other models. However, for the entire Northern Hemisphere, there are still some gaps compared with other leading global forecast systems. In terms of daily operational weather forecasting for China, GRAPES-GFS also exhibits performance competitive with the systems developed in Europe and Japan (Fig. 1c).
Figure1. (a) Forecasting skill in terms of the root-mean-square error of sea level pressure forecasts for Asia. (b) Forecasting skill in terms of the anomaly correlation coefficient of geopotential height at 500 hPa for the Northern Hemisphere. (c) Equitable Threat Score (ETS) of precipitation over China averaged in the period 1 July to 31 October 2018. Results from various forecasting systems (ECMWF, NCEP, Russia, GRAPES, JMA, and UK) are compared.


With regard to global climate modeling, the major efforts have been devoted to climate prediction and projection. Climate models developed in China were used for phases one, three and five of the Coupled Model Intercomparison Project (CMIP). Four modeling groups (Bao et al., 2013; Li et al., 2013a; Ji et al., 2014; Wu et al., 2014) made important contributions to the last program, i.e., CMIP5. Presently, there are around 10 climate modeling groups all over the country (Zhou et al., 2014b), and these groups will play a more active role in the upcoming CMIP6. Meanwhile, subseasonal-to-seasonal prediction has attracted the attention of both the weather and climate modeling communities. The Beijing Climate Center (BCC) Climate System Model at the China Meteorological Administration (CMA) has participated in the Subseasonal-to-Seasonal Project (e.g., Jie et al., 2017; Liu et al., 2017a), which targeted the building of a common database (cf., Vitart et al., 2017) for research in this field, as well as helping to improve the performance of operational systems.
This review seeks to summarize the scientific and technical progress in the field of numerical atmospheric modeling in China since 2011. As shown in Fig. 2, a typical atmospheric modeling system can be described by several major components from the perspective of the system function, including the dynamical core that handles the resolvable-scale fluid structures; model physics that represents the bulk effects of the non-resolvable-scale processes by using parametric relationships of the prognostic variables; and the data assimilation system that provides an optimal estimate of the initial state. Given proper boundary conditions prescribed by existing datasets or provided by an interactive component model in the earth system (e.g., land and ocean), a unified modeling framework may be used for multiple meteorological applications, such as weather forecasting, climate prediction, or climate projection. Scientific and technical advances associated with these modeling system components for NWP and climate modeling are the major focus of this paper.
Figure2. Schematic diagram illustrating the typical components of an atmospheric modeling system, including functions and applications.


Meanwhile, it is also useful to mention what this review does not cover. It does not address all atmospheric modeling issues in a broad sense. The forcing mechanisms for conventional atmospheric models, such as the coupling between atmospheric models and other earth system component models and their interactions, are beyond the scope of this review. The roles played by aerosol and chemical species in the atmosphere are important but are also not considered here. In addition to the science and techniques associated with the modeling system itself, progress in model application and evaluation strategies, including operational ensemble forecasting and model evaluation, is also described. These aspects have become an indispensable part of numerical modeling of the atmosphere. They help users apply the system more skillfully and aid in understanding its strengths and weaknesses, thus offering key routes for performance improvement.
The remainder of this review is organized as follows. Sections 2 and 3 describe advances in the fields of the dynamical core and physical processes, respectively. Progress in data assimilation is documented in section 4. Sections 5 and 6 review studies associated with ensemble forecasting and model evaluation, respectively. A summary and future outlook are provided in the final section.

2. Dynamical core
The dynamical core deals with the resolvable-scale fluid evolution in a model. In this section, research advances associated with the dynamical core are reviewed with respect to three aspects: (i) the horizontal and vertical discretization techniques developed for existing models; (ii) the techniques developed for the new generation of global models on quasi-uniform meshes; and (iii) the impact of the dynamical core on model performance.

2.1. Horizontal and vertical discretization
A modern atmospheric dynamical core may be separated into two parts: (i) the basic dynamical component that solves the atmospheric governing equations (hydrostatic or nonhydrostatic, shallow or deep atmosphere), which determine the evolution of mass, momentum and energy; and (ii) a passive transport component that handles the advection of moisture and other passive tracer species. The basic dynamics also contain many transport terms (e.g., mass, momentum, heat), which are tightly connected with wave-like phenomena. Hence, the transport solver usually has an important impact on model performance. Transport methods are usually categorized into Lagrangian and Eulerian types based on the specification of the flow field. The (semi) Lagrangian approach solves the material time derivative equation by tracking the fluid trajectory. It is able to maintain numerical stability when combined with large Courant numbers but requires nontrivial effort to meet the conservation laws. The Eulerian approach solves the local time derivative equation, and the time step is usually limited by the Courant number, but the flux-form Eulerian scheme can automatically guarantee conservation of mass. In practical modeling, the choice between them is usually a tradeoff depending on the primary focus and the target applications.
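As a point of reference (standard continuous forms, not the discrete formulation of any particular model discussed here), the two families of transport solvers start from different but equivalent forms of the transport equation:

% Semi-Lagrangian schemes integrate the advective (material-derivative) form
% along backward trajectories, with x_d the departure point of the trajectory
% arriving at grid point x_a:
\frac{Dq}{Dt} \equiv \frac{\partial q}{\partial t} + \mathbf{v}\cdot\nabla q = 0,
\qquad q(\mathbf{x}_a, t^{n+1}) = q(\mathbf{x}_d, t^{n}).

% Flux-form Eulerian schemes integrate the conservative form over each control
% volume, so the cell-integrated tracer mass is conserved by construction:
\frac{\partial (\rho q)}{\partial t} + \nabla\cdot(\rho q\,\mathbf{v}) = 0,
\qquad
\overline{(\rho q)}_i^{\,n+1} = \overline{(\rho q)}_i^{\,n}
- \frac{\Delta t}{\Delta x}\left(F_{i+1/2} - F_{i-1/2}\right).

The interpolation to the departure point in the first form is what permits large Courant numbers but breaks exact conservation, whereas the telescoping flux differences in the second form guarantee conservation but tie the stable time step to the Courant number.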
A semi-Lagrangian transport method (Bermejo and Conde, 2002) is used by the GRAPES model, which was developed at the National Meteorological Center of the CMA. The original semi-Lagrangian scheme effectively avoids the time step restriction in the polar regions, but it does not conserve mass or any mass-weighted quantity. The scheme also does not resolve well the sharp gradients of the moisture field, and thus may adversely impact model accuracy. To improve model performance, Shen et al. (2011) and Wang et al. (2011b) replaced the original semi-Lagrangian scheme with a piecewise rational method advection scheme. This approach reduces the numerical errors in both idealized transport tests and realistic modeling experiments. The results associated with water vapor distribution and heavy rainfall forecasting were improved by this scheme owing to a more accurate representation of moisture transport. Su et al. (2013) further provided a flux-form implementation of this scheme for solving the moisture transport equation in GRAPES.
For Eulerian transport schemes, the extremely short zonal distance in the polar regions is a problem that restricts their application in global models based on the regular latitude-longitude or Gaussian grid. Several methods that can alleviate the zonal Courant number restriction have been proposed (e.g., Leonard et al., 1996; Lin and Rood, 1996). In Zhang et al. (2013c), a leaping-point difference variant of the two-step shape-preserving advection scheme (TSPAS; Yu, 1994) was established. The scheme helps to alleviate the time step restriction, and does not create computational difficulties in the polar regions, as confirmed by both idealized transport tests and realistic climate modeling tests. Long-term simulation results (e.g., Yu et al., 2015b; Zhang and Li, 2016) have demonstrated the robust performance of this modified TSPAS when used in a global model (Community Atmosphere Model, version 5; CAM5). Moreover, CAM5 with TSPAS shows improved performance for summer rainfall simulations at the southern edge of the Tibetan Plateau (Yu et al., 2015b). As shown in Fig. 3, artificial rainfall amounts (Fig. 3a) at the steep southern edge of the Tibetan Plateau are reduced in the model when TSPAS is activated (Fig. 3b). The reason may lie in the restriction of moisture transport across multiple grid cells, which can easily occur in the original semi-Lagrangian scheme. This transport scheme has been implemented in the Chinese Academy of Meteorological Sciences climate system model (Rong et al., 2018), which will participate in CMIP6.
Figure3. Differences in summer (June-July-August) mean precipitation amount (shading; units: mm d-1) (a) between CAM5 and TRMM 3B42 and (b) between M-CAM5 (with TSPAS) and CAM5. Black contours indicate orographic height (shown for 200, 500, 2000 and 4000 m). [Reprinted from (Yu et al., 2015b)].
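To make the idea of shape-preserving, flux-form Eulerian transport concrete, the following minimal Python sketch applies a generic slope-limited upwind step on a periodic 1D grid; this is an illustrative stand-in, not the TSPAS or piecewise rational formulations themselves:

import numpy as np

def minmod(a, b):
    # Minmod slope limiter: zero at extrema, which prevents new over/undershoots
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_step(q, u, dx, dt):
    # One flux-form, slope-limited upwind step on a periodic 1D grid (u > 0)
    dql = q - np.roll(q, 1)               # backward difference
    dqr = np.roll(q, -1) - q              # forward difference
    slope = minmod(dql, dqr)              # limited slope in each cell
    c = u * dt / dx                       # Courant number (assumed <= 1 here)
    q_face = q + 0.5 * (1.0 - c) * slope  # value at the right face of each cell
    flux = u * q_face                     # flux through that face
    # Flux-form update: sum(q) * dx is conserved exactly
    return q - (dt / dx) * (flux - np.roll(flux, 1))

# Advect a step-like "moisture" field once around the periodic domain
n, dx, u = 200, 1.0, 1.0
dt = 0.5 * dx / u
x = np.arange(n)
q = np.where((x > 60) & (x < 100), 1.0, 0.0)
mass0 = q.sum() * dx
for _ in range(int(n * dx / (u * dt))):
    q = advect_step(q, u, dx, dt)
print("mass change:", q.sum() * dx - mass0)  # ~0 (machine precision)
print("min/max:    ", q.min(), q.max())      # stays within [0, 1]

Because the update is written as a difference of face fluxes, total tracer mass is conserved to machine precision, and the limiter keeps the solution within its initial range, which is the "shape-preserving" property emphasized above.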


Another research activity associated with the transport solver is the development of fully Lagrangian advection schemes (e.g., Dong and Wang, 2012; Dong et al., 2014; Dong et al., 2015). Fully Lagrangian schemes do not discretize space in the way finite-difference/finite-volume schemes do. Instead, these schemes discretize the total fluid mass into a number of small pieces and track the trajectory of each fluid unit. Thus, these schemes do not necessarily assume an underlying mesh shape. The proposed fully Lagrangian scheme is inherently shape-preserving and shows good performance in terms of transporting fields with sharp gradients. The scheme also produces quite small range-preserving unmixing, which is potentially important for transporting chemical species. For the practical application of this scheme, the major challenge is to establish a parallel framework, because a fluid unit may travel to any location on the sphere.
In the vertical direction, the atmospheric equations may be cast into either height-based or mass-based governing equations (Laprise, 1992), and some alternative choices are also available (e.g., Bleck, 1974; Randall et al., 2000). For nonhydrostatic modeling, using height-based or mass-based coordinates is physically similar, while the numerical techniques can differ. Mass-based coordinates are a natural extension of the pressure-based coordinates that have been used by most hydrostatic models. Their applications in nonhydrostatic modeling are also widespread. The Advanced Regional Eta-coordinate Model (Yu and Xu, 2004), which was originally a hydrostatic primitive equation model, was upgraded to include the nonhydrostatic effect using a mass-based coordinate (Cheng et al., 2018). The nonhydrostatic formulation is able to reproduce the hydrostatic solutions when nonhydrostatic effects are trivial. This is a desirable feature for simulating large-scale flow because the physical constraints demand a hydrostatic balance. Yang et al. (2015) implemented a hybrid finite-element finite-difference vertical discretization for a global nonhydrostatic spectral model based on a mass coordinate. In this approach, a finite-difference scheme that satisfies the required set of constraints is applied to the linear components, and a cubic finite-element scheme with high-order accuracy is applied to the nonlinear components. A deep-atmosphere extension of this model is further described in Yang et al. (2017).
At the bottom, the model surface is usually arranged to follow the topography, and the coordinate levels are gradually transformed to pure pressure or pure height levels at the model top. These hybrid coordinates have been widely adopted because they alleviate the terrain-induced undulation of pure sigma coordinate surfaces at upper levels. Li et al. (2012) compared several types of hybrid coordinates for GRAPES, which uses a height-based terrain-following vertical coordinate. The results suggested that, among the four selected coordinates, the one with the smallest undulation at upper levels leads to the best performance. Zhang et al. (2015c) implemented a height-based hybrid vertical coordinate (Klemp, 2011) in the GRAPES model. The scheme reduces numerical errors associated with the pressure gradient force term and leads to improved model performance. Li et al. (2014b) proposed an orthogonal terrain-following coordinate system and tested it using idealized advection tests. This scheme helps to reduce advection errors compared with the corresponding hybrid sigma coordinate, although the convergence of grid lines over orography restricts the time step and increases numerical errors.
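For orientation, terrain-following hybrid coordinates are commonly written in the following generic forms (textbook expressions; the specific coefficient functions of the models discussed above differ in detail):

% Mass-based hybrid (sigma-pressure) coordinate: pressure on level eta, with
% surface pressure p_s, reference pressure p_0, and B decaying toward the top
p(\eta) = A(\eta)\,p_0 + B(\eta)\,p_s, \qquad B(\eta_{\mathrm{top}}) = 0, \quad B(\eta_{\mathrm{surface}}) = 1.

% Height-based terrain-following coordinate with surface height h(x,y) and
% b decaying from 1 at the surface to 0 at the model top z_top
z(x,y,\eta) = \eta\, z_{\mathrm{top}} + b(\eta)\, h(x,y), \qquad b(0) = 1, \quad b(1) = 0.

The basic sigma choice corresponds to b(\eta) = 1 - \eta; hybrid or smoothed variants let b(\eta) decay more rapidly with height, which flattens coordinate surfaces aloft and reduces pressure-gradient-force errors over steep terrain, which is the common thread of the coordinate modifications described above.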

2.2. Modeling on quasi-uniform grids
The next generation of weather and climate models will largely depend on massively parallel computers to obtain desirable computational performance. A large amount of model development effort has therefore been directed toward quasi-uniform grids (e.g., Staniforth and Thuburn, 2012). The regular latitude-longitude grid has a well-known polar problem that restricts its parallel efficiency, and this problem worsens as the horizontal resolution increases. Quasi-uniform grids facilitate the use of localized discretization methods for global models that have a high-resolution modeling requirement. Meanwhile, the nonhydrostatic effect will become significant as the horizontal grid spacing decreases below certain thresholds (e.g., ~10 km for the atmosphere). Global modeling at such resolutions would require a nonhydrostatic dynamical formulation such that motions at various scales can be better resolved. This section provides a review of research activities in this regard.
For the GRAPES model, a Yin-Yang grid variant has been developed in recent years (Li et al., 2015b). The Yin-Yang grid is a combination of two regular latitude-longitude grids with some overlapping regions (Kageyama and Sato, 2004). Because of the regular and orthogonal line structure, a Yin-Yang grid model can largely inherit the formulation of its regular latitude-longitude counterpart (e.g., the semi-implicit semi-Lagrangian technique of GRAPES). The elliptic equation is solved by a generalized conjugate residual solver, and bicubic Lagrangian interpolation is used to provide Dirichlet-type boundary conditions between the subdomains. Long-term integrations under different sea surface temperature conditions were carried out on an aqua planet (Li and Peng, 2018), and the simulations of precipitation and large-scale circulation generally showed reasonable performance. A major challenge when developing a Yin-Yang grid model is that the overlapping regions require special treatment to meet mass conservation requirements (e.g., Peng et al., 2007). However, this problem is not urgent for the Yin-Yang GRAPES model because its semi-Lagrangian transport does not conserve mass.
A multimoment finite-volume technique (e.g., Ii and Xiao, 2007) has been developed for atmospheric applications on quasi-uniform grids in recent years. This technique has been applied to modeling 2D advection problems on the sphere (e.g., Chen et al., 2012; Li et al., 2013b), global nonlinear shallow water waves (e.g., Chen et al., 2014; Li et al., 2015c), and nonhydrostatic dynamics on a 2D transect (Li et al., 2013c). The basic idea of this method (an earlier prototype is also known as the constrained interpolation profile; Yabe et al., 2001; Xiao and Ikebata, 2003) is to use more than one type of prognostic variable (e.g., point value, derivative value, and cell-averaged value; two types are typically used) for model integration. The multimoment finite-volume approach defines both cell-averaged values and point values as independent prognostic variables. In contrast, conventional finite-volume/finite-difference methods only define the cell-averaged or point value in a single-moment sense. The additional moment increases the local degrees of freedom and helps to enhance the numerical accuracy by using high-order local reconstructions. A parallelized multimoment nonhydrostatic atmospheric dynamical core has been constructed recently. The model has both global and regional configurations, and exhibits good performance in several idealized numerical benchmark experiments (X. L. Li, 2019, personal communication). Ongoing efforts are being made to build an NWP model based on this dynamical core.
A new modeling approach based on an unstructured mesh is currently being explored at the Chinese Academy of Meteorological Sciences, in collaboration with the National Supercomputing Center in Wuxi. The unstructured-mesh approach does not assume a particular underlying mesh shape and can be used for both global quasi-uniform and regional-refinement applications in a consistent way. The quasi-uniform application starts from a subdivided icosahedron, which is a semistructured geodesic grid that has two types of neighboring relations. The local line segments on an icosahedral grid can be arranged to be either orthogonal (e.g., Heikes and Randall, 1995) or nonorthogonal (e.g., Tomita et al., 2001). The orthogonal approach (referred to as the Voronoi polygon) is used because it provides beneficial properties for staggered finite-volume methods (Weller et al., 2012), and naturally accommodates existing variable-resolution techniques (e.g., Ringler et al., 2008). In Zhang et al. (2018a), a global shallow-water modeling framework, which isolates most horizontal components of a 3D model, was developed and tested. The model shows results comparable with reference solutions in various 2D idealized benchmark tests. In particular, the importance of potential vorticity transport for the model behavior was described and emphasized in that study. Some numerical methods for solving the transport equation have also been developed and evaluated (e.g., Zhang et al., 2017; Zhang et al., 2018a).
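For context, the vector-invariant shallow-water equations commonly solved on such staggered unstructured meshes make the role of potential vorticity (PV) transport explicit (a generic continuous formulation, not the discrete operators of the cited model):

\frac{\partial h}{\partial t} + \nabla\cdot(h\mathbf{u}) = 0, \qquad
\frac{\partial \mathbf{u}}{\partial t}
  = -\,q\,\mathbf{k}\times(h\mathbf{u})
    - \nabla\!\left(g(h+b) + \tfrac{1}{2}|\mathbf{u}|^{2}\right), \qquad
q = \frac{\zeta + f}{h},

where h is the fluid thickness, b the bottom topography, ζ the relative vorticity, f the Coriolis parameter, and q the PV. Because the rotational forcing appears as the PV flux q k×(hu), the accuracy of the discrete PV transport directly controls the quality of the simulated rotational flow, consistent with the emphasis placed on it in the study above.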
Recently, a global nonhydrostatic dynamical core has been further established on an unstructured mesh (Zhang et al., 2019a). An added value of this model is that its variable-resolution capability can resolve more fine-scale fluid structures without a uniform increase in the global resolution. Figure 4 shows the baroclinic waves simulated using this newly developed global nonhydrostatic framework. In the variable-resolution simulation, with resolution varying from ~70 km to ~30 km (Fig. 4a), the model is able to resolve the fine-scale structure of the relative vorticity field over the refinement region better than the quasi-uniform ~60 km run (Fig. 4b), and closer to the ~30 km simulation (Fig. 4c). More importantly, the model does not show evident performance deterioration in the mesh transition zone. Future work should extend the present framework to a fully fledged atmospheric general circulation model (AGCM).
Figure4. Simulated dry baroclinic waves from a newly developed global nonhydrostatic atmospheric dynamical core using (a) variable resolution from ~60 km to ~30 km, (b) quasi-uniform modeling at ~60 km resolution, and (c) as in (b) but using a resolution of ~30 km. Results are compared for the relative vorticity (units: 10-5 s-1) at the model level near 850 hPa after 10 simulation days. The black contours in (a) denote the refinement region centered at (35°N, 180°E). [Reprinted from Zhang et al. (2019a)].



2.3. Impact of the dynamical core on model performance
Once a dynamical core is constructed, it is usually difficult to isolate the impact of changing the dynamical core on model performance, and this impact is therefore sometimes overlooked. Some recent studies have suggested that such an impact is not small. Zhang et al. (2013b) investigated the sensitivity of the simulated climate to two atmospheric dynamical cores coupled with the same model physics: the Institute of Atmospheric Physics (IAP) AGCM and CAM3.0. The authors showed that the model with the IAP core simulated a colder troposphere than that with the CAM3.0 core, reducing the original warm bias of CAM3.0 in the tropical and midlatitude troposphere (using the same model physics at a similar resolution). However, when the two dynamical cores were used in the idealized dry Held-Suarez test, the IAP core simulated a warmer troposphere than the CAM3.0 core. These results suggest that interactive physical processes can change the effect of the dynamical core on climate simulations. Zhang et al. (2015d) compared the simulations of deep stratus clouds over continental East Asia between two CAM5 versions with a grid-point finite-volume core (CAM-FV) and a spectral transform core (CAM-EUL). They showed that CAM-FV tends to produce an artificial low-value center on the eastern periphery of the Tibetan Plateau. This problem, which is not found in CAM-EUL (and also not found in the low-resolution CAM-FV; Zhang et al., 2014c), leads to a significant deficit in the shortwave cloud radiative forcing. The reason is attributed to artificial low-level divergence around the steep terrain, which creates subsiding motion that limits the generation of stratus clouds. This problem might be related to the artificial response to the better-resolved steep topography in a high-resolution configuration (at least as configured in that study). These findings demonstrate that the dynamical environment produced by the numerical solver can have a strong impact on the simulation of cloud physical features.
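For reference, the Held-Suarez test mentioned above replaces the full model physics with prescribed Newtonian relaxation of temperature and Rayleigh damping of low-level winds (the standard published formulation, abbreviated here):

\left(\frac{\partial T}{\partial t}\right)_{\mathrm{phys}} = -\,k_T(\phi,\sigma)\,\bigl[T - T_{\mathrm{eq}}(\phi,p)\bigr],
\qquad
\left(\frac{\partial \mathbf{v}}{\partial t}\right)_{\mathrm{phys}} = -\,k_v(\sigma)\,\mathbf{v},

T_{\mathrm{eq}} = \max\!\left\{200\ \mathrm{K},\;
\left[315\ \mathrm{K} - \Delta T_y \sin^{2}\phi
- \Delta\theta_z \log\!\frac{p}{p_0}\,\cos^{2}\phi\right]
\left(\frac{p}{p_0}\right)^{\kappa}\right\},

with ΔT_y = 60 K and Δθ_z = 10 K. Because the forcing is fixed, dry and analytically specified, any systematic difference between two dynamical cores run under it (such as the temperature contrast noted above) reflects the numerics alone rather than interactive physics.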

3. Model physics
3.1. Moist convection and turbulence
Moist convection and turbulence are small-scale atmospheric phenomena. The associated dynamical processes would be resolved provided that the model resolution is high enough. For typical global weather and climate models, however, the collective effects of these processes on the resolvable scale need to be parameterized. The absence of a parameterization of these effects may result in significant model biases (e.g., Chen et al., 2016c).
Cumulus convection is central to the hydrological cycle, and the cumulus scheme is usually crucial to model performance. Some studies have explored the sensitivity of simulated precipitation to deep convection parameterization. Wu (2012) proposed a mass-flux cumulus parameterization scheme based on a bulk-cloud approach for large-scale atmospheric models. This scheme shows good performance in both single-column and global simulations when used in the BCC climate model (Wu et al., 2014). Wang and Zhang (2013) investigated the impact of different closure and triggering parameterizations of deep convection on single-column simulations of tropical convection by CAM4 and CAM5. Rainfall amounts associated with different precipitation types in the model are sensitive to these choices, but the total precipitation amount may be insensitive because of compensation by different parameterization components. These results highlight the need for multiple datasets for constraining models, as well as for treating different parameterizations as an integrated system. Xu et al. (2014a) applied a modified simplified Arakawa-Schubert scheme to the GRAPES-Meso model, and found that the new scheme improves the prediction of typhoon tracks.
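The bulk mass-flux framework underlying such schemes can be summarized as follows (a generic form; the closure, trigger, and entrainment/detrainment specifications are where individual schemes, including those above, differ):

\frac{\partial M_u}{\partial z} = (\epsilon - \delta)\,M_u, \qquad
\frac{\partial (M_u \varphi_u)}{\partial z} = \epsilon M_u\,\bar\varphi - \delta M_u\,\varphi_u + S_\varphi,

\left(\frac{\partial \bar\varphi}{\partial t}\right)_{\mathrm{cu}}
\approx \frac{1}{\bar\rho}\frac{\partial}{\partial z}
\Bigl[M_u\,(\varphi_u - \bar\varphi)\Bigr],

where M_u is the updraft mass flux, ε and δ the fractional entrainment and detrainment rates, φ a conserved in-cloud variable, and S_φ the in-cloud sources (e.g., condensation). The scheme is closed by relating the cloud-base M_u to the resolved-scale state (e.g., CAPE or moisture convergence), and it is precisely these closure and trigger choices whose sensitivity is discussed above.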
The importance of shallow convection has also been recognized. Shallow convection is more tightly connected with boundary layer processes and has a strong impact on the global hydrological and energy cycles. A shallow convection mass-flux scheme has been used in the GRAPES model to replace the original vertical turbulent mixing and diffusion approach (Liu et al., 2015). The results show that the precipitation distribution in the tropics is improved, with reduced spurious precipitation. The improvement is attributed to more reasonable shallow convective heating rates. Wan et al. (2015) improved the trigger function of the shallow convection scheme in the GRAPES-Meso model by relating the trigger function to the distribution of temperature and humidity in the convective surface layer. As more shallow convection is triggered, the model gives better 24-h forecasts of accumulated precipitation and surface air temperature. Kang et al. (2016) further incorporated, based on this scheme, a complementary parameterization of the cloud cover and hydrometeors associated with shallow convection. The results show improved cloud cover and surface total solar radiation in a one-month forecast experiment. Furthermore, a recent global climate modeling study also suggested that a better representation of the transition from shallow to deep convection may help to delay the peak time of continental rainfall in the afternoon (Zhang and Chen, 2016), although this study used a completely different subgrid parameterization framework (see section 3.4).
As the horizontal grid spacing of global NWP models decreases below ~10 km, the model dynamics needs to resolve nonhydrostatic effects more effectively. Dynamical processes that were under-resolved at coarser resolutions become partly resolved, and more atmospheric energy is captured by grid-scale dynamics. Moreover, below certain resolution thresholds, clouds can be fully represented by the model grid (sometimes in an approximate way), and explicit cloud microphysics will thus interact more tightly with grid-scale dynamics. Meanwhile, research has also pointed out that there are dynamical difficulties in gray-zone modeling of the convective boundary layer, because convection in this regime is artificially governed by the grid resolution rather than by the natural state of the flow (Zhou et al., 2014a). Developing parameterization schemes that can adequately blend effects from both resolved and under-resolved scales (so-called scale-aware schemes) has therefore become an important subject in recent years, so that a unified approach can be used for multiscale modeling applications. Some basic research has been devoted to this aspect. In terms of convection parameterization, Huang et al. (2014) developed a subgrid convective cloud parameterization scheme that is suitable for gray-zone resolutions. The new scheme adapts automatically to variations in grid size, and accounts for microphysical processes consistently with resolved clouds. Numerical experiments of an idealized tropical cyclone showed that this new scheme has a positive impact on the intensity and precipitation distribution of tropical cyclones owing to the effect of subgrid clouds on the total diabatic heating. Huang et al. (2018) compared an idealized supercell large-eddy simulation with coarse-resolution simulations. They suggested that the nondimensional subgrid vertical flux of moist static energy varies with the subgrid fractional cloudiness according to a nearly universal function, regardless of the box size. Based on these results, the authors are currently developing a subgrid mass-flux transport parameterization that includes the impact of the subgrid cloud fraction, and will apply it to a convection parameterization scheme (W. Huang et al., 2018, personal communication).
For boundary layer parameterization, Zhang et al. (2018b) developed a 3D turbulent kinetic energy (TKE) subgrid mixing scheme for the Weather Research and Forecasting (WRF) model to address the gray-zone problem in subgrid turbulent mixing parameterization. This scheme combines horizontal and vertical subgrid turbulent mixing into a single energetically consistent framework and is self-adaptive to the mesh spacing, from typical large-eddy simulation resolutions (a few hundred meters) to typical mesoscale resolutions (a few kilometers). For the convective boundary layer, Zhou et al. (2017) examined several conventional horizontal mixing schemes and pointed out that they all have certain limitations. They then used the gradient-diffusion framework for parameterizing turbulent horizontal mixing and obtained improved performance. The proposed schemes are not strictly scale-aware but can be used in the gray zone as a working solution.
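Both lines of work build on the gradient-diffusion (K-theory) closure, in which subgrid fluxes are tied to resolved gradients through an eddy diffusivity derived from the TKE; a generic form (not the specific coefficients of the cited schemes, and with shear production shown for one wind component only) is:

\overline{w'\varphi'} = -K_\varphi \frac{\partial \bar\varphi}{\partial z}, \qquad
K_\varphi = c_k\, l\, \sqrt{e}, \qquad
\frac{\partial e}{\partial t} =
-\overline{u'w'}\frac{\partial \bar u}{\partial z}
+ \frac{g}{\theta_0}\overline{w'\theta'}
+ \frac{\partial}{\partial z}\!\left(K_e\frac{\partial e}{\partial z}\right)
- c_\varepsilon \frac{e^{3/2}}{l},

where e is the TKE, l a mixing length, and c_k, c_ε closure constants. Making l and the horizontal counterpart of K respond to the grid spacing is one practical route to the scale-aware ("gray-zone") behavior described above.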
For the development of convection schemes, another important issue is the parameterization of small-scale in-cloud processes and the reduction of their uncertainty. Wang and Zhang (2014) studied the bulk budgets of in-cloud vertical velocity and evaluated its parameterization for different plumes in the shallow cumulus environment. Their results suggest that shallow convection may require specifically parameterized vertical velocity schemes for different types of plumes. Lu et al. (2016) examined the relations between the entrainment rate and various quantities (e.g., vertical velocity, buoyancy, and turbulent dissipation rate) and suggested that using a combination of multiple variables, rather than a single-variable fit, may lead to a better representation of the entrainment rate.
Furthermore, the importance of incorporating stochastic effects into convection schemes for reducing uncertainty has been recognized. Wang and Zhang (2016) implemented a stochastic deep convection parameterization (Plant and Craig, 2008) in CAM5. In this scheme, the total cloud-base mass flux is a random variable at a given time step and grid point. The ensemble-mean cloud-base mass flux is determined from the deterministic convection scheme using spatially and temporally averaged temperature and moisture profiles. The results showed that when stochastic processes are incorporated, deep convection is suppressed while shallow convection is enhanced, leading to improved cloud and precipitation simulations.
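A minimal sketch of the stochastic ingredient in a Plant-and-Craig-type scheme is given below (illustrative only; the parameter values and the deterministic closure are placeholders, not the CAM5 implementation):

import numpy as np

rng = np.random.default_rng(0)

def stochastic_cloud_base_mass_flux(M_det, m_mean=2.0e7, area=1.0e10):
    """Sample a stochastic cloud-base mass flux for one grid column and time step.

    M_det  : deterministic (ensemble-mean) mass flux per unit area [kg m-2 s-1],
             taken from a conventional closure on smoothed profiles.
    m_mean : assumed mean mass flux of an individual cloud [kg s-1] (placeholder).
    area   : grid-cell area [m2].
    """
    # Expected number of clouds implied by the deterministic closure
    n_expected = M_det * area / m_mean
    # Number of clouds in this realization: Poisson-distributed
    n_clouds = rng.poisson(n_expected)
    # Each cloud's mass flux: exponentially distributed about the mean
    m_individual = rng.exponential(m_mean, size=n_clouds)
    # Total stochastic mass flux per unit area for this column and time step
    return m_individual.sum() / area

# The sampled flux fluctuates about the deterministic value; fluctuations are
# larger when few clouds are expected (small grid cells or weak forcing).
samples = [stochastic_cloud_base_mass_flux(M_det=0.02) for _ in range(5)]
print(samples)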

3.2. Cloud parameterization
In an atmospheric model, the term "cloud scheme" usually denotes the parameterization of cloud physical features (e.g., cloud amount, cloud microphysics), although the dynamical processes are certainly integral parts of the cloud processes in nature. The cloud scheme in a numerical model is important for several reasons, and the relevant studies are briefly described as follows.
First, cloud microphysical processes are directly related to precipitation, which impacts the entire hydrological cycle. Some cloud microphysical parameters (e.g., the effective radii of droplets and cloud ice) also have important implications for cloud radiative properties, which affect the global energy balance. The microphysical scheme generates the local tendency from various microphysical processes for different hydrometeor species. For meso- to cloud-scale models that can explicitly resolve the cloud fraction and provide more appropriate dynamics, the microphysical scheme can be applied in a relatively straightforward manner. For example, Gao et al. (2011) implemented a two-moment bulk microphysical scheme in the WRF model. A new approach for parameterizing heterogeneous droplet activation was developed in this scheme. The scheme performs well in terms of modeling the microphysics of clouds and precipitation. Lin and Colle (2011) proposed a new parameterization of heterogeneous droplet activation. A flexible and self-consistent framework was developed to represent the physical properties of precipitating ice, such as fall speed and density. The scheme has been adopted in the WRF model and shows good performance.
For large-scale models that have a scale separation between convective and stratiform cloud systems, the cumulus cloud parameterization usually adopts a highly simplified microphysical scheme, although more advanced microphysical strategies may also be adopted to improve performance (e.g., Xu et al., 2014b). In contrast, stratiform clouds are usually treated using more comprehensive microphysical schemes. In this regard, Zhao et al. (2017) modified the MG (Morrison and Gettelman, 2008) microphysics of CAM5 by combining ice and snow into one single variable (called total ice). The justification for doing so was that ice and snow are not as well separated in nature as cloud droplets and raindrops, because ice and snow share basically the same growth paths. Moreover, the division between ice and snow in GCMs is quite empirical and artificial, and has been largely treated as a tuning procedure. The single-ice scheme not only reduces the uncertainty in the ice-to-snow autoconversion process (because eight microphysical processes are no longer needed), but also improves tropical high-cloud simulation, with improved shortwave and longwave cloud forcing.
A cloud macrophysical scheme is usually used in large-scale global models for generating the subgrid fractional cloud amount and large-scale condensation, which are important inputs for the microphysical and radiation schemes. Wang et al. (2015c) evaluated three cloud fraction schemes based on numerical simulations from high-resolution cloud-resolving models. Among the investigated schemes, a semiempirical scheme and a probability-density-function-based statistical scheme performed better than a relative-humidity-based scheme. Qin et al. (2018) proposed a diagnostic probability-density-function cloud scheme and tested it in CAM5. The scheme diagnoses the subgrid-scale variance of a combined variable of total water and liquid water potential temperature from the shallow convection and boundary layer turbulence schemes. Compared to the default relative-humidity cloud fraction scheme, the new scheme improved marine low-cloud simulation, especially for shallow cumulus clouds. Ma et al. (2018) implemented an explicit prognostic cloud cover scheme in the GRAPES model. It adds an additional prognostic equation for the cloud fraction in a manner similar to Tiedtke (1993). The cloud fraction is governed by both grid-scale and subgrid-scale tendencies. Their simulations showed improved cloud fraction and radiative fluxes relative to the original diagnostic cloud scheme. Meanwhile, Li et al. (2014a) showed that enhanced stratiform condensation and evaporation can improve the simulation of optically and hydrologically important moisture and cloud properties, which leads to reduced biases in shortwave cloud radiative forcing and rainfall.
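The essence of such statistical (PDF-based) cloud schemes can be stated compactly: a subgrid distribution of total water (or of a combined water/temperature variable s) is assumed, and cloud fraction and condensate follow from the part of the distribution exceeding saturation. For a Gaussian assumption with mean \bar{s} and standard deviation σ_s (a textbook form, not necessarily the exact variable used in the cited scheme):

C = \int_{q_{\mathrm{sat}}}^{\infty} P(s)\,ds
  = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{q_{\mathrm{sat}} - \bar{s}}{\sqrt{2}\,\sigma_s}\right),
\qquad
\bar{q}_c = \int_{q_{\mathrm{sat}}}^{\infty} \bigl(s - q_{\mathrm{sat}}\bigr)\,P(s)\,ds,

where σ_s is diagnosed from the boundary layer and shallow convection schemes rather than prescribed, which is what distinguishes this approach from a simple relative-humidity threshold.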

3.3. Radiation transfer scheme
The radiation process is one of the best-understood aspects of atmospheric science (Liou, 2002). However, building radiative transfer parameterizations remains a challenging task because the primary focus is to achieve a balance between accuracy and efficiency. The traditional two-stream approximation used in most weather and climate models has acceptable accuracy in a clear-sky atmosphere, but may lead to larger errors under cloudy-sky conditions. Thus, higher-order schemes may be considered. Zhang et al. (2013a) developed a four-stream doubling-adding discrete ordinates method based on the invariance principle. Zhang and Li (2013) further developed an analytical doubling-adding algorithm for the four-stream spherical harmonic expansion method. These methods could potentially improve the radiation performance of a global model.
Radiative transfer is tightly connected with cloud and aerosol properties (e.g., cloud fraction, optical depth, geometric location), and the treatment of these properties in the radiation code deserves special attention (e.g., Barker et al., 1999). In this regard, Zhang et al. (2014a) implemented the BCC radiative transfer code (BCC-RAD) in the BCC climate model. BCC-RAD is coupled with the Monte Carlo independent column approximation (McICA; e.g., Pincus et al., 2003) to treat subgrid cloud variability. The results showed that the inclusion of a cloud-type-dependent function for the cloud fraction decorrelation length and of the horizontal inhomogeneity of cloud condensate leads to improved shortwave and longwave cloud radiative forcing. Moreover, the treatment of cloud overlap in the radiation module also affects cloud radiation simulations; Zhang and Jing (2016) provided an overview of this issue.
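The idea behind McICA-type treatments of subgrid cloud variability can be illustrated with a minimal subcolumn generator (random overlap only, for brevity; operational codes such as BCC-RAD use generalized overlap with a decorrelation length and pair one subcolumn with each spectral g-point):

import numpy as np

rng = np.random.default_rng(1)

def generate_subcolumns(cloud_frac, n_sub=100):
    """Binary cloud masks for n_sub subcolumns given layer cloud fractions.

    cloud_frac : 1D array of cloud fraction per model layer (0-1).
    Random overlap: each layer is cloudy independently in each subcolumn.
    """
    nlay = cloud_frac.size
    return rng.random((n_sub, nlay)) < cloud_frac[None, :]

cf = np.array([0.0, 0.2, 0.5, 0.3, 0.0])      # example layer cloud fractions
masks = generate_subcolumns(cf)

# Each subcolumn is either clear or fully cloudy in a given layer, so the
# radiation code can be run on homogeneous subcolumns instead of on a
# partially cloudy layer.
print(masks.mean(axis=0))   # recovers ~cf as the number of subcolumns grows
total_cloud_cover = masks.any(axis=1).mean()
print(total_cloud_cover)    # ~1 - prod(1 - cf) under random overlap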

3.4. Super parameterization
The super parameterization (SP; Grabowski and Smolarkiewicz, 1999; Khairoutdinov and Randall, 2001) modeling approach has received growing attention in recent years. For global models, the SP framework provides a more consistent treatment of various subgrid physical processes in a systematic manner, and reduces the artificial and empirical assumptions associated with conventional convection parameterizations. Wang et al. (2015a) extended an earlier SPCAM version (Wang et al., 2011a) with a high-order turbulence closure scheme, so as to improve the simulation of marine boundary layer low clouds. The results showed that when the high-order closure is used, the cloud fraction and cloud radiative forcing over the marine boundary layer region become more realistic. The added value of SPCAM for climate modeling of precipitation over continental East Asia was analyzed in Zhang and Chen (2016). The SP model improves the frequency-intensity spectrum of rainfall, and shows a delay of the afternoon rainfall peak, which is related to more abundant shallow convection before intense rainfall starts. Zhang et al. (2019b) further showed that the nocturnal and early-morning rainfall peaks over East Asia can be simulated more realistically by the SP model provided that the resolvable-scale flow becomes more favorable, highlighting the importance of correctly representing multiscale dynamical and physical effects for the climate simulation of the diurnal cycle of rainfall. The SP approach has also been explored in the regional GRAPES-Meso model. Zhu et al. (2015) developed a 2D cloud-resolving model and used it for constructing SP-GRAPES-Meso. Unlike SPCAM, which adopts a filtered anelastic model for small-domain cloud-scale simulations, SP-GRAPES-Meso uses a simplified compressible model as the embedded cloud model, and the target application is weather forecasting rather than climate modeling. Preliminary tests suggest that the adoption of SP in GRAPES-Meso helps to improve rainfall forecasts.

3.5. Coupling between dynamics and physics
In addition to individual physical schemes, the coupling of dynamics and physics is also key, and has become an increasingly important issue in recent years (e.g., Gross et al., 2018). Normally, the coupling is established during the early stages of model development; suboptimal choices made at that stage may require further adjustments later to optimize model performance. For example, the GRAPES model uses Charney-Phillips vertical staggering, but the model physics still operates on a Lorenz grid. Interpolation is needed to couple dynamics and physics from one grid to the other, which can increase numerical error. Recently, a boundary layer scheme that operates directly on the Charney-Phillips grid was developed for the GRAPES model (Chen et al., 2017). After this modification, the model showed improved forecasting skill owing to the absence of vertical temperature and moisture interpolation. This reminds us that the physics-dynamics coupling procedure should be considered carefully in the early stage of model development, as its errors may overwhelm improvements in other parts of the model. The coupling issue will become even more important when conventional global models extend their resolution into the nonhydrostatic regime, as convective-scale models usually require a coupling approach different from that of coarse-resolution global GCMs (e.g., Giorgetta et al., 2018).

4. Data assimilation
4.1. Variational and ensemble data assimilation
Data assimilation is an integral part of NWP systems. Current data assimilation frameworks may be categorized into two types: variational and ensemble methods (Gustafsson, 2007; Kalnay et al., 2007). Research activities along these two lines are reviewed in this section.
Variational methods have a solid root in many operational centers. Some centers have switched from conventional 3D-Var to 4D-Var, which is computationally more expensive but also more accurate. A 4D-Var system is able to handle flow dependencies that develop during the assimilation window and can assimilate observations at multiple times. At the National Meteorological Center of the CMA, a 3D-Var system in which a dynamic balance equation was defined on isobaric surfaces was developed during the early development stage of GRAPES (Zhuang et al., 2005; Ma et al., 2009). Afterwards, improvements were made to the estimation of the dynamic balance constraints, and the 3D-Var calculations were performed on model levels. In the new formulation, regression coefficients between the streamfunction and dimensionless pressure were used, and the balance relationship between rotational and divergent wind was considered. The experiments revealed that the new scheme brings significant benefits not only to the accuracy of the analysis but also to the performance of forecasts, especially over the tropics (Wang et al., 2012, 2014, 2015b). This improved 3D-Var system, which is the basis of the GRAPES 4D-Var system, was used operationally between January 2016 and June 2018. Recently, a 4D-Var-based system has replaced 3D-Var for operational forecasts. Several new techniques were developed for the 4D-Var system (e.g., Liu et al., 2017b; Zhang and Liu, 2017), including:
(1) An improved computational strategy for the background potential temperature profiles that does not depend on the thickness of the model layer and possesses greater accuracy;
(2) An improved incremental analysis technique that uses multiple outer loops to alleviate strong nonlinear impacts;
(3) A conjugate gradient algorithm based on the Lanczos iterative method to improve the accuracy and efficiency of the minimization procedure of the target function (Liu et al., 2018);
(4) A global tangent linear and adjoint framework for GRAPES-GFS;
(5) A number of linearized model physics schemes to alleviate analysis error;
(6) A digital filter to improve the 4D-Var system;
(7) An improvement of the parallel efficiency.
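For reference, the incremental formulation that underlies such a 4D-Var system minimizes, in each outer loop (item 2 above), a quadratic cost function of the analysis increment δx (a standard textbook form; the GRAPES-specific control variables and preconditioning are described in the cited papers):

J(\delta\mathbf{x}) = \tfrac{1}{2}\,\delta\mathbf{x}^{\mathrm T}\mathbf{B}^{-1}\delta\mathbf{x}
+ \tfrac{1}{2}\sum_{i=0}^{N}
\bigl(\mathbf{H}_i\mathbf{M}_i\,\delta\mathbf{x} - \mathbf{d}_i\bigr)^{\mathrm T}
\mathbf{R}_i^{-1}
\bigl(\mathbf{H}_i\mathbf{M}_i\,\delta\mathbf{x} - \mathbf{d}_i\bigr),
\qquad
\mathbf{d}_i = \mathbf{y}_i - \mathcal{H}_i\bigl(\mathcal{M}_i(\mathbf{x}_b)\bigr),

where B and R_i are the background and observation error covariances, d_i the innovations at observation time i, M_i the tangent-linear model with its linearized physics (items 4 and 5), and H_i the linearized observation operators; the minimization is performed with the Lanczos-based conjugate gradient algorithm of item 3.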
The use of the 4D-Var system leads to improved performance of global medium-range forecasts when compared with the 3D-Var system (Fig. 5), especially for results in the Southern Hemisphere.
Figure5. Anomaly correlation coefficient of global annual mean geopotential height (500 hPa) forecasts by the GRAPES 4D-Var and 3D-Var systems.


Compared with 4D-Var, the ensemble Kalman filter (EnKF) approach (see Houtekamer and Zhang, 2016, for a recent review) is relatively easier to implement and is model independent. This approach also provides optimal initial ensemble perturbations and automatically filters out fast processes. The main disadvantage of the EnKF is that it tends to introduce sampling errors in the estimation of the background error covariance because of the low dimensionality of the ensemble (Kalnay et al., 2007). The EnKF approach has been used to build a hybrid rapid refresh forecasting system (Pan et al., 2018). Liu et al. (2013b) also studied ocean-atmosphere coupled data assimilation based on the ensemble approach. The analysis produced with the EnKF may be dynamically inconsistent and contain unbalanced modes. The incremental analysis update (IAU) is widely used to control these unbalanced modes in EnKF analyses. A 4D-IAU was developed by Lei and Whitaker (2016) to take into account the propagation of the analysis increment during an assimilation window. The method constructs time-varying analysis increments by applying all observations in an assimilation window to state variables at different times during the window. Furthermore, some research has attempted to combine the strengths of 4D-Var and the EnKF. Tian and Feng (2015) proposed a nonlinear least-squares enhanced proper-orthogonal-decomposition-based 4D-Var algorithm for nonlinear ensemble-based 4D-Var. The scheme was further upgraded to reduce its computational and implementation complexity (Tian et al., 2018). For ensemble-based assimilation (and forecasting) systems, a balance between ensemble size and ensemble resolution must be struck given limited computational resources, and the solution usually depends on the specific forecast metric (e.g., Ma et al., 2012; Lei and Whitaker, 2017).
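The basic (perturbed-observation) EnKF analysis step referred to here takes the standard form (generic; localization and inflation details vary between implementations):

\mathbf{x}^{a}_{k} = \mathbf{x}^{f}_{k}
+ \mathbf{K}\bigl(\mathbf{y} + \boldsymbol{\epsilon}_k - \mathcal{H}(\mathbf{x}^{f}_{k})\bigr),
\qquad
\mathbf{K} = \mathbf{P}^{f}\mathbf{H}^{\mathrm T}
\bigl(\mathbf{H}\mathbf{P}^{f}\mathbf{H}^{\mathrm T} + \mathbf{R}\bigr)^{-1},
\qquad
\mathbf{P}^{f} \approx \frac{1}{N_e - 1}\sum_{k=1}^{N_e}
\bigl(\mathbf{x}^{f}_{k} - \bar{\mathbf{x}}^{f}\bigr)\bigl(\mathbf{x}^{f}_{k} - \bar{\mathbf{x}}^{f}\bigr)^{\mathrm T},

where x^f_k is the k-th of N_e forecast members, ε_k a perturbation drawn from the observation error distribution, and K the Kalman gain. Because N_e is small relative to the state dimension, the sample covariance P^f is rank-deficient and noisy, which is precisely the sampling-error limitation noted above and the reason covariance localization is required in practice.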

4.2. Satellite data assimilation
In addition to the development of the data assimilation framework, proper assimilation of satellite data can lead to improved forecasting skill. He et al. (2011) examined the impact of assimilating Advanced Microwave Sounding Unit-A (AMSU-A) data over land, and suggested that using a proper surface temperature is quite important for improving radiance and temperature forecasts when assimilating AMSU-A lower sounding channels. Qin et al. (2013) suggested that assimilating Geostationary Operational Environmental Satellite imager radiance data leads to better quantitative precipitation forecasts. Zou et al. (2015) showed that the direct assimilation of satellite microwave and infrared radiance observations with an appropriately high model top can help to improve the forecast track and intensity of tropical storms. Lei et al. (2018) improved the performance of the EnKF in assimilating radiance observations by using model-space localization. Chen et al. (2018) demonstrated that the assimilation of Feng-Yun-3B satellite microwave humidity sounder data over land, based on dynamically retrieved emissivity, has a neutral to slightly positive impact on the five-day forecast. Xia et al. (2019) assimilated aerosol optical depth data from the Fengyun-3A and MODIS meteorological satellites using the Gridpoint Statistical Interpolation 3D-Var data assimilation system. They showed that the assimilation of satellite aerosol optical depth observations can significantly improve the model's aerosol mass prediction skill.
Meanwhile, the assimilation of all-sky satellite radiances has received growing attention in recent years. Some important issues related to the assimilation techniques were reviewed in Li et al. (2016). Chen et al. (2015) extended the WRF data assimilation system to assimilate cloud liquid and ice water path based on the Geostationary Gridded Cloud products, and obtained improved forecast performance in terms of various metrics. Yang et al. (2016) examined the assimilation of all-sky radiances from Advanced Microwave Scanning Radiometer 2 data in forecasts of Hurricane Sandy. They found that the all-sky assimilation case generated better track and central sea level pressure forecasts than the clear-sky case for all forecast lead times, owing to a better analysis in the tropical cyclone core area. Zhang and Guan (2017) suggested that the direct assimilation of cloud-affected AMSU-A data can effectively adjust the structure of the large-scale temperature, humidity and wind analysis fields because more AMSU-A observations are assimilated in the cloudy areas of typhoons.
At the National Meteorological Center, some major improvements have been made in the field of operational satellite data assimilation, including the establishment of a comprehensive data quality control system with improved assimilation methods and an efficient utilization of various satellite datasets. Some important contributions are listed as follows:
(1) The development of a constrained bias correction method for the satellite radiance assimilation in the presence of model biases, especially for the stratosphere where the model biases are relatively large (Han and Bormann, 2016);
(2) The application of a background error estimate and a quality control technique to improve the background error associated with the brightness temperature, which helps to significantly increase the accuracy of analysis in the upper stratosphere in the Southern Hemisphere (Wang et al., 2016);
(3) The assimilation of global navigation satellite radio occultation observations to improve global forecasts, especially for reducing the root-mean-square errors and bias errors in the upper troposphere and stratosphere (Liu and Xue, 2014);
(4) The assimilation of hyperspectral infrared satellite data, microwave temperature data and humidity sounder data (Li and Liu, 2016);
(5) The development of a data control system for assimilating the FY-3C data that helps to improve the forecast skill (Li and Zou, 2014).
For operational analysis and forecasts, research has clearly demonstrated that satellite observations have a positive impact. Zhang et al. (2018a) suggested that refractivity data from Global Navigation Satellite System (GNSS) radio occultation (GNSS-RO), satellite radiances, and atmospheric motion vector data have a significant influence on prediction skill (Fig. 6). Low-impact observation points were located in data-rich areas, whereas high-impact observation points were located in data-poor areas. In the Southern Hemisphere, where conventional observations are sparse, the contribution of GNSS-RO increases with height to more than 60% above 500 hPa. The minimum contribution of AMSU-A measurements there exceeded 20%. For the Northern Hemisphere, the contribution of AMSU-A data was about 15% at all isobaric levels. The results from observing system experiments (Liu and Xue, 2014) and other verification methods (Liu et al., 2016) demonstrate that GNSS-RO data are the most important observation type for the GRAPES data assimilation system, exerting a significant influence on forecasts. These results suggest that using multiple satellite observational datasets makes a clearly positive contribution to NWP skill.
Figure6. Percentage of temperature error variance reduction (vertical axis) compared with the total variance reduction at various isobaric surfaces due to different observation subsets at 0000 UTC throughout June 2014 in (a) the Southern Hemisphere, (b) the equatorial region, and (c) the Northern Hemisphere. TEMP: observations from radiosondes; SYNOP: surface pressure from land stations and ships; AIREP: wind data from aircraft measurements; GNSS-RO: refractivity data of radio occultation from the Global Navigation Satellite System (GNSS); AMV: atmospheric motion vector; and AMSU-A, satellite microwave sounder radiances. [Reprinted from (Zhang et al., 2018a)].



4.3. Rapid updating cycle analysis
For meso- to convective-scale data assimilation, research and operational analysis methods include variational methods (3D-Var and 4D-Var), ensemble methods (EnKF), and hybrids of the two (3DEnVar and 4DEnVar). Doppler weather radar reflectivity and radial wind data, wind profiler data, dense surface observations, and geostationary satellite imagery all play an important role in convective-scale analysis. The quality of forecasts initialized from rapid updating cycle (RUC) analyses is significantly better than that of forecasts initialized from a simple downscaling of larger-scale initial data (Gustafsson et al., 2018). The major challenges for RUC analysis lie in the wide range of spatial and temporal scales involved, the flow-dependent background error covariance, and insufficient observation information and data coverage over the analysis domain.
At the National Meteorological Center, a 3D-Var analysis method with a three-hourly updated analysis cycle is used for the operational regional model GRAPES-Meso. Hydrometeors are initialized by the 3D cloud analysis scheme, which uses Doppler weather radar 3D mosaic reflectivity data, geostationary meteorological satellite data, and surface observation data (Zhu et al., 2017b). The 3D cloud analysis fields are incorporated into GRAPES-Meso by the nudging technique. Recent improvements to the GRAPES-Meso 3D-Var system include:
(1) A more efficient and accurate background error covariance structure for the mesoscale analysis;
(2) A blending method to merge the global analysis with the mesoscale analysis, addressing the problem of accumulated mesoscale analysis errors after a period of continuous cycling (a minimal sketch of the scale-blending idea is given after this list);
(3) Assimilation of intensive observations and geostationary satellite radiance data, such as Fengyun-4A hyperspectral infrared sounding and infrared imager radiance data;
(4) Doppler radar reflectivity data quality control and ingestion of more data in the cloud analysis scheme (Huang et al., 2017).
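As a minimal sketch of the scale-blending idea referred to in item (2) above, one common implementation keeps the large-scale part of the global analysis and adds the small-scale departure of the mesoscale analysis through a low-pass filter. This is illustrative only and not necessarily the scheme implemented in the GRAPES-Meso system; the function name and filter scale are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blend_analyses(global_ana, meso_ana, cutoff_gridpoints=12):
    """Blend a coarse-scale (global) and a fine-scale (mesoscale) analysis.

    The large scales are taken from the global analysis and the small scales
    (total minus low-pass part) from the mesoscale analysis.  Both fields are
    assumed to be on the same regional grid; `cutoff_gridpoints` sets the
    e-folding scale of the Gaussian low-pass filter.
    """
    large_scale = gaussian_filter(global_ana, sigma=cutoff_gridpoints)
    small_scale = meso_ana - gaussian_filter(meso_ana, sigma=cutoff_gridpoints)
    return large_scale + small_scale

# Usage with synthetic 2D temperature analyses (values are illustrative).
ny, nx = 200, 300
global_ana = 280.0 + np.random.randn(ny, nx).cumsum(axis=1) * 0.01
meso_ana = global_ana + 0.5 * np.random.randn(ny, nx)
blended = blend_analyses(global_ana, meso_ana)
```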
Hao et al. (2011) compared 3-h/1-h RUC analyses with the operational cold-start analysis using the GRAPES-RUC system. Three-month experiments for the flood season showed that the RUC analysis assimilates multi-time observations more effectively, so that rapidly changing severe convective weather systems can be better captured. Both threat and bias scores were improved as a result [e.g., Fig. 11 in Hao et al. (2011)].
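For reference, the threat (critical success index) and bias scores quoted above follow from the standard 2 x 2 contingency table; the sketch below is a generic implementation with synthetic fields, not the operational verification code.

```python
import numpy as np

def threat_and_bias(forecast, observed, threshold=1.0):
    """Threat score (TS) and frequency bias for precipitation >= threshold (mm)."""
    f = forecast >= threshold
    o = observed >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    ts = hits / (hits + misses + false_alarms)        # 1 is perfect
    bias = (hits + false_alarms) / (hits + misses)    # 1 is unbiased
    return ts, bias

# Example with synthetic 24-h accumulated rainfall fields (mm).
rng = np.random.default_rng(0)
fcst = rng.gamma(shape=0.5, scale=5.0, size=(100, 100))
obs = rng.gamma(shape=0.5, scale=5.0, size=(100, 100))
print(threat_and_bias(fcst, obs, threshold=10.0))
```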
Similar 3D-Var analysis methods combined with cloud analysis schemes and nudging techniques are also used at some regional centers of the CMA, with more frequent updating cycles and higher horizontal analysis resolution than those of the GRAPES-Meso system (Chen et al., 2013). In addition, data-assimilation research and testing activities at regional centers: (i) utilize more advanced data assimilation methods, such as regional 4D-Var, the EnKF ensemble method, or hybrids of the two (Chen et al., 2016b; Zhu et al., 2017a); (ii) include 10-min RUC analysis, nowcasting up to 2 h by extrapolation, and blending with km-scale high-resolution model results in the 2-6-h range for rapidly updated 0-12-h gridded forecasts [e.g., the Rapid Multiscale Analysis and Prediction System of the Beijing Meteorological Center (http://www.bjmb.gov.cn/info/842/3988671.html)]; (iii) include hydrometeor variables initialized by 3D cloud analysis schemes that are consistent with the model cloud physics (Li et al., 2017); and (iv) include extracting and assimilating more hydrometeor information from dual-polarization Doppler radar data (Wang et al., 2018a).

5. Ensemble forecasting
Ensemble forecasting produces a set of forecasts to estimate the range of possible future states of the atmosphere. At the National Meteorological Center of the CMA, development of the GRAPES ensemble forecasting system, comprising both global (GRAPES_GEPS) and regional (GRAPES_REPS) applications, began in 2009.
With regard to global ensemble forecasts, several major achievements have been made in recent years. First, a singular vector (SV)-based initial perturbation method was implemented, coupled with tangent linear and adjoint models of greatly enhanced parallel efficiency. The spatial distribution of the total energy in GRAPES-SV and its components (kinetic and potential energy) shows that GRAPES-SV is able to represent the growth uncertainty associated with baroclinic instability at different levels of the mid- to high-latitude troposphere (Liu et al., 2013a). Second, a dynamical upscaling method for the initial values, from the analysis resolution (0.25°) to the ensemble forecast resolution (0.5°), was developed using a 3D interpolation scheme for pressure based on the hydrostatic balance equation; the results showed that noise in the initial geopotential height and temperature fields could be reduced (Huo et al., 2018).
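A minimal sketch of the hydrostatic-balance constraint on which such a vertical pressure interpolation typically rests is given below; the function and the numbers are illustrative assumptions, not the GRAPES upscaling code.

```python
import numpy as np

RD = 287.04   # gas constant for dry air (J kg-1 K-1)
G = 9.80665   # gravitational acceleration (m s-2)

def adjust_pressure_hydrostatic(p_src, z_src, z_dst, t_layer):
    """Adjust pressure from source height z_src to destination height z_dst (m)
    assuming hydrostatic balance with a layer-mean temperature t_layer (K):
    p_dst = p_src * exp(-g * (z_dst - z_src) / (Rd * T)).  This is the kind of
    balance constraint used when remapping initial pressure fields between grids
    of different resolution (and hence different model orography)."""
    return p_src * np.exp(-G * (z_dst - z_src) / (RD * t_layer))

# Example: surface pressure (hPa) remapped from a 0.25-degree to a 0.5-degree
# orography that is 35 m higher at this point (numbers are illustrative).
print(adjust_pressure_hydrostatic(p_src=1003.2, z_src=120.0, z_dst=155.0, t_layer=288.0))
```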
With these recent achievements, GRAPES_GEPS has achieved better forecasting skill than its predecessor, T639_GEPS. In a set of parallel experiments, the valuable forecast days of GRAPES_GEPS were 0.3, 0.8 and 0.5 days longer than those of T639_GEPS for the Northern Hemisphere, Southern Hemisphere, and East Asia, respectively (Table 1). In terms of the Brier Score of precipitation probability, the results clearly demonstrate that GRAPES_GEPS outperforms T639_GEPS for short-to-medium-range forecasts at different precipitation thresholds (Fig. 7).
Figure 7. Comparison of the Brier Score (lower value indicates better prediction) of precipitation probability produced by GRAPES_GEPS and T639_GEPS over the 10-day forecast period: (a) 24-h accumulated precipitation higher than 0.1 mm; (b) as in (a) but for precipitation higher than 25.0 mm.
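The Brier Score used in Fig. 7 can be computed from the ensemble-derived exceedance probability and the binary observed event; the sketch below is generic, with synthetic data rather than GRAPES_GEPS output.

```python
import numpy as np

def brier_score(ens_precip, obs_precip, threshold=0.1):
    """Brier Score of the probability that 24-h precipitation exceeds `threshold`.

    ens_precip: array (n_members, n_points) of ensemble precipitation forecasts (mm)
    obs_precip: array (n_points,) of observed/analyzed precipitation (mm)
    """
    prob = np.mean(ens_precip >= threshold, axis=0)     # forecast probability
    event = (obs_precip >= threshold).astype(float)     # binary observation
    return np.mean((prob - event) ** 2)                 # 0 is perfect

# Example with synthetic data for a 30-member ensemble at 5000 points.
rng = np.random.default_rng(1)
ens = rng.gamma(0.6, 4.0, size=(30, 5000))
obs = rng.gamma(0.6, 4.0, size=5000)
print(brier_score(ens, obs, threshold=25.0))
```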


For regional ensemble forecast systems (e.g., Zhang et al., 2014b), a multiscale hybrid initial perturbation method and an Ensemble Transform Kalman Filter (ETKF) perturbation method at the model level have been developed (Zhang et al., 2015a). The multi-scale blending (MSB) technique generates initial perturbations by combining small-scale components from the regional ensemble forecast system and large-scale components from the global ensemble forecast system. The regional ensemble forecast system using the MSB and ETKF techniques shows improved performance. The MSB technique has been used operationally in the GRAPES_REPS system since March 2016. A 3D bias correction method has also been developed to reduce forecasting errors (Wang et al., 2018b). Results show that the quality of the ensemble forecast is quite sensitive to systematic model errors. The bias correction method can effectively improve the ensemble forecast skill and reduce forecast error.
Moreover, both the stochastic perturbation of physical tendency (SPPT) and the stochastic kinetic energy backscatter (SKEB) methods have been developed to describe uncertainty in both the GRAPES-GFS and GRAPES-Meso models. The random fields in the SPPT and SKEB schemes were constructed as a first-order Markov process in time with a Gaussian distribution (Yuan et al., 2016). Experimental results of the SPPT scheme for the GRAPES_REPS system indicate that the skill of heavy rainfall forecasts can be improved significantly, and the impacts of SPPT on GRAPES_REPS also depend on the amplitude of the random perturbation field and the temporal correlation scale (Yuan et al., 2016). The SPPT scheme was put into operation in the GRAPES_REPS system in March 2016. The SKEB scheme was employed to estimate the uncertainty of local kinetic energy dissipation rates caused by numerical diffusion, with a stochastic streamfunction forcing constructed to represent these dissipation rates. Experiments with SKEB show that forecasts of atmospheric kinetic energy spectra and the error-spread relationship of GRAPES-GFS are improved, especially in tropical regions (Peng et al., 2019). Meanwhile, some recent studies have employed such stochastic perturbation techniques for local fine-scale forecasts. Based on the Advanced Regional Prediction System (Xue et al., 2000), Qiao et al. (2017) developed a stochastically perturbed parameterization tendency scheme for the diffusion process and suggested that the resultant ensemble forecasts of an idealized supercell match the truth run better. By further perturbing both the microphysics and diffusion processes, Wang et al. (2019) suggested that such techniques help to improve the low-level vortex intensity forecast associated with a tornadic supercell. In general, these studies have indicated the benefits of stochastic physics perturbations in ensemble forecasting, while the physical basis behind such stochastic approaches may need further exploration.
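A minimal sketch of the SPPT idea described above is given below: a spatially correlated Gaussian pattern evolves in time as a first-order Markov (AR(1)) process and multiplicatively perturbs the net physics tendency. The class name, coefficients and clipping are illustrative assumptions and do not reflect the operational GRAPES settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

class SPPTPattern:
    """2D random pattern for SPPT: AR(1) in time, Gaussian-correlated in space."""

    def __init__(self, shape, tau_steps=20.0, length_gridpoints=10.0,
                 stdev=0.3, clip=0.9, seed=0):
        self.rng = np.random.default_rng(seed)
        self.shape = shape
        self.phi = np.exp(-1.0 / tau_steps)          # AR(1) autocorrelation
        self.length = length_gridpoints
        self.stdev = stdev
        self.clip = clip
        self.r = self._noise()

    def _noise(self):
        # Spatially correlated Gaussian noise rescaled to the target stdev.
        white = self.rng.standard_normal(self.shape)
        smooth = gaussian_filter(white, sigma=self.length)
        return self.stdev * smooth / smooth.std()

    def step(self):
        """Advance the pattern one time step; clip to avoid unphysically large
        or sign-reversing multiplicative perturbations."""
        self.r = self.phi * self.r + np.sqrt(1.0 - self.phi**2) * self._noise()
        return np.clip(self.r, -self.clip, self.clip)

# Perturb a (hypothetical) net physics temperature tendency dT/dt [K s-1].
pattern = SPPTPattern(shape=(180, 360))
dT_phys = np.full((180, 360), 2.0e-5)
dT_perturbed = dT_phys * (1.0 + pattern.step())
```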

6. Model evaluation
Model evaluation is an indispensable part of the model development process. It provides key information that helps identify model biases and guides routes toward improving model formulations. Evaluation strategies can be shared between weather and climate modeling. For example, many operational centers routinely perform long-term climate simulations with their NWP models to assess systematic model errors (e.g., Jung et al., 2010), and for climate models, short-term NWP experiments are a helpful approach for diagnosing certain model errors (e.g., Rodwell and Palmer, 2007). Because model evaluation is a broad term, we focus on studies that are most relevant to model development, tuning and performance. Some such studies have already been described in sections 2 and 3, since developing model techniques often involves substantial evaluation efforts. Recent studies that offer new performance metrics are also described below.
As model resolution increases, new verification methods have been employed for NWP models (Casati et al., 2008), paying more attention to the comparison of spatial scales between forecasts and observations. Zhao and Zhang (2018) employed a neighborhood spatial verification method for the evaluation of precipitation forecasts, which gives a more objective and reasonable assessment than the conventional Threat Score. Pan et al. (2017) compared the object-oriented MODE (Method for Object-based Diagnostic Evaluation) method and the neighborhood method with the Threat Score to validate the performance of the ECMWF high-resolution precipitation forecast, and suggested that the new spatial verification methods are more skillful in quantifying the spatial scale at which the model performs better, thus providing information on forecast performance from various perspectives.
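One widely used neighborhood verification measure is the fractions skill score, which compares event fractions within square neighborhoods rather than point-wise matches; the sketch below is a generic implementation and not the specific method of the studies cited above.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fractions_skill_score(forecast, observed, threshold=1.0, window=5):
    """Fractions skill score of precipitation >= threshold within a square
    neighborhood of `window` x `window` grid points (1 is perfect, 0 no skill)."""
    f_frac = uniform_filter((forecast >= threshold).astype(float), size=window)
    o_frac = uniform_filter((observed >= threshold).astype(float), size=window)
    mse = np.mean((f_frac - o_frac) ** 2)
    mse_ref = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

# Example: skill generally increases with neighborhood size for a displaced forecast.
rng = np.random.default_rng(2)
obs = rng.gamma(0.5, 5.0, size=(200, 200))
fcst = np.roll(obs, shift=3, axis=1)          # the same field shifted by 3 points
for win in (1, 5, 11, 21):
    print(win, fractions_skill_score(fcst, obs, threshold=5.0, window=win))
```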
In terms of climate modeling, most studies have focused on evaluating the long-term mean state and climate variability. Zhou et al. (2016) proposed a series of standardized experimental protocols for evaluating and understanding the performance of monsoon simulation and projection, which has been selected as an endorsed program for CMIP6. Moreover, some studies have employed the weather forecast or transposed-AMIP (T-AMIP) approach to quantify the origins of key model errors (e.g., Zhang et al., 2015d; Zhang and Li, 2016; Li et al., 2018). By initializing a climate model with observed/reanalyzed atmospheric states, this approach integrates the model over a relatively short duration but with multiple samples, helping to isolate model errors (especially those from fast moist physics) while avoiding continuous long-term integration. Using the T-AMIP setup, Li et al. (2018) evaluated the model's ability to reproduce the severity of a heavy rainfall event and its dependence on resolution, emphasizing the importance of evaluating rainfall processes for understanding model biases. For model development, simultaneously running multiple model realizations over a limited time duration saves considerable time and computational resources; model developers can quickly grasp the impact of changing model components on model performance and respond accordingly. The ensemble of model outputs also constitutes a constrained climate background, which may help to quantify specific model errors that are sensitive to the large-scale atmospheric state (e.g., Zhang et al., 2015d).
The definition of performance metrics is important for objectively quantifying model performance and providing useful guidance for model evaluation. Several methods for evaluating broad aspects of model performance have recently been proposed. Xu et al. (2016) proposed a vector field evaluation diagram, an extension of the widely used Taylor diagram (Taylor, 2001), for evaluating model performance in simulating vector fields. A more comprehensive integrated multivariable evaluation method that summarizes multiple metrics of model performance across multiple variables was further given by Xu et al. (2017b). These diagnostic metrics condense simulation results and provide a concise assessment of model performance.
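The scalar statistics that a Taylor diagram summarizes (pattern correlation, normalized standard deviation, and centered RMSE), which the vector and multivariable diagrams generalize, can be computed as in the generic sketch below (not the code of the cited studies).

```python
import numpy as np

def taylor_statistics(model, reference):
    """Return (pattern correlation, normalized standard deviation, centered RMSE)
    of a model field against a reference field, i.e., the three quantities
    summarized by a Taylor diagram."""
    m = model - model.mean()
    r = reference - reference.mean()
    corr = np.sum(m * r) / np.sqrt(np.sum(m**2) * np.sum(r**2))
    sigma_ratio = m.std() / r.std()
    crmse = np.sqrt(np.mean((m - r) ** 2)) / r.std()
    return corr, sigma_ratio, crmse

# Example with a synthetic reference field and a noisy "simulation" of it.
rng = np.random.default_rng(3)
ref = np.sin(np.linspace(0, 6 * np.pi, 1000))
sim = ref + 0.3 * rng.standard_normal(ref.size)
print(taylor_statistics(sim, ref))
```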
Fine-scale rainfall features, such as diurnal variation, the frequency-intensity structure and duration-related characteristics, provide new objective metrics for evaluating model performance in terms of rainfall-related processes. Yu et al. (2014) summarized recent progress in studies of the diurnal variation of precipitation over contiguous China and transformed these features into useful metrics for model evaluation. Yuan (2013) employed diurnal variation metrics to evaluate CMIP5 simulations and suggested that present models have large errors in simulating the diurnal cycle. Lu et al. (2017) evaluated the diurnal variation of precipitation in the Beijing RUC forecast system and suggested that modifying near-surface temperature may improve model performance. Xu et al. (2017a) analyzed the diurnal cycle of precipitation with GRAPES-Meso at convection-permitting resolutions and suggested that the explicit simulation of cloud and precipitation processes is a key reason for the improved precipitation forecast. Li and Yu (2014) proposed a new method for evaluating rainfall frequency-intensity distributions in a linear manner: a two-parameter double exponential function is fitted to the hourly rainfall observations at each station, capturing key information on the rainfall frequency-intensity spectrum. This method reveals the model's ability to simulate rainfall frequency-intensity relations (Li et al., 2015a). As shown in Fig. 8, the benefit of increasing horizontal resolution from T42 to T266 for the frequency-intensity relation is clearly revealed by this metric, and the two-parameter space effectively extracts the key information in the frequency-intensity spectrum. Yu et al. (2015a) defined a regional rainfall event (RRE) and a regional rainfall coefficient (RRC) to quantify the spatiotemporal variation of rainfall events. The RRC helps to explore the spatiotemporal organization and evolution of RREs more completely. The method has been used to compare hourly rainfall between station observations and satellite products over east-central China (Chen et al., 2016a), and can potentially be used for objectively evaluating operational forecasts.
Figure 8. A rainfall performance metric used to evaluate the frequency-intensity simulation of a global model: (a) the averaged double logarithm of precipitation frequency distributed by intensity over East Asia from TRMM (black), T42 (red), T106 (green), and T266 (blue); (b) the distribution of averaged parameter values (α, β) over East Asia (solid circles), the plain area of eastern China (right-half solid circles) and the steep terrain area in the southeastern part of the Tibetan Plateau (left-half solid circles) in the α-β space. Black, red, green, and blue circles represent TRMM, T42, T106, and T266, respectively. [Reprinted from Li et al. (2015a)].
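The sketch below illustrates the general idea of fitting a two-parameter curve to a station's rainfall frequency-intensity spectrum. The double-exponential form used here, F(I) = exp[-exp(α + βI)], is an assumption chosen so that the double logarithm of frequency varies approximately linearly with intensity; the exact formulation of the metric follows Li and Yu (2014), and the data below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def frequency_by_intensity(hourly_rain, bins):
    """Fraction of rainy hours (>= 0.1 mm h-1) falling in each intensity bin."""
    wet = hourly_rain[hourly_rain >= 0.1]
    counts, _ = np.histogram(wet, bins=bins)
    return counts / counts.sum()

def double_exponential(intensity, alpha, beta):
    # Assumed illustrative form: frequency decays double-exponentially with
    # intensity, so log(-log(frequency)) varies linearly with intensity.
    return np.exp(-np.exp(alpha + beta * intensity))

# Synthetic hourly rainfall series standing in for a station record.
rng = np.random.default_rng(4)
rain = rng.exponential(scale=2.0, size=50000)
bins = np.arange(0.1, 30.1, 1.0)
freq = frequency_by_intensity(rain, bins)
centers = 0.5 * (bins[:-1] + bins[1:])
mask = freq > 0
params, _ = curve_fit(double_exponential, centers[mask], freq[mask], p0=(-1.0, 0.1))
print("fitted (alpha, beta):", params)
```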


Model parameter optimization is often related to evaluation as a way to quantify and reduce simulation errors. Conventional parameter tuning is often regarded as a gray art that relies largely on subjective judgment, which has inspired the use of objective tuning techniques. A multiple very fast simulated annealing method was used by Yang et al. (2012) and Zou et al. (2014) to obtain optimal parameters for convection schemes. Alternative methods are also available (e.g., Yang et al., 2013; Guo et al., 2015), and the selection of parameters can be formulated in an automatic way (e.g., Zhang et al., 2015b). These methods typically select parameter values based on skill scores for certain aspects of the simulation so as to minimize the simulation bias; in general, a fine-tuned model can be improved in those aspects. Moreover, such an approach may also be viewed as a sensitivity analysis tool for understanding the uncertainty associated with model parameters (e.g., Yang et al., 2013; Guo et al., 2014), helping to facilitate the development of various modeling components.
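A bare-bones sketch of the simulated-annealing idea behind such objective tuning is shown below; the cooling schedule, cost function and parameter bounds are illustrative assumptions, not the multiple very fast simulated annealing configuration of the cited studies (in practice the cost would involve running the model and scoring it against observations).

```python
import numpy as np

def very_fast_simulated_annealing(cost, x0, bounds, n_iter=2000, t0=1.0, c=1.0, seed=5):
    """Minimal simulated-annealing loop for model parameter tuning.

    cost   : callable mapping a parameter vector to a scalar skill-based error
    x0     : initial parameter vector
    bounds : (n_params, 2) array of lower/upper parameter limits
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    x, fx = np.asarray(x0, dtype=float), cost(x0)
    best_x, best_f = x.copy(), fx
    for k in range(1, n_iter + 1):
        temp = t0 * np.exp(-c * k ** (1.0 / len(x)))       # VFSA-style cooling
        step = temp * (bounds[:, 1] - bounds[:, 0]) * rng.uniform(-1, 1, size=len(x))
        x_new = np.clip(x + step, bounds[:, 0], bounds[:, 1])
        f_new = cost(x_new)
        # Accept improvements always; accept worse states with Metropolis probability.
        if f_new < fx or rng.random() < np.exp(-(f_new - fx) / max(temp, 1e-12)):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x.copy(), fx
    return best_x, best_f

# Toy example: tune two "convection parameters" against a synthetic error surface
# whose minimum sits at (0.3, 1.2).
cost = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 1.2) ** 2
print(very_fast_simulated_annealing(cost, x0=[0.8, 2.0], bounds=[[0.0, 1.0], [0.5, 3.0]]))
```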

7. Summary and outlook
We have reviewed recent progress in numerical atmospheric modeling in China from the perspective of modeling system components. In recent years, important advances have been achieved in improving existing model formulations, developing advanced data assimilation techniques, optimizing parameterization schemes, better understanding model behavior, assessing the strengths and weaknesses of model performance, and exploring new dynamical cores for the next generation of global models. The status quo and future trends are summarized as follows.
With regard to the dynamical core, the importance of the dynamical formulation for model performance has been well recognized. A change in certain dynamical formulations (e.g., the transport term) may have a substantial impact on simulated clouds and precipitation, and the dynamical core itself may lead to markedly different physical performance. These results suggest that the dynamical core should not be treated as a black box when one tries to understand the reasons behind model flaws. Furthermore, there is a trend for model development to switch to quasi-uniform grids to better benefit from massively parallel computers, which are indispensable for global high-resolution modeling in the nonhydrostatic regime. Owing to local grid irregularities, quasi-uniform grids generally require special treatment in terms of the discretization technique and its technical implementation, so such a switch should not come at the cost of model performance. This has inspired several modeling groups to take important steps in developing different types of quasi-uniform grid models. Because the issue of model scalability, in adapting to the evolution of supercomputer architectures, is of much greater consequence, the development of nonhydrostatic models on quasi-uniform grids may be an important effort along this trend.
In terms of physical parameterization, many important advances have been achieved in improving convection parameterizations, scale-aware parameterizations, small-scale in-cloud parameterizations, stochastic parameterizations, cloud macrophysics and microphysics parameterizations, SPs, high-order radiative transfer formulations, and treatments of cloud and aerosol radiative properties. These are fundamental issues and deserve continuous improvement. As model resolution continues to increase, developing scale-aware parameterizations and combining these physical processes into a more unified framework will become increasingly important for atmospheric modeling. The interaction between model physics and dynamics also deserves more attention, as errors in this coupling may overwhelm improvements made elsewhere. As the scale separation between dynamics and physics becomes increasingly blurred in future global high-resolution models, tighter collaboration among developers with different backgrounds will be instrumental.
In the field of data assimilation, progress has been made at both operational and research centers. A 4D-Var system is now operationally used with the GRAPES model, and its improvement over the 3D-Var system is promising. Achievements have also been made in satellite data assimilation, including several improved techniques. For regional fine-scale forecasting, RUC analysis has been developed with improved performance, owing to more frequent updating cycles and higher resolution. Developing new techniques for optimally utilizing the various satellite observations within advanced assimilation systems is a particularly essential direction for future data assimilation work. Furthermore, ensemble forecasting plays an increasingly important role in both global and regional NWP systems. Progress has been made in fundamental techniques, such as singular vector-based initial perturbation methods, multiscale hybrid initial perturbation methods, stochastic perturbation of physical tendency methods, and stochastic kinetic energy backscatter methods. These techniques have proven beneficial for reducing forecast errors and improving forecast skill.
With regard to model evaluation, novel performance metrics have been proposed to quantify various aspects of model biases. These metrics concisely summarize the strengths and weaknesses in modeling skill and have been used to evaluate model performance in both weather forecasting and climate modeling applications, helping users and developers better understand and improve models. Moreover, by utilizing standardized and well-designed experimental protocols, the model evaluation procedure can be performed in a dynamic way. Such strategies typically require both sensitivity experiments and performance diagnostics to better understand model behavior. In the future, these procedures should be formulated in a more standardized and concise fashion, serving as routine protocols for model evaluation.
Finally, it is suggested that development and evaluation procedures cannot be separated; they should work together and form a closed loop. Each incremental advance in the development stage should be accompanied by a detailed assessment and documentation to understand the isolated impact of specific technical changes. A hierarchy of models would be helpful in this regard, because knowledge gained from simplified model configurations can inform decisions and choices in a more complex environment. The development process should iterate through multiple such loops, gradually advancing model performance.
