2.1. Horizontal and vertical discretization
A modern atmospheric dynamical core may be separated into two parts: (i) the basic dynamical component that solves the atmospheric governing equations (hydrostatic or nonhydrostatic, shallow or deep atmosphere), which determine the evolution of mass, momentum and energy; and (ii) a passive transport component that handles the advection of moisture and other passive tracer species. The basic dynamics also contain many transport terms (e.g., mass, momentum, heat), which are tightly connected with wave-like phenomena. Hence, the transport solver usually has an important impact on model performance. Transport methods are usually categorized into Lagrangian and Eulerian types based on the specification of the flow field. The (semi-)Lagrangian approach solves the material time derivative equation by tracking the fluid trajectory. It is able to maintain numerical stability at large Courant numbers but requires nontrivial effort to meet the conservation laws. The Eulerian approach solves the local time derivative equation, and the time step is usually limited by the Courant number, but the flux-form Eulerian scheme automatically guarantees conservation of mass. In practical modeling, the choice between them is usually a tradeoff depending on the primary focus and the target applications.

A semi-Lagrangian transport method (Bermejo and Conde, 2002) is used by the GRAPES model, which was developed at the National Meteorological Center of the CMA. The original semi-Lagrangian scheme effectively avoids the time step restriction in the polar regions, but it does not conserve mass or any mass-weighted quantity. The scheme also does not resolve well the sharp gradients of the moisture field, and thus may adversely impact model accuracy. To improve model performance, Shen et al. (2011) and Wang et al. (2011b) replaced the original semi-Lagrangian scheme with a piecewise rational method advection scheme. This approach reduces the numerical errors in both idealized transport tests and realistic modeling experiments. The results associated with water vapor distribution and heavy rainfall forecasting were improved by this scheme due to a more accurate representation of moisture transport. Su et al. (2013) further provided a flux-form implementation of this scheme for solving the moisture transport equation in GRAPES.
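To make the conservation property of flux-form schemes concrete, the following minimal one-dimensional sketch (in Python, using a first-order upwind flux purely for illustration, not the piecewise rational or TSPAS schemes discussed here) shows that, because each flux leaves one cell and enters its neighbor, the domain-integrated tracer mass is preserved to round-off.

# Illustrative 1D flux-form (finite-volume) advection step with a first-order upwind flux.
# Because each cell's update is the difference of the fluxes at its two edges, the
# domain total of q*dx is conserved to machine precision on a periodic domain.
import numpy as np

def upwind_flux_form_step(q, u, dx, dt):
    """Advance cell-averaged tracer q one step with a constant wind u > 0."""
    flux = u * q                              # flux leaving each cell through its right edge
    dqdt = -(flux - np.roll(flux, 1)) / dx    # divergence of the edge fluxes
    return q + dt * dqdt

q = np.zeros(100); q[40:60] = 1.0             # square wave on a periodic domain
dx, u = 1.0, 1.0
dt = 0.5 * dx / u                             # Courant number 0.5
total_before = q.sum() * dx
for _ in range(200):
    q = upwind_flux_form_step(q, u, dx, dt)
print(total_before, q.sum() * dx)             # identical up to round-off: mass is conserved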
For Eulerian transport schemes, the extremely short zonal grid distance near the poles restricts their application in global models based on the regular latitude-longitude or Gaussian grid. Several methods that can alleviate the zonal Courant number restriction have been proposed (e.g., Leonard et al., 1996; Lin and Rood, 1996). In Zhang et al. (2013c), a leaping-point difference variant of the two-step shape-preserving advection scheme (TSPAS; Yu, 1994) was established. The scheme helps to alleviate the time step restriction and does not create computational difficulties in the polar regions, as confirmed by both idealized transport tests and realistic climate modeling tests. Long-term simulation results (e.g., Yu et al., 2015b; Zhang and Li, 2016) have demonstrated the robust performance of this modified TSPAS when used in a global model (Community Atmosphere Model, version 5; CAM5). Moreover, CAM5 with TSPAS shows improved performance for summer rainfall simulations at the southern edge of the Tibetan Plateau (Yu et al., 2015b). As shown in Fig. 3, artificial rainfall amounts (Fig. 3a) at the steep southern edge of the Tibetan Plateau are reduced in the model when TSPAS is activated (Fig. 3b). The reason may lie in the restriction of moisture transport across multiple grid cells within a single time step, which can easily occur in the original semi-Lagrangian scheme. This transport scheme has been implemented in the Chinese Academy of Meteorological Sciences climate system model (Rong et al., 2018), which will participate in CMIP6.
Figure 3. Differences in summer (June-July-August) mean precipitation amount (shading; units: mm d^-1) (a) between CAM5 and TRMM 3B42 and (b) between M-CAM5 (with TSPAS) and CAM5. Black contours indicate orographic height (shown for 200, 500, 2000 and 4000 m). [Reprinted from Yu et al. (2015b)].
Another research activity associated with the transport solver is the development of fully Lagrangian advection schemes (e.g., Dong and Wang, 2012; Dong et al., 2014; Dong et al., 2015). Fully Lagrangian schemes do not discretize space, as done by a finite-difference/volume scheme. Instead, these schemes discretize the total fluid mass into a number of small pieces and track the trajectory of each fluid unit. Thus, these schemes do not need to assume an underlying mesh shape. The proposed fully Lagrangian scheme is inherently shape-preserving and shows good performance in transporting fields with sharp gradients. The scheme also produces very little range-preserving unmixing, which is potentially important for transporting chemical species. For practical application of this scheme, the major challenge is to establish a parallel framework, because a fluid unit may travel anywhere on the sphere.
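As an illustration of the fully Lagrangian idea (not the particular schemes cited above, and with a planar solid-body rotation standing in for the spherical flow), the sketch below attaches a fixed mass and mixing ratio to each parcel and only integrates trajectories; conservation and shape preservation then hold by construction.

# Minimal sketch of fully Lagrangian transport: the fluid is discretized into parcels that
# each carry a fixed mass and mixing ratio, and only the parcel positions are integrated.
# Total tracer mass is conserved trivially, and parcel values never leave their initial range.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, (1000, 2))          # parcel positions in a 2D plane
mass = np.full(1000, 1.0 / 1000)               # fixed mass per parcel
q = np.where(x[:, 0] > 0.0, 1.0, 0.0)          # sharp tracer front carried by the parcels

def velocity(x):
    """Solid-body rotation about the origin (a stand-in for the resolved wind)."""
    return np.stack([-x[:, 1], x[:, 0]], axis=1)

dt = 0.01
for _ in range(314):                           # roughly half a rotation
    k1 = velocity(x)                           # second-order Runge-Kutta trajectory step
    k2 = velocity(x + 0.5 * dt * k1)
    x = x + dt * k2

print((mass * q).sum())                        # unchanged: tracer mass is conserved
print(q.min(), q.max())                        # still within [0, 1]: no over/undershoot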
In the vertical direction, the atmospheric equations may be cast into either height-based or mass-based governing equations (Laprise, 1992), and some alternative choices are also available (e.g., Bleck, 1974; Randall et al., 2000). For nonhydrostatic modeling, height-based and mass-based coordinates are physically similar, although the numerical techniques can differ. Mass-based coordinates are a natural extension of the pressure-based coordinates that have been used by most hydrostatic models, and their applications in nonhydrostatic modeling are also widespread. The Advanced Regional Eta-coordinate Model (Yu and Xu, 2004), which was originally a hydrostatic primitive equation model, was upgraded to include nonhydrostatic effects using a mass-based coordinate (Cheng et al., 2018). The nonhydrostatic formulation is able to reproduce the hydrostatic solutions when nonhydrostatic effects are negligible. This is a desirable feature for simulating large-scale flow because the physical constraints demand hydrostatic balance. Yang et al. (2015) implemented a hybrid finite-element finite-difference vertical discretization for a global nonhydrostatic spectral model based on a mass coordinate. In this approach, a finite-difference scheme that satisfies the set of constraints is applied to the linear components, and a cubic finite-element scheme with high-order accuracy is applied to the nonlinear components. A deep-atmosphere extension of this model is further described in Yang et al. (2017).
At the bottom, the model surface is usually arranged to follow the topography, and the coordinate levels are gradually transformed to pure pressure or pure height levels at the model top. Such hybrid coordinates have been widely adopted because they alleviate the rise and fall of the pure sigma coordinate at upper levels. Li et al. (2012) compared several types of hybrid coordinates for GRAPES, which uses a height-based terrain-following vertical coordinate. The results suggested that, among the four selected coordinates, the one with the minimal rise and fall at upper levels leads to the optimal performance. Zhang et al. (2015c) implemented a height-based hybrid vertical coordinate (Klemp, 2011) in the GRAPES model. The scheme reduces numerical errors associated with the pressure gradient force term and leads to improved model performance. Li et al. (2014b) proposed an orthogonal terrain-following coordinate system and tested it using idealized advection tests. This scheme helps to reduce advection errors compared with the corresponding hybrid sigma coordinate, although the convergence of grid lines over orography also restricts the time step and increases numerical errors.
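The following sketch illustrates a generic height-based hybrid (smoothed terrain-following) coordinate of the kind discussed above; the decay function, scale parameter, and numbers are illustrative choices, not the specific GRAPES or Klemp (2011) formulation.

# Illustrative height-based hybrid coordinate: terrain-following at the surface, flat at the
# model top.  The decay function b(zeta) controls how quickly the terrain signal is removed
# with height; a faster decay flattens the upper levels, which is the property the coordinate
# comparisons discussed above are concerned with.
import numpy as np

z_top = 20000.0                                 # model top (m), illustrative value

def hybrid_height(zeta, h, s=5000.0):
    """Physical height of the coordinate surface zeta over terrain of height h."""
    b = np.sinh((z_top - zeta) / s) / np.sinh(z_top / s)   # 1 at the surface, 0 at the top
    return zeta + b * h

h = 3000.0                                      # terrain height (m)
for zeta in (0.0, 5000.0, 10000.0, 20000.0):
    print(zeta, hybrid_height(zeta, h))         # terrain signal decays toward the model top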
2.2. Modeling on quasi-uniform grids
The next generation of weather and climate models will largely depend on massively parallel computers to obtain desirable computational performance. A large amount of model development effort has therefore been directed toward quasi-uniform grids (e.g., Staniforth and Thuburn, 2012). The regular latitude-longitude grid has a well-known polar problem that restricts its parallel efficiency, and this problem worsens as the horizontal resolution increases. Quasi-uniform grids facilitate the use of localized discretization methods for global models that have a high-resolution modeling requirement. Meanwhile, nonhydrostatic effects become significant once the horizontal grid spacing falls below certain thresholds (e.g., <10 km for the atmosphere). Global modeling at such resolutions requires a nonhydrostatic dynamical formulation so that motions at various scales can be better resolved. This section provides a review of research activities in this regard.

For the GRAPES model, a Yin-Yang grid variant has been developed in recent years (Li et al., 2015b). The Yin-Yang grid is a combination of two regular latitude-longitude grids with some overlapping regions (Kageyama and Sato, 2004). Because of the regular and orthogonal line structure, a Yin-Yang grid model can largely inherit the numerical framework of its regular latitude-longitude counterpart (e.g., the semi-implicit semi-Lagrangian technique of GRAPES). The elliptical equation is solved by a generalized conjugate residual solver, and bicubic Lagrangian interpolation is used to provide Dirichlet-type boundary conditions between the subdomains. Long-term integrations under different sea surface temperature conditions were carried out on an aqua planet (Li and Peng, 2018), and the simulations of precipitation and large-scale circulation generally showed reasonable performance. A major challenge when developing a Yin-Yang grid model is that the overlapping regions require special treatment to meet mass conservation requirements (e.g., Peng et al., 2007). However, this problem is not urgent for the Yin-Yang GRAPES model because its semi-Lagrangian transport does not conserve mass.
A multimoment finite-volume technique (e.g., Ii and Xiao, 2007) has been developed for atmospheric applications on quasi-uniform grids in recent years. This technique has been applied to modeling 2D advection problems on the sphere (e.g., Chen et al., 2012; Li et al., 2013b), global nonlinear shallow water waves (e.g., Chen et al., 2014; Li et al., 2015c), and nonhydrostatic dynamics on a 2D transect (Li et al., 2013c). The basic idea of this method (an earlier prototype is also known as the constrained interpolation profile; Yabe et al., 2001; Xiao and Ikebata, 2003) is to use more than one type of prognostic variable (e.g., point values, derivative values, and cell-averaged values; typically two types are used) for model integration. The multimoment finite-volume approach defines both cell-averaged values and point values as independent prognostic variables, whereas conventional finite-volume/finite-difference methods only define the cell-averaged or point value in a single-moment sense. The additional moment increases the local degrees of freedom and helps to enhance the numerical accuracy through high-order local reconstructions. A parallelized multimoment nonhydrostatic atmospheric dynamical core has been constructed recently. The model has both global and regional configurations, and exhibits good performance in several idealized numerical benchmark experiments (X. L. Li, 2019, personal communication). Ongoing efforts aim to build an NWP model based on this dynamical core.
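The essence of carrying two kinds of moments can be illustrated with a single cell: given the two boundary point values and the cell average, a quadratic sub-cell reconstruction is determined entirely locally. The sketch below is illustrative only and is not the full algorithm of the cited schemes.

# Sketch of the multimoment idea: within one cell, carry both the cell average and the two
# cell-boundary point values as independent "moments", and use them to build a quadratic
# sub-cell reconstruction (as in CIP/CSL-type schemes).  A single-moment finite-volume scheme
# would need information from neighboring cells to reach the same local order of accuracy.
import numpy as np

def quadratic_reconstruction(q_left, q_right, q_avg):
    """Coefficients of P(x) = a + b*x + c*x**2 on the unit cell [0, 1] such that
    P(0) = q_left, P(1) = q_right and the cell integral of P equals q_avg."""
    a = q_left
    b = 6.0 * q_avg - 4.0 * q_left - 2.0 * q_right
    c = 3.0 * q_left + 3.0 * q_right - 6.0 * q_avg
    return a, b, c

a, b, c = quadratic_reconstruction(q_left=1.0, q_right=2.0, q_avg=1.4)
x = np.linspace(0.0, 1.0, 101)
p = a + b * x + c * x**2
print(p[0], p[-1], np.trapz(p, x))   # endpoint values exact; integral matches to quadrature error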
A new modeling approach based on an unstructured mesh is currently being explored at the Chinese Academy of Meteorological Sciences, in collaboration with the National Supercomputing Center in Wuxi. The unstructured-mesh approach does not assume the underlying mesh shape and can be used for both global quasi-uniform and regional-refinement applications in a consistent way. The quasi-uniform application starts from a subdivided icosahedron, which is a semistructured geodesic grid that has two types of neighboring relations. The local line segments on an icosahedral grid can be arranged to be either orthogonal (e.g., Heikes and Randall, 1995) or nonorthogonal (e.g., Tomita et al., 2001). The orthogonal approach (referred to as the Voronoi polygon) is used because it provides beneficial properties for staggered finite-volume methods (Weller et al., 2012) and naturally accommodates existing variable-resolution techniques (e.g., Ringler et al., 2008). In Zhang et al. (2018a), a global shallow water modeling framework, which isolates most horizontal components of a 3D model, was developed and tested. The model shows results comparable to reference solutions in various 2D idealized benchmark tests. In particular, the importance of potential vorticity transport for the modeling behavior was described and emphasized in that study. Some numerical methods for solving the transport equation have also been developed and evaluated (e.g., Zhang et al., 2017; Zhang et al., 2018a).
Recently, a global nonhydrostatic dynamical core has been further established on an unstructured mesh (Zhang et al., 2019a). An added value of this model is that its variable-resolution capability can resolve more fine-scale fluid structures without a uniform increase in the global resolution. Figure 4 shows the simulated baroclinic waves using this newly developed global nonhydrostatic framework. In the variable-resolution configuration with resolutions from ~70 km to ~30 km (Fig. 4a), the model is able to resolve the fine-scale structure of the relative vorticity field over the refinement region, better than the quasi-uniform ~60 km result (Fig. 4b) and closer to the ~30 km simulation (Fig. 4c). More importantly, the model does not show evident performance deterioration in the mesh transition zone. Future work should extend the present framework to a fully fledged atmospheric general circulation model (AGCM).
Figure 4. Simulated dry baroclinic waves from a newly developed global nonhydrostatic atmospheric dynamical core using (a) variable resolution from ~60 km to ~30 km, (b) quasi-uniform modeling with ~60 km resolution, and (c) as in (b) but using a resolution of ~30 km. Results are compared for the relative vorticity (units: 10^-5 s^-1) at the model level near 850 hPa after 10 simulation days. The black contours in (a) denote the refinement region centered at (35°N, 180°E). [Reprinted from Zhang et al. (2019a)].
2.3. Impact of the dynamical core on model performance
Once a dynamical core is constructed, it is usually difficult to isolate the impact of the dynamical core itself on model performance, and this impact is sometimes overlooked. Some recent studies have suggested that such an impact is not small. Zhang et al. (2013b) investigated the sensitivity of the simulated climate to two atmospheric dynamical cores coupled with the same model physics: the Institute of Atmospheric Physics (IAP) AGCM and CAM3.0. The authors showed that the model with the IAP core simulated a colder troposphere than that with the CAM3.0 core, reducing the original warm bias of CAM3.0 in the tropical and midlatitude troposphere (using the same model physics at a similar resolution). However, when the two dynamical cores were used in the idealized dry Held-Suarez test, the IAP core simulated a warmer troposphere than the CAM3.0 core. These results suggest that interactive physical processes can change the effect of the dynamical core on climate simulations. Zhang et al. (2015d) compared the simulations of deep stratus clouds over continental East Asia between two CAM5 versions with a grid-point finite-volume core (CAM-FV) and a spectral transform core (CAM-EUL). They showed that CAM-FV tends to produce an artificial low-value center on the eastern periphery of the Tibetan Plateau. This problem, which is not found in CAM-EUL (nor in the low-resolution CAM-FV; Zhang et al., 2014c), leads to a significant deficit in the shortwave cloud radiative forcing. The reason is attributed to artificial low-level divergence around the steep terrain, which creates subsiding motion that limits the generation of stratus clouds. This problem might be related to the artificial response to the better-resolved steep topography in a high-resolution configuration (at least as configured in that study). These findings demonstrate that the dynamical environment produced by the numerical solver can have a strong impact on the simulation of cloud physical features.
3.1. Moist convection and turbulence
Moist convection and turbulence are small-scale atmospheric phenomena. The associated dynamical processes would be resolved provided that the model resolution is high enough. For typical global weather and climate models, the collective effects of these processes on the resolvable scale need to be parameterized. The absence of a parameterization of these effects may result in significant model biases (e.g., Chen et al., 2016c).

Cumulus convection is central to the hydrological cycle, and the cumulus scheme is usually crucial to model performance. Some studies have explored the sensitivity of simulated precipitation to deep convection parameterization. Wu (2012) proposed a mass-flux cumulus parameterization scheme based on a bulk-cloud approach for large-scale atmospheric models. This scheme shows good performance in both single-column and global simulations when used by the BCC climate model (Wu et al., 2014). Wang and Zhang (2013) investigated the impact of different closure and triggering parameterizations of deep convection on single-column simulations of tropical convection by CAM4 and CAM5. Rainfall amounts associated with different precipitation types in the model are sensitive to these choices, but the total precipitation amount may be insensitive because of compensation between different parameterization components. These results highlight the need for multiple datasets to constrain models, as well as for treating different parameterizations as an integrated system. Xu et al. (2014a) applied a modified simplified Arakawa-Schubert scheme to the GRAPES-Meso model, and found that the new scheme improves the prediction of typhoon tracks.
The importance of shallow convection has also been recognized. Shallow convection is more tightly connected with boundary layer processes and has a strong impact on the global hydrological and energy cycles. A shallow convection mass flux scheme has been used in the GRAPES model to replace the original vertical turbulent mixing and diffusion approach (Liu et al., 2015). The results show that the precipitation distribution in the tropics is improved, with reduced spurious precipitation. The improvement is attributed to more reasonable shallow convective heating rates. Wan et al. (2015) improved the trigger function of the shallow convection scheme in the GRAPES-Meso model by relating the trigger function to the distribution of temperature and humidity in the convective surface layer. As more shallow convection is triggered, the model shows better performance in 24-hour forecasts of accumulated precipitation and surface air temperature. Kang et al. (2016) further incorporated a compensating parameterization of cloud cover and hydrometeors from shallow convection based on this scheme. The results exhibit an improvement in cloud cover and surface total solar radiation in a one-month forecast experiment. Furthermore, a recent global climate modeling study also suggested that a better representation of the transition from shallow to deep convection may help to delay the peak time of continental rainfall in the afternoon (Zhang and Chen, 2016), although this study used a completely different subgrid parameterization framework (see section 3.4).
As the grid spacing of global NWP models decreases below ~10 km, the model dynamics needs to resolve nonhydrostatic effects more effectively. Dynamical processes that were under-resolved at coarser resolutions become partly resolved, and more atmospheric energy is captured by grid-scale dynamics. Moreover, below certain resolution thresholds, clouds can be fully represented by the model grid (sometimes in an approximate way), and explicit cloud microphysics will thus interact more tightly with grid-scale dynamics. Meanwhile, research has also pointed out dynamical difficulties in gray-zone modeling of the convective boundary layer, because convection in this regime is artificially governed by the grid resolution rather than the natural state of the flow (Zhou et al., 2014a). Developing parameterization schemes that can adequately blend effects from both resolved and under-resolved scales (so-called scale-aware schemes), so that a unified approach can be used for multiscale modeling applications, has become an important subject in recent years. Some basic research has been devoted to this aspect. In terms of convection parameterization, Huang et al. (2014) developed a subgrid convective cloud parameterization scheme that is suitable for gray-zone resolutions. The new scheme adapts automatically to variations in grid size, and accounts for microphysical processes consistently with resolved clouds. Numerical experiments of an idealized tropical cyclone showed that this new scheme has a positive impact on the intensity and precipitation distribution of tropical cyclones owing to the effect of subgrid clouds on the total diabatic heating. Huang et al. (2018) compared an idealized supercell large-eddy simulation with coarse-resolution simulations. They suggested that the nondimensional subgrid vertical flux of moist static energy may vary as a function of the subgrid fractional cloudiness, regardless of the box size. Based on these results, these authors are currently developing a subgrid mass flux transport parameterization that includes the impact of the subgrid cloud fraction, and will apply it to a convection parameterization scheme (W. Huang et al., 2018, personal communication).
For boundary layer parameterization, Zhang et al. (2018b) developed a 3D turbulent kinetic energy subgrid mixing scheme for the Weather Research and Forecasting (WRF) model to address the gray-zone problem in the subgrid turbulent mixing parameterization. This scheme combines horizontal and vertical subgrid turbulent mixing into a single energetically consistent framework and is self-adaptive to the variation in mesh spacing from typical large-eddy simulation resolutions (a few hundred meters) to typical mesoscale resolutions (a few kilometers). For the convective boundary layer, Zhou et al. (2017) examined several conventional horizontal mixing schemes and pointed out that they all have certain limitations. They then used the gradient-diffusion framework for parameterizing the turbulent horizontal mixing and obtained improved performance. The proposed schemes are not strictly scale-aware but can be used in the gray zone as a working solution.
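Purely as an illustration of the scale-adaptive idea (and not the formulation of either scheme cited above), the sketch below blends an LES-type and a mesoscale-type subgrid flux with a weight that depends on the ratio of the grid spacing to the boundary layer depth; the weight function and all numbers are hypothetical.

# Illustrative sketch of a scale-adaptive (gray-zone) blending of subgrid mixing: the weight
# tends to 0 in the LES limit (dx much smaller than the boundary layer depth) and to 1 in the
# mesoscale limit (dx much larger), with a smooth transition across the gray zone.
def mesoscale_weight(dx, pbl_depth):
    """0 when dx << pbl_depth (LES limit), approaching 1 when dx >> pbl_depth."""
    r = dx / pbl_depth
    return r * r / (1.0 + r * r)

def blended_mixing(flux_les, flux_meso, dx, pbl_depth=1000.0):
    """Blend LES-type and mesoscale-type subgrid flux estimates according to grid spacing."""
    w = mesoscale_weight(dx, pbl_depth)
    return (1.0 - w) * flux_les + w * flux_meso

for dx in (100.0, 1000.0, 10000.0):            # LES, gray-zone, and mesoscale grid spacings
    print(dx, blended_mixing(flux_les=0.2, flux_meso=0.05, dx=dx))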
For the development of convection schemes, another important issue is the parameterization of small-scale in-cloud processes and the reduction of their uncertainty. Wang and Zhang (2014) studied the bulk budgets of in-cloud vertical velocity and evaluated its parameterization for different plumes in the shallow cumulus environment. Their results suggest that shallow convection may require specifically parameterized vertical velocity schemes for different types of plumes. Lu et al. (2016) examined the relations between the entrainment rate and various quantities (e.g., vertical velocity, buoyancy, and turbulent dissipation rate) and suggested that using a combination of multiple variables, rather than a single-variable fit, may lead to a better representation of the entrainment rate.
Furthermore, the importance of incorporating stochastic effects in the convection scheme for reducing uncertainty has been recognized. Wang and Zhang (2016) implemented a stochastic deep convection parameterization (Plant and Craig, 2008) in CAM5. In this scheme, the total cloud-base mass flux is a random variable at a given time step and grid point. The ensemble-mean cloud-base mass flux is determined from the deterministic convection scheme using spatially and temporally averaged temperature and moisture profiles. The results showed that when stochastic processes are incorporated, deep convection is suppressed while shallow convection is enhanced, leading to improved cloud and precipitation simulations.
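A minimal sketch of this type of stochastic closure, in the spirit of Plant and Craig (2008) but with purely illustrative parameter values, is given below: the deterministic closure supplies the ensemble-mean cloud-base mass flux, and each realization draws a Poisson number of plumes whose individual mass fluxes follow an exponential distribution.

# Illustrative stochastic cloud-base mass flux: the deterministic closure gives the ensemble
# mean <M>; the realization used at a grid point and time step is the sum of a Poisson number
# of plumes with exponentially distributed individual mass fluxes of mean m_star.
import numpy as np

rng = np.random.default_rng(42)

def stochastic_mass_flux(mean_flux, m_star=2.0e7, area=1.0e10):
    """Random grid-box cloud-base mass flux (kg m-2 s-1) about the deterministic mean."""
    mean_flux_total = mean_flux * area         # mean mass flux over the grid box (kg s-1)
    n_mean = mean_flux_total / m_star          # expected number of plumes
    n = rng.poisson(n_mean)                    # realized number of plumes
    return rng.exponential(m_star, n).sum() / area

mean_flux = 0.01                               # kg m-2 s-1 from the deterministic closure
samples = [stochastic_mass_flux(mean_flux) for _ in range(1000)]
print(np.mean(samples))                        # converges toward the deterministic mean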
3.2. Cloud parameterization
In an atmospheric model, the term "cloud scheme" usually denotes the parameterization of cloud physical features (e.g., cloud amount, cloud microphysics), although the dynamical processes are certainly integral parts of cloud processes in nature. The cloud scheme in a numerical model is important for several reasons, and the relevant studies are briefly described as follows.

First, cloud microphysical processes are directly related to precipitation, which impacts the entire hydrological cycle. Some cloud microphysical parameters (e.g., effective radii of droplets and cloud ice) also have important implications for cloud radiative properties, which affect the global energy balance. The microphysical scheme generates the local tendency from various microphysical processes for different hydrometeor species. For meso- to cloud-scale models that can explicitly resolve the cloud fraction and provide more appropriate dynamics, the microphysical scheme can be applied in a relatively straightforward manner. For example, Gao et al. (2011) implemented a two-moment bulk microphysical scheme in the WRF model. A new approach for parameterizing heterogeneous droplet activation was developed in this scheme, which performs well in modeling the microphysics of clouds and precipitation. Lin and Colle (2011) proposed a new parameterization of heterogeneous droplet activation, together with a flexible and self-consistent framework to represent the physical properties of precipitating ice, such as fall speed and density. The scheme has been adopted in the WRF model and shows good performance.
For large-scale models that assume a scale separation between convective and stratiform cloud systems, the cumulus cloud parameterization usually adopts a highly simplified microphysical scheme, although more advanced microphysical treatments may also be adopted to improve performance (e.g., Xu et al., 2014b). In contrast, stratiform clouds are usually treated with more comprehensive microphysical schemes. In this regard, Zhao et al. (2017) modified the MG (Morrison and Gettelman, 2008) microphysics of CAM5 by combining ice and snow into a single variable (called total ice). The justification for doing so is that ice and snow are not as well separated in nature as cloud droplets and raindrops, because ice and snow share essentially the same growth paths. Moreover, the division between ice and snow in GCMs is quite empirical and artificial, and has largely been treated as a tuning procedure. The single-ice scheme not only reduces the uncertainty in the ice-to-snow autoconversion process (because eight microphysical processes are no longer needed), but also improves tropical high-cloud simulation, along with the shortwave and longwave cloud forcing.
A cloud macrophysical scheme is usually used in large-scale global models to generate the subgrid fractional cloud amount and large-scale condensation, which are important inputs for the microphysical and radiation schemes. Wang et al. (2015c) evaluated three cloud fraction schemes against numerical simulations from high-resolution cloud-resolving models. Among the investigated schemes, a semiempirical scheme and a probability-density-function-based statistical scheme performed better than a relative-humidity-based scheme. Qin et al. (2018) proposed a diagnostic probability-density-function cloud scheme and tested it in CAM5. The scheme diagnoses the subgrid-scale variance of a combined variable of total water and liquid water potential temperature from the shallow convection and boundary layer turbulence schemes. Compared with the default relative-humidity cloud fraction scheme, the new scheme improved marine low-cloud simulation, especially for shallow cumulus clouds. Ma et al. (2018) implemented an explicit prognostic cloud cover scheme in the GRAPES model, which adds a prognostic equation for the cloud fraction in a manner similar to Tiedtke (1993). The cloud fraction is governed by both grid-scale and subgrid-scale tendencies. Their simulations showed improved cloud fraction and radiative fluxes relative to the original diagnostic cloud scheme. Meanwhile, Li et al. (2014a) showed that enhanced stratiform condensation and evaporation can improve the simulation of optically and hydrologically important moisture and cloud properties, leading to reduced biases in shortwave cloud radiative forcing and rainfall.
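For illustration, a minimal diagnostic PDF cloud calculation with an assumed Gaussian subgrid distribution of total water is sketched below; the saturation value and variance are placeholder numbers, and the schemes cited above use their own distributions and inputs from the turbulence and shallow convection parameterizations.

# Illustrative diagnostic PDF cloud scheme with a Gaussian subgrid distribution of total water:
# the cloud fraction is the part of the distribution exceeding saturation, and the condensate
# is the mean saturation excess over that part.
from math import erf, sqrt, exp, pi

def pdf_cloud(qt_mean, qs, sigma):
    """Cloud fraction and mean condensate for a Gaussian total-water PDF."""
    q1 = (qt_mean - qs) / sigma
    cf = 0.5 * (1.0 + erf(q1 / sqrt(2.0)))                         # saturated part of the PDF
    ql = sigma * (q1 * cf + exp(-0.5 * q1 * q1) / sqrt(2.0 * pi))  # mean saturation excess
    return cf, ql

print(pdf_cloud(qt_mean=8.0e-3, qs=9.0e-3, sigma=1.0e-3))   # partially cloudy grid box
print(pdf_cloud(qt_mean=8.0e-3, qs=9.0e-3, sigma=1.0e-4))   # same mean, little variance: clear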
3.3. Radiation transfer scheme
The radiation process is one of the best-understood aspects of atmospheric science (Liou, 2002). However, building radiative transfer parameterizations remains a challenging task because a balance must be struck between accuracy and efficiency. The traditional two-stream approximation used in most weather and climate models has acceptable accuracy in a clear-sky atmosphere, but may lead to larger errors under cloudy-sky conditions. Thus, higher-order schemes may be considered. Zhang et al. (2013a) developed a four-stream doubling-adding discrete ordinates method based on the invariance principle. Zhang and Li (2013) further developed an analytical doubling-adding algorithm for the four-stream spherical harmonic expansion method. These methods could potentially improve the radiation performance in a global model.

Radiation transfer is tightly connected with cloud and aerosol properties (e.g., cloud fraction, optical depth, geometric location), and the treatment of these properties in the radiation code deserves special attention (e.g., Barker et al., 1999). In this regard, Zhang et al. (2014a) implemented the BCC radiative transfer code (RAD) in the BCC climate model. The BCC-RAD model is coupled with the Monte Carlo independent column approximation (McICA; e.g., Pincus et al., 2003) to treat subgrid cloud variability. The results showed that the inclusion of a cloud-type-dependent function for the cloud fraction decorrelation length and of the horizontal inhomogeneity of cloud condensate leads to improved shortwave and longwave cloud radiative forcing. Moreover, the treatment of cloud overlap in the radiation module also affects cloud radiation simulations; Zhang and Jing (2016) provided an overview of this issue.
3.4. Super parameterization
The super parameterization (SP; Grabowski and Smolarkiewicz, 1999; Khairoutdinov and Randall, 2001) modeling approach has received growing attention in recent years. For global models, the SP framework provides a more consistent treatment of various subgrid physical processes in a systematic manner, and reduces the artificial and empirical assumptions associated with conventional convection parameterizations. Wang et al. (2015a) extended an earlier SPCAM version (Wang et al., 2011a) with a high-order turbulence closure scheme in order to improve the simulation of marine boundary layer low clouds. The results showed that when the high-order closure is used, the cloud fraction and cloud radiative forcing over marine boundary layer regions become more realistic. The added value of SPCAM for climate modeling of precipitation over continental East Asia was analyzed in Zhang and Chen (2016). The SP model improves the frequency-intensity spectrum of rainfall, and shows a delay of the rainfall peak in the afternoon, which is related to more abundant shallow convection before intense rainfall starts. Zhang et al. (2019b) further showed that the nocturnal and early-morning rainfall peaks over East Asia can be simulated more realistically by the SP model provided that the resolvable-scale flow becomes more favorable, highlighting the importance of correctly representing multiscale dynamical and physical effects for the climate simulation of the diurnal cycle of rainfall. The SP approach has also been explored in the regional GRAPES-Meso model. Zhu et al. (2015) developed a 2D cloud-resolving model and used it to construct SP-GRAPES-Meso. Unlike SPCAM, which adopts a filtered anelastic model for small-domain cloud-scale simulations, SP-GRAPES-Meso uses a simplified compressible model as the embedded cloud model, and the target application is weather forecasting rather than climate modeling. Preliminary tests suggest that the adoption of SP in GRAPES-Meso helps to improve the rainfall forecast.
3.5. Coupling between dynamics and physics
In addition to individual physical schemes, the coupling between dynamics and physics is also key, and has become an increasingly important issue in recent years (e.g., Gross et al., 2018). Normally, the coupling is established during the early stages of model development; when this early work is suboptimal, further adjustments are needed to optimize model performance. For example, the GRAPES model uses Charney-Phillips vertical staggering, but the model physics still operates on a Lorenz grid. Interpolation is therefore needed to couple dynamics and physics from one grid to the other, which can increase numerical error. Recently, a boundary layer scheme that operates directly on the Charney-Phillips grid was developed for the GRAPES model (Chen et al., 2017). After this modification, the model showed improved forecast skill owing to the absence of vertical interpolation of temperature and moisture. This reminds us that the physics-dynamics coupling procedure should be considered carefully in the early stages of model development, as its errors may overwhelm improvements in other parts of the model. The coupling issue will become more important as conventional global models extend their resolution into the nonhydrostatic regime, because convective-scale models usually require a coupling approach different from that of coarse-resolution global GCMs (e.g., Giorgetta et al., 2018).
4.1. Variational and ensemble data assimilation
Data assimilation is an integral part of NWP systems. Current data assimilation frameworks may be categorized into two types: variational and ensemble methods (Gustafsson, 2007; Kalnay et al., 2007). Research activities along these two lines are reviewed in this section.

Variational methods are firmly rooted at many operational centers. Some centers have switched from conventional 3D-Var to 4D-Var, which is computationally more expensive but also more accurate. The 4D-Var system is able to handle flow dependencies that develop during the assimilation window and can assimilate observations at multiple times. At the National Meteorological Center of the CMA, a 3D-Var system in which a dynamic balance equation was defined on isobaric surfaces was developed during the early development stage of GRAPES (Zhuang et al., 2005; Ma et al., 2009). Improvements were subsequently made to the estimation of the dynamic balance constraints and to performing the calculation on model levels in the 3D-Var system. In the new formulation, regression coefficients between the streamfunction and dimensionless pressure were used, and the balance relationship between rotational and divergent wind was considered. The experiments revealed that the new scheme brings significant benefits not only to the accuracy of the analysis but also to the performance of forecasts, especially over the tropics (Wang et al., 2012, 2014, 2015b). This improved 3D-Var system, which is the basis of the GRAPES 4D-Var system, was used operationally between January 2016 and June 2018. Recently, a 4D-Var-based system has replaced 3D-Var for operational forecasts. Several new techniques were developed for the 4D-Var system (e.g., Liu et al., 2017b; Zhang and Liu, 2017), including:
(1) An improved computational strategy for the background potential temperature profiles that does not depend on the thickness of the model layers and possesses greater accuracy;
(2) An improved incremental analysis technique that uses multiple outer loops to alleviate strong nonlinear impacts;
(3) A conjugate gradient algorithm based on the Lanczos iterative method to improve the accuracy and efficiency of the minimization of the cost function (Liu et al., 2018; a minimal illustrative sketch follows this list);
(4) A global tangent linear and adjoint framework for GRAPES-GFS;
(5) A number of linearized model physics schemes to reduce analysis error;
(6) A digital filter to improve the 4D-Var system;
(7) Improved parallel efficiency.
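Regarding item (3), the inner loop of incremental 4D-Var minimizes a quadratic cost function J(dx) = 1/2 dx^T B^-1 dx + 1/2 (H dx - d)^T R^-1 (H dx - d), whose gradient equation A dx = b (with A = B^-1 + H^T R^-1 H and b = H^T R^-1 d) can be solved by conjugate gradients; the Lanczos formulation generates the same Krylov-subspace iterates while also providing spectral information. The sketch below solves a small stand-in symmetric positive-definite system and is not the GRAPES implementation.

# Minimal conjugate-gradient solver for A x = b with symmetric positive-definite A, as used
# to minimize the quadratic inner-loop cost function of incremental 4D-Var.  The matrices here
# are small random stand-ins for the background- and observation-error operators.
import numpy as np

def conjugate_gradient(A, b, tol=1.0e-10, max_iter=100):
    """Iteratively solve A x = b; the residual equals the negative cost-function gradient."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)                  # SPD, playing the role of B^-1 + H^T R^-1 H
b = rng.standard_normal(5)
print(np.allclose(conjugate_gradient(A, b), np.linalg.solve(A, b)))   # True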
The use of the 4D-Var system leads to improved performance of global medium-range forecasts when compared with the 3D-Var system (Fig. 5), especially for results in the Southern Hemisphere.
Figure 5. Anomaly correlation coefficient of global annual mean geopotential height (500 hPa) forecasts by the GRAPES 4D-Var and 3D-Var systems.
Compared with 4D-Var, the ensemble Kalman filter (EnKF) approach (see Houtekamer and Zhang, 2016 for a recent review) is relatively easier to implement and is model independent. This approach also provides optimal initial ensemble perturbations and automatically filters out fast processes. The main disadvantage of the EnKF is that it tends to introduce sampling errors in the estimation of the background error covariance because of the low dimensionality of the ensemble (Kalnay et al., 2007). The EnKF approach has been used to build a hybrid rapid refresh forecasting system (Pan et al., 2018), and Liu et al. (2013b) also studied ocean-atmosphere coupled data assimilation based on the ensemble approach. The analysis produced with the EnKF may be dynamically inconsistent and contain unbalanced modes. The incremental analysis update (IAU) is widely used to control these unbalanced modes in the EnKF analysis. A 4D-IAU was developed by Lei and Whitaker (2016) to take into account the propagation of the analysis increment during an assimilation window; the method constructs time-varying analysis increments by applying all observations in an assimilation window to state variables at different times during the window. Furthermore, some research has attempted to combine the strengths of 4D-Var and the EnKF. Tian and Feng (2015) proposed a nonlinear least-squares enhanced proper orthogonal decomposition-based 4D-Var algorithm for nonlinear ensemble-based 4D-Var. The scheme was further upgraded to reduce the computational and implementation complexity (Tian et al., 2018). For ensemble-based assimilation (and forecasting) systems, a balance between ensemble size and ensemble resolution must be struck given limited computational resources, and the optimal choice usually depends on the specific forecast metric (e.g., Ma et al., 2012; Lei and Whitaker, 2017).
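A minimal sketch of the IAU idea is given below: the analysis increment is fed into the model as a small forcing spread over the assimilation window rather than being added at once, which suppresses the spurious fast modes excited by an abrupt update. The toy "model" is a linear damping equation used only for illustration; in the 4D variant of Lei and Whitaker (2016) the increment itself also varies in time across the window.

# Illustrative incremental analysis update (IAU): integrate across the window while adding a
# fraction of the analysis increment at every time step instead of the full increment at once.
def forecast_step(x, dt, damping=0.1):
    """Stand-in model: simple linear damping toward zero."""
    return x - damping * x * dt

def iau_window(x_background, increment, window_length=6.0, dt=0.5):
    """Integrate across the window, feeding in the increment gradually."""
    n_steps = int(window_length / dt)
    x = x_background
    for _ in range(n_steps):
        x = forecast_step(x, dt) + increment / n_steps   # fraction of the increment each step
    return x

print(iau_window(x_background=1.0, increment=0.3))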
4.2. Satellite data assimilation
In addition to the development of the data assimilation framework itself, proper assimilation of satellite data can lead to improved forecast skill. He et al. (2011) examined the impact of assimilating Advanced Microwave Sounding Unit-A (AMSU-A) data over land, and suggested that using a proper surface temperature is quite important for improving radiance and temperature forecasts when assimilating AMSU-A lower sounding channels. Qin et al. (2013) suggested that assimilating Geostationary Operational Environmental Satellite imager radiance data leads to better quantitative precipitation forecasts. Zou et al. (2015) showed that the direct assimilation of satellite microwave and infrared radiance observations with an appropriately high model top can help to improve the track and intensity forecasts of tropical storms. Lei et al. (2018) improved the performance of the EnKF in assimilating radiance observations by using model-space localization. Chen et al. (2018) demonstrated that the assimilation of Feng-Yun-3B satellite microwave humidity sounder data over land, based on dynamically retrieved emissivity, has a neutral to slightly positive impact on the five-day forecast. Xia et al. (2019) assimilated aerosol optical depth data from the Fengyun-3A and MODIS meteorological satellites using the Gridpoint Statistical Interpolation 3D-Var data assimilation system, and showed that the assimilation of satellite aerosol optical depth observations can significantly improve model aerosol mass prediction skill.

Meanwhile, the assimilation of all-sky satellite radiances has received growing attention in recent years. Some important issues related to the assimilation techniques were reviewed by Li et al. (2016). Chen et al. (2015) extended the WRF data assimilation system to assimilate cloud liquid and ice water paths based on the Geostationary Gridded Cloud products, and obtained improved forecast performance in terms of various metrics. Yang et al. (2016) examined the assimilation of all-sky radiances from Advanced Microwave Scanning Radiometer 2 data in forecasts of Hurricane Sandy. They found that the all-sky assimilation case generated better track and central sea level pressure forecasts than the clear-sky case at all forecast lead times, owing to a better analysis in the tropical cyclone core area. Zhang and Guan (2017) suggested that the direct assimilation of cloud-affected AMSU-A data can effectively adjust the structure of the large-scale temperature, humidity and wind analysis fields because more AMSU-A observations are assimilated in cloudy typhoon areas.
At the National Meteorological Center, some major improvements have been made in the field of operational satellite data assimilation, including the establishment of a comprehensive data quality control system with improved assimilation methods and an efficient utilization of various satellite datasets. Some important contributions are listed as follows:
(1) The development of a constrained bias correction method for the satellite radiance assimilation in the presence of model biases, especially for the stratosphere where the model biases are relatively large (Han and Bormann, 2016);
(2) The application of a background error estimate and a quality control technique to improve the background error associated with the brightness temperature, which helps to significantly increase the accuracy of analysis in the upper stratosphere in the Southern Hemisphere (Wang et al., 2016);
(3) The assimilation of global navigation satellite radio occultation observations to improve global forecasts, especially for reducing the root-mean-square errors and bias errors in the upper troposphere and stratosphere (Liu and Xue, 2014);
(4) The assimilation of hyperspectral infrared satellite data, microwave temperature data and humidity sounder data (Li and Liu, 2016);
(5) The development of a data control system for assimilating the FY-3C data that helps to improve the forecast skill (Li and Zou, 2014).
For operational analysis and forecasts, research has clearly demonstrated that satellite observations have a positive impact. Zhang et al. (2018a) suggested that the refractivity data from Global Navigation Satellite System (GNSS) radio occultation (GNSS-RO), satellite radiances, and atmospheric motion vector data have a significant influence on prediction skill (Fig. 6). Low-impact observation points were located in data-rich areas, whereas high-impact observation points were located in data-poor areas. In the Southern Hemisphere, where conventional observations are sparse, the contribution of GNSS-RO increases with height to more than 60% above 500 hPa, and the AMSU-A measurements contributed a minimum of more than 20%. In the Northern Hemisphere, the contribution of AMSU-A data was about 15% at all isobaric levels. The results from observing system experiments (Liu and Xue, 2014) and other verification methods (Liu et al., 2016) demonstrate that GNSS-RO data are the most important observation type for the GRAPES data assimilation system, exerting a significant influence on forecasts. These results suggest that using multiple satellite observational datasets clearly makes a positive contribution to NWP skill.
Figure 6. Percentage of temperature error variance reduction (vertical axis) relative to the total variance reduction at various isobaric surfaces due to different observation subsets at 0000 UTC throughout June 2014 in (a) the Southern Hemisphere, (b) the equatorial region, and (c) the Northern Hemisphere. TEMP: observations from radiosondes; SYNOP: surface pressure from land stations and ships; AIREP: wind data from aircraft measurements; GNSS-RO: refractivity data of radio occultation from the Global Navigation Satellite System (GNSS); AMV: atmospheric motion vectors; AMSU-A: satellite microwave sounder radiances. [Reprinted from Zhang et al. (2018a)].
4.3. Rapid updating cycle analysis
For meso- to convective-scale data assimilation, research and operational analysis methods include variational methods (3D-Var and 4D-Var), ensemble methods (EnKF), and hybrids of the two (3DEnVar and 4DEnVar). Doppler weather radar reflectivity and radial wind data, wind profiler data, dense surface observations, and geostationary satellite image data all play an important role in convective-scale analysis. The quality of forecasts initialized from rapid updating cycle (RUC) analyses is significantly better than that of forecasts initialized from a simple downscaling of larger-scale initial data (Gustafsson et al., 2018). The major challenges for RUC analysis lie in the wide range of spatial and temporal scales, the flow-dependent background error covariance, and insufficient observation information and data coverage for the analysis domain.

At the National Meteorological Center, a 3D-Var analysis method with a three-hourly updated analysis cycle is used for the operational regional model GRAPES-Meso. Hydrometeors are initialized by the 3D cloud analysis scheme, which uses Doppler weather radar 3D mosaic reflectivity data, geostationary meteorological satellite data, and surface observation data (Zhu et al., 2017b). The 3D cloud analysis fields are incorporated into GRAPES-Meso by the nudging technique (a minimal sketch of this relaxation is given after the list below). Recent improvements to the GRAPES-Meso 3D-Var system include:
(1) A more efficient and accurate background error covariance structure for the mesoscale analysis;
(2) A blending method to merge the global analysis with the mesoscale analysis, to address the problem of accumulated mesoscale analysis errors after a period of continuous cycling;
(3) Intensive observation and geostationary satellite radiance data assimilation, such as Fengyun 4A hyperspectral infrared sounding and infrared image radiance data;
(4) Doppler radar reflectivity data quality control and ingestion of more data in the cloud analysis scheme (Huang et al., 2017).
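For reference, the nudging (Newtonian relaxation) used to feed the cloud analysis into the model, mentioned before the list above, can be sketched as follows; the relaxation time scale and variable names are illustrative, not the operational settings.

# Illustrative Newtonian relaxation ("nudging"): at each time step the model value is relaxed
# toward the analyzed value on a prescribed time scale tau, rather than being overwritten.
def nudge(model_value, analysis_value, dt, tau=3600.0):
    """One nudging update with relaxation time scale tau (seconds)."""
    return model_value + (analysis_value - model_value) * dt / tau

qc_model, qc_analysis = 0.0, 1.0e-4      # cloud water (kg kg-1): model value vs cloud analysis
for _ in range(12):                      # one hour of 300-s steps
    qc_model = nudge(qc_model, qc_analysis, dt=300.0)
print(qc_model)                          # drawn roughly two-thirds of the way toward the analysis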
Hao et al. (2011) compared 3-h/1-h RUC analyses with the operational cold-start analysis using the GRAPES-RUC system. Three-month experiments for the flood season showed that the RUC analysis is able to assimilate multi-time observation data more effectively, so that rapidly changing severe convective weather systems can be better captured. Both threat and bias scores were improved as a result [e.g., Fig. 11 in Hao et al. (2011)].
Similar 3D-Var analysis methods combined with cloud analysis schemes and nudging techniques are also used at some regional centers of the CMA, with more frequent updating analysis cycles and higher horizontal analysis resolution than those of the GRAPES-Meso system (Chen et al., 2013). In addition, data assimilation research and testing activities at the regional centers: (i) utilize more efficient data assimilation methods, such as the regional 4D-Var method, the EnKF ensemble method, or hybrids of the two (Chen et al., 2016b; Zhu et al., 2017a); (ii) include 10-minute RUC analysis, nowcasting up to 2 h by extrapolation, and blending with km-scale high-resolution model results in the 2-6-h forecast range for rapid updating of 0-12-h grid forecasts [e.g., the Rapid Multiscale Analysis and Prediction System of the Beijing Meteorological Center (http://www.bjmb.gov.cn/info/842/3988671.html)]; (iii) include hydrometeor variables initialized by 3D cloud analysis schemes that are coherent with the model cloud physics (Li et al., 2017); and (iv) include extracting and assimilating more hydrometeor information from dual-polarization Doppler radar data (Wang et al., 2018a).