2.1. Observations and model setup
The reference data of monthly temperature over CORDEX-EA were taken from the Climatic Research Unit Time-Series (CRU TS) 4.03 dataset developed by the University of East Anglia.

Five RCMs were used in the CORDEX-EA experiment: the Hadley Centre Global Environmental Model, version 3, with Regional Atmosphere configurations (HadGEM3-RA); the Fifth-generation Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model (MM5); the Weather Research and Forecasting (WRF) model; the Regional Climate Model, version 4 (RegCM4); and the Yonsei University Regional Climate Model (YSU-RCM). The selected RCMs include three non-hydrostatic models (HadGEM3-RA, MM5 and WRF) and two hydrostatic models (RegCM4 and YSU-RCM) (von Storch et al., 2000; Cha et al., 2008; Giorgi et al., 2012; Baek et al., 2013; Wang et al., 2013). Table 1 lists the detailed configurations of the five RCMs, including dynamics processes, physical parameterization schemes, and spectral nudging (Gu et al., 2018). The CORDEX-EA domain covers East Asia, India, South Asia, and the northern part of Australia (Fig. 1), and the spatial resolution is 50 km (except HadGEM3-RA, whose resolution is 0.44°). The historical experiment (1980–2005) and the future projections under the RCP4.5 and RCP8.5 scenarios (2006–49) are driven by the outputs of HadGEM2-AO (Hadley Centre Global Environmental Model, version 2, with Atmosphere and Ocean and sea ice configurations), whose horizontal resolution is 1.875° × 1.25°. Several studies have confirmed the good performance of HadGEM2-AO in simulating East Asia’s climatology (Martin et al., 2011; Baek et al., 2013; Sperber et al., 2013).
Name | HadGEM3-RA | RegCM4 | MM5 | WRF | YSU-RCM |
Resolution | 0.44° | 50 km | 50 km | 50 km | 50 km |
Dynamics process | Non-hydrostatic | Hydrostatic | Non-hydrostatic | Non-hydrostatic | Hydrostatic |
Convective scheme | Revised mass flux scheme | MIT-Emanuel | Kain–Fritsch II | Kain–Fritsch II | Simplified Arakawa–Schubert |
Land-surface parameterization | MOSES2 | CLM3 | CLM3 | NOAH | NOAH |
Planetary boundary layer | MOSES2 nonlocal | Holtslag | YSU | YSU | YSU |
Spectral nudging | No | Yes | Yes | Yes | Yes |
Research center | Met Office Hadley Centre | International Centre for Theoretical Physics | Seoul National University | NCAR’s Mesoscale and Microscale Meteorology Laboratory | Climate Limited-area Modelling Community |
Table 1. Configurations of the five RCMs in the CORDEX-EA region (after Gu et al., 2018).
Figure 1. Simulation domain and topography of CORDEX-EA and the 10 selected subregions: northwestern China (NW), Tibetan Plateau (TP), northeastern China (NE), northern China (NC), southern China (SC), Korean Peninsula and Japan (KJ), Mongolia (MG), India, Indochina (InC), and Southeast Asia (SEA).
To further correct the biases at smaller spatial scales, 10 subregions were selected: northwestern China (NW; 36°–43°N, 75°–103°E); the Tibetan Plateau (TP; 28°–35°N, 75°–103°E); northeastern China (NE; 42°–55°N, 113°–132°E); northern China (NC; 30°–42°N, 104°–121°E); southern China (SC; 18°–30°N, 104°–122°E); the Korean Peninsula and Japan (KJ; 30°–42°N, 125°–141°E); Mongolia (MG; 43°–51°N, 91°–112°E); India (5°–27°N, 69°–91°E); Indochina (InC; 8°–28°N, 92°–110°E); and Southeast Asia (SEA; 10°S–6°N, 95°–151°E) (Zou and Zhou, 2016; Zhou et al., 2016; Li et al., 2018a; Tang et al., 2018). The reference period (1980–99) and projected period (2030–49) are analyzed to explore future climate change.
2.2. Bias correction methods
As mentioned above, due to deficiencies in RCMs, large biases can be found in the simulations when compared to observations. Thus, bias correction methods based on the observations can be implemented to improve the performance of the RCMs. Considering the relatively short time sequence of the data, three common stationary temperature bias correction methods from the R packages “hyfo” and “downscaleR” are used here; namely, additive scaling, variance scaling, and quantile mapping based on the empirical distribution (Wilcke et al., 2013). To facilitate bias correction and ensemble calibration, temperatures from the RCMs were interpolated to a common 0.5° × 0.5° latitude/longitude grid, following CRU, using bilinear interpolation.
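This regridding step can be sketched as follows. The sketch is illustrative only (the function name and array layout are our own, and the study may have used different tooling); it assumes regular latitude/longitude grids and uses SciPy's `RegularGridInterpolator` for bilinear interpolation:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def regrid_bilinear(field, src_lat, src_lon, dst_lat, dst_lon):
    """Bilinearly interpolate field(src_lat, src_lon) onto the (dst_lat, dst_lon) grid."""
    interp = RegularGridInterpolator((src_lat, src_lon), field, method="linear",
                                     bounds_error=False, fill_value=np.nan)
    glat, glon = np.meshgrid(dst_lat, dst_lon, indexing="ij")
    pts = np.column_stack([glat.ravel(), glon.ravel()])
    return interp(pts).reshape(glat.shape)
```

Points of the 0.5° target grid falling outside the RCM's native domain are returned as NaN rather than extrapolated.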
2.2.1. Additive scaling
The additive scaling method performs bias correction based on the bias between the average simulation and average observation during the calibration period. It is expressed as
$$T^{*}_{\mathrm{val}} = T_{\mathrm{val}} + \left(\overline{T}_{\mathrm{obs,cal}} - \overline{T}_{\mathrm{sim,cal}}\right),$$
where $T_{\mathrm{val}}$ is the simulated temperature in the validation period, $\overline{T}_{\mathrm{obs,cal}}$ and $\overline{T}_{\mathrm{sim,cal}}$ are the observed and simulated average temperatures in the calibration period, and $T^{*}_{\mathrm{val}}$ is the corrected temperature.
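A minimal sketch of this correction, assuming the calibration and validation series are NumPy arrays of monthly temperatures (the function name is illustrative, not taken from the hyfo or downscaleR packages):

```python
import numpy as np

def additive_scaling(sim_val, sim_cal, obs_cal):
    # Shift the validation-period simulation by the calibration-period mean bias.
    return sim_val + (np.mean(obs_cal) - np.mean(sim_cal))
```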
2.2.2. Variance scaling
Building on the additive scaling method, the variance scaling method also corrects the variance of the temperature (Terink et al., 2010), using the following four-step procedure. First, we apply the additive scaling method, which corrects the average of the temperature:
$$T_{1} = T_{\mathrm{val}} + \left(\overline{T}_{\mathrm{obs,cal}} - \overline{T}_{\mathrm{sim,cal}}\right).$$
Then, we shift the mean-corrected temperature to a zero mean:
$$T_{2} = T_{1} - \overline{T}_{1}.$$
Next, we scale the variance of the temperature according to the ratio of the observed to the mean-corrected simulated standard deviation in the calibration period:
$$T_{3} = T_{2}\cdot\frac{\sigma\left(T_{\mathrm{obs,cal}}\right)}{\sigma\left(T_{1,\mathrm{cal}}\right)}.$$
And finally, we add the mean removed in step 2 back to the scaled temperature of step 3:
$$T^{*}_{\mathrm{val}} = T_{3} + \overline{T}_{1}.$$
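The four steps can be sketched as below (an illustrative NumPy sketch under the same array-layout assumptions as before, not the packages' implementation):

```python
import numpy as np

def variance_scaling(sim_val, sim_cal, obs_cal):
    # Step 1: additive (mean) correction of both series.
    shift = np.mean(obs_cal) - np.mean(sim_cal)
    t1_cal = sim_cal + shift
    t1_val = sim_val + shift
    # Step 2: remove the mean of the mean-corrected validation series.
    t2_val = t1_val - np.mean(t1_val)
    # Step 3: scale by the ratio of observed to simulated standard deviation.
    t3_val = t2_val * (np.std(obs_cal) / np.std(t1_cal))
    # Step 4: add the mean back.
    return t3_val + np.mean(t1_val)
```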
2.2.3. Quantile mapping
The quantile mapping (QM) method derives from the empirical transformation developed by Panofsky and Brier (1968). It has been widely used in the bias correction of both GCMs and RCMs (Wilcke et al., 2013; Miao et al., 2016). By contrast with the methods mentioned above, the QM method focuses not only on the mean of the distribution but also on correcting the quantiles of the distribution. The QM method estimates the cumulative distribution function (CDF) from the simulation data in the calibration period and then finds the corresponding percentile values of the model projections. The corrected projections can be derived through inverse CDFs of the observations. The transfer function is shown as follows:
$$T^{*} = F_{O}^{-1}\left(F_{mc}(T)\right),$$
where $F_{mc}$ is the empirical CDF of the simulation in the calibration period, $F_{O}^{-1}$ is the inverse empirical CDF of the observations, and the subscripts O and mc denote the observation and model calibration periods, respectively.
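A simplified empirical QM sketch (illustrative, not the hyfo/downscaleR implementation): the empirical CDF $F_{mc}$ is evaluated via `np.searchsorted` on the sorted calibration simulation, and its inverse $F_{O}^{-1}$ via `np.quantile` on the observations:

```python
import numpy as np

def quantile_mapping(sim_proj, sim_cal, obs_cal):
    # Empirical CDF of the calibration-period simulation, evaluated at each projected value...
    ranks = np.searchsorted(np.sort(sim_cal), sim_proj, side="right") / len(sim_cal)
    # ...mapped through the inverse empirical CDF (quantile function) of the observations.
    return np.quantile(obs_cal, np.clip(ranks, 0.0, 1.0))
```

Note that values outside the calibration range are clipped to the observed extremes; production implementations handle such out-of-range projections with explicit extrapolation rules.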
2.3. Simple multi-model averaging method
Simple multi-model averaging (SMA), which gives each member in the ensemble equal weight, is the most common ensemble post-processing method. For an ensemble of n members, the weight of each member is
$$w_{i} = \frac{1}{n},\quad i = 1, 2, \ldots, n.$$
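The equal-weight ensemble mean amounts to one line (illustrative sketch; the array layout is our own assumption):

```python
import numpy as np

def sma(ensemble):
    # ensemble: array of shape (n_members, n_time); each member carries weight 1/n.
    return np.mean(ensemble, axis=0)
```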
2.4. BMA method
The BMA method was introduced by Raftery et al. (2005) to combine different model forecasts into an ensemble and calibrate the under-dispersion during the ensemble forecasts (Duan and Phillips, 2010; Miao et al., 2013). BMA can be viewed as a post-processing method for producing the forecast probability density function (PDF) of output variables, which is a weighted average of the bias-corrected PDF of each individual ensemble member. The weights reflect the relative performance of each member in the ensemble during the training period. Following the notation in Raftery et al. (2005), the BMA-weighted forecast PDF of variable y is
$$p\left(y \mid f_{1}, \ldots, f_{n}\right) = \sum_{k=1}^{n} w_{k}\, g_{k}\left(y \mid f_{k}\right),$$
where $f_{k}$ is the bias-corrected forecast of the kth ensemble member, $g_{k}\left(y \mid f_{k}\right)$ is the conditional PDF of y given $f_{k}$ (assumed normal and centered at $f_{k}$), and $w_{k}$ is the posterior probability (weight) of member k, with $\sum_{k=1}^{n} w_{k} = 1$.

The mean of the BMA-weighted forecast can be interpreted as a weighted sum of normal distributions with equal variance but centered at the bias-corrected forecasts:
$$E\left(y \mid f_{1}, \ldots, f_{n}\right) = \sum_{k=1}^{n} w_{k}\, f_{k}.$$

The BMA weights $w_{k}$ and the common variance are estimated by maximum likelihood from the training data, using the expectation–maximization (EM) algorithm (Raftery et al., 2005).
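The EM estimation can be sketched as follows. This is a simplified illustration, not the authors' implementation: since the members are already bias corrected, the kernels are taken as normal distributions centered directly on the forecasts (no additional linear bias terms), with a single common variance as in Raftery et al. (2005):

```python
import numpy as np
from scipy.stats import norm

def bma_em(y, f, n_iter=200):
    """EM estimate of BMA weights and common kernel variance.
    y: (T,) training observations; f: (K, T) bias-corrected member forecasts."""
    K, T = f.shape
    w = np.full(K, 1.0 / K)
    sigma2 = np.var(y - f.mean(axis=0)) + 1e-6
    for _ in range(n_iter):
        # E-step: responsibility of member k for observation t.
        dens = norm.pdf(y, loc=f, scale=np.sqrt(sigma2))      # shape (K, T)
        z = w[:, None] * dens
        z /= z.sum(axis=0, keepdims=True)
        # M-step: update the weights and the common variance.
        w = z.mean(axis=1)
        sigma2 = np.sum(z * (y - f) ** 2) / T
    return w, sigma2
```

A member that tracks the observations closely in the training period receives a weight near 1, while a member with a persistent offset is down-weighted toward 0.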
2.5. Model evaluation in spatial simulation
In this study, we conducted the bias correction for each subregion and for each month of the year. In addition, considering the time dependence of the model biases and the relatively short time sequence of historical simulations (only 26 years, from 1980 to 2005), cross validation (Miao et al., 2016) was used to calibrate and validate the performance of the RCMs. For calibration, 20 of the 26 years were randomly selected and the three bias correction methods were applied to obtain the bias correction factors. The remaining 6 years were used for validation, to which the bias correction factors were applied. The sampling method used in cross validation was simple random sampling without replacement, which guarantees that each year has an equal opportunity to be selected. Validation was conducted by comparing the corrected temperatures with the CRU data. We repeated the whole cross-validation process 30 times to overcome the limitation of insufficient sample sizes and to enhance the robustness of the validation results. The performance of the different bias correction methods was also evaluated based on the cross-validation results.

Based on the cross-validation results, for each pixel in each subregion, 30 results were acquired for each method and each month. If at least 27 corrected results agreed well with the CRU data, the corresponding method was considered effective. For all subregions and RCMs, we summed the number of effective pixels and calculated their percentage out of all pixels over the 30 validations. In this way, we analyzed the performance of each method across all regions and RCMs. Further, the temporal average (over 12 months and 6 years) of the validated and observed temperatures was used to calculate the relative decrease in root-mean-square error (RMSE), defined as $\left(\mathrm{RMSE}_{\mathrm{raw}} - \mathrm{RMSE}_{\mathrm{corrected}}\right)/\mathrm{RMSE}_{\mathrm{raw}} \times 100\%$.
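The cross-validation loop can be sketched as below, using additive scaling as the example correction (illustrative only; the dict-of-yearly-arrays layout and function names are our own assumptions, and the actual study evaluated all three methods per subregion and month):

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1980, 2006)            # the 26 historical years

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

def cross_validate(sim, obs, n_rounds=30, n_cal=20):
    """sim, obs: dicts mapping year -> array of monthly values (hypothetical layout).
    Returns the relative RMSE decrease (%) of additive scaling in each round."""
    decreases = []
    for _ in range(n_rounds):
        # Simple random sampling without replacement: 20 calibration years.
        cal = rng.choice(years, size=n_cal, replace=False)
        val = np.setdiff1d(years, cal)
        # Calibration: mean bias over the calibration years.
        bias = np.mean([obs[y] - sim[y] for y in cal])
        # Validation: apply the correction factor to the remaining 6 years.
        raw = np.concatenate([sim[y] for y in val])
        ref = np.concatenate([obs[y] for y in val])
        corrected = raw + bias
        decreases.append(100.0 * (rmse(raw, ref) - rmse(corrected, ref)) / rmse(raw, ref))
    return decreases
```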
Once the most effective bias correction method had been identified, the historical temperature simulations (1980–2005) were corrected for all subregions. Then, the ensemble averaging methods were applied to the bias-corrected temperatures. To validate the performance of the ensemble averaging methods, monthly temperature distributions, interannual variability and Taylor diagrams were used. The Taylor diagram is especially useful for evaluating multiple aspects of complex models (IPCC, 2001). It incorporates three evaluation terms (spatial correlation, centered RMSE and standard deviation) and graphically measures how closely the simulation patterns match the observations (Taylor, 2001).
3.1. Bias correction evaluation
Figure 2 illustrates the spatial pattern of annual average (1980–2005) temperatures of CRU and the driving GCM (HadGEM2-AO), as well as the annual average temperature biases of the driving GCM and the five RCMs. In addition, the annual average temperatures of the five RCMs are shown in Fig. S1 in the electronic supplementary material (ESM). Both the GCM and the RCMs captured the spatial pattern of the observations, with a decreasing south-to-north temperature gradient. However, both the driving GCM and the RCMs generally underestimated the annual average temperature in most regions, especially in the TP region, where the greatest bias exceeded −8°C. The cold biases in the simulations came mainly from December–January–February (DJF), and the biases in June–July–August (JJA) were small (Fig. S2). Several studies have reported similar annual average and seasonal temperature bias patterns (Ham et al., 2016; Guo et al., 2018; Li et al., 2018a; Hui et al., 2019). The cold biases in DJF were remarkable, with the largest bias exceeding −8°C in most RCMs. The only exception was RegCM, which had warm biases exceeding 4°C at high latitudes in CORDEX-EA, consistent with previous studies (Gao and Giorgi, 2017). The RCMs’ performance also varied in JJA temperature simulations. For example, the YSU-RCM model presented a large cold bias in NE and the TP (exceeding −8°C), while the bias was small in the other RCMs.

Figure 2. Spatial distribution of annual average temperatures according to (a) CRU and (b) the HadGEM2-AO GCM. (c–h) Annual temperature biases of (c) the HadGEM2-AO GCM and (d–h) five RCMs during the years 1980–2005.
In addition, compared to the driving GCM, some RCMs improved the skill in simulating the temperature in several regions. For example, WRF and MM5 reduced the biases over the NC, SC and KJ regions, and RegCM and YSU-RCM improved the performance in the India region. However, for the other regions (e.g., TP and NW), the improvement of the RCMs was smaller and the cold biases were even larger than those of the driving GCM. Increased resolution does not always lead to improvement in simulations (Prömmel et al., 2010; Gu et al., 2018), and there are several reasons for this phenomenon. For instance, due to their simplified physical parameterization schemes, longwave radiation is underestimated in RCMs, which limits the heating of the low-level atmosphere (Hui et al., 2019). Also, the overestimation of albedo in RCMs (in the lower boundary conditions) may lead to cold biases (Meng et al., 2018).
Given the large biases in simulating annual and seasonal average temperatures, bias correction is necessary to improve the RCMs’ performance. We calculated the percentage of effective pixels for each method over 30 rounds of validation (Fig. 3). We found that all the methods can effectively improve RCM performance. The percentage of effective pixels at the 25th quantile was higher than at the 50th and 75th quantiles. The QM method outperformed the other two methods at the 25th, 50th and 75th quantiles. This is likely because, although all bias correction methods corrected the biases in the RCMs, their foci differ. For example, the additive scaling method focuses on the mean difference between the calibration model and observation data (Teutschbein and Seibert, 2012; Ezéchiel et al., 2016), while the variance scaling method focuses on the adjustment of variance (Luo et al., 2018). These two methods can adjust the monthly mean values, but they neglect the cumulative distribution of the temperature and thus cannot adjust the quantiles of the simulation. In contrast, the QM method constructs CDFs of the RCMs and adjusts the distributions according to the corresponding distribution of the observation data (Bennett et al., 2014; Singh et al., 2017; Ayugi et al., 2020). Therefore, the QM method can adjust not only the mean value of temperature but also the quantile values.
Figure 3. Comparisons of different bias correction methods at the 25th, 50th and 75th quantiles.
We calculated the RMSE for both raw and bias-corrected temperature data, then the relative decrease in the RMSE for the 10 subregions and five RCMs was calculated based on the 30 rounds of cross validation (Fig. 4). Results showed that all the bias correction methods effectively reduced biases for all RCMs. The QM method was most effective among the methods, with the maximum relative decrease in the RMSE reaching 59.8% (HadGEM3-RA), 63.2% (MM5), 51.3% (RegCM), 80.7% (YSU-RCM) and 62.0% (WRF). For subregions, although all the bias correction methods significantly reduced the biases over most subregions, results varied. For example, in the SEA region, MM5, bias-corrected by the additive scaling method, was worse than the raw simulation. We analyzed the results (Fig. S3) and found that this may be due to the fact that the additive scaling method can only adjust the mean difference between the model values and observations, while for extreme values, the additive scaling method failed (Fang et al., 2015). In the SEA region, for MM5, the additive scaling method narrowed the biases for high temperature but amplified the bias for low temperature when compared to the raw simulations. However, the QM-corrected results almost perfectly fitted the CRU distribution. Several previous studies have also shown that bias correction methods are region-dependent, and the scaling method may have adverse effects in some regions (Berg et al., 2012; Ayugi et al., 2020). Moreover, for almost all subregions (except the MG region as simulated by WRF and MM5), the QM method outperformed the other two methods. We also found that, for the TP and NW regions, where the bias was largest among the subregions, the bias reduction was significant, with the maximum relative decrease in the RMSE reaching 61.5% and 80.7%, respectively (both for YSU-RCM).
Figure 4. Relative decrease in RMSE (%) of three bias correction methods for the 10 subregions for the five RCMs: (a) HadGEM3-RA, (b) RegCM, (c) MM5, (d) WRF, and (e) YSU-RCM.
Figure 5 gives the spatial distribution of the relative decrease in the MAE for the 10 subregions (the QM method performed best, so only the QM results are shown here). The spatial distribution results are consistent with the results in Fig. 4. The bias correction was effective in most regions, especially in the TP, NW, SC and NC subregions, where the MAE was reduced by more than 50% compared to the MAE of the raw model output. For the YSU-RCM model, which had the largest simulation biases, the bias reductions were remarkable: the relative decrease in the MAE was more than 60% for most regions and more than 70% in most parts of China. The spatial distribution results for winter (DJF) and summer (JJA) (Fig. S4) also showed that the bias correction method effectively reduced the bias in seasonal simulations, especially in winter. The correction for summer focused mainly on the southern part of the CORDEX region, including India, InC and SC. In summary, all bias correction methods effectively reduced the biases in the simulations, and the QM method was most effective for almost all subregions and all RCMs. Therefore, the QM method was chosen as the most suitable method to correct the temperature simulations in the subsequent analysis. Figure S5 provides the annual average temperature error distribution of the bias-corrected data using the QM method. The results show that, compared with the raw RCM simulations, the errors were removed remarkably well by the bias correction process, especially for the NE, NC, KJ, SC, InC, SEA and India regions, where the errors were small. However, for the NW, TP and MG regions, although bias correction did decrease the cold biases, some cold biases remained. Furthermore, due to the remarkable cold biases in the NW, TP and MG regions, the bias correction tended to overcorrect the cold biases there, especially for the TP and NW regions in the YSU-RCM model.
Figure 5. Spatial distribution of relative decrease in MAE (%) for annual temperatures using the QM method. Panels (a–e) show results from each of the five RCMs. See Fig. S4 in the ESM for the spatial distribution of relative decrease in MAE (%) for seasons.
3.2. Multi-model averaging based on bias correction
In addition to bias correction, BMA and SMA were used to further narrow the uncertainty in the corrected RCMs. Figure 6 gives the seasonal distribution of the CRU, raw simulated, bias-corrected, SMA-weighted [bias-corrected (BC)] and BMA-weighted (BC) temperatures. Compared to the CRU data, the raw simulations had significant cold biases in winter (DJF), while the biases were small in summer (JJA) over the 10 subregions. This is consistent with the results in Fig. S2 and several previous studies, in which the cold biases were large in winter and small in summer (Ham et al., 2016; Hui et al., 2019). Moreover, RCM performance was region-dependent; more specifically, the RCMs performed well for the regions at high latitudes (e.g., the NE, KJ and MG regions) and badly for regions at low latitudes (e.g., the SEA, TP, NW and InC regions). Several previous studies have analyzed the possible causes of the biases in these regions and suggested that careful configuration of the RCM parameterization schemes could help. For example, according to Hui et al. (2019), the radiation parameterization schemes in RCMs underestimate the monthly longwave upward and downward fluxes throughout the year, especially in cold months over subtropical regions, which leads to significant cold biases there. The cold biases in the TP region may be attributable to the overestimation of upward shortwave radiation and the corresponding overestimation of albedo (Tangang et al., 2015; Chen et al., 2017; Hui et al., 2019; Yin et al., 2020). When SMA and BMA were applied to the bias-corrected temperatures, the seasonal cycles of the raw models were adjusted and fitted the CRU data well; bias correction combined with SMA and BMA significantly improved the performance of the RCMs.

Figure 6. Observed (CRU), raw simulated, bias-corrected, SMA [bias-corrected (BC)] and BMA [bias-corrected (BC)] monthly temperatures of 10 subregions in the validation period.
The model’s ability to capture the real interannual variability is another important performance measure. Here, we used the variance among the 26 historical years as the indicator of interannual variability. Figure 7 gives the interannual variability of the SMA (BC), BMA (BC), SMA (raw) and CRU temperature values. The results show that the RCMs’ ability to capture the real interannual variability varied among subregions. For the NW, TP, NE, MG and SEA regions, the interannual variability of the bias-corrected data was closer to the real interannual variability, but for the other regions, the bias correction narrowed the variability. This may be because the bias correction methods mainly focus on the mean and trend of the data, with less focus on the variance (Ayugi et al., 2020). Although the performances of SMA (raw) and SMA (BC) were similar for most subregions, for the MG region, where the interannual variability was greater, the variability of SMA (BC) was closer to that of CRU. Thus, SMA (BC) was considered the better method. In addition, the results based on SMA (BC) were better than those based on BMA (BC) when compared to the variability of CRU. This is because the objective function of BMA only considers the minimum bias, without adjusting for variance (Raftery et al., 2005; Fragoso et al., 2018). In future studies, we will pay more attention to the variance and consider multi-objective optimization.
Figure 7. Interannual variability for SMA [bias-corrected (BC)], BMA [bias-corrected (BC)], SMA (raw model), and CRU temperature values among the 26 years of the historical period.
The spatial variability statistics of the models in reproducing annual average temperature are shown using Taylor diagrams in Fig. 8. The Taylor diagrams show that the bias correction improved the performance of the RCMs, contributing to a higher spatial correlation and lower normalized standard deviation. Furthermore, the BMA and SMA ensemble results both reduced the uncertainties in simulation with a closer distance to the observation. For some subregions, the performances of BMA and SMA were similar (e.g., in NE, NC, TP and SC). However, for other regions, such as NW, MG and SEA, the BMA method performed better. This is reasonable because the BMA weights are estimated according to the RCMs’ performance in the training period (Duan et al., 2007). A previous study also showed that the BMA method outperformed the SMA method when applied to the CORDEX-EA data (Kim and Suh, 2013). For the correlation coefficient, the improvement rate of the BMA method was between 2% and 31% when compared with individual RCMs. Although the SMA performed better with respect to interannual variability, we focus mainly on the mean and trend in the projection. Thus, we chose the BMA method for the projection to narrow the uncertainties. Although the BMA-related improvement was not substantial, the amount of improvement was reasonable considering the temperature had already been corrected by bias correction methods.
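The three Taylor-diagram statistics can be computed as below (an illustrative sketch over flattened spatial fields; the function name is our own). The statistics satisfy the geometric relation of Taylor (2001): the centered RMSE obeys $E'^{2} = \sigma_{s}^{2} + \sigma_{r}^{2} - 2\sigma_{s}\sigma_{r}R$, which is what allows all three to be plotted in a single polar diagram:

```python
import numpy as np

def taylor_stats(sim, ref):
    """Pattern correlation, normalized standard deviation, and centered RMSE."""
    sim_a = sim - sim.mean()          # anomalies about the field mean
    ref_a = ref - ref.mean()
    corr = np.sum(sim_a * ref_a) / (
        np.sqrt(np.sum(sim_a ** 2)) * np.sqrt(np.sum(ref_a ** 2)))
    nstd = sim.std() / ref.std()      # normalized by the reference spread
    crmse = np.sqrt(np.mean((sim_a - ref_a) ** 2))
    return corr, nstd, crmse
```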
Figure 8. Taylor diagrams evaluating the model skill in simulating the annual temperature and bias correction effects over 10 subregions. CRU observation data are used as the reference. The x- and y-axes refer to the standard deviations (normalized) and the azimuthal axis refers to the spatial pattern correlation between two fields.
3.3. Temperature projections
Based on the most effective bias correction method (QM) and the BMA weights derived from the training period (1980–2005), bias-corrected and BMA-weighted temperature projections for the 10 subregions under the two scenarios (RCP4.5 and RCP8.5) were generated. Figure 9 shows the average temperature projections for the BMA ensemble under the RCP4.5 and RCP8.5 scenarios (results of the five RCMs and the driving GCM are shown in Figs. S6 and S7, respectively). Similar warming trends were detected over the 10 subregions for the 2030–2049 period under both scenarios, but with a more obvious warming trend under the RCP8.5 scenario. The warming trend was more remarkable in the northern part of CORDEX-EA than in the southern part, especially for the TP, NW, MG and NE regions, where the warming was over 3°C in the 2030–49 period. Moreover, analysis of seasonal warming results indicated that the warming was more remarkable in winter than in summer (not shown). Similar warming patterns have been detected in previous studies (Ham et al., 2016; Gu et al., 2018), although those studies focused mainly on the China region. The BMA results indicate a clear increase in average temperature under the RCP4.5 and RCP8.5 scenarios for all subregions (Table 2). However, for a given subregion, the warming varied among RCMs. For example, the annual temperature increase over NE ranged from 0.5°C to 3.5°C under the RCP4.5 scenario. Figure 10 also illustrates the obvious warming trend over the 10 subregions from 2006 to 2049 under both scenarios. Note that the warming trends in the TP, NW, NE and MG regions [reaching 0.6–0.7°C (10 yr)−1] were more remarkable than in the other regions [0.3–0.5°C (10 yr)−1] under the RCP8.5 scenario, which is consistent with the results in Fig. 9.

Subregion | Scenario | HadGEM3-RA | RegCM | MM5 | WRF | YSU-RCM | BMA |
Northwestern China | RCP4.5 | 2.6 | 3.1 | 1.8 | 3.4 | 2.7 | 2.1 |
Northwestern China | RCP8.5 | 2.9 | 3.8 | 3.2 | 3.2 | 3.3 | 3.1 |
Tibetan Plateau | RCP4.5 | 2.5 | 2.4 | 2.2 | 2.0 | 1.8 | 2.2 |
Tibetan Plateau | RCP8.5 | 2.9 | 2.9 | 2.9 | 3.0 | 2.2 | 2.7 |
Northern China | RCP4.5 | 1.7 | 1.9 | 2.5 | 1.2 | 1.9 | 1.9 |
Northern China | RCP8.5 | 2.0 | 2.4 | 2.6 | 2.5 | 2.4 | 2.3 |
Northeastern China | RCP4.5 | 2.7 | 2.8 | 3.5 | 0.5 | 2.2 | 2.4 |
Northeastern China | RCP8.5 | 3.2 | 3.5 | 3.6 | 3.2 | 3.2 | 3.2 |
Southern China | RCP4.5 | 1.5 | 1.5 | 2.1 | 1.0 | 1.6 | 1.4 |
Southern China | RCP8.5 | 1.7 | 1.8 | 2.0 | 1.9 | 1.9 | 1.8 |
Mongolia | RCP4.5 | 2.9 | 3.1 | 3.0 | 3.0 | 3.0 | 2.6 |
Mongolia | RCP8.5 | 3.4 | 3.9 | 3.9 | 3.3 | 4.0 | 3.5 |
Korean Peninsula & Japan | RCP4.5 | 2.0 | 2.0 | 2.4 | 1.8 | 2.1 | 2.0 |
Korean Peninsula & Japan | RCP8.5 | 2.2 | 2.4 | 2.6 | 2.6 | 2.6 | 2.4 |
India | RCP4.5 | 1.6 | 1.4 | 1.1 | 1.3 | 1.3 | 1.4 |
India | RCP8.5 | 1.8 | 1.6 | 1.7 | 1.3 | 1.4 | 1.5 |
Indochina | RCP4.5 | 1.4 | 1.2 | 1.6 | 1.1 | 1.3 | 1.3 |
Indochina | RCP8.5 | 1.6 | 1.5 | 1.8 | 1.6 | 1.5 | 1.6 |
Southeast Asia | RCP4.5 | 1.1 | 0.9 | 0.9 | 1.0 | 0.9 | 1.0 |
Southeast Asia | RCP8.5 | 1.4 | 1.1 | 1.2 | 1.1 | 1.1 | 1.2 |
Table 2. Annual temperature changes (future minus baseline) in °C projected by five RCMs and the BMA method for 10 subregions under the RCP4.5 and RCP8.5 scenarios.
Figure 9. Spatial distribution of annual temperature changes [RCP8.5 (4.5) minus baseline] projected by five RCMs and the BMA method. See the supplementary material for results of the five RCMs and the driving GCM under RCP4.5 (Fig. S6) and RCP8.5 (Fig. S7).
Figure 10. Results from five RCMs and BMA for annual temperature change under the two scenarios, RCP4.5 and RCP8.5. The dashed straight lines represent the trends of the temperature changes.
The future temperature changes varied among subregions and months. Figure 11 illustrates the changes of monthly temperature in the 10 subregions under the RCP8.5 scenario (changes under RCP4.5 were similar but of smaller amplitude; not shown). The BMA projection also indicates that monthly temperature increased more notably in the northerly regions of CORDEX-EA, especially the NE and MG regions, where the most rapid increases were in November (more than 4.5°C). Furthermore, the increases in monthly temperature varied by latitude. For example, the MG and NE regions exhibited similar increasing patterns; the same was true for the KJ, NC, NW and TP regions, as well as for the SEA, India and InC regions. The SC region was dissimilar to the other regions. According to Hui et al. (2019), this may be due to cloud cover, but the detailed reasons why the pattern of temperature increase varies with latitude need further study. Finally, the monthly warming in the subregions of China under the RCP4.5 scenario ranged from 0.8°C to 4.2°C. These values are similar to, but larger than, the findings of a previous study (0.3°C–2.2°C) (Gu et al., 2018), which was based on raw temperatures and simple multi-model ensemble averaging.
Figure 11. Projected monthly temperature changes (RCP8.5 minus baseline) for 10 subregions. The monthly temperature changes for the RCP4.5 scenario are similar to those for RCP8.5 and are not shown.
The uncertainty in projections needs to be considered when using them in applications (such as driving hydrological models). Here, we took the standard deviation among the five RCMs as the uncertainty indicator (Nordhaus, 2018). Figure 12 gives the uncertainty of the raw model outputs and the bias-corrected results for both the RCP4.5 and RCP8.5 scenarios. The results show that the uncertainty had no apparent relationship with time or radiative forcing (RCP4.5 vs. RCP8.5). This differs from the uncertainties projected by GCMs in Asia, where the uncertainty increases with time (Miao et al., 2016), and may be because the RCMs in CORDEX-EA use the same driving GCM, so only internal variability remains (Chen et al., 2019). The uncertainty was greatest for the TP and NW regions, exceeding 2.5°C for both the RCP4.5 and RCP8.5 scenarios throughout the projection period. For the SEA, KJ and India subregions, the uncertainty was lowest, at less than 0.7°C for both scenarios. This indicates that there was more uncertainty in the high-latitude subregions, as shown in several previous studies (Deser et al., 2012; Miao et al., 2016; Woldemeskel et al., 2016). Because the QM method narrowed the differences among RCMs, the uncertainty was reduced for most subregions and both scenarios (except for the NE region under the RCP4.5 scenario). The reductions were more remarkable for the RCP8.5 scenario, ranging from 66% to 94% across all subregions. The lower uncertainty in the RCP8.5 scenario indicates a consistent warming trend under that scenario for all subregions.
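The uncertainty measure and its reduction after bias correction can be sketched as follows (illustrative; the array layout of regional-mean projections is our own assumption):

```python
import numpy as np

def uncertainty(proj):
    # proj: (n_models, n_years) array of projected regional-mean temperatures;
    # the inter-model standard deviation per year is the uncertainty measure.
    return np.std(proj, axis=0)

def uncertainty_reduction(raw, corrected):
    # Percent reduction of inter-model spread after bias correction.
    u_raw, u_bc = uncertainty(raw), uncertainty(corrected)
    return 100.0 * (u_raw - u_bc) / u_raw
```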
Figure 12. Uncertainty of raw model data and bias-corrected data under two scenarios. The standard deviation among the five RCMs is used as the measure of uncertainty.