2.1. Definition
Recently, a methodology called "emergent constraint" has been developed for reducing uncertainties in climate change projections. This framework is based on: (1) Identifying responses to climate change perturbations in which models disagree (e.g., cloud feedback).
(2) Relating the intermodel spread in the climate change responses to present-day biases or short-term variations that can be observed.
This could be achieved by identifying an empirical relationship between the intermodel spread of an observable variable (hereafter named A) and the spread of a response to a given perturbation (B). The variable A is called the predictor and the variable B the predictand. Because observed measurements of the predictor A can then be used to constrain the models' responses, B, the relationship between A and B is called an emergent constraint (Klein and Hall, 2015). The variable A may represent a metric that characterizes the climate system (e.g., humidity, winds) or some natural variability (e.g., in the seasonal cycle, or from year to year). The response B can be the global-mean response of the climate system (e.g., ECS) or a local response to perturbations (e.g., a regional climate feedback). Therefore, the goal is to find a predictor that, given its relation to a climate response, emerges as a constraint on future projections.
Once variable A is estimated observationally, the emergent constraint can be used to assess the realism of models and to eventually narrow the spread of climate change projections. As an idealized example, Fig. 1 shows a randomly generated relationship between a predictor A simulated by 29 climate models and a projection of future climate changes (in principle, any climate change response may be considered). The green distribution represents an observational measurement and its uncertainties. We see that differences in A are significantly associated with differences in B, here with a correlation coefficient of r = 0.83. By constraining A through potential observations (green distribution), this example suggests that some models are more realistic and, by inference, are associated with a more realistic predictand. The degree to which the models' A deviates from the observed A can be used to derive weights for the models to compute a weighted average of the models' response, B (see section 2.2.3).
Figure 1. Idealized relationship between a predictor and a predictand. The 29 models (dots) are associated with randomly generated values of the predictor A (x-axis, between 0 and 3). The predictand B, on the y-axis, follows an idealized linear relationship with A plus random noise (see text).
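To make this setup concrete, the following Python sketch generates an idealized ensemble analogous to Fig. 1. The slope, noise level, and observational mean and spread are illustrative assumptions, not the values used to draw the figure.

```python
# Illustrative synthetic ensemble in the spirit of Fig. 1 (assumed slope,
# noise level, and observational constraint; not the values behind the figure).
import numpy as np

rng = np.random.default_rng(0)

n_models = 29
A = rng.uniform(0.0, 3.0, n_models)              # predictor A simulated by each model
B = 2.0 + A + rng.normal(0.0, 0.5, n_models)     # predictand B (e.g., a climate response)

r = np.corrcoef(A, B)[0, 1]                      # strength of the emergent relationship
print(f"intermodel correlation between A and B: r = {r:.2f}")
print(f"prior (unweighted) estimate of B: {B.mean():.2f} ± {B.std():.2f}")

# Hypothetical observational estimate of the predictor (the green distribution in Fig. 1)
A_obs, sigma_obs = 1.5, 0.25
```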
2.2. Criterion and uncertainties
2.2.1. Physical understanding
An emergent constraint can be trusted if it meets certain criteria. The most important one is an understanding of physical mechanisms underlying the empirical relationship, which is the key to increasing the plausibility of a proposed emergent constraint. Several methods have recently been suggested to verify the level of confidence in emergent constraints (Caldwell et al., 2018; Hall et al., 2019). One of these methods consists of checking the reliability of an emergent constraint by developing sensitivity tests that would modify A for some models (if there is a straightforward way of manipulating A). For accurate model comparison, this would require coupled model simulations with global-mean radiative balance as performed in CMIP. If the models' behavior after the modification deviates from that expected from the emergent constraint, the relationship may have been found by chance. A study showed that this risk is not negligible (Caldwell et al., 2014), primarily because climate models are not independent, often being derived from each other (Masson and Knutti, 2011; Knutti et al., 2013). Keeping only models with enough structural differences often reduces the reliability of identified emergent constraints. The search for correlations with no obvious physical understanding could lead to such spurious results. Conversely, if those sensitivity tests confirm the intermodel relationship, the credibility of assumed physical mechanisms and observational constraints on climate change projections increases. Those tests could be performed through an ensemble of simulations over which either parameterizations or uncertain parameters are modified. This would help (1) disentangle structural and parametric influence on the multimodel spread in predictor A and (2) highlight underlying processes explaining the empirical relationship (Kamae et al., 2016).
2.2.2. Observation uncertainties
The second criterion is related to the correct use of observations. Uncertainties tied to the observation of the predictor must be small enough so that not all models remain consistent with the data. This criterion may not be satisfied if observations are available only over a short time period [as is the case for the vertical structure of clouds (e.g., Winker et al., 2010)], or if the predictor is defined through low-frequency variability (trends, decadal variability), or if there is a lack of consistency among available datasets [as is the case for global-mean precipitation and surface fluxes (e.g., Găinușă-Bogdan et al., 2015)]. Finally, some observational constraints rely on parameterizations used in climate models, e.g., reanalysis data that use sub-grid assumptions for representing clouds (e.g., Dee et al., 2011) or data products for clouds that use sub-grid assumptions for radiative transfer calculations (Rossow and Schiffer, 1999).
2.2.3. Statistical inference
Emergent constraints can allow us to narrow uncertainties and quantify more likely estimates of climate projections, i.e., a constrained posterior range of a prior distribution. However, not all emergent constraints should be given the same trust. Hall et al. (2019) suggested relating this trust to the level of physical understanding associated with the emergent relationship. This means making predictions only for confirmed emergent constraints.
Posterior estimates are influenced by the way the statistical inference has been performed. However, no consensus has yet emerged for this inference. A first method for quantifying this constraint is to directly use uncertainties underlying the observational predictor and project them onto the vertical axis using the emergent constraint relationship. This method takes into account uncertainties in both observations and the estimated regression model, through bootstrapping samples for instance (Huber et al., 2011). Most studies use this straightforward framework. In our idealized example, this would give a posterior estimate that is slightly larger and narrower than the prior estimate (Fig. 1). However, several problems with this kind of inference might be highlighted, as suggested by Schneider (2018):
● Most fundamentally, the inference generally revolves around assuming that there exists a linear relationship, and estimating parameters in the linear relationship from climate models. However, it is not clear that such a linear relationship does in fact exist, and estimating parameters in it is strongly influenced by models that are inconsistent with the observations (extreme values). In other words, the analysis neglects structural uncertainty about the adequacy of the assumed linear model, and the parameter uncertainty the analysis does take into account is strongly reduced by models that are "bad" according to this model–data mismatch metric. Thus, outliers strongly influence the result. However, the influence of models consistent with the data but off the regression line is diminished. Given that there is no strong a priori knowledge about any linear relationship (this is why it is an "emergent" constraint), it seems inadvisable to make one's statistical inference strongly dependent on models that are not consistent with the data at hand.
● Often, analysis parameters are chosen so as to give strong correlations between the response of models to perturbations and the predictor. This introduces selection bias in the estimation of the regression lines. This leads to underestimation of uncertainties in parameters, such as the slope of the regression line, which propagates into underestimated uncertainties in the inferred estimate.
● When regression parameters are estimated by least squares, the observable on the horizontal axis is treated as being a known predictor, rather than as being affected by error (e.g., from sampling variability). This likewise leads to underestimation of uncertainties in regression parameters. This problem can be mitigated by using errors-in-variables methods.
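As a hedged illustration of the regression-based inference discussed above, the following sketch projects an assumed observational distribution of the predictor through a bootstrapped linear fit. The synthetic ensemble mimics the idealized setup of Fig. 1, and all numbers are illustrative assumptions.

```python
# Minimal sketch of the regression-based ("slope") inference under the
# assumptions of Fig. 1: bootstrap the model ensemble, refit the emergent
# relationship, sample the observed predictor, and project it onto the
# predictand axis. Values are illustrative, not from a real model ensemble.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ensemble of 29 models (same idealized setup as the earlier sketch)
A = rng.uniform(0.0, 3.0, 29)                    # predictor
B = 2.0 + A + rng.normal(0.0, 0.5, 29)           # predictand
A_obs, sigma_obs = 1.5, 0.25                     # assumed observational constraint on A

n_boot = 10_000
posterior = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, len(A), len(A))            # resample models with replacement
    slope, intercept = np.polyfit(A[idx], B[idx], 1) # refit the linear relationship
    a_sample = rng.normal(A_obs, sigma_obs)          # sample observational uncertainty
    posterior[i] = intercept + slope * a_sample      # project onto the predictand axis

print(f"regression-based posterior for B: {posterior.mean():.2f} ± {posterior.std():.2f}")
# Note: the scatter of models about the regression line is ignored, and the
# predictor is treated as error-free in the fit itself -- two of the
# shortcomings listed above.
```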
A second method consists of estimating a posterior distribution by weighting each model's response by the likelihood of the model given the observations of the predictor. This can be accomplished by a Bayesian weighting method (e.g., Hargreaves et al., 2012) or through information theory (e.g., Brient and Schneider, 2016), such as the Kullback–Leibler divergence or relative entropy (Burnham and Anderson, 2003). This method does not use linear regression for estimating the posterior distribution and therefore favors realistic models and de-emphasizes outliers inconsistent with observations. For instance, the Kullback–Leibler divergence applied to our idealized example (assuming an identical standard deviation between observation and each model) suggests a posterior estimate lower and narrower than the prior estimate (Fig. 1).
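A minimal sketch of this weighting approach, under the Gaussian and equal-variance assumptions mentioned above, is given below; the synthetic ensemble and observational values are again illustrative.

```python
# Minimal sketch of the weighting inference: each model's predictand is weighted
# by the likelihood of the model given the observed predictor. With Gaussian
# distributions of identical standard deviation, the Kullback–Leibler divergence
# reduces to a squared standardized distance, and weights decay exponentially with it.
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 3.0, 29)                  # predictor simulated by 29 models
B = 2.0 + A + rng.normal(0.0, 0.5, 29)         # predictand (e.g., ECS)
A_obs, sigma_obs = 1.5, 0.25                   # assumed observational constraint on A

kl = 0.5 * ((A - A_obs) / sigma_obs) ** 2      # KL divergence under the Gaussian assumption
w = np.exp(-kl)
w /= w.sum()                                   # normalized model weights

b_mean = np.sum(w * B)                                   # weighted posterior mean
b_std = np.sqrt(np.sum(w * (B - b_mean) ** 2))           # weighted posterior spread
print(f"weighted posterior for B: {b_mean:.2f} ± {b_std:.2f}")
# No linear relationship is assumed: models far from the observations receive
# essentially zero weight, regardless of where they sit relative to any regression line.
```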
This more justifiable inference still suffers from several shortcomings (Schneider, 2018). For example, it suffers from selection bias, and it treats the model ensemble as a random sample (which it is not). It also only weights models, suggesting that climate projections far outside the range of what current models produce will always come out as being very unlikely. Given uncertainties underlying each method, posterior estimates should thus be quantified using different methods [as previously done in Hargreaves et al. (2012), for instance], which must be explicitly described.
Figure 2 provides a tangible example for explaining the importance of statistical inference. It shows the relation in 29 current climate models between ECS and the strength with which the reflection of sunlight in tropical low-cloud regions covaries with surface temperature (Brient and Schneider, 2016). That is, the horizontal axis shows the percentage change in the reflection of sunlight per degree of surface warming, for deseasonalized natural variations. It is clear that there is a strong correlation (correlation coefficient of about −0.7) between ECS on the vertical axis and the natural fluctuations on the horizontal axis. The green line on the horizontal axis indicates the probability density function (PDF) of the observed natural fluctuations. What many previous emergent-constraint studies have done is to take such a band of observations and project it onto the vertical ECS axis using the estimated regression line between ECS and the natural fluctuations, taking into account uncertainties in the estimated regression model. If we do this with the data here, we obtain an ECS that likely lies within the blue band: between 3.1 and 4.2 K, with a most likely value of 3.6 K. Simply looking at the scatter of the 29 models in this plot indicates that this uncertainty band is too narrow. For example, model 7 is consistent with the observations, but has a much lower ECS of 2.6 K. The regression analysis would imply that the probability of an ECS this low or lower is less than 4%. Yet, this is one of 29 models, and one of relatively few (around 9) that are likely consistent with the data. Obviously, the probability of an ECS this low is much larger than what the regression analysis implies. As explained before, these flaws could be reduced by weighting ECS by the likelihood of the model given the observations. Models such as numbers 2 and 3, which are inconsistent with observations, would receive essentially zero weight (unlike in the regression-based analysis, they do not influence the final result). No linear relationship is assumed or implied, so models such as 7 receive a large weight because they are consistent with the data, although they lie far from any regression line. The resulting posterior PDF for ECS is shown by the orange line in Fig. 2b. The most likely ECS value according to this analysis is 4.0 K. It is shifted upward relative to the regression estimate, toward the values in the cluster of models (around numbers 25 and 26) with relatively high ECS that are consistent with the observations. The likely ECS range stretches from 2.9 to 4.5 K. This is perhaps a disappointingly wide range. It is 50% wider than what the analysis based on linear regressions suggests, and it is not much narrower than what simple-minded equal weighting of raw climate models gives (gray line in Fig. 2b). It is, however, a much more statistically defensible range.
Figure 2. (a) Scatterplot of ECS versus deseasonalized covariance of marine tropical low-cloud reflectance with surface temperature.
In order to generalize the sensitivity of inferred estimates to the statistical methodology, 10⁴ random emergent relationships are generated. Figure 3 shows the statistics of inferences (mode, confidence intervals) as a function of correlation coefficients. Averaged modes and confidence intervals obtained from the two inference methods are consistent with each other. However, the variance of inferred best estimates (modes) using the weighting method is larger than that obtained with the slope inference. This is in agreement with results obtained from the tangible example of Brient and Schneider (2016), which shows different most-likely values (Fig. 2). Therefore, this suggests that the best estimate is significantly influenced by the way statistical inference is performed.
Figure 3. Relationship between modes and correlation coefficient (r) of 10⁴ randomly generated emergent constraints, as per the example shown in Fig. 1. Thick lines, dashed lines and shading represent the average mode, the average 66% confidence interval, and the standard deviation of the mode across the set of emergent relationships. Characteristics of the prior distributions are represented in black. Posterior estimates using the slope inference or the weighted average are represented in blue and red, respectively, using an idealized observed distribution of the predictor as defined in Fig. 1. The PDF of correlation coefficients is shown as a thin black line on the x-axis. This figure shows that average modes and confidence intervals remain independent of the inference method, but the uncertainty of the mode value is larger for the weighting method.
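The following sketch outlines a Monte Carlo experiment of this kind. It is only an illustration of the procedure: the noise levels and observational uncertainty are assumed, and weighted and regression-based means are used as simple stand-ins for the modes shown in Fig. 3.

```python
# Illustrative Monte Carlo experiment: generate many random emergent relationships
# and compare the spread of best estimates from the two inference methods.
import numpy as np

rng = np.random.default_rng(0)
n_rel, n_models = 10_000, 29
A_obs, sigma_obs = 1.5, 0.25                             # assumed observational constraint

best_slope, best_weight = np.empty(n_rel), np.empty(n_rel)
for k in range(n_rel):
    A = rng.uniform(0.0, 3.0, n_models)
    noise = rng.uniform(0.2, 1.0)                        # varies the correlation strength
    B = 2.0 + A + rng.normal(0.0, noise, n_models)

    slope, intercept = np.polyfit(A, B, 1)               # regression-based best estimate
    best_slope[k] = intercept + slope * A_obs

    w = np.exp(-0.5 * ((A - A_obs) / sigma_obs) ** 2)    # weighting-based best estimate
    best_weight[k] = np.sum(w * B) / w.sum()

print("spread of best estimates across relationships (standard deviation):")
print(f"  slope inference : {best_slope.std():.2f}")
print(f"  weighted average: {best_weight.std():.2f}")
```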
Finally, uncertainties underlying these estimates may be influenced by the level of structural similarity between climate models. Indeed, adding models with only weak structural differences (e.g., model versions with different resolution, interactive chemistry) can artificially strengthen the correlation coefficient of the empirical relationship and the inferred best estimate (Sanderson et al., 2015). This coefficient is usually the first criterion that quantifies the statistical credibility of an emergent constraint, i.e., the larger the correlation coefficient, the more trustworthy the regression-based inference will be. However, it remains unknown what level of statistical significance justifies an emergent constraint and whether these correlations best characterize their credibility.
Group | Reference | Predictand | Original | Constrained
A | Covey et al. (2000) | ECS (K) | 3.4±0.8 | –
A | Volodin (2008) (RH) | ECS (K) | 3.3±0.6 | 3.4±0.3
A | Volodin (2008) (cloud) | ECS (K) | 3.3±0.6 | 3.6±0.3
A | Trenberth and Fasullo (2010) | ECS (K) | 3.3±0.6 | >4.0
A | Huber et al. (2011) | ECS (K) | 3.3±0.6 | 3.4±0.6
A | Fasullo and Trenberth (2012) | ECS (K) | 3.3±0.6 | 4.1±0.4*
A | Sherwood et al. (2014) | ECS (K) | 3.4±0.8 | 4.5±1.5*
A | Su et al. (2014) | ECS (K) | 3.4±0.8 | >3.4
A | Zhai et al. (2015) | ECS (K) | 3.4±0.8 | 3.9±0.5
A | Tian (2015) | ECS (K) | 3.4±0.8 | 4.1±1.0*
A | Brient and Schneider (2016) | ECS (K) | 3.4±0.8 | 4.0±1.0*
A | Lipat et al. (2017) | ECS (K) | 3.4±0.8 | 2.5±0.5*
A | Siler et al. (2018) | ECS (K) | 3.4±0.8 | 3.7±1.3
A | Cox et al. (2018) | ECS (K) | 3.4±0.8 | 2.8±0.6
B | Qu et al. (2014) | Low-cloud amount feedback (% K⁻¹) | −1.0±1.5 | –
B | Gordon and Klein (2014) | Low-cloud optical depth feedback (K⁻¹) | 0.04±0.03 | –
B | Brient and Schneider (2016) | Low-cloud albedo change (% K⁻¹) | −0.12±0.28 | −0.4±0.4*
B | Siler et al. (2018) | Global cloud feedback (% K⁻¹) | 0.43±0.30 | 0.58±0.31
C | Allen and Ingram (2002) | Global-mean precipitation | – | –
C | O’Gorman (2012) | Tropical precipitation extremes (% K⁻¹) | 2–23 | 6–14
C | DeAngelis et al. (2015) | Clear-sky shortwave absorption (W m⁻² K⁻¹) | 0.8±0.3 | 1.0±0.1
C | Li et al. (2017) | Indian monsoon rainfall changes (% K⁻¹) | 6.5±5.0 | 3.5±4.0
C | Watanabe et al. (2018) | Hydrological sensitivity (% K⁻¹) | 2.6±0.3 | 1.8±0.4
D | Cox et al. (2013) | Tropical land carbon release (GtC K⁻¹) | 69±39 | 53±17
D | Wang et al. (2014) | Tropical land carbon release (GtC K⁻¹) | 79±43 | 70±45*
D | Wenzel et al. (2014) | Tropical land carbon release (GtC K⁻¹) | 49±40 | 44±14
D | Hoffman et al. (2014) | CO2 concentration in 2100 (ppm) | 980±161 | 947±35
D | Wenzel et al. (2016) | Gross primary productivity (%) | 34±15 | 37±9
D | Kwiatkowski et al. (2017) | Tropical ocean primary production (% K⁻¹) | −4.0±2.2 | −3.0±1.0
D | Winkler et al. (2019) | Gross primary production (PgC yr⁻¹) | 2.1±1.9 | 3.4±0.2
E | Plazzotta et al. (2018) | Global-mean cooling by sulfate [K (W m⁻²)⁻¹] | 0.54±0.33 | 0.44±0.24
F | Hall and Qu (2006) | Snow-albedo feedback (% K⁻¹) | −0.8±0.3 | −1.0±0.1*
F | Qu and Hall (2014) | Snow-albedo feedback (% K⁻¹) | −0.9±0.3 | −1.0±0.2*
F | Boé et al. (2009) | Remaining Arctic sea-ice cover in 2040 (%) | 67±20* | 37±10*
F | Massonnet et al. (2012) | Year of ice-free summer Arctic | 2029–2100 | 2041–2060
F | Bracegirdle and Stephenson (2013) | Arctic warming (°C) | ~2.78 | <2.78
G | Kidston and Gerber (2010) | Shift of the Southern Hemispheric jet (°) | −1.8±0.7 | −0.9±0.6
G | Simpson and Polvani (2016) | Shift of the Southern Hemispheric jet (°) | ~−3 | ~−0.5* (winter)
G | Gao et al. (2016) | Shift of the Northern Hemispheric jet (°) | ~0 | ~−2 (winter)
G | Gao et al. (2016) | Shift of the Northern Hemispheric jet (°) | ~+1.5 | ~−1 (spring)
G | Douville and Plazzotta (2017) | Summer midlatitude soil moisture | – | –
G | Lin et al. (2017) | Summer US temperature changes (°C) | 6.0±0.8 | 5.2±1.0*
G | Donat et al. (2018) | Frequency of heat extremes | – | –
H | Hargreaves et al. (2012) | ECS (K) | 3.1±0.9 | 2.3±0.9
H | Schmidt et al. (2013) | ECS (K) | 3.3±0.8 | 3.1±0.7
Table 1. List of 45 published emergent constraints, the predictand they constrain, and the original and constrained ranges. The mean and standard deviations of prior and posterior estimates are listed where available. An asterisk signifies that the moments of the distribution are not directly quantified in the reference paper but derived from their emergent relationship and the observational constraint, and thus should be understood only as a qualitative assessment. Letters correspond to groups of emergent constraints with related predictands.
In the late 1990s, signs of climate feedback started to be constrained from climate models and observations (e.g., Hall and Manabe, 1999). Usually analyzing a single model, these studies improved our understanding of the physical mechanisms driving climate feedback. However, the lack of intermodel comparisons in these studies did not allow quantifying the relative importance of feedbacks in driving uncertainties in climate change projections. Model intercomparisons during this period identified the cloud response to global warming as the key contributor to intermodel spread in climate projections (Cess et al., 1990, 1996). Both types of studies paved the way toward process-oriented analysis for understanding intermodel differences in climate projections.
To the best of my knowledge, the first attempt at introducing the concept of emergent constraints was made by Allen and Ingram (2002). The authors tried to constrain the spread in global-mean future precipitation change simulated by the set of climate models participating in CMIP2 (Meehl et al., 2000) through observable temperature variability and a simple energetic framework. Despite the inability to robustly narrow future precipitation changes, they introduced the concepts that establish emergent constraints: the need for physical understanding and the ability of observations to constrain the model predictor.
An early application of emergent constraints concerns the snow-albedo feedback. Hall and Qu (2006) showed that differences among models in seasonal Northern Hemisphere surface albedo changes are well correlated with global-warming albedo changes in CMIP3 models. The three main criteria for a robust emergent constraint are satisfied: the physical mechanisms are well understood, the statistical relationship between the quantities of interest is strong, and uncertainties in the observed variations are weak, allowing the authors to constrain the Northern Hemisphere snow-albedo feedback under global warming. Despite this successful application, the generation of models that followed (CMIP5) continued to exhibit a large spread in seasonal variability of snow-albedo changes (Qu and Hall, 2014). This could be narrowed through targeted process-oriented model development based on the evaluation of snow and vegetation parameterizations (Thackeray et al., 2018). Yet, this study can be seen as the first confirmed emergent constraint (Klein and Hall, 2015; Hall et al., 2019).
The success of the Hall and Qu (2006) study led a number of studies to seek emergent constraints able to narrow climate change responses. The following sections describe studies aimed at constraining ECS, cloud feedback, and changes in various Earth system components, such as the hydrological cycle and the carbon cycle.
6.1. The hydrological cycle
Uncertainties in the response of precipitation to global warming are large and remain to be narrowed. Increasing the confidence in precipitation changes would provide important benefits for regional climate projections and risk assessment (Christensen et al., 2013). Links between natural variability of extreme precipitation and temperature offer possible observational constraints for changes in climate extremes, especially because the underlying physical mechanisms are relatively well understood (O’Gorman and Schneider, 2008). These constraints usually suggest a strong intensification of heavy rainfall with warming (O’Gorman, 2012; Borodina et al., 2017). Changes in the hydrological cycle can partly be attributed to changes in the clear-sky shortwave absorption, which is related to models' radiative transfer parameterizations (DeAngelis et al., 2015). Watanabe et al. (2018) followed this path by providing a best estimate for both hydrological sensitivity and shortwave cloud feedback, through the climatology of the surface longwave cloud radiative effect. This study then connected the intermodel spread of changes in the water cycle and ECS. Process-oriented analysis of specific emergent constraints might thus lead to targeted model development for narrowing the spread in climate projections.
6.2. The carbon cycle
A second topic that has also received considerable attention is the sensitivity of the carbon cycle to climate change. Cox et al. (2013) found a robust relationship that links interannual covariations between tropical temperature and carbon release into the atmosphere (the predictor) with the weakening of carbon storage under global warming. Observations highlight that most climate models overestimate the present-day sensitivity of land CO2 changes, suggesting an overly strong weakening of tropical land CO2 storage with climate change (Table 1). This constraint has been confirmed in subsequent analyses (Wang et al., 2014; Wenzel et al., 2014). Additional studies have aimed to constrain other aspects of the climate–carbon cycle feedback, such as terrestrial photosynthesis (Wenzel et al., 2016), sinks and sources of CO2 (Hoffman et al., 2014; Winkler et al., 2019), and tropical ocean primary production (Kwiatkowski et al., 2017).
6.3. Geoengineering
Constraining uncertainties in geoengineering simulations has also been addressed. Intermodel differences in the climate response to an artificial increase in sulfate concentrations are correlated with intermodel differences in the simulated cooling by past volcanic eruptions (Plazzotta et al., 2018). The physical assumption underlying this relationship is that volcanic eruptions can be understood as an analogue of solar radiation management (Trenberth and Dai, 2007). Observations from satellites suggest that models overestimate the cooling by volcanic eruptions, thus overestimating the potential cooling effect of adding aerosols to the stratosphere.
6.4. Regional climate changes
While most emergent constraints focus on global scales, several aim to better understand and constrain regional climate changes. So far, these studies mostly focus on extratropical climate responses, as was the case for the pioneering work of Hall and Qu (2006). Attempts at constraining changes in extreme temperature have recently shown that models slightly overestimate the increasing frequency of heat extremes with global warming in Europe and North America (Donat et al., 2018), in relation to overly strong soil drying (Douville and Plazzotta, 2017). Changes in the extratropical circulation have also been studied. Models show a robust poleward shift of the Southern Hemisphere jet with global warming, but are uncertain about the sign of the shift in the Northern Hemisphere jet. Emergent constraints suggest that models overestimate the Southern Hemispheric poleward shift (Kidston and Gerber, 2010; Simpson and Polvani, 2016) and predict that the Northern Hemisphere jet will likely move poleward (Gao et al., 2016). Finally, a number of studies have sought to constrain changes over the Arctic region. Their results show that most models delay the year when summertime sea-ice cover is likely to disappear (Boé et al., 2009; Massonnet et al., 2012) and slightly overestimate the strength of the polar amplification (Bracegirdle and Stephenson, 2013).
Regional emergent constraints remain rare, which reduces the ability to compare metrics and observations to one another. Results are thus not yet robust and should be viewed with caution. However, given the large uncertainties underlying regional climate projections and the benefits local populations would derive from better model projections (Christensen et al., 2013), I expect to see numerous new emergent constraints aimed at narrowing uncertainties in regional climate changes in the near future. Nevertheless, this should be addressed through rigorous physical understanding, given the numerous multi-scale interactions and adjustments that induce regional differences.