2.1. Outlier classification
Complex sources of errors in observational datasets make it difficult to identify the cause of errors from the observed data alone (Sciuto et al., 2013). Therefore, outliers are classified into four types based on their characteristics:
(1) Spatially and temporally inconsistent outliers (ST-outliers). As with meteorological parameters (Steinacker et al., 2011), the scale of pollution phenomena exceeds that resolved by the observational network, so the measured concentrations can be expected to be smooth in both time and space. Observations that differ greatly from values observed at adjacent times or at neighboring sites are defined as ST-outliers.
(2) Low variance outliers (LV-outliers). This type of outlier has a very low variance in time series compared to neighboring sites. Some LV-outliers do not change over time and can be observed when the pump of the instruments is stuck or the filter tape is depleted. Most outliers that change very slowly come from CO observations measured by the pressure difference between two chambers. These outliers can be observed when the aging of the light sources in the two chambers is not synchronized (Luo, 2016).
(3) Periodic outliers (P-outliers). This type of outlier usually appears every 24 h, as shown in Fig. 1e. For some instruments, accuracy may decrease with time due to the aging of light sources and changes in the ambient environment. Regular calibration is required for such instruments. However, the calibration process may interfere with the observations, thus inserting abnormal values into the online measurement datasets.
Figure 1. Samples of outliers. (a, b) Spatiotemporal outliers differ greatly from neighboring observations in time and space. (c, d) Low variance outliers either stay the same or change abnormally slowly in time and differ significantly from observations at nearby sites. (e) Periodic outliers appear periodically, usually every 24 h. (f) PM10 < PM2.5 outliers are PM10 observations that are lower than the PM2.5 observations at the same time and site.
(4) Lower PM10 than PM2.5 outliers (LP-outliers). This type of outlier involves PM2.5 concentrations that are higher than PM10 concentrations observed at the same hour and site. Most PM2.5 monitors in the CNEMN network were installed around a decade after the PM10 monitors, and the more advanced instruments used for PM2.5 monitoring better handle the loss of semi-volatile components of particulate matter (see section 2.7 for details). Therefore, PM2.5 data are more reliable than PM10 data, and PM10 data are marked as LP-outliers if the observed concentration of PM2.5 is higher than that of PM10.
Figure 1 shows examples of classified outliers. It is important to note that the four types of outliers are not exclusive. Some P-outliers are also ST-outliers, and some LP-outliers are also LV-outliers.
2.2. Probabilistic automatic outlier detection
The most common technique for identifying outliers in meteorological data is the z-score method (Lanzante, 1996; Feng et al., 2004; Durre et al., 2010). This method normalizes the observed values using their mean and standard deviation, then removes those values whose z-score exceeds a specified threshold. The z-score is calculated as follows: \begin{equation} z(i)=\frac{|f(i)-\bar{f}|}{\sigma} , \ \ (1)\end{equation} where f(i) and z(i) are the observed value and z-score at time i, and $\bar f$ and σ are the mean and standard deviation of the observed values. The z-score method can identify outliers that deviate considerably from most observations. Its limitation is the assumption that the observations are normally distributed (Durre et al., 2010), when in fact pollutant concentrations are always positive and their distributions are known to be closer to lognormal (Leiva et al., 2008). To deal with this limitation and to combine detection methods through probability theory, three modifications are made to the z-score method:
(1) Instead of directly assessing the pollutant concentration observations, the residuals between the observed and the estimated concentration values are evaluated: \begin{equation} R(i)=f(i)-F(i) . \ \ (2)\end{equation} Here, F(i) is the estimated concentration at time i. Such an estimation can be conducted using either filters or regression models. Observations with large residuals are more prone to be marked as outlier candidates.
(2) The standard deviation of the residuals is computed within a sliding window and updated constantly so that the outlier detection method will be more sensitive to local outliers. The standard deviation S of the residuals is calculated by \begin{equation} S(i)=\sqrt{\frac{\sum_{k=-n}^nR(i+k)^2}{2n}} , \ \ (3)\end{equation} where i-n and i+n are the start and end of the sliding window.
Substituting the residual R and its standard deviation S for the observed values f and the standard deviation σ in Eq. (1) gives \begin{equation} Z(i)=\frac{|R(i)-\bar{R}|}{S(i)} . \ \ (4) \end{equation}
Generally, the mean of the residuals $\bar R$ should be zero, and the calculation of the probability is not affected by the sign of Z. The numerator $|R(i)-\bar R|$ can then be simplified to R(i); therefore, \begin{equation} Z(i)=\frac{R(i)}{S(i)} . \ \ (5) \end{equation}
(3) Instead of the z-score values, the probability of Z is introduced as the criterion for deciding whether or not an observation is abnormal. The Z-values in Eq. (5) are assumed to be normally distributed, and their probability is calculated by \begin{equation} P(i)=\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}Z(i)^2} . \ \ (6)\end{equation}
The normal distribution is chosen because it is a central distribution appropriate for residuals; moreover, among all distributions with a given mean and a given variance, the normal distribution maximizes the entropy and is thus the least informative. The introduction of the probability provides a framework within which multiple rules for identifying abnormal observations can be combined (see sections 2.4.3 and 2.5). Next, we provide further details on how this probabilistic automatic outlier detection (PAOD) method is used to identify different types of outliers in surface observations of air pollutants.
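The core of the method, Eqs. (2)-(6), can be sketched in a few lines of NumPy. This is a minimal illustration, not the operational implementation: the estimator F, the window half-width n, and the synthetic data below are placeholders.

```python
import numpy as np

def paod_probability(f, F, n):
    """PAOD core: residual (Eq. 2), sliding-window standard deviation
    (Eq. 3), normalized residual (Eq. 5) and its probability (Eq. 6)."""
    R = np.asarray(f, float) - np.asarray(F, float)        # Eq. (2)
    P = np.full(R.shape, np.nan)                           # window edges left as NaN
    for i in range(n, len(R) - n):
        S = np.sqrt(np.sum(R[i-n:i+n+1] ** 2) / (2 * n))   # Eq. (3)
        Z = R[i] / S                                       # Eq. (5)
        P[i] = np.exp(-0.5 * Z ** 2) / np.sqrt(2 * np.pi)  # Eq. (6)
    return P
```

Observations whose probability falls below a chosen threshold would then be flagged as outlier candidates.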
2.3. Detection of outliers with large errors
The first detection involves identifying and removing outliers with large observational errors. These outliers might increase the residuals of normal observations and decrease the residuals of outliers, making it more difficult to identify outliers with small observational errors; they should therefore be removed before the other detections. The detection and removal of outliers with large errors consists of the following two steps:
(1) Firstly, outliers exceeding the measurable range of the instrument are removed. The measurable ranges of the instruments for the six pollutants are specified by China National Standards (HJ653-2013, HJ654-2013), as listed in Table 1.
(2) In the second step, the PAOD method described in section 2.2 is applied to further identify outliers with large observational errors. Here, the estimated values are calculated by a median filter [Eq. (7)], which is less likely to be affected by the outliers: \begin{equation} F_{\rm m}(i)=\mathcal{M}(f(i+k)),\quad k\in[-n,n] , \ \ (7)\end{equation} where F m(i) is the value estimated by the median filter at time i; $\mathcal M$ is the median function; and i-n and i+n represent the start and end of the sliding window. The length of the sliding window, 2n+1, is set to one month.
The residual R m is obtained by substituting F m into Eq. (2). The standard deviation of the residual S m is calculated using the median absolute deviation (MAD), as follows: \begin{equation} S_{\rm m}(i)=1.4826\mathcal{M}(|R_{\rm m}(i+k)|),\quad k\in[-n,n] . \ \ (8)\end{equation}
Compared with the conventional method described by Eq. (3), obtaining the standard deviation by MAD is more robust to outliers (Dunn et al., 2012). The probability P m can be calculated using the regression residual R m, its standard deviation S m, and Eqs. (5) and (6). The probability threshold is set to 10^{-15} after several sensitivity tests, and data with probability P m less than the threshold value are marked as outliers and removed from the datasets.
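A compact sketch of this first screening step, combining the median filter of Eq. (7) with the MAD-based scale of Eq. (8); the function name and the synthetic series are illustrative, and a short window is used in the usage example only to keep it small (a one-month hourly window would have n ≈ 360).

```python
import numpy as np

def large_error_outliers(f, n, p_threshold=1e-15):
    """Screen for gross outliers: median-filter estimate (Eq. 7),
    MAD-based standard deviation (Eq. 8), probability via Eqs. (5)-(6)."""
    f = np.asarray(f, float)
    N = len(f)
    # Eq. (7): median filter within a sliding window (truncated at the edges)
    Fm = np.array([np.median(f[max(0, i - n):min(N, i + n + 1)]) for i in range(N)])
    Rm = f - Fm                                            # Eq. (2)
    flags = np.zeros(N, dtype=bool)
    for i in range(N):
        win = Rm[max(0, i - n):min(N, i + n + 1)]
        Sm = 1.4826 * np.median(np.abs(win))               # Eq. (8)
        if Sm == 0:
            continue                                       # flat window: skip
        Pm = np.exp(-0.5 * (Rm[i] / Sm) ** 2) / np.sqrt(2 * np.pi)  # Eqs. (5)-(6)
        flags[i] = Pm < p_threshold
    return flags
```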
2.4. Detection of ST-outliers
After removing the outliers with large observational errors, spatiotemporal outlier detection is implemented to remove the ST-outliers described in section 2.1. To better identify these outliers, both the observed data of the target site at adjacent times and the data at neighboring sites are used simultaneously. The spatial and temporal residuals are assumed to follow a bivariate normal distribution, which makes it convenient to combine the estimations from the temporal and spatial estimation models.
2.4.1 Temporal consistency estimation
The temporally consistent pollutant series is estimated using a low-pass filter: \begin{equation} F_{\rm t}(i)=\sum_{k=-15}^{15}f(i-k)h(k) , \ \ (9)\end{equation} where F t(i) is the estimated value at time i, f is the observed value at the target site, and h(k) represents the low-pass filter coefficient (listed in Table 2). The filter coefficients are calculated following the algorithm designed by Karam and McClellan (1995), and the passband and stopband frequencies are set to 1/8 and 1/24 h^{-1}, respectively. The low-pass filter tends to preserve the low-frequency signals of normal variations from atmospheric chemistry and restrain the abnormal high-frequency signals accompanying outliers. Compared with a moving average, it makes the residuals of normal observations smaller by giving higher weights to the data closer to the checkpoint.
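Since the Table 2 coefficients are not reproduced here, the sketch below substitutes a generic 31-tap windowed-sinc low-pass filter (Hamming window, cutoff 1/12 h^{-1}; both are assumptions, not the paper's Karam-McClellan design) and applies it as in Eq. (9).

```python
import numpy as np

def lowpass_coeffs(ntaps=31, cutoff=1.0 / 12.0):
    """Windowed-sinc low-pass FIR coefficients (an approximation of
    the paper's Table 2; cutoff is in cycles per hour)."""
    k = np.arange(ntaps) - (ntaps - 1) / 2
    h = np.sinc(2 * cutoff * k) * np.hamming(ntaps)
    return h / h.sum()   # unit DC gain so the filtered mean is preserved

def temporal_estimate(f, h):
    """Eq. (9): convolve the observed series with the symmetric filter."""
    return np.convolve(f, h, mode="same")
```

Because the kernel is symmetric, the 'same'-mode convolution introduces no phase shift: slow variations pass through nearly unchanged while sub-8 h fluctuations are strongly attenuated.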
2.4.2 Spatial consistency estimation
The spatially consistent pollutant series is estimated by spatial regression: \begin{equation} \label{eq7} F_{\rm s}(i)=\frac{\sum_{r}f_{r}(i)a_{r}(i)c_{r}}{\sum_{r}a_{r}(i)c_{r}} , \ \ (10)\end{equation} where F s(i) is the estimated value at checkpoint i; fr is the observed value from neighboring site r; and ar and cr are the index of agreement and localization coefficient between the target site and neighboring site r.
Following the method of Legates and McCabe (1999), the index of agreement is defined as \begin{equation} a_{r}(i)=1-\frac{\sum_{k=-n}^n|f_{r}(i+k)-f(i+k)|}{\sum_{k=-n}^n[|f(i+k)-\bar{f}_{r}|+|f_{r}(i+k)-\bar{f}_{r}|]} , \ \ (11)\end{equation} where fr is the observed value at the neighboring site, and $\bar{f}_r$ is the time average of fr within the sliding window. The length of the sliding window, 2n+1, is also set to one month.
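Eq. (11) is straightforward to compute over one window; a minimal sketch with toy series (the function name is illustrative):

```python
import numpy as np

def index_of_agreement(f, fr):
    """Eq. (11): modified index of agreement between the target
    series f and the neighboring-site series fr over one window."""
    f, fr = np.asarray(f, float), np.asarray(fr, float)
    num = np.sum(np.abs(fr - f))
    den = np.sum(np.abs(f - fr.mean()) + np.abs(fr - fr.mean()))
    return 1.0 - num / den
```

Identical series give an index of 1, and the index decreases as the two series diverge.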
The index of agreement is often employed to evaluate the simulated results of models, and has been used in quality assurance as well (Durre et al., 2010). It provides a measure of both the covariation and the absolute difference between two time series. Compared with the correlation coefficient, it is less affected by outliers, but it is still affected by sampling errors. To deal with this problem, this paper borrows the idea of localization from data assimilation and adopts the localization method introduced by Gaspari and Cohn (1999). The method reduces the influence of remote sites, and the localization coefficient is calculated as follows:
\begin{equation} c_{r}=\left\{ \begin{array}{l@{\quad}l} -\dfrac{1}{4}\left(\dfrac{|d|}{d_c}\right)^5+\dfrac{1}{2}\left(\dfrac{|d|}{d_c}\right)^4+\dfrac{5}{8}\left(\dfrac{|d|}{d_c}\right)^3 -\dfrac{5}{3}\left(\dfrac{|d|}{d_c}\right)^2+1 , & 0\leq|d|\leq d_c\\ \dfrac{1}{12}\left(\dfrac{|d|}{d_c}\right)^5-\dfrac{1}{2}\left(\dfrac{|d|}{d_c}\right)^4+\dfrac{5}{8}\left(\dfrac{|d|}{d_c}\right)^3 +\dfrac{5}{3}\left(\dfrac{|d|}{d_c}\right)^2-5\left(\dfrac{|d|}{d_c}\right)+4 -\dfrac{2}{3}\left(\dfrac{d_c}{|d|}\right), & d_c\leq|d|\leq 2d_c\\ 0, & 2d_c\leq |d| \end{array} \right., \ \ (12)\end{equation}
where d is the distance between the target and a neighboring site, and dc is the characteristic length of localization.
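Eq. (12) is the standard Gaspari-Cohn fifth-order piecewise function; a direct NumPy transcription (dc in the same distance units as d):

```python
import numpy as np

def gaspari_cohn(d, dc):
    """Eq. (12): Gaspari-Cohn fifth-order localization function;
    returns 1 at zero distance and 0 beyond 2*dc."""
    r = np.abs(np.asarray(d, float)) / dc
    c = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r <= 2.0)
    ri = r[inner]
    c[inner] = (-0.25 * ri**5 + 0.5 * ri**4 + 0.625 * ri**3
                - (5.0 / 3.0) * ri**2 + 1.0)
    ro = r[outer]
    c[outer] = ((1.0 / 12.0) * ro**5 - 0.5 * ro**4 + 0.625 * ro**3
                + (5.0 / 3.0) * ro**2 - 5.0 * ro + 4.0 - (2.0 / 3.0) / ro)
    return c
```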
2.4.3 Combining temporal and spatial consistency estimations
After obtaining the estimated values F t in Eq. (9) and F s in Eq. (10), using the observed data at adjacent times and the data at neighboring sites respectively, the corresponding residuals at the checkpoints can be calculated using Eq. (2). The residuals are then normalized to Z t and Z s using Eqs. (3) and (5). The spatial and temporal consistency are evaluated simultaneously under our PAOD framework, and the probability of Z t and Z s is computed from a bivariate normal distribution:
\begin{equation} P_{\rm st}(i)=\frac{1}{2\pi\sqrt{1-\rho(i)^2}}e^{\left(-\frac{1}{2(1-\rho(i)^2)}[Z_{\rm t}(i)^2+Z_{\rm s}(i)^2-2\rho(i)Z_{\rm t}(i)Z_{\rm s}(i)]\right)} , \ \ (13)\end{equation} where P st(i) is the joint probability at time i, and ρ is the correlation coefficient between Z t and Z s: \begin{equation} \rho(i)=\frac{\sum_{k=-n}^n[Z_{\rm t}(i+k)-\overline{Z_{\rm t}}][Z_{\rm s}(i+k)-\overline{Z_{\rm s}}]}{\sqrt{\sum_{k=-n}^n[Z_{\rm t}(i+k)-\overline{Z_{\rm t}}]^2\sum_{k=-n}^n[Z_{\rm s}(i+k)-\overline{Z_{\rm s}}]^2}} . \ \ (14)\end{equation} Here, $\overline {Z_{\rm t}}$ and $\overline {Z_{\rm s}}$ are the means of Z t and Z s, respectively, within the sliding window; i-n and i+n are the start and end of the sliding window. Observations with low probability at the checkpoint are identified as outliers.
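A sketch of the joint evaluation, assuming the normalized residuals are already available; the variable names are illustrative, and the density is the standard bivariate normal with correlation ρ.

```python
import numpy as np

def joint_probability(Zt, Zs, rho):
    """Bivariate normal density of the normalized temporal and
    spatial residuals (Eq. 13)."""
    q = (Zt**2 + Zs**2 - 2.0 * rho * Zt * Zs) / (1.0 - rho**2)
    return np.exp(-0.5 * q) / (2.0 * np.pi * np.sqrt(1.0 - rho**2))

# Eq. (14) is the ordinary Pearson correlation within the window, e.g.:
# rho = np.corrcoef(Zt_window, Zs_window)[0, 1]
```

An observation that is inconsistent in both dimensions receives a much lower joint probability than one that is inconsistent in only one.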
Figure 2 displays the outliers detected in two ways: one uses the joint probability of Z t and Z s, and the other employs only the probability of Z t (i.e., considers only the temporal consistency). When only the temporal consistency is considered, the method identifies the outliers at checkpoint Nos. 1 and 2, but misses the outlier at No. 3 and misrecognizes the normal observations at Nos. 4 and 5 as outliers. The joint probability deals with these defects properly, suggesting that using the joint probability of the residuals improves the detection accuracy. The two detection methods are applied to the raw data described in section 3. For PM2.5, PM10, SO2, NO2, CO and O3 observations, 37%-83% of the data marked with spatiotemporal inconsistency are also marked with temporal inconsistency. Outliers with moderate temporal inconsistency but strong spatial inconsistency (such as observation 3 in Fig. 2) can only be identified by the detection using spatiotemporal consistency; among the data marked with spatiotemporal inconsistency, 46% are not marked by the detection using temporal inconsistency. Normal observations with strong spatial consistency but weak temporal consistency (such as observations 4 and 5 in Fig. 2) might be misidentified by the detection using temporal inconsistency; among the data marked with temporal inconsistency, 54% are not marked by the detection using spatiotemporal consistency.
Figure 2. Comparison of outlier detections based on temporal consistency and spatiotemporal consistency. The detection based on spatiotemporal consistency can detect outliers 1, 2 and 3 while preserving valid observations 4 and 5.
2.5. Detection of LV-outliers
To deal with the LV-outliers that stay the same or change slowly, the low variance periods are first detected by checking the first and second derivatives of the observed values over time. As some low variance observations may be normal observations when the ambient air is clean or stable, the spatial consistency is used to decide whether the data in those periods should be marked as outliers. The residual of a low variance period is the average of the residuals of the spatial consistency estimation in this period: \begin{equation} R_{\rm v}=\frac{\sum_{i=b}^eR_{\rm s}(i)}{(e-b+1)} , \ \ (15)\end{equation} where R s(i) is the residual of the spatial consistency estimation, as in section 2.4.2, and b and e are the beginning and end of this period. The standard deviation of R v is calculated according to the standard error of the mean for samples that are normally distributed: \begin{equation} S_{\rm v}=\frac{\sum_{i=b}^eS_{\rm s}(i)}{(e-b+1)\sqrt{(e-b+1)}} , \ \ (16)\end{equation} where S s(i) is the standard deviation of the residuals of the spatial consistency estimation in section 2.4.2. Using Eqs. (5) and (6), the normalized residual Z v and its probability P v for the whole period can be obtained. If P v is smaller than a predefined value (10^{-6} in this study), the data within the whole period are identified as LV-outliers.
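Eqs. (15)-(16) reduce a whole low-variance period to a single probability; a sketch assuming the spatial residuals R s(i) and their standard deviations S s(i) for the period have already been computed (the function name is illustrative):

```python
import numpy as np

def lv_period_probability(Rs, Ss):
    """Average the spatial residuals over the period (Eq. 15), form the
    standard error of that mean (Eq. 16), then apply Eqs. (5)-(6)."""
    Rs, Ss = np.asarray(Rs, float), np.asarray(Ss, float)
    m = len(Rs)                        # period length, e - b + 1
    Rv = Rs.sum() / m                  # Eq. (15)
    Sv = Ss.sum() / (m * np.sqrt(m))   # Eq. (16)
    Zv = Rv / Sv                       # Eq. (5)
    return np.exp(-0.5 * Zv**2) / np.sqrt(2.0 * np.pi)  # Eq. (6)
```

A stuck instrument produces a large persistent spatial residual and hence a vanishing probability, while a genuinely calm period with small residuals stays above the 10^{-6} threshold.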
2.6. Detection of P-outliers
P-outliers are mainly caused by the daily self-calibration of the instrument and appear every 24 h. According to this characteristic, the time series of the observed data within 11 days are first processed into diurnal-variation data: \begin{equation} f_{\rm p}(i)=\frac{\sum_{k=-5}^5f(i+24k)}{11} , \ \ (17)\end{equation} where f is the hourly observed concentration and f p is the data after processing. Then, a median filter is applied: \begin{equation} F_{\rm p}(i)=\mathcal{M}(f_{\rm p}(i+k)),\quad k\in[-1,1] , \ \ (18)\end{equation} where F p(i) is the estimated value and $\mathcal M$ is the median function.
Figure 3. Periodic outliers of O3 observations detected for a site in Wuhan, China, between 22 and 30 August 2014. Some periodic outliers can be detected by spatiotemporal consistency, while others have relatively small measurement errors compared to the variation of neighboring observations and can only be identified in the detection of periodic outliers.
Instead of Eq. (3), the standard deviation S p(i) of the residuals is calculated using the following method: \begin{equation} S_{\rm p}(i)=g(R_{\rm p}(i+k)),\quad k\in[-72,72] , \ \ (19)\end{equation} where R p is the residual and g is a function that finds the 93.75th percentile. The advantage of using the 93.75th percentile of the residuals is that the obtained standard deviation is the second largest residual in one day, so that only the observation with the largest residual in a day might be identified as a P-outlier. Using Eqs. (5) and (6), the probability P p of the residual can be obtained. If P p is smaller than a predefined value (10^{-4} in our check), the corresponding observation is identified as a P-outlier. Figure 3 gives an example of the P-outliers, showing a peak concentration of ozone at 0400 LST (LST = UTC + 8) every day. It should be noted that most of the P-outliers present large differences from their neighboring observations and can be identified by the detection of ST-outliers in section 2.4. However, some P-outliers have a relatively small difference from the neighboring observations and need to be identified by the P-outlier detection.
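A condensed sketch of the P-outlier check: it folds an 11-day hourly series into a mean diurnal cycle (Eq. 17), applies the 3-point median filter (Eq. 18), and takes the 93.75th percentile of the absolute residuals over the folded cycle as the scale, which simplifies the ±72 h window of Eq. (19); the function name and the synthetic spike are illustrative.

```python
import numpy as np

def p_outlier_probability(f):
    """Probability of each hour of the mean diurnal cycle built from
    an 11-day hourly series (Eqs. 17, 18, 19, 5 and 6, simplified)."""
    f = np.asarray(f, float).reshape(11, 24)
    fp = f.mean(axis=0)                               # Eq. (17)
    fpad = np.r_[fp[-1], fp, fp[0]]                   # wrap the cycle for the 3-point window
    Fp = np.array([np.median(fpad[i:i + 3]) for i in range(24)])  # Eq. (18)
    Rp = fp - Fp                                      # Eq. (2)
    Sp = np.percentile(np.abs(Rp), 93.75)             # Eq. (19), over the cycle
    Zp = Rp / Sp                                      # Eq. (5)
    return np.exp(-0.5 * Zp**2) / np.sqrt(2.0 * np.pi)  # Eq. (6)
```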
2.7. Detection of LP-outliers
The last detection is to mark the outliers with an observed concentration of PM2.5 higher than that of PM10 at the same hour and same site. This step is very simple but very important for the observed datasets from the China Nationwide Air Quality Monitoring Network. PM2.5 and PM10 are mainly measured by the beta attenuation monitoring (BAM) method or the tapered element oscillating microbalance (TEOM) method. Both methods use heaters to reduce the humidity of the sampled air to prevent fogging and inhibit particle growth under high humidity. However, the heating process may lead to the volatilization of semi-volatile organic compounds of particulate matter. To deal with this problem, most new monitors adopt a filter dynamic measurement system to measure the volatile portion of the sampled air when the concentrations are measured by TEOM. Also, a smart heater is implemented when the concentrations are measured by BAM, to keep the relative humidity at around 35% instead of keeping the temperature at around 50°C. PM2.5 monitoring started in 2013, about a decade later than PM10 monitoring. A filter dynamic measurement system or a smart heater is mandatory for the PM2.5 measuring instruments, while they are optional for the PM10 measuring instruments; moreover, neither of them is implemented in most PM10 measuring instruments that were established before 2013 (Pan et al., 2014). As a result, the PM2.5 data are more reliable than the PM10 data, and the LP-outliers are more likely to occur at nighttime and on foggy days when the relative humidity is higher (Niu, 2017).
Before 2016, most outliers or "bad data" in the observed PM10 data were LP-outliers, and the percentage of "bad data" can be higher than 7% (Fig. 4). However, as measuring instruments and management have been upgraded, the percentage of "bad data" was reduced to about 1% in 2016, and will hopefully continue to decrease in the future.
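The LP-outlier rule itself is a one-line comparison; a sketch with NaN-aware handling (the function name is illustrative, and missing values are never flagged):

```python
import numpy as np

def mark_lp_outliers(pm10, pm25):
    """Flag PM10 observations that are lower than the simultaneous
    PM2.5 observation at the same site; NaNs are never flagged."""
    pm10, pm25 = np.asarray(pm10, float), np.asarray(pm25, float)
    valid = ~np.isnan(pm10) & ~np.isnan(pm25)
    return valid & (pm10 < pm25)
```

Note that equal concentrations are not flagged: only a strictly lower PM10 value violates the physical constraint PM2.5 ≤ PM10.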