The CNEMC started to monitor the concentrations of six air pollutants (PM2.5, PM10, SO2, NO2, CO and O3) in 2012. By March 2017, the China National Environmental Monitoring Network (CNEMN) had included 1436 monitoring sites in 369 cities. Real-time hourly observations of the six pollutants at every monitoring site are uploaded to the CNEMC and released to the public (http://www.cnemc.cn/). These real-time observations are assimilated into the chemical transport model at the CNEMC to improve air quality forecasts. They are also an important data source for the Ministry of Environmental Protection of the People's Republic of China to assess urban air quality and establish air quality control strategies. The observations are widely used in many research fields, such as data assimilation (Tang et al., 2016), model verification (Wang et al., 2016), health impact assessment (Li et al., 2017), satellite product verification (Gu et al., 2017), and analysis of the formation processes of pollution episodes (Shan et al., 2009; Zheng et al., 2015).
The quality of ambient air quality datasets is vital for interpreting results in related research fields. However, instrument malfunctions, the influence of harsh environments and the limitations of measurement methods can cause outliers in the observed datasets. For example, wear, insufficient air tightness and excess axial resistance of the pump of a monitoring instrument will interfere with the measured values (Luo, 2016). A blockage of the filter core, or a breakage or running out of filter tape, can lead to significant errors in the monitoring of particulate matter. After rainstorms, observed concentrations of particulate matter might be negative if the mass of particulate matter accumulated is less than that of the water evaporated. Malfunction of the cooling system or the photomultiplier tube will cause NO2 outliers, while unstable infrared sources will cause CO outliers (Guan, 2016).
Outlier detection involves separating inconsistent observations from regular ones, based on statistical or physical criteria that characterize the regular or inconsistent observations (Aggarwal, 2016). Outlier detection has been widely applied to a diverse range of environmental data. In meteorology, outliers can be detected for those observations that are inconsistent in space and time (You et al., 2008; Steinacker et al., 2011; Liao et al., 2014), or against the characteristics of observed variables such as surface wind, precipitation and snowfall (Golz et al., 2005; Durre et al., 2010; Jiménez et al., 2010). There are also some detailed studies about outlier detection for variables in oceanography (e.g., temperature and salinity profiles) and soil science (e.g., temperature and moisture) (Ingleby and Huddleston, 2007; Fiebrich et al., 2010; Dorigo et al., 2013).
Outliers in air quality data are difficult to detect, because pollutants show particularly rich patterns of variation in space and time on multiple scales. These variations are driven by complex processes of chemical reactions, atmospheric transport, emissions and deposition, and thus cannot be easily represented by either statistical models or chemistry transport models. Recent outlier detection studies in air quality mainly probe the inconsistency in space and time of pollutant concentration observations from clusters of stations. For instance, Kracht et al. (2014) identified outliers as daily PM10 measurements that manifest extreme values compared with the smoothed values, i.e., the weighted averages from neighboring background stations in the European air quality database AirBase, whereas Bobbia et al. (2015) compared hourly PM10 measurements with the weighted median from nearest-neighbor stations or with geostatistical spatial predictions using classical kriging techniques. Araki et al. (2017) performed leave-one-out predictions of daily mean PM2.5 and NO2 in Japan using ordinary kriging of the residuals from a land use regression model, and identified observations with large kriging errors as outliers. Čampulová et al. (2015) proposed an automatic procedure to mark out potential outliers at two urban stations measuring hourly PM10 concentrations by checking whether observations lie in a plausible interval obtained by analyzing the residuals of data smoothing.
The main challenge for outlier detection in air quality is that the true spatiotemporal variations of pollutants can only be estimated to serve as detection criteria separating outliers from signals, and all such estimations have limitations. For instance, in kriging estimation, a more detailed diurnal covariance structure can be introduced (Wu et al., 2010), but covariance modelling in high dimensional space is a very difficult issue (Bickel and Levina, 2008). In practice, manual inspection (Fiebrich et al., 2010) is still an effective choice for identifying outliers: a majority (80%) of outliers were identified by manual inspection at four sites in Shanghai, China, from 2014 to 2016 (Guan, 2016). In this paper, outliers are classified into four types based on their characteristics. Then, an automatic outlier detection framework is proposed, based on the probability of the residuals between the observations and their estimates, that discriminates the known characteristics of the different types of outliers. Our hope is that this probabilistic outlier detection method will help to routinely identify outliers in pollutant observations from the CNEMC in an automatic manner, such that cumbersome real-time manual inspections can be avoided. Better datasets could then be constructed to support various research aims related to the severe air pollution problem that is currently a national concern in China.
2.1 Outlier classification
Complex sources of errors in observational datasets make it difficult to identify the cause of errors from the observed data itself (Sciuto et al., 2013). Therefore, outliers are classified into four types based on their characteristics:
(1) Spatially and temporally inconsistent outliers (ST-outliers). As with meteorological parameters (Steinacker et al., 2011), the scale of pollution phenomena exceeds that resolved by the observational network, and one can expect the measured concentrations to be smooth in both time and space. Observations that differ greatly from values observed at adjacent times or in neighboring areas are defined as ST-outliers.
(2) Low variance outliers (LV-outliers). This type of outlier has a very low variance in time series compared to neighboring sites. Some LV-outliers do not change over time and can be observed when the pump of the instruments is stuck or the filter tape is depleted. Most outliers that change very slowly come from CO observations measured by the pressure difference between two chambers. These outliers can be observed when the aging of the light sources in the two chambers is not synchronized (Luo, 2016).
(3) Periodic outliers (P-outliers). This type of outlier usually appears every 24 h, as shown in Fig. 1e. For some instruments, the accuracy may decrease with time due to the ageing of light sources and the changing of the ambient environment. Regular calibration is required for such instruments. However, the calibration processes may interfere with the observations, thus inserting abnormal values into the online measurement datasets.

(4) Lower PM10 than PM2.5 outliers (LP-outliers). This type of outlier involves PM2.5 concentrations being higher than PM10 concentrations observed at the same hour and same site. Most PM2.5 monitors in the CNEMN were installed around a decade after the PM10 monitors, use more advanced instruments, and handle the loss of semi-volatile components of particulate matter better (see section 2.7 for details). Therefore, the PM2.5 data are more reliable than the PM10 data, and PM10 data are marked as LP-outliers if the observed concentration of PM2.5 is higher than that of PM10.
Figure 1 shows examples of classified outliers. It is important to note that the four types of outliers are not exclusive. Some P-outliers are also ST-outliers, and some LP-outliers are also LV-outliers.
2.2. Probabilistic automatic outlier detection
The most common technique for identifying outliers in meteorological data is the z-score method (Lanzante, 1996; Feng et al., 2004; Durre et al., 2010). This method normalizes the observed values using their mean and standard deviation, then removes those values whose z-score exceeds a specified threshold. The z-score is calculated as follows: \begin{equation} z(i)=\frac{|f(i)-\bar{f}|}{\sigma} , \ \ (1)\end{equation} where f(i) and z(i) are the observed value and z-score at time i, and $\bar f$ and σ are the mean and standard deviation of the observed values. The z-score method can identify outliers that deviate considerably from most observations. Its limitation is the normal distribution assumed for the observations (Durre et al., 2010), when in fact pollutant concentrations are always positive and their distributions are known to be closer to lognormal (Leiva et al., 2008). To deal with this limitation and to combine detection methods through probability theory, three modifications are made to the z-score method:
(1) Instead of directly assessing the pollutant concentration observations, the residuals between the observed and the estimated concentration values are evaluated: \begin{equation} R(i)=f(i)-F(i) . \ \ (2)\end{equation} Here, F(i) is the estimated concentration at time i. Such an estimation can be conducted using either filters or regression models. Observations with large residuals are more prone to be marked as outlier candidates.
(2) The standard deviation of the residuals is computed within a sliding window and updated constantly so that the outlier detection method will be more sensitive to local outliers. The standard deviation S of the residuals is calculated by \begin{equation} S(i)=\sqrt{\frac{\sum_{k=-n}^nR(i+k)^2}{2n}} , \ \ (3)\end{equation} where i-n and i+n are the start and end of the sliding window.
Substituting the observed values f and the standard deviation σ in Eq. (1) with the residual R and its standard deviation S, it becomes: \begin{equation} Z(i)=\frac{|R(i)-\bar{R}|}{S(i)} . \ \ (4)\end{equation}
Generally, the mean of the residuals $\bar R$ should be zero, and the calculation of the probability is not affected by the sign of Z. The numerator $|R(i)-\bar R|$ can then be simplified to R(i); therefore, \begin{equation} Z(i)=\frac{R(i)}{S(i)} . \ \ (5)\end{equation}
(3) Instead of the z-score values, the probability of Z is introduced as a criterion to determine whether or not an observation is abnormal. The Z values in Eq. (5) are assumed to be normally distributed, and their probability is calculated by \begin{equation} P(i)=\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}Z(i)^2} . \ \ (6)\end{equation}
The normal distribution is chosen because it is a central distribution appropriate for residuals; plus, among all distributions with a given mean and a given variance, the normal distribution maximizes the entropy and is thus least informative. The introduction of the probability provides a framework based on which multiple rules for identifying abnormal observations can be combined (see sections 2.4.3 and 2.5). Next, we provide further details on how to use this probabilistic automatic outlier detection (PAOD) method to identify different types of outliers in surface observations of air pollutants.
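As a minimal sketch of this PAOD core, Eqs. (2), (3), (5) and (6) can be chained in a few lines of Python with NumPy (our own illustration; the truncation of the sliding window at the series edges is an assumption not specified in the text):

```python
import numpy as np

def paod_probability(f, F, n):
    """Probability of the normalized residuals, following Eqs. (2), (3), (5), (6).

    f : observed series; F : estimated series; n : half-width of the sliding window.
    """
    R = np.asarray(f, float) - np.asarray(F, float)          # Eq. (2): residuals
    S = np.empty_like(R)
    for i in range(len(R)):
        lo, hi = max(0, i - n), min(len(R), i + n + 1)       # window truncated at the edges
        S[i] = np.sqrt(np.sum(R[lo:hi] ** 2) / (2 * n))      # Eq. (3): local std of residuals
    Z = R / S                                                # Eq. (5): normalized residual
    return np.exp(-0.5 * Z ** 2) / np.sqrt(2.0 * np.pi)      # Eq. (6): normal probability

# A single spike among small residuals receives by far the lowest probability.
P = paod_probability([1, -1, 1, -1, 10, 1, -1, 1, -1, 1], np.zeros(10), n=4)
```

Observations whose probability falls below a threshold are then marked as outlier candidates; the thresholds used for the individual detections are given in the following subsections.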
2.3. Detection of outliers with large errors
The first detection involves identifying and removing outliers with large observational errors. These outliers might increase the residuals of normal observations and decrease the residuals of other outliers, making it more difficult to identify outliers with small observational errors; they should therefore be removed before the other detections. The detection and removal of outliers with large errors consists of the following two steps:
(1) First, observations exceeding the measurable range of the instrument are removed. The measurable ranges of the instruments for the six pollutants are specified by China National Standards (HJ653-2013, HJ654-2013), as listed in Table 1.
(2) In the second step, the PAOD method described in section 2.2 is applied to further identify outliers with large observational errors. Here, the estimated values are calculated by a median filter [Eq. (7)], which is less likely to be affected by the outliers: \begin{equation} F_{\rm m}(i)=\mathcal{M}(f(i+k)),\quad k\in[-n,n] , \ \ (7)\end{equation} where F m is the value estimated by the median filter at time i; $\mathcal M$ is the median function; and i-n and i+n represent the start and end of the sliding window. The length of the sliding window, 2n+1, is set to one month.
The residual R m is obtained by substituting F m into Eq. (2). The standard deviation of the residual S m is calculated using the median absolute deviation (MAD), as follows: \begin{equation} S_{\rm m}(i)=1.4826\mathcal{M}(|R_{\rm m}(i+k)|),\quad k\in[-n,n] \ \ (8)\end{equation}
Compared with the conventional method described by Eq. (3), obtaining the standard deviation by MAD is more robust to outliers (Dunn et al., 2012). The probability P m can be calculated from the residual R m and its standard deviation S m using Eqs. (5) and (6). The probability threshold is set to 10^-15 after several sensitivity tests, and data with probability P m less than the threshold are marked as outliers and removed from the datasets.
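This large-error check can be sketched as follows (an illustrative Python sketch; the truncated edge windows and the small floor on S m that guards against division by zero are our own simplifications):

```python
import numpy as np

def detect_large_errors(f, n, p_threshold=1e-15):
    """Flag outliers with large errors: median-filter estimate (Eq. 7),
    MAD-based standard deviation (Eq. 8), normal probability (Eq. 6)."""
    f = np.asarray(f, float)
    N = len(f)
    Fm = np.array([np.median(f[max(0, i - n):i + n + 1]) for i in range(N)])   # Eq. (7)
    Rm = f - Fm                                                                # Eq. (2)
    Sm = np.array([1.4826 * np.median(np.abs(Rm[max(0, i - n):i + n + 1]))
                   for i in range(N)])                                         # Eq. (8)
    Z = Rm / np.maximum(Sm, 1e-9)            # floor avoids division by zero
    P = np.exp(-0.5 * Z ** 2) / np.sqrt(2.0 * np.pi)
    return P < p_threshold                   # True marks an outlier

# A gross spike is flagged, while ordinary fluctuations survive.
mask = detect_large_errors([10, 12, 11, 13, 10, 12, 1000, 11, 13, 10, 12, 11], n=3)
```

Because both the estimate and the spread come from medians, a single extreme value cannot inflate its own detection threshold, which is the property the text relies on.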
2.4. Detection of ST-outliers
After removing the outliers with large observational errors, spatiotemporal outlier detection is implemented to remove the ST-outliers described in section 2.1. To better identify these outliers, the observed data of the target site at adjacent times and the data at neighboring sites are used simultaneously. The spatial and temporal residuals are assumed to follow a bivariate normal distribution, which makes it convenient to combine the estimations from the temporal and spatial estimation models.
2.4.1 Temporal consistency estimation
The temporally consistent estimate of the pollutant series is obtained using a low-pass filter: \begin{equation} F_{\rm t}(i)=\sum_{k=-15}^{15}f(i-k)h(k) , \ \ (9)\end{equation} where F t(i) is the estimated value at time i, f is the observed value at the target site, and h(k) represents the low-pass filter coefficients (listed in Table 2). The filter coefficients are calculated following the algorithm designed by Karam and McClellan (1995), with the passband and stopband frequencies set to 1/8 and 1/24 h^-1, respectively. The low-pass filter tends to preserve the low-frequency signals of normal variations from atmospheric chemistry and suppress the abnormal high-frequency signals that accompany outliers. Compared with a moving average, it makes the residuals of normal observations smaller by giving higher weights to the data closer to the checkpoint.
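The structure of Eq. (9) can be sketched as a symmetric 31-tap convolution. The Table 2 coefficients from Karam and McClellan (1995) are not reproduced here, so a Hamming-windowed sinc with an assumed cutoff between the two band edges stands in for them:

```python
import numpy as np

def lowpass_coeffs(half_width=15, cutoff=1.0 / 12.0):
    """Illustrative stand-in for the Table 2 coefficients: a Hamming-windowed
    sinc with the same symmetric 31-tap structure (cutoff is our assumption)."""
    k = np.arange(-half_width, half_width + 1)
    h = 2.0 * cutoff * np.sinc(2.0 * cutoff * k)   # ideal low-pass impulse response
    h *= np.hamming(len(k))                        # taper to suppress side lobes
    return h / h.sum()                             # unit gain at zero frequency

def lowpass_estimate(f, h):
    """Eq. (9): F_t(i) = sum_k f(i-k) h(k), a symmetric convolution."""
    return np.convolve(f, h, mode="same")

h = lowpass_coeffs()
t = np.arange(240.0)
smooth = lowpass_estimate(np.ones(240), h)             # a constant passes unchanged
fast = lowpass_estimate(np.cos(2 * np.pi * t / 4), h)  # a 4-h oscillation is suppressed
```

Any symmetric coefficient set with unit DC gain behaves the same way qualitatively: slow chemical variations pass through, while spiky high-frequency departures are left in the residual.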
2.4.2 Spatial consistency estimation
The spatially consistent estimate of the pollutant series is obtained by spatial regression: \begin{equation} \label{eq7} F_{\rm s}(i)=\frac{\sum_{r}f_{r}(i)a_{r}(i)c_{r}}{\sum_{r}a_{r}(i)c_{r}} , \ \ (10)\end{equation} where F s(i) is the estimated value at checkpoint i; f r is the observed value from neighboring site r; and a r and c r are the index of agreement and localization coefficient between the target site and neighboring site r.
Following the method of Legates and McCabe (1999), the index of agreement is defined as \begin{equation} a_{r}(i)=1-\frac{\sum_{k=-n}^n|f_{r}(i+k)-f(i+k)|}{\sum_{k=-n}^n[|f(i+k)-\bar{f}_{r}|+|f_{r}(i+k)-\bar{f}_{r}|]} , \ \ (11)\end{equation} where f r is the observed value at the neighboring site, and $\bar{f}_{r}$ is the time average of f r within the sliding window. The length of the sliding window, 2n+1, is also set to one month.
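For a single window, Eq. (11) can be sketched as follows (an illustrative Python sketch; as in the text, both deviation terms in the denominator are taken about the neighboring-site mean):

```python
import numpy as np

def index_of_agreement(f, fr):
    """Modified index of agreement of Eq. (11) over one sliding window,
    after Legates and McCabe (1999)."""
    f, fr = np.asarray(f, float), np.asarray(fr, float)
    num = np.sum(np.abs(fr - f))                               # absolute differences
    den = np.sum(np.abs(f - fr.mean()) + np.abs(fr - fr.mean()))
    return 1.0 - num / den

a_close = index_of_agreement([1, 2, 3, 4, 5], [1.1, 2.1, 2.9, 4.2, 4.8])
a_far = index_of_agreement([1, 2, 3, 4, 5], [5, 1, 4, 2, 3])
```

Identical series score exactly 1, and the score decays as the two series diverge, which is what makes it a usable weight in Eq. (10).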
The index of agreement is often employed to evaluate the simulated results of models, and has been used in quality assurance as well (Durre et al., 2010). It provides a measure of both the covariation and the absolute difference between two time series. Compared with the correlation coefficient, it is less affected by outliers, but still affected by sampling errors. To deal with this problem, this paper borrows the idea of localization from data assimilation and adopts the localization method introduced by Gaspari and Cohn (1999). The method reduces the influence of remote sites, and the localization coefficient is calculated as follows:
\begin{equation} c_{r}=\left\{ \begin{array}{l@{\quad}l} -\dfrac{1}{4}\left(\dfrac{|d|}{d_c}\right)^5+\dfrac{1}{2}\left(\dfrac{|d|}{d_c}\right)^4+\dfrac{5}{8}\left(\dfrac{|d|}{d_c}\right)^3 -\dfrac{5}{3}\left(\dfrac{|d|}{d_c}\right)^2+1 , & 0\leq|d|\leq d_c\\ \dfrac{1}{12}\left(\dfrac{|d|}{d_c}\right)^5-\dfrac{1}{2}\left(\dfrac{|d|}{d_c}\right)^4+\dfrac{5}{8}\left(\dfrac{|d|}{d_c}\right)^3 +\dfrac{5}{3}\left(\dfrac{|d|}{d_c}\right)^2-5\left(\dfrac{|d|}{d_c}\right)+4 -\dfrac{2}{3}\left(\dfrac{d_c}{|d|}\right), & d_c\leq|d|\leq 2d_c\\ 0, & 2d_c\leq |d| \end{array} \right., \ \ (12)\end{equation}
where d is the distance between the target and a neighboring site, and dc is the characteristic length of localization.
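The piecewise fifth-order function of Eq. (12) translates directly into code (an illustrative Python sketch):

```python
import numpy as np

def gaspari_cohn(d, dc):
    """Fifth-order piecewise localization function of Eq. (12)
    (Gaspari and Cohn, 1999); d may be a scalar or an array of distances."""
    x = np.abs(np.atleast_1d(np.asarray(d, float))) / dc
    c = np.zeros_like(x)                       # zero beyond 2 * dc
    inner = x <= 1.0
    outer = (x > 1.0) & (x < 2.0)
    xi, xo = x[inner], x[outer]
    c[inner] = -0.25 * xi**5 + 0.5 * xi**4 + 0.625 * xi**3 - (5.0 / 3.0) * xi**2 + 1.0
    c[outer] = (xo**5 / 12.0 - 0.5 * xo**4 + 0.625 * xo**3 + (5.0 / 3.0) * xo**2
                - 5.0 * xo + 4.0 - 2.0 / (3.0 * xo))
    return c

# Weight decays smoothly from 1 at d = 0 to exactly 0 at d = 2 * dc.
w = gaspari_cohn(np.array([0.0, 100.0, 150.0, 200.0, 300.0]), dc=100.0)
```

The two branches meet at |d| = dc with the common value 5/24, and the weight reaches exactly zero at |d| = 2dc, so distant sites contribute nothing to Eq. (10).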
2.4.3 Combining temporal and spatial consistency estimations
After obtaining the estimated values F t in Eq. (9) and F s in Eq. (10), using the observed data at adjacent times and the data at neighboring sites respectively, the corresponding residuals at the checkpoints can be calculated using Eq. (2). The residuals are then normalized to Z t and Z s using Eqs. (3) and (5). The spatial and temporal consistency are evaluated simultaneously under the PAOD framework, and the joint probability of Z t and Z s is computed from a bivariate normal distribution:
\begin{equation} P_{\rm st}(i)=\frac{1}{2\pi\sqrt{1-\rho(i)^2}}e^{\left(-\frac{1}{2(1-\rho(i)^2)}[Z_{\rm t}(i)^2+Z_{\rm s}(i)^2-2\rho(i)Z_{\rm t}(i)Z_{\rm s}(i)]\right)} , \ \ (13)\end{equation} where P st(i) is the joint probability at time i, and ρ is the correlation coefficient between Z t and Z s: \begin{equation} \rho(i)=\frac{\sum_{k=-n}^n[Z_{\rm t}(i+k)-\overline{Z_{\rm t}}][Z_{\rm s}(i+k)-\overline{Z_{\rm s}}]}{\sqrt{\sum_{k=-n}^n[Z_{\rm t}(i+k)-\overline{Z_{\rm t}}]^2\sum_{k=-n}^n[Z_{\rm s}(i+k)-\overline{Z_{\rm s}}]^2}} . \ \ (14)\end{equation} Here, $\overline{Z_{\rm t}}$ and $\overline{Z_{\rm s}}$ are the means of Z t and Z s, respectively, within the sliding window; i-n and i+n are the start and end of the sliding window. Observations with low probability at the checkpoint are identified as outliers.
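The joint probability can be sketched as follows (an illustrative Python sketch; the cross term carries the factor ρ of the standard bivariate normal density):

```python
import numpy as np

def joint_probability(Zt, Zs, rho):
    """Bivariate normal density of the normalized residuals, after Eq. (13)."""
    q = (Zt**2 - 2.0 * rho * Zt * Zs + Zs**2) / (1.0 - rho**2)
    return np.exp(-0.5 * q) / (2.0 * np.pi * np.sqrt(1.0 - rho**2))

p_ok = joint_probability(0.5, 0.4, rho=0.3)    # consistent in both time and space
p_bad = joint_probability(4.0, 5.0, rho=0.3)   # inconsistent in both: near-zero probability
```

An observation needs a large normalized residual in time, in space, or in both before the joint probability drops below the threshold, which is why combining the two estimates reduces both misses and false alarms.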
Figure 2 displays the outliers detected in two ways: one uses the joint probability of Z t and Z s, and the other only employs the probability of Z t (i.e., only the temporal consistency is considered). When only the temporal consistency is considered, the method identifies the outliers at checkpoint Nos. 1 and 2, but misses the outlier at No. 3 and misrecognizes the normal observations at Nos. 4 and 5 as outliers. The joint probability handles all of these cases properly, suggesting that using the joint probability of the residuals improves the accuracy of the detection. The two detection methods are applied to the raw data described in section 3. For the PM2.5, PM10, SO2, NO2, CO and O3 observations, 37%-83% of the data marked with spatiotemporal inconsistency are also marked with temporal inconsistency. Outliers with moderate temporal inconsistency but strong spatial inconsistency (such as observation No. 3 in Fig. 2) can only be identified by the spatiotemporal detection; among the data marked with spatiotemporal inconsistency, 46% are not marked by the temporal detection. Normal observations with strong spatial consistency but weak temporal consistency (such as observations Nos. 4 and 5 in Fig. 2) might be misidentified by the temporal detection; among the data marked with temporal inconsistency, 54% are not marked by the spatiotemporal detection.

2.5. Detection of LV-outliers
To deal with the LV-outliers that stay the same or change slowly, the low variance periods are first detected by checking the first and second derivatives of the observed values over time. As some low variance observations can be normal when the ambient air is clean or stable, the spatial consistency is also used to decide whether the data in those periods should be marked as outliers. The residual of a low variance period is the average of the residuals of the spatial consistency estimation over that period: \begin{equation} R_{\rm v}=\frac{\sum_{i=b}^eR_{\rm s}(i)}{(e-b+1)} , \ \ (15)\end{equation} where R s(i) is the residual of the spatial consistency estimation, as in section 2.4.2, and b and e are the beginning and end of the period. The standard deviation of R v is calculated according to the standard error of the mean for normally distributed samples: \begin{equation} S_{\rm v}=\frac{\sum_{i=b}^eS_{\rm s}(i)}{(e-b+1)\sqrt{(e-b+1)}} , \ \ (16)\end{equation} where S s(i) is the standard deviation of the residuals of the spatial consistency estimation in section 2.4.2. Using Eqs. (5) and (6), the normalized residual Z v and its probability P v for the whole period can be obtained. If P v is smaller than a predefined value (10^-6 in this study), the data within the whole period are identified as LV-outliers.
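Given the spatial-consistency residuals and their standard deviations for a detected low variance period, Eqs. (15) and (16) reduce to a short sketch (illustrative Python; the period boundaries b and e are assumed to be already known from the derivative check):

```python
import numpy as np

def lv_period_probability(Rs, Ss):
    """Probability of a low variance period, Eqs. (15), (16), (5) and (6).

    Rs, Ss : spatial-consistency residuals and their standard deviations
    for the hours b..e of the detected period."""
    Rs, Ss = np.asarray(Rs, float), np.asarray(Ss, float)
    m = len(Rs)                          # period length, e - b + 1
    Rv = Rs.mean()                       # Eq. (15)
    Sv = Ss.mean() / np.sqrt(m)          # Eq. (16): standard error of the mean
    Z = Rv / Sv                          # Eq. (5)
    return np.exp(-0.5 * Z**2) / np.sqrt(2.0 * np.pi)   # Eq. (6)

# A stuck series sitting far below its neighbors yields a vanishing probability,
# while a flat series that agrees with its neighbors does not.
p_stuck = lv_period_probability(np.full(24, -30.0), np.full(24, 5.0))
p_clean = lv_period_probability(np.zeros(24), np.full(24, 5.0))
```

The division by the square root of the period length makes the test sharper for long stuck periods, so a small but persistent disagreement with neighboring sites is still caught.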
2.6. Detection of P-outliers
P-outliers are mainly caused by the daily self-calibration of the instrument and appear every 24 h. According to this characteristic, the time series of the observed data within 11 days is first processed into diurnal-variation data: \begin{equation} f_{\rm p}(i)=\frac{\sum_{k=-5}^5f(i+24k)}{11} , \ \ (17)\end{equation} where f is the hourly observed concentration and f p is the data after processing. Then, a median filter is applied: \begin{equation} F_{\rm p}(i)=\mathcal{M}(f_{\rm p}(i+k)),\quad k\in[-1,1] , \ \ (18)\end{equation} where F p(i) is the estimated value and $\mathcal M$ is the median function.
Instead of Eq. (3), the standard deviation S p(i) of the residuals is calculated as follows: \begin{equation} S_{\rm p}(i)=g(R_{\rm p}(i+k)),\quad k\in[-72,72] , \ \ (19)\end{equation} where R p is the residual and g is a function that finds the 93.75th percentile. The advantage of using the 93.75th percentile of the residuals is that the obtained standard deviation is approximately the second largest residual in one day, so only the observation with the largest residual in a day is likely to be identified as a P-outlier. Using Eqs. (5) and (6), the probability P p of the residual can be obtained. If P p is smaller than a predefined value (10^-4 in our check), the corresponding observation is identified as a P-outlier. Figure 3 gives an example of P-outliers, showing a peak concentration of ozone at 0400 LST (LST = UTC + 8) every day. It should be noted that most P-outliers differ greatly from their neighboring observations and can be identified by the detection of ST-outliers in section 2.4. However, some P-outliers differ only slightly from their neighboring observations and need to be identified by the P-outlier detection.
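The chain of Eqs. (17)-(19) can be sketched for one 11-day window of hourly data (an illustrative Python sketch; collapsing the ±72 h window of Eq. (19) to the 24 diurnal residuals is our own simplification):

```python
import numpy as np

def p_outlier_hours(f, p_threshold=1e-4):
    """Sketch for one 11-day window (264 hourly values): average the 11 days
    into a diurnal series (Eq. 17), median-filter it (Eq. 18), and normalize
    the residuals by their 93.75th percentile (simplified Eq. 19).
    Returns the flagged hours of the day."""
    fp = np.asarray(f, float).reshape(11, 24).mean(axis=0)                  # Eq. (17)
    Fp = np.array([np.median(fp[max(0, h - 1):h + 2]) for h in range(24)])  # Eq. (18)
    Rp = fp - Fp
    Sp = np.percentile(np.abs(Rp), 93.75)                                   # Eq. (19), simplified
    Z = Rp / max(Sp, 1e-9)
    P = np.exp(-0.5 * Z**2) / np.sqrt(2.0 * np.pi)                          # Eqs. (5)-(6)
    return np.flatnonzero(P < p_threshold)

# A calibration spike at 0400 every day stands out from the averaged diurnal cycle.
hours = np.arange(24)
day = 50.0 + 10.0 * np.sin(2.0 * np.pi * hours / 24.0)
series = np.tile(day, (11, 1))
series[:, 4] += 40.0
flagged = p_outlier_hours(series.ravel())
```

Averaging over 11 days is what makes this detection sensitive: random spikes cancel out in the diurnal mean, while a calibration artifact that recurs at the same hour every day survives the averaging and dominates the residuals.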
2.7. Detection of LP-outliers
The last detection marks the outliers with an observed concentration of PM2.5 higher than that of PM10 at the same hour and same site. This step is very simple but very important for the observed datasets from the CNEMN. PM2.5 and PM10 are mainly measured by the beta attenuation monitoring (BAM) method or the tapered element oscillating microbalance (TEOM) method. Both methods use heaters to reduce the humidity of the sampled air to prevent fogging and inhibit particle growth under high humidity. However, the heating process may lead to the volatilization of semi-volatile organic compounds of particulate matter. To deal with this problem, most new monitors adopt a filter dynamic measurement system to measure the volatile portion of the sampled air when the concentrations are measured by TEOM, and implement a smart heater when the concentrations are measured by BAM, keeping the relative humidity at around 35% instead of keeping the temperature at around 50°C. PM2.5 monitoring started in 2013, about a decade later than PM10 monitoring. A filter dynamic measurement system or a smart heater is mandatory for PM2.5 measuring instruments, whereas they are optional for PM10 measuring instruments; moreover, neither is implemented on most PM10 measuring instruments established before 2013 (Pan et al., 2014). As a result, the PM2.5 data are more reliable than the PM10 data, and LP-outliers are more likely to occur at nighttime and on foggy days when the relative humidity is higher (Niu, 2017).
Before 2016, most outliers or "bad data" in the observed PM10 data were LP-outliers, and the percentage of "bad data" could be higher than 7% (Fig. 4). However, as measuring instruments and management have been upgraded, the percentage of "bad data" had been reduced to about 1% by 2016, and will hopefully continue to decrease in the future.
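The check itself is a one-line comparison (illustrative Python; treating pairs with a missing value as non-flagged is our own assumption):

```python
import numpy as np

def lp_outlier_mask(pm25, pm10):
    """Mark the PM10 value as an LP-outlier whenever the simultaneous
    PM2.5 concentration at the same site is higher (section 2.7)."""
    pm25 = np.asarray(pm25, float)
    pm10 = np.asarray(pm10, float)
    valid = ~np.isnan(pm25) & ~np.isnan(pm10)   # compare only complete pairs
    return valid & (pm10 < pm25)                # True marks the PM10 value

mask = lp_outlier_mask([30.0, 80.0, np.nan, 20.0], [55.0, 60.0, 40.0, 20.0])
```

Note that equality is not flagged: only a strictly lower PM10 reading violates the physical constraint that PM2.5 is a subset of PM10.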

Figure 4 shows the removal ratios of the six pollutants during the three years. The removal ratio is the fraction of raw data labeled as outliers. During 2014-15, the PM10 observations have a high removal ratio (more than 7%), and most of the removed data are LP-outliers. However, in 2016, the removal ratio decreases sharply to about 1%. This might be related to the implementation of a compensation algorithm for the loss of semi-volatile materials in the PM10 measurements. Interestingly, the removal ratios of the PM2.5, SO2, NO2, CO and O3 observations also decrease. This indicates that the data quality of the CNEMN improved from 2014 to 2016.


To evaluate the impact of the outlier detections on the observed concentrations, Fig. 5a compares the observed annual PM2.5 concentrations before and after quality assurance in Shijiazhuang City during 2014-16. The results show a large difference of more than 150 μg m^-3 in the estimated annual mean concentration in 2014, which is mainly caused by some outliers that exceed the measurement range. Figure 5b presents the diurnal variations of the observed O3 concentrations before and after quality assurance at a site in Wuhan in 2015. Due to the P-outliers that occurred at 0400 LST, the raw data display a false peak at that time. The quality assurance reduces this false peak, and the data after quality assurance show more reasonable daily variations of O3 concentrations.
Figure 6 displays the differences in annual concentrations caused by outliers in 2015. For PM2.5, the differences are lower than 1 μg m^-3 at most sites. However, there are 66 sites and 17 sites whose differences are greater than 10 μg m^-3 and 50 μg m^-3, respectively. For PM10, the differences at most sites are within 1-10 μg m^-3, while there are 92 sites and 23 sites with differences greater than 10 μg m^-3 and 50 μg m^-3, respectively. For CO, large differences of more than 1 mg m^-3 can be observed at 80 sites. For O3 and NO2, the differences are relatively small, and only a few stations (<20) have differences of more than 10 μg m^-3. For SO2, there are 38 sites with differences of more than 10 μg m^-3. The above results suggest that outliers might lead to significant biases in the estimation of annual mean concentrations of these six pollutants. Identifying and removing outliers is a necessary step before using the online dataset.
The outliers detected account for 0.65% to 5.68% of the observations for the six pollutants. PM10 observations have the most outliers, among which LP-outliers contribute the most (see section 2.7 for why the PM2.5 data are more reliable than the PM10 data). The proportions of outliers in the six pollutants all decrease from 2014 to 2016, which suggests an improvement in the data quality of the CNEMN. The impact of outliers is estimated by the difference in annual mean PM2.5 concentrations between the raw data and the data after outlier detection. The differences are less than 1 μg m^-3 at most sites, but there are 66 sites whose differences are greater than 10 μg m^-3. This suggests that outlier detection is essential before using the monitoring datasets, even for evaluation of annual mean concentrations.
The outlier detection method was developed with the help of the CNEMC, which is also the institute responsible for releasing the real-time air quality data. The method has been used in the assimilation system of the CNEMC, and is going to be integrated into the data management system. Hopefully, outliers in the real-time air quality data will be removed by our method in the near future.
Although the application of the PAOD method has brought some positive and interesting results, improvements can still be made for better performance in further in-depth studies. First, the outlier detection is performed separately for each species; however, the different pollutants are closely linked to one another by atmospheric chemistry. Further developments of the method could take information from multiple pollutants into consideration at the same time to account for chemical transformations. Second, the method does not take into account the specific locations of the stations in the urban environment. Stations in the proximity of heavily trafficked highways will show much stronger variations in observations than those in urban green spaces. Developing different parameter sets for different types of stations might improve the performance. Third, the outlier detection method assumes that outliers account for a small proportion of the raw data. However, at some stations, most of the raw data on a weekly basis can be bad data, providing little or no information on real concentrations. Examination of the distributions within the raw data might help in such cases. Finally, this method uses the normal distribution to compute the probability of the residuals. When more information is available, other distribution types should be tested to better adapt to the CNEMN dataset.