
Jun Jin, Zhongxun Zhao, "Composite Quantile Regression Neural Network for Massive Datasets", Mathematical Problems in Engineering, vol. 2021, Article ID 6682793, 10 pages, 2021. https://doi.org/10.1155/2021/6682793

Composite Quantile Regression Neural Network for Massive Datasets

Academic Editor: Mukhtaj Khan
Received: 16 Nov 2020
Revised: 19 Mar 2021
Accepted: 20 Apr 2021
Published: 04 May 2021

Abstract

Traditional statistical methods and machine learning on massive datasets are challenging owing to limitations of computer primary memory. The composite quantile regression neural network (CQRNN) is an efficient and robust estimation method, but most existing computational algorithms cannot solve CQRNN for massive datasets reliably and efficiently. To this end, we propose a divide-and-conquer CQRNN (DC-CQRNN) method to extend CQRNN to massive datasets. The major idea is to divide the overall dataset into some subsets, apply CQRNN to the data within each subset, and obtain the final result by combining these training results via a weighted average. Our approach significantly reduces both the demand for primary memory and the computational time. Monte Carlo simulation studies and an application to an environmental dataset with millions of observations verify and illustrate that the proposed approach performs well for CQRNN on massive datasets. The proposed DC-CQRNN method has been implemented in Python on a Spark system, where it takes 8 minutes to complete the model training, whereas a full-dataset CQRNN takes 5.27 hours to get a result.

1. Introduction

With the development of information technology, the mobile Internet, social networks, and e-commerce have greatly expanded the boundaries and applications of the Internet, and terabyte-scale datasets are becoming common. For example, the National Aeronautics and Space Administration Earth Observing System Terra and Aqua satellites monitor the Earth's atmosphere, oceans, and land, producing approximately 1.5 TB of environmental data per day. According to Intel's forecast, in 2020 a networked self-driving car would generate 4 TB of data every 8 hours of operation. Massive datasets offer researchers both unprecedented challenges and opportunities. The key challenge is that directly applying machine learning and statistical methods to these massive datasets with conventional computing is prohibitive. First, the calculation time is too long to get results quickly. Second, the data can be too big for the computer's primary memory. In order to overcome these challenges, researchers have proposed the divide-and-conquer method [1–3], which can be an effective way to analyze massive datasets.

In this paper, we consider a divide-and-conquer method for massive datasets. Fan et al. [4] analyzed least squares regression of the linear model on a massive dataset using a divide-and-conquer method. Lin and Xi [1] considered a divide-and-conquer method for estimating equations in massive datasets. Chen and Xie [2] analyzed generalized linear models for extraordinarily large data using the divide-and-conquer method. Zhang et al. [5] proposed a divide-and-conquer kernel ridge regression. Schifano et al. [3] extended the divide-and-conquer approach to online updating for streaming data. A block average quantile regression (QR) approach for massive datasets was proposed in [6] by combining the divide-and-conquer method with QR. Jiang et al. [7] extended the work of [6] to composite quantile regression (CQR) for massive datasets. Recently, Chen et al. [8] studied QR under memory constraints for massive datasets, and Chen and Zhou [9] considered a divide-and-conquer approach for quantile regression in big data.

As is well known, QR is more robust than ordinary least squares (OLS) regression when the error distribution is heavily skewed. However, the relative efficiency of QR can be arbitrarily small compared with OLS. To this end, CQR, proposed in [10], is consistently effective regardless of the error distribution and can be much more efficient than OLS. Since then, CQR has been extensively studied for nonlinear models. Reference [11] utilized local polynomials and CQR to estimate nonparametric models and proposed local polynomial CQR. Kai et al. [12] studied CQR for the varying-coefficient partially linear model. Guo et al. [13] proposed CQR for the partially linear additive model. Jiang et al. [14] studied two-step CQR for the single-index model.

Artificial intelligence (AI) does not require any a priori assumptions about the model when dealing with nonlinear problems, which is a significant advantage compared with statistical models such as the nonparametric model and the varying-coefficient partially linear model. There has been much research on combining the nice properties of AI with QR or CQR. For instance, Taylor and Cannon [15, 16] proposed the quantile regression neural network (QRNN) by combining the artificial neural network (ANN) with QR. A support vector quantile regression (SVQR) method was proposed in [17] by combining the support vector machine (SVM) with QR. A composite quantile regression neural network (CQRNN) method was studied in [18], which adds the ANN structure to CQR. However, when the amount of data is large, directly computing CQR and ANN with conventional methods is very slow, and the computation of CQRNN is even slower. Thus, computation is a bottleneck for applying CQRNN to massive datasets.

In this paper, our focus is on CQRNN for massive datasets whose size exceeds the limitations of a single computer's primary memory. Fortunately, we are not limited to a single computer. To this end, we consider CQRNN for massive datasets by the divide-and-conquer method on a distributed system. A distributed system is composed of multiple computers (called nodes) that can run independently, and each node uses wire protocols (such as RPC and HTTP) to transfer information so that the nodes achieve a common goal or task. In a distributed system, communication between different nodes is usually very expensive. We consider a "master and worker" type of distributed system, in which workers do not communicate directly with each other, as shown in Figure 1. The concrete steps are as follows. Step 1: randomize the initial parameters. Step 2: distribute the initial parameters to each worker. Step 3: each worker trains a subset of the data and sends the training results to the master. Step 4: the master takes the weighted average of the training results of the workers as the approximate global training result. Step 5: when there are more data to be processed, return to Step 2.

Traditional statistical methods and machine learning on massive datasets are challenging owing to limitations of computer primary memory. CQRNN is an efficient and robust estimation method, but most existing computational algorithms cannot solve CQRNN for massive datasets reliably and efficiently. In this paper, we propose a DC-CQRNN method to extend CQRNN to massive datasets. The major idea is to divide the overall dataset into some subsets, apply CQRNN to the data within each subset, and obtain the final result by combining these training results via a weighted average. The proposed DC-CQRNN method can significantly reduce the computational time and the required amount of primary memory, while the training results remain as effective as analyzing the full data at once. For illustration, we use Monte Carlo simulation to compare the performance of the DC-CQRNN method with CQRNN [18], QRNN [15, 16], the artificial neural network (ANN) [19], SVM [20], and random forest (RF) [21]. In addition, an application to an environmental dataset with millions of observations verifies and illustrates that our proposed approach performs well for CQRNN on massive datasets.

The remainder of this paper is organized as follows. In Section 2, we present DC-CQRNN for massive datasets in detail. In Section 3, we use Monte Carlo simulation to illustrate the finite-sample performance of the proposed DC-CQRNN method. A detailed presentation of the environmental dataset analysis is given in Section 4. Section 5 concludes the paper.

2. Methodology

2.1. Our Motivation

In recent years, China's GDP has grown rapidly, which has intensified the contradiction between national economic development and the environment. At the same time, smog pollution has occurred in parts of North China, Northeast China, and Central China, with serious consequences for the country. Therefore, it is extremely important to report air quality to the public accurately and to take smog-prevention measures in advance. At present, a large number of air quality monitoring stations have been established in many places in China, such as Beijing, Chengde, Tangshan, and Tianjin. However, for areas without air monitoring stations, accurately predicting air quality and reporting it to the public in a timely manner remains a problem.

To this end, we collected environmental datasets from air quality monitoring stations in different places, with a total of 1,018,562 observations. When using the CQRNN method to deal with these environmental datasets, we found that the data can be too big for a general computer's primary memory and that the computational time is too long to get results quickly; the CQRNN method is basically ineffective for massive data. At the same time, SVM, ANN, RF, and similar methods also take a long time to process massive datasets. In this section, we propose a divide-and-conquer CQRNN (DC-CQRNN) method to extend CQRNN to massive datasets. The same idea can be applied to SVM, ANN, and RF.

2.2. Composite Quantile Regression Neural Network

In the real world, there is usually a nonlinear relationship between the response $Y$ and the predictor $X$, which can be described by the stochastic model

$$Y = f(X, \theta) + \varepsilon, \qquad (1)$$

where $\varepsilon$ is the model error and $\theta$ is a vector of unknown parameters. There are many regression techniques for estimating the unknown parameters of the model, such as OLS, QR, CQR, and their derived methods. Xu et al. [18] proposed CQRNN by combining the nice properties of ANN and CQR. Given predictor $X$ and response $Y$, the CQRNN objective aims to minimize the empirical loss function

$$E(\theta) = \frac{1}{NQ}\sum_{q=1}^{Q}\sum_{i=1}^{N}\rho_{\tau_q}\big(y_i - \hat{f}_{\tau_q}(x_i)\big), \qquad (2)$$

where $\rho_{\tau}(u) = u\big(\tau - I(u < 0)\big)$ is the check loss, $\tau_q = q/(Q+1)$ for $q = 1, \dots, Q$ (see [10]), and $\hat{f}_{\tau_q}(x_i)$ is the conditional $\tau_q$-quantile of $Y$ at $x_i$, which can be estimated by the following two steps. Firstly, the outcome $g_j(x_i)$ of the $j$th hidden-layer node is calculated by applying an activation function to the inner product between $x_i$ and the hidden-layer weights plus the hidden-layer bias:

$$g_j(x_i) = h\Big(\sum_{k=1}^{p} x_{ik}\, w^{(h)}_{kj} + b^{(h)}_j\Big), \qquad j = 1, \dots, J, \qquad (3)$$

where $h(\cdot)$ denotes an activation function, generally the hyperbolic tangent, $w^{(h)}$ is the weight matrix of the hidden layer, and $b^{(h)}$ is the bias vector of the hidden layer. Secondly, an estimate of the response is consequently given by

$$\hat{f}_{\tau_q}(x_i) = h_o\Big(\sum_{j=1}^{J} g_j(x_i)\, w^{(o)}_j + b^{(o)}_q\Big), \qquad (4)$$

where $h_o(\cdot)$ is the output-layer activation function, $w^{(o)}$ is the output-layer weight vector, and $b^{(o)}_q$ is the output-layer bias for the $q$th quantile level (the hidden-layer parameters are shared across the $Q$ quantile levels, in the spirit of CQR). Let $\theta$ contain all coefficients, including weights and biases, to be trained. The main purpose of CQRNN is to estimate $\theta$ using

$$\hat{\theta} = \arg\min_{\theta} \frac{1}{NQ}\sum_{q=1}^{Q}\sum_{i=1}^{N}\rho_{\tau_q}\big(y_i - \hat{f}_{\tau_q}(x_i)\big). \qquad (5)$$
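As a concrete illustration of equations (2)–(5), the following is a minimal NumPy sketch of the forward pass and the composite check loss. The function and variable names are ours, not from [18], and a real implementation would minimize this loss with a gradient-based optimizer rather than just evaluate it.

```python
import numpy as np

def check_loss(u, tau):
    """Quantile check loss rho_tau(u) = u * (tau - I(u < 0))."""
    return u * (tau - (u < 0))

def cqrnn_loss(X, y, W_h, b_h, w_o, b_o, Q=9):
    """Composite quantile loss of a one-hidden-layer network:
    shared tanh hidden layer, linear output with one bias per quantile."""
    G = np.tanh(X @ W_h + b_h)            # hidden outputs g_j(x_i), eq. (3)
    taus = np.arange(1, Q + 1) / (Q + 1)  # tau_q = q / (Q + 1)
    losses = [check_loss(y - (G @ w_o + b_o[q]), t).mean()
              for q, t in enumerate(taus)]
    return np.mean(losses)                # eq. (2)

# toy data: p = 3 predictors, J = 5 hidden nodes, Q = 9 quantile levels
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
W_h, b_h = rng.normal(size=(3, 5)), np.zeros(5)
w_o, b_o = rng.normal(size=5), np.zeros(9)
loss = cqrnn_loss(X, y, W_h, b_h, w_o, b_o)
```

Training then amounts to driving `loss` down over all weights and biases jointly, for instance with L-BFGS on a smoothed version of the check loss (see Remark 1).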

Remark 1. (1) According to the suggestion of [16], we use the Huber norm to approximate the check loss, which overcomes the problem that $\rho_{\tau}$ is not differentiable at the origin. (2) To avoid overfitting the model, we add a penalty term to equation (5) and obtain

$$\hat{\theta} = \arg\min_{\theta} \frac{1}{NQ}\sum_{q=1}^{Q}\sum_{i=1}^{N}\rho_{\tau_q}\big(y_i - \hat{f}_{\tau_q}(x_i)\big) + \lambda \|\theta\|^2, \qquad (6)$$

where $\|\cdot\|$ is the $L_2$-norm and $\lambda$ is a regularization parameter. According to the suggestion of [18], we choose $\lambda$ and the number of hidden nodes $J$ through an EBIC-like criterion, denoted (7), which adds to the logarithm of the fitted composite quantile loss a model-complexity penalty proportional to $\ln N$ times $\nu$, where $\nu$ is the number of selected variables corresponding to $\lambda$ and $J$.
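The smoothing in Remark 1 can be sketched as follows, assuming the common Huber-type approximation used by [16]; the threshold `eps` and penalty weight `lam` are illustrative values, not the paper's.

```python
import numpy as np

def huber(u, eps=1e-3):
    """Huber norm: quadratic on [-eps, eps], linear outside."""
    au = np.abs(u)
    return np.where(au <= eps, u**2 / (2 * eps), au - eps / 2)

def smoothed_check_loss(u, tau, eps=1e-3):
    """Differentiable approximation to the check loss rho_tau."""
    return np.where(u >= 0, tau, 1 - tau) * huber(u, eps)

def penalized_objective(resids_per_tau, taus, theta, lam=0.05):
    """Composite smoothed check loss plus an L2 penalty, as in eq. (6)."""
    data = np.mean([smoothed_check_loss(r, t).mean()
                    for r, t in zip(resids_per_tau, taus)])
    return data + lam * np.sum(theta ** 2)

# away from the origin the approximation is close to the exact check loss
u = np.array([1.0, -1.0])
exact = np.array([0.7, 0.3])      # rho_0.7(1) = 0.7, rho_0.7(-1) = 0.3
approx = smoothed_check_loss(u, 0.7)
```

The quadratic segment near zero is what makes the objective differentiable everywhere, so standard gradient-based optimizers apply.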

2.3. Divide-and-Conquer Composite Quantile Regression Neural Network

When the sample size $N$ is too big, directly solving the optimization problem in (5) with conventional computing methods is infeasible. Based on the ideas of [1, 2], our method divides the overall dataset into $K$ subsets, each small enough to fit in the computer's primary memory. Then, we implement CQRNN on the data within each subset. Finally, we obtain the overall result by combining the subset training results via a weighted average.

In detail, the proposed DC-CQRNN estimator can be obtained by the following concrete steps:

Step 1: randomize the initial parameters and divide the full dataset of size $N$ into $K$ subsets, so that the $k$th subset contains $n_k$ observations, with $\sum_{k=1}^{K} n_k = N$.
Step 2: distribute the initial parameters to each worker.
Step 3: each worker trains one subset of the full dataset, obtains the estimator $\hat{\theta}_k$ ($k = 1, \dots, K$) using the methodology for solving equation (5), and sends $\hat{\theta}_k$ to the master.
Step 4: the master takes the weighted average of $\hat{\theta}_1, \dots, \hat{\theta}_K$ to obtain the resulting estimator of $\theta$:

$$\hat{\theta} = \sum_{k=1}^{K} \frac{n_k}{N}\, \hat{\theta}_k. \qquad (8)$$

Step 5: when there are more data to be processed, return to Step 2. The detailed process is shown in Figure 2.
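The combination in Step 4 can be sketched as below. The per-worker training is simulated with noisy copies of a parameter vector; note that averaging network weights is only meaningful because Step 2 gives every worker the same initialization. Names are illustrative, not the authors' code.

```python
import numpy as np

def dc_combine(subset_estimates, subset_sizes):
    """Step 4: weighted average of per-worker estimates with weights n_k / N."""
    n = np.asarray(subset_sizes, dtype=float)
    weights = n / n.sum()                      # n_k / N
    return sum(w * np.asarray(est) for w, est in zip(weights, subset_estimates))

# simulate K = 10 workers, each returning a noisy estimate of theta
rng = np.random.default_rng(1)
theta_true = np.array([1.0, -2.0, 0.5])
K = 10
estimates = [theta_true + rng.normal(scale=0.05, size=3) for _ in range(K)]
theta_hat = dc_combine(estimates, [5000] * K)  # equal subset sizes n_k
```

With equal subset sizes the combination reduces to a plain mean; unequal $n_k$ simply tilt the weights toward the larger subsets.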

Remark 2. Obviously, if the number of subsets $K$ is large, each subset's data size will be very small, so the correlation among the values of the dataset is destroyed. To this end, we impose some restrictions on $K$. Regularity conditions are as follows: (a) the sample size of the $k$th subset is $n_k$, with $\sum_{k=1}^{K} n_k = N$; (b) $K = O(N^{\gamma})$ for some $\gamma \in [0, 1)$, so that each subset size $n_k \to \infty$ as $N \to \infty$.

3. Numerical Simulations

In this section, we exploit Monte Carlo simulation to compare the finite-sample performance of the DC-CQRNN method with CQRNN [18], QRNN [15, 16], ANN [19], SVM [20], and RF [21]. The performances of the CQRNN method with different numbers of quantile levels Q are very similar in the simulation of [22]. Thus, we only consider Q = 4 and Q = 9 as a compromise between the estimation and computational efficiency of the CQRNN method.

3.1. Simulation Data

In order to investigate the performance of the DC-CQRNN method with different structures, we choose various values of the parameters K and Q, namely, K = 1, 10, 50 and Q = 4, 9. Simultaneously, to illustrate the effectiveness and robustness of our method, we consider three different error distributions: the standard normal distribution N(0, 1), a Student's t distribution with three degrees of freedom t(3), and a chi-square distribution with three degrees of freedom χ²(3), together with two cases for the data-generating process.

Case 1 (i.i.d.): the data are generated from a nonlinear model in which the error and the covariates are generated independently.

Case 2 (non-i.i.d.): the data are generated from a model whose error structure depends on the covariates, which guarantees that the error is non-i.i.d.

We generated samples under Case 1 and Case 2 with the three error distributions, respectively, and randomly assigned 50,000 samples as the training dataset (in sample) and the remaining samples as the testing dataset (out of sample). To test the performance of the competing models, all simulations are run for 100 replicates.

3.2. Prediction Performance

We use the mean absolute error (MAE) and root mean square error (RMSE) to compare the prediction performance of DC-CQRNN, CQRNN, DC-QRNN, QRNN, ANN, RF, and SVM both in sample and out of sample, where

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\big|y_i - \hat{y}_i\big|, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\big(y_i - \hat{y}_i\big)^2},$$

and $\hat{y}_i$ is an estimator of $y_i$. We employ the criterion (7) to select $\lambda$ and $J$, and the results are shown in Table 1. To reduce the computational load, we consider all the combinations of K = 1, 10, 50 and Q = 4, 9. The prediction performance is listed in Tables 2 and 3.
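The two criteria can be computed directly; a small NumPy helper (our naming, not from the paper):

```python
import numpy as np

def mae(y, y_hat):
    """Mean absolute error: (1/n) * sum |y_i - y_hat_i|."""
    return float(np.mean(np.abs(y - y_hat)))

def rmse(y, y_hat):
    """Root mean squared error: sqrt((1/n) * sum (y_i - y_hat_i)^2)."""
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

y = np.array([1.0, 2.0, 3.0])
y_hat = np.array([1.5, 2.0, 2.0])
print(mae(y, y_hat))             # 0.5
print(round(rmse(y, y_hat), 4))  # 0.6455
```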


Table 1: Parameters selected by criterion (7) for each case and error distribution (J = number of hidden nodes; λ = regularization parameter).

| Case | Errors | J | CQRNN λ | QRNN λ | ANN λ |
|---|---|---|---|---|---|
| 1 | N(0, 1) | 5 | 0.25 | 0.25 | 0.25 |
| 1 | t(3) | 5 | 0.15 | 0.15 | 0.15 |
| 1 | χ²(3) | 5 | 0.25 | 0.25 | 0.25 |
| 2 | N(0, 1) | 5 | 0.055 | 0.055 | 0.055 |
| 2 | t(3) | 5 | 0.15 | 0.15 | 0.15 |
| 2 | χ²(3) | 5 | 0.055 | 0.055 | 0.055 |


Table 2: Prediction performance for Case 1 (standard deviations over 100 replicates in parentheses). CQRNN(Q) denotes the full-data CQRNN with Q quantile levels; DC(Q, K) denotes DC-CQRNN with Q quantile levels and K subsets; DC-QRNN(K) denotes DC-QRNN with K subsets.

| Samples | Errors | Index | CQRNN(9) | DC(9, 10) | DC(9, 50) | CQRNN(4) | DC(4, 10) | DC(4, 50) | QRNN | DC-QRNN(10) | DC-QRNN(50) | ANN | RF | SVM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| In sample | N(0, 1) | MAE | 0.2085 (0.0007) | 0.2117 (0.0010) | 0.2158 (0.0019) | 0.3185 (0.0012) | 0.3283 (0.0019) | 0.3284 (0.0023) | 0.4182 (0.0037) | 0.4164 (0.0051) | 0.4183 (0.0091) | 0.3137 (0.0159) | 0.3038 (0.0017) | 0.3568 (0.0015) |
| | | RMSE | 0.3489 (0.0021) | 0.3530 (0.0021) | 0.3572 (0.0011) | 0.4489 (0.0020) | 0.4509 (0.0027) | 0.4598 (0.0027) | 0.4999 (0.0019) | 0.5012 (0.0072) | 0.5052 (0.0086) | 0.4512 (0.0115) | 0.4473 (0.0018) | 0.4607 (0.15) |
| | t(3) | MAE | 0.5583 (0.0036) | 0.5629 (0.0088) | 0.5755 (0.0013) | 0.6583 (0.0039) | 0.6632 (0.0042) | 0.6750 (0.0066) | 0.7581 (0.0068) | 0.7586 (0.0077) | 0.7512 (0.0078) | 0.7585 (0.0042) | 0.7291 (0.0020) | 0.7676 (0.0033) |
| | | RMSE | 0.8607 (0.0029) | 0.8642 (0.0031) | 0.8796 (0.0050) | 0.9607 (0.0024) | 0.9643 (0.0037) | 0.9737 (0.0052) | 1.1107 (0.0029) | 1.1110 (0.0057) | 1.1079 (0.0062) | 1.0614 (0.0299) | 1.0437 (0.0137) | 1.1807 (0.0311) |
| | χ²(3) | MAE | 0.7222 (0.0023) | 0.7361 (0.0017) | 0.7392 (0.0023) | 0.8926 (0.0024) | 0.8909 (0.0025) | 0.8915 (0.0045) | 1.0192 (0.0022) | 1.0082 (0.0076) | 1.0095 (0.0085) | 1.0263 (0.0052) | 1.0066 (0.0012) | 1.0269 (0.0031) |
| | | RMSE | 0.8634 (0.0046) | 0.8726 (0.0040) | 0.8975 (0.0052) | 1.0263 (0.0046) | 1.0266 (0.0059) | 1.0278 (0.0072) | 1.2637 (0.0043) | 1.2511 (0.0079) | 1.2113 (0.0098) | 1.2234 (0.0051) | 1.1866 (0.0013) | 1.2924 (0.0047) |
| Out of sample | N(0, 1) | MAE | 0.2087 (0.0003) | 0.2114 (0.0026) | 0.2123 (0.0027) | 0.3283 (0.0009) | 0.3283 (0.0019) | 0.3285 (0.0028) | 0.4185 (0.0013) | 0.4198 (0.0048) | 0.4242 (0.0091) | 0.3145 (0.0167) | 0.4651 (0.0019) | 0.3564 (0.0020) |
| | | RMSE | 0.3491 (0.0007) | 0.3526 (0.0027) | 0.3571 (0.0027) | 0.4498 (0.0007) | 0.4529 (0.0022) | 0.4613 (0.0021) | 0.5090 (0.0066) | 0.5133 (0.0075) | 0.5101 (0.0087) | 0.4522 (0.0130) | 0.5498 (0.0041) | 0.4587 (0.0041) |
| | t(3) | MAE | 0.5576 (0.0031) | 0.5633 (0.0041) | 0.5803 (0.0058) | 0.6596 (0.0033) | 0.6635 (0.0037) | 0.6764 (0.0052) | 0.7676 (0.0033) | 0.7649 (0.0070) | 0.7622 (0.0087) | 0.7582 (0.0047) | 0.8224 (0.0025) | 0.7682 (0.0027) |
| | | RMSE | 0.8647 (0.0025) | 0.8685 (0.0044) | 0.8715 (0.0065) | 0.9646 (0.0046) | 0.9687 (0.0073) | 0.9880 (0.0095) | 1.1147 (0.0059) | 1.1129 (0.0075) | 1.1198 (0.0095) | 1.1545 (0.0102) | 1.2852 (0.0148) | 1.2003 (0.0105) |
| | χ²(3) | MAE | 0.7336 (0.0010) | 0.7461 (0.0013) | 0.7531 (0.0022) | 0.8940 (0.0029) | 0.8945 (0.0038) | 0.9032 (0.0049) | 1.0230 (0.0032) | 1.0089 (0.0071) | 1.0120 (0.0077) | 1.0276 (0.0045) | 1.1924 (0.0040) | 1.0786 (0.0031) |
| | | RMSE | 0.8502 (0.0065) | 0.8552 (0.0040) | 0.8611 (0.0061) | 1.0707 (0.0036) | 1.0728 (0.0078) | 1.0857 (0.0091) | 1.2205 (0.0043) | 1.2149 (0.0083) | 1.2225 (0.0101) | 1.2289 (0.0067) | 1.4926 (0.0067) | 1.3986 (0.0069) |


Table 3: Prediction performance for Case 2 (standard deviations over 100 replicates in parentheses). Column labels as in Table 2.

| Samples | Errors | Index | CQRNN(9) | DC(9, 10) | DC(9, 50) | CQRNN(4) | DC(4, 10) | DC(4, 50) | QRNN | DC-QRNN(10) | DC-QRNN(50) | ANN | RF | SVM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| In sample | N(0, 1) | MAE | 0.0300 (0.0002) | 0.0302 (0.0003) | 0.0304 (0.0001) | 0.0401 (0.0002) | 0.0403 (0.0002) | 0.0407 (0.0004) | 0.0601 (0.0002) | 0.0605 (0.0002) | 0.0607 (0.0005) | 0.0603 (0.0002) | 0.0377 (0.0001) | 0.0598 (0.0002) |
| | | RMSE | 0.0452 (0.0003) | 0.0554 (0.0004) | 0.0557 (0.0001) | 0.0553 (0.0003) | 0.0554 (0.0003) | 0.0559 (0.0005) | 0.0752 (0.0003) | 0.0754 (0.0003) | 0.0760 (0.0006) | 0.0753 (0.0003) | 0.0445 (0.0001) | 0.0750 (0.0003) |
| | t(3) | MAE | 0.1703 (0.0011) | 0.1707 (0.0012) | 0.1766 (0.0028) | 0.2203 (0.0011) | 0.2208 (0.0013) | 0.2251 (0.0013) | 0.3203 (0.0011) | 0.3221 (0.0016) | 0.3263 (0.0027) | 0.3204 (0.0014) | 0.2044 (0.0011) | 0.3201 (0.0014) |
| | | RMSE | 0.3327 (0.0018) | 0.3329 (0.0019) | 0.3615 (0.0019) | 0.3828 (0.0184) | 0.3831 (0.0184) | 0.3862 (0.0180) | 0.4828 (0.0185) | 0.4809 (0.0148) | 0.4825 (0.0134) | 0.4833 (0.0183) | 0.2710 (0.0079) | 0.4832 (0.0183) |
| | χ²(3) | MAE | 0.3058 (0.0018) | 0.3061 (0.0017) | 0.3112 (0.0008) | 0.3558 (0.0018) | 0.3562 (0.0017) | 0.3627 (0.0042) | 0.4558 (0.0018) | 0.4564 (0.0019) | 0.4637 (0.0038) | 0.4697 (0.0020) | 0.2622 (0.0010) | 0.4558 (0.0016) |
| | | RMSE | 0.5053 (0.0033) | 0.5026 (0.0040) | 0.5194 (0.0006) | 0.5553 (0.0032) | 0.5526 (0.0040) | 0.5614 (0.0054) | 0.6554 (0.0032) | 0.6576 (0.0047) | 0.6631 (0.0067) | 0.6379 (0.0030) | 0.3400 (0.0016) | 0.6554 (0.0030) |
| Out of sample | N(0, 1) | MAE | 0.0301 (0.0003) | 0.0303 (0.0002) | 0.0302 (0.0001) | 0.0401 (0.0003) | 0.0403 (0.0002) | 0.0407 (0.0004) | 0.0601 (0.0002) | 0.0603 (0.0002) | 0.0607 (0.0004) | 0.0602 (0.0003) | 0.0682 (0.0004) | 0.0598 (0.0004) |
| | | RMSE | 0.0453 (0.0003) | 0.0455 (0.0003) | 0.0455 (0.0001) | 0.0553 (0.0003) | 0.0555 (0.0002) | 0.0560 (0.0004) | 0.0753 (0.0003) | 0.0754 (0.0003) | 0.0760 (0.0005) | 0.0753 (0.0004) | 0.0865 (0.0004) | 0.0750 (0.0004) |
| | t(3) | MAE | 0.1704 (0.0009) | 0.1708 (0.0009) | 0.1777 (0.0027) | 0.2204 (0.0009) | 0.2209 (0.0007) | 0.2250 (0.0009) | 0.3204 (0.0009) | 0.3222 (0.0016) | 0.3263 (0.0026) | 0.3205 (0.0010) | 0.3821 (0.0016) | 0.3202 (0.0009) |
| | | RMSE | 0.3244 (0.0094) | 0.3246 (0.0094) | 0.3281 (0.0020) | 0.3744 (0.0094) | 0.3748 (0.0093) | 0.3780 (0.0087) | 0.4744 (0.0094) | 0.4833 (0.0227) | 0.4818 (0.0106) | 0.4739 (0.0089) | 0.5526 (0.0107) | 0.4738 (0.0088) |
| | χ²(3) | MAE | 0.3058 (0.0015) | 0.3061 (0.0013) | 0.3114 (0.0006) | 0.3558 (0.0015) | 0.3561 (0.0013) | 0.3628 (0.0049) | 0.4559 (0.0015) | 0.4565 (0.0020) | 0.4639 (0.0047) | 0.4697 (0.0024) | 0.5408 (0.0030) | 0.4553 (0.0019) |
| | | RMSE | 0.5080 (0.0032) | 0.5053 (0.0040) | 0.5169 (0.0007) | 0.5580 (0.0032) | 0.5553 (0.0040) | 0.5641 (0.0036) | 0.6581 (0.0031) | 0.6581 (0.0052) | 0.6635 (0.0072) | 0.6397 (0.0038) | 0.7514 (0.0040) | 0.6569 (0.0039) |

In Tables 2 and 3, DC-CQRNN(K = 10) and DC-CQRNN(K = 50) denote DC-CQRNN with K = 10 and K = 50 subsets, respectively, and CQRNN(Q = 4) and CQRNN(Q = 9) denote the full-data CQRNN with Q = 4 and Q = 9 quantile levels. For QRNN and DC-QRNN, we only report the results at τ = 0.5.

From Tables 2 and 3, we can see the following: (1) in all cases, CQRNN has higher prediction accuracy than QRNN, ANN, RF, and SVM, especially when Q = 9; (2) the results of DC-CQRNN and CQRNN are very close, indicating that the DC-CQRNN method can serve as an effective approximation of the full-data CQRNN on massive datasets.

3.3. Running Time Performance

To assess the computational efficiency of the DC-CQRNN method, we record the running time of all methods, implemented in the Python programming language and carried out on an Intel(R) Core(TM) i9-8950HK CPU (2.90 GHz) with 16 GB RAM. For a fair comparison, we report the average CPU time over 100 repetitions of each method. The average CPU time of all methods is presented in Table 4.


Table 4: Average CPU time of each method over 100 replicates. Column labels as in Table 2.

| Method | Case 1: N(0, 1) | Case 1: t(3) | Case 1: χ²(3) | Case 2: N(0, 1) | Case 2: t(3) | Case 2: χ²(3) |
|---|---|---|---|---|---|---|
| CQRNN(9) | 96.40 | 83.20 | 97.60 | 85.61 | 88.62 | 98.83 |
| DC(9, 10) | 26.12 | 26.16 | 29.72 | 21.58 | 31.24 | 33.61 |
| DC(9, 50) | 8.77 | 8.49 | 9.41 | 6.66 | 9.22 | 9.98 |
| CQRNN(4) | 70.40 | 54.00 | 67.40 | 56.21 | 63.42 | 72.23 |
| DC(4, 10) | 11.60 | 10.32 | 13.06 | 10.22 | 11.16 | 13.12 |
| DC(4, 50) | 6.72 | 5.41 | 6.41 | 6.44 | 7.31 | 8.47 |
| QRNN | 51.20 | 41.80 | 55.00 | 44.42 | 46.23 | 53.14 |
| DC-QRNN(10) | 6.50 | 4.50 | 5.40 | 8.01 | 11.98 | 10.57 |
| DC-QRNN(50) | 3.30 | 2.80 | 3.80 | 5.34 | 5.14 | 6.01 |
| ANN | 36.40 | 31.60 | 37.80 | 40.82 | 32.41 | 34.62 |
| RF | 222.00 | 241.20 | 247.80 | 150.42 | 165.87 | 174.44 |
| SVM | 129.40 | 114.23 | 113.42 | 133.21 | 109.17 | 108.83 |

It can be seen from Table 4 that the CPU time of CQRNN is longer than that of QRNN. At the same time, DC-CQRNN runs faster than both QRNN and CQRNN, and the CPU time of DC-CQRNN decreases as K increases. The main advantage of DC-CQRNN is that each worker in the master-worker distributed system is independent, so the CQRNN on each worker can be executed in parallel. As a result, the computing time of DC-CQRNN on massive datasets is greatly reduced compared to CQRNN.

In addition, to compare the computational efficiency of the DC-CQRNN method in sequential and parallel distributed environments, Table 5 records the computing time of DC-CQRNN and DC-QRNN under a sequential distributed environment (SDE) and a parallel distributed environment (PDE). It can be seen from Table 5 that PDE has an obvious computational advantage over SDE.


Table 5: CPU time of DC-CQRNN and DC-QRNN under the parallel (PDE) and sequential (SDE) distributed environments. Column labels as in Table 4.

| Method | Env. | Case 1: N(0, 1) | Case 1: t(3) | Case 1: χ²(3) | Case 2: N(0, 1) | Case 2: t(3) | Case 2: χ²(3) |
|---|---|---|---|---|---|---|---|
| DC(9, 10) | PDE | 26.12 | 26.16 | 29.72 | 21.58 | 31.24 | 33.61 |
| | SDE | 251.42 | 244.58 | 285.12 | 196.51 | 296.12 | 305.2 |
| DC(9, 50) | PDE | 8.77 | 8.49 | 9.41 | 6.66 | 9.22 | 9.98 |
| | SDE | 388.45 | 382.51 | 402.22 | 312.92 | 396.61 | 427.32 |
| DC(4, 10) | PDE | 11.60 | 10.32 | 13.06 | 10.22 | 11.16 | 13.12 |
| | SDE | 102.25 | 97.46 | 116.42 | 96.55 | 99.17 | 119.23 |
| DC(4, 50) | PDE | 6.72 | 5.41 | 6.41 | 6.44 | 7.31 | 8.47 |
| | SDE | 314.25 | 252.40 | 298.42 | 304.23 | 339.58 | 402.15 |
| DC-QRNN(10) | PDE | 6.50 | 4.50 | 5.40 | 8.01 | 11.98 | 10.57 |
| | SDE | 58.26 | 38.75 | 47.56 | 72.58 | 102.2 | 92.68 |
| DC-QRNN(50) | PDE | 3.30 | 2.80 | 3.80 | 5.34 | 5.14 | 6.01 |
| | SDE | 148.25 | 115.45 | 167.38 | 241.61 | 232.48 | 272.15 |

4. Real-World Data Applications

4.1. Environmental Dataset

The research areas of this paper comprise Baoding, Beijing, Chengde, Shijiazhuang, Tangshan, Tianjin, Xingtai, and Zhangjiakou in China from January 1, 2015, to July 1, 2019, with a total of 1,018,561 observations. Our study collected hourly historical PM2.5 monitoring data from the website of the Ministry of Environmental Protection of the People's Republic of China (http://datacenter.mep.gov.cn) and meteorological data from the website of the National Oceanic and Atmospheric Administration (https://www.noaa.gov/). Each sample includes the PM2.5 concentration, temperature, pressure, humidity, wind direction, wind speed, and visibility measured at the air quality monitoring stations. Details of the dataset are shown in Table 6.


Table 6: Details of the environmental dataset.

| Indicators | Unit | Source | Period |
|---|---|---|---|
| PM2.5 | | The Ministry of Environmental Protection of the People's Republic of China | 2015.1–2019.7 |
| Temperature | °C | National Oceanic and Atmospheric Administration | 2015.1–2019.7 |
| Pressure | hPa | | |
| Wind speed | m/s | | |
| Dew-point temperature | °C | | |
| Relative humidity | | | |
| Visibility | m | | |

4.2. Normalization

Data normalization is an important step for many machine-learning estimators, particularly when dealing with neural networks. A dataset with a wide range of feature values is likely to cause instability when training models. We standardize each feature by subtracting its mean and scaling by its standard deviation:

$$z_i = \frac{x_i - \bar{x}}{s},$$

where $x_i$, $\bar{x}$, and $s$ are the sample value, sample mean, and sample standard deviation, respectively.
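The standardization step can be sketched per feature (column) as follows; this mirrors the z-score formula above and is our illustration, not the authors' code.

```python
import numpy as np

def standardize(X):
    """Column-wise z-score: (x - mean) / std for each feature."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma

# two toy features on very different scales
X = np.array([[10.0, 1.0], [20.0, 3.0], [30.0, 5.0], [40.0, 7.0]])
Z = standardize(X)  # each column now has mean 0 and standard deviation 1
```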

4.3. Empirical Results

The aim of our experiments is to examine the effectiveness of the proposed DC-CQRNN for the spatial prediction of PM2.5 concentration. We also consider ANN, QRNN, CQRNN, RF, and SVM models. The PM2.5 concentration monitoring dataset is divided into a training set and a testing set according to a ratio of 7 : 3. To obtain results in the most objective way, we repeated the training and testing experiments 100 times with randomly chosen compositions of the training and testing data. The final training and testing results are the averages over all trials. For the parameter settings of the neural-network models, we use the EBIC approach, and Table 7 presents the results of parameter selection; for the machine-learning models, we use cross-validation [23] for their parameter settings.
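The repeated random 7 : 3 split can be sketched as follows (a hypothetical helper, not the authors' code):

```python
import numpy as np

def train_test_split_73(n, rng):
    """Randomly split indices 0..n-1 into 70% training and 30% testing."""
    idx = rng.permutation(n)
    cut = int(0.7 * n)
    return idx[:cut], idx[cut:]

rng = np.random.default_rng(0)
train_idx, test_idx = train_test_split_73(1000, rng)
```

Repeating this with fresh permutations and averaging the resulting scores gives the averaged results reported below.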


Table 7: Results of parameter selection for the network.

| Layer | Layer type | Neuron count | Regularization value | Activation function |
|---|---|---|---|---|
| 1 | Input | 6 | 0 | — |
| 2 | Hidden | 50 | 0.05 | ReLU |
| 3 | Output | 1 | 0 | ReLU |

We set K = 1, 10, 50 and Q = 4, 9. For DC-QRNN, we only report the results at τ = 0.5. The results of our experiments, measured by RMSE, MAE, and CPU time, are reported in Table 8.


Table 8: Prediction performance and CPU time on the environmental dataset. Column labels as in Table 2; K = 1 corresponds to the full-sample method.

| Method | CPU time (s) | In-sample MAE | In-sample RMSE | Out-of-sample MAE | Out-of-sample RMSE |
|---|---|---|---|---|---|
| CQRNN(9) | 18984 | 17.1320 | 27.6946 | 19.9073 | 29.0148 |
| DC(9, 10) | 1802 | 17.6858 | 28.6596 | 21.1274 | 29.8764 |
| DC(9, 50) | 482 | 19.7306 | 28.8940 | 21.8771 | 30.5780 |
| CQRNN(4) | 18393 | 18.3089 | 28.1705 | 20.9148 | 29.2282 |
| DC(4, 10) | 1675 | 19.8655 | 28.9747 | 21.9289 | 30.8750 |
| DC(4, 50) | 476 | 20.7147 | 30.8786 | 22.5500 | 32.0981 |
| QRNN | 14896 | 20.4303 | 30.1461 | 22.6495 | 31.2272 |
| DC-QRNN(10) | 1602 | 21.9448 | 31.4210 | 23.9496 | 32.4893 |
| DC-QRNN(50) | 391 | 22.6668 | 31.6966 | 24.2192 | 33.6168 |
| ANN | 14872 | 22.9533 | 31.3591 | 24.8177 | 33.1205 |
| RF | 9776 | 22.4034 | 31.8750 | 27.5332 | 36.3443 |
| SVM | 8666 | 23.9032 | 32.6256 | 26.1106 | 34.5832 |

For the prediction accuracy of the models, the training results of DC-QRNN and DC-CQRNN are close to those of the full-sample QRNN and CQRNN, respectively. CQRNN is significantly better than ANN, RF, and SVM. In particular, CQRNN performs best when Q = 9, both in sample and out of sample.

Considering the CPU time of the models, the CPU time of the ANN based on full-sample training is higher than that of RF and SVM. In DC-QRNN and DC-CQRNN, the training of each piece of data is independent, so the pieces can be processed in parallel, which significantly reduces the computational time. The experimental results show that the CPU times of DC-QRNN and DC-CQRNN are much lower than those of the full-sample QRNN and CQRNN, respectively.

The environmental dataset includes 1,018,562 observations. The entire methodology has been implemented in Python on a Spark system. Using the proposed DC-CQRNN method with Q = 9 and K = 50 on the Spark system, it takes 8 minutes to complete the model training, whereas the full-sample CQRNN with Q = 9 takes 5.27 hours to get a result.

5. Results and Discussion

Massive datasets offer researchers both unprecedented challenges and opportunities. The key challenge is that directly applying machine learning and statistical methods to these massive datasets with conventional computing is prohibitive. In this paper, Monte Carlo simulation studies and an environmental dataset application verify and illustrate that our proposed approach performs well for CQRNN on massive datasets. Therefore, the DC-CQRNN method is effective and important. Obviously, the larger the value of K, the more computationally efficient the DC-CQRNN method is. However, if K is too large, the subsets' data size will be very small, so the correlation among the values of the dataset is destroyed. Therefore, K should be chosen moderately: large enough to gain efficiency, but not so large that the subsets become too small.

6. Conclusion

Using composite quantile regression neural networks to deal with massive datasets faces two main challenges: first, the calculation time is too long to get results quickly; second, the data can be too big for the computer's primary memory. To solve these difficulties, we propose DC-CQRNN, a divide-and-conquer method on a "master and worker" type of distributed system. The proposed DC-CQRNN can significantly reduce the computational time and the required amount of computer primary memory, and the training results are as effective as analyzing the full data at once. The divide-and-conquer idea also extends to QRNN, ANN, and SVM. In the future, we will try to use subsampling methods to reduce the time consumed by training neural networks on massive data and to save input cost.

Data Availability

The research areas of this paper comprise Baoding, Beijing, Chengde, Shijiazhuang, Tangshan, Tianjin, Xingtai, and Zhangjiakou in China from January 1, 2015, to July 1, 2019, with a total of 1,018,561 observations. Our study collected hourly historical PM2.5 monitoring data from the website of the Ministry of Environmental Protection of the People's Republic of China (http://datacenter.mep.gov.cn) and meteorological data from the website of the National Oceanic and Atmospheric Administration (https://www.noaa.gov/).

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (no. 11471264).

References

  1. N. Lin and R. Xi, "Aggregated estimating equation estimation," Statistics and Its Interface, vol. 4, no. 1, pp. 73–83, 2011.
  2. X. Chen and M. Xie, "A split-and-conquer approach for analysis of extraordinarily large data," Statistica Sinica, vol. 24, pp. 1655–1684, 2014.
  3. E. D. Schifano, J. Wu, C. Wang, J. Yan, and M.-H. Chen, "Online updating of statistical inference in the big data setting," Technometrics, vol. 58, no. 3, pp. 393–403, 2016.
  4. T.-H. Fan, D. K. J. Lin, and K.-F. Cheng, "Regression analysis for massive datasets," Data & Knowledge Engineering, vol. 61, no. 3, pp. 554–562, 2007.
  5. Y. Zhang, J. Duchi, and M. Wainwright, "Divide and conquer kernel ridge regression: a distributed algorithm with minimax optimal rates," Journal of Machine Learning Research, vol. 16, pp. 3299–3340, 2015.
  6. Q. Xu, C. Cai, C. Jiang, F. Sun, and X. Huang, "Block average quantile regression for massive dataset," Statistical Papers, vol. 61, pp. 141–165, 2017.
  7. R. Jiang, X. Hu, K. Yu, and W. Qian, "Composite quantile regression for massive datasets," Statistics, vol. 52, no. 5, pp. 980–1004, 2018.
  8. X. Chen, W. Liu, and Y. Zhang, "Quantile regression under memory constraint," Annals of Statistics, vol. 47, pp. 3244–3273, 2019.
  9. L. Chen and Y. Zhou, "Quantile regression in big data: a divide and conquer based strategy," Computational Statistics & Data Analysis, vol. 144, 2019.
  10. H. Zou and M. Yuan, "Composite quantile regression and the oracle model selection theory," Annals of Statistics, vol. 36, pp. 1108–1126, 2008.
  11. B. Kai, R. Li, and H. Zou, "Local composite quantile regression smoothing: an efficient and safe alternative to local polynomial regression," Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 72, no. 1, pp. 49–69, 2010.
  12. B. Kai, R. Li, and H. Zou, "New efficient estimation and variable selection methods for semiparametric varying-coefficient partially linear models," Annals of Statistics, vol. 39, pp. 305–332, 2011.
  13. J. Guo, M. Tang, M. Tian, and K. Zhu, "Variable selection in high-dimensional partially linear additive models for composite quantile regression," Computational Statistics & Data Analysis, vol. 65, pp. 56–67, 2013.
  14. R. Jiang, Z.-G. Zhou, W.-M. Qian, and Y. Chen, "Two step composite quantile regression for single-index models," Computational Statistics & Data Analysis, vol. 64, pp. 180–191, 2013.
  15. J. W. Taylor, "A quantile regression neural network approach to estimating the conditional density of multiperiod returns," Journal of Forecasting, vol. 19, no. 4, pp. 299–311, 2000.
  16. A. J. Cannon, "Quantile regression neural networks: implementation in R and application to precipitation downscaling," Computers & Geosciences, vol. 37, no. 9, pp. 1277–1284, 2011.
  17. I. Takeuchi, Q. Le, T. Sears, and A. Smola, "Nonparametric quantile estimation," Journal of Machine Learning Research, vol. 12, pp. 1231–1264, 2006.
  18. Q. Xu, K. Deng, C. Jiang, F. Sun, and X. Huang, "Composite quantile regression neural network with applications," Expert Systems with Applications, vol. 76, pp. 129–139, 2017.
  19. X. Feng, Q. Li, Y. Zhu, J. Hou, L. Jin, and J. Wang, "Artificial neural networks forecasting of PM2.5 pollution using air mass trajectory based geographic model and wavelet transformation," Atmospheric Environment, vol. 107, pp. 118–128, 2015.
  20. X. Li, A. Luo, J. Li, and Y. Li, "Air pollutant concentration forecast based on support vector regression and quantum-behaved particle swarm optimization," Environmental Modeling & Assessment, vol. 24, no. 2, pp. 205–222, 2019.
  21. X. Hu, J. H. Belle, X. Meng et al., "Estimating PM2.5 concentrations in the conterminous United States using the random forest approach," Environmental Science & Technology, vol. 51, no. 12, pp. 6936–6944, 2017.
  22. Y. Tian, Q. Zhu, and M. Tian, "Estimation of linear composite quantile regression using EM algorithm," Statistics & Probability Letters, vol. 117, p. 183, 2016.
  23. A. Krogh and J. Vedelsby, "Neural network ensembles, cross validation, and active learning," in Advances in Neural Information Processing Systems, pp. 231–238, 1995.

Copyright © 2021 Jun Jin and Zhongxun Zhao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
