Mathematical Problems in Engineering


Research Article | Open Access

Volume 2019 | Article ID 1649086 | 11 pages | https://doi.org/10.1155/2019/1649086

Research on Prediction Method of Reasonable Cost Level of Transmission Line Project Based on PCA-LSSVM-KDE

Academic Editor: Javier Martinez Torres
Received: 06 Mar 2019
Accepted: 18 Jul 2019
Published: 01 Aug 2019

Abstract

To reduce investment risk, the evaluation standards for transmission line project investment planning have become stricter, which places higher demands on predicting a reasonable level of transmission line project cost. This paper combines principal component analysis (PCA) with the least squares support vector machine (LSSVM) model to establish a point prediction model for transmission line project cost. Based on an analysis of the point prediction error, the kernel density estimation (KDE) method is introduced to estimate the prediction error and obtain its probability density function. Cost intervals corresponding to different confidence levels are then derived, which define the reasonable level of transmission line project cost. The results show that the coverage rate of the cost prediction interval at the 85% confidence level is 88.57%, indicating that the model is highly reliable and can provide a sound basis for evaluating transmission line project investment planning.

1. Introduction

With the rapid development of the national economy, the demand for electric power is increasing. Transmission line projects are an important part of power grid construction, and rational evaluation of their investment planning is an important part of cost control. At present, the cost control line [1] and the general cost [2] are mostly used as evaluation criteria in the power industry, and each gives a single specific value, which limits the compatibility of the evaluation results. Therefore, to evaluate investment planning reasonably and determine the reasonable cost level of transmission line projects, reasonable cost intervals should be given in addition to specific cost control lines.

Many factors affect transmission line project cost, and they are random and unstable. A single point prediction cannot represent the variability of transmission line project cost, and the information it provides is often insufficient for investment decision-making, which introduces risk into that work. If a deterministic point prediction can be given and the fluctuation range of the cost described at the same time, power enterprises can make more reasonable investment decisions and evaluate investment plans more soundly. Therefore, the purpose of this paper is to study an interval prediction method for transmission line project cost, obtain the fluctuation interval of the cost at a given confidence level, and thus obtain prediction results in a probabilistic sense. Probabilistic predictions provide more valuable information to decision makers, help them better grasp changes in the data, and support power enterprises in investment planning, risk analysis, and reliability evaluation.

With the development of machine learning and intelligent algorithms, research on project cost point prediction has advanced rapidly, and prediction accuracy has been greatly improved. Ji and Abourizk constructed a special absorbing Markov chain that accounts for the uncertainty of quality-induced rework and stochastically modeled the manufacturing process of building products so as to estimate and control rework cost caused by quality problems [3]. Lesniak and Juszczyk established a regression model based on an artificial neural network to estimate site overhead cost quickly and reliably [4]. Bhargava et al. introduced risk-based econometric models and Monte Carlo simulation to predict whether a project will follow a specific cost escalation path during its development phase and produce a given level of cost deviation severity [5]. Lesniak and Zima proposed a case-based reasoning (CBR) method, based on historical data and sustainability criteria, for estimating the initial construction cost of projects [6]. Juszczyk et al. put forward a neural-network-based method for predicting the construction cost of sports fields and evaluated its prediction quality and accuracy [7].

The support vector machine (SVM), proposed by Vapnik [8], is an effective method based on statistical learning theory. The algorithm does not minimize the training error under the principle of empirical risk minimization; instead, it minimizes an upper bound on the generalization error under the principle of structural risk minimization, so a globally optimal solution can be obtained in theory [9, 10]. The least squares support vector machine (LSSVM) effectively simplifies the computation and improves efficiency by replacing the inequality constraints with equality constraints [11, 12], which gives it great advantages in multifactor prediction. Liang et al. proposed a hybrid model based on the wavelet transform (WT) and LSSVM and optimized it with an improved cuckoo search (CS) to achieve accurate load forecasting [13]. Kang et al. combined ensemble empirical mode decomposition (EEMD) with LSSVM to improve the accuracy of short-term wind speed prediction [14].

Research on engineering cost prediction methods has developed rapidly, and many scholars have studied the prediction of electric power engineering cost in depth. Kong et al. established an SVM-based cost prediction model for transmission and transformation projects, using SVM to solve the regression equation and then predicting the cost with the model [15]. Wang took the comprehensive cost index of transmission and transformation projects as the basis of investment decision-making and explored a Markov chain model for predicting project investment cost [16]. Lu et al. took full account of the characteristics of the subitem costs, forecast them separately with different methods, and then superimposed the results to obtain the total cost [17]. Wang et al. established an EEMD-BP model, used a BP network to forecast the trend components, and obtained the final prediction by combining the predicted trend components with the fluctuation intervals [18]. Yi et al. evaluated the global sensitivity of the input variables and proposed a neural network prediction method for transmission line project cost based on a feedforward, backpropagation multilayer perceptron structure [19]. Wang et al. established a REGR-WNN prediction model and showed that its prediction accuracy is higher than that of the REGR and WNN models used separately [20].

Some scholars have also studied interval prediction. Grounds et al. argued that prediction intervals present a range of values with a specified probability and, compared with point predictions, have the potential to improve decision-making, so interval prediction may bring important benefits [21]. Samal and Tripathy used a nonparametric kernel density method to represent wind speed measurements that did not fit a parametric distribution and used the chi-square and Kolmogorov–Smirnov goodness-of-fit tests to evaluate its applicability [22]. Fan et al., when forecasting the real-time thermal rating (RTTR) of overhead lines, used kernel density estimation to express the maximum allowable temperature over a period of time on the basis of estimating the instantaneous transient-state RTTR [23]. Amara et al. discussed the influence of temperature on the nonlinear behavior of power demand and proposed an adaptive conditional density estimation (ACDE) method based on kernel density estimation (KDE) to improve the accuracy of load forecasting [24].

2. Research Method

2.1. Principal Component Analysis (PCA)

Principal component analysis (PCA) is a method of transforming a set of possibly correlated variables into a series of linearly uncorrelated variables by orthogonal transformation. The transformed variables are called principal components, and the transformation eliminates the multicollinearity between the data. PCA can be divided into the following steps (the standard formulas are restated here):

(1) The original index data form a p-dimensional random vector $X = (X_1, X_2, \ldots, X_p)^T$. The n samples are standardized as
$$z_{ij} = \frac{x_{ij} - \bar{x}_j}{s_j}, \quad i = 1, 2, \ldots, n, \; j = 1, 2, \ldots, p,$$
where $\bar{x}_j = \frac{1}{n}\sum_{i=1}^{n} x_{ij}$ and $s_j^2 = \frac{1}{n-1}\sum_{i=1}^{n} (x_{ij} - \bar{x}_j)^2$, and the normalized matrix Z is obtained.

(2) From the normalized matrix, the sample correlation coefficient matrix is calculated:
$$R = \frac{Z^T Z}{n - 1}.$$

(3) The characteristic equation $|R - \lambda I_p| = 0$ of R is solved to obtain the p characteristic roots $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0$, and the number of principal components m is determined so that the cumulative contribution rate $\sum_{j=1}^{m} \lambda_j / \sum_{j=1}^{p} \lambda_j$ exceeds 85%. For each $\lambda_j$, the unit eigenvector $b_j$ is obtained by solving the system of equations $R b_j = \lambda_j b_j$.

(4) The standardized index variables are transformed into the principal components
$$U_j = Z b_j, \quad j = 1, 2, \ldots, m.$$

(5) PCA thus concentrates the p observed variables into a smaller set of new, mutually uncorrelated variables, as follows.

Before PCA, the p standardized observed variables $Z_{X_1}, Z_{X_2}, \ldots, Z_{X_p}$ may be mutually correlated.

After PCA, the m retained principal components are linear combinations of them,
$$U_k = b_{k1} Z_{X_1} + b_{k2} Z_{X_2} + \cdots + b_{kp} Z_{X_p}, \quad k = 1, 2, \ldots, m,$$
and the $U_k$ are mutually uncorrelated.
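For illustration, the following is a minimal NumPy sketch of steps (1)–(5) above (the paper performs this step in SPSS; function and variable names here are illustrative only):

```python
import numpy as np

def pca_reduce(X, threshold=0.85):
    """Standardize, build the correlation matrix, eigendecompose it, and keep
    enough components to reach the cumulative contribution threshold (85%)."""
    n, p = X.shape
    # (1) standardize each index: z_ij = (x_ij - mean_j) / std_j
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    # (2) sample correlation coefficient matrix R = Z^T Z / (n - 1)
    R = Z.T @ Z / (n - 1)
    # (3) eigenvalues/eigenvectors of R, sorted in descending order
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # keep the first k components whose cumulative contribution >= threshold
    contrib = eigvals / eigvals.sum()
    k = int(np.searchsorted(np.cumsum(contrib), threshold) + 1)
    # (4)-(5) transform the standardized indexes into k principal components
    U = Z @ eigvecs[:, :k]
    return U, contrib[:k]
```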

2.2. Least Squares Support Vector Machine (LSSVM)

Although the traditional support vector machine (SVM) largely avoids the drawback of being trapped in local optima, it solves a quadratic programming problem, so its running time grows considerably when the data set is large. Suykens therefore proposed the least squares support vector machine (LSSVM), an improvement and refinement of the support vector machine [25]. LSSVM solves a set of linear equations instead of a quadratic programming problem, which simplifies the calculation and effectively reduces the solving time; as a result, it has become more and more widely used. The LSSVM regression algorithm is described as follows.

Given the training set $\{(x_i, y_i)\}_{i=1}^{n}$, where $x_i \in \mathbb{R}^m$ is the m-dimensional input vector and $y_i$ is the output corresponding to $x_i$, the regression model is constructed from a nonlinear mapping function $\varphi(\cdot)$ as
$$y(x) = w^T \varphi(x) + b,$$
where $w$ is the weight vector and $b$ is the offset. Balancing model complexity against fitting error, the objective function of the LSSVM algorithm is
$$\min_{w, b, e} \; J(w, e) = \frac{1}{2} w^T w + \frac{1}{2} \gamma \sum_{i=1}^{n} e_i^2.$$

The constraint condition is
$$y_i = w^T \varphi(x_i) + b + e_i, \quad i = 1, 2, \ldots, n,$$
where $e_i$ is the regression error and $\gamma$ is the penalty coefficient. The role of $\gamma$ is to adjust the error: the larger the value of $\gamma$, the smaller the corresponding regression error. To optimize the parameters of the model, the Lagrange function is
$$L(w, b, e, \alpha) = J(w, e) - \sum_{i=1}^{n} \alpha_i \left( w^T \varphi(x_i) + b + e_i - y_i \right),$$
where $\alpha_i$ denotes the Lagrange multipliers. From the Karush–Kuhn–Tucker (KKT) conditions of this quadratic programming problem,
$$\frac{\partial L}{\partial w} = 0 \Rightarrow w = \sum_{i=1}^{n} \alpha_i \varphi(x_i), \quad \frac{\partial L}{\partial b} = 0 \Rightarrow \sum_{i=1}^{n} \alpha_i = 0, \quad \frac{\partial L}{\partial e_i} = 0 \Rightarrow \alpha_i = \gamma e_i, \quad \frac{\partial L}{\partial \alpha_i} = 0 \Rightarrow w^T \varphi(x_i) + b + e_i - y_i = 0.$$

After eliminating $w$ and $e_i$ and applying the Mercer condition $K(x_i, x_j) = \varphi(x_i)^T \varphi(x_j)$, the final matrix linear equations are obtained:
$$\begin{bmatrix} 0 & \mathbf{1}^T \\ \mathbf{1} & \Omega + \gamma^{-1} I \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix},$$
where $\Omega_{ij} = K(x_i, x_j)$, $\mathbf{1} = (1, \ldots, 1)^T$, $y = (y_1, \ldots, y_n)^T$, $\alpha = (\alpha_1, \ldots, \alpha_n)^T$, and $I$ is an $n \times n$ identity matrix. Solving these equations yields the LSSVM regression function
$$y(x) = \sum_{i=1}^{n} \alpha_i K(x, x_i) + b.$$

In this paper, the radial basis function (RBF) is chosen as the kernel function:
$$K(x, x_i) = \exp\!\left( -\frac{\|x - x_i\|^2}{2\sigma^2} \right),$$
where $\sigma$ is the kernel width parameter.
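Training and prediction with LSSVM reduce to solving the single linear system above. The following sketch illustrates this with the RBF kernel; gamma and sigma are placeholders rather than the paper's tuned parameters:

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """K(x, x') = exp(-||x - x'||^2 / (2 sigma^2)) for all pairs of rows."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    """Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.asarray(y, dtype=float)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(X_train, alpha, b, X_new, sigma):
    """y(x) = sum_i alpha_i K(x, x_i) + b"""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```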

2.3. Interval Prediction Theory Based on Kernel Density Estimation (KDE)

In point prediction of transmission line project cost, the predicted value $\hat{y}$ is only an approximation of the real value $y$, and the probability that the predicted value exactly equals the real value is very small. To ensure the reliability of investment decision-making, it is necessary to estimate an approximate range for the real value and the credibility (confidence level) with which that range covers it. This range is generally expressed as an interval, known as a confidence interval. For a fixed number of samples and confidence level, the shorter the confidence interval, the more precise the interval estimate. Confidence intervals may be two-sided, $(\theta_1, \theta_2)$, or one-sided, $(-\infty, \theta_2]$ or $[\theta_1, +\infty)$. The confidence interval calculated in this paper is a two-sided confidence interval, defined as follows.

Let $X_1, X_2, \ldots, X_n$ be a sample from the population $X$, and let $\theta$ be the parameter to be estimated. The random interval formed by the statistics $\theta_1 = \theta_1(X_1, \ldots, X_n)$ and $\theta_2 = \theta_2(X_1, \ldots, X_n)$ is called an interval estimate of $\theta$. For a given confidence level $1 - \alpha$, they satisfy
$$P(\theta_1 < \theta < \theta_2) = 1 - \alpha,$$
where $\theta_1$ and $\theta_2$ are the lower and upper confidence limits of the error values, respectively. The interval $(\theta_1, \theta_2)$ is called the confidence interval at confidence level $1 - \alpha$. This paper uses the equal-tail confidence interval:
$$P(\theta \le \theta_1) = P(\theta \ge \theta_2) = \frac{\alpha}{2}.$$

The probability density estimation problem is to estimate the probability density function of a random variable from samples. Parametric, semiparametric, and nonparametric estimation methods are commonly used. Nonparametric density estimation takes only the sample data themselves as the basis of the estimate; it makes no prior assumption about the form of the sample distribution and can handle arbitrary density shapes. Commonly used nonparametric methods include histogram density estimation and kernel density estimation. Although the histogram method is conceptually simple and easy to use, its result is discontinuous (the estimated density drops abruptly to 0 at the bin boundaries) and its efficiency is low. Kernel density estimation (KDE), proposed by Parzen and also known as Parzen window estimation, is a very effective nonparametric density estimation method [26]. Its general expression is
$$\hat{f}(x) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left( \frac{x - x_i}{h} \right),$$
where $n$ is the total number of samples, $h$ is the bandwidth or smoothing parameter, $x_i$ is the given sample, and $K(\cdot)$ is the kernel function, satisfying
$$K(u) \ge 0, \quad \int_{-\infty}^{+\infty} K(u)\, du = 1.$$
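As a point of reference, the following is a minimal sketch of the kernel density estimator defined above with a Gaussian kernel; Silverman's rule of thumb is used here as one possible automatic bandwidth choice (the paper lets MATLAB choose the optimal window width, so this rule is an assumption for illustration):

```python
import numpy as np

def gaussian_kde(samples, h=None):
    """f_hat(x) = 1/(n h) * sum_i K((x - x_i) / h) with a Gaussian kernel."""
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    if h is None:
        # Silverman's rule of thumb: one common automatic bandwidth choice
        h = 1.06 * samples.std(ddof=1) * n ** (-1 / 5)
    def f_hat(x):
        u = (np.asarray(x, dtype=float)[..., None] - samples) / h
        return np.exp(-0.5 * u ** 2).sum(-1) / (n * h * np.sqrt(2 * np.pi))
    return f_hat, h
```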

KDE can be regarded as the superposition of kernel functions centered on each observed sample point. Its performance depends on the choice of the kernel function and the window width. If the window width is too large, some features of the distribution are smoothed away and the excessive averaging biases the estimator; if it is too small, the whole estimate, especially the tails, fluctuates strongly, increasing the variance.

This paper uses the relative error to define the deviation between the predicted value and the actual value of the construction cost:
$$e_i = \frac{y_i - \hat{y}_i}{\hat{y}_i},$$
where $\hat{y}_i$ is the predicted cost and $y_i$ the actual cost, so that the actual value can be written as $y_i = \hat{y}_i (1 + e_i)$.

The error probability density function $\hat{f}(e)$ is obtained by nonparametric kernel density estimation, and the cumulative probability distribution function of the relative error (treated as a random variable) is then obtained by integration:
$$F(e) = \int_{-\infty}^{e} \hat{f}(t)\, dt.$$

According to the cumulative probability distribution function of the error and the point prediction value $\hat{y}$ of a sample, the confidence interval at confidence level $1 - \alpha$ is obtained as
$$\left[ \hat{y}\left( 1 + F^{-1}\!\left( \tfrac{\alpha}{2} \right) \right), \; \hat{y}\left( 1 + F^{-1}\!\left( 1 - \tfrac{\alpha}{2} \right) \right) \right],$$
where $F^{-1}(\cdot)$ is the inverse function of $F(\cdot)$.
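The sketch below shows how a cost interval can be assembled from a point prediction and the estimated error density under the relative-error convention $y = \hat{y}(1 + e)$ used above. The numerical integration grid and the use of linear interpolation in place of the paper's cubic spline fit are simplifications:

```python
import numpy as np

def cost_interval(y_hat, errors, f_hat, alpha=0.15, grid=None):
    """Integrate the estimated error density to get F(e), invert it at the
    alpha/2 and 1 - alpha/2 quantiles, and scale the point prediction."""
    errors = np.asarray(errors, dtype=float)
    if grid is None:
        span = errors.max() - errors.min()
        grid = np.linspace(errors.min() - 0.5 * span,
                           errors.max() + 0.5 * span, 2000)
    density = f_hat(grid)
    cdf = np.cumsum(density)
    cdf /= cdf[-1]                              # normalize the numerical integral
    e_lo = np.interp(alpha / 2, cdf, grid)      # F^{-1}(alpha/2)
    e_hi = np.interp(1 - alpha / 2, cdf, grid)  # F^{-1}(1 - alpha/2)
    return y_hat * (1 + e_lo), y_hat * (1 + e_hi)
```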

The main research ideas and methods of this paper are shown in Figure 1. Firstly, this paper considers many impact indexes of transmission line project cost and extracts and screens the indexes. Secondly, PCA is used to reduce the dimension of the original index, and the corresponding principal component is used as the input variable of the prediction model. Then, a point prediction model for the transmission line project is constructed based on LSSVM. Finally, the probability density function of the prediction model error is obtained by KDE, and then the cost interval at a certain confidence level is obtained.

3. Selection of Cost Indexes and Data Source

3.1. Analysis of Influencing Factors of Cost and Screening of Indexes

The purpose of this paper is to study an interval prediction method for transmission line project cost and to guide investment decisions on transmission line projects. The engineering characteristics of a transmission line project play a decisive role in its cost. Therefore, to predict the transmission line cost, it is necessary to analyze the factors affecting the cost comprehensively and, drawing on practical engineering experience, select the engineering characteristics with a greater impact on the cost as the influencing indexes.

Transmission line projects can be divided into six major unit projects, namely, foundation, tower, erection, grounding, annex, and auxiliary works. Therefore, when identifying the indexes that influence transmission line project cost, this paper first analyzes these six unit projects and identifies their main influencing indexes. For the cost of each unit project, combined with engineering practice experience, the engineering characteristics related to the project site, large quantities of works, high material prices, and other indexes with a greater impact on cost are selected as the objects of analysis. After classification and summary, the indexes influencing the overall cost are obtained; the specific identification process is shown in Figure 2.

In Figure 2, some indexes affect the cost of more than one unit project, so the cost impact indexes in Figure 2 are summarized and divided into natural, technical, and economic indexes, as shown in Table 1.


Table 1: Cost impact indexes.

| Index type        | Influencing index                | Number |
|-------------------|----------------------------------|--------|
| Natural indexes   | Topography                       | X1     |
|                   | Geology                          | X2     |
|                   | Thickness of ice and wind speed  | X3     |
| Technical indexes | Conductor consumption            | X4     |
|                   | Tension ratio                    | X5     |
|                   | Number of towers                 | X6     |
|                   | Tower material consumption       | X7     |
|                   | Foundation earthwork consumption | X8     |
|                   | Foundation concrete consumption  | X9     |
|                   | Foundation steel consumption     | X10    |
| Economic indexes  | Price of conductor               | X11    |
|                   | Price of tower material          | X12    |
|                   | Price of foundation steel        | X13    |

3.2. Sample Data Source

This paper collects actual transmission line project data for the empirical analysis of the model. From the settlement data of transmission line projects completed and put into operation in Xinjiang by the State Grid Corporation in 2017, 140 representative projects at the 110 kV voltage level are selected. The original data samples are obtained through index processing [27], as shown in Table 2.


Table 2: Original sample data (partial).

| Sample | X1 | X2 | X3 | X4 | X5 | X6 | X7 | X8 | X9 | X10 | X11 | X12 | X13 | Unit length cost (10 kUSD/km) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1.30 | 4.00 | 10.00 | 3.85 | 0.65 | 7.16 | 62.82 | 0.13 | 4.06 | 2.86 | 0.62 | 1.29 | 0.59 | 7.58 |
| 2 | 1.00 | 5.00 | 12.99 | 0.41 | 6.46 | 108.77 | 1243.08 | 1.00 | 9.64 | 0.60 | 0.62 | 1.25 | 0.38 | 7.17 |
| 3 | 1.00 | 6.00 | 7.00 | 6.25 | 5.26 | 92.14 | 808.04 | 1.00 | 10.17 | 2.01 | 0.62 | 1.29 | 0.57 | 5.69 |
| 4 | 1.05 | 3.24 | 7.00 | 3.66 | 0.23 | 34.04 | 74.46 | 0.09 | 1.38 | 2.91 | 0.63 | 1.30 | 0.28 | 5.35 |
| 5 | 1.25 | 3.50 | 4.50 | 2.27 | 0.99 | 20.93 | 231.72 | 0.46 | 13.29 | 2.96 | 0.66 | 1.67 | 0.40 | 8.89 |
| 6 | 1.00 | 2.00 | 4.50 | 2.35 | 0.71 | 30.52 | 117.24 | 0.23 | 8.07 | 2.90 | 0.58 | 1.32 | 0.56 | 10.56 |
| 7 | 1.00 | 3.60 | 7.84 | 3.09 | 0.64 | 11.85 | 43.25 | 0.18 | 4.89 | 2.86 | 0.62 | 1.29 | 0.51 | 5.86 |
| 8 | 1.00 | 4.00 | 4.46 | 2.26 | 1.82 | 41.44 | 154.37 | 0.27 | 21.55 | 2.94 | 0.61 | 1.20 | 0.28 | 8.65 |
| … | … | … | … | … | … | … | … | … | … | … | … | … | … | … |
| 140 | 1.00 | 2.10 | 3.00 | 2.47 | 2.48 | 34.14 | 149.96 | 0.58 | 15.69 | 2.91 | 0.66 | 1.26 | 0.26 | 7.40 |

4. Construction of Cost Interval and Empirical Calculation

4.1. Dimension Reduction of Cost Impact Indexes

There may be multicollinearity among the variables, so this paper performs principal component analysis on the sample data to exclude the influence of correlations among variables and, at the same time, reduce the number of variables. Before the principal component analysis, the KMO test and Bartlett's test of sphericity were used to examine the correlation between variables. The KMO value of the 13 indexes is 0.679, and the significance level of Bartlett's test is far less than 0.01. The test results show that principal component analysis is feasible.
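For reference, both pre-tests can be reproduced directly from their textbook definitions (the paper obtains them from SPSS); a minimal sketch:

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(X):
    """Bartlett's test: chi2 = -(n - 1 - (2p + 5)/6) * ln|R|, df = p(p-1)/2."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return stat, chi2.sf(stat, df)          # test statistic and p-value

def kmo(X):
    """KMO = sum r_ij^2 / (sum r_ij^2 + sum a_ij^2) over i != j,
    where a_ij are partial correlations from the inverse of R."""
    R = np.corrcoef(X, rowvar=False)
    R_inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(R_inv), np.diag(R_inv)))
    partial = -R_inv / d
    np.fill_diagonal(R, 0.0)                # exclude the diagonal terms
    np.fill_diagonal(partial, 0.0)
    return (R ** 2).sum() / ((R ** 2).sum() + (partial ** 2).sum())
```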

After principal component analysis, aggregated indexes that express most of the information are selected to replace the original indicators. These aggregated indexes serve almost the same function as the original indexes and meet the needs of the analysis. In addition, using them reduces the number of input indexes of the prediction model, lowers the model's complexity, and improves its running speed. This paper holds that aggregated indexes with a cumulative variance contribution above 85% express the vast majority of the overall information. After dimensionality reduction in SPSS, Table 3 gives the percentage and cumulative percentage of the total information explained by the corresponding principal components. The first seven principal components explain 87.699% of the total information, which exceeds 85%; therefore, extracting these seven principal components is considered sufficient to explain the information contained in the original variables.


Table 3: Total variance explained (initial eigenvalues).

| Component | Total | % of variance | Cumulative (%) |
|---|---|---|---|
| 1 | 3.441 | 26.469 | 26.469 |
| 2 | 2.155 | 16.573 | 43.042 |
| 3 | 1.695 | 13.040 | 56.081 |
| 4 | 1.293 | 9.945 | 66.026 |
| 5 | 1.065 | 8.196 | 74.222 |
| 6 | 0.965 | 7.422 | 81.644 |
| 7 | 0.787 | 6.055 | 87.699 |
| 8 | 0.535 | 4.116 | 91.816 |
| 9 | 0.357 | 2.749 | 94.565 |
| 10 | 0.275 | 2.118 | 96.683 |
| 11 | 0.222 | 1.705 | 98.389 |
| 12 | 0.137 | 1.050 | 99.439 |
| 13 | 0.073 | 0.561 | 100.000 |

To establish the expressions relating the principal components to the 13 cost impact indexes, the component score coefficient matrix is obtained from the SPSS calculation, as shown in Table 4.


Table 4: Component score coefficient matrix.

| Index | U1 | U2 | U3 | U4 | U5 | U6 | U7 |
|---|---|---|---|---|---|---|---|
| X1 | −0.0564 | 0.3094 | 0.2365 | 0.1536 | −0.2281 | 0.0275 | −0.3417 |
| X2 | 0.0636 | 0.3136 | 0.1300 | 0.1818 | −0.0485 | −0.3319 | 0.0203 |
| X3 | −0.0094 | 0.1011 | −0.2514 | 0.5814 | 0.2054 | 0.1198 | 0.6599 |
| X4 | −0.0263 | −0.1219 | 0.4681 | 0.4893 | −0.0597 | 0.2384 | −0.0754 |
| X5 | 0.2702 | 0.0184 | −0.0648 | −0.0640 | 0.0762 | −0.0284 | 0.1344 |
| X6 | 0.2662 | −0.0336 | 0.0089 | 0.0300 | −0.0251 | 0.0379 | −0.0868 |
| X7 | 0.2525 | 0.0701 | −0.0916 | 0.0931 | 0.0672 | 0.0549 | −0.1686 |
| X8 | 0.2316 | −0.0451 | 0.0576 | −0.0521 | −0.0594 | 0.0719 | 0.1753 |
| X9 | 0.1115 | −0.2788 | 0.3882 | 0.0888 | −0.0602 | 0.0311 | −0.0484 |
| X10 | −0.0824 | −0.3102 | 0.0243 | 0.0310 | 0.2293 | −0.2194 | 0.2107 |
| X11 | 0.0143 | 0.1091 | 0.3816 | −0.1316 | 0.4250 | −0.6262 | 0.2349 |
| X12 | −0.0198 | 0.1560 | 0.3159 | −0.3781 | −0.0701 | 0.5271 | 0.6500 |
| X13 | −0.0112 | 0.0895 | 0.0432 | −0.0299 | 0.7493 | 0.4521 | −0.4179 |

According to this matrix, the expressions of the seven principal components in terms of the standardized cost impact indexes can be obtained. Taking the first principal component as an example,
$$U_1 = -0.0564 Z_{X_1} + 0.0636 Z_{X_2} - 0.0094 Z_{X_3} - 0.0263 Z_{X_4} + 0.2702 Z_{X_5} + 0.2662 Z_{X_6} + 0.2525 Z_{X_7} + 0.2316 Z_{X_8} + 0.1115 Z_{X_9} - 0.0824 Z_{X_{10}} + 0.0143 Z_{X_{11}} - 0.0198 Z_{X_{12}} - 0.0112 Z_{X_{13}}.$$
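Equivalently, the principal component scores are obtained by multiplying the standardized index values by the score coefficients. A small illustration using the U1 column of Table 4 (the standardized input z_row is assumed to be ordered X1 through X13):

```python
import numpy as np

# Column U1 of the component score coefficient matrix (Table 4)
u1_coeff = np.array([-0.0564, 0.0636, -0.0094, -0.0263, 0.2702, 0.2662,
                     0.2525, 0.2316, 0.1115, -0.0824, 0.0143, -0.0198, -0.0112])

def first_component_score(z_row):
    """U1 = sum_j coeff_j * Z_Xj for one standardized sample (X1..X13)."""
    return float(np.dot(u1_coeff, np.asarray(z_row, dtype=float)))
```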

4.2. Establishment and Training of LSSVM Point Prediction Model

To establish the LSSVM cost point prediction model, the penalty coefficient $\gamma$ and the kernel function parameter $\sigma$ are fixed. The input set of the point prediction model consists of the seven principal components obtained by PCA, and the unit length cost is the output variable. The LSSVM point prediction model is trained with the first 70 samples, and the remaining 70 samples are used to verify it, giving the corresponding predicted values. The real values are sorted from small to large, and the predicted values, real values, and corresponding relative errors are shown in Figure 3.
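A hypothetical outline of this training/validation split, reusing the lssvm_fit/lssvm_predict sketch from Section 2.2; U denotes the 140 × 7 principal component matrix, y the unit length cost, and the gamma/sigma values are placeholders rather than the paper's tuned parameters:

```python
# Assumes U (140 x 7 principal component scores), y (unit length cost),
# and lssvm_fit / lssvm_predict from the Section 2.2 sketch are in scope.
U_train, y_train = U[:70], y[:70]   # first 70 samples: training set
U_test,  y_test  = U[70:], y[70:]   # last 70 samples: validation set

alpha_, b_ = lssvm_fit(U_train, y_train, gamma=100.0, sigma=1.0)  # placeholders
y_pred = lssvm_predict(U_train, alpha_, b_, U_test, sigma=1.0)

# relative errors of the point predictions, later fed to the KDE step
rel_err = (y_test - y_pred) / y_pred
```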

The point prediction model is the basis for constructing the cost interval. This paper uses the mean absolute percentage error (MAPE), root-mean-square error (RMSE), mean relative error (MRE), and determination coefficient (R²) to evaluate the point prediction model. The calculation formulas are
$$\mathrm{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{\hat{y}_i - y_i}{y_i} \right| \times 100\%, \quad \mathrm{RMSE} = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)^2 },$$
$$\mathrm{MRE} = \frac{1}{n} \sum_{i=1}^{n} \frac{ \left| \hat{y}_i - y_i \right| }{ y_{\max} } \times 100\%, \quad R^2 = 1 - \frac{ \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 }{ \sum_{i=1}^{n} (y_i - \bar{y})^2 },$$
where $\hat{y}_i$ is the predicted project cost, $y_i$ the actual project cost, $n$ the number of samples, and $y_{\max}$ and $\bar{y}$ the maximum and average of the actual project cost. The MAPE, RMSE, MRE, and R² values of the LSSVM model are 8.59%, 4.99, 4.94%, and 88.67%, respectively. The LSSVM point prediction model therefore has high prediction accuracy, and the cost interval can be established on this basis.
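The four indexes can be computed as sketched below; the normalization of MRE by the maximum actual value follows the reading of the text above and is an assumption rather than a formula confirmed by the source:

```python
import numpy as np

def point_metrics(y_true, y_pred):
    """MAPE, RMSE, MRE (normalized by the maximum actual value), and R^2."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mape = np.mean(np.abs((y_pred - y_true) / y_true)) * 100
    rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))
    mre = np.mean(np.abs(y_pred - y_true)) / y_true.max() * 100
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    return mape, rmse, mre, r2
```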

4.3. Establishment of Cost Interval

On the basis of the point prediction, the probability density of the prediction error is obtained by nonparametric kernel density estimation from the relative errors of the point predictions. The probability distribution curve of the prediction error is then fitted by cubic spline interpolation, and the $\alpha/2$ and $1 - \alpha/2$ quantiles are found. In this paper, the optimal window width is chosen automatically by MATLAB, and the kernel function is the Gaussian kernel:
$$K(u) = \frac{1}{\sqrt{2\pi}} \exp\!\left( -\frac{u^2}{2} \right).$$

The estimated error probability curve is shown in Figure 4.

Figure 4 shows that the kernel density curve agrees well with the frequency histogram and preserves its internal characteristics; in the tail, it is close to the normal distribution, with little interference. Cubic spline interpolation is used to fit the probability distribution of the relative prediction error, and the corresponding $\alpha/2$ and $1 - \alpha/2$ quantiles are found. The confidence intervals of the predicted values are calculated for confidence levels of 95%, 90%, 85%, and 80%, as shown in Table 5 and Figure 5.


Table 5: Cost prediction intervals at different confidence levels (partial).

| Sample | Actual value | Point prediction value | 80% confidence level | 85% confidence level | 90% confidence level | 95% confidence level |
|---|---|---|---|---|---|---|
| 1 | 4.34 | 4.13 | [3.60, 4.53] | [3.56, 4.58] | [3.50, 4.71] | [3.41, 4.87] |
| 2 | 4.90 | 4.44 | [3.87, 4.87] | [3.83, 4.93] | [3.76, 5.06] | [3.67, 5.23] |
| 3 | 5.01 | 5.24 | [4.57, 5.76] | [4.52, 5.82] | [4.44, 5.98] | [4.34, 6.18] |
| 4 | 5.04 | 4.65 | [4.06, 5.11] | [4.01, 5.17] | [3.94, 5.31] | [3.85, 5.49] |
| 5 | 5.08 | 5.49 | [4.78, 6.02] | [4.73, 6.09] | [4.65, 6.25] | [4.54, 6.47] |
| 6 | 5.10 | 4.59 | [4.01, 5.04] | [3.96, 5.10] | [3.89, 5.24] | [3.80, 5.42] |
| … | … | … | … | … | … | … |
| 70 | 12.84 | 13.52 | [11.79, 14.84] | [11.65, 15.01] | [11.45, 15.41] | [11.18, 15.94] |

This paper uses the prediction interval coverage probability (PICP) to evaluate the reliability of the prediction interval:
$$\mathrm{PICP} = \frac{1}{N} \sum_{i=1}^{N} c_i,$$
where $N$ is the total number of samples and $c_i$ is a Boolean quantity: if the actual value falls within the prediction interval, $c_i = 1$; otherwise, $c_i = 0$. When PICP = 1, all actual values fall within the predicted intervals and the reliability is highest. If reliability were pursued for its own sake, the interval endpoints could simply be stretched to the boundary values, but such an interval would lose its practical significance. Therefore, to obtain an effective prediction interval, PICP should be as close as possible to the preset confidence level. If PICP is far below the confidence level, the predicted interval is invalid and must be reconstructed. The PICP at each confidence level is shown in Table 6.


Table 6: PICP at each confidence level.

| Confidence level (%) | PICP (%) |
|---|---|
| 80 | 75.71 |
| 85 | 88.57 |
| 90 | 100 |
| 95 | 100 |

Table 6 shows that the PICP at the 85% confidence level is closest to its confidence level. This indicates that the interval prediction model is highly reliable at the 85% confidence level: it maintains prediction accuracy without losing practical significance through an excessively wide interval. Therefore, the cost interval at the 85% confidence level should be selected.
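A minimal sketch of the PICP computation used in this section; the argument names are illustrative, with one lower/upper bound per sample:

```python
import numpy as np

def picp(y_true, lower, upper):
    """Prediction interval coverage probability: share of actual values that
    fall inside their prediction intervals [lower_i, upper_i]."""
    y_true = np.asarray(y_true, dtype=float)
    inside = (y_true >= np.asarray(lower, dtype=float)) & \
             (y_true <= np.asarray(upper, dtype=float))
    return inside.mean()

# e.g. picp(actual_costs, lower_85, upper_85) should return about 0.8857
# for the 85% intervals reported in the paper.
```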

5. Conclusion

The investment planning of transmission line projects is of great significance for improving the investment benefit of power grid projects, and strengthening the evaluation of investment planning is an important means of determining investment rationally. It is therefore necessary to move cost prediction from point prediction to interval prediction to improve the compatibility and reliability of investment planning evaluation. This paper proposes a reasonable cost-level prediction model based on PCA, LSSVM, and KDE. PCA is used to screen the transmission line project cost data and reduce their dimension, yielding principal components that essentially describe the factors affecting the cost and serve as the input set of the prediction model. The theoretically mature LSSVM point prediction model is then used to determine the nonlinear mapping between the engineering characteristics of transmission lines and the cost, and the model is trained and used for prediction. The error of the point prediction model is analyzed, its probability density function is estimated by the KDE method, and the corresponding cost intervals are obtained for different confidence levels. Finally, the accuracy and reliability of the interval prediction model are verified by calculating the PICP of the cost intervals at different confidence levels. The results show that the PICP of the PCA-LSSVM-KDE cost interval prediction model is 88.57% at the 85% confidence level, which guarantees sufficient accuracy while retaining strong practical significance.

In summary, the reasonable cost-level prediction model based on PCA-LSSVM-KDE proposed in this paper offers good compatibility and reliability for transmission line project cost prediction and has strong practical significance and good application prospects in the investment planning and evaluation of transmission line projects.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This study was supported by the National Natural Science Foundation of China (NSFC) (71501071), Beijing Social Science Fund (16YJC064), and Fundamental Research Funds for the Central Universities (2017MS059 and 2018ZD14).

References

  1. Z. Z. Han, “Study on transmission line cost control line based on Monte Carlo simulation,” China Power Enterprise Management, vol. 15, pp. 92–96, 2016.
  2. W. Liu, B. He, and Y. P. Wang, “Deepening analysis of universal cost of overhead transmission line project,” China Power Enterprise Management, vol. 3, pp. 22–24, 2016.
  3. W. Ji and S. M. Abourizk, “Data-driven simulation model for quality-induced rework cost estimation and control using absorbing Markov chains,” Journal of Construction Engineering and Management, vol. 144, no. 8, Article ID 04018078, 2018.
  4. A. Lesniak and M. Juszczyk, “Prediction of site overhead costs with the use of artificial neural network based model,” Archives of Civil and Mechanical Engineering, vol. 18, no. 3, pp. 973–982, 2018.
  5. A. Bhargava, S. Labi, S. Chen, T. U. Saeed, and K. C. Sinha, “Predicting cost escalation pathways and deviation severities of infrastructure projects using risk-based econometric models and Monte Carlo simulation,” Computer-Aided Civil and Infrastructure Engineering, vol. 32, no. 8, pp. 620–640, 2017.
  6. A. Lesniak and K. Zima, “Cost calculation of construction projects including sustainability factors using the case based reasoning (CBR) method,” Sustainability, vol. 10, no. 5, p. 1608, 2018.
  7. M. Juszczyk, A. Lesniak, and K. Zima, “ANN based approach for estimation of construction costs of sports fields,” Complexity, vol. 2018, Article ID 7952434, 11 pages, 2018.
  8. V. N. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, NY, USA, 1995.
  9. C. Aldrich and L. Auret, “Statistical learning theory and kernel-based methods,” in Unsupervised Process Monitoring and Fault Diagnosis with Machine Learning Methods, pp. 117–181, Springer, London, UK, 2013.
  10. L. N. Jiang, Research on the Predict of the Construction Cost Based on Support Vector Machine, Hebei University of Engineering, Handan, China, 2009.
  11. R. Dong, J. Xu, and B. Lin, “ROI-based study on impact factors of distributed PV projects by LSSVM-PSO,” Energy, vol. 124, pp. 336–349, 2017.
  12. R. G. Gorjaei, R. Songolzadeh, M. Torkaman, M. Safari, and G. Zargar, “A novel PSO-LSSVM model for predicting liquid rate of two phase flow through wellhead chokes,” Journal of Natural Gas Science and Engineering, vol. 24, pp. 228–237, 2015.
  13. Y. Liang, D. Niu, M. Ye, W.-C. Hong et al., “Short-term load forecasting based on wavelet transform and least squares support vector machine optimized by improved cuckoo search,” Energies, vol. 9, no. 10, p. 827, 2016.
  14. A. Q. Kang, Q. Tan, X. Yuan, X. Lei, and Y. Yuan, “Short-term wind speed prediction using EEMD-LSSVM model,” Advances in Meteorology, vol. 2017, Article ID 6856139, 22 pages, 2017.
  15. J. Kong, X. Y. Cao, and F. Xiao, “Research on cost forecasting model of power transmission and transformation project based on support vector machine,” Modern Electronics Technique, vol. 41, no. 4, pp. 127–130, 2018.
  16. D. Wang, “Application of Markov chain in comprehensive cost index prediction of power transmission and transformation engineering,” Engineering Cost Management, vol. 4, pp. 41–44, 2014.
  17. Y. Lu, D. X. Niu, J. P. Qiu, and W. Liu, “Prediction technology of power transmission and transformation project cost based on the decomposition-integration,” Mathematical Problems in Engineering, vol. 2015, Article ID 651878, 11 pages, 2015.
  18. X. H. Wang, W. N. Wen, Y. C. Lu, and D. Xu, “EEMD-BP based on the cost of power transmission and transformation project uncertainty factor forecast,” China Power Enterprise Management, vol. 6, pp. 79–84, 2016.
  19. T. Yi, Z. Yan, H. Lv et al., “Unit investment prediction of transmission line projects: a neural network approach,” Journal of the Balkan Tribological Association, vol. 22, no. 2A, pp. 1659–1668, 2016.
  20. X. Wang, L. An, and Y. Zhang, “Cost prediction of power transmission and transformation project based on the combined variable weight model of REGR-WNN,” China Power Enterprise Management, vol. 13, pp. 93–96, 2016.
  21. M. A. Grounds, S. Joslyn, and K. Otsuka, “Probabilistic interval forecasts: an individual differences approach to understanding forecast communication,” Advances in Meteorology, vol. 2017, Article ID 3932565, 18 pages, 2017.
  22. R. K. Samal and M. Tripathy, “Estimating wind speed probability distribution based on measured data at Burla in Odisha, India,” Energy Sources, Part A: Recovery, Utilization, and Environmental Effects, vol. 41, no. 8, pp. 918–930, 2019.
  23. F. Fan, K. Bell, and D. Infield, “Transient-state real-time thermal rating forecasting for overhead lines by an enhanced analytical method,” Electric Power Systems Research, vol. 167, pp. 213–221, 2019.
  24. F. Amara, K. Agbossou, Y. Dubé, S. Kelouwani, A. Cardenas, and J. Bouchard, “Household electricity demand forecasting using adaptive conditional density estimation,” Energy and Buildings, vol. 156, pp. 271–280, 2017.
  25. J. A. K. Suykens, T. Van Gestel, J. De Brabanter, B. De Moor, and J. Vandewalle, Least Squares Support Vector Machines, World Scientific, Singapore, 2002.
  26. Z.-W. Li and P. He, “Data-based optimal bandwidth for kernel density estimation of statistical samples,” Communications in Theoretical Physics, vol. 70, no. 6, pp. 728–734, 2018.
  27. T. Yi, H. Zheng, Y. Tian, and J.-P. Liu, “Intelligent prediction of transmission line project cost based on least squares support vector machine optimized by particle swarm optimization,” Mathematical Problems in Engineering, vol. 2018, Article ID 5458696, 11 pages, 2018.

Copyright © 2019 Zhao Xue-hua et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

