
Abstract and Applied Analysis

Volume 2014 (2014), Article ID 178313, 9 pages

http://dx.doi.org/10.1155/2014/178313

## Radial Basis Function Neural Network Based on an Improved Exponential Decreasing Inertia Weight-Particle Swarm Optimization Algorithm for AQI Prediction

^{1}School of Information and Communication Engineering, North University of China, Taiyuan, Shanxi 030051, China
^{2}Department of Mathematics, North University of China, Taiyuan, Shanxi 030051, China

Received 17 April 2014; Revised 8 July 2014; Accepted 9 July 2014; Published 17 July 2014

Academic Editor: Suohai Fan

Copyright © 2014 Jinna Lu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper proposes a novel radial basis function (RBF) neural network model optimized by an exponential decreasing inertia weight particle swarm optimization (EDIW-PSO) algorithm. Building on existing inertia weight decreasing strategies, we propose a new Exponential Decreasing Inertia Weight (EDIW) to improve the PSO algorithm, and we use the modified EDIW-PSO algorithm to determine the centers, widths, and connection weights of the RBF neural network. To assess the performance of the proposed EDIW-PSO-RBF model, we predict the daily air quality index (AQI) of Xi'an and obtain improved results.

#### 1. Introduction

The radial basis function (RBF) neural network is an effective feed-forward neural network [1] with good best-approximation and global-optimum properties. It has been broadly used in many applications, such as function approximation, classification, regression, prediction, and signal processing [2–6]. The RBF neural network architecture has three layers: an input layer, a hidden layer, and an output layer. The input layer is composed of the input vectors. Before the input vectors are fed into the network, data preprocessing, such as normalization, should be done; this processing can also be carried out in the input layer.

The hidden layer is composed of hidden neurons, the number of which is determined by the problem at hand. RBF networks differ from other types of neural networks mainly in the hidden neurons [7, 8]. Each hidden neuron has a radial basis function, a center-symmetric nonlinear function with local distribution, characterized by a center position and a width parameter. Once the center and width are determined, the input vectors are mapped to the hidden space by the nonlinear mapping $\mathbb{R}^{n} \to \mathbb{R}^{m}$. Suppose that the input layer has $n$ input units and the hidden layer has $m$ radial basis functions. The output of the $j$th hidden neuron is expressed as
$$h_{j}(x_{i}) = \varphi\!\left(\frac{\lVert x_{i} - c_{j} \rVert}{\sigma_{j}}\right), \quad j = 1, 2, \ldots, m, \tag{1}$$
where the overall input sample is $X = (x_{1}, x_{2}, \ldots, x_{N})$ and each $x_{i} = (x_{i1}, x_{i2}, \ldots, x_{in})^{T}$ is an $n$-dimensional input vector. $c_{j}$ and $\sigma_{j}$ are, respectively, the center and the width of the $j$th hidden neuron, and $c_{j}$ is an $n$-dimensional vector $c_{j} = (c_{j1}, c_{j2}, \ldots, c_{jn})^{T}$. $\lVert \cdot \rVert$ is the Euclidean norm, usually taken as the 2-norm. $\varphi(\cdot)$ is the radial basis function; it can take a variety of forms, such as the B-spline RBF [9], the thin-plate spline RBF [10], the Cauchy RBF [11], and the Gaussian RBF [12]. Among them the Gaussian function is the most widely used, as in
$$\varphi(r) = \exp\!\left(-\frac{r^{2}}{2}\right), \tag{2}$$
where $r = \lVert x_{i} - c_{j} \rVert / \sigma_{j}$ is the variable of the radial basis function $\varphi(\cdot)$.

The output layer implements the linear mapping $\mathbb{R}^{m} \to \mathbb{R}^{s}$, where $s$ is the number of output neurons. The output is a linear combination of the outputs of the radial basis functions through the connection weights linking the hidden layer and the output layer:
$$y_{k}(x_{i}) = \sum_{j=1}^{m} w_{jk}\, h_{j}(x_{i}), \quad k = 1, 2, \ldots, s, \tag{3}$$
where $w_{jk}$ is the connection weight between the $j$th hidden neuron and the $k$th output of the network.
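The hidden-layer activations and the output-layer linear combination described above can be sketched as a single forward pass. This is a minimal illustration under stated assumptions, not the authors' code: the function name and array shapes are hypothetical, and the Gaussian basis from the text is used.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Forward pass of a Gaussian RBF network (illustrative sketch).

    x:       (n,)   input vector
    centers: (m, n) one center per hidden neuron
    widths:  (m,)   one width per hidden neuron
    weights: (m, s) hidden-to-output connection weights
    """
    # Hidden layer: Gaussian radial basis activations h_j(x)
    dists = np.linalg.norm(x - centers, axis=1)       # Euclidean 2-norm per neuron
    h = np.exp(-dists**2 / (2.0 * widths**2))
    # Output layer: linear combination through the connection weights
    return h @ weights                                 # (s,)
```

An input located exactly at a center produces the maximum activation 1 for that neuron, so with a single hidden neuron of unit weight the network output equals the Gaussian of the distance to the center.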

The RBF neural network thus contains three groups of parameters: the centers $c_{j}$, the widths $\sigma_{j}$, and the connection weights $w_{jk}$. To optimize them, many algorithms have been proposed, such as the orthogonal least squares (OLS) algorithm [13], the expectation-maximization (EM) algorithm [14], the gradient descent algorithm [15], the K-means clustering algorithm [16], the genetic algorithm (GA) [17], the ant colony optimization (ACO) algorithm [18], the particle swarm optimization (PSO) algorithm [19, 20], and the support vector machine (SVM) and extreme learning machine (ELM) [21]. Compared with other algorithms, the PSO algorithm has many advantages: stable convergence, few parameters, and fast convergence speed. Many researchers have successfully applied the PSO algorithm to the learning and structure improvement of RBF neural networks for application problems.

The particle swarm optimization (PSO) algorithm was put forward by Eberhart and Kennedy in 1995 [22], initially motivated by the intelligent collective behavior of birds in the foraging process. In the PSO algorithm, each bird, also called a particle, has a position and a velocity and searches for the optimal solution by updating both. In the following years, many researchers introduced the "inertia weight" and proposed many dynamic variations of PSO based on it [23–26]. Different inertia weight strategies imply different incremental changes in velocity per time step, that is, different degrees of exploration of new search areas in pursuit of a better solution. In this paper, we propose an Exponential Decreasing Inertia Weight PSO (EDIW-PSO) algorithm to obtain the optimal parameters of the RBF network.

This paper establishes an RBF network model based on the EDIW-PSO algorithm. Section 2 introduces the basic PSO algorithm and several variants of the inertia weight. Section 2.3 gives the improved EDIW-PSO algorithm based on an Exponential Decreasing Inertia Weight strategy. Section 3 presents the methodology of the proposed EDIW-PSO-RBF structure. Section 4 reports an experiment comparing this methodology with three other models. The last section summarizes the conclusions of this study.

#### 2. Dynamic Particle Swarm Optimizing Algorithm

##### 2.1. Basic Particle Swarm Optimizing Algorithm

The PSO algorithm is a parallel evolutionary computation algorithm. In PSO, each potential solution to an optimization problem is treated as a bird, also called a particle. The set of particles, known as a swarm, is flown through the $D$-dimensional search space of the problem. Each particle changes its own position and velocity based on the experience of the particle itself and that of its neighbors. In the searching process, every particle is connected to and able to share information with every other particle in the swarm; this swarm communication topology is known as a global neighborhood, as described in [27]. This information-sharing mechanism keeps the overall consistency needed for the whole swarm to reach the global solution.

The position and velocity of the $i$th particle in the $D$-dimensional solution space are denoted as $X_{i} = (x_{i1}, x_{i2}, \ldots, x_{iD})$ and $V_{i} = (v_{i1}, v_{i2}, \ldots, v_{iD})$, respectively, where $N$ is the size of the swarm, $i = 1, 2, \ldots, N$, $d = 1, 2, \ldots, D$, and $x_{\min}$ and $x_{\max}$ are the lower and upper bounds of the $d$th dimension.

According to a preset fitness function, we obtain the personal best position (also called the local best fitness) of the $i$th particle, denoted $P_{i} = (p_{i1}, p_{i2}, \ldots, p_{iD})$, and the global best position (also called the global best fitness) found so far by all particles of the swarm, denoted $P_{g} = (p_{g1}, p_{g2}, \ldots, p_{gD})$. At each iteration, the $i$th particle updates its velocity and position as follows:
$$\begin{aligned} v_{id}(t+1) &= \omega\, v_{id}(t) + c_{1} r_{1} \left(p_{id} - x_{id}(t)\right) + c_{2} r_{2} \left(p_{gd} - x_{id}(t)\right), \\ x_{id}(t+1) &= x_{id}(t) + v_{id}(t+1), \end{aligned} \tag{4}$$
where $c_{1}$ and $c_{2}$ are acceleration factors (positive constants), $r_{1}$ and $r_{2}$ are random numbers ranging from 0 to 1, and $\omega$ is the inertia weight on the interval $[0, 1]$, keeping the memory of the old velocity vector of the same particle. When $\omega$ is a constant [22], it leads to a static PSO; when $\omega$ varies with the iterations, it leads to a dynamic PSO.
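The velocity and position update above can be sketched for a single particle. The function name, the clamping of velocity and position to fixed bounds, and the default acceleration factors are illustrative assumptions, not the authors' implementation.

```python
import random

def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0, bounds=(-1.0, 1.0)):
    """One velocity/position update for a single particle.

    x, v, pbest, gbest: lists of floats of equal dimension.
    w: inertia weight; c1, c2: acceleration factors.
    """
    lo, hi = bounds
    new_x, new_v = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()   # r1, r2 in [0, 1)
        vd = w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        vd = max(lo, min(hi, vd))                   # clamp velocity to bounds
        xd = max(lo, min(hi, x[d] + vd))            # clamp position to bounds
        new_v.append(vd)
        new_x.append(xd)
    return new_x, new_v
```

If a particle already sits at both its personal and global best with zero velocity, the update leaves it in place, which matches the attraction-based form of the equations.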

##### 2.2. Several Variants of Inertia Weight

The inertia weight determines the proportion of the current particle velocity that is retained. A large inertia weight leads to high speed and a strong global searching ability, but the global optimal solution may be missed. In contrast, a small inertia weight gives the particle strong exploitation capability, but it needs a long search time for fine-tuning toward the local optimal solution.

By changing the inertia weight dynamically, the search capability is dynamically adjusted. Many researchers have proposed variants of PSO according to the impact of $\omega$. Most of these variants use time-varying inertia weight strategies in which the value of the inertia weight varies with the iteration number; these methods can be either linear or nonlinear. In 1998, Shi and Eberhart introduced a Linearly Decreasing Inertia Weight (LDIW) strategy given by the following formula [23]:
$$\omega(t) = \omega_{\max} - \frac{t}{t_{\max}} \left(\omega_{\max} - \omega_{\min}\right), \tag{5}$$
where $t$ is the current iterative step, $t_{\max}$ is the maximum number of iterative steps the PSO is allowed to continue, $\omega_{\max}$ is the initial inertia weight, and $\omega_{\min}$ is the final inertia weight.

Chatterjee and Siarry proposed a Nonlinear Decreasing Inertia Weight (NDIW) strategy to modulate the inertia weight adaptation with time for improved PSO performance [24]. The proposed adaptation of $\omega$ is given as
$$\omega(t) = \left(\frac{t_{\max} - t}{t_{\max}}\right)^{n} \left(\omega_{\max} - \omega_{\min}\right) + \omega_{\min}, \tag{6}$$
where $n$ is the nonlinear modulation index. With $n = 1$, the system reduces to the linearly adaptive inertia weight proposed by Shi and Eberhart [23]. After evaluating this strategy on the well-known Sphere function, they concluded that $n = 1.2$ was a typical value and that $n$ could be suitably determined for each case individually. When $n > 1$, a higher value of $\omega$ during the early iterations facilitates taking larger steps in the solution space, and during the later iterations $\omega$ decreases more rapidly than in the linear case, which is very suitable for determining the optimal region among the already discovered promising suboptimal regions.

In [25], Arumugam and Rao proposed a global-local best inertia weight (GLbestIW) method in which the inertia weight is neither set to a constant value nor set as a linearly decreasing time-varying function. Instead, the inertia weight is determined by the ratio of the global best fitness to the local best fitness in each iteration:
$$\omega_{i}(t) = 1.1 - \frac{\mathrm{gbest}(t)}{\mathrm{pbest}_{i}(t)}, \tag{7}$$
where $\mathrm{gbest}(t)$ is the global best fitness at the $t$th iteration and $\mathrm{pbest}_{i}(t)$ is the local best fitness of the $i$th particle at the $t$th iteration.
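The three inertia weight strategies surveyed above can be sketched as small functions. The default values of $\omega_{\max}$ and $\omega_{\min}$, and the 1.1 constant in the GLbestIW form, follow common formulations of the cited strategies and should be treated as assumptions here.

```python
def ldiw(t, t_max, w_max=0.9, w_min=0.2):
    """Linearly Decreasing Inertia Weight (Shi and Eberhart)."""
    return w_max - (t / t_max) * (w_max - w_min)

def ndiw(t, t_max, n=1.2, w_max=0.9, w_min=0.2):
    """Nonlinear Decreasing Inertia Weight (Chatterjee and Siarry),
    with nonlinear modulation index n; n = 1 reduces to LDIW."""
    return ((t_max - t) / t_max) ** n * (w_max - w_min) + w_min

def glbest_iw(gbest_fitness, pbest_fitness):
    """GLbestIW (Arumugam and Rao): per-particle weight from the
    ratio of the global best fitness to the particle's local best."""
    return 1.1 - gbest_fitness / pbest_fitness
```

For example, `ndiw` with `n=1.0` coincides with `ldiw` at every step, which is the special case noted in the text.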

##### 2.3. The Proposed Inertia Weight Variant

A larger $\omega$ is conducive to finding the global best solution as soon as possible in the early iterative steps but may cause the algorithm to miss the global best solution in later iterative steps, whereas a smaller $\omega$ means slower updating and a longer time spent fine-tuning a local exploration. Hence, in the early iterative steps a larger $\omega$ is needed for coarse global exploration, but in later iterations $\omega$ should decrease for fine-tuning the local exploration. An appropriate inertia weight can help find the best solution in the fewest iterative steps.

A larger inertia weight facilitates global exploration, and a smaller inertia weight tends to facilitate local exploration to fine-tune the current search area. In order to balance global and local exploration, we present a new Exponential Decreasing Inertia Weight (EDIW) strategy, in which the inertia weight decreases exponentially with the iterative step $t$. The proposed adaptation of $\omega$ is given as
$$\omega(t) = \omega_{\min} + \left(\omega_{\max} - \omega_{\min}\right) e^{-\alpha t / t_{\max}}, \tag{8}$$
where $\alpha > 0$ is a controlling parameter governing the convergence rate of the inertia weight. When $t = 0$, $\omega = \omega_{\max}$. When $t = t_{\max}$, $\omega$ can be expressed by the following equation:
$$\omega(t_{\max}) = \omega_{\min} + \left(\omega_{\max} - \omega_{\min}\right) e^{-\alpha}. \tag{9}$$
When $t = t_{\max}$, we obtain different values of the inertia weight by setting varying values of $\alpha$. The results are shown in Table 1. According to Table 1, when $t = t_{\max}$, for the same $\omega_{\max}$ and $\omega_{\min}$, a small $\alpha$ leaves $\omega(t_{\max})$ still far from the final inertia weight $\omega_{\min}$, and the value of $\omega(t_{\max})$ gets closer and closer to $\omega_{\min}$ as $\alpha$ increases. When $\alpha \geq 8$, $\omega(t_{\max})$ is nearly equal to $\omega_{\min}$, so this condition ensures that $\omega$ varies over essentially the full range $[\omega_{\min}, \omega_{\max}]$ as $t$ ranges from 1 to $t_{\max}$.

When $\alpha$ takes different values, different decreasing effects are obtained. For convenience of comparison, we set $\omega_{\max} = 0.9$, $\omega_{\min} = 0.2$, and a fixed $t_{\max}$. The decreasing curves of the inertia weight attained by the proposed EDIW strategy for varying $\alpha$ are shown in Figure 1.

According to Figure 1, for the same $\alpha$, the rate of descent of $\omega$ gradually declines as the iterative step $t$ increases and begins to flatten in the later iterative steps. For different $\alpha$, the rate of descent of $\omega$ in the early iterative steps grows with $\alpha$. A smaller $\alpha$ ensures that $\omega$ does not decrease too fast in the early iterations, but $\omega$ will still be far from $\omega_{\min}$ in the final iteration; for example, when $\alpha = 1$ and $t = t_{\max}$, the inertia weight is 0.4575, which is far from $\omega_{\min}$ (0.2). A larger $\alpha$ is conducive to making $\omega$ decrease quickly to discover the promising suboptimal regions but may cause the algorithm to begin the local search prematurely: for a large $\alpha$, the inertia weight begins to flatten after only a small fraction of the iterative steps. The parameter $\alpha$ should therefore be appropriately selected in the EDIW-PSO algorithm. Based on the analysis above, we choose $\alpha = 8$. During the early iterations $\omega$ then decreases more rapidly than in the linear case, which is very suitable for the algorithm to discover promising suboptimal regions; during the later iterations $\omega$ is fine-tuned, which is conducive to determining the optimal region among the already discovered promising regions.
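The EDIW strategy analyzed above can be sketched as a small function; the function name and default parameter values are illustrative.

```python
import math

def ediw(t, t_max, alpha=8.0, w_max=0.9, w_min=0.2):
    """Exponential Decreasing Inertia Weight:
    starts at w_max when t = 0 and decays toward w_min as t -> t_max,
    with alpha controlling the convergence rate."""
    return w_min + (w_max - w_min) * math.exp(-alpha * t / t_max)
```

With `alpha=1` the final value stays around 0.4575, far from the lower bound 0.2, while with `alpha=8` the final value is nearly 0.2, matching the behavior discussed in the text.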

#### 3. The Proposed RBF Model by EDIW-PSO Algorithm

In this section we use the proposed EDIW-PSO algorithm to determine the optimal structure of the RBFNN and establish the EDIW-PSO-RBF network model. The position vector $\theta$ of each particle is the quantity to be optimized; it represents the RBF centers $c_{j}$, widths $\sigma_{j}$, and connection weights $w_{jk}$, where $j = 1, 2, \ldots, m$ and $k = 1, 2, \ldots, s$. Therefore the dimension of $\theta$ for each particle in the EDIW-PSO algorithm is $D = m \times n + m + m \times s$. We map each $\theta$ to the RBFNN and obtain the prediction output. The fitness function of the EDIW-PSO algorithm is defined as the relative mean square error (RMSE) between the prediction outputs and the actual values in the network training process, so we can minimize the fitness value of the network through the powerful search performance of the EDIW-PSO algorithm. The fitness function is
$$\text{fitness} = \frac{1}{N_{t}\, s} \sum_{p=1}^{N_{t}} \sum_{k=1}^{s} \left(\frac{y_{pk} - \hat{y}_{pk}}{y_{pk}}\right)^{2}, \tag{10}$$
where $y_{pk}$ and $\hat{y}_{pk}$ are the actual value and the prediction value of the $k$th output neuron in the $p$th sample, $N_{t}$ is the number of training samples, and $s$ is the number of output neurons.
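The fitness computation can be sketched as below, assuming the actual and predicted values are flattened over samples and output neurons; the function name is an assumption.

```python
def rmse_fitness(actual, predicted):
    """Relative mean square error used as the PSO fitness.

    actual, predicted: flat sequences of equal length covering all
    training samples and output neurons (actual values must be nonzero).
    """
    n = len(actual)
    return sum(((a - p) / a) ** 2 for a, p in zip(actual, predicted)) / n
```

A perfect prediction yields a fitness of 0, and the PSO search drives this value downward.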

The iteration process of the improved EDIW-PSO-RBF learning algorithm can be described clearly as follows.

*Step 1. *Initialize the relative parameters, including the size of the swarm $N$, the bounds of position and velocity $[x_{\min}, x_{\max}]$ and $[v_{\min}, v_{\max}]$, the acceleration factors $c_{1}$ and $c_{2}$, and the maximum number of iterative steps $t_{\max}$. Initialize $t = 1$; for each particle, randomly select two $D$-dimensional vectors to initialize the position and velocity of the particle, respectively.

*Step 2. *Map the position vector of each particle to the parameters of RBFNN.

*Step 3. *Calculate the fitness value of each particle according to formula (10). Set the current position of each particle as its personal best $P_{i}$. Then take the position with the minimum fitness value as the global best $P_{g}$ of the whole swarm.

*Step 4. *Update the inertia weight according to formula (8). Modify the particle velocity and position according to formula (4).

*Step 5. *Map the new position vector of each particle to the parameters of RBFNN, input training data, and train RBFNN.

*Step 6. *Recalculate the fitness values of the new particles and update $P_{i}$ and $P_{g}$. For each particle, if the current fitness value is better than its previous local best, set the current fitness value as the local best; otherwise, keep the previous local best. For the whole swarm, if the best of all current local bests is better than the previous global best, update the global best; otherwise, keep the previous global best.

*Step 7. *Judge whether the termination condition $t \geq t_{\max}$ is satisfied. If it is met, go to Step 8; otherwise, let $t = t + 1$ and go back to Step 4.

*Step 8. *Record the global best value $P_{g}$; exit the iteration.

*Step 9. *Use the optimal structure of the RBFNN to perform the prediction task.

Apply the above steps until the terminal conditions hold. The flow chart is shown in Figure 2.
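The nine steps above can be condensed into a single training-loop sketch. This is a simplified illustration under stated assumptions, not the authors' implementation: it handles a single output neuron (s = 1), uses a plain mean square error in place of the relative error of the fitness to avoid division by zero on normalized targets, and all names and default values are hypothetical.

```python
import math
import random

def train_ediw_pso_rbf(samples, targets, m=10, swarm=50, t_max=200,
                       alpha=8.0, w_max=0.9, w_min=0.2, c1=2.0, c2=2.0):
    """EDIW-PSO search over the RBF parameters (condensed sketch of Steps 1-8).

    samples: list of n-dimensional input vectors; targets: list of scalars.
    Each particle encodes [centers (m*n) | widths (m) | weights (m)].
    """
    n = len(samples[0])
    dim = m * n + m + m                                  # D = m*n + m + m*s, s = 1

    def predict(theta, x):
        out = 0.0
        for j in range(m):
            c = theta[j * n:(j + 1) * n]
            sigma = abs(theta[m * n + j]) + 1e-9         # keep widths positive
            w = theta[m * n + m + j]
            r2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
            out += w * math.exp(-r2 / (2.0 * sigma ** 2))
        return out

    def fitness(theta):                                   # plain MSE (assumption)
        return sum((predict(theta, x) - y) ** 2
                   for x, y in zip(samples, targets)) / len(samples)

    # Step 1: random positions and velocities in [-1, 1]
    X = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(swarm)]
    V = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(swarm)]
    P = [x[:] for x in X]                                 # personal bests
    pf = [fitness(x) for x in X]                          # Steps 2-3
    g = min(range(swarm), key=lambda i: pf[i])
    G, gf = P[g][:], pf[g]

    for t in range(1, t_max + 1):                         # Steps 4-7
        w = w_min + (w_max - w_min) * math.exp(-alpha * t / t_max)   # EDIW
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                V[i][d] = max(-1.0, min(1.0, V[i][d]))
                X[i][d] = max(-1.0, min(1.0, X[i][d] + V[i][d]))
            f = fitness(X[i])                             # Steps 5-6
            if f < pf[i]:
                P[i], pf[i] = X[i][:], f
                if f < gf:
                    G, gf = X[i][:], f
    return G, gf                                          # Step 8
```

The returned vector `G` is then decoded back into centers, widths, and weights for prediction (Step 9).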

#### 4. Experiment

To assess the prediction accuracy of the proposed EDIW-PSO-RBF model, we choose the daily air quality index (AQI) of Xi'an [28] for time series prediction. In recent years, air pollution has been affecting people's travel and life. The daily AQI is a dimensionless index quantitatively describing air quality, calculated from the following six indicators: sulfur dioxide (SO$_2$), nitrogen dioxide (NO$_2$), particulate matter PM$_{10}$ (particle size less than or equal to 10 microns), particulate matter PM$_{2.5}$ (particle size less than or equal to 2.5 microns), carbon monoxide (CO), and ozone (O$_3$). Among them, SO$_2$, NO$_2$, PM$_{10}$, PM$_{2.5}$, and CO are all 24-hour average densities; O$_3$ is the 8-hour moving average density. We choose 400 sets of data from 2013.1.1 to 2014.2.5 as training data and 5 sets of data from 2014.2.6 to 2014.2.10 as test data. Normalization is applied to all the data before they are used in the model. We adopt the mapminmax function to normalize the data to the range $[-1, 1]$ as follows:
$$y = \frac{\left(y_{\max} - y_{\min}\right)\left(x - x_{\min}\right)}{x_{\max} - x_{\min}} + y_{\min}, \tag{11}$$
where $x$ is the original data before normalization, $x_{\min}$ and $x_{\max}$ are the minimum and maximum values before normalization, $y$ is the data after normalization, and $y_{\min}$ and $y_{\max}$ are the minimum and maximum values after normalization, which are $-1$ and $1$, respectively.
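The normalization above can be sketched as a small helper. The name mirrors MATLAB's mapminmax, but this Python version is an illustrative assumption.

```python
def mapminmax(data, y_min=-1.0, y_max=1.0):
    """Min-max normalization of a sequence to [y_min, y_max]."""
    x_min, x_max = min(data), max(data)
    scale = (y_max - y_min) / (x_max - x_min)
    return [y_min + (x - x_min) * scale for x in data]
```

The minimum of the input maps to −1 and the maximum to 1, with all other values scaled linearly in between.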

In this section, we assess the effectiveness of the proposed EDIW-PSO-RBF model by comparing it with three other inertia weight variants: the LDIW-PSO-RBF model [23], the NDIW-PSO-RBF model [24], and the GLbestIW-PSO-RBF model [25]. Firstly, we build an RBF network consisting of 7 input neurons and 1 output neuron. The 7 input neurons correspond to the six indicators and the air quality index, and the 1 output neuron is the next day's air quality index. The number of neurons in the hidden layer is set to 10. Therefore, the dimension of each particle in the modified PSO algorithm is $D = 10 \times 7 + 10 + 10 \times 1 = 90$.

Secondly, several parameters in the PSO simulation must be specified. In the proposed EDIW-PSO-RBF model, the value of $\alpha$ is set to 8. In the NDIW-PSO-RBF model, the nonlinear modulation index $n$ is set to 1.2. In all four models, the acceleration factors $c_{1}$ and $c_{2}$ are fixed at 2; the minimum velocity and minimum position of every particle are both set to $-1$, and the maximum velocity and maximum position are both set to $1$; the maximum number of iterations is set to 1000; and the population size is set to 50.

To assess the performance of the four different models, the mean square error (MSE), relative mean square error (RMSE), and mean absolute percentage error (MAPE) are used as criteria, defined as
$$\text{MSE} = \frac{1}{n} \sum_{t=1}^{n} \left(x_{t} - \hat{x}_{t}\right)^{2}, \quad \text{RMSE} = \frac{1}{n} \sum_{t=1}^{n} \left(\frac{x_{t} - \hat{x}_{t}}{x_{t}}\right)^{2}, \quad \text{MAPE} = \frac{1}{n} \sum_{t=1}^{n} \left|\frac{x_{t} - \hat{x}_{t}}{x_{t}}\right| \times 100\%, \tag{12}$$
where $x_{t}$ and $\hat{x}_{t}$ denote the actual value and the network output value on the $t$th day, respectively.
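The three criteria can be sketched directly; note that RMSE here denotes the relative (not root) mean square error, following the text.

```python
def mse(actual, pred):
    """Mean square error."""
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    """Relative mean square error (not the root mean square error)."""
    return sum(((a - p) / a) ** 2 for a, p in zip(actual, pred)) / len(actual)

def mape(actual, pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)
```

For instance, actual values of 100 and 200 predicted as 110 and 190 give an MSE of 100 and a MAPE of 7.5%.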

The actual daily air quality index (AQI) values of Xi'an from January 2, 2013 to February 5, 2014 are shown in Figure 3. In this experiment, the proposed EDIW-PSO method, the LDIW-PSO method, the NDIW-PSO method, and the GLbestIW-PSO method are all used to train RBFNNs. The global best fitness values of the four methods during training are given in Figure 4, which shows that the proposed EDIW-PSO method has the fastest convergence rate and the lowest fitness value among the four PSO methods.

When the fitness reaches its minimum under EDIW-PSO, the resulting optimal parameters of the RBFNN, namely, the centers $c_{j}$, widths $\sigma_{j}$, and connection weights $w_{jk}$, are as shown in Table 2. Figure 5 shows the trained output curves of the four methods for the daily AQI of Xi'an, and Figure 6 plots the absolute errors between the trained outputs and the actual values for the four methods. The values of MSE, RMSE, and MAPE (%) of the trained output by the four methods are shown in Table 3; the three errors of the trained output by EDIW-PSO are all the smallest among the four methods.

Based on the optimal RBFNN parameters trained by each of the four methods, we predict the daily AQI of Xi'an for the following five days, from 2014.2.6 to 2014.2.10. Table 4 shows the predicted outputs of the LDIW-PSO-RBFNN, NDIW-PSO-RBFNN, GLbestIW-PSO-RBFNN, and EDIW-PSO-RBFNN models. Table 5 shows the values of MSE, RMSE, and MAPE (%) of the predicted output by the four models according to formula (12); the three errors of the EDIW-PSO-RBF model are all the smallest among the four models.

#### 5. Conclusion

In this paper, we present and discuss an improved EDIW-PSO-RBF model for prediction problems. Based on the EDIW-PSO algorithm, we optimize the centers, widths, and connection weights of the radial basis function (RBF) neural network. Furthermore, the EDIW-PSO-RBF model is applied to daily air quality index (AQI) prediction and compared with the LDIW-PSO-RBF, NDIW-PSO-RBF, and GLbestIW-PSO-RBF models. In the process of optimizing the RBF network, the fitness value obtained by the proposed EDIW-PSO method is smaller than the values obtained by the other three methods, and on both the training data and the prediction test data, the MSE, RMSE, and MAPE (%) obtained by the improved EDIW-PSO-RBF model are all smaller than those of the other three models. From the simulation results it can be observed that PSO with the proposed inertia weight, EDIW, provides better results than the PSO methods with LDIW, NDIW, and GLbestIW. The proposed EDIW-PSO-RBF model can be satisfactorily applied to other prediction problems.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgment

This work is fully supported by the National Natural Science Foundation of China under Grant no. 61275120.

#### References

- D. S. Broomhead and D. Lowe, “Radial basis functions, multi-variable functional interpolation and adaptive networks,” Royal Signals and Radar Establishment, Malvern, UK, 1988.
- G. Huang, P. Saratchandran, and N. Sundararajan, “A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation,” *IEEE Transactions on Neural Networks*, vol. 16, no. 1, pp. 57–67, 2005.
- S. Albrecht, J. Busch, M. Kloppenburg, F. Metze, and P. Tavan, “Generalized radial basis function networks for classification and novelty detection: self-organization of optimal Bayesian decision,” *Neural Networks*, vol. 13, no. 10, pp. 1075–1093, 2000.
- H. Han, Q. Chen, and J. Qiao, “An efficient self-organizing RBF neural network for water quality prediction,” *Neural Networks*, vol. 24, no. 7, pp. 717–725, 2011.
- M. B. Li, G. B. Huang, P. Saratchandran, and N. Sundararajan, “Performance evaluation of GAP-RBF network in channel equalization,” *Neural Processing Letters*, vol. 22, no. 2, pp. 223–233, 2005.
- A. Zhang, C. Chen, and H. R. Karimi, “A new adaptive LSSVR with on-line multikernel RBF tuning to evaluate analog circuit performance,” *Abstract and Applied Analysis*, vol. 2013, Article ID 231735, 7 pages, 2013.
- B. Walczak and D. L. Massart, “Local modelling with radial basis function networks,” *Chemometrics and Intelligent Laboratory Systems*, vol. 50, no. 2, pp. 179–198, 2000.
- L. J. Herrera, H. Pomares, I. Rojas, A. Guillén, G. Rubio, and J. Urquiza, “Global and local modelling in RBF networks,” *Neurocomputing*, vol. 74, no. 16, pp. 2594–2602, 2011.
- A. Saranli and B. Baykal, “Complexity reduction in radial basis function (RBF) networks by using radial B-spline functions,” *Neurocomputing*, vol. 18, no. 1–3, pp. 183–194, 1998.
- A. Punzi, A. Sommariva, and M. Vianello, “Meshless cubature over the disk using thin-plate splines,” *Journal of Computational and Applied Mathematics*, vol. 221, no. 2, pp. 430–436, 2008.
- J. Z. H. Zhang and H. Y. Li, “A reconstruction approach to CT with Cauchy RBFs network,” *Advances in Neural Networks*, vol. 3174, pp. 531–536, 2004.
- M. A. T. Figueiredo, “On Gaussian radial basis function approximations: interpretation, extensions, and learning strategies,” in *Proceedings of the IEEE 15th International Conference on Pattern Recognition*, vol. 2, pp. 618–621, Barcelona, Spain, 2000.
- S. Chen, C. F. N. Cowan, and P. M. Grant, “Orthogonal least squares learning algorithm for radial basis function networks,” *IEEE Transactions on Neural Networks*, vol. 2, no. 3, pp. 302–309, 1991.
- R. Langari, L. Wang, and J. Yen, “Radial basis function networks, regression weights, and the expectation-maximization algorithm,” *IEEE Transactions on Systems, Man, and Cybernetics A: Systems and Humans*, vol. 27, no. 5, pp. 613–623, 1997.
- N. B. Karayiannis, “Reformulated radial basis neural networks trained by gradient descent,” *IEEE Transactions on Neural Networks*, vol. 10, no. 3, pp. 657–671, 1999.
- P. S. Grabusts, “A study of clustering algorithm application in RBF neural networks,” *Scientific Proceedings of Riga Technical University*, pp. 50–57, 2001.
- S. A. Billings and G. L. Zheng, “Radial basis function network configuration using genetic algorithms,” *Neural Networks*, vol. 8, no. 6, pp. 877–890, 1995.
- C. Man, X. Li, and L. Zhang, “Radial basis function neural network based on ant colony optimization,” in *Proceedings of the International Conference on Computational Intelligence and Security Workshops (CIS '07)*, pp. 59–62, Harbin, China, December 2007.
- S. Chen, X. Hong, B. L. Luk, and C. J. Harris, “Non-linear system identification using particle swarm optimisation tuned radial basis function models,” *International Journal of Bio-Inspired Computation*, vol. 1, no. 4, pp. 246–258, 2009.
- D. Wu, K. Warwick, Z. Ma et al., “Prediction of Parkinson's disease tremor onset using a radial basis function neural network based on particle swarm optimization,” *International Journal of Neural Systems*, vol. 20, no. 2, pp. 109–116, 2010.
- G. B. Huang, H. Zhou, X. Ding, and R. Zhang, “Extreme learning machine for regression and multiclass classification,” *IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics*, vol. 42, no. 2, pp. 513–529, 2012.
- R. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory,” in *Proceedings of the 6th International Symposium on Micro Machine and Human Science*, pp. 39–43, Nagoya, Japan, October 1995.
- Y. Shi and R. Eberhart, “A modified particle swarm optimizer,” in *Proceedings of the 1998 IEEE International Conference on Evolutionary Computation (ICEC '98)*, pp. 69–73, May 1998.
- A. Chatterjee and P. Siarry, “Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization,” *Computers and Operations Research*, vol. 33, no. 3, pp. 859–871, 2006.
- M. S. Arumugam and M. V. C. Rao, “On the performance of the particle swarm optimization algorithm with various inertia weight variants for computing optimal control of a class of hybrid systems,” *Discrete Dynamics in Nature and Society*, vol. 2006, Article ID 79295, 17 pages, 2006.
- L. Kaiyou, Q. Yuhui, and H. Yi, “A new adaptive well-chosen inertia weight strategy to automatically harmonize global and local search ability in particle swarm optimization,” in *Proceedings of the 1st International Symposium on Systems and Control in Aerospace and Astronautics (ISSCAA '06)*, pp. 977–980, January 2006.
- R. Chen and C. Wang, “Project scheduling heuristics-based standard PSO for task-resource assignment in heterogeneous grid,” *Abstract and Applied Analysis*, vol. 2011, Article ID 589862, 20 pages, 2011.
- http://www.xianemc.gov.cn/.