Mathematical Problems in Engineering

Volume 2014, Article ID 761403, 14 pages

http://dx.doi.org/10.1155/2014/761403

## A Novel Adaptive Elite-Based Particle Swarm Optimization Applied to VAR Optimization in Electric Power Systems

^{1}Department of Electrical Engineering, Chung Yuan Christian University, Chung Li City 320, Taiwan

^{2}Department of Electrical Engineering, National Central University, Chung Li City 320, Taiwan

^{3}Department of Applied Electronics Technology, National Taiwan Normal University, Taipei 320, Taiwan

Received 5 February 2014; Accepted 30 March 2014; Published 22 May 2014

Academic Editor: Ker-Wei Yu

Copyright © 2014 Ying-Yi Hong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Particle swarm optimization (PSO) has been successfully applied to solve many practical engineering problems. However, more efficient strategies are needed to coordinate global and local searches in the solution space when the studied problem is highly nonlinear and high-dimensional. This work proposes a novel adaptive elite-based PSO approach. The adaptive elite strategies involve the following two tasks: (1) appending the mean search to the original approach and (2) pruning/cloning particles. The mean search, leading to stable convergence, helps the iterative process coordinate between the global and local searches. The mean of the particles and the standard deviation of the distances between pairs of particles are utilized to prune distant particles. The best particle is cloned and it replaces the pruned distant particles in the elite strategy. To evaluate the performance and generality of the proposed method, four benchmark functions are tested using the proposed method, traditional PSO, chaotic PSO, differential evolution, and genetic algorithm. Finally, a realistic loss minimization problem in an electric power system is studied to show the robustness of the proposed method.

#### 1. Introduction

Particle swarm optimization (PSO) is a population-based optimization method [1]. PSO attempts to mimic the goal-searching behavior of biological swarms. A possible solution of the optimization problem is represented by a particle. The PSO algorithm requires iterative computation, which consists of updating the individual velocities of all particles at their corresponding positions. The PSO algorithm has some merits: it does not need the crossover and mutation operations used in genetic algorithms; it has a memory, so the elite solutions can be retained by all particles (solutions); and the computer code of PSO is easy to develop. PSO has been successfully applied to solve many problems, for example, reconfiguration of shipboard power systems [2], economic dispatch [3], harmonic estimation [4], harmonic cancellation [5], and energy-saving load regulation [6].

PSO is designed to conduct global search (exploration) and local search (exploitation) in the solution space. The former explores different extreme points of the search space by jumping among extreme particles. In contrast, the latter searches for the extreme particle in a local region. The particles ultimately converge to the same extreme position. However, when a problem involves a great number of unknowns, the PSO generally requires a large number of particles in order to gain a global optimal solution (position). Consequently, achieving the coordination between the global search and the local search becomes more difficult.

To overcome the limitation described above, van den Bergh and Engelbrecht presented the cooperative PSO (COPSO) based on dimension partitioning [7]. Nickabadi et al. focused on several well-known inertia weighting strategies and gave a first insight into various velocity control approaches [8]. The adaptive PSO (APSO) presented by Zhan et al. utilizes the information on population distribution to identify the search status of particles; its learning strategy tries to find an elitist and then searches for the global best position in each iterative step [9]. Caponetto et al. presented a chaos algorithm to compute the inertia weight of the preceding position of a particle and a self-tuning method to compute the learning factors [10]. Their method introduces a chaos parameter, which is determined by a logistic map. The learning factors are adapted according to the fitness value of the best solution during the iterations. To enhance the scalability of PSO, José and Enrique presented two mechanisms [11]. First, a velocity modulation is designed to guide the particles to the region of interest. Second, a restarting modulation redirects the particles to promising areas in the search space. Deng et al. presented a two-stage hybrid swarm intelligence optimization algorithm incorporating the genetic algorithm, PSO, and ant colony optimization to conduct rough searching and detailed searching, making use of the advantages of parallelism and positive feedback [12]. Li et al. presented a cooperative quantum-behaved PSO, which first produces several particles using the Monte Carlo method; these particles then cooperate with one another to converge to the global optimum [13]. Recently, Arasomwan and Adewumi tried many chaotic maps, in addition to the traditional logistic map, in an attempt to enhance the performance of PSO [14]. Yin et al. incorporated the crossover and mutation operators into PSO to enhance diversity in the population; in addition, a local search strategy based on constraint domination was proposed and incorporated into their algorithm [15].

The variants of PSO generally have the following limitations. (1) The solutions are likely to be trapped in local optima and undesirable premature situations because the search relies heavily on the learning factors and inertia weight. (2) Compared with the traditional PSO, variants of PSO require longer CPU times to compute the global optimum due to the extra treatment in exploration and exploitation. (3) The iterative process is unstable or even fails to converge because of ergodic and dynamic properties, for example, the chaos sequence.

Based on the above literature review, a novel PSO is proposed in this paper. This novel adaptive elite-based PSO employs the mean of all particles and standard deviation of distances between any two particles. In addition to global and local searches, a new mean search is introduced as an alternative method for finding the global optimum. The standard deviation of distances among all particles is utilized to prune the distant particles. The best particle is cloned to replace the discarded particles. This treatment ensures a stable iterative process. The increase in the computational burden of this enhancement is trivial.

The rest of this paper is organized as follows. Section 2 provides the general PSO algorithm. Section 3 proposes the novel elite-based PSO. In Section 4, four benchmark functions are utilized to validate the proposed method. Comparative studies among different optimization methods (traditional PSO, chaotic PSO [16], differential evolution [17], and genetic algorithm) are given. The results of its application to the real problem of loss minimization in an electric power system are presented. Section 5 draws conclusions.

#### 2. General PSO Algorithm

PSO, which is an evolutionary optimization method for minimizing an objective $f(\mathbf{x})$, reflects the behavior of flocking birds or schooling fish. PSO comprises a population of particles iteratively updating the empirical information about a search space. The population consists of many individuals that represent possible solutions and are modeled as particles moving in an $n$-dimensional search space. Let the superscript $t$ be the iteration index. The position and velocity of each particle are updated as follows [1]:

$$v_i^{t+1} = w^t v_i^t + c_1 \rho_1 \left( Gbest^t - x_i^t \right) + c_2 \rho_2 \left( Pbest_i^t - x_i^t \right), \quad (1)$$

$$x_i^{t+1} = x_i^t + v_i^{t+1}, \quad (2)$$

where vectors $x_i^t$ and $v_i^t$ are the $n$-dimensional position and velocity of particle $i$, $i = 1, 2, \ldots, m$ ($m$ is the population size), respectively. The inertia weight $w^t$ is designed to copy the previously updated features to the next iteration. If $w^t$ is close to 1, then the preceding $v_i^t$ has a strong impact on $v_i^{t+1}$. $Pbest_i^t$ and $Gbest^t$ are the best position found by particle $i$ and the best position found by any particle in the swarm so far, respectively. The random numbers $\rho_1$ and $\rho_2$ are between 0 and 1. Learning factors $c_1$ and $c_2$ are positive constants, each constrained between 0 and 2, such that $c_1 + c_2 \le 4$. The products $c_1 \rho_1$ and $c_2 \rho_2$ thus stochastically control the overall velocity of a particle. In this paper, $x_i^t$ denotes the $i$th vector (particle) at the $t$th iteration, and $v_i^{t+1}$ can be regarded as the $(t+1)$th update that is added to $x_i^t$.
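As a concrete illustration, the updates (1) and (2) can be sketched in a few lines of NumPy; the function and variable names here are our own, not those of the original MATLAB implementation:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0, rng=None):
    """One iteration of the standard PSO updates (1)-(2) (sketch).
    x, v   : (m, n) positions and velocities of the m particles
    pbest  : (m, n) best position found so far by each particle
    gbest  : (n,) best position found so far by the whole swarm
    w      : inertia weight; c1, c2: learning factors with c1 + c2 <= 4"""
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)  # rho_1, uniform in [0, 1)
    r2 = rng.random(x.shape)  # rho_2, uniform in [0, 1)
    v_new = w * v + c1 * r1 * (gbest - x) + c2 * r2 * (pbest - x)  # (1)
    x_new = x + v_new                                              # (2)
    return x_new, v_new
```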

The second term in (1) is designed to conduct the global search (exploration) and the third term in (1) describes the local search (exploitation) in the solution space, as described in Section 1. The global and local searches should be coordinated in order to attain the global optimum and avoid premature results. The inertia weight $w^t$ decreases linearly throughout the iterations and governs the momentum of the particle by weighting the contribution of the particle's previous velocity to its current velocity. A large $w^t$ is designed for global searches, whereas a small $w^t$ is used for local searches. Consequently, in the earlier stages of the search process, a large $w^t$ is preferred, while a small value is preferred in the later stages.

#### 3. Proposed Adaptive Elite-Based PSO

The proposed novel PSO has the following three features: an extra mean search is introduced to coordinate exploration and exploitation; distant particles are pruned and the best particle is cloned to ensure that the iterative process is stable; and complicated computation is avoided, so CPU time is reduced. The proposed adaptive elite-based PSO thus performs two main tasks: mean search and particle pruning/cloning. The effects of these two tasks decline as the number of iterations increases. The limitations of the traditional variants of PSO described in Section 1 can therefore be eliminated.

##### 3.1. Mean Search

The proposed method is based on the mean of the particles and the standard deviation of the distances between any two particles in the $t$th iteration. Equation (1) is herein modified to the following:

$$v_i^{t+1} = w^t v_i^t + c_1 \rho_1 \left( Gbest^t - x_i^t \right) + c_2 \rho_2 \left( Pbest_i^t - x_i^t \right) + c_3^t \rho_3 \left( M^t - x_i^t \right), \quad (3)$$

where the inertia weight $w^t$ is decreased linearly from 0.5 in the first iteration to 0.3 in the final iteration. In addition to $\rho_1$ and $\rho_2$, the term $\rho_3$ is also a random number between 0 and 1. $M^t$ is an $n$-dimensional vector of the mean values of all particles. The last term introduced in (3) is used to coordinate the global and local searches as well as the particle pruning/cloning task described later.

Because $c_1 + c_2 \le 4$ [18], the new learning factor $c_3^t$ is defined by the adaptive self-tuning scheme in (4), which follows [10] and decreases over the iterations. Equation (4) ensures that $c_3^t$ decreases gradually to zero, so that (3) becomes (1) when the iterative process converges.
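The modified velocity update (3) can be sketched as follows. The mean-attraction term comes directly from the description above; the exact schedule of $c_3^t$ in (4) is not reproduced here, since any sequence decaying toward zero makes (3) reduce to the standard update (1):

```python
import numpy as np

def mean_search_velocity(x, v, pbest, gbest, w, c1, c2, c3, rng=None):
    """Velocity update (3): the standard PSO terms plus a mean-attraction
    term weighted by the decaying learning factor c3 (sketch)."""
    rng = rng or np.random.default_rng()
    m = x.mean(axis=0)                       # M^t: mean of all particles
    r1, r2, r3 = (rng.random(x.shape) for _ in range(3))
    return (w * v
            + c1 * r1 * (gbest - x)          # global search (exploration)
            + c2 * r2 * (pbest - x)          # local search (exploitation)
            + c3 * r3 * (m - x))             # mean search (vanishes as c3 -> 0)
```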

##### 3.2. Particle Pruning/Cloning

$M^t$, introduced in Section 3.1, is used to develop the second task of the adaptive elite strategy: pruning the distant particles. The mean $M^t$ and the standard deviation $\sigma^t$ of the distances among particles in the $t$th iteration are evaluated:

$$M^t = \frac{1}{m} \sum_{i=1}^{m} x_i^t, \qquad \sigma^t = \operatorname{std}\left\{ \left\| x_i^t - x_j^t \right\| : 1 \le i < j \le m \right\}. \quad (5)$$

Suppose that all distances follow a Gaussian distribution. Then $\pm 3\sigma^t$ covers 99.7% of all particles. Let $k^t = 3 - 2 c_3^t$. Because $c_3^t$ in (4) decreases approximately from 1.48 to zero, the value of $k^t$ increases from 0.04 to 3. Hence, the second task in the adaptive elite-based PSO is initially to prune distant particles and finally to cover all particles. Restated, the particles outside the range $M^t \pm k^t \sigma^t$ are pruned. The pruned particles are substituted by cloning $Gbest^t$. That is,

$$x_i^t = Gbest^t \quad \text{if } \left\| x_i^t - M^t \right\| > k^t \sigma^t. \quad (6)$$

As shown in Figure 1, $M^t$ is a virtual particle which represents the mean of all particles. Consider the range $M^t \pm k^t \sigma^t$. Only seven particles are inside this range and three are outside it. A particle outside the range is pruned and hence substituted by $Gbest^t$.
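A minimal sketch of the pruning/cloning task, assuming the mapping $k^t = 3 - 2c_3^t$ (which matches the quoted endpoints 0.04 and 3) and taking $\sigma^t$ as the standard deviation of the pairwise inter-particle distances:

```python
import numpy as np

def prune_and_clone(x, gbest, c3):
    """Particle pruning/cloning sketch. Particles farther than k*sigma from
    the mean particle M are replaced by clones of Gbest. k grows from 0.04 to
    3 as c3 decays from 1.48 to 0, so pruning is aggressive early and covers
    ~99.7% of a Gaussian swarm (3 sigma) late."""
    m = x.mean(axis=0)                                  # virtual mean particle M^t
    pair = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    sigma = pair[np.triu_indices(len(x), k=1)].std()    # sigma^t over all pairs
    k = 3.0 - 2.0 * c3                                  # assumed mapping
    out = x.copy()
    out[np.linalg.norm(x - m, axis=1) > k * sigma] = gbest
    return out
```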

##### 3.3. Algorithmic Steps

The proposed method can be implemented using the following algorithmic steps.

*Step 1.* Input the objective, the unknowns, and the population size.

*Step 2.* Randomly generate the particles (vectors of unknowns). Let $t = 0$.

*Step 3.* Calculate the objective values of all particles.

*Step 4.* If the iterative process meets the convergence criteria, then stop.

*Step 5.* Find $Pbest_i^t$ and $Gbest^t$ according to the objective values of all particles.

*Step 6.* Calculate $M^t$ and $\sigma^t$ using (5). Compute $c_3^t$ using (4).

*Step 7.* Evaluate $k^t$ and the range $M^t \pm k^t \sigma^t$.

*Step 8.* If a particle lies outside the range $M^t \pm k^t \sigma^t$, then it is pruned and replaced by $Gbest^t$ according to (6).

*Step 9.* Calculate the velocities of all particles using (3).

*Step 10.* Update all particles using (2).

*Step 11.* Let $t = t + 1$ and go to Step 3.
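The steps above can be combined into a compact end-to-end sketch. The $c_3^t$ schedule (linear decay from 1.48 to 0), the $k^t = 3 - 2c_3^t$ mapping, and the learning factors $c_1 = c_2 = 2$ are assumptions consistent with Section 3, not the authors' exact settings:

```python
import numpy as np

def aepso(f, dim, n=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    """End-to-end sketch of the adaptive elite-based PSO (Steps 1-11)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))                  # Step 2: random particles
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)                # Step 3: objective values
    gbest = pbest[pval.argmin()].copy()                # Step 5: elite particle
    for t in range(iters):
        frac = t / max(iters - 1, 1)
        w = 0.5 - 0.2 * frac                           # inertia: 0.5 -> 0.3
        c3 = 1.48 * (1.0 - frac)                       # assumed decay 1.48 -> 0
        # Steps 6-8: mean particle, swarm spread, prune/clone distant particles
        m = x.mean(axis=0)
        pair = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        sigma = pair[np.triu_indices(n, k=1)].std()
        x[np.linalg.norm(x - m, axis=1) > (3.0 - 2.0 * c3) * sigma] = gbest
        # Steps 9-10: velocity update (3) and position update (2)
        r1, r2, r3 = (rng.random((n, dim)) for _ in range(3))
        v = (w * v + 2.0 * r1 * (gbest - x) + 2.0 * r2 * (pbest - x)
             + c3 * r3 * (m - x))
        x = x + v
        # Steps 3 and 5 again: evaluate and update the elites
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest, float(pval.min())
```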

##### 3.4. Proof of Convergence of the Proposed Method

Let be a real number. Equations (2) and (3) are rewritten in the discrete-time matrix form as follows [19, 20]: where

Adding a constant vector to both sides of (7), one obtains (9). Define

Thus, (9) can be rewritten as follows:

If the condition holds, (11) becomes a linear system as

The equilibrium point must occur at as for any . According to the condition , one can obtain

Since and the equilibrium point , this implies which is true only when as . Hence, the equilibrium point is

In order to guarantee that the discrete-time linear system in (12) is asymptotically stable, the following necessary and sufficient conditions are obtained using the classical Jury criterion:

Since the random numbers $\rho_1$, $\rho_2$, and $\rho_3$ are between 0 and 1, the above stability conditions are equivalent to the following set of parameter selection heuristics, which guarantee convergence of the proposed adaptive elite-based PSO algorithm:

According to (17), the convergence of the proposed adaptive elite-based PSO can be guaranteed.

#### 4. Simulation Results

The performance of the proposed adaptive elite-based PSO is verified using four benchmark test functions in this section. Traditional PSO, chaotic PSO (CPSO), differential evolution (DE), and genetic algorithm (GA) are used to investigate the benchmark functions for comparative studies [21]. The number of dimensions of each benchmark function is 20 (meaning that the number of unknowns is 20). The number of particles (population size) is also 20 in all methods. The mutation rate and crossover rate in DE are 0.8 and 0.5, respectively, while those in GA are 0.01 and 1.0, respectively. Ten simulations are conducted with each method to verify the optimality and robustness of the algorithms. Two convergence criteria are adopted: a convergence tolerance and a fixed maximum number of iterations. Five program codes are developed using MATLAB 7.12 to minimize the benchmark functions. The CPU time for studying these four test functions is evaluated using a PC with an Intel(R) Core i7 (CPU@3.4 GHz, RAM 4 GB). Finally, the problem of loss minimization in an electric power system is used to validate the proposed method.

##### 4.1. Benchmark Testing: Sphere Function

First, the sphere function is tested as follows:

$$f_1(x) = \sum_{i=1}^{n} x_i^2.$$

$f_1$ is a unimodal test function whose optimal solution is $x^* = (0, \ldots, 0)$ with $f_1(x^*) = 0$. First, the convergence tolerance is set to 0.001. Table 1 shows the shortest, mean, and longest CPU times obtained from 10 simulations by the five methods. As illustrated in Table 1, all methods can approach similar solutions except for GA (which fails to converge). DE requires the longest CPU times. In the cases of the mean and the longest CPU times, the proposed method yields smaller values of $f_1$ than the other methods. Figure 2 shows the iteration excursions of the different methods with their shortest CPU times.
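For reference, the sphere function is a one-liner:

```python
import numpy as np

def sphere(x):
    """f1(x) = sum_i x_i^2; unimodal, global minimum f1(0) = 0."""
    return float(np.sum(np.asarray(x, dtype=float) ** 2))
```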

Table 2 shows the best, mean, and worst $f_1$ values obtained by running 500 iterations using all methods. It can be found that the proposed method always yields the smallest $f_1$ among the five methods. Figures 3 and 4 present the parameter $c_3^t$ and the standard deviation $\sigma^t$ obtained using the proposed method in the case with the best $f_1$, respectively. The values of $c_3^t$ decrease quickly in the first 30 iterations and the standard deviation oscillates while decreasing to zero near the 200th iteration.

Generally, the CPU time required by the proposed method is longer than those taken by PSO and CPSO but shorter than those taken by DE and GA.

##### 4.2. Benchmark Testing: Rosenbrock Function

This subsection employs the Rosenbrock function for testing as follows:

$$f_2(x) = \sum_{i=1}^{n-1} \left[ 100 \left( x_{i+1} - x_i^2 \right)^2 + \left( x_i - 1 \right)^2 \right].$$

Each contour of the Rosenbrock function looks roughly parabolic. Its global minimum is located in the valley of the parabola (banana valley). Since the function changes little in the proximity of the global minimum, finding the global minimum is considerably difficult. $f_2$ is also a unimodal test function whose optimal solution is $x^* = (1, \ldots, 1)$ with $f_2(x^*) = 0$. First, the convergence tolerance is set to 0.001. Table 3 shows the shortest, mean, and longest CPU times obtained by performing 10 simulations using the five methods. As illustrated in Table 3, all methods yield similar solutions except for CPSO and GA (which fail to converge). The CPU times required by the proposed method are between those taken by PSO and DE. It can be seen that DE requires fewer iterations but longer CPU times in all studies. All solutions (optimality), from the viewpoints of the shortest, mean, and longest CPU times, are similar. Figure 5 shows the iteration excursions of the different methods (the cases with their shortest CPU times).
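The Rosenbrock function in code:

```python
import numpy as np

def rosenbrock(x):
    """f2(x) = sum_{i=1}^{n-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2];
    global minimum f2(1, ..., 1) = 0 in the flat banana valley."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (x[:-1] - 1.0) ** 2))
```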

Table 4 shows the best, mean, and worst $f_2$ values obtained by running 1000 iterations using all methods. Since the proposed AEPSO and PSO require 6500~8000 iterations to find the global optimum of $f_2$, they only reach a near-optimal solution at the 1000th iteration. Figures 6 and 7 display the parameter $c_3^t$ and standard deviation $\sigma^t$ of the proposed method in the case with the best $f_2$, respectively. The values of $c_3^t$ decrease quickly in the first 40 iterations. However, the standard deviation decreases to a very small positive value near the 220th iteration and then oscillates continuously.

##### 4.3. Benchmark Testing: Griewank Function

In this subsection, the Griewank function is tested as follows:

$$f_3(x) = 1 + \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right).$$

If $n = 1$, then the function has 191 minima, with the global minimum at $x = 0$ and local minima near $x \approx \pm 6.2800$, 12.5601, 18.8401, and 25.1202,…. First, the convergence tolerance is set to 0.01. Table 5 shows the shortest, mean, and longest CPU times achieved using the five methods in 10 simulations. As illustrated in Table 5, CPSO, DE, and GA fail to converge. The optimality of the proposed method and PSO, as given by the shortest, mean, and longest CPU times, is similar. Figure 8 shows the iteration excursions of the methods with the shortest CPU times.
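The Griewank function in code (note the $\sqrt{i}$ scaling inside the cosine product):

```python
import numpy as np

def griewank(x):
    """f3(x) = 1 + sum_i x_i^2 / 4000 - prod_i cos(x_i / sqrt(i));
    many regularly spaced local minima, global minimum f3(0) = 0."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return float(1.0 + np.sum(x ** 2) / 4000.0
                 - np.prod(np.cos(x / np.sqrt(i))))
```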

Table 6 shows the best, mean, and worst $f_3$ values obtained at the 1000th iteration by all methods. The proposed method still attains the best optimality compared with the other methods.

##### 4.4. Benchmark Testing: Ackley Function

The Ackley function is an *n*-dimensional highly multimodal function that has a great number of local minima, which look like noise, but only one global minimum at $x^* = (0, \ldots, 0)$ with $f_4(x^*) = 0$:

$$f_4(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos\left( 2 \pi x_i \right) \right) + 20 + e,$$

where $n = 20$ herein.
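The Ackley function in code:

```python
import numpy as np

def ackley(x):
    """f4(x) = -20 exp(-0.2 sqrt(mean(x_i^2))) - exp(mean(cos(2 pi x_i)))
    + 20 + e; highly multimodal with a single global minimum f4(0) = 0."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n)
                 + 20.0 + np.e)
```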

First, the convergence tolerance is set to 0.001. Table 7 shows the shortest, mean, and longest CPU times of 10 simulations obtained by the five methods. As illustrated in Table 7, all methods are able to find the global optimum, except for CPSO (which fails to converge). The CPU times required by the proposed method are between those required by PSO and DE, while GA needs very long CPU times. All solutions (optimality) are similar from the viewpoints of the shortest, mean, and longest CPU times. Figure 11 shows the iteration excursions of the various methods with the shortest CPU times. The proposed method converges the fastest.

Table 8 shows the best, mean, and worst $f_4$ values obtained at the 500th iteration by all methods. The proposed method and DE can attain the global optimum but PSO, CPSO, and GA cannot. The proposed method always gains the smallest values of $f_4$, as seen by inspecting the best, mean, and worst values of the objective. The proposed method also requires the shortest CPU times to converge to solutions. Figures 12 and 13 illustrate the parameter $c_3^t$ and standard deviation $\sigma^t$, respectively, obtained using the proposed method in the case with the best $f_4$. The values of $c_3^t$ decrease to small positive values close to the 220th iteration. The standard deviation decreases to very small positive values near the 400th iteration and thereafter oscillates (see Figures 9 and 10).

##### 4.5. Studies on Loss Minimization in a Power System

Active power loss in an electric power system is caused by the resistance of the transmission/distribution lines. The losses can be evaluated as $\sum_{\ell} |I_\ell|^2 R_\ell$ over all lines or simplified as (total active power generation − total active power demand). If the active power demands are constant and the voltage magnitudes are increased, then the line currents will be reduced, leading to a reduction of active power losses. Therefore, engineers who work on electric power systems tend to utilize voltage controllers to raise the voltage profile in order to reduce the active power losses [22]. The voltage controllers include generator voltages, transformer taps, and shunt capacitors. The problem formulation of active power loss minimization is given in the appendix.
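The two equivalent loss bookkeepings mentioned above, sketched in NumPy (per-unit quantities; the function names are ours):

```python
import numpy as np

def line_losses(i_mag, r):
    """Total active power loss as the sum over lines of |I_l|^2 * R_l."""
    i_mag = np.asarray(i_mag, dtype=float)
    r = np.asarray(r, dtype=float)
    return float(np.sum(i_mag ** 2 * r))

def balance_losses(p_gen, p_demand):
    """Equivalent bookkeeping: total active generation minus total demand."""
    return float(np.sum(p_gen) - np.sum(p_demand))
```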

Figure 14 illustrates a 25-busbar power system with 3 wind farms at busbars 13, 14, and 25. The wind power generations at these three busbars are 2.4, 2.4, and 5.1 MW, respectively. The diesel generators are located at busbars 1~11. The shunt capacitors are at busbars 21, 22, and 23. Seven demands can be seen at busbars 17~19 and 21~24. Consequently, the number of control variables is 39 (14 generator voltage magnitudes, 22 taps, and 3 shunt capacitors). The number of state variables is 36 (voltage magnitudes and phase angles at busbars 12, 15~19, and 20~24, phase angles and reactive power generations at generator busbars, and active power generation at reference busbar 1).

The proposed method, PSO, CPSO, DE, and GA are applied to find the optimal 39 control variables in order to minimize the active power (MW) losses. Ten simulations, each with a fixed 500 iterations, are conducted using each method. Figure 15 illustrates the iteration performance of these five methods. It can be found that the iterative process of the proposed method converges slowly at first but quickly after the 220th iteration. Table 9 reveals that the proposed method converges the fastest and requires the shortest CPU times for the best, mean, and worst active power losses over the 10 simulations. As shown in Table 9, GA still needs much more CPU time to solve this realistic problem, and DE yields the worst mean value over the 10 simulation runs. Table 10 gives the optimal controls obtained using the different methods (best solution). Figure 16 displays the optimal voltage profile of the electric power system, obtained by the proposed method.

##### 4.6. Comparison of Time, Space Complexities, and Performance

Compared with the traditional PSO, the novel mean search and particle pruning/cloning mechanisms have been added to achieve a stable iterative process and to avoid ineffective searches, respectively. Although the space complexity is increased slightly by these two tasks, as shown in (3)–(6), the number of iterations and hence the computational burden can be reduced effectively. Thus, the time complexity is similar to that of the traditional PSO.

The performance comparisons among the proposed method, PSO, CPSO, DE, and GA are carried out using a set of four benchmark functions and one realistic loss minimization problem in an electric power system. The simulation results, as shown in Tables 1–10, imply that CPSO and DE are sometimes unable to find the optimal solution or even diverge. Furthermore, both GA and DE always require long CPU times for computation. Additionally, the computational effort required by the proposed method to obtain high-quality solutions is less than that required by GA and DE to obtain solutions of the same quality. Therefore, considering both time and space complexity, the proposed method has the most stable convergence performance in finding the optima and the best computational efficiency among the compared methods.

#### 5. Conclusions

In this paper, to provide a better tradeoff between global and local search in the iterative process than existing methods, a novel elite-based PSO approach is proposed for studying high-dimensional and highly nonlinear optimization problems. Two tasks have been developed: mean search and particle pruning/cloning. The mean of all particles and the standard deviation of the distances between pairs of particles are evaluated in each iteration. The mean search ensures that the iterative process is stable. Particle pruning/cloning avoids ineffective searches. All parameters are adaptively self-tuned during the iterations. The impacts of these two tasks are gradually mitigated in the iteration process. When the proposed process is near convergence, its computational burden is similar to that of the traditional PSO.

Based on the studies of the benchmark functions herein, CPSO and DE are sometimes unable to find the optimal solution or even diverge. GA is often unable to find the optimum if the benchmark function is very nonlinear. In terms of optimality, the proposed method is generally better than PSO. In the loss minimization problem, the best, mean, and worst objective values obtained by the proposed method in 10 simulations are always the best among all methods. The smallest mean loss value implies that the numerical process of the proposed method is more stable than those of all the others. The proposed method also requires the least CPU time to solve the loss minimization problem.

#### Appendix

#### A.

The active power loss minimization problem can be formulated as follows. Let the number of transmission lines, the set of generator busbars, and the set of total busbars be , , and , respectively. The objective is to minimize the line losses in all lines:
Let busbar 1 be the reference with a phase-angle of zero. The loss can be expressed as
where and are the active generation and demand at the *k*th generator and busbar, respectively. The terms and represent the voltages of busbars and , respectively. is the element at location of the system admittance matrix. The term represents the voltage phase-angle difference between busbars and .

The objective is subject to the following equality and inequality constraints.

##### A.1. Equality Constraints

The equality constraints covering the power flow equations are shown in (A.3) as follows: where and are the active and reactive power generation at busbar , respectively. and are the active and reactive power demands at busbar , respectively. and are the active and reactive power flow injections at busbar , respectively. is the injected reactive power at busbar where a capacitor is installed. and can be represented as follows:
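For readability, a standard polar-form statement of these constraints is sketched below. The symbols mirror those defined above, but the sign conventions follow the usual load-flow formulation and may differ in detail from the authors' exact (A.3)–(A.4):

```latex
% Power balance at every busbar i (cf. (A.3)):
P_{Gi} - P_{Di} - P_i = 0, \qquad
Q_{Gi} + Q_{Ci} - Q_{Di} - Q_i = 0,
% with the injections expressed through the admittance matrix
% entries Y_{ij} = |Y_{ij}| e^{\mathrm{j}\theta_{ij}} (cf. (A.4)):
P_i = \sum_{j} |V_i|\,|V_j|\,|Y_{ij}| \cos\!\left(\delta_i - \delta_j - \theta_{ij}\right), \qquad
Q_i = \sum_{j} |V_i|\,|V_j|\,|Y_{ij}| \sin\!\left(\delta_i - \delta_j - \theta_{ij}\right).
```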

##### A.2. Inequality Constraints

In the following, the subscripts “max” and “min” denote the upper and lower limits.

*Operational Limits of Voltage Magnitudes at Generator Busbars.* Consider
where represents the voltage magnitude at generator busbar . The symbol denotes the set of generator busbars.

*Limitations on Reactive Power at Generators.* Consider
where represents the reactive power output at generator .

*Operational Limits at Transformers.* Consider
where represents the tap at transformer . The symbol represents the set of transformers.

*Operational Limits of Capacitors.* Consider
where represents the injected reactive power at busbar . The symbol represents the set of capacitors.

*Operational Limits of Voltage Magnitudes at Demand Busbars.* Consider
where represents the voltage magnitude at busbar . The symbol indicates the set of demands.

The active/reactive power demands, active power generation, and line impedances of the 25-busbar system are given in Tables 11(a), 11(b), and 11(c), respectively. Please note that .

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The authors would like to thank Professor C. R. Chen at NTUT (Taiwan) for his comments about this work. The authors are also grateful for financial support from the National Science Council (Taiwan) under Grant NSC 102-3113-P-008-001.

#### References

- R. C. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory,” in *Proceedings of the 6th International Symposium on Micro Machine and Human Science (MHS '95)*, pp. 39–43, Nagoya, Japan, October 1995.
- C. Wang, Y. C. Liu, and Y. T. Zhao, “Application of dynamic neighborhood small population particle swarm optimization for reconfiguration of shipboard power system,” *Engineering Applications of Artificial Intelligence*, vol. 26, no. 4, pp. 1255–1262, 2013.
- M. Neyestani, M. M. Farsangi, and H. Nezamabadi-Pour, “A modified particle swarm optimization for economic dispatch with non-smooth cost functions,” *Engineering Applications of Artificial Intelligence*, vol. 23, no. 7, pp. 1121–1126, 2010.
- B. Vasumathi and S. Moorthi, “Implementation of hybrid ANN–PSO algorithm on FPGA for harmonic estimation,” *Engineering Applications of Artificial Intelligence*, vol. 25, no. 3, pp. 476–483, 2012.
- W. X. Liu, I.-Y. Chung, L. Liu, S. Y. Leng, and D. A. Cartes, “Real-time particle swarm optimization based current harmonic cancellation,” *Engineering Applications of Artificial Intelligence*, vol. 24, no. 1, pp. 132–141, 2011.
- R. J. Wai, Y. C. Huang, Y. C. Chen, and Y. W. Lin, “Performance comparisons of intelligent load forecasting structures and its application to energy-saving load regulation,” *Soft Computing*, vol. 17, no. 10, pp. 1797–1815, 2013.
- F. van den Bergh and A. P. Engelbrecht, “A cooperative approach to particle swarm optimization,” *IEEE Transactions on Evolutionary Computation*, vol. 8, no. 3, pp. 225–239, 2004.
- A. Nickabadi, M. M. Ebadzadeh, and R. Safabakhsh, “A novel particle swarm optimization algorithm with adaptive inertia weight,” *Applied Soft Computing*, vol. 11, no. 4, pp. 3658–3670, 2011.
- Z.-H. Zhan, J. Zhang, Y. Li, and H. S.-H. Chung, “Adaptive particle swarm optimization,” *IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics*, vol. 39, no. 6, pp. 1362–1381, 2009.
- R. Caponetto, L. Fortuna, S. Fazzino, and M. G. Xibilia, “Chaotic sequences to improve the performance of evolutionary algorithms,” *IEEE Transactions on Evolutionary Computation*, vol. 7, no. 3, pp. 289–304, 2003.
- G. N. José and A. Enrique, “Restart particle swarm optimization with velocity modulation: a scalability test,” *Soft Computing*, vol. 15, no. 11, pp. 2221–2232, 2011.
- W. Deng, R. Chen, B. He, Y. Liu, L. Yin, and J. Guo, “A novel two-stage hybrid swarm intelligence optimization algorithm and application,” *Soft Computing*, vol. 16, no. 10, pp. 1707–1722, 2012.
- Y. Y. Li, R. R. Xiang, L. C. Jiao, and R. C. Liu, “An improved cooperative quantum-behaved particle swarm optimization,” *Soft Computing*, vol. 16, no. 6, pp. 1061–1069, 2012.
- A. M. Arasomwan and A. O. Adewumi, “An investigation into the performance of particle swarm optimization with various chaotic maps,” *Mathematical Problems in Engineering*, vol. 2014, Article ID 178959, 17 pages, 2014.
- H. Yin, C. Zhang, B. Zhang, Y. Guo, and T. Liu, “A hybrid multiobjective discrete particle swarm optimization algorithm for a SLA-aware service composition problem,” *Mathematical Problems in Engineering*, vol. 2014, Article ID 252934, 14 pages, 2014.
- B. Liu, L. Wang, Y.-H. Jin, F. Tang, and D.-X. Huang, “Improved particle swarm optimization combined with chaos,” *Chaos, Solitons & Fractals*, vol. 25, no. 5, pp. 1261–1271, 2005.
- R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” *Journal of Global Optimization*, vol. 11, no. 4, pp. 341–359, 1997.
- J. Kennedy and R. Eberhart, *Swarm Intelligence*, Morgan Kaufmann, San Francisco, Calif, USA, 2001.
- S. Bouallègue, J. Haggège, M. Ayadi, and M. Benrejeb, “PID-type fuzzy logic controller tuning based on particle swarm optimization,” *Engineering Applications of Artificial Intelligence*, vol. 25, no. 3, pp. 484–493, 2012.
- N. J. Li, W. J. Wang, C. C. J. Hsu, W. Chang, H. G. Chou, and J. W. Chang, “Enhanced particle swarm optimizer incorporating a weighted particle,” *Neurocomputing*, vol. 124, pp. 218–227, 2014.
- G. A. Ortiz, “Evolution Strategies (ES),” MathWorks, 2012.
- A. J. Wood and B. F. Wollenberg, *Power Generation, Operation and Control*, John Wiley & Sons, New York, NY, USA, 2nd edition, 1996.