New Trends in Networked Control of Complex Dynamic Systems: Theories and Applications (Special Issue)
A Local and Global Search Combined Particle Swarm Optimization Algorithm and Its Convergence Analysis
The particle swarm optimization algorithm (PSOA) is an effective optimization tool. However, it tends to get stuck in near-optimal solutions, especially for middle- and large-size problems, and it is difficult to improve solution accuracy by fine-tuning its parameters. To address this deficiency, this paper studies a local and global search combined particle swarm optimization algorithm (LGSCPSOA), analyzes its convergence, and obtains its convergence condition. The algorithm is then tested on a set of 8 benchmark continuous functions, and its optimization results are compared with those of the original particle swarm optimization algorithm (OPSOA). Experimental results indicate that the LGSCPSOA significantly improves search performance, especially on the middle- and large-size benchmark functions.
Swarm intelligence (SI) techniques, including particle swarm optimization (PSO) and ant colony optimization (ACO), solve problems by simulating swarm behavior, combining neighborhood intelligence with individual intelligence. Particle swarm was originally formulated to study the interesting concept of social behavior (bird flocking or fish schooling) in a simplified simulation. Very soon, however, the potential of the technique was realized, and it developed into a powerful optimization tool that can be successfully applied to both continuous and discrete optimization problems, such as function optimization and scheduling; it has therefore received much attention since its origination.
The particle swarm concept originated as a simulation of a simplified social system, and the technique has developed rapidly. Eberhart and Kennedy [1, 2] originally formulated particle swarm optimization in 1995, and since then several investigations have been undertaken to improve the performance of the original PSO. Among the many improved PSO algorithms, Shi and Eberhart [3–5] introduced an inertia weight controlling the influence of the previous velocity on the new velocity, and Kennedy and Mendes used cluster analysis to improve PSO's performance. Angeline presented an improved PSO model (HPSO) that uses the selection operation of evolutionary computing. Suganthan set up a PSO model with a neighborhood operator. Jiao et al. presented a dynamic inertia weight particle swarm optimization algorithm in which the inertia weight decreases as the iterative generation increases. Many researchers have also studied other properties of PSO; for example, Clerc and Kennedy [10, 11] analyzed particle swarm explosion, stability, and convergence in a multidimensional complex space, while Eberhart and Shi investigated PSO developments, applications, and resources, as well as parameter selection and dimension selection methods.
PSO has been successfully applied in many areas: function optimization, artificial neural network training, fuzzy system control, and other areas where GA can be applied. Parsopoulos and Vrahatis approached global optimization problems using PSO [14, 15], and Abido used PSO to optimize power flow. The PSO technique has also been applied to assignment problems and to reactive power and voltage control considering voltage security assessment, among others. PSO algorithms are widely used; for example, the papers [19, 20] introduce a hybrid particle swarm algorithm with artificial immune learning (HPSIL) for solving the fixed charge transportation problem (FCTP). Hamta et al. presented a hybrid PSO algorithm for a multiobjective assembly line balancing problem with flexible operation times, sequence-dependent setup times, and learning effect. Chou used a particle swarm optimization with a cocktail decoding method for hybrid flow shop scheduling problems with multiprocessor tasks; that study identifies a number of solid decoding methods that can be incorporated into the cocktail decoding method and then develops a PSO algorithm that can be combined with it. PSO has gained particular prominence owing to its relative ease of operation and its capability to arrive quickly at an optimal or near-optimal solution. However, our studies show that the original PSO frequently tends to get stuck in a near-optimal solution, especially for middle- or large-size function optimization problems. To address this shortcoming of the original PSO, a local and global search combined particle swarm optimization algorithm is presented in this paper, and the simulation results show that it is efficacious. In addition, this paper rigorously analyzes the new algorithm's convergence and obtains its convergence condition.
This work differs from existing studies in at least four respects. Firstly, it proposes an iterative formula for a local and global search combined particle swarm optimization (LGSCPSO) algorithm, in which all particles share the best information of the individual particles, the global swarm, and the neighborhood particles. Secondly, it finds the best combination parameters of LGSCPSO for optimization problems of different sizes. Thirdly, it compares the original PSO with LGSCPSO and shows that the latter is more efficacious for optimization problems. Fourthly, it strictly analyzes the algorithm's convergence and obtains its convergence condition. The rest of the paper is organized as follows. The next section introduces the original PSO model. Section 3 presents the iteration formulation of the LGSCPSO algorithm, analyzes its convergence, and derives its convergence condition. Section 4 describes the test functions and experimental settings and compares the experimental results of the original PSO with those of LGSCPSO. Finally, Section 5 summarizes the contributions and conclusions of this paper.
2. Original Particle Swarm Optimization Algorithm
2.1. Description of Original Particle Swarm Optimization Algorithm
Particle swarm optimization (PSO) is an evolutionary computation technique whose concept originated as a simulation of a simplified social system. The major difference is that evolutionary computing techniques use genetic operators, whereas swarm intelligence techniques use the physical movements of the individuals in the swarm. PSO is distinctly different from other evolutionary-type methods in that it does not need complex encoding and decoding processes, does not use operators such as crossover or mutation, and represents solutions directly as real-number particles. Nevertheless, PSO shares many common points with GA: both start with a randomly generated population, evaluate the population with fitness values, update the population and search for the optimum with stochastic techniques, and do not guarantee optimality. Like other population-based algorithms, PSO can serve as an optimization tool for a variety of difficult optimization problems. Compared with GA, one of the important attractions of PSO is its simplicity: there are very few parameters to adjust, and it can achieve optimal or near-optimal solutions in a rather short time without enormous iterative computation in digital implementation.
Swarm intelligence simulates a social behavior, such as birds flocking to a promising position, for certain objectives in a multidimensional space. The algorithm is initialized with a population of random solutions and searches for optima by updating generations. The PSO system combines a local search method (through self-experience) with a global search method (through neighboring experience), attempting to balance exploration and exploitation. Like an evolutionary algorithm, PSO uses a population (called a swarm) of individuals (called particles) that are updated from iteration to iteration. Each particle represents a candidate position (i.e., solution) to the problem at hand, resembling the chromosome of GA. A particle is treated as a point in an M-dimensional space, and the status of a particle is characterized by its position and velocity. Initialized with a swarm of random particles, PSO searches by flying particles along trajectories that are adjusted based on the best position found by each particle itself (the local best) and the best ever found by all particles (the global best). PSO updates the population of particles with internal velocities, profiting from each particle's own discoveries and the previous experience of all other companions.
2.2. The Model of Original Particle Swarm Optimization Algorithm
In every search iteration, each particle is updated by following two "best" values. The first one is the best solution (fitness) the particle has achieved so far; this value is called pBest. The other "best" value is the best obtained so far by any particle in the population; this value is a global best, called gBest. After finding the two best values, the particle updates its velocity and position with the following formulas (1) and (2):

v_id^(k+1) = w·v_id^k + c1·r1·(p_id − x_id^k) + c2·r2·(p_gd − x_id^k),  (1)

x_id^(k+1) = x_id^k + v_id^(k+1).  (2)

In (1), p_id is the best previous position of the ith particle (also known as pBest). According to the different definitions of p_gd, there are two versions of PSO; in the local version, particles only have information of their own and their neighbors' bests, rather than that of the entire swarm. If p_gd is the best position among all the particles in the swarm (also known as gBest), such a version is called the global version; if p_gd is taken from some smaller number of adjacent particles of the population (also known as lBest), such a version is called the local version. In (1) and (2), k represents the iteration number, d = 1, 2, …, D indexes the dimensions of a particle, and the range of the particles is determined by the problem to be optimized. The variables c1 and c2 are learning factors, which represent the weighting of the stochastic acceleration terms that pull each particle toward the pBest and gBest positions; adjusting these constants changes the amount of "tension" in the system. The coefficients r1 and r2 are random numbers uniformly distributed in [0, 1], and w is an inertia weight, which controls the impact of the previous history of particle velocities on the current velocity. A larger inertia weight pressures toward global exploration (searching new areas), while a smaller inertia weight pressures toward fine-tuning the current search area. A particle's velocity on each dimension is confined to a maximum velocity Vmax, which is a parameter specified by the user.
If the iteration formula would cause the velocity on a dimension to exceed Vmax, then the velocity on that dimension is limited to ±Vmax:

v_id^(k+1) = sign(v_id^(k+1))·Vmax  if |v_id^(k+1)| > Vmax.

The termination criterion for the iterations is whether the maximum generation or a designated fitness value of gBest is reached.
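To make the update concrete, formulas (1) and (2) together with the velocity clamp can be sketched as follows in Python. The parameter values (w = 0.7, c1 = c2 = 2.0, Vmax = 4.0) are illustrative assumptions, not the paper's settings.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, vmax=4.0):
    """One velocity-and-position update per formulas (1)-(2), with the
    per-dimension velocity clamp to [-vmax, vmax]."""
    new_x, new_v = [], []
    for d in range(len(x)):
        vd = (w * v[d]
              + c1 * random.random() * (pbest[d] - x[d])   # pull toward pBest
              + c2 * random.random() * (gbest[d] - x[d]))  # pull toward gBest
        vd = max(-vmax, min(vmax, vd))                     # velocity limit
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```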
The main disadvantage of the above original PSO is that it is difficult to keep the diversity of the population and to balance local and global search, so the algorithm may end in locally optimal solutions. Besides, its search rate is comparatively low, and it sometimes needs more computation when solving difficult optimization problems. The original PSO frequently falls into local solutions, especially when the problem size is middle or large. In the following, a local and global search combined particle swarm optimization algorithm is proposed to deal with these disadvantages of the original PSO in solving optimization problems.
3. A Local and Global Search Combined Particle Swarm Optimization Algorithm
3.1. The Model of a Local and Global Search Combined Particle Swarm Optimization Algorithm
Because the local and global versions each have their own advantages and disadvantages, one can use both in the algorithm: the global version is used to obtain quick results, while the local version refines the search space. In the original PSO algorithm, the information of the individual best and global best is shared by the next generation of particles, and the algorithm tends to get stuck in a near-optimal solution, especially for middle- and large-size problems. A local and global search combined particle swarm optimization (LGSCPSO) algorithm is presented in this paper, in which the particles of the next generation share the best information of each particle itself, the best of the particles in the neighborhood population (the local best particles), and the best of all particles in the swarm (the global best particles). The neighborhood population is constituted by the best particles of every generation in different proportions, and its best member is called lBest. As the generation number increases, the neighborhood population diversity grows larger and larger. The LGSCPSO algorithm improves on the original PSO to increase solution accuracy without significantly sacrificing the speed of solution; its detailed formulation is given in the following.
Suppose that the search space is D-dimensional and m particles form the colony. The ith particle is represented by a D-dimensional vector X_i = (x_i1, x_i2, …, x_iD); that is, the ith particle is located at X_i in the search space, and the position of each particle is a potential solution. We calculate a particle's fitness by substituting its position into a designated objective function; when the fitness is lower, the corresponding X_i is "better." The ith particle's "flying" velocity is also a D-dimensional vector, denoted V_i = (v_i1, v_i2, …, v_iD). Denote the best position of the ith particle as P_i = (p_i1, …, p_iD), the best position of the local neighborhood as L = (l_1, …, l_D), and the best position of the global swarm as G = (g_1, …, g_D), respectively. After finding the three best values, a particle of LGSCPSO updates its velocity and position with the following formulas:

v_id^(k+1) = w·v_id^k + c1·r1·[η·p_id + (1 − η)·l_id − x_id^k] + c2·r2·(g_d − x_id^k),  (3)

x_id^(k+1) = x_id^k + v_id^(k+1),  (4)

where w is an inertia weight. The neighborhood particles of generation k + 1 are composed of the best neighborhood particles (the local best particles) of generation k. The parameter η is a combination weight chosen according to the optimization problem; it reflects the relative importance of the best position of the ith particle and the best position of the kth-generation neighborhood particles. The variables c1 and c2 are acceleration constants, which control how far a particle moves in a single iteration, and the other parameters are the same as in (1) and (2).
The search is an iterative process, and the stopping criteria are that the maximum iteration number is reached or the minimum error condition is satisfied; the stopping condition depends on the problem to be optimized. In the LGSCPSO algorithm, each particle of the swarm shares mutual information globally and benefits from the discoveries and previous experience of all other companions during the search process.
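The velocity rule described in this section, with the personal best and the neighborhood best combined by the weight η plus a pull toward the global best, can be sketched as below. The exact blending form and all parameter values are assumptions for illustration.

```python
import random

def lgscpso_velocity(x, v, pbest, lbest, gbest, w=0.7, eta=0.5, c1=2.0, c2=2.0):
    """LGSCPSO velocity update: the personal best and the neighborhood
    (local) best are combined by the weight eta before the usual PSO pull
    toward that combined point and toward the global best."""
    new_v = []
    for d in range(len(x)):
        blended = eta * pbest[d] + (1.0 - eta) * lbest[d]     # local-search share
        new_v.append(w * v[d]
                     + c1 * random.random() * (blended - x[d])
                     + c2 * random.random() * (gbest[d] - x[d]))  # global share
    return new_v
```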
3.2. Pseudocode of LGSCPSO (See Pseudocode 1)
Because PSO has a deep intelligent background, it is suitable for scientific computation and engineering applications. A thorough mathematical foundation for the methodology has not yet been developed in step with the algorithm's progress; however, it has proven very effective in applications.
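Pseudocode 1 is not reproduced here; a minimal runnable sketch of the whole LGSCPSO loop is given below. The blended local/global velocity rule and the three-particle ring neighborhood used for the local best are assumptions for illustration, since the paper's neighborhood population is built from the best particles of every generation.

```python
import random

def lgscpso(f, dim, n=30, iters=200, w=0.7, eta=0.5,
            c1=2.0, c2=2.0, vmax=4.0, lo=-100.0, hi=100.0):
    """Minimize f over [lo, hi]^dim with the LGSCPSO scheme sketched here."""
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    pval = [f(x) for x in xs]
    gval = min(pval)
    gbest = pbest[pval.index(gval)][:]
    for _ in range(iters):
        for i in range(n):
            # local best over a ring of particle i and its two neighbours
            ring = [(i - 1) % n, i, (i + 1) % n]
            lbest = pbest[min(ring, key=lambda j: pval[j])]
            for d in range(dim):
                blend = eta * pbest[i][d] + (1.0 - eta) * lbest[d]
                vd = (w * vs[i][d]
                      + c1 * random.random() * (blend - xs[i][d])
                      + c2 * random.random() * (gbest[d] - xs[i][d]))
                vs[i][d] = max(-vmax, min(vmax, vd))   # velocity clamp
                xs[i][d] += vs[i][d]
            val = f(xs[i])
            if val < pval[i]:
                pval[i], pbest[i] = val, xs[i][:]
                if val < gval:
                    gval, gbest = val, xs[i][:]
    return gbest, gval
```

For example, on the two-dimensional sphere function this sketch typically drives the best objective value close to zero within a few hundred iterations.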
3.3. Convergence Analysis of LGSCPSO
In this section the LGSCPSO algorithm’s convergence is studied.
Proof. Let φ1 = c1·r1 and φ2 = c2·r2; from (3) and (4) we have

v_id^(k+1) = w·v_id^k + φ1·[η·p_id + (1 − η)·l_id − x_id^k] + φ2·(g_d − x_id^k).  (5)

From (4) we can get v_id^k = x_id^k − x_id^(k−1), and combining this with (5) we have

x_id^(k+1) = (1 + w − φ1 − φ2)·x_id^k − w·x_id^(k−1) + φ1·[η·p_id + (1 − η)·l_id] + φ2·g_d.  (6)

The LGSCPSO algorithm's recurrence equation (6) is linear with constant coefficients and a nonhomogeneous term, and the characteristic equation of the corresponding homogeneous recurrence equation is as follows:

λ² − (1 + w − φ1 − φ2)·λ + w = 0.  (7)

The latent roots of the above characteristic equation are as follows:

λ1,2 = [(1 + w − φ1 − φ2) ± √((1 + w − φ1 − φ2)² − 4w)] / 2.  (8)

According to the relations between a recurrence equation and its special solution, since the nonhomogeneous term of (6) is constant, let the special solution of the LGSCPSO algorithm's recurrence equation (6) be the constant x*; substituting it into (6), we can solve

x* = [φ1·(η·p_id + (1 − η)·l_id) + φ2·g_d] / (φ1 + φ2).  (9)

According to the relations among a recurrence equation's general solution, special solution, and latent roots, we can obtain the general solution of the LGSCPSO algorithm's recurrence equation (6) as follows:

x_id^k = C1·λ1^k + C2·λ2^k + x*.  (10)

The initial position and velocity of the LGSCPSO recurrence equation are stochastically generated; suppose the first and second solutions are x^(1) and x^(2), respectively. According to (10), we can obtain

C1 + C2 = x^(1) − x*,  C1·λ1 + C2·λ2 = x^(2) − x*,

so that

C1 = [(x^(2) − x*) − λ2·(x^(1) − x*)] / (λ1 − λ2),  C2 = [λ1·(x^(1) − x*) − (x^(2) − x*)] / (λ1 − λ2).  (11)

Substituting (11) into (10) and simplifying, we obtain the general solution of the LGSCPSO recurrence equation (6) explicitly in terms of x^(1), x^(2), λ1, λ2, and x*.  (12)

If the limit of x_id^k exists when k → ∞, the LGSCPSO algorithm is convergent; from (10), this limit exists if and only if |λ1| < 1 and |λ2| < 1, in which case lim_{k→∞} x_id^k = x*.
Let

f(λ) = λ² − (1 + w − φ1 − φ2)·λ + w.  (13)

The discriminant of the above function is Δ = (1 + w − φ1 − φ2)² − 4w. When Δ < 0, the latent roots form a conjugate complex pair with |λ1| = |λ2| = √w, so |λ1,2| < 1 holds whenever 0 ≤ w < 1. When Δ ≥ 0, the roots are real; since f(1) = φ1 + φ2 > 0, f(−1) = 2 + 2w − (φ1 + φ2) > 0, and the product of the roots equals w < 1, both real roots lie in (−1, 1). Evidently, when 0 ≤ w < 1 and 0 < φ1 + φ2 < 2(1 + w) hold, we obtain |λ1,2| < 1 for the latent roots of the function (13).

Namely, when 0 ≤ w < 1 and 0 < c1·r1 + c2·r2 < 2(1 + w) are satisfied, we can obtain lim_{k→∞} x_id^k = x*.

From the construction of the LGSCPSO algorithm, we can see that the fitness values of the personal bests P_i and the local bests L are nonincreasing as k increases, and the fitness value of the global best G is also nonincreasing as k increases.

When the above conditions are satisfied, we therefore have lim_{k→∞} x_id^k = x* = [φ1·(η·p_id + (1 − η)·l_id) + φ2·g_d] / (φ1 + φ2). Because the sequence of global best fitness values is monotone decreasing and has a lower bound, namely the optimum solution, its limit exists; that is, the infinite particle sequence generated by the LGSCPSO algorithm's recurrence equation converges. So we can conclude that the LGSCPSO algorithm is convergent when its parameters satisfy 0 ≤ w < 1 and 0 < φ1 + φ2 < 2(1 + w).
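As a numerical illustration (not part of the original analysis), one can sample parameter pairs with 0 ≤ w < 1 and 0 < φ1 + φ2 < 2(1 + w) and confirm that both latent roots of the characteristic equation λ² − (1 + w − φ1 − φ2)·λ + w = 0 lie strictly inside the unit circle:

```python
import cmath
import random

def latent_roots(w, phi):
    """Roots of lambda^2 - (1 + w - phi)*lambda + w = 0, with phi = phi1 + phi2."""
    b = 1.0 + w - phi
    root = cmath.sqrt(b * b - 4.0 * w)   # complex sqrt handles both cases
    return (b + root) / 2.0, (b - root) / 2.0

# sample the stated convergence region (with small margins) and check |lambda| < 1
random.seed(1)
for _ in range(10000):
    w = random.uniform(0.0, 0.99)
    phi = random.uniform(1e-3, 2.0 * (1.0 + w) - 1e-3)
    l1, l2 = latent_roots(w, phi)
    assert abs(l1) < 1.0 and abs(l2) < 1.0
```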
4. Numerical Simulation
4.1. Test Functions
To illustrate the effectiveness and performance of the LGSCPSO algorithm for optimization problems, a set of 8 representative benchmark functions with different dimensions was employed to evaluate it in comparison with the original PSO. These functions have been widely used by many authors for testing algorithms.
Sphere Model. Consider f1(x) = Σ_{i=1}^{n} x_i², where −100 ≤ x_i ≤ 100.
Schwefel’s Problem 1.2. Consider f2(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)², where −100 ≤ x_i ≤ 100.
Schwefel’s Problem 2.21. Consider f3(x) = max_i {|x_i|, 1 ≤ i ≤ n}, where −100 ≤ x_i ≤ 100.
Generalized Rosenbrock’s Function. Consider f4(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²], where −30 ≤ x_i ≤ 30.
Ackley’s Function. Consider f5(x) = −20·exp(−0.2·√((1/n)·Σ_{i=1}^{n} x_i²)) − exp((1/n)·Σ_{i=1}^{n} cos 2πx_i) + 20 + e, where −32 ≤ x_i ≤ 32.
Generalized Griewank Function. Consider f6(x) = (1/4000)·Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos(x_i/√i) + 1, where −600 ≤ x_i ≤ 600.
Generalized Penalized Functions. Consider

f7(x) = (π/n)·{10·sin²(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)²·[1 + 10·sin²(πy_{i+1})] + (y_n − 1)²} + Σ_{i=1}^{n} u(x_i, 10, 100, 4),

f8(x) = 0.1·{sin²(3πx_1) + Σ_{i=1}^{n−1} (x_i − 1)²·[1 + sin²(3πx_{i+1})] + (x_n − 1)²·[1 + sin²(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4),

where −50 ≤ x_i ≤ 50, y_i = 1 + (x_i + 1)/4, and u(x_i, a, k, m) = k·(x_i − a)^m for x_i > a; 0 for −a ≤ x_i ≤ a; k·(−x_i − a)^m for x_i < −a.
They can be divided into unimodal functions (functions 1–4) and multimodal functions (functions 5–8), for which the number of local minima increases exponentially with the problem dimension.
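For reference, four of the benchmarks above can be coded directly, following their common definitions in the benchmark literature (a sketch; dimensions and inputs are up to the experimenter):

```python
import math

def sphere(x):          # function 1, unimodal, minimum 0 at the origin
    return sum(t * t for t in x)

def schwefel_2_21(x):   # function 3, unimodal, minimum 0 at the origin
    return max(abs(t) for t in x)

def ackley(x):          # function 5, multimodal, minimum 0 at the origin
    n = len(x)
    return (-20.0 * math.exp(-0.2 * math.sqrt(sum(t * t for t in x) / n))
            - math.exp(sum(math.cos(2.0 * math.pi * t) for t in x) / n)
            + 20.0 + math.e)

def griewank(x):        # function 6, multimodal, minimum 0 at the origin
    s = sum(t * t for t in x) / 4000.0
    p = math.prod(math.cos(t / math.sqrt(i + 1)) for i, t in enumerate(x))
    return s - p + 1.0
```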
4.2. Experimental Results and Comparison
The experimental results of the original PSO and the LGSCPSO algorithm on each test function are listed in Table 1. To obtain the average performance of the LGSCPSO algorithm, ten runs were performed on each problem instance and the solution quality was averaged. The comparison results of the LGSCPSO and original PSO algorithms are shown in Table 1 of the appendix.
From Table 1, one can observe that certain settings of the combination weight η achieve the highest performance, since they yield a smaller minimum and a smaller arithmetic mean than the solutions obtained by the other settings, along with better search efficiency. In the LGSCPSO algorithm, different parameter values should be selected for optimizing different problems. From the simulation results, we can conclude that the LGSCPSO algorithm is clearly better than the original PSO for continuous nonlinear function optimization problems. The convergence figures comparing the most effective LGSCPSO with the original PSO on the 8 instances are as follows.
From Figure 1, we can see that the convergence rate of the LGSCPSO algorithm is clearly faster than that of the OPSO on every benchmark function; in particular, it is more efficacious than OPSO for middle- and large-size optimization problems. Accordingly, we can state that the LGSCPSO algorithm is more effective than the OPSO algorithm. For test function 5, the search capability of the LGSCPSO algorithm is worse than that of OPSO at the beginning of the iterations. However, as seen from the enlarged figure, the search capability of LGSCPSO improves markedly as the iterations increase, and the simulation data for test function 5 ultimately show a strong advantage.
Figure 1: Convergence curves of LGSCPSO and OPSO; panels (a)–(h) correspond to test functions 1–8.
5. Conclusions and Perspectives
To address the shortcoming of the OPSO algorithm that it frequently falls into local solutions, especially when solving middle- or large-size problems, an LGSCPSO algorithm has been presented, and simulations show that it is efficacious. The performance of the new approach was evaluated against the OPSO algorithm on eight representative instances with different dimensions, and the obtained results show that the LGSCPSO algorithm is efficacious for solving optimization problems.
From this point of view, the LGSCPSO algorithm proposed in this paper can be considered an effective mechanism. There are a number of research directions that can be considered useful extensions of this work. Although the proposed algorithm is tested on eight representative instances, a more comprehensive computational study should be made to evaluate the efficiency of the proposed solution technique. In the future, it may also be applied to discrete combinatorial optimization problems such as the traveling salesman problem and scheduling.
Appendix
See Table 1.
Conflict of Interests
The authors declare that they have no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors are grateful to the anonymous reviewers for their helpful suggestions. This work is supported by the National Natural Science Foundation of China (Grant nos. 61174040 and 61104178), the Fundamental Research Funds for the Central Universities, the Shanghai Commission of Science and Technology (Grant no. 12JC1403400), and the Shanghai Dianji University Computer Application Technology Disciplines (Grant no. 13XKJ01).
References
R. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory,” in Proceedings of the 6th International Symposium on Micro Machine and Human Science, pp. 39–43, Nagoya, Japan, October 1995.
J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, December 1995.
Y. Shi and R. C. Eberhart, “A modified particle swarm optimizer,” in Proceedings of the IEEE International Conference on Evolutionary Computation, pp. 303–308, 1997.
Y. Shi and R. Eberhart, “A modified particle swarm optimizer,” in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 69–73, May 1998.
J. Kennedy and R. Mendes, “Population structure and particle swarm performance,” in Proceedings of the Congress on Evolutionary Computation (CEC '02), pp. 1671–1676, IEEE Press, 2002.
P. J. Angeline, “Using selection to improve particle swarm optimization,” in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 84–89, Anchorage, Alaska, May 1998.
M. Lovbjerg, T. K. Rasmussen, and T. Krink, “Hybrid particle swarm optimization with breeding and subpopulations,” in Proceedings of the IEEE International Conference on Evolutionary Computation, pp. 1217–1222, San Diego, Calif, USA, 2000.
R. C. Eberhart and Y. Shi, “Particle swarm optimization: developments, applications and resources,” in Proceedings of the IEEE International Conference on Evolutionary Computation, pp. 81–86, May 2001.
Y. Shi and R. C. Eberhart, “Parameter selection in particle swarm optimization,” in Evolutionary Programming VII: Proceedings of the 7th Annual Conference on Evolutionary Programming, pp. 591–600, Springer, New York, NY, USA, 1998.
F. van den Bergh and A. P. Engelbrecht, “Cooperative learning in neural networks using particle swarm optimizers,” South African Computer Journal, vol. 26, pp. 84–90, 2000.
N. Hamta, S. M. T. Fatemi Ghomi, F. Jolai, and M. Akbarpour Shirazi, “A hybrid PSO algorithm for a multi-objective assembly line balancing problem with flexible operation times, sequence-dependent setup times and learning effect,” International Journal of Production Economics, vol. 141, no. 1, pp. 99–111, 2012.
F. D. Chou, “Particle swarm optimization with cocktail decoding method for hybrid flow shop scheduling problems with multiprocessor tasks,” International Journal of Production Economics, vol. 141, pp. 137–145, 2013.