Hybrid Intelligent Techniques for Benchmark Functions and Real-World Optimization Problems
Novel Particle Swarm Optimization and Its Application in Calibrating the Underwater Transponder Coordinates
Abstract
A novel improved particle swarm algorithm named competition particle swarm optimization (CPSO) is proposed to calibrate underwater transponder coordinates. To improve the performance of the algorithm, the TVAC strategy is introduced into CPSO to produce an extension competition particle swarm optimization (ECPSO). The proposed methods are tested on a set of 10 standard optimization benchmark problems, and the results are compared with those obtained through existing PSO algorithms: basic particle swarm optimization (BPSO), linear decreasing inertia weight particle swarm optimization (LWPSO), exponential inertia weight particle swarm optimization (EPSO), and time-varying acceleration coefficient (TVAC) PSO. The results demonstrate that CPSO and ECPSO achieve faster search speed, higher accuracy, and better stability, and that ECPSO outperforms CPSO on multimodal functions. Finally, calibration of the underwater transponder coordinates is performed using the particle swarm algorithms, and the novel improved algorithms show better performance than the existing ones.
1. Introduction
Particle swarm optimization (PSO) is one of the modern heuristic optimization techniques, first proposed by Kennedy and Eberhart in 1995 [1]. The motivation for the development of this method was the simulation of simplified animal social behaviors [2]. The PSO algorithm works on the social behavior of particles in the swarm. In PSO, the population dynamics simulates a bird flock's behavior, where social sharing of information takes place and individuals profit from the discoveries and previous experience of all other companions during the search for food. That is, the global best solution is found by simply adjusting the trajectory of each individual towards its own best location and towards the best particle of the entire swarm at each time step [1–3]. Owing to its reduced memory requirements, computational efficiency, and convenient implementation, it has gained much attention in various optimal control system applications compared to other evolutionary algorithms [4]. Several studies have analyzed the performance of PSO with different settings. For example, Shi and Eberhart [5] indicated that, for most problems, the solution can be improved by varying the inertia weight from 0.9 at the beginning of the search to 0.4 at the end, and they introduced a method named TVIW with a linearly varying inertia weight over the generations. Chen et al. [6] introduced exponential inertia weight strategies, which are found to be very effective variants of TVIW. Ratnaweera et al. [2] proposed time-varying acceleration coefficients as a parameter automation strategy for PSO, named TVAC, which reduces the cognitive component and increases the social component by changing the acceleration coefficients with time.
Ni and Deng [7] analyze the performance of PSO with proposed random topologies and explore the relationship between population topology and the performance of PSO from the perspective of graph-theoretic characteristics of population topologies. Noel [8] presents a new hybrid optimization algorithm that combines the PSO algorithm with gradient-based local search to achieve faster convergence and better accuracy of the final solution without getting trapped in local minima. Epitropakis et al. [9], motivated by the behavior and spatial characteristics of the social and cognitive experience of each particle in the swarm, develop a hybrid framework that combines particle swarm optimization with the differential evolution algorithm. In an attempt to efficiently guide the evolution and enhance convergence, the authors evolve the personal experience, or memory, of the particles with the differential evolution algorithm, without destroying the search capabilities of the algorithm. Mousa et al. [10] propose a hybrid multiobjective evolutionary algorithm combining a genetic algorithm and particle swarm optimization; a local search scheme is implemented as a neighborhood search engine to improve solution quality, exploring the less-crowded areas of the current archive to possibly obtain more nondominated solutions.
As a kind of optimization algorithm, PSO is simple in structure, performs well, and is easy to implement, so it is widely applied in engineering. Moradi and Abedini [11] combined a genetic algorithm and particle swarm optimization for optimal location and sizing of distributed generation in distribution systems. The algorithm minimizes network power losses, improves voltage regulation, and improves voltage stability within the framework of system operation and security constraints in radial distribution systems. Chang et al. [12] apply the PSO algorithm to estimate the parameters of the Genesio-Tesi nonlinear chaotic system, and the PSO estimates are verified by examining different sets of random initial populations in the presence of measurement noise. Soon and Low [13] proposed a new approach using particle swarm optimization with an inverse barrier constraint to determine the unknown photovoltaic model parameters; the proposed method has been validated with three different photovoltaic technologies. Jiang et al. [14] proposed a bare-bones particle swarm optimization algorithm to determine the parameters of a solid oxide fuel cell (SOFC). A cooperative coevolution strategy is applied to divide the output voltage function into four subfunctions based on the interdependence among variables, and, to handle the nonlinear characteristics of the SOFC model, a hybrid learning strategy is proposed for the bare-bones PSO to ensure a good balance between exploration and exploitation. Alfi [15] proposed a novel particle swarm optimization to cope with the online system parameter identification problem. The inertia weight of every particle is dynamically updated based on feedback from the fitness of the best previous position found by that particle, and a novel methodology is incorporated so that the algorithm can effectively respond to and detect any parameter variations of the system to be identified.
Hu and Shi [16] improved the algorithm with hybrid and mutation operators to solve the premature convergence problem of PSO, obtaining a high level of particle population diversity, decreasing the possibility of falling into local optima, and improving location accuracy. The novel algorithm is applied to range-based localization for wireless sensor networks, and simulations show better performance than the basic PSO algorithm.
With the development of the marine economy and technology, the unmanned underwater vehicle (UUV) has become an effective means for marine detection, resource exploitation, military intervention, and investigation [17–19]. Navigation of UUVs has been and remains a substantial challenge for these platforms. One of the main driving factors is the ability to carry out long-duration missions fully autonomously, without supervision from a surface ship [20, 21]. Combined with inertial navigation, the use of one or several transponders on the seabed is an accurate and cost-effective approach toward solving several of these challenges [22–24]. The exact position of the transponder is obviously very important in an underwater transponder positioning system [25, 26]. In practical operations, however, owing to the influence of ocean currents and other factors, the actual coordinates of the transponder will drift from the position where it was launched into the water. The mother ship must therefore calibrate the transponder's coordinates; this paper applies the particle swarm optimization algorithm to solve for the transponder coordinates.
The contribution of this paper is summarized as follows. Firstly, in the competition particle swarm algorithm, each particle evolves along two different directions to generate two homologous particles; the optimal one is kept by comparing the cost functions of the two homologous particles, yielding the next-generation particle. Secondly, exploiting the advantage of TVAC, CPSO and TVAC are combined into the ECPSO algorithm. With a large cognitive component and a small social component, particles explore the search space at the beginning of the optimization; with a small cognitive component and a large social component, the particles converge to the global optimum in the latter part. Simultaneously, each particle at every step evolves along two different inertia directions to generate two homologous particles, from which the next-generation particle is obtained. Lastly, ECPSO is introduced to calibrate the coordinates of the transponder.
The rest of this paper is organized as follows. In Section 2, basic PSO and its previous developments are summarized. In Section 3, the competition particle swarm optimization algorithm and the extension competition particle swarm optimization algorithm are introduced; the experimental settings for the benchmark functions and the simulation strategies are explained, and conclusions are drawn from the comparative analysis. In Section 4, ECPSO is introduced to calibrate the coordinates of the transponder, and simulations are designed to verify the feasibility of the algorithm presented.
2. Some Previous Work
Introduced by Kennedy and Eberhart in 1995, PSO has since become a strong competitor in the field of numerical optimization, and a considerable amount of work has been done to develop the original version of PSO. In this section, we summarize some significant previous developments.
2.1. Basic Particle Swarm Optimization (BPSO)
In PSO, each solution, called a "particle," flies in the search space looking for the optimal position to land. The PSO system combines a local search method (through individual experience) with a global search method (through neighboring experience), attempting to balance exploration and exploitation [27]. Each particle has a position vector, a velocity vector, the position with the best fitness encountered by the particle, and the index of the best particle in the swarm. The position vector and velocity vector of particle i in the n-dimensional search space can be represented as X_i = (x_i1, ..., x_in) and V_i = (v_i1, ..., v_in), respectively. The best position of each particle (pbest) is P_i = (p_i1, ..., p_in), and the best position found by the swarm so far at generation t (gbest) is P_g = (p_g1, ..., p_gn). In each generation, each particle is updated by the following two equations:

v_id(t+1) = v_id(t) + c_1 r_1 (p_id - x_id(t)) + c_2 r_2 (p_gd - x_id(t)),
x_id(t+1) = x_id(t) + v_id(t+1).    (1)
The parameters c_1 and c_2 are constants known as acceleration coefficients; r_1 and r_2 are random values in the range [0, 1], drawn anew at every iteration. Kennedy and Eberhart [1] suggested setting both acceleration coefficients to 2, in order to make the mean of both stochastic factors in (1) unity, so that particles would overfly the target only half the time during the search. The first equation shows that, in PSO, the search toward the optimum solution is guided by the previous velocity, the cognitive component, and the social component.
Since the introduction of particle swarm optimization, numerous variations of the algorithm have been developed in the literature. Eberhart and Shi showed that PSO searches wide areas effectively but tends to lack local search precision, and they proposed a solution by introducing an inertia factor w. In this paper, we name this variant basic particle swarm optimization (BPSO):

v_id(t+1) = w v_id(t) + c_1 r_1 (p_id - x_id(t)) + c_2 r_2 (p_gd - x_id(t)),
x_id(t+1) = x_id(t) + v_id(t+1).    (2)
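The BPSO update above can be sketched as a single-particle step; this is a minimal illustrative implementation, not code from the paper (function and parameter names are our own):

```python
import random

def bpso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One BPSO velocity/position update for a single particle.

    x, v, pbest, gbest are equal-length lists of floats; the particle's
    new velocity blends its inertia, the pull toward its own best
    position (cognitive), and the pull toward the swarm best (social).
    """
    new_x, new_v = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = (w * v[d]
              + c1 * r1 * (pbest[d] - x[d])
              + c2 * r2 * (gbest[d] - x[d]))
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```

When pbest and gbest coincide with the current position, the cognitive and social pulls vanish and the particle simply coasts on its damped inertia.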
2.2. TimeVarying Inertia Weight (TVIW)
The role of the inertia weight is considered very important for PSO convergence behavior. The inertia weight controls the impact of the history of velocities on the current velocity: a large inertia weight facilitates global exploration, while a small one tends to facilitate local exploration. In order to ensure that the particles converge to the best point in the course of the search, Shi and Eberhart [28] found that a time-varying inertia weight (TVIW) significantly improves the performance of PSO and proposed linear decreasing inertia weight PSO (LWPSO), in which the value of w decreases linearly. This modification increases the exploration of the parameter space during the initial iterations and increases its exploitation during the final steps of the search [29]. The mathematical representation of the inertia weight is

w = w_2 + (w_1 - w_2) * (MAXITER - iter) / MAXITER,    (3)

where w_1 and w_2 are the initial and final values of the inertia weight, respectively, iter is the current iteration number, and MAXITER is the maximum number of allowable iterations. Shi and Eberhart [5] indicate that, for most problems, the solution can be improved by varying w from 0.9 at the beginning of the search to 0.4 at the end.
Chen et al. [6] proposed natural exponential (base e) inertia weight strategies, named EPSO, in which the inertia weight decays exponentially from w_1 to w_2 over the iterations.
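The two inertia-weight schedules can be sketched as follows. The linear form follows (3) directly; for EPSO we assume a common natural-exponential decay shape, since the exact expression from [6] is not reproduced here:

```python
import math

def tviw(t, T, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight, as in (3):
    w_start at iteration 0, w_end at iteration T."""
    return w_end + (w_start - w_end) * (T - t) / T

def epso_w(t, T, w_start=0.9, w_end=0.4):
    """ASSUMED exponential-decay form for EPSO (base e); the exact
    expression in Chen et al. [6] may differ in its time constant."""
    return w_end + (w_start - w_end) * math.exp(-t / (T / 10.0))
```

Both schedules start near 0.9 for wide exploration and settle toward 0.4 for fine-grained exploitation; the exponential variant spends more iterations at small w.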
2.3. TimeVarying Acceleration Coefficient (TVAC)
In PSO, the particle is updated according to the cognitive component and the social component, so proper control of these two components is very important to find the optimum solution accurately and efficiently. Ratnaweera et al. [2] introduced time-varying acceleration coefficients (TVAC), which reduce the cognitive component and increase the social component by changing the acceleration coefficients c_1 and c_2 with time. The objective of this development is to enhance the global search in the early part of the optimization and to encourage the particles to converge toward the global optimum at the end of the search. TVAC is represented by the following equations:

c_1 = (c_1f - c_1i) * iter / MAXITER + c_1i,
c_2 = (c_2f - c_2i) * iter / MAXITER + c_2i,    (4)

where c_1i, c_1f, c_2i, and c_2f are constants, iter is the current iteration number, and MAXITER is the maximum number of allowable iterations.
Simulations were carried out on numerical benchmarks to find the best ranges of values for c_1 and c_2. The results showed that the best solutions were obtained when c_1 changes from 2.5 to 0.5 and c_2 changes from 0.5 to 2.5 over the full search range.
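The schedule in (4), with the recommended 2.5-to-0.5 and 0.5-to-2.5 ranges, can be written as a small helper (a sketch; the function name is ours):

```python
def tvac(t, T, c1i=2.5, c1f=0.5, c2i=0.5, c2f=2.5):
    """Time-varying acceleration coefficients, as in (4):
    c1 decreases linearly from c1i to c1f (cognitive component shrinks),
    c2 increases linearly from c2i to c2f (social component grows)."""
    c1 = (c1f - c1i) * t / T + c1i
    c2 = (c2f - c2i) * t / T + c2i
    return c1, c2
```

At t = 0 the swarm is dominated by individual exploration (c1 = 2.5, c2 = 0.5); by t = T the pull toward the swarm best dominates (c1 = 0.5, c2 = 2.5).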
3. Proposed New Developments
It is clear from (1) that a particle's new velocity is correlated with three terms: the particle's previous velocity, the cognitive component, and the social component. Therefore, a proper control method for the inertia weight factor and the acceleration coefficients is significant for finding the optimum solution accurately and efficiently.
The inertia weight is utilized to adjust the influence of the previous velocity on the current velocity and to balance the global and local exploration abilities of the "flying particle" [30, 31]. A larger inertia weight implies stronger global exploration ability, encouraging the particle to escape from local minima. A smaller inertia weight yields stronger local exploration ability, confining the particle's search to a local range near its present position and helping to guarantee convergence.
Kennedy and Eberhart [1] indicated that a relatively high value of the cognitive component, compared with the social component, will result in excessive wandering of individuals through the search space. In contrast, a relatively high value of the social component may lead particles to rush prematurely toward a local optimum.
Considering those concerns, we propose a new strategy for the PSO concept.
3.1. Competition Particle Swarm Optimization (CPSO)
In the process of particle evolution, each particle evolves along different directions with different inertia coefficients and acceleration coefficients. Two homologous particles are generated, and the optimal one is kept by comparing the cost functions of the two homologous particles, eliminating the inferior one; the next-generation particle is then obtained. The evaluation function of each particle is the objective (cost) function evaluated at the particle's position.
The final update equations take the same form as (2), applied separately to each subparticle with its own inertia weight. Here N is the number of particles in the swarm, T is the maximum number of iterations, r_1 and r_2 are random numbers in the range [0, 1], x_id(t) is the position of particle i in dimension d after t iterations, x_id^j(t) denotes the position in dimension d of subparticle j of particle i after t iterations, w_j is the velocity inertia weight of subparticle j, c_1 and c_2 are constants denoting the acceleration coefficients, f(X_i^j(t)) is the fitness of subparticle j of particle i after t iterations, v_id^j is the velocity of subparticle j in dimension d, p_id is the best position of particle i in dimension d, and p_gd(t) is the swarm's best position in dimension d after t iterations.
Remark 1. In this paper, two subparticles are generated for each particle at one time; therefore, j = 1, 2.
The detailed steps are shown as follows.
Step 1. Initialization.
Substep 1. Set the initial parameters N, T, w_1, w_2, c_1, and c_2.
Substep 2. Randomly initialize the particle positions.
Substep 3. Randomly initialize the particle velocities.
Substep 4. Calculate the fitness of each particle and set its personal best position.
Substep 5. Set the swarm best position to the best of the personal bests.
Step 2. If the stopping criteria are satisfied, output the best solution; otherwise, go to Substep 6.
Substep 6. Update the velocities and positions of the two subparticles of each particle.
Substep 7. Calculate the fitness of each subparticle.
Substep 8. Keep the subparticle with the better fitness as the next-generation particle and discard the other.
Substep 9. If the fitness of the new particle is better than its personal best, update the personal best; if it is also better than the swarm best, update the swarm best.
Substep 10. Go back to Step 2.
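The competition loop above can be sketched as follows. This is a minimal illustrative implementation under our reading of the steps; the two fixed inertia weights, the boundary clamping, and all parameter names are assumptions, not taken from the paper:

```python
import random

def cpso(f, dim, n_particles=20, iters=200, bounds=(-5.12, 5.12),
         w=(0.9, 0.4), c1=2.0, c2=2.0):
    """Sketch of CPSO: each particle spawns two homologous subparticles,
    evolved with different inertia weights; the fitter subparticle
    survives as the next generation (minimization)."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                     # personal best positions
    Pf = [f(x) for x in X]                    # personal best fitnesses
    g = min(range(n_particles), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]                    # swarm best
    for _ in range(iters):
        for i in range(n_particles):
            cands = []
            for wj in w:                      # two competing subparticles
                v = [wj * V[i][d]
                     + c1 * random.random() * (P[i][d] - X[i][d])
                     + c2 * random.random() * (G[d] - X[i][d])
                     for d in range(dim)]
                x = [min(hi, max(lo, X[i][d] + v[d])) for d in range(dim)]
                cands.append((f(x), x, v))
            fit, X[i], V[i] = min(cands, key=lambda c: c[0])  # keep winner
            if fit < Pf[i]:
                P[i], Pf[i] = X[i][:], fit
                if fit < Gf:
                    G, Gf = X[i][:], fit
    return G, Gf
```

On a simple convex objective such as the sphere function, this sketch converges close to the origin within a few hundred evaluations per particle.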
3.2. Extension Competition Particle Swarm Optimization (ECPSO)
Competition particle swarm optimization (CPSO) helps adjust the search direction of the particles and improves search speed and efficiency, but, owing to its rapid convergence, CPSO easily falls into local minima. The benchmark simulations in Section 4 show that CPSO searches more effectively on unimodal functions, while TVAC searches more effectively on multimodal functions. This phenomenon results from the selection of the acceleration coefficients. With a large cognitive component and a small social component at the beginning, particles are allowed to move around the search space instead of moving immediately toward the population best; with a small cognitive component and a large social component, the particles converge to the global optimum in the latter part of the optimization. Considering this advantage of TVAC, we introduce TVAC into CPSO and obtain the extension competition particle swarm optimization (ECPSO): the CPSO evolution equations are retained, but the constant acceleration coefficients c_1 and c_2 are replaced by the time-varying coefficients of (4).
4. Experimental Settings and Simulation Strategies for Benchmark Testing
Simulations were carried out to observe the rate of convergence and the quality of the optimum solution of the new methods introduced in this investigation, by comparing with BPSO, EPSO, and TVAC. From the standard set of benchmark problems available in the literature, 10 important functions are considered to test the efficacy of the proposed methods. The benchmark functions reflect different degrees of complexity.
4.1. Functions Introduction
The functions are as follows.

(1) Sphere function:
f_1(x) = sum_{i=1}^{n} x_i^2.
The global minimum is located at x* = (0, ..., 0) with f_1(x*) = 0. It is a very simple, convex, unimodal function with only one local optimum.

(2) Axis parallel hyper-ellipsoid function:
f_2(x) = sum_{i=1}^{n} i * x_i^2.
Known as the weighted sphere model, its global minimum is located at x* = (0, ..., 0) with f_2(x*) = 0. It is continuous, convex, and unimodal.

(3) Rotated hyper-ellipsoid function (Schwefel's problem 1.2):
f_3(x) = sum_{i=1}^{n} (sum_{j=1}^{i} x_j)^2.
The global minimum is located at x* = (0, ..., 0) with f_3(x*) = 0. It is continuous, convex, and unimodal. With respect to the coordinate axes, this function produces rotated hyper-ellipsoids.

(4) Moved axis parallel hyper-ellipsoid function: the axis parallel hyper-ellipsoid with the optimum moved away from the origin; the global minimum value is f_4(x*) = 0.

(5) Rosenbrock function:
f_5(x) = sum_{i=1}^{n-1} [100(x_{i+1} - x_i^2)^2 + (1 - x_i)^2].
The global minimum is located at x* = (1, ..., 1) with f_5(x*) = 0. It is a unimodal function whose global optimum lies inside a long, narrow, parabolic-shaped flat valley. Finding the valley is trivial; converging to the global minimum is difficult.

(6) Rastrigin function:
f_6(x) = sum_{i=1}^{n} [x_i^2 - 10 cos(2 pi x_i) + 10].
The global minimum is located at x* = (0, ..., 0) with f_6(x*) = 0. It is highly multimodal, but the locations of the minima are regularly distributed.

(7) Griewank function:
f_7(x) = (1/4000) sum_{i=1}^{n} x_i^2 - prod_{i=1}^{n} cos(x_i / sqrt(i)) + 1.
The global minimum is located at x* = (0, ..., 0) with f_7(x*) = 0. It is a multimodal function with many widespread local minima, but the locations of the minima are regularly distributed.

(8) Sum of different powers function:
f_8(x) = sum_{i=1}^{n} |x_i|^(i+1).
The sum of different powers is a commonly used unimodal test function. The global minimum is located at x* = (0, ..., 0) with f_8(x*) = 0.

(9) Ackley's path function:
f_9(x) = -a exp(-b sqrt((1/n) sum_{i=1}^{n} x_i^2)) - exp((1/n) sum_{i=1}^{n} cos(c x_i)) + a + e.
The global minimum is located at x* = (0, ..., 0) with f_9(x*) = 0. The coefficients are set as a = 20, b = 0.2, and c = 2 pi.

(10) Penalised function:
f_10(x) = (pi/n){10 sin^2(pi y_1) + sum_{i=1}^{n-1} (y_i - 1)^2 [1 + 10 sin^2(pi y_{i+1})] + (y_n - 1)^2} + sum_{i=1}^{n} u(x_i, 10, 100, 4),
where y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i - a)^m if x_i > a; 0 if -a <= x_i <= a; k(-x_i - a)^m if x_i < -a.
The global minimum is located at x* = (-1, ..., -1) with f_10(x*) = 0.
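A few of the benchmark functions above can be implemented directly; this sketch covers the sphere, Rastrigin, Griewank, and Ackley functions (the function names are ours):

```python
import math

def sphere(x):
    # f_1: sum of squares; global minimum 0 at the origin.
    return sum(xi * xi for xi in x)

def rastrigin(x):
    # f_6: highly multimodal; global minimum 0 at the origin.
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def griewank(x):
    # f_7: many widespread local minima; global minimum 0 at the origin.
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 1

def ackley(x, a=20.0, b=0.2, c=2 * math.pi):
    # f_9 with the coefficients a = 20, b = 0.2, c = 2*pi from the text;
    # global minimum 0 at the origin.
    n = len(x)
    return (-a * math.exp(-b * math.sqrt(sum(xi * xi for xi in x) / n))
            - math.exp(sum(math.cos(c * xi) for xi in x) / n) + a + math.e)
```

All four evaluate to (approximately) zero at their global minimizer, which makes them convenient correctness checks for any PSO variant.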
4.2. The Coefficients Setting
The parameters for simulation are listed in Table 1.

In this table, c_1 and c_2 denote the acceleration coefficients, w denotes the inertia weight, and n is the dimension; the search space and the velocity space are bounded symmetric intervals. If the current position of a particle is outside the search space, the position is set to the boundary value and the velocity is set to zero; if the velocity of a particle is outside its boundary, its value is set to the boundary value. The maximum number of iterations is set to 1000. For each function, 100 trials were carried out, and the average optimal value and the standard deviation are presented. To verify the performance of the algorithms at different dimensions, the dimension is increased from 10 to 100, and the optimal mean and variance of the benchmark functions are calculated. The results are presented in Table 2.
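The boundary-handling rule described above can be sketched as a small helper (an illustrative implementation; the function name and in-place mutation are our choices):

```python
def clamp_particle(x, v, x_max, v_max):
    """Boundary handling as described in the text: a position outside the
    search space [-x_max, x_max] is set to the boundary and its velocity
    zeroed; a velocity outside [-v_max, v_max] is set to the boundary
    value. Mutates and returns x and v (lists of floats)."""
    for d in range(len(x)):
        if x[d] > x_max:
            x[d], v[d] = x_max, 0.0
        elif x[d] < -x_max:
            x[d], v[d] = -x_max, 0.0
        if v[d] > v_max:
            v[d] = v_max
        elif v[d] < -v_max:
            v[d] = -v_max
    return x, v
```

Zeroing the velocity at the position boundary prevents a particle from repeatedly pressing against the wall on subsequent iterations.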
