Abstract

Barebones particle swarm optimization (BPSO) is a recent PSO variant that has shown good performance on many optimization problems. However, like the standard PSO, BPSO suffers from premature convergence when solving complex optimization problems. To improve the performance of BPSO, this paper proposes a new BPSO variant, called BPSO with neighborhood search (NSBPSO), which achieves a tradeoff between exploration and exploitation during the search process. Experiments are conducted on twelve benchmark functions and a real-world ship design problem. Simulation results demonstrate that our approach outperforms the standard PSO, BPSO, and six other improved PSO algorithms.

1. Introduction

Particle swarm optimization (PSO), developed by Kennedy and Eberhart [1], is an optimization technique inspired by swarm intelligence. Like other evolutionary algorithms (EAs), PSO is a population-based stochastic search algorithm, but it does not contain any crossover or mutation operator. During the search process, each particle adjusts its search behavior according to the search experiences of its previous best position (pbest) and the global best position (gbest). Due to its simplicity and easy implementation, PSO has been successfully applied to various practical optimization problems [2–5].

However, like other stochastic algorithms, PSO suffers from premature convergence when handling complex multimodal problems. The main reason is that the attraction-based search pattern of PSO depends greatly on pbest and gbest. Once these best particles (pbest and gbest) get stuck, all particles in the swarm quickly converge to the trapped position. In order to enhance the performance of PSO, different versions of PSO have been proposed over the past decades. Shi and Eberhart [6] introduced an inertia weight w into the original PSO to achieve a balance between global and local search. Reported results show that a linearly decreasing w is a good parameter setting. Parsopoulos and Vrahatis [7] proposed a unified PSO (UPSO), a hybrid algorithm that combines the global and local versions of PSO. In [8], a fully informed PSO, called FIPS, is proposed by employing a modified velocity model. van den Bergh and Engelbrecht [9] used a cooperative mechanism to improve the performance of PSO on multimodal optimization problems. In the standard PSO, particles are attracted by their corresponding previous best particles and the global best particle. This search pattern is a greedy method which may result in premature convergence. To tackle this problem, Liang et al. [10] proposed a comprehensive learning PSO (CLPSO), in which each particle can be attracted by different previous best positions. Computational results on a set of multimodal problems demonstrate the effectiveness of CLPSO. In [11], Wang et al. proposed a new PSO algorithm (NSPSO) that searches the neighborhoods of particles, which provides more chances of finding better candidate solutions. Simulation studies show that NSPSO outperforms UPSO, FIPS, CPSO-H, and CLPSO. In [12], another version of NSPSO is proposed by employing neighborhood search and a diversity-enhanced mechanism.

Similar to other EAs, the performance of PSO greatly depends on its control parameters w, c_1, and c_2. The first parameter is known as the inertia weight, and the last two are acceleration coefficients. Slight differences in these parameters may result in significantly different performance. To tackle this problem, some PSO variants based on adaptive parameters have been proposed to minimize the dependency on these parameters [13, 14]. In contrast to these adaptive PSO algorithms, Kennedy developed a novel PSO called barebones PSO (BPSO) [15], which eliminates the velocity term and does not contain the parameters w, c_1, and c_2. In BPSO, Gaussian sampling is used to generate new positions of particles. Empirical studies demonstrate that the performance of BPSO is competitive with the standard PSO and some improved PSO algorithms. Inspired by the idea of BPSO, some new algorithms have been proposed. In [16], Omran et al. combined BPSO with differential evolution (DE) and proposed barebones DE (BBDE). The reported results show that BBDE outperforms the standard DE and BPSO and also achieves promising solutions for unsupervised image classification. Krohling and Mendel [17] introduced Gaussian and Cauchy mutations into BPSO to improve its performance. Experimental studies on a suite of well-known multimodal benchmark functions demonstrate the effectiveness of this approach. Blackwell [18] presented a theoretical analysis of BPSO. A series of experimental trials confirmed that BPSO situated at the edge of collapse is comparable to other PSO algorithms and that performance can be further improved with the use of an adaptive distribution. In [19], Wang embedded opposition-based learning (OBL) into BPSO to solve constrained nonlinear optimization problems; in addition, a new boundary search strategy is utilized. Simulation studies on thirteen constrained benchmark functions show that the new approach outperforms PSO, BPSO, and six other improved PSO algorithms.

In this paper, we also propose an improved barebones PSO, called NSBPSO, which employs global and local neighborhood search strategies to strike a balance between exploration and exploitation during the search process. In order to verify the performance of NSBPSO, twelve well-known benchmark functions and a real-world ship design problem are used in the experiments. Computational results show that our approach outperforms PSO, BPSO, and several other improved PSO variants in terms of solution quality.

The rest of the paper is organized as follows. The standard PSO and barebones PSO are given in Section 2. In Section 3, our approach NSBPSO is proposed. Experimental studies are presented in Section 4. Section 5 presents a real-world application on ship design. Finally, the work is summarized in Section 6.

2. Barebones Particle Swarm Optimization

PSO is a population-based stochastic search algorithm which simulates the social behaviors of fish schooling and birds flocking. Each particle has a position vector and a velocity vector. During the search process, a particle dynamically adjusts its velocity to generate a new position as follows:

V_i(t + 1) = w V_i(t) + c_1 rand_1 (pbest_i(t) - X_i(t)) + c_2 rand_2 (gbest(t) - X_i(t)),   (1)

X_i(t + 1) = X_i(t) + V_i(t + 1),   (2)

where X_i and V_i are the position and velocity vectors of the ith particle, respectively, pbest_i is the previous best position of the ith particle, and gbest is the global best position. rand_1 and rand_2 are two independently generated random numbers within [0, 1]. The parameter w is known as the inertia weight, and c_1 and c_2 are acceleration coefficients.
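For illustration, the following Python sketch (our own, not the authors' code) performs one standard PSO update according to (1) and (2); the inertia weight and acceleration coefficients shown are common default values rather than the settings used later in the paper.

    import numpy as np

    def pso_update(x, v, pbest, gbest, w=0.7298, c1=1.49618, c2=1.49618, rng=None):
        # One iteration of equations (1) and (2) for a single particle.
        # x, v, pbest, gbest are 1-D numpy arrays of equal length.
        rng = rng or np.random.default_rng()
        rand1 = rng.random(x.shape)   # independent random numbers in [0, 1)
        rand2 = rng.random(x.shape)
        v_new = w * v + c1 * rand1 * (pbest - x) + c2 * rand2 * (gbest - x)
        x_new = x + v_new
        return x_new, v_new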

A recent study [20] proved that each particle in PSO converges to a weighted average of its previous best position and the global best position:

p_i = (c_1 pbest_i + c_2 gbest) / (c_1 + c_2).   (3)

Based on this convergence characteristic of PSO, Kennedy [15] proposed a new PSO variant called barebones PSO (BPSO), in which each particle keeps only a position vector and the velocity vector is eliminated. Therefore, BPSO does not contain the parameters w, c_1, and c_2. In BPSO, a new position is sampled from a Gaussian distribution as follows:

x_{ij}(t + 1) = N((pbest_{ij}(t) + gbest_j(t)) / 2, |pbest_{ij}(t) - gbest_j(t)|),   (4)

where j denotes the jth dimension and N(mu, sigma) indicates a Gaussian distribution with mean (pbest_{ij}(t) + gbest_j(t)) / 2 and standard deviation |pbest_{ij}(t) - gbest_j(t)|.
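The barebones sampling rule (4) is equally easy to sketch. The function below is an illustrative implementation under our own naming, updating one particle by drawing each dimension from the Gaussian defined by pbest_i and gbest.

    import numpy as np

    def bpso_update(pbest_i, gbest, rng):
        # Equation (4): mean is the midpoint of pbest_i and gbest,
        # standard deviation is their per-dimension gap.
        mean = (pbest_i + gbest) / 2.0
        std = np.abs(pbest_i - gbest)
        return rng.normal(mean, std)

    # Example: one particle in a 5-dimensional search space.
    rng = np.random.default_rng(0)
    pbest_i = rng.uniform(-5, 5, size=5)
    gbest = rng.uniform(-5, 5, size=5)
    new_position = bpso_update(pbest_i, gbest, rng)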

3. Barebones PSO with Neighborhood Search (NSBPSO)

Because of their stochastic nature, both PSO and other EAs suffer from premature convergence when solving complex multimodal problems. In many cases, the local optima are close to the global optimum, so the neighborhoods of trapped particles may contain the global optimum. In such cases, searching the neighborhoods of particles is beneficial for finding better solutions. Based on this idea, neighborhood search strategies have been successfully applied to various algorithms.

In [11], Wang et al. proposed a new PSO algorithm called PSO with neighborhood search strategies (NSPSO), which utilizes one local and two global neighborhood search strategies. NSPSO includes two operations. First, for each particle, three trial particles are generated by these neighborhood search strategies. Second, the best one among the three trial particles and the current particle is chosen as the new current particle. Simulation studies on twelve unimodal and multimodal benchmark problems show that NSPSO achieves better results than the standard PSO and five other improved PSO algorithms.

Although NSPSO has shown good search abilities, its performance is still seriously influenced by its control parameters w, c_1, and c_2. In [11], NSPSO used empirical settings for these parameters. In order to minimize the effects of the control parameters on the performance of NSPSO, this paper proposes an improved PSO algorithm by combining barebones PSO with the neighborhood search strategies.

There are various population topologies, such as ring, wheel, star, von Neumann, and random topologies. A recent study shows that the complexity of the population topology affects the performance of PSO: a population topology with few connections (low complexity) may perform well on multimodal problems, while a highly interconnected population topology may perform well on unimodal problems. In this paper, a ring topology is used, following the suggestions of [11].

The ring topology assumes that the particles are organized in a ring. In [21], a special ring topology is proposed by connecting particles according to their indices. For example, the fourth particle X_4 is connected with the third one, X_3, and the fifth one, X_5. In other words, X_3 and X_5 are the two immediate neighbors of X_4. Figure 1 shows the employed ring topology. Based on the ring topology, a k-neighborhood radius is defined, where k is a predefined integer. For each particle X_i, its k-neighborhood consists of 2k + 1 particles (including X_i itself), namely X_{i-k}, ..., X_{i-1}, X_i, X_{i+1}, ..., X_{i+k}. Obviously, the parameter k must satisfy 2k + 1 <= ps, where ps is the swarm size. Figure 1 also shows the 2-neighborhood radius of a particle, where 5 particles are covered by the neighborhood. Following the suggestions of [11], k = 2 is used in this paper.
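The index-based ring neighborhood can be implemented with simple modular arithmetic; the helper below (a hypothetical name, not from the paper) returns the indices covered by the k-neighborhood radius of particle i, wrapping around the ring.

    def ring_neighborhood(i, k, swarm_size):
        # Indices i-k, ..., i, ..., i+k on the ring (2k + 1 particles in total).
        return [(i + offset) % swarm_size for offset in range(-k, k + 1)]

    # Example: the 2-neighborhood of particle 0 in a swarm of 40 particles
    # covers particles 38, 39, 0, 1, and 2.
    print(ring_neighborhood(0, 2, 40))   # [38, 39, 0, 1, 2]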

Based on the k-neighborhood radius, a local neighborhood search strategy is proposed. For each particle X_i, a local trial particle LX_i is generated as follows [11]:

LX_i = r_1 X_i + r_2 pbest_i + r_3 (X_c - X_d),   (5)

where X_c and X_d are the position vectors of two particles randomly selected from the k-neighborhood of X_i, r_1, r_2, and r_3 are three random numbers uniformly distributed in (0, 1), and r_1 + r_2 + r_3 = 1. In [11], the velocity of LX_i is kept the same as that of X_i. Although this velocity mechanism is simple, it may not be beneficial for the next flight of LX_i. Therefore, we use a similar weighted combination to generate the velocity LV_i of LX_i from V_i, V_c, and V_d, where V_c and V_d are the velocity vectors of X_c and X_d, respectively.

Besides the local neighborhood search strategy, a global neighborhood search strategy is also employed. For each particle X_i, a global trial particle GX_i is generated as follows [11]:

GX_i = r_4 X_i + r_5 gbest + r_6 (X_e - X_f),   (6)

where X_e and X_f are the position vectors of two particles randomly selected from the current swarm, r_4, r_5, and r_6 are three random numbers uniformly distributed in (0, 1), and r_4 + r_5 + r_6 = 1. In [11], the velocity of GX_i is also kept the same as that of X_i; hence the velocities of X_i, LX_i, and GX_i are identical, although LX_i (local) and GX_i (global) are two different types of trial particles. As with LV_i, this paper uses a similar weighted combination to generate the velocity GV_i of GX_i from V_i, V_e, and V_f, where V_e and V_f are the velocity vectors of X_e and X_f, respectively.
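The sketch below generates the local and global trial positions of (5) and (6) for one particle. The function names are our own, and the velocity recombination lines are only one plausible reading of the "similar weighted combination" described above; they are marked as assumptions in the comments.

    import numpy as np

    def random_weights(rng, n=3):
        # n random numbers in (0, 1) normalized to sum to 1.
        w = rng.random(n)
        return w / w.sum()

    def local_trial(i, X, V, pbest, k, rng):
        # X, V, pbest: arrays of shape (swarm_size, dim).
        ps = len(X)
        neigh = [(i + o) % ps for o in range(-k, k + 1)]
        c, d = rng.choice(neigh, size=2, replace=False)
        r1, r2, r3 = random_weights(rng)
        lx = r1 * X[i] + r2 * pbest[i] + r3 * (X[c] - X[d])   # equation (5)
        lv = r1 * V[i] + r2 * V[c] + r3 * V[d]                # assumed velocity rule
        return lx, lv

    def global_trial(i, X, V, gbest, rng):
        ps = len(X)
        e, f = rng.choice(ps, size=2, replace=False)
        r4, r5, r6 = random_weights(rng)
        gx = r4 * X[i] + r5 * gbest + r6 * (X[e] - X[f])      # equation (6)
        gv = r4 * V[i] + r5 * V[e] + r6 * V[f]                # assumed velocity rule
        return gx, gv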

After generating the two trial particles LX_i and GX_i, a greedy selection mechanism is used: among X_i, LX_i, and GX_i, the best one is selected as the new X_i.

Our approach NSBPSO embeds the local and global neighborhood search strategies into barebones PSO. The neighborhood search strategies focus on searching the neighborhoods of particles and provide different search behaviors, while BPSO concentrates on minimizing the dependency on control parameters (it has no w, c_1, or c_2). By hybridizing BPSO with the neighborhood search strategies, NSBPSO is almost a parameter-free algorithm (except for the probability of neighborhood search, p_ns), and it achieves a tradeoff between exploration and exploitation.

The main steps of NSBPSO are listed below; a minimal Python-style sketch of the whole procedure follows the step list.

Step 1. Randomly initialize the swarm, and evaluate the fitness values of all particles.

Step 2. Initialize pbest_i for each particle and the global best gbest.

Step 3. For each particle X_i, calculate its new position vector according to (4). Evaluate the fitness value of X_i. If needed, update pbest_i and gbest.

Step 4. For each particle X_i, if rand_i < p_ns, where rand_i is a random number uniformly generated within [0, 1] and p_ns is the probability of conducting neighborhood search, then go to Step 5; otherwise, go to Step 6.

Step 5. Generate a local trial particle LX_i according to (5) and a global trial particle GX_i according to (6), together with their velocities as described above. Evaluate the fitness values of LX_i and GX_i. Among X_i, LX_i, and GX_i, select the best one as the new X_i. If needed, update pbest_i and gbest.

Step 6. If the stop criterion is satisfied, then stop the algorithm and output the results; otherwise go to Step 3.
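The following compact Python sketch ties Steps 1-6 together. It is an illustrative implementation under stated assumptions rather than the authors' code: because the barebones update (4) never uses velocities, the sketch omits them, and the function and parameter names are hypothetical.

    import numpy as np

    def nsbpso(fobj, dim, bounds, swarm_size=40, k=2, p_ns=0.3, max_fes=100_000, seed=0):
        # Minimal NSBPSO sketch: barebones PSO plus local/global neighborhood search.
        rng = np.random.default_rng(seed)
        lo, hi = bounds

        # Step 1: random initialization and fitness evaluation.
        X = rng.uniform(lo, hi, size=(swarm_size, dim))
        fit = np.array([fobj(x) for x in X])
        fes = swarm_size

        # Step 2: initialize pbest and gbest.
        pbest, pbest_fit = X.copy(), fit.copy()
        g = int(np.argmin(pbest_fit))
        gbest, gbest_fit = pbest[g].copy(), float(pbest_fit[g])

        def weights(n):
            w = rng.random(n)
            return w / w.sum()

        def remember(i):
            # Update pbest_i and gbest if particle i has improved.
            nonlocal gbest, gbest_fit
            if fit[i] < pbest_fit[i]:
                pbest[i], pbest_fit[i] = X[i].copy(), fit[i]
                if fit[i] < gbest_fit:
                    gbest, gbest_fit = X[i].copy(), float(fit[i])

        while fes < max_fes:
            for i in range(swarm_size):
                # Step 3: barebones Gaussian position update, equation (4).
                X[i] = rng.normal((pbest[i] + gbest) / 2.0, np.abs(pbest[i] - gbest))
                fit[i] = fobj(X[i])
                fes += 1
                remember(i)

                # Step 4: with probability p_ns, perform neighborhood search.
                if rng.random() < p_ns:
                    # Step 5: local trial particle, equation (5).
                    neigh = [(i + o) % swarm_size for o in range(-k, k + 1)]
                    c, d = rng.choice(neigh, size=2, replace=False)
                    r1, r2, r3 = weights(3)
                    lx = r1 * X[i] + r2 * pbest[i] + r3 * (X[c] - X[d])
                    # Step 5: global trial particle, equation (6).
                    e, f = rng.choice(swarm_size, size=2, replace=False)
                    r4, r5, r6 = weights(3)
                    gx = r4 * X[i] + r5 * gbest + r6 * (X[e] - X[f])
                    # Greedy selection among X_i, LX_i, and GX_i.
                    candidates = [(float(fit[i]), X[i].copy()), (fobj(lx), lx), (fobj(gx), gx)]
                    fes += 2
                    best_f, best_x = min(candidates, key=lambda t: t[0])
                    X[i], fit[i] = best_x, best_f
                    remember(i)
            # Step 6: the loop stops once the evaluation budget max_fes is exhausted.
        return gbest, gbest_fit

    # Example: minimize the sphere function in 10 dimensions.
    best_x, best_f = nsbpso(lambda x: float(np.sum(x * x)), dim=10,
                            bounds=(-5.12, 5.12), max_fes=20_000)
    print(best_f)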

4. Experimental Study

4.1. Test Problems

In order to verify the performance of our approach, twelve well-known benchmark problems are used in the following experiments [10]. According to their properties, these problems are divided into three types: unimodal problems, unrotated multimodal problems, and rotated multimodal problems. For the rotated problems, the original variable x is left-multiplied by an orthogonal matrix M to obtain the rotated variable y = M x. All test problems are to be minimized and their global optima are zero. The specific descriptions of these problems are presented in Table 1.

4.2. Effects of the Parameter p_ns

The main contribution of this paper is to minimize the effects of the control parameters and improve the performance of BPSO. Although NSBPSO eliminates the control parameters w, c_1, and c_2, it introduces two new parameters, k and p_ns. The parameter k is the size of the neighborhood radius. The ring population topology used in this paper assumes that particles are connected by their indices. Although X_{i-1} and X_{i+1} are two neighbors of X_i, they may not be the nearest ones to X_i in terms of Euclidean distance. Hence, the size of the neighborhood radius does not substantially affect which particles are selected in the local neighborhood search. Our empirical studies also confirm this (the results of NSBPSO with different k-neighborhood radii are not listed here). Following the suggestions of [11], k is set to 2 in this paper.

The parameter p_ns controls the probability of conducting neighborhood search. A larger p_ns results in more neighborhood search operations, while a smaller p_ns results in fewer; this may affect the performance of NSBPSO. To investigate the effects of p_ns, this section presents an experimental study in which p_ns is set to 0.0, 0.1, 0.3, 0.5, 0.7, and 1.0, respectively, and the performance of NSBPSO under these settings is compared.

For the other parameters of NSBPSO, we use the following settings, as suggested in [10]. The population size is set to 40. When the number of fitness evaluations (FEs) reaches the maximum value MAX_FEs, the algorithm stops running. For each test problem, NSBPSO is run 30 times and the mean fitness error values are reported.

Table 2 presents the computational results of NSBPSO under different values of p_ns, where "Mean" represents the mean fitness error value. The best results among the comparison are shown in boldface. As seen, the performance of NSBPSO is not sensitive to the parameter p_ns: smaller and larger nonzero values of p_ns achieve similar results. For p_ns = 0.0, NSBPSO reduces to the original BPSO, because no neighborhood search operations are conducted; in this case, the algorithm shows poor performance and falls into local minima on most test functions. When p_ns > 0, NSBPSO significantly outperforms NSBPSO with p_ns = 0.0. This demonstrates that the neighborhood search strategies are very effective: even with a small p_ns, NSBPSO can obtain promising results.

Since the value of p_ns does not greatly affect the performance of NSBPSO, any p_ns > 0 is applicable for all test problems; a single fixed value of p_ns is therefore used in the following experiments.

Figure 2 presents the convergence curves of NSBPSO with different values of p_ns. Although NSBPSO with different p_ns values can find the global optimum on the majority of the test functions, the settings show different convergence characteristics: for some problems, a larger p_ns converges faster than a smaller one, while for other problems particular values of p_ns show the fastest convergence speed.

4.3. Comparison of NSBPSO with Other PSO Algorithms

In this section, experiments are conducted to compare nine PSO algorithms, including the proposed NSBPSO, on the 12 test problems. The involved algorithms are listed as follows:
(1) standard PSO,
(2) barebones PSO (BPSO),
(3) unified PSO (UPSO) [7],
(4) fully informed PSO (FIPS) [8],
(5) cooperative PSO (CPSO-H) [9],
(6) comprehensive learning PSO (CLPSO) [10],
(7) adaptive learning PSO (APSO) [13],
(8) PSO with neighborhood search (NSPSO) [11],
(9) our approach (NSBPSO).

For the sake of a fair comparison, we use the same settings for common parameters. For all algorithms, the population size is set to 40 and the same maximum number of fitness evaluations (MAX_FEs) is used. For the standard PSO, w is linearly decreased from 0.9 to 0.4. For NSBPSO and NSPSO, the probability of neighborhood search is set to 0.3. The parameter settings of UPSO, CPSO-H, FIPS, and CLPSO are described in [10]. For NSPSO and APSO, the same parameters as in the literature [11, 13] are used, respectively. For each test problem, each algorithm is run 30 times and the mean fitness error values are reported.

Table 3 lists the comparison results of NSBPSO with the other eight PSO algorithms, where "Mean" represents the mean fitness error value. The best results are shown in boldface. From the results, it can be seen that NSBPSO outperforms PSO, BPSO, and FIPS on all test problems. UPSO and APSO perform better than NSBPSO on one problem, while NSBPSO achieves better results on the remaining 11 problems. CPSO-H obtains a better solution than NSBPSO on one problem, while NSBPSO outperforms CPSO-H on 10 problems; on the remaining problem, both NSBPSO and CPSO-H find the global optimum. NSPSO performs better than NSBPSO on one problem, while NSBPSO outperforms NSPSO on 4 problems; both of them converge to the global optimum on the remaining 7 problems.

Comparing BPSO and PSO, BPSO outperforms PSO on 6 problems, while PSO achieves better results than BPSO on the remaining 6 problems. These results demonstrate that the performance of BPSO is similar to that of PSO on these problems. Nevertheless, BPSO is more attractive, because it does not contain any control parameter (except for the population size), while PSO relies on empirical parameter settings. By hybridizing BPSO (or PSO) with the neighborhood search strategies, NSBPSO (or NSPSO) significantly improves the performance of BPSO (or PSO). Compared to NSPSO, NSBPSO not only achieves better results but also has fewer control parameters.

In order to compare the performance differences between NSBPSO and the other eight PSO algorithms, we conduct the Wilcoxon signed-rank test, as suggested in [22]. Table 4 shows the p-values obtained by the Wilcoxon test; values below 0.05 are shown in boldface. As shown, NSBPSO is significantly better than all the other algorithms except CLPSO and NSPSO. Although NSBPSO is not significantly better than these two, it outperforms them on the majority of the test problems.
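For readers who wish to reproduce this kind of comparison, the snippet below runs a paired Wilcoxon signed-rank test with SciPy on the mean error values of two algorithms over the twelve problems; the error values are placeholders, not the entries of Table 3.

    from scipy.stats import wilcoxon

    # Placeholder mean fitness errors of two algorithms on 12 benchmark problems.
    errors_nsbpso = [1e-30, 2e-15, 0.0, 3e-4, 1e-12, 0.0, 5e-2, 0.0, 1e-9, 2e-1, 0.0, 4e-6]
    errors_other  = [1e-10, 3e-08, 1e-5, 7e-3, 4e-06, 2e-3, 9e-2, 1e-4, 3e-5, 5e-1, 2e-6, 8e-4]

    stat, p_value = wilcoxon(errors_nsbpso, errors_other)
    print(f"Wilcoxon statistic = {stat:.3f}, p-value = {p_value:.4f}")
    # A p-value below 0.05 indicates a statistically significant difference.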

5. Application on Ship Design

5.1. Problem Description

This section investigates the performance of our approach NSBPSO on a conceptual ship design problem. The original optimization statements are presented in [23, 24]. The ship design problem used in this paper has six design variables, three objectives, and nine inequality constraints. The design variables are the length L, beam B, depth D, draft T, block coefficient C_B, and speed in knots V_k. The problem aims to minimize the transportation cost and the lightship weight and to maximize the annual cargo [24]. The specific model definition is described in Table 5 [25]. The search ranges of the six variables L, B, D, T, C_B, and V_k are listed in Table 6.

The problem is subject to nine inequality constraints, which are defined in [24, 25].

5.2. Constraint Handling

In order to handle the constraints, an adaptive penalty method is employed, following the suggestions of [19]. Let f(X) be the objective function (transportation cost, lightship weight, or annual cargo). The fitness evaluation function F(X) combines f(X) with a penalty term built from the constraint violations: m denotes the number of inequality constraints and v_j(X) is the violation of the jth constraint g_j(X). The penalty makes use of the mean objective function value in the current swarm, and each penalty coefficient w_j is adapted according to the average violation of the jth constraint over all particles in the swarm (see [19] for the exact definitions).
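As a concrete illustration, the sketch below implements one common adaptive penalty of this kind; the weighting formula is our assumption in the spirit of [19], not the exact definition used there.

    import numpy as np

    def constraint_violations(G):
        # G[i, j] = value of constraint g_j at particle i, with g_j(X) <= 0 required.
        return np.maximum(G, 0.0)

    def adaptive_penalty_fitness(f_values, G):
        # Penalized fitness for a whole swarm.
        # f_values: objective values of all particles, shape (ps,)
        # G       : constraint values g_j(X_i), shape (ps, m)
        # The weighting below (mean objective value times normalized average
        # violation) is an assumed form of the adaptive penalty, not the one
        # from [19].
        V = constraint_violations(G)              # violations v_j(X_i)
        v_bar = V.mean(axis=0)                    # average violation per constraint
        f_bar = float(np.mean(f_values))          # mean objective value in the swarm
        denom = np.sum(v_bar ** 2)
        if denom == 0.0:                          # whole swarm feasible: no penalty
            return np.asarray(f_values, dtype=float)
        w = np.abs(f_bar) * v_bar / denom         # adaptive penalty coefficients w_j
        return np.asarray(f_values, dtype=float) + V @ w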

5.3. Computational Results

The ship design problem is a multiobjective optimization problem with three objectives. Following the suggestions of [24, 25], this paper only considers single-objective optimization. Therefore, the whole problem is divided into three single-objective optimization problems: (1) minimize the transportation cost, (2) minimize the lightship weight, and (3) maximize the annual cargo.

In this section, we conduct three series of experiments for the three single-objective optimization problems. In order to verify the performance of our approach NSBPSO, we compare it with four other methods. The involved methods are listed as follows:
(1) Parsons and Scott's method [24],
(2) standard PSO,
(3) barebones PSO (BPSO),
(4) PSO with neighborhood search (NSPSO),
(5) our approach (NSBPSO).

To ensure a fair comparison, the same settings are used for common parameters. For all algorithms, the population size is set to 100 and the same maximum number of fitness evaluations (MAX_FEs) is used. For the standard PSO, w is linearly decreased from 0.9 to 0.4. For NSBPSO and NSPSO, p_ns is set to 0.3. For each optimization problem, each algorithm is run 10 times and the best results among these runs are presented.

Tables 7–9 show the computational results for the three problems. In Table 7, NSBPSO achieves the minimal transportation cost among the five methods, but it also obtains the minimal value of annual cargo. Thus, for the objective of transportation cost, NSBPSO is the best among the five methods; however, it cannot obtain the best results for all three objectives. For the annual cargo, Parsons and Scott's method [24] is the best. Similar conclusions can be drawn from Tables 8 and 9. The results demonstrate that NSBPSO performs better than the other three PSO algorithms on each single-objective version of the ship design problem. When all objectives are considered, we cannot conclude which method is the best; to solve the full problem properly, multiobjective optimization algorithms may be needed. Figures 3, 4, and 5 present the convergence curves of PSO, BPSO, NSPSO, and NSBPSO for the three single-objective problems. For minimizing the transportation cost, NSBPSO shows a faster convergence speed in the last stage of the evolution. For minimizing the lightship weight, NSBPSO converges faster than the other three PSO algorithms. For maximizing the annual cargo, NSBPSO and NSPSO show similar convergence characteristics.

6. Conclusions

Barebones PSO (BPSO) is a recent variant of PSO which eliminates the velocity term. Although some reported results show that BPSO is better than PSO, it still gets stuck when solving complex multimodal problems. In order to enhance the performance of BPSO, this paper proposes an improved version called BPSO with neighborhood search (NSBPSO). The new approach embeds one local and one global neighborhood search strategy into the original BPSO to achieve a tradeoff between exploration and exploitation. Compared to other improved PSO algorithms, NSBPSO is an almost parameter-free algorithm, except for the probability of neighborhood search p_ns.

Experimental studies are conducted on twelve well-known benchmark problems, including unimodal, multimodal, and rotated multimodal problems. Computational results show that the performance of NSBPSO is not sensitive to the parameter p_ns: as long as p_ns is greater than zero, NSBPSO obtains good performance. A further comparison demonstrates that NSBPSO performs better than, or at least comparably to, several other state-of-the-art PSO algorithms. Compared to PSO with neighborhood search (NSPSO), our approach NSBPSO not only achieves better results but also has fewer control parameters.

For the ship design problem, NSBPSO performs better than the other three PSO algorithms when optimizing a single objective. When all three objectives are considered, we cannot determine which method is the best, because each method achieves better results than the others on only one or two objectives. To tackle this, multiobjective optimization algorithms could be used to optimize the three objectives simultaneously; this will be investigated in future work.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (no. 51209057).