Abstract

Comprehensive learning particle swarm optimization (CLPSO) and enhanced CLPSO (ECLPSO) are two metaheuristics from the literature for global optimization. ECLPSO significantly improves the exploitation and convergence performance of CLPSO through perturbation-based exploitation and adaptive learning probabilities. However, ECLPSO still cannot locate the global optimum or find a near-optimum solution for a number of problems. In this paper, we study further improving the exploration performance of ECLPSO. We propose to assign an independent inertia weight and an independent acceleration coefficient to each dimension of the search space, as well as an independent learning probability to each particle on each dimension. As in ECLPSO, a normative interval bounded by the minimum and maximum personal best positions is determined for each dimension in each generation. The dimensional independent maximum velocities, inertia weights, acceleration coefficients, and learning probabilities are adaptively updated based on the dimensional normative intervals in order to facilitate exploration, exploitation, and convergence, particularly exploration. Our proposed metaheuristic, called adaptive CLPSO (ACLPSO), is evaluated on various benchmark functions. Experimental results demonstrate that the dimensional independent and adaptive maximum velocities, inertia weights, acceleration coefficients, and learning probabilities significantly improve on ECLPSO's exploration performance, and ACLPSO is able to derive the global optimum or a near-optimum solution on all the benchmark functions in all the runs with appropriately set parameters.

1. Introduction

Particle swarm optimization (PSO) [1, 2] is a powerful class of metaheuristics for global optimization. PSO simulates the social behavior of sharing individual knowledge when a flock of birds searches for food. In PSO, the flock and the birds are, respectively, termed the swarm and the particles, and each particle represents a candidate solution. Suppose the problem to be solved has D decision variables; each particle, denoted as i, "flies" in a D-dimensional search space and is accordingly associated with a D-dimensional velocity Vi, a D-dimensional position Pi, and a fitness value f(Pi) indicating the optimization performance of Pi. The swarm of particles randomly initializes velocities and positions and searches for the global optimum iteratively, and the final solution found is the historical position that exhibits the best fitness value among all the particles. In each iteration (or generation), i updates Vi and Pi according to the present value of Vi, the historical position giving i's best fitness value so far (i.e., i's personal best position Bi), and the personal best positions of other particles.

Many different PSO variants have been proposed in the literature since the introduction of PSO in 1995 [3]. For the earliest proposed global PSO (GPSO) [3, 4], the global best position G with the best fitness value among all the particles' personal best positions is used for particle velocity update. To be specific, in each generation, i's velocity Vi and position Pi are adjusted on each dimension as follows:

Vi,d = wVi,d + ar(Bi,d − Pi,d) + bs(Gd − Pi,d), (1)
Pi,d = Pi,d + Vi,d, (2)

where d (1 ≤ d ≤ D) is the dimension index; w is the inertia weight; a and b are the acceleration coefficients; r and s are two random numbers uniformly distributed in [0, 1]; Bi is i's personal best position; and G is the global best position. GPSO is liable to get stuck in a local optimum if the global best position is far from the global optimum. Local PSO (LPSO) [5] sets up a social topology with the shape of, e.g., a ring, star, or pyramid. i's neighborhood comprises i itself and the particles that are directly connected with i in the topology. Unlike GPSO, LPSO takes advantage of i's local best position Qi, which gives the best fitness value in i's neighborhood, to guide the flight trajectory update of i, as can be seen from the following equation:

Vi,d = wVi,d + ar(Bi,d − Pi,d) + bs(Qi,d − Pi,d). (3)
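As an illustration, the per-dimension GPSO update of equations (1) and (2) can be sketched as follows; the default values of w, a, and b here are illustrative placeholders, not tuned settings from the paper.

```python
import random

def gpso_update(V, P, B, G, w=0.7, a=1.5, b=1.5, vmax=None):
    """One GPSO generation for a single particle, per equations (1)-(2).

    V, P, B are the particle's velocity, position, and personal best
    (lists of length D); G is the global best position; vmax is an
    optional list of per-dimension maximum velocities.
    """
    for d in range(len(P)):
        r, s = random.random(), random.random()
        V[d] = w * V[d] + a * r * (B[d] - P[d]) + b * s * (G[d] - P[d])
        if vmax is not None:
            V[d] = max(-vmax[d], min(vmax[d], V[d]))  # velocity clamping
        P[d] += V[d]
    return V, P
```

The LPSO update of equation (3) is identical in form, with the local best position substituted for G.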

Compared with GPSO, LPSO reduces the chance of ending up in a local optimum. In both GPSO and LPSO, i's personal best position and the global/local best position are used for updating Vi on all the dimensions. However, the personal best position and the global/local best position actually do not always contribute to the velocity update on each dimension. Comprehensive learning PSO (CLPSO) [6] and orthogonal learning PSO (OLPSO) [7] encourage i to learn from different exemplars on different dimensions according to equation (4) when updating Vi:

Vi,d = wVi,d + ar(Ei,d − Pi,d), (4)

where Ei is i's exemplar position. In CLPSO, i is additionally associated with a fixed learning probability Li that controls whether Ei,d = Bi,d or Ei,d = Bj,d on each dimension d, where j is a randomly selected particle with j ≠ i. OLPSO sets Ei,d as i's dimensional personal best position or the dimensional global/local best position on each dimension d with the aid of orthogonal experimental design; OLPSO therefore has two versions: the global version OLPSO-G and the local version OLPSO-L. CLPSO and OLPSO redetermine Ei if i's personal best fitness value f(Bi) does not improve for a consecutive number of generations. CLPSO and OLPSO significantly outperform GPSO and LPSO in terms of preserving the particles' diversity and probing different regions of the search space to obtain a promising solution.

Metaheuristics including PSO need to address three important issues, namely, exploration, exploitation, and convergence. Exploration means searching diversely to locate a small region that possibly contains the global optimum, while exploitation refers to concentrating the search around the small region for solution refinement. Convergence is the gradual transition from initial exploration to ensuing exploitation. We studied enhancing the exploitation and convergence performance of CLPSO in [8], and our proposed PSO variant is called enhanced CLPSO (ECLPSO). ECLPSO calculates Bd^lo and Bd^up, which are, respectively, the lower bound and the upper bound of all the particles' personal best positions on each dimension d in each generation, as follows:

Bd^lo = min{Bi,d : 1 ≤ i ≤ N}, Bd^up = max{Bi,d : 1 ≤ i ≤ N}, (5)

where N is the number of particles. [Bd^lo, Bd^up] is termed the normative interval of dimension d. Let the search space on dimension d be [Xd^lo, Xd^up], with Xd^lo being the lower bound and Xd^up being the upper bound. ECLPSO deems that when Bd^up − Bd^lo becomes indeed small (i.e., simultaneously no greater than 1% of Xd^up − Xd^lo and no greater than 2 in absolute size), the swarm of particles enters the exploitation phase on dimension d (i.e., the global optimum or a near-optimum solution has been identified to be likely around the normative interval on dimension d); otherwise, the particles are still in the exploration phase on dimension d (i.e., searching different regions on dimension d). ECLPSO adaptively updates the learning probability of each particle based on the ranking of all the particles' personal best fitness values and the number of dimensions that have entered the exploitation phase. In addition, ECLPSO conducts perturbation on each dimension d that has entered the exploitation phase in order to find a high-quality solution around the normative interval on that dimension.
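The normative interval and ECLPSO's "indeed small" exploitation test can be sketched directly from the description above (the 1% and absolute-size-2 thresholds are taken from the text):

```python
def normative_interval(pbest, d):
    """Lower/upper bound of all particles' personal bests on dimension d;
    pbest is a list of D-dimensional personal best position lists."""
    column = [b[d] for b in pbest]
    return min(column), max(column)

def in_exploitation(b_lo, b_up, x_lo, x_up):
    """ECLPSO's criterion: the normative interval is no greater than 1%
    of the dimensional search space and no greater than 2 in size."""
    width = b_up - b_lo
    return width <= 0.01 * (x_up - x_lo) and width <= 2.0
```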

For a PSO variant, the velocity Vi,d of each particle i on each dimension d is usually clamped by a maximum velocity Vd^max, i.e.,

Vi,d = min(max(Vi,d, −Vd^max), Vd^max). (6)

If Vd^max were too large, the particles might miss some promising solutions on dimension d; on the contrary, a too small Vd^max would slow down the search process on dimension d. Vd^max is fixed at 20% of the dimensional search space Xd^up − Xd^lo by many literature PSO variants, including GPSO, LPSO, CLPSO, OLPSO, and ECLPSO. The experimental results on various benchmark functions reported in [8] have demonstrated that though ECLPSO significantly improves the exploitation and convergence performance of CLPSO, it still cannot locate the global optimum or a near-optimum solution on a number of functions, including Rosenbrock's function, rotated Schwefel's function, and rotated Rastrigin's function. The flight trajectory and search behavior of all the particles in ECLPSO are directly affected by each dimension d's maximum velocity Vd^max, the inertia weight w, the acceleration coefficient a, and each particle i's learning probability Li. The experimental results reported in [8] have also indicated that the search process of the particles evolves differently on each dimension; hence, in this paper, we propose to assign an independent inertia weight and an independent acceleration coefficient to each dimension, as well as an independent learning probability to each particle on each dimension. The dimensional independent maximum velocities, inertia weights, acceleration coefficients, and learning probabilities are adaptively updated based on the dimensional normative intervals in order to facilitate exploration, exploitation, and convergence, particularly exploration. We call the variant with the dimensional independent and adaptive parameters adaptive CLPSO (ACLPSO).

We note that existing PSO variants, e.g., [3–42], have rarely considered using dimensional independent parameters other than the dimensional independent maximum velocities; we find only one work [26] as an exception. In [26], Taherkhani and Safabakhsh modified GPSO, CLPSO, and OLPSO with an independent inertia weight and an independent acceleration coefficient for each particle i on each dimension d; the inertia weight and the acceleration coefficient are adaptively adjusted according to the improvement status of i's personal best fitness value and the distance between i's dimensional position Pi,d and i's dimensional personal best position Bi,d to achieve better exploration and faster convergence.

The rest of this paper is organized as follows. Section 2 reviews the related work on PSO. The more detailed working principles of CLPSO and ECLPSO are elaborated in Section 3. Section 4 presents our proposed dimensional independent and adaptive parameters and the space and time complexity analysis of ACLPSO. Performance evaluation of ACLPSO on a variety of benchmark functions is given in Section 5. Section 6 concludes this paper.

2. Related Work

A lot of researchers worldwide have studied PSO. The status quo and research trend of PSO-relevant research are to investigate multistrategy and adaptivity based on the 4 typical PSO variants, i.e., GPSO, LPSO, CLPSO, and OLPSO. Multistrategy refers to employing multiple strategies, while adaptivity stands for adaptively setting some parameters as well as appropriately invoking and switching the strategies. Multistrategy and adaptivity aim to realize goals such as exploration, exploitation, and convergence and help the particles efficiently find the global optimum or a near-optimum solution.

2.1. Related Work Based on GPSO/LPSO

Zhan [9] proposed adaptive GPSO; the variant identifies the swarm's evolution status based on the distribution of the distances between each particle and all the other particles; the inertia weight and acceleration coefficient are adaptively adjusted according to the swarm's evolution status for expediting convergence; the variant additionally takes advantage of Gaussian mutation to appropriately impose some momentum on the global best position to help escape from a local optimum. Median-oriented GPSO was studied in [10]; the variant assigns an independent acceleration coefficient to each particle i; i is intentionally guided away from the swarm's median position that gives the median fitness value among all the particles' fitness values during the flight velocity update, and i's associated acceleration coefficient is adaptively updated based on i's fitness value, the swarm's worst fitness value, and the swarm's median fitness value so as to benefit jumping out of premature stagnancy in a local optimum and accelerating convergence. Chen et al. [11] introduced an aging mechanism with an aging leader and challengers for GPSO to address exploration; by evaluating the improvement status of the global best fitness value f(G), all the particles' personal best fitness values, and the leader's fitness value, the variant adaptively analyzes the leader's leading capability, adjusts the leader's life span, and generates a challenger through uniform mutation to possibly replace the leader when the leader's life span becomes exhausted. 
GPSO augmented with multiple adaptive strategies was presented in [12]; nonuniform mutation and adaptive subgradient are alternately applied to the global best position, contributing, respectively, to escaping from a local optimum and to conducting local search; the variant also performs Cauchy mutation on a randomly selected particle; as Cauchy mutation hinders convergence, the variant assigns an independent inertia weight and an independent acceleration coefficient to each particle and minimizes the sum of the distances between each particle and the global best position such that the inertia weights and acceleration coefficients are adaptively set and convergence is accordingly accelerated. In [13], LPSO with adaptive time-varying topology connectivity was investigated; for each particle i, the variant determines i's historical contribution status to the global best position and the historical status of i's topology connectivity getting stuck at a threshold value for every 5 consecutive generations and then adaptively updates i's connectivity in the topology; the variant relies on neighborhood search to help particles whose personal best fitness values cease improving in the present generation jump out of stagnancy. Xia et al. 
[14] discussed GPSO with tabu detection and local search in a shrunk space; each dimension d is segmented into 7 regions of equal size; for every 5 consecutive generations, the variant calculates the excellence level of each region on dimension d based on the ranking of all the particles' personal best fitness values and the distribution of all the particles' personal best positions in the regions; according to the excellence level of the region that the global best position belongs to, the variant randomly generates a possible replacement from some other region as appropriate to assist in escaping from a local optimum; when the global best position falls in a region on dimension d for 80 consecutive generations, the variant shrinks the dimensional search space to that specific region for the purpose of speeding up convergence; moreover, the variant conducts local search with the aid of differential evolution. Other recent works related to integrating GPSO/LPSO with multistrategy and/or adaptivity include [15–32].

2.2. Related Work Based on CLPSO/OLPSO

Liang and Suganthan [33] proposed adaptive CLPSO with history learning; for every 20 consecutive generations, the variant adaptively updates each particle's learning probability based on a Gaussian distribution and the best learning probability out of all the particles' learning probabilities (i.e., the one having resulted in the biggest improvement of the personal best fitness value). Memetic CLPSO was introduced in [34]; the variant employs chaotic local search to help each particle that cannot improve its personal best fitness value for 10 consecutive generations get out of stagnancy and, for solution refinement, applies simulated annealing to any particle whose personal best fitness value continues improving for 3 consecutive generations and whose personal best position is actually the global best position. Zheng et al. [35] studied adaptively determining the inertia weight for CLPSO according to the ratio of the number of particles with improved personal best fitness values in the present generation and adaptively setting the acceleration coefficient by considering the sum of the ratios of each particle's fitness change over the particle's position change in the present generation. 
Superior solution guided CLPSO was presented in [36]; for this variant, the set of superior solutions includes not only each particle's personal best position but also other historically experienced positions with excellent fitness values; each particle learns from the superior solutions for velocity update; the variant applies nonuniform mutation to each particle i to help escape from a local optimum, and the mutation is activated only when i's personal best fitness value ceases improving for 50 consecutive generations and the average distance between i's position in the present generation and i's positions in the previous 5 generations is less than a threshold value; the variant additionally takes advantage of some local search techniques (e.g., quasi-Newton, pattern search, and simplex search) to refine the global best position after 80% of the search progress. Qin et al. [37] investigated 4 auxiliary strategies for OLPSO to generate an appropriate exemplar position, respectively, for the purposes of preserving diversity, jumping out of premature stagnancy, accelerating convergence, and local search; the variant mutates the global best position to further strengthen exploration. Other recent works, including [38–42], are also related to multistrategy and/or adaptivity research based on CLPSO/OLPSO.

3. Background

3.1. Comprehensive Learning Particle Swarm Optimization

In equation (4), the inertia weight w linearly decreases in each generation, and the acceleration coefficient a is a constant equal to 1.5. Let kmax be the predefined maximum number of generations; w is updated in each generation k according to the following equation:

w = wmax − (wmax − wmin)k/kmax, (7)

where wmax and wmin are, respectively, the maximum and minimum inertia weights.

Equation (8) is the empirical expression for setting each particle i's learning probability Li; all the particles are thus associated with different learning probabilities:

Li = Lmin + (Lmax − Lmin)(e^(10(i−1)/(N−1)) − 1)/(e^10 − 1), (8)

where Lmax = 0.5 is the maximum learning probability and Lmin = 0.05 is the minimum learning probability.
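Equation (8) is the exponential ranking expression from the original CLPSO paper; a direct transcription:

```python
import math

def clpso_learning_probability(i, N, L_min=0.05, L_max=0.5):
    """Learning probability of particle i (1-indexed) among N particles,
    per the empirical expression of equation (8)."""
    ratio = (math.exp(10.0 * (i - 1) / (N - 1)) - 1.0) / (math.exp(10.0) - 1.0)
    return L_min + (L_max - L_min) * ratio
```

Particle 1 receives Lmin and particle N receives Lmax, with the probabilities growing exponentially in between.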

For each particle i on each dimension d, a random number uniformly distributed in [0, 1] is generated; if the number is no less than Li, the dimensional exemplar Ei,d = Bi,d; otherwise, Ei,d = Bj,d with j ≠ i. To determine j, two different particles excluding i are randomly selected, and j is the one with the better fitness value of the two. If Ei is identical to Bi on all the dimensions, CLPSO randomly chooses one dimension to learn from some other particle's personal best position. CLPSO redetermines i's exemplar position Ei if i's personal best fitness value ceases improving for 7 consecutive generations.
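The tournament-based exemplar assignment described above can be sketched as follows (minimization is assumed, and the fitness list is indexed by particle):

```python
import random

def assign_exemplar(i, L_i, pbest_fitness, N):
    """Index of the particle whose personal best supplies particle i's
    exemplar on one dimension, per CLPSO's rule (a sketch)."""
    if random.random() >= L_i:
        return i  # learn from i's own personal best
    # tournament between two different particles other than i
    j1, j2 = random.sample([p for p in range(N) if p != i], 2)
    return j1 if pbest_fitness[j1] < pbest_fitness[j2] else j2
```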

CLPSO calculates the fitness value of i only if i's position Pi is feasible (i.e., within the dimensional search space [Xd^lo, Xd^up] on each dimension d). If Pi is infeasible, then, as all the dimensional exemplars are feasible, i will eventually be drawn back to the search space.

3.2. Enhanced Comprehensive Learning Particle Swarm Optimization

ECLPSO introduces two enhancements, namely, perturbation-based exploitation (PbE) and adaptive learning probabilities (ALPs), to improve the exploitation and convergence performance of CLPSO.

In each generation, regarding each dimension d, if the dimensional normative interval is indeed small, the PbE enhancement updates the dimensional velocity Vi,d of each particle i according to equation (9) instead of equation (4):

Vi,d = wPbE·Vi,d + aPbE·r(c·Ei,d − Pi,d), (9)

where wPbE is the inertia weight used exclusively for the PbE enhancement; aPbE = 1.5 is the acceleration coefficient used exclusively for the PbE enhancement; and c is the perturbation coefficient. c is randomly generated from a Gaussian distribution with mean 1 and standard deviation 0.65, and c is clamped to within 10 times the standard deviation on both sides of the mean. Each particle i is thus pulled towards Ei,d plus a perturbation term on dimension d. The PbE enhancement contributes to sufficient exploitation around the indeed small dimensional normative interval. Note that Vi,d updated by equation (9) is not limited by the dimensional maximum velocity Vd^max.
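A sketch of the PbE update on one exploitation-phase dimension; the inertia weight value wPbE = 0.5 and the placement of the perturbation coefficient c are assumptions based on our reading of equation (9), not values stated in this section:

```python
import random

def pbe_velocity(v, p, e, w_pbe=0.5, a_pbe=1.5):
    """PbE velocity update on one dimension (illustrative sketch).
    c ~ Gaussian(mean 1, SD 0.65), clamped to 10 SDs around the mean."""
    c = random.gauss(1.0, 0.65)
    c = max(1.0 - 6.5, min(1.0 + 6.5, c))
    return w_pbe * v + a_pbe * random.random() * (c * e - p)
```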

The minimum learning probability Lmin is fixed at 0.05. As expressed in equation (10), the maximum learning probability Lmax increases logarithmically in each generation k with the number of exploitation valid dimensions, where Mk is the number of exploitation valid dimensions (i.e., the number of dimensions whose normative intervals have ever become indeed small) before or just in generation k; h = 0.25 is the difference coefficient; and q = 0.45 is the rate coefficient. Lmax is small (i.e., 0.3) when Mk = 0, which benefits initial exploration; Lmax then increases rapidly with the particles' exploitation progress to facilitate convergence. The ALP enhancement adaptively determines all the particles' learning probabilities from Lmax based on the ranking of the particles' personal best fitness values (equation (11)), where Ti is i's rank; if i gives the best personal best fitness value, then Ti = 1. A low-ranked particle is often better on more dimensions with respect to the personal best position than a high-ranked particle.

4. Adaptive Comprehensive Learning Particle Swarm Optimization

4.1. Dimensional Independent and Adaptive Maximum Velocities

Suppose the optimization problem to be solved is f(X), with X being the D-dimensional decision vector, and the global optimum is X*. CLPSO and ECLPSO fail to observe and address the fact that, on a dimension d, if the dimensional global optimum Xd* is located near either bound of the dimensional search space [Xd^lo, Xd^up] and all the particles' dimensional personal best positions are scattered (i.e., the dimensional normative interval is large), then it would be difficult for the swarm of particles to locate Xd*; this is because the dimensional velocity Vi,d updated by equation (4) is restricted by a maximum velocity Vd^max fixed at 20% of Xd^up − Xd^lo. Figure 1 illustrates this phenomenon. In Figure 1(a), a particle i's dimensional position Pi,d and Xd* are close to different bounds of [Xd^lo, Xd^up], and i's dimensional exemplar position Ei,d is located in between Pi,d and Xd*; the distance between Ei,d and Pi,d and the distance between Xd* and Ei,d are both equal to 40% of Xd^up − Xd^lo; and i needs at least 2 generations to reach around Ei,d. As can be seen from Figure 1(b), when i flies past Ei,d, i's dimensional velocity update is influenced by two forces, i.e., the inertia force wVi,d and the exemplar force ar(Ei,d − Pi,d); the farther i is away from Ei,d, the more strongly the exemplar force pulls it back to Ei,d. In Figure 1(c), Pi,d is not that far from Xd*; however, Ei,d is close to the opposite bound, and Ei,d guides i to fly away from Xd*. As a result, the chance for a particle to reach close to the dimensional global optimum is small. Furthermore, in case the dimensional global optimum is located near the dimensional search space bound and the dimensional normative interval is large on a significant number of dimensions, CLPSO and ECLPSO would fail to find the global optimum or a near-optimum solution, e.g., on Rosenbrock's function, rotated Schwefel's function, and rotated Rastrigin's function, for all the runs as reported in [8]. Therefore, Vd^max should not be fixed at 20% of Xd^up − Xd^lo. 
We propose to adaptively adjust Vd^max in each generation according to the following equation:

Vd^max = s(Bd^up − Bd^lo), (12)

where the scaling coefficient s is a positive value. Vd^max is thus positively related to the dimensional normative interval's size Bd^up − Bd^lo. When Bd^up − Bd^lo is large, Vd^max is large and contributes to timely flight for getting close to Xd*; on the contrary, Vd^max is small when Bd^up − Bd^lo becomes small in order to benefit fine-grained search.
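Equation (12) as we read it is a one-line computation; s = 1.1 and s = 0.1 are the two values examined in the experiments of Section 5:

```python
def adaptive_vmax(b_lo, b_up, s=1.1):
    """Dimensional maximum velocity tied to the normative interval size,
    per equation (12)."""
    return s * (b_up - b_lo)
```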

Allowing each particle i's position Pi,d on each dimension d to be infeasible also inhibits the particles from moving close to X*. Figure 1(d) shows an example: Xd* is near a bound of the dimensional search space, Pi,d trespasses that bound and is infeasible, and Ei,d is far from Xd*; because of the force imposed by Ei,d, Pi,d is pulled back to a feasible dimensional position far from Xd*. Accordingly, we propose that an infeasible dimensional position be repaired immediately by reinitialization between the previous feasible dimensional position and the trespassed dimensional search space bound [43, 44].
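The repair rule just described can be sketched as a uniform reinitialization between the previous feasible dimensional position and the trespassed bound:

```python
import random

def repair_position(p, prev_feasible, x_lo, x_up):
    """Repair an infeasible dimensional position by reinitialization
    between the previous feasible position and the trespassed bound."""
    if p < x_lo:
        return random.uniform(x_lo, prev_feasible)
    if p > x_up:
        return random.uniform(prev_feasible, x_up)
    return p  # already feasible
```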

4.2. Dimensional Independent and Adaptive Inertia Weights and Acceleration Coefficients

For CLPSO and ECLPSO, the inertia weight w used in equation (4) is initially large, resulting in a large inertia force that is helpful for exploration, and w linearly decreases in each generation so as to gradually weaken the inertia force for the purpose of facilitating convergence and solution refinement. As w is dynamically updated according to the generation counter k in equation (7), w might obstruct exploration if the swarm of particles has not found the global optimum or a near-optimum solution even when k is large, and w might also impede convergence if a promising solution has already been located and the particles could thus start solution refinement even when k is small. In addition, the same w is used on all the dimensions, whereas the search processes of the particles often evolve differently on different dimensions, i.e., taking different numbers of generations for the exploration phase. We thus propose to assign an independent inertia weight wd to each dimension d to replace w in equation (4) and to adaptively set wd, whenever the dimensional normative interval is greater than 1% of the dimensional search space or greater than 2, according to equations (13) and (14), where the tradeoff coefficient is a positive number that adjusts the tradeoff between the two terms of equation (13). The empirical value chosen for the tradeoff coefficient is 0.3. The incorporation of the term related to the normative interval aims to improve the particles' exploration and convergence capabilities: if the dimensional normative interval is large, the particles are still exploring different regions of the dimensional search space, and accordingly, wd needs to be large; when Bd^up − Bd^lo becomes small, wd also grows small to facilitate convergence.
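Equations (13) and (14) are not reproduced here; purely as a hedged illustration of the idea, one plausible form blends a generation-dependent schedule with the normative interval's relative size via the tradeoff coefficient (0.3). The blend below is our own assumption, not the paper's exact equations:

```python
def adaptive_inertia_weight(k, k_max, b_lo, b_up, x_lo, x_up,
                            w_min=0.4, w_max=0.9, tradeoff=0.3):
    """Hypothetical dimensional inertia weight: a mix of a linear
    schedule (as in equation (7)) and the normative interval's relative
    size. Illustrative only; not the paper's equations (13)-(14)."""
    schedule = 1.0 - k / k_max                      # large early, small late
    rel_size = min(1.0, (b_up - b_lo) / (x_up - x_lo))
    mix = (1.0 - tradeoff) * schedule + tradeoff * rel_size
    return w_min + (w_max - w_min) * mix
```

With this blend, a still-wide normative interval keeps wd high even late in the run, matching the qualitative behavior described above.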

We further propose to assign an independent acceleration coefficient ad to each dimension d to replace a in equation (4). wd and ad must satisfy the following so-called stability condition [26, 45, 46]:

0 < ad < 2(1 + wd). (15)

Hence, ad is simply adaptively adjusted along with wd according to equation (16) so that the stability condition always holds.
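A check of the stability condition as we read equation (15), i.e., the commonly cited PSO convergence region 0 < ad < 2(1 + wd):

```python
def satisfies_stability(w_d, a_d):
    """True if the pair (w_d, a_d) lies in the convergence region
    0 < a_d < 2 * (1 + w_d) that we take equation (15) to state."""
    return 0.0 < a_d < 2.0 * (1.0 + w_d)
```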

4.3. Dimensional Independent and Adaptive Learning Probabilities

Regarding each particle i in CLPSO and ECLPSO, a large value of i's learning probability Li enables i to learn more from its own personal best position during velocity update and hence is beneficial for solution refinement, while a small value of Li lets i learn more from other particles' personal best positions and accordingly encourages i to search diversely. Li is adaptively updated based on i's fitness rank Ti and the number of exploitation valid dimensions Mk in each generation k. A serious issue occurs if Mk is 0 or a small value in all the generations, e.g., as reported on Rosenbrock's function, rotated Schwefel's function, and rotated Rastrigin's function in [8]; a small Mk leads to small learning probabilities for the swarm of particles and fails to realize convergence. We propose to assign an independent learning probability Li,d to each particle i on each dimension d and to adaptively set Li,d in each generation k according to equations (17) and (18), where Lmin = 0.05; Lmax = 0.75; and the learning probability-based coefficient is a positive number no greater than 1. One term grows logarithmically with the generation counter k in order to facilitate convergence; the other term, being positively related with Bd^up − Bd^lo, also benefits convergence when the dimensional normative interval is large.

4.4. Workflow and Complexity Analysis

ACLPSO is our proposed PSO variant based on ECLPSO with dimensional independent and adaptive maximum velocities, inertia weights, acceleration coefficients, and learning probabilities. The detailed step-by-step workflow of ACLPSO is as follows:

Step 1: for each particle i and each dimension d, randomly initialize i's dimensional velocity Vi,d and dimensional position Pi,d based on the dimensional search space [Xd^lo, Xd^up], calculate i's fitness value f(Pi), and set i's personal best fitness value f(Bi) = f(Pi), i's dimensional personal best position Bi,d = Pi,d, i's cessation counter Wi = 0, the generation counter k = 1, the maximum number of generations kmax, and all the other parameters.
Step 2: if k ≤ kmax, go to Step 3; otherwise, go to Step 7.
Step 3: for each dimension d, determine the dimensional normative interval [Bd^lo, Bd^up], and update the dimensional maximum velocity Vd^max according to equation (12).
Step 4: for each particle i, calculate i's fitness rank Ti; and if Wi reaches the exemplar refreshing threshold, reset Wi = 1 and reassign i's dimensional exemplar Ei,d on each dimension d.
Step 5: for each particle i and each dimension d, adjust i's dimensional learning probability Li,d according to equations (17) and (18); if [Bd^lo, Bd^up] is indeed small, update Vi,d according to equation (9); otherwise, update the dimensional inertia weight wd according to equations (13) and (14), the dimensional acceleration coefficient ad according to equation (16), and Vi,d according to equations (4) and (6); update Pi,d according to equation (2), and repair Pi,d if Pi,d trespasses [Xd^lo, Xd^up].
Step 6: for each particle i, calculate f(Pi); if f(Pi) ≥ f(Bi), update Wi = Wi + 1; otherwise, set f(Bi) = f(Pi) and Bi,d = Pi,d on each dimension d; then update k = k + 1 and go to Step 2.
Step 7: output the global best position with the best fitness value among all the particles' personal best positions.
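The workflow above can be condensed into a runnable sketch. The adaptive inertia weights, acceleration coefficients, learning probabilities, PbE, and the exemplar refreshing gap are simplified to fixed values or omitted here; only the loop structure, the comprehensive-learning exemplars, and the adaptive maximum velocity of equation (12) are kept, and infeasible positions are crudely clamped rather than reinitialized:

```python
import random

def aclpso_sketch(f, D, x_lo, x_up, N=40, k_max=200, s=1.1):
    """Stripped-down sketch of Steps 1-7 (minimization; illustrative only)."""
    w, a, L = 0.5, 1.5, 0.3                        # fixed simplifications
    P = [[random.uniform(x_lo, x_up) for _ in range(D)] for _ in range(N)]
    V = [[0.0] * D for _ in range(N)]
    B = [row[:] for row in P]                      # Step 1
    fB = [f(p) for p in P]
    for k in range(k_max):                         # Step 2
        for d in range(D):
            b_lo = min(B[i][d] for i in range(N))  # Step 3: normative interval
            b_up = max(B[i][d] for i in range(N))
            vmax = s * (b_up - b_lo) or 1e-9       # equation (12)
            for i in range(N):                     # Steps 4-5 (simplified)
                if random.random() >= L:
                    e = B[i][d]
                else:
                    j1, j2 = random.sample([p for p in range(N) if p != i], 2)
                    e = B[j1][d] if fB[j1] < fB[j2] else B[j2][d]
                V[i][d] = w * V[i][d] + a * random.random() * (e - P[i][d])
                V[i][d] = max(-vmax, min(vmax, V[i][d]))
                P[i][d] += V[i][d]
                P[i][d] = min(max(P[i][d], x_lo), x_up)  # crude repair
        for i in range(N):                         # Step 6
            fi = f(P[i])
            if fi < fB[i]:
                fB[i], B[i] = fi, P[i][:]
    g = min(range(N), key=lambda i: fB[i])         # Step 7
    return B[g], fB[g]
```

Even this simplified loop converges on easy functions such as the sphere function, which illustrates how the shrinking normative intervals tighten the per-dimension maximum velocities over time.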

As analyzed in [8], the space and time complexities of ECLPSO are, respectively, O(ND) bytes and O(kmax(N log N + ND)) basic operations plus O(kmaxN) function evaluations (FEs). Concerning ACLPSO, storing the dimensional independent inertia weights and acceleration coefficients requires O(D) bytes, and storing the dimensional independent learning probabilities needs O(ND) bytes. Adaptively updating the dimensional independent maximum velocities, inertia weights, and acceleration coefficients calls for O(kmaxD) basic operations, and adaptively adjusting the dimensional independent learning probabilities demands O(kmaxND) basic operations. Therefore, the space and time complexities of ACLPSO are the same as those of ECLPSO.

5. Experimental Studies

5.1. Experimental Settings

The experimental hardware platform is a Microsoft Surface Pro laptop computer with an Intel Core i5-7300U central processor at a frequency of 2.6 GHz, 8 GB internal memory, and a 256 GB solid-state disk as external memory. The operating system is 64-bit Windows 10.

16 commonly studied 30-dimensional functions [6–8, 24, 47] are used in this paper for benchmarking ACLPSO and other PSO variants. The name, the expression, the global optimum, the function value of the global optimum, the search space, and the initialization space of each function are listed in Table 1. The functions are classified into 5 categories, namely, unimodal, multimodal, shifted, rotated, and shifted rotated. Rosenbrock's function f3 is unimodal in a 2-dimensional or 3-dimensional search space but is multimodal in higher-dimensional cases [48]; it features a narrow valley leading from perceived local optima to the global optimum. With the incorporation of the cosine term cos(2πXd), there are a large number of regularly distributed local optima for Rastrigin's function f5. Ackley's function f7 has a deep global optimum and many minor local optima. Griewank's function f8 contains a cosine multiplication term that causes linkages among the decision variables; f8 is similar to f5 in terms of having many regularly distributed local optima. Schwefel's function f9 has a global optimum that is distant from the local optima. With respect to the unimodal and multimodal functions f1 to f9, the dimensional values of the global optimum are the same on all the dimensions. A shifted function shifts the global optimum to a vector Z that can be different on each dimension. A rotated function multiplies the original decision vector X by an orthogonal matrix O to get a rotated decision vector Y = XO; because of the rotation, if one dimension of X changes, all the dimensions of Y are affected. A shifted rotated function is both shifted and rotated. The shifted global optima of the shifted functions f10 to f12 and the shifted rotated function f16 can be found in [47]. The orthogonal matrices of the rotated functions f13 to f15 and the shifted rotated function f16 are generated by Salomon's method [49]. 
The initialization spaces of the functions f1, f2, f4, f5, f6, f7, f8, f13, and f15 are intentionally set to be asymmetric.

We conduct experiments to investigate the following 3 issues: (1) What are the key parameters of ACLPSO, and how do they impact the performance of ACLPSO? (2) How do the dimensional independent and adaptive maximum velocities, inertia weights, acceleration coefficients, and learning probabilities improve the performance of ACLPSO? (3) How does the performance of ACLPSO compare with that of other PSO variants? We consider 3 variants of ACLPSO, i.e., ACLPSO-1, ACLPSO-2, and ACLPSO-3. They are the same as ACLPSO, except that ACLPSO-1 does not repair the dimensional position Pi,d for each particle i on each dimension d if Pi,d trespasses the dimensional search space [Xd^lo, Xd^up], ACLPSO-2 does not adopt the dimensional independent and adaptive inertia weights and acceleration coefficients, and ACLPSO-3 does not take advantage of the dimensional independent and adaptive learning probabilities. Besides ACLPSO-1, ACLPSO-2, and ACLPSO-3, ACLPSO is further compared with CLPSO [6], ECLPSO [8], OLPSO-L [7], adaptive GPSO (AGPSO) [9], feedback learning GPSO with quadratic inertia weight (FLGPSO-QIW) [15], and GPSO with an aging leader and challengers (ALC-GPSO) [11]. ACLPSO, ACLPSO-1, ACLPSO-2, ACLPSO-3, CLPSO, and ECLPSO are all implemented in Java; the number of particles N is set to 40, and 25 runs are executed on each function. The parameters of CLPSO, ECLPSO, OLPSO-L, AGPSO, FLGPSO-QIW, and ALC-GPSO take the recommended values that were empirically determined based on extensive experiments on various benchmark functions in [6–9, 11, 15]. Note that the value of N can differ among PSO variants: N is fixed at 40 for CLPSO, ECLPSO, and OLPSO-L in [6–8], while it is equal to 20 for AGPSO, FLGPSO-QIW, and ALC-GPSO in [9, 11, 15]. As we do not have the source codes of OLPSO-L, AGPSO, FLGPSO-QIW, and ALC-GPSO, we directly copy the results of these 4 variants from [7, 24] for performance comparison. 
For all the PSO variants compared, each run consumes 200,000 FEs.

5.2. Experimental Results and Discussion

Table 2 lists the mean and standard deviation (SD) global best fitness value results of ACLPSO on all the benchmark functions under different combinations of values of the normative interval scaling coefficient s and the learning probability-based coefficient. Four combinations are considered, i.e., I (s = 1.1), II (s = 1.1), III (s = 0.1), and IV (s = 0.1), with the learning probability-based coefficient taking either 0.3 or 0.05 in each combination; the best combination on each function is marked in bold. Table 3 gives the mean and SD results of the final number of exploitation valid dimensions of ACLPSO under the 4 combinations on all the functions. Table 4 compares the mean and SD global best fitness value results of ACLPSO with the best combination, ACLPSO-1, ACLPSO-2, ACLPSO-3, CLPSO, ECLPSO, OLPSO-L, AGPSO, FLGPSO-QIW, and ALC-GPSO on all the functions. A two-tailed t-test with 48 degrees of freedom and a significance level of 0.05 is performed between the global best fitness value results of ACLPSO with the best combination and those of ECLPSO on each function; the t-test results on all the functions are listed in Table 5. Table 6 gives the mean and SD execution time results of ACLPSO with the best combination, ECLPSO, and CLPSO on all the functions. Table 7 lists the mean and SD global best fitness value results of ACLPSO with other parameter settings on f3, f4, f12, and f14. Table 8 gives the mean and SD global best fitness value and execution time results of CLPSO with other parameter settings on f1 and f2. Figure 2 illustrates the changes of the global best fitness value during the search process of ACLPSO with the best combination in the best run on f2, f3, f4, f12, f13, and f14.

As can be seen from Table 2, the best combinations are, respectively, IV, IV, III, IV, I, II, II, II, I, I, II, IV, II, II, II, and IV on the 16 functions. ACLPSO with the best combination is able to find the global optimum or a near-optimum solution on each function in all the 25 runs. ACLPSO is likely to get trapped in an unsatisfactory local optimum on f3 with combinations II and IV, on f4 with combinations I, II, and III, on f5 with combinations II and IV, on f9 with combinations II, III, and IV, on f10 with combinations II and IV, on f12 with combinations I and II, on f13 with combinations I, III, and IV, and on f14 with combinations III and IV. The accuracy of the mean global best fitness value with the best combination is noticeably excellent on f1, f2, f5, f6, f7, f8, f11, f15, and f16. These observations indicate that s and the learning probability-based coefficient are the key parameters of ACLPSO, and the performance of ACLPSO is sensitive to their values. The normative interval scaling coefficient s determines the search granularity: a large s encourages the particles to search with a coarse granularity so as to escape from an unsatisfactory local optimum and locate the global optimum or a near-optimum solution, whereas a small s lets the particles search with a fine granularity so as not to miss a deep global optimum or a deep near-optimum solution during the search process. The learning probability-based coefficient controls, for each particle, the number of dimensions learning from the particle's own personal best position; large and small values, respectively, promote the exchange of valuable information among the particles and preserve the valuable information embodied in each particle. A large coefficient also benefits convergence and exploitation, but it might lead to premature stagnation and hinder exploration if valuable information about the global optimum or a near-optimum solution is not preserved.
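The role of s can be sketched from the definitions given earlier in the paper: on each dimension, the normative interval is bounded by the minimum and maximum personal best positions of the swarm, and the dimensional maximum velocity is adapted from that interval. The snippet below is a minimal illustration under the assumption that the maximum velocity is simply the interval width scaled by s (the exact update rule is given in the paper's method section); it only shows why s = 1.1 yields coarse-granularity moves and s = 0.1 fine-granularity moves.

```python
def dimensional_vmax(pbest_d, s):
    """Illustrative sketch (assumed form, not the paper's exact rule):
    derive dimension d's maximum velocity from the normative interval
    [min pbest, max pbest] on that dimension, scaled by s.
    pbest_d: all particles' personal best values on dimension d."""
    lo, hi = min(pbest_d), max(pbest_d)
    return s * (hi - lo)
```

With personal bests spanning an interval of width 2 on some dimension, s = 1.1 permits steps slightly larger than the interval itself (escape from a local optimum), while s = 0.1 confines steps to a tenth of it (careful refinement).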
We can see from Table 3 that the mean number of exploitation valid dimensions of ACLPSO with the learning probability-based coefficient set to 0.3 is 30 or close to 30 on all the functions except f6; in contrast, with the coefficient fixed at 0.05, the mean number of exploitation valid dimensions of ACLPSO is less than 11 or even 0 on f3, f4, f6, f8, f10, f11, f12, f13, f14, and f15. The functions have different landscapes; thus, different combinations achieve the best performance on different functions.
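The effect of a particle's learning probability can be made concrete with a CLPSO-style exemplar construction sketch. The function below is an assumption-laden illustration (CLPSO selects the "other" particle via a fitness-based tournament rather than uniformly at random, and ACLPSO's probabilities are dimensional and adaptive): for each dimension, with probability pc the exemplar dimension comes from another particle's personal best; otherwise it is copied from the particle's own personal best, so a larger pc exchanges more information across the swarm.

```python
import random

def build_exemplar(i, pbests, pc):
    """Sketch of CLPSO-style exemplar construction for particle i.
    pbests: list of all particles' personal best positions.
    pc: learning probability (here a single value per particle for
    simplicity; ACLPSO uses an independent value per dimension).
    Note: uniform random selection replaces CLPSO's tournament here."""
    n, d = len(pbests), len(pbests[0])
    exemplar = []
    for dim in range(d):
        if random.random() < pc and n > 1:
            # Learn this dimension from another particle's personal best.
            j = random.choice([k for k in range(n) if k != i])
            exemplar.append(pbests[j][dim])
        else:
            # Preserve the particle's own personal best on this dimension.
            exemplar.append(pbests[i][dim])
    return exemplar
```

At the extremes, pc = 0 reproduces the particle's own personal best on every dimension, and pc close to 1 draws nearly every dimension from the rest of the swarm, which mirrors the trade-off between preserving and exchanging information discussed above.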

In Table 4, ACLPSO-1, ACLPSO-2, and ACLPSO-3 take the same values of s and the learning probability-based coefficient as the best combination of ACLPSO on each function. The mean global best fitness value results of ACLPSO-1 are worse than those of ACLPSO on f1, f2, f3, f10, f13, f14, f15, and f16. ACLPSO-2 performs worse than ACLPSO in terms of the mean global best fitness value on f1, f2, f3, f4, f5, f10, f11, f12, f13, f14, f15, and f16. The mean global best fitness value results of ACLPSO-3 are also worse than those of ACLPSO on all the functions except f9 and f16. ACLPSO-1 finds an unsatisfactory local optimum on f14 in all the runs; ACLPSO-2 cannot locate the global optimum or a near-optimum solution on f3, f10, f11, and f14 in all or some of the runs; and ACLPSO-3 fails to effectively solve f4, f6, f12, f13, and f14, as the solutions found are unsatisfactory in all or some of the runs. The comparisons among ACLPSO, ACLPSO-1, ACLPSO-2, and ACLPSO-3 validate that repairing a particle's dimensional position when it trespasses the dimensional search space, the dimensional independent and adaptive inertia weights and acceleration coefficients, and the dimensional independent and adaptive learning probabilities are all appropriate components of ACLPSO.

It can be seen from Table 4 that ACLPSO, in general, outperforms the literature PSO variants, including CLPSO, ECLPSO, OLPSO-L, AGPSO, FLGPSO-QIW, and ALC-GPSO, in terms of the mean global best fitness value results. CLPSO and ECLPSO both fail to obtain the global optimum or a near-optimum solution on f3, f4, f12, f13, and f14 in all the runs. ECLPSO additionally fails on f10, finding an unsatisfactory local optimum in all the runs. As the mean and SD global best fitness value results of OLPSO-L, AGPSO, FLGPSO-QIW, and ALC-GPSO are directly copied from [7, 24], the symbol "—" represents an unavailable result in Table 4. The mean global best fitness value results are unsatisfactory for OLPSO-L on f10, f13, and f14; for AGPSO on f3, f4, f5, f12, f13, f15, and f16; for FLGPSO-QIW on f3, f4, f5, f6, f12, f13, f15, and f16; and for ALC-GPSO on f3, f9, and f10. The accuracies of ACLPSO's mean global best fitness value results are the best on f4, f5, f6, f7, f8, f9, f10, f11, f12, f13, f14, and f15. The symbol "—" also appears in Table 5, where it denotes a division-by-zero error. The t-test results are less than 0.05 on f2, f3, f4, f10, f12, f13, and f14; therefore, the global best fitness value results of ACLPSO are statistically significantly different from those of ECLPSO on these 7 functions. Based on these observations, though ECLPSO enhances the exploitation and convergence performance of CLPSO, the exploration performance of ECLPSO is as weak as that of CLPSO on some complex problems; ACLPSO significantly improves the exploration performance of ECLPSO while remaining good at exploitation and convergence, owing to the adoption of the dimensional independent and adaptive maximum velocities, inertia weights, acceleration coefficients, and learning probabilities.
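The t-test used above can be sketched in a few lines. With 25 runs per algorithm, the pooled equal-variance two-sample t statistic has 25 + 25 - 2 = 48 degrees of freedom, matching Table 5; this is a minimal sketch, not the authors' code. It also makes the "—" entries transparent: when both algorithms hit the exact optimum in every run, both sample variances are zero, the pooled variance vanishes, and the statistic is a division by zero.

```python
import math

def pooled_t_statistic(a, b):
    """Two-sample t statistic with pooled variance (equal-variance
    assumption). For 25 runs each, degrees of freedom = 48.
    Raises ZeroDivisionError when both samples have zero variance,
    which corresponds to the '—' entries in Table 5."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# For df = 48, the two-tailed critical value at significance 0.05 is
# about 2.011; |t| above it marks a statistically significant difference.
```

In practice, a library routine such as SciPy's `scipy.stats.ttest_ind` performs the same computation and additionally returns the two-tailed p-value compared against 0.05 in Table 5.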

ACLPSO and ECLPSO employ more strategies than CLPSO to achieve significantly better performance; as a result, their mean execution time results exceed those of CLPSO on all the functions in Table 6. The differences are most noticeable on f1, f2, f7, f8, f11, f15, and f16, where the mean execution times of ACLPSO and ECLPSO are both about 300 to 500 ms more than that of CLPSO. It must be pointed out that, for many real-world complex problems, the function evaluation, i.e., evaluating the fitness of a position, can be very time-consuming; accordingly, the execution time overhead introduced by the additional strategies of ACLPSO and ECLPSO would become relatively very small. The mean execution time results of ACLPSO are slightly more than those of ECLPSO on f1, f2, f3, f7, f8, f11, and f13, considerably more on f4, f12, f14, f15, and f16, slightly less on f5 and f6, and considerably less on f9 and f10, meaning that the dimensional independent and adaptive maximum velocities, inertia weights, acceleration coefficients, and learning probabilities essentially do not increase the execution time. For ACLPSO, ECLPSO, and CLPSO, the mean execution time spent on a rotated function is more than that consumed on the corresponding original function, as can be observed from the pairs of mean execution time results on f13 and f5, f14 and f9, f15 and f8, and f16 and f7. The SD execution time results of ACLPSO, ECLPSO, and CLPSO are rather small compared to the mean execution time results on all the functions.

The number of particles N is also a key parameter of ACLPSO. As can be seen from Table 7, setting N to 20 on f4 and f15 renders much worse mean and SD global best fitness value results, and ACLPSO cannot find the global optimum or a near-optimum solution in all or most of the runs, because fewer particles lead to insufficient diversity. We can also observe from Table 7 that setting the learning probability-based coefficient above 0.05 on f3, the normative interval scaling coefficient s above 0.1 on f4, the learning probability-based coefficient below 0.3 on f12, and s below 1.1 on f14 causes ACLPSO to be unable to find the global optimum or a near-optimum solution in all or some of the runs. The mean and SD global best fitness value and execution time results of CLPSO on f1 and f2 in Table 8 indicate that, even with the number of FEs increased to 500,000 or N increased to 80, CLPSO still cannot achieve mean global best fitness values as accurate as those of ECLPSO, while the mean execution time results of CLPSO are close to those of ECLPSO. The PbE and ALP strategies, as well as the empirical values chosen for N and the best combination of s and the learning probability-based coefficient, are thus appropriate.

As shown in Figure 2, ECLPSO is liable to get stuck in premature stagnation on f3, f4, f12, f13, and f14. The exploitation performance of ACLPSO is considerably better than that of ECLPSO on f2. ACLPSO escapes from an unsatisfactory local optimum at the early stage of the search process on f4 and f14, at the middle stage on f3 and f12, and at the late stage on f13. According to Tables 2 and 3, ACLPSO takes the same best s value on f3, f4, and f12, the same best learning probability-based coefficient value on f4, f12, f13, and f14, and the same best s value on f13 and f14; the mean number of exploitation valid dimensions under the best combination is 30 on f4 and f12, slightly smaller than 30 on f13 and f14, and considerably smaller than 30 on f3. It is challenging to develop a unified setting of s and the learning probability-based coefficient based on the generation counter and the dimensional normative intervals.

6. Conclusions

In this paper, we have proposed ACLPSO for the purpose of further significantly improving the exploration performance of ECLPSO. ACLPSO introduces an independent inertia weight and an independent acceleration coefficient for each dimension, as well as an independent learning probability for each particle on each dimension. ACLPSO determines the normative interval with respect to each dimension in each generation. Based on the dimensional normative intervals, ACLPSO adaptively adjusts the dimensional independent maximum velocities, inertia weights, acceleration coefficients, and learning probabilities. Experiments on a variety of unimodal, multimodal, shifted, rotated, and shifted rotated benchmark functions have demonstrated that ACLPSO successfully addresses exploration as well as exploitation and convergence, as ACLPSO is able to derive the global optimum or a near-optimum solution on all the functions in all the runs with the normative interval scaling coefficient and the learning probability-based coefficient appropriately set. ACLPSO is a promising metaheuristic for global optimization. In the future, we plan to mine more critical information inherently embodied in the search experience of the particles and to develop a high-performance PSO variant based on ACLPSO with a unified parameter setting that works well on most global optimization problems and complex real-world applications, e.g., the optimal operation of power systems [50, 51].

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was financially supported by the National Natural Science Foundation of China Project (61703199), the Jiangxi Province Natural Science Foundation Exceptional Young Scholar Project (2018ACB21029), the Shaanxi Province Natural Science Foundation Basic Research Project (2020JM-278), and the Central Universities Fundamental Research Foundation Project (GK202003006).