Abstract

In this study, a modified version of the multiswarm particle swarm optimization algorithm (MsPSO) is proposed. The classical MsPSO algorithm suffers from premature stagnation due to limited particle diversity; as a result, it easily becomes trapped in a local optimum. To overcome this weakness, this work presents a heterogeneous multiswarm PSO algorithm based on an adaptive inertia weight strategy, called A-MsPSO. The main advantages of MsPSO are that it is simple to use and that few settings need to be tuned. In the MsPSO method, the inertia weight is a key parameter that considerably affects convergence, exploration, and exploitation. In this manuscript, an adaptive inertia weight is adopted to improve the global search ability of the classical MsPSO algorithm. Its performance relies on exploration, defined as an algorithm’s capacity to search a variety of regions of the search space, and on exploitation, the ability to search a small region thoroughly and refine a candidate solution. If a swarm discovers a global best location during an iteration, the inertia weight is increased and exploration in that direction is reinforced. Standard benchmark tests and indicators from the specialized literature are used to demonstrate the efficiency of the proposed algorithm. Furthermore, comparisons between A-MsPSO and six other common PSO algorithms show that our proposal performs very promisingly on various types of optimization problems, yielding both greater solution accuracy and shorter solution times.

1. Introduction

Particle swarm optimization (PSO) is a stochastic population-based algorithm developed by Eberhart and Kennedy [1]. PSO was derived from the intelligent collective behavior of animal species (e.g., flocks of birds), especially their food-searching behavior. At the start of foraging, one bird discovers the food source and is very quickly followed by another, and so on. In this process, information about a potential food source is widely disseminated within the group of birds that fly in search of food in a more or less orderly manner. This gathering takes place through a social exchange of information (voluntary or not) between individuals of the same species: one of them finds a solution and the others adopt it. Kennedy and Eberhart [2] initially attempted to simulate the birds’ ability to fly synchronously and to change direction suddenly while staying in an optimal formation. They then turned their model into a simple and efficient optimization algorithm in which the particles are individuals moving through the search hyperspace. PSO has been effectively deployed in many areas in recent years. Due to its advantages, such as a wide search range, fast convergence, and ease of implementation, it has been used in a variety of research fields and applications. On the other hand, since the PSO algorithm easily becomes stuck in a local optimum, numerous improvements have been proposed. The enhancements reviewed below are organized according to the particular gap in PSO that they seek to fill.

The Binary PSO. Eberhart and Kennedy introduced a binary version of PSO [3] to represent and solve problems of a binary nature, such as evolving network architectures, and to compare binary-encoded genetic algorithms (GA) with PSO.

Rate of Convergence Improvements. Different strategies have been suggested to increase the PSO convergence rate. They usually involve improving the PSO update equations without altering the algorithm structure, which generally enhances local optimization performance on functions with several local minima. Three parameters appear in the PSO update equations: the inertia weight [4–6] and the two acceleration coefficients. In fact, the inertia weight is among the earliest improvements made to the original PSO to further enhance its convergence rate. One of the most common enhancements is the inertia weight introduced by Shi and Eberhart [7]. In 1998, the authors suggested a technique to dynamically adapt the inertia weight using a fuzzy controller [8]. Later, the researchers in [9] demonstrated that a constriction factor is required for the PSO algorithm to converge.

In the same framework and from another analytical viewpoint, PSO algorithms have been successfully used in a multitude of industrial applications, yet they are readily trapped in local optima. To solve this issue, several improvements have been made: enhancing the particle displacement adjustment parameters, improving topological structures, using heterogeneous updating rules, and grouping the population into multiple swarms. In [10], Liang and Suganthan proposed a dynamic multiswarm particle swarm optimizer (DMSPSO) wherein the entire population is split into numerous swarms, applying clustering strategies to regroup these swarms and ensure the exchange of information between them. In [11], Yen and Leong suggested two strategies, namely, a swarm growth strategy and a swarm decline strategy, which enable swarms to share information with the best individuals in the population. In addition, in [12], a cooperative PSO method, which divides the population into four subswarms and significantly affects the convergence of the algorithm, was introduced. In [13], Li et al. proposed an improved adaptive holonic PSO algorithm that did not consider connections between particles and used a clustering strategy; this algorithm clearly enhanced the search efficiency. In [14], two optimization examples were solved using an improved optimization algorithm (OIPSO): optimization under resource constraints with the shortest project duration and optimization of resource leveling at fixed duration. These two examples proved that the developed algorithm improves the optimization effect of PSO and accelerates the optimization speed. Similarly, in 2014, a new metaheuristic search algorithm was proposed by Ngaam [15]; the population was partitioned into four subswarms: two basic subswarms for exploitation, an adaptive subswarm, and an exploration subswarm. The main advantage of MsPSO is the ease of tuning its parameters, which improves the search performance of the algorithm. Despite the reported improvements, these variants cannot always meet the requirements of applications in various areas. Because of the importance of the inertia weight as a tuning parameter used to ensure convergence and improve accuracy, Zdiri et al. [16] performed a comparative study of the different inertia weight tuning strategies and showed that the adaptive inertia weight is the technique that yields the greatest precision. In this context, and to improve the MsPSO algorithm performance, a new modified version, named multiswarm particle swarm optimization algorithm based on adaptive inertia weight (A-MsPSO), is proposed in this study. The main contribution of this article is summarized as follows: the population is split into four subswarms, each of which runs a single PSO. In each subswarm, we use an adaptive inertia weight strategy that monitors the search situation and adapts according to the evolutionary state of each particle. Adaptation is carried out, at each iteration, by a mechanism driven by the best overall fitness (Gbest), which is determined from the best overall fitness of each subswarm. Two constant acceleration coefficients are applied to enhance the local search capacity of each subswarm as well as the global exploration ability and the convergence speed.

The remainder of this paper is structured as follows: Section 2 presents the PSO algorithm. The MsPSO algorithm is defined in Section 3. Section 4 presents the introduced optimization algorithm (A-MsPSO). In Section 5, the relation between the position and the velocity of the particles in each subswarm of A-MsPSO is explained. Section 6 presents the simulation results and shows the efficiency of the introduced algorithm by comparing it with other approaches. Section 7 suggests possibilities for future work. Finally, Section 8 concludes this work.

2. Particle Swarm Optimization

2.1. Definition

At the outset of the algorithm, the particles of the swarm are randomly distributed in the search space, each particle having a randomly defined velocity and position. Thereafter, at each iteration, every particle is able to:
(i) Evaluate its current position, and memorize the best position it has reached so far together with the fitness function value at that position.
(ii) Communicate with the neighboring particles, obtain from each of them its own best performance, modify its velocity according to the best solutions, and consequently define the direction of its next displacement.
The strategy of particle displacement is explained in Figure 1.

2.2. Formalization

The particle displacement between iteration t and iteration t + 1 is formulated analytically by the following two relations for the velocity and the position, respectively:

v_i(t + 1) = w v_i(t) + c1 r1 (Pbest_i − x_i(t)) + c2 r2 (Gbest − x_i(t)),
x_i(t + 1) = x_i(t) + v_i(t + 1),

where w is a constant called the inertia weight; Pbest_i denotes the best historical personal position found by particle i; Gbest corresponds to the best overall position found by the population; c1 designates the cognitive parameter; c2 is the social parameter; and r1, r2 are generated randomly from a uniform distribution in [0, 1].
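For illustration, a minimal MATLAB sketch of one velocity-and-position update for a single particle is given below; the parameter values and variable names are illustrative placeholders, not code from the original study:

% Minimal sketch of one PSO update step for a single particle (illustrative only).
D  = 30;                 % problem dimension
w  = 0.7;                % inertia weight
c1 = 1.494; c2 = 1.494;  % cognitive and social acceleration coefficients
x  = rand(1, D);         % current position
v  = zeros(1, D);        % current velocity
pbest = x;               % best position found so far by this particle
gbest = rand(1, D);      % best position found so far by the whole population
r1 = rand(1, D); r2 = rand(1, D);                     % uniform random numbers in [0, 1]
v  = w*v + c1*r1.*(pbest - x) + c2*r2.*(gbest - x);   % velocity update (relation above)
x  = x + v;                                           % position update (relation above)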

2.3. PSO Improvement History

To improve the PSO algorithms, several modifications have been proposed. They can be grouped into three categories.

2.3.1. Association of PSO with Other Search Operators

In intelligent systems, hybridization consists in combining the desirable properties of different methods so as to minimize their weaknesses. Multiple hybrid PSO systems have been used in specific application contexts; in what follows, we briefly review some of these approaches. Among the techniques widely used to increase PSO performance, we can mention hybridization with evolutionary algorithms (EA) and particularly genetic algorithms (GA). Angeline [17] utilized a tournament selection mechanism whose purpose is to replace the position and velocity of each poorly performing particle with those of the best-performing particle. This movement in space improved performance on the three test functions used, although it departs from the classical PSO. Moreover, Zhang and Xie used, in [18], different techniques in tandem in their PSO with differential evolution (DEPSO): the canonical PSO and DE (differential evolution) operators were utilized in alternate generations. Hybridization was efficiently applied to several test functions, and the obtained results indicate that DEPSO can improve the PSO efficiency in solving higher-dimensional problems. Liu and Abraham, in [19], hybridized a turbulent PSO (TPSO) with a fuzzy logic controller. The TPSO was based on the observation that particles can become stuck in a suboptimal solution, which leads to premature convergence of PSO. The velocity parameters were then adaptively regulated during optimization using a fuzzy logic extension. The efficiency of the suggested method in solving high- and low-dimensional problems was tested, with positive results in both cases. In particular, when the performance of canonical PSO deteriorated considerably on strongly dimensional problems, the TPSO and FATPSO were only slightly affected. Ying Tan described a PSO algorithm based on a cloning process (CPSO), inspired by the natural immune system, in [20]. The method is characterized by its quick convergence and ease of implementation. The applied algorithm enlarged the area adjacent to the potential solution and accelerated the evolution of the swarm by cloning the best individual in the following generations, resulting in greater optimization capacity and convergence. To diversify the population, Xiao and Zuo [21] adopted a multipopulation technique, assigning each subpopulation to a separate peak. Then, they utilized a hybrid DEPSO operator in order to determine the optima of each one. Moving peaks benchmark tests yielded a much lower average offline error than competing solutions. Chen et al. integrated PSO and CS (cuckoo search), in [22], to develop PSOCS, where cuckoos near good solutions interact with each other and migrate slowly towards the ideal solutions, guided by the global bests of PSO. In [23], a new crow swarm optimization (CSO) algorithm, which combines PSO with the CSA (crow search algorithm) and allows individuals to explore unknown locations while being guided by a random individual, was suggested. The CSO algorithm was tested on several benchmark functions. The suggested CSO’s performance was measured in terms of optimization efficiency and global search capability, both of which were substantially better than those of either PSO or CSA.

2.3.2. Improvement of the Topological Structure

There are two original versions of PSO: the classic local ring version (Lbest) and the global star version (Gbest). In addition to these two original models, several topologies have been proposed, most of which enhanced the performance of the PSO algorithm. A variety of topologies have been introduced, evolving from static structures to dynamic ones. Some of the widely used and successful topologies are presented below.

(1) Static Topologies. The best-known static versions are the global version and the classic local version. In the latter, the swarm converges more slowly than in Gbest but is more likely to locate the global optimum. In general, using the Lbest model reduces the risk of premature convergence of the algorithm, which positively affects its performance, particularly on multimodal problems. We present some more recent examples as follows:

(2) Four Clusters [24]. The four-cluster topology uses four groups of particles linked together by several gateways. From a sociological point of view, this topology resembles four isolated communities in which a few particles can communicate with a particle belonging to another community. The particles are divided into four groups, each consisting of five particles and exchanging information with the other three groups. Each gateway is linked to only one group. This topology is characterized by the number of edges that indirectly connect two particles: the diameter of the graph is three, which means that information can be transmitted from one particle to another by traversing at most three edges.

(3) Wheel or Focal [25]. In this topology, all particles are isolated from each other and are connected only to the focal particle. Information is relayed through this focal particle, which uses the gathered data to adapt its trajectory.

In [26], the authors developed a new improved algorithm named fusion global-local topology particle swarm optimization (FGLT-PSO) that simultaneously utilizes local and global topologies in PSO to escape local optima. This proposal enhanced the performance of the PSO algorithm in terms of solution precision and convergence speed. The most important studies dealing with this network are represented in Figure 2.

(4) Dynamic Topologies. The authors tried to set up dynamic topologies with better performance than static topologies by changing their structures from one iteration to another. This change allows particles to alter their travel paths so that they can escape local optima. In the following, we discuss a set of dynamic topologies:
Fitness-distance ratio [27]: this is a variation of PSO based on a special social network inspired by the observation of animal behavior. This social network does not use a specific geometric structure to establish communication between particles. The position and velocity of a particle are updated dimension by dimension. For each dimension, the particle has only one informant, which prevents several informants from communicating information that cancels out. The choice of this informant, named nbest (nearest best), is based on two criteria: (i) it has to be close to the current particle and (ii) it must have visited a position with better fitness.
Bastos-Filho [28]: in this structure, the swarm is divided into groups or clans of particles. Each clan contains a fixed number of fully connected particles and has a leader. The authors added the possibility of particles migrating from one clan to another, and, at each iteration, the particle with the best location becomes the leader.

2.3.3. Control of the PSO Algorithm Parameters

Many recent works focusing on optimization have shown that the best performance can be obtained only by selecting appropriate adjustment parameters [29, 30]. In PSO, the two acceleration coefficients and the inertia weight are the three main factors affecting the algorithm performance. They also affect the orientation of the particle in its future displacement, so as to guarantee a balance between local search and global search. Their control prevents the swarm from exploding and makes convergence easier.

The inertia weight defines the exploration capacity of each particle in order to improve its convergence. A large value implies a large range of motion and, therefore, global exploration, whereas a small value implies a small range of motion and, thus, local exploration. Therefore, setting this factor amounts to finding a compromise between local exploration and global exploration. The size of the inertia weight influences the size of the explored hyperspace, and no single value can guarantee convergence towards the optimal solution. Like the performance of other swarm algorithms, that of PSO can be improved by selecting the right parameters. Thus, Clerc [31] gave some general guidelines for choosing the right combination. The acceleration coefficients were first used by Kennedy and Eberhart, who gave the social and cognitive parameters the same constant value. Although several enhancements have been made, the combination of the parameters w, c1, and c2 remains the key to adjusting the balance between the intensification and diversification phases of the search process. For example, in [32], a new adaptive optimization strategy for particle swarms (APSO) was formulated; the acceleration coefficients are initialized to 2.0 and then adjusted adaptively according to the evolutionary state (exploitation, exploration, convergence, and jumping out). It allows the automated adjustment of the acceleration coefficients and the inertia weight during runtime, resulting in faster convergence and better search efficiency. PSO presents a major problem called premature convergence, which can lead to the algorithm stagnating in a local optimum. Much effort has therefore been devoted by scientists and researchers to enhancing the performance of this algorithm. In 2014 [15], a new algorithm, called multiswarm particle swarm optimization and based on the distribution of the population into four subswarms cooperating through a specific strategy, was proposed by Ngaam. In the following section, we present MsPSO in detail.

3. Multiswarm Particle Swarm Optimization Algorithm (MsPSO)

3.1. General Description

This new optimization algorithm is an improved version of PSO. In this approach, the population is divided into four subswarms (an adaptive subswarm, an exploring subswarm, and two basic subswarms) to achieve the best management of exploration and exploitation in the search process and thus reach the best solution. The subswarm cooperation and communication model is shown in Figure 3.

S1 and S2 are the basic subswarms, whereas S3 and S4 are the adaptive and exploration subswarms, respectively. All the subswarms share information with each other and participate in the overall exploitation. Based on cooperation and information sharing, the particles in S3 use the velocity information and fitness values of the particles in S1 and S2 to refresh their velocities and positions, while the velocity of the particles in S4 is updated from the concentration of the particles in S1, S2, and S3. To improve the PSO convergence rate, several techniques that modify the inertia weight and/or the acceleration coefficients in the algorithm update equations have recently been developed.

3.2. The Inertia Weight in the Literature

The inertia weight, first introduced by Shi and Eberhart to ensure a balance between exploitation and exploration, controls the influence of the particle’s previous velocity on its future displacement. The authors showed that a large value of the inertia weight facilitates global exploration, a small value makes local exploration easier, and a suitable value yields faster convergence within a reasonable number of iterations. Several methods have been suggested to solve PSO-based optimization problems by using a modified inertia weight in the particle velocity equation. The strategies for changing the inertia weight can be classified into three groups.

3.2.1. Constant Inertia Weight

In many of the introduced methods, PSO-based optimization problems were solved by utilizing a constant inertia weight in a modified particle velocity equation. In [33], for example, Shi and Eberhart used a random value of the inertia weight to track the optima,

w = 0.5 + r/2,

where r is generated randomly from a uniform distribution in [0, 1].

3.2.2. Time-Varying Inertia Weight

In this class, the inertia weight value is determined as a function of time over a specified number of iterations. These strategies can be linear or nonlinear, and decreasing or increasing. In what follows, we present some of the methods with the greatest impact in the literature. For example, in [34], Xin and Chen introduced a linearly decreasing inertia weight used efficiently to enhance the fine-tuning characteristics of PSO. Besides, in [35], the authors added a chaotic term to a linearly increasing inertia weight. Moreover, Kordestani and Meybodi [36] developed an oscillating triangular inertia weight (OTIW) to track the optima in a dynamic environment.

3.2.3. Adaptive Inertia Weight

In the third category, inertia weight techniques keep track of the search situation and change the inertia weight value based on one or more feedback parameters [37]. In [38], Rao and Arumugam employed the ratio between the global best fitness and the mean local best fitness of the particles to calculate the inertia weight at each iteration and avoid premature convergence towards a local minimum. The adaptive inertia weight strategy was introduced to improve the search capacity and control the diversity of the population through an adaptive adjustment of the inertia weight.
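To make these categories concrete, the following MATLAB fragment sketches one common formula from each family; these are generic textbook forms given for orientation only and are not necessarily the exact expressions used in [33–38]:

% Illustrative inertia weight strategies (generic forms, for orientation only).
t = 100; Tmax = 2000;       % current iteration and maximum number of iterations
w_max = 0.9; w_min = 0.4;   % assumed initial and final inertia weight values

w_random   = 0.5 + rand/2;                        % random inertia weight
w_linear   = w_max - (w_max - w_min)*t/Tmax;      % linearly decreasing (time-varying) inertia weight
fb         = 0.8;                                 % example feedback parameter (e.g., a fitness ratio)
w_adaptive = w_min + (w_max - w_min)*fb;          % adaptive inertia weight driven by search feedback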

3.3. Limitations and Discussion of MsPSO Algorithm

The MsPSO algorithm has three main adjustment parameters, the inertia weight and the two acceleration coefficients c1 and c2, which significantly affect the convergence of the algorithm. The four subswarms use a time-varying inertia weight function, while c1 and c2 are set to constant values.

This category of functions can cause an incorrect update of the particle velocity, as no information about the evolutionary state reflecting the population diversity is identified or used. Therefore, the choice of the parameters can disturb the direction of the future displacement of each particle and degrade the algorithm performance in terms of computation time and convergence. To overcome these problems, we suggest, in this manuscript, a modified version of the MsPSO algorithm called adaptive multiswarm particle swarm optimization (A-MsPSO), in which a mechanism is applied to adjust both the inertia weight and the acceleration coefficients. The advantage of this mechanism is that it considers the evolutionary state of the particles of the four abovementioned subswarms. This enhancement introduces a process of exchange and cooperation between the subswarms to guarantee the best exploration and the most efficient exploitation of the search space.

4. Adaptive Multiswarm Particle Swarm Optimization Algorithm (A-MsPSO)

4.1. General Description of A-MsPSO Algorithm

The A-MsPSO algorithm is used in this paper to enhance the exploration and exploitation of the basic MsPSO; it incorporates four subswarms, an adaptive inertia weight method, and two constant acceleration coefficients. The population is split into four subswarms, each containing N particles in a search space of dimension D, where each subswarm runs a single PSO and searches for a local optimum or, at the same time, the global optimum. Thus, the best global optimum is obtained from the best local individuals of the four subswarms as the fittest of the four subswarm bests, i.e., Gbest = best{Gbest1, Gbest2, Gbest3, Gbest4}. The particle settings guarantee the update of velocity and position using the equations of their subswarms (details are presented in Subsection 4.2). The four subswarms maintain specific links to enhance the search capability. The cooperation and communication between the subswarms are presented in Figure 4. The pseudocode of the introduced A-MsPSO is provided in Algorithm 1 and the flowchart is shown in Figure 5. The A-MsPSO algorithm contains four subswarms and uses various cooperative strategies to broaden the exploration and exploitation of MsPSO.

Begin
Set the particle size of each subswarm
Initialize the positions, velocities, inertia weight, and acceleration factors
Evaluate the fitness value of each particle
Find the Pbest in subswarms S1, S2, and S3 and the Gbest in the population
Repeat until the maximum number of iterations is reached
 {
   Calculate the velocities v1, v2, v3, and v4 in subswarms S1, S2, S3, and S4
   Update the positions
   Evaluate the fitness of the particles
   Update Gbest
   If the guide condition is satisfied
   In each subswarm, use the diversity-guided convergence strategy
   End if
 }
End do
Return the result of the algorithm
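As a rough orientation for the reader, the following MATLAB sketch mirrors the overall loop structure of Algorithm 1 for the two basic subswarms only; the update rules of the adaptive and exploring subswarms, the adaptive inertia weight formula, and the guide condition are details of the paper that are abstracted away here, and the objective function, bounds, and parameter values are illustrative assumptions:

% Illustrative skeleton of a multiswarm PSO loop with two basic subswarms (sketch only).
fobj = @(x) sum(x.^2);          % example objective (sphere function)
D = 30; N = 10; Tmax = 200;     % dimension, particles per subswarm, iterations
c1 = 1.494; c2 = 1.494;         % constant acceleration coefficients
w_max = 0.9; w_min = 0.4;       % weight bounds (linear decrease used as a placeholder)

X = cell(1, 2); V = cell(1, 2); P = cell(1, 2); Pf = cell(1, 2);
for s = 1:2                                          % initialize the two basic subswarms
    X{s} = -5 + 10*rand(N, D);                       % positions in [-5, 5]^D
    V{s} = zeros(N, D);                              % velocities
    P{s} = X{s};                                     % personal bests
    Pf{s} = arrayfun(@(i) fobj(X{s}(i, :)), (1:N)'); % personal best fitness values
end
[gf, idx] = min(cellfun(@(f) min(f), Pf));           % global best over both subswarms
[~, i0] = min(Pf{idx}); G = P{idx}(i0, :);

for t = 1:Tmax
    w = w_max - (w_max - w_min)*t/Tmax;              % placeholder for the adaptive inertia weight
    for s = 1:2
        r1 = rand(N, D); r2 = rand(N, D);
        V{s} = w*V{s} + c1*r1.*(P{s} - X{s}) + c2*r2.*(repmat(G, N, 1) - X{s});
        X{s} = X{s} + V{s};
        f = arrayfun(@(i) fobj(X{s}(i, :)), (1:N)');
        better = f < Pf{s};                          % update personal bests
        P{s}(better, :) = X{s}(better, :); Pf{s}(better) = f(better);
        [fs, is] = min(Pf{s});
        if fs < gf, gf = fs; G = P{s}(is, :); end    % update Gbest
    end
end
disp(gf)                                             % best fitness found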

Figure 6 describes the mechanism of exploring a new region, where the positions of the particles of the three subswarms S1, S2, and S3 are randomly placed in the search space and approach the newly captured Gbest position. At time T, the velocity of each particle is defined randomly, and its position at time T + 1 is determined according to the update equations. The position of the particle in S4 has the ability to explore a new area, from which another unknown search space is reached using the modification equation (12). The search solutions are obtained from a circle containing the overall optimum; graphically, the old region and the new one are separated by an arc of a circle. Furthermore, the particle in S3 is modified according to the fitness values and the velocities of the particles in the basic subswarms. In S4, the particle changes its velocity depending on the velocities of the particles in S1, S2, and S3.

4.2. A-MsPSO Search Behavior

The four subswarms collaborate and search for the optimal Gbest solution by using the best solutions of each subswarm. As shown in Figure 4, the global optimum is defined as a function of the global optima of the four subswarms. The particles of the basic subswarms S1 and S2 update their velocities and positions according to relations of the same form as those of Section 2.2, in which r1 and r2 are randomly generated in [0, 1] and the superscript (1/2) denotes the basic subswarms, with the constant inertia weight replaced by an adaptive inertia weight. This adaptive inertia weight is recomputed at every iteration from the best position Gbest shared by the entire population and from the initial and final values of the weight, denoted w_initial and w_final, respectively.

The particles in the adaptive subswarm S3 adjust their flight directions based on the best particles in S1 and S2. The velocity in S3 is revised according to its own update equation, in which the two random coefficients are again generated uniformly in [0, 1].

The fitness values of the particles in S1 and S2 that enter this update are defined by a dedicated equation.

The velocity of the particles in the exploring subswarm S4 is updated by its own rule, and the position of the particles in S4 is then updated from this velocity. Three impact factors are employed in this rule to control the effects of the information coming from S1, S2, and S3 on the position of the particle in S4.
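Since the exact update expressions of the original formulation are not reproduced here, the following MATLAB lines only illustrate the general idea of an impact-factor-weighted combination; the form and the names lambda1, lambda2, lambda3, v1, v2, v3 are purely hypothetical placeholders, not the paper’s actual equation:

% Hypothetical illustration of combining information from S1, S2, and S3 via impact factors.
lambda1 = 0.4; lambda2 = 0.3; lambda3 = 0.3;          % impact factors (illustrative values)
v1 = rand(1, 30); v2 = rand(1, 30); v3 = rand(1, 30); % velocities taken from S1, S2, and S3
v4 = lambda1*v1 + lambda2*v2 + lambda3*v3;            % weighted combination driving the S4 particle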

In A-MsPSO, information is shared by all particles across the various subswarms. The basic subswarms (S1 and S2) provide the velocity and position information used by the particles in the other subswarms. In S3, guided by the current velocities and fitness values, the particles explore the new search repertoire. Figure 4 depicts the cooperation between the subswarms.

In S4, the position and velocity updates differ from those in the other three subswarms, and the particles adjust their velocities according to the velocities and positions of the particles in S1, S2, and S3. The cooperation rules are defined to ensure communication between the subswarms during the search process.

5. Particle Paths: Relation between the Particles Position and Velocity

We examine theoretically, in this section, the trajectories of the particles, the convergence of the A-MsPSO algorithm, and the velocities of the particles of each subswarm. Convergence is defined based on the following limit:

lim_{t→∞} x_i(t) = p,

where x_i(t) is the position of particle i at time t and p corresponds to a local or global optimum. The velocity and position update equations for A-MsPSO with the adaptive inertia weight are applied as presented in Section 4.

The authors of [39] investigated the convergence of MsPSO and demonstrated that the parameter settings affect the particle performance. The analysis performed here shows that the relation between the particles in the four subswarms can be written as a system, which is used to validate the convergence of the A-MsPSO algorithm.

Applying (2)–(11), the A-MsPSO update rules can be assembled into a linear system of the form y(t + 1) = A y(t) + B p, where A is the system matrix, B is the constant matrix, and p is the vector consisting of the individual optima and the single global optimum. The entries of these matrices are built from the adaptive inertia weight, the acceleration coefficients, and random numbers drawn uniformly in [0, 1].
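For orientation, the analogous relation for a single particle of a basic subswarm, written in the matrix form used in standard PSO convergence analyses, reads as follows (this is the generic single-particle recurrence, not the paper’s full four-subswarm matrix):

\[
\begin{pmatrix} x(t+1) \\ v(t+1) \end{pmatrix}
=
\begin{pmatrix} 1-\varphi_1-\varphi_2 & w \\ -(\varphi_1+\varphi_2) & w \end{pmatrix}
\begin{pmatrix} x(t) \\ v(t) \end{pmatrix}
+
\begin{pmatrix} \varphi_1 & \varphi_2 \\ \varphi_1 & \varphi_2 \end{pmatrix}
\begin{pmatrix} \mathrm{Pbest} \\ \mathrm{Gbest} \end{pmatrix},
\qquad \varphi_1 = c_1 r_1,\; \varphi_2 = c_2 r_2 .
\]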

The position equations are obtained in turn for the particles of S1, S2, S3, and finally S4. Stacking these four position equations yields the A-MsPSO system introduced above.

According to the convergence analysis performed in [40], the particles of each subswarm converge to stable points defined by limits of the form given above.

6. Experimental Study

This section includes three parts. The first part presents the test functions used in the experimental study. The second part shows the good performance of the A-MsPSO algorithm through computational experiments, while, in the last part, a nonlinear system is used to validate the performance of the suggested method.

6.1. Test Function and Parameter Settings

To test the efficiency of A-MsPSO on the benchmark functions available in the literature, we chose 4 unimodal functions (F1, F2, F3, and F4) and 11 multimodal functions (F5 to F15) [41]. The formulas and properties of these functions are shown in Table 1. The experimental machine has a third-generation i3 processor at 2.5 GHz and 128 GB of storage capacity, and the programming language used is MATLAB.

6.2. Results and Discussions

In this subsection, we compare the introduced method with the most widely used inertia weight strategies, on the one hand, and with other versions of PSO, on the other hand. In Subsection 6.2.1, several inertia weight strategies, such as random-MsPSO, constant-MsPSO, linear time-varying-MsPSO, and sigmoid-MsPSO, are analyzed on 15 test functions. In Subsection 6.2.2, the A-MsPSO algorithm is compared to a number of PSO versions and to the basic MsPSO algorithm. Then, in Subsection 6.2.3, the computational cost of the A-MsPSO algorithm is compared to that of the existing PSO variants. Finally, in Subsection 6.3, the performance of this algorithm on the Box and Jenkins gas furnace data is validated.

6.2.1. Comparisons of Inertia Weight Diversity

The most intensively used inertia weight strategies and the A-MsPSO algorithm were applied to solve the benchmark problems. Table 2 lists the parameters used in each approach. All of the functions were tested in 30 dimensions. The maximum number of iterations was set to 2000, while the number of runs for each test was 30. The performance of the algorithms was measured by the standard deviation and the mean values, as illustrated in Table 3. The values in bold indicate the best solution at a significance level of 5%. The A-MsPSO uses an adaptive parameter adjustment strategy based on the evolving state of each particle, improving performance in terms of overall optimization and convergence speed. This result shows that A-MsPSO has a better search capacity than the other search algorithms (random-MsPSO, constant-MsPSO, linear time-varying-MsPSO, and sigmoid-MsPSO) on 13 of the functions from F1 to F15, i.e., all except F8 and F15.

In Figure 7, the vertical axis and the horizontal axis represent the best fitness and the number of iterations, respectively. Each curve shows the evolution of the best fitness of one algorithm over the predetermined number of iterations. The final point of each curve represents the global optimum found.

The proposed algorithm converges significantly faster on the functions F1, F2, F3, F4, F5, F6, F9, and F11 than on the functions F7, F8, F10, F12, F13, F14, and F15, and its ability to jump out of local optima is improved. However, the curves for the functions F7, F10, F12, F14, and F15 remain flat after 200 iterations, which shows that the swarm is trapped in a local minimum and the A-MsPSO method loses its global search capability on these functions.

6.2.2. Comparison of the Introduced Algorithm with Other PSO Algorithms

The proposed A-MsPSO was compared to six other variants of PSO: LW-PSO, CLWPSO, SIW-APSO, MSCPSO, NPSO, and the standard MsPSO. The parameters of the algorithms are listed in Table 4 according to their references. The PSO versions were applied to the 6 test functions (F1, F2, F3, F4, F5, and F6) in 30 dimensions. The maximum number of iterations was set to 1000, and 30 independent runs of each algorithm were performed. The standard deviations (STD) and mean values (Mean) are listed in Table 5, with the best results in bold. From this table, we notice that, on 4 of the reference functions (F1, F2, F3, and F4), the results obtained by A-MsPSO are better than those provided by the other PSO variants. They show that the search method with an adaptive parameter adjustment strategy based on the evolutionary state of every particle is the most efficient at improving the PSO algorithm performance on these optimization functions compared to the other variants.

6.2.3. Computational Cost

The CPU time was used to compare the computational efficiency of the PSO variants; the computational share of each algorithm i on a test function f was computed as the percentage

P_i(f) = t_i(f) / Σ_j t_j(f) × 100%,

where t_i(f) is the time taken by algorithm i to compute the benchmark function f, and Σ_j t_j(f) denotes the cumulative time of all the algorithms on function f. Figure 8 shows the percentage of the total computational time taken by four PSO variants (A-MsPSO, S-MsPSO, R-MsPSO, and L-MsPSO) on the fifteen benchmark functions. R-MsPSO is the most time-consuming algorithm on twelve of the reference functions and L-MsPSO is the fastest. The A-MsPSO algorithm ranks second on twelve functions, first on F1, and third on F12 and F14. The comparison of S-MsPSO and A-MsPSO shows that the latter takes longer on 13 reference functions, while the standard MsPSO is the fastest. This observation means that the computational complexity of the introduced algorithm increases with the use of an adaptive inertia weight.
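For clarity, the percentage computation above can be illustrated with a few MATLAB lines (the timing values are made-up numbers used purely as an example):

% Example: share of total CPU time per algorithm on one benchmark function.
t = [12.4, 9.8, 15.1, 10.2];   % hypothetical CPU times (s) of four variants on function f
share = 100 * t / sum(t);      % percentage of the cumulative time taken by each variant
disp(share)                    % the first entry would be the share of the first variant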

6.3. Box–Jenkins Gas Furnace Data

The Box and Jenkins gas furnace dataset is a benchmark widely used as a standard test in fuzzy modeling and identification techniques. It consists of 296 input/output observation pairs [u(k); y(k)], k = 1, ..., 296. Methane and air were combined to form a mixture of CO2-containing gases; the input u(k) of the furnace corresponds to the methane flow rate and the output y(k) is the CO2 concentration (in percent) in the outgoing gases.
Step 1: the experiment was conducted on the 296 data pairs. The population size of each of the four subswarms was set to 6, the number of iterations was 50, the rule number was 3, the acceleration coefficients were fixed at 1.494 (equation (5)), and the inertia weight was chosen as stated previously according to equations (7) and (8). The model obtained for the candidate characteristic variables is visualized in Figure 9. The A-MsPSO algorithm achieved a performance index of 0.106, which is the best compared to those of the other algorithms listed in Table 6.
Step 2: the first 148 data pairs were employed as training data to verify the robustness of A-MsPSO, while the remaining 148 data pairs were used as test data to assess the prediction performance. The comparison of the A-MsPSO output with that of the real system is presented in Figures 10 and 11, which display the approximation and generalization potential of the proposed system (A-MsPSO). The MSE values for the training and testing processes are 0.058 and 0.146, respectively, as shown in Table 7. The experimental results reveal that A-MsPSO has good precision and a powerful generalization ability in modeling the Box–Jenkins gas furnace dataset.
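A minimal sketch of the data split and error computation used in Step 2 could look as follows in MATLAB, assuming the 296 observation pairs are available as column vectors u and y and that predict_model stands for whatever model A-MsPSO has identified (both are placeholders, not artifacts of the original study):

% Sketch of the 148/148 train-test split and MSE evaluation (placeholder data and model).
u = randn(296, 1); y = randn(296, 1);            % placeholders for the Box-Jenkins input/output series
predict_model = @(u) 0.9*u;                      % placeholder for the identified model
u_train = u(1:148);   y_train = y(1:148);        % first 148 pairs: training data
u_test  = u(149:296); y_test  = y(149:296);      % remaining 148 pairs: test data
y_hat_train = predict_model(u_train);            % model output on the training inputs
y_hat_test  = predict_model(u_test);             % model output on the test inputs
mse_train = mean((y_train - y_hat_train).^2);    % training MSE (reported as 0.058 in Table 7)
mse_test  = mean((y_test  - y_hat_test ).^2);    % testing MSE (reported as 0.146 in Table 7)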

7. Future Work Possibilities

In future work, the proposed approach can be applied in several applications: (i) to encrypt images using a 7D hyperchaotic map [30]; (ii) to adjust the hyperparameters of a 4D chaotic map [58]; (iii) to optimize the parameters of a 5D hyperchaotic map using A-MsPSO to perform encryption and decryption processes [29]; and (iv) in photovoltaic water pumping applications [59]. In addition, future research will focus on a thorough examination of the applications of A-MsPSO to increasingly complicated practical optimization problems in order to analyze its attributes in depth and evaluate its performance.

8. Conclusion

A modified version of the multiswarm particle swarm optimization algorithm using an adaptive inertia weight approach was presented in this paper. Based on the comparison of four methods for setting the inertia weight in the MsPSO algorithm, the experimental results on higher-dimensional problems demonstrate the ability of the A-MsPSO algorithm to optimize problems with a larger search space. According to the results of the performed tests, the MsPSO algorithm with the adaptive inertia weight strategy is the one yielding the greatest accuracy. Theoretically, the four subswarms of A-MsPSO also converge towards their own stable equilibrium positions. Furthermore, the experimental findings show that the suggested algorithm is capable of producing a robust model with improved generalization ability. We expect that this study will be useful to researchers and will inspire them to develop good solutions to optimization problems using the A-MsPSO algorithm.

Data Availability

The data used in this article are freely available from the authors upon request, provided this paper is cited.

Conflicts of Interest

The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Supplementary Materials

Our A-MsPSO algorithm code and the full simulation results are available as a supplementary file. The machine used has a third-generation i3 processor at 2.5 GHz and 128 GB of storage capacity, and the programming language used is MATLAB.