Abstract

In the past few decades, metaheuristic algorithms (MAs) have developed tremendously and have been successfully applied in many fields, and a large number of new MAs have been proposed in recent years. The slime mould algorithm (SMA) is a novel swarm-based intelligence optimization algorithm. SMA solves optimization problems by imitating the foraging and movement behavior of slime mould, and it can effectively obtain a promising global optimal solution. However, it still suffers from shortcomings such as unstable convergence speed, imprecise search accuracy, and a tendency to stagnate in local optima when faced with complicated optimization problems. To overcome these shortcomings, this paper proposes a multistrategy enhanced version of SMA called ESMA. The three enhancement strategies are the chaotic initialization strategy (CIS), the orthogonal learning strategy (OLS), and the boundary reset strategy (BRS). The CIS generates a diverse initial population in the early stage of ESMA, which increases the convergence speed of the algorithm and the quality of the final solution. The OLS then mines useful information from the best solutions and offers a promising search direction, which enhances the local search ability and raises the convergence rate. Finally, the BRS corrects individual positions, which maintains population diversity and enhances the overall search capability of ESMA. The performance of ESMA was validated on the 30 IEEE CEC2014 functions and three IIR model identification problems against nine other well-regarded and state-of-the-art algorithms. Simulation results and analysis show that ESMA has superior performance and that the three strategies significantly improve the basic SMA.

1. Introduction

Optimization problems arise in many fields. With the expansion of production scale, optimization problems are becoming increasingly difficult [1]. Most real-world optimization problems are NP-hard and cannot be solved by exact algorithms [2]. Conventional gradient-based algorithms are helpless when faced with noncontinuous, nondifferentiable, multimodal, and dynamic problems [3, 4]. MAs are a class of bio-inspired approximate algorithms and an effective means of solving such convoluted optimization problems. Over the past several decades, MAs have made tremendous research progress, and various algorithms inspired by nature have been proposed. Well-known MAs include the Genetic Algorithm (GA) [5], Particle Swarm Optimization (PSO) [6], Differential Evolution (DE) [7], Bacterial Foraging Optimization (BFO) [8], the Artificial Bee Colony (ABC) Algorithm, Biogeography-Based Optimization (BBO) [9], and the Gravitational Search Algorithm (GSA) [10]. Methods proposed in recent years include the Grey Wolf Optimizer (GWO) [11], Moth Flame Optimization (MFO) [12], the Cuckoo Search Algorithm (CSA) [13], the Sine Cosine Algorithm (SCA) [14], Harris Hawks Optimization (HHO) [15], the Whale Optimization Algorithm (WOA) [16], the Political Optimizer (PO) [17], the Slime Mould Algorithm (SMA) [18], the Aquila Optimizer (AO) [19], and Hunger Games Search (HGS) [20].

SMA was proposed by Li et al. [18] in 2020 and is inspired by the foraging and movement behavior of slime mould in nature. Slime mould is a single-celled amoeboid organism without a brain or nerves; its network of veins makes its foraging behavior efficient. Under the oscillating mode of the veins, it can locate the highest concentration of food, and this food position represents the optimal solution of a global optimization problem. The performance of SMA was confirmed in [18], where it outperformed many well-regarded and state-of-the-art algorithms. SMA has a simple and flexible structure, which makes it easy to improve its performance further.

Owing to its excellent performance, SMA has attracted the attention of many scholars, and numerous studies and applications of SMA have appeared. Gupta et al. in 2021 [21] used SMA to estimate the parameters of a proton exchange membrane fuel cell model, and the experimental results show that, compared with other algorithms, the predictions of SMA are more consistent with actual measurements. Yıldız [22] applied SMA to the optimization design of automotive components for the first time. In [23], Zobaa et al. used SMA to optimize the design of a double-tuned filter to improve power system performance and reduce the impact of harmonics. Moreover, SMA has been used in various other fields: to optimize extreme learning machine parameters [24], to evaluate structural damage and gauge structural health [25], to coordinate directional overcurrent relays in meshed power networks [26], to optimize fuzzy controllers [27], to optimize photovoltaic hosting capacity [28], and to forecast urban water demand [29].

Although the excellent performance of SMA has been confirmed in many practical applications, SMA, like many other MAs, still fails to keep the balance between exploration and exploitation. In particular, when faced with complex problems, it can become trapped at a local minimum, and its convergence speed may decline. To overcome such deficiencies, an enhanced SMA with a chaotic initialization strategy (CIS), an orthogonal learning strategy (OLS), and a boundary reset strategy (BRS), named ESMA, is proposed.

Chaos is a nonlinear phenomenon that is ubiquitous in nature; it has the characteristics of pseudorandomness and ergodicity and is very sensitive to initial conditions. Even a tiny discrepancy in initial conditions leads to significant differences in long-term behavior [30]. In the past few decades, chaos theory has been applied in a large number of fields, including parameter optimization, feature selection, and chaos control [3, 31]. With the development of swarm intelligence in recent years, chaos has become a very popular strategy for improving MAs, mainly in three ways: chaotic parameter distribution, chaotic population initialization, and the chaotic local search (CLS) strategy. For instance, Kaur et al. [32] and Ibrahim et al. [33] used chaotic maps to adapt the exploration and exploitation of WOA and SSA, respectively. Chou and Truong [34] and Ma et al. [35] used chaotic maps to generate highly diverse initial populations for their algorithms. Ibrahim et al. used a chaotic map to process the initial solution and raise the convergence of GWO [36]. Zhao et al. [37] used CLS to enhance the performance of SSA, Chen et al. [38] proposed an enhanced BFO with CLS, and in [39], Sayed et al. proposed a CCSA with a chaotic search method and applied it to feature selection problems.

OLS is another improvement strategy that has aroused the interest of many scholars. It is based on orthogonal experimental design (OED), which can find the best level combination of multiple factors with few experiments [31, 40]. While still guaranteeing effectiveness, it greatly improves efficiency and saves resources compared with the exhaustive method. Based on these characteristics, a large number of researchers have used OED to boost the performance of MAs, for example, HHO combined with orthogonal learning [41], FOA with orthogonal learning schemes [42], DE combined with OED [43], CSA with an orthogonal learning strategy [44], and MFO combined with an orthogonal learning strategy and the Broyden–Fletcher–Goldfarb–Shanno method [45].

In the proposed ESMA, three efficient mechanisms, CIS, OLS, and BRS, are integrated into the basic SMA. Firstly, CIS helps ESMA initialize a diverse group of initial solutions, which may speed up convergence, improve the quality of the final solutions, and help avoid falling into a local optimum. Secondly, OLS is applied to the two currently best-ranked individuals to dig out more useful information and predict a better best individual for calculating the weight of each slime mould. This drives the population in a promising direction, strengthens the local search ability, and raises the convergence speed and accuracy of ESMA. Additionally, in each iteration of ESMA, BRS adjusts the position of each search agent after all positions are updated, ensuring that every search agent remains in the search region. By fostering population diversity, BRS prevents the population from concentrating too heavily at the boundary and falling into a local optimum. It is worth noting that these three strategies do not change the main structure of the basic SMA; they only strengthen the local search and population diversity. The improvement thus inherits the advantages of SMA while overcoming its drawbacks and greatly improving its performance.

To assess the performance of ESMA, the classic IEEE CEC2014 benchmark functions are used as numerical tests, and three IIR model identification cases are used as real-world application tests to verify the generality of ESMA. The experimental results reveal that, in fair comparisons with nine other well-regarded and state-of-the-art algorithms, ESMA obtains dominant or highly competitive performance. This confirms that ESMA, built on the three improvement strategies, is a comprehensive improvement over the original SMA.

In order to improve the performance of the basic SMA and overcome its slow convergence speed and low convergence accuracy, a new upgraded version of SMA (ESMA) is proposed in this article. The proposed ESMA is then used to solve the CEC2014 optimization problems and the IIR model identification problem. The main contributions of our work are briefly summarized as follows:
(i) Based on the original SMA, an improved SMA with orthogonal learning capability is proposed. A chaotic initialization strategy and a boundary reset strategy are also introduced to further improve its performance.
(ii) The improved SMA and nine advanced metaheuristic algorithms, including the original SMA, are applied to solve the CEC2014 problems, and their performance is compared and analyzed based on the experimental results.
(iii) The improved SMA is used to solve the IIR model identification problem, and the simulation results are compared with those of nine other advanced algorithms, proving the practicability and efficiency of the proposed algorithm.

The remainder of this paper is organized as follows. The basic SMA is briefly introduced in Section 2. The proposed ESMA and its three improvement strategies are described in detail in Section 3. In Section 4, a series of experiments assesses the performance of ESMA on the IEEE CEC2014 test suite and three IIR model identification cases, and the simulation results and analysis are presented and discussed. Finally, Section 5 summarizes this study and outlines future research directions.

2. Original SMA

The key idea of SMA is to model the foraging behavior of slime mould, which can be summarized in three phases: approaching food, wrapping food, and grabbling food. Relying on the propagation wave arising from the biological oscillator to change the cytoplasmic flow in its veins, slime mould can approach high-quality food, surround it and digest it with enzymes, or find other higher-quality food through adaptive transformation. A brief description of SMA is provided as follows.

The position update of slime mould is shown as follows:

$$X(t+1)=\begin{cases}\operatorname{rand}\cdot(U-L)+L, & \operatorname{rand}<z,\\ X_b(t)+vb\cdot\big(W\cdot X_A(t)-X_B(t)\big), & r<p,\\ vc\cdot X(t), & r\ge p,\end{cases}\tag{1}$$

where $X(t+1)$ and $X(t)$ indicate the position of the slime mould in the $(t+1)$th and $t$th iterations, respectively, $X_A(t)$ and $X_B(t)$ represent two search agents randomly selected from the population at the $t$th iteration, and $X_b(t)$ denotes the slime mould with the best fitness value in the current iteration, that is, the search agent with the highest food concentration. $U$ and $L$ represent the upper and lower bounds of the search space, respectively, and rand and $r$ are random values in the interval [0, 1]. Here $z$ is a very important constant parameter because its value affects the balance between the exploration and exploitation of SMA. The parameter $p$ is another essential parameter, which is calculated as follows:

$$p=\tanh\lvert F(i)-aF\rvert,\tag{2}$$

where $F(i)$ stands for the fitness of the $i$th search agent and $aF$ denotes the best fitness value obtained in history.

The values of the vectors $vc$ and $vb$ oscillate in the ranges [−1, 1] and [−a, a], respectively. The expression of $vb$ is given in equation (3), and the parameter $a$ of $vb$ is given in equation (4):

$$vb=[-a,a],\tag{3}$$

$$a=\operatorname{arctanh}\left(1-\frac{t}{max\_t}\right),\tag{4}$$

where $max\_t$ stands for the maximum number of iterations. Both $vb$ and $vc$ gradually approach 0 with iteration, which plays two major roles in the algorithm. Because the two vectors $vb$ and $vc$ imitate the behavior of slime mould in the process of foraging, they not only exploit areas with high food concentration, but also explore other areas to find further high-concentration food sources. Through the adaptive control of $vb$ and $vc$, SMA can more effectively balance global exploration and local exploitation and avoid falling into a local optimal solution.

$W$ in equation (1) represents the corresponding weight, and its calculation formula is as follows:

$$W(\operatorname{Index}(i))=\begin{cases}1+r\cdot\log\!\left(\dfrac{bF-F(i)}{bF-wF}+1\right), & \text{condition},\\[2mm] 1-r\cdot\log\!\left(\dfrac{bF-F(i)}{bF-wF}+1\right), & \text{others},\end{cases}\tag{5}$$

where $bF$ and $wF$ denote the best and worst fitness values obtained in the present iteration, respectively, "condition" means that the individual's fitness ranks in the first half of the swarm, and $\operatorname{Index}=\operatorname{sort}(F)$. Equation (5) imitates the positive and negative feedback between the vein width of slime mould and the food concentration.

The pseudocode of SMA is illustrated in Algorithm 1.

(1)Initialize the slime mould position Xi, i = 1, 2, …, n.
(2)While (t < max_t) do for each iteration
(3) Calculate the fitness of each slime mould
(4) Update bF, wF, and Xb
(5) Update W using equation (5)
(6)for each individual (Xi)
(7)  Update p, vb, vc
(8)  Update the position of each individual using equation (1)
(9)end for
(10) Amend the position of each individual based on the upper and lower bounds
(11)end while
(12)Return bF and Xb
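For readers who prefer code, one iteration of the update loop above can be sketched in Python. This is an illustrative minimal version, not the authors' implementation; it assumes minimization over the box [0, 1]^d, approximates the historical best fitness aF by the current best bF, and uses a base-10 logarithm in the weight formula:

```python
import numpy as np

def sma_step(X, fit, Xb, t, max_t, z=0.03, rng=None):
    """One SMA position update (minimization), following equations (1)-(5).

    X: (n, d) population, fit: (n,) fitness values, Xb: current best position.
    The search region is assumed to be [0, 1]^d for illustration.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    order = np.argsort(fit)                      # ascending: best first
    bF, wF = fit[order[0]], fit[order[-1]]
    eps = 1e-12                                  # guards against division by zero
    frac = (bF - fit) / (bF - wF - eps)          # in [0, 1] for minimization
    # Weights W, equation (5): 1 +/- r*log10(frac + 1) by fitness ranking
    W = np.empty((n, d))
    r = rng.random((n, d))
    top = np.isin(np.arange(n), order[: n // 2])  # first half of the ranking
    W[top] = 1 + r[top] * np.log10(frac[top, None] + 1)
    W[~top] = 1 - r[~top] * np.log10(frac[~top, None] + 1)
    a = np.arctanh(1 - (t + 1) / max_t)          # vb in [-a, a], equation (4)
    b = 1 - (t + 1) / max_t                      # vc in [-b, b]
    p = np.tanh(np.abs(fit - bF))                # equation (2), aF ~ current bF
    Xnew = np.empty_like(X)
    for i in range(n):
        if rng.random() < z:                     # random-restart branch
            Xnew[i] = rng.random(d)
        elif rng.random() < p[i]:
            A, B = rng.integers(n, size=2)       # two random agents
            vb = rng.uniform(-a, a, d)
            Xnew[i] = Xb + vb * (W[i] * X[A] - X[B])
        else:
            vc = rng.uniform(-b, b, d)
            Xnew[i] = vc * X[i]
    return np.clip(Xnew, 0.0, 1.0)               # clamp to bounds (step 10)
```

Calling `sma_step` repeatedly, recomputing `fit` and `Xb` between calls, reproduces the loop of Algorithm 1 under the stated assumptions.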

3. Proposed ESMA

This study proposes a new algorithm with orthogonal learning strategy (OLS), chaotic initialization strategy (CIS), and boundary reset strategy (BRS), which is called ESMA. In the following sections, the main framework of the ESMA will be described first, and then three improved strategies will be introduced.

3.1. Framework of ESMA

In this paper, the orthogonal learning strategy (OLS), chaotic initialization strategy (CIS), and boundary reset strategy (BRS) are embedded into the basic SMA without major changes to its structure. Firstly, CIS is used in the initial stage of ESMA to create a high-quality population, guaranteeing initial population diversity, a high convergence rate, and high-quality final solutions. Then, when calculating the weight of each slime mould, OLS is applied to the two currently best-ranked individuals to dig out more useful information and predict a better best individual for the weight calculation. Moreover, after the position of each slime mould is updated in each loop, BRS detects and corrects individuals that are beyond the boundary, ensuring that the position of each slime mould remains within the search region instead of being clamped to the boundary as in the original SMA. The use of BRS helps enhance the population diversity of the slime mould. The pseudocode and main structure of ESMA are shown in Algorithm 2 and Figure 1, and detailed descriptions of CIS, OLS, and BRS are given in the following sections.

(1)Initialize the slime mould position Xi, i = 1, 2, …, n.
(2)While (t < max_t) do for each iteration
(3) Calculate the fitness of each slime mould
(4) Update bF, wF, and Xb
(5) Update the best agent by OLS
(6) Update the W using equation (5)
(7)for each individual (Xi)
(8)  Update p, vb, vc
(9)  Update the position of each individual using equation (1)
(10)end for
(11) Verify the position of each individual using equation (12)
(12)end while
(13)Return bF and Xb
3.2. Chaotic Initialization Strategy (CIS)

Chaos has the characteristics of ergodicity, pseudorandomness, and sensitivity to initial conditions. Therefore, in the past few decades, chaos theory has been used in many fields such as parameter optimization, feature selection, and chaos control [3]. In recent years, chaotic mapping has also become a widely popular method for improving metaheuristic algorithms, such as chaotic parameter control [32, 33], chaotic initialization [34–36], and chaotic local search [37–39]. In this study, the logistic map, one of the simplest and most widely used chaotic maps, is utilized to generate a chaotic sequence:

$$x_{i+1}=\mu\, x_i\,(1-x_i),\tag{6}$$

where $\mu$ is a control parameter; when $\mu=4$, the logistic sequence is chaotic. Here $x_i$ denotes the chaotic sequence value of the $i$th slime mould, $x_0\in(0,1)$, and $x_0$ is used to create the initial population of slime mould.

A good initial population can raise population diversity, speed up convergence, reduce the possibility of falling into a local minimum, and enhance solution quality. For most metaheuristic algorithms, the initial population is randomly and uniformly distributed. This method suffers from slow convergence and low population diversity, which may cause the algorithm to slip into a local minimum [2, 46]. Existing research shows that chaotic population initialization is superior to the original method [47]. Therefore, in this article, the logistic chaotic values are used to create the initial population, adding a chaotic disturbance that improves the initial population diversity. The formula is expressed as follows:

$$X_{i,j}=L_j+c_{i,j}\cdot(U_j-L_j),\tag{7}$$

where $c_{i,j}$ is the $j$th value of the logistic sequence of the $i$th slime mould and $X_{i,j}$ is the $j$th component of the $i$th slime mould's position with chaotic disturbance.
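As a concrete illustration, the CIS can be sketched as a short Python routine. This is a minimal sketch, not the authors' code, assuming $\mu=4$ and a seed $x_0\in(0,1)$ away from the map's fixed points:

```python
import numpy as np

def chaotic_init(n, dim, lower, upper, mu=4.0, x0=0.7):
    """Chaotic initialization (equations (6)-(7)): iterate the logistic map
    and map each chaotic value into the search region [lower, upper]."""
    pop = np.empty((n, dim))
    x = x0                                  # x0 in (0, 1), avoid 0.25/0.5/0.75
    for i in range(n):
        for j in range(dim):
            x = mu * x * (1.0 - x)          # logistic map, equation (6)
            pop[i, j] = lower + x * (upper - lower)
    return pop
```

With $\mu=4$ the sequence is ergodic over (0, 1), so the initial agents spread over the whole region instead of clustering.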

3.3. Orthogonal Learning Strategy (OLS)

As an efficient experimental tool, orthogonal experiments have been widely used in many fields. In terms of improving metaheuristic algorithms, orthogonal learning has also been favored by many researchers; the improvements to PSO [40], BSO [48], SCA [49], FOA [42], MFO [45], HHO [41], and other algorithms have achieved significant results. OLS can vastly improve the local exploitation ability of an optimization method. In order not to alter the flexible structure of the original SMA while further improving its local exploitation ability, OLS is incorporated into SMA. The OLS better balances the global and local search of the original SMA, so as to obtain a more promising global optimal solution.

In each iteration of the SMA algorithm, before calculating the corresponding weight (equation (5)), the two individuals with the best fitness values (Index(1) and Index(2)) are selected, and the search information of these two excellent individuals is used to predict a better best individual. If we exhaustively evaluated all dimension-wise combinations of Index(1) and Index(2), $2^D$ trials (where $D$ represents the problem dimension) would be needed. Due to this exponential complexity, exhaustive search is unrealistic. In this paper, we use OLS to predict a relatively good new individual from Index(1) and Index(2) through as few experiments as possible: the number of factors equals the problem dimension, and the number of levels is 2.

OLS is based on orthogonal experiment design (OED), which mainly includes two parts, orthogonal array (OA) and factor analysis (FA). The following will use a simple case to explain OED from two aspects of OA and FA.

3.3.1. Orthogonal Array (OA)

OA is the basic and crucial part of OED. It is a prediction table that determines how many experiments must be performed in order to find a better candidate combination. As a graphic method of constructing mutually orthogonal Latin squares, an OA ensures a balanced comparison of the levels of any factor. An OA with $K$ factors and $Q$ levels can be expressed as $L_M(Q^K)=[l_{i,j}]_{M\times K}$, where $l_{i,j}$ is the level of the $j$th factor in the $i$th combination, $L$ indicates the OA, and $M$ denotes the number of test combinations. These numbers satisfy three constraints: (1) $Q$ is a prime number; (2) $M=Q^J$; and (3) $J$ is a positive integer that meets the requirement

$$K\le\frac{Q^J-1}{Q-1}.\tag{8}$$

To further illustrate OA, take the example of conversion ratio in chemistry. In an experiment, three factors affect the conversion ratio, and each factor has three different levels; the factor and level information is shown in Table 1. If we exhaustively tested all combinations of three factors and three levels, $3^3=27$ trials would be needed. However, using the OA of equation (9), only 9 experimental combinations need to be considered, which significantly reduces the number of tests and improves experimental efficiency:

$$L_9(3^3)=\begin{bmatrix}1&1&1\\1&2&2\\1&3&3\\2&1&2\\2&2&3\\2&3&1\\3&1&3\\3&2&1\\3&3&2\end{bmatrix}.\tag{9}$$
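The 9-run design referred to here is a subset of the standard L9(3^4) array (any three of its four columns serve a three-factor problem). A quick script, assuming 0-indexed levels, can verify the defining property of an OA: every pair of columns contains each ordered level pair exactly once.

```python
import numpy as np
from itertools import combinations

# Standard L9(3^4) orthogonal array: 9 runs, up to four 3-level factors.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

def is_orthogonal(oa, q=3):
    """Check that every pair of columns covers all q*q level pairs exactly once."""
    return all(len(set(zip(oa[:, c1], oa[:, c2]))) == q * q
               for c1, c2 in combinations(range(oa.shape[1]), 2))
```

Here `is_orthogonal(L9)` returns True: 9 runs suffice where exhaustive testing would need 27.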

3.3.2. Factor Analysis (FA)

As the other vital part of OED, FA evaluates the influence of each factor according to the experimental results based on the OA and then infers the optimal level combination. FA mainly uses the following formula:

$$E_{kq}=\frac{\sum_{m=1}^{M} y_m\cdot S_{mkq}}{\sum_{m=1}^{M} S_{mkq}},\tag{10}$$

where $y_m$ indicates the experimental result of the $m$th $(1\le m\le M)$ combination and $E_{kq}$ means the effect of the $q$th $(1\le q\le Q)$ level of the $k$th $(1\le k\le K)$ factor. It is worth noting that $S_{mkq}=1$ if the $m$th experiment embraces the $q$th level of the $k$th factor; otherwise, $S_{mkq}=0$.

Assuming that the experimental results of the 9 combinations obtained according to equation (9) are as shown in Table 2, the FA results are given in the last three rows of Table 2. It can be clearly observed that, for factor A, the three effects are $E_{A1}=41$, $E_{A2}=48$, and $E_{A3}=61$. Among them, $E_{A3}$ is the largest; that is, the conversion ratio obtained at a reaction temperature of 90°C is the highest, so this is the best reaction temperature among the three. Similarly, the best levels of the other two factors can be read from the FA results in Table 2. Hence, the predicted best level combination is (A3, B2, C2).
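Putting OA and FA together gives the orthogonal learning step used in ESMA. The sketch below is an illustrative Python version, not the authors' code: it builds a two-level OA for D factors by a standard parity construction, forms candidate solutions from the two best individuals, applies factor analysis to predict a combination, and keeps whichever candidate is fittest (minimization assumed).

```python
import numpy as np

def two_level_oa(k):
    """Two-level OA with M = 2^J rows, J = ceil(log2(k+1)): entry (i, j) is
    the parity of popcount(i AND (j+1)), a standard Walsh-type construction."""
    j_exp = int(np.ceil(np.log2(k + 1)))
    rows = np.arange(2 ** j_exp)
    return np.array([[bin(i & c).count("1") % 2 for c in range(1, k + 1)]
                     for i in rows])

def orthogonal_learning(x1, x2, fitness):
    """OLS: combine the two best agents dimension-wise (level 0 takes x1's
    component, level 1 takes x2's), then predict a combination via FA."""
    d = len(x1)
    oa = two_level_oa(d)
    trials = np.where(oa == 0, x1, x2)           # M candidate combinations
    f = np.array([fitness(t) for t in trials])
    predicted = np.empty(d)
    for j in range(d):                           # FA per factor, as in eq. (10)
        e0 = f[oa[:, j] == 0].mean()             # mean fitness of level 0
        e1 = f[oa[:, j] == 1].mean()
        predicted[j] = x1[j] if e0 <= e1 else x2[j]
    cands = np.vstack([trials, predicted])
    return cands[np.argmin(np.append(f, fitness(predicted)))]
```

For D = 30 factors this needs only 32 trials plus one for the predicted combination, against the $2^{30}$ of exhaustive search.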

3.4. Boundary Reset Strategy (BRS)

For swarm-based algorithms, search agents often move outside the search region while solving a problem. The usual practice is to reset the out-of-range position values to the boundary values, as shown in the following equation:

$$x_{i,j}=\begin{cases}U_j, & x_{i,j}>U_j,\\ L_j, & x_{i,j}<L_j,\end{cases}\tag{11}$$

where $U_j$ and $L_j$ are the upper and lower bounds of the $j$th dimension.

The original SMA also uses equation (11) to solve the problem that slime mould individuals may move outside the search region during the algorithm iteration.

However, this method has an obvious shortcoming. When a large number of individuals exceed the search region at the same time, or when there is a local minimum near the boundary, the diversity of the population is reduced, and the algorithm may become trapped in a local optimal solution on the boundary.

This article uses a boundary reset technique called BRS to deal with the above problems. BRS has also been applied in [44, 50–52] and has proven effective in ensuring population diversity and preventing algorithms from falling into local optima. The expression of BRS is as follows:

$$x_{i,j}=\operatorname{rand}\cdot(U_j-L_j)+L_j,\quad \text{if } x_{i,j}>U_j \text{ or } x_{i,j}<L_j,\tag{12}$$

where $x_{i,j}$ denotes the $j$th dimension value of the position of the $i$th slime mould.
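A minimal component-wise sketch of the BRS in Python, assuming box constraints: out-of-range components are re-sampled uniformly inside the region rather than clamped to the boundary as in equation (11).

```python
import numpy as np

def boundary_reset(pop, lower, upper, rng=None):
    """Boundary reset strategy (equation (12)): any component that leaves
    [lower, upper] is re-drawn uniformly inside the search region."""
    rng = np.random.default_rng() if rng is None else rng
    pop = pop.copy()
    out = (pop < lower) | (pop > upper)          # mask of escaped components
    pop[out] = lower + rng.random(np.count_nonzero(out)) * (upper - lower)
    return pop
```

Compared with clamping, escaped agents are scattered back across the region, which keeps the population from piling up on the boundary.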

4. Experiments

In this section, the performance of ESMA was evaluated on 30 IEEE CEC2014 benchmark functions and 3 IIR model identification optimization cases. Several well-regarded and state-of-the-art algorithms are used to compare with the proposed ESMA. The parameter setting of all used algorithms is shown in Table 3.

Moreover, nonparametric statistical analysis is extensively used to fairly analyze the performance of MAs [53]. In this article, the well-known Wilcoxon signed-rank test is used at a 0.05 significance level to compare the performance of ESMA with the other algorithms. In the tables, the p value of the Wilcoxon test indicates the significance level: when the p value is less than 0.05, there is a significant performance difference between ESMA and the compared algorithm; otherwise, there is no significant difference. The symbol B/T/W indicates whether the performance of ESMA is significantly better than, tied with, or worse than the other methods at the 5% significance level.

In our experimental work, for a fair comparison, the execution environment of all experiments was kept constant as follows:
(i) CPU: Intel Core i7-8750H (2.20 GHz)
(ii) RAM: 16.0 GB
(iii) System: Windows 10
(iv) Software: Matlab R2018b

4.1. IEEE CEC2014 Benchmark Functions Optimization Problem

In this section, the performance of ESMA was evaluated on IEEE CEC2014 test suite, and the performance of nine other algorithms (SMA, AO, BBO, GWO, HGS, HHO, MFO, SCA, and WOA) is compared with that of ESMA. To attain the unbiased result, each algorithm including ESMA is independently tested 50 times for each problem. The population size N and search region dimension D are all set to 30, and the maximum number of function evaluations (MaxFes) is set to .

4.1.1. Benchmark Function

In our experiments, the IEEE CEC2014 problem set is used to evaluate the performance of ESMA. As a classic test set, the IEEE CEC2014 benchmark set contains 30 functions divided into four types: F1–F3 are unimodal functions, F4–F16 are multimodal functions, F17–F22 are hybrid functions, and F23–F30 are composition functions. These function types simulate most engineering problems to a certain extent, so IEEE CEC2014 is widely used as a test set for evaluating algorithm performance. More details of IEEE CEC2014 can be found in [54].

4.1.2. Experimental Results and Analysis

The experimental results obtained by ESMA and nine compared methods on IEEE CEC2014 problems are shown in Table 4, and the optimal results obtained for a test function are highlighted. From Table 4, obviously, ESMA outperformed all other compared algorithms in almost all functions. The proposed ESMA not only obtained the best mean value (Mean), but also has the minimum standard deviation value (Std) for most test functions.

As shown in Table 4, ESMA beats the basic SMA and HHO on all test functions except F23, F24, F25, F27, and F28, on which their performance is the same. With respect to AO, ESMA achieves superior results on 26 functions, inferior results on F29, and the same results on F23, F25, and F28. Compared with BBO, ESMA obtains better results on all of the benchmark functions except F4, and worse results on F18. Compared with HGS, ESMA achieves better results on 26 functions, worse results on F30, and the same results on F23, F25, F27, and F28. Finally, ESMA beats GWO, MFO, SCA, and WOA on all of the test functions. Based on the results of the Wilcoxon signed-rank test in the last row of Table 4, ESMA performs better than or similarly to the comparison algorithms on all functions, and there is no case in which ESMA is worse than any peer, which reveals its superior capabilities. As Table 4 shows, ESMA is a significant improvement on SMA.

Table 5 shows the p values of the Wilcoxon test between ESMA and the other competitors on the IEEE CEC2014 benchmark functions. Clearly, in most cases, the p values are less than 0.05. This indicates that the performance of ESMA is statistically superior to that of the other competitors on most of the test suite and that the improvements of ESMA are significant.

Additionally, box-whisker diagrams can clearly show the distribution of the experimental results. As the box-whisker diagrams in Figure 2 show, the result distributions of ESMA are better than those of the other nine compared algorithms on almost all functions. It is worth noting that, although SMA also obtains good results, ESMA has fewer outliers than SMA on F7, F18, and F19, which shows the effectiveness of the improvement.

Furthermore, the convergence curves in Figure 3 allow a more intuitive comparison of the convergence performance of ESMA and the compared methods. From Figure 3, the convergence curves of ESMA are superior to those of the other methods on almost all test functions: in most cases, ESMA achieves faster convergence rates and better convergence precision than its competitors, which further confirms the effectiveness of the improvement.

4.2. IIR Model Identification Optimization Problem

In this section, the IIR model identification problem is solved to verify the generality of the proposed ESMA. Firstly, the IIR system modeling problem is defined. Then, ESMA is used to identify three unknown IIR systems, and its performance is compared with that of nine other well-regarded and state-of-the-art algorithms (SMA, AO, BBO, GWO, HGS, HHO, MFO, SCA, and WOA). The Wilcoxon test and convergence curves are used to compare the performance of all methods. To ensure fairness, each optimization method is independently run 50 times in each case, and the population size (N) and maximum number of evaluations (MaxFes) are set to 25 and 3000 × D, respectively, where D indicates the dimension of the case.

4.2.1. Definition of the Problem

As a system identification problem, digital IIR filter model identification seeks the exact parameters of the mathematical model of an unknown IIR system [55]. The problem resembles a black-box problem: an optimization method adapts the model parameters so that the output error between the unknown IIR model and the designed IIR model is minimized, yielding an optimal model of the unknown system. This process is shown in Figure 4.

In an IIR system, the general relationship between input and output is expressed as

$$y(n)+\sum_{i=1}^{N} a_i\, y(n-i)=\sum_{j=0}^{M} b_j\, x(n-j),\tag{13}$$

where $x(n)$ and $y(n)$ stand for the $n$th input and output signals of the IIR filter, $M$ and $N$ are the numerator and denominator orders of the IIR system, respectively, and $b_j$ and $a_i$ are the coefficients of the IIR filter, which define the zeroes and poles of the IIR system, respectively.

Equation (13) can also be expressed in the form of a Z-transform transfer function:

$$H(z)=\frac{\sum_{j=0}^{M} b_j z^{-j}}{1+\sum_{i=1}^{N} a_i z^{-i}}.\tag{14}$$

The exact parameters of an unknown IIR filter system are not available, so identifying an IIR model can be treated as an optimization problem. According to Figure 4, the error between the output of the actual IIR filter and the output of the estimated model is $e(n)=d(n)-y(n)$. Thus, in this article, we use the mean squared error as the cost function $J(\theta)$:

$$J(\theta)=\frac{1}{K}\sum_{n=1}^{K} e^2(n)=\frac{1}{K}\sum_{n=1}^{K}\big(d(n)-y(n)\big)^2,\tag{15}$$

where $K$ stands for the number of samples used to calculate the cost function and $\theta$ denotes the vector of unknown parameters of the IIR mathematical model. The goal of IIR model identification is to find an optimal $\theta^{*}$ that minimizes $J(\theta)$.
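To make the cost function concrete, here is an illustrative Python version, not the authors' code, of equations (13) and (15); the parameter vector θ is assumed to pack the numerator coefficients b_0..b_M followed by the denominator coefficients a_1..a_N:

```python
import numpy as np

def iir_output(b, a, x):
    """Difference equation (13): y(n) = sum_j b_j x(n-j) - sum_i a_i y(n-i),
    with a[0] = 1 and zero initial conditions."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[j] * x[n - j] for j in range(len(b)) if n - j >= 0)
        acc -= sum(a[i] * y[n - i] for i in range(1, len(a)) if n - i >= 0)
        y[n] = acc
    return y

def mse_cost(theta, x, d, M, N):
    """Cost function of equation (15): mean squared error between the
    plant output d(n) and the candidate model output y(n)."""
    b = theta[: M + 1]                           # numerator b_0 .. b_M
    a = np.concatenate(([1.0], theta[M + 1:]))   # denominator 1, a_1 .. a_N
    e = d - iir_output(b, a, x)
    return float(np.mean(e ** 2))
```

When the model and plant orders match, the cost reaches zero at the true parameters; in the reduced-order cases (IIR1 and IIR3 below), the achievable minimum is generally nonzero.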

In our experimental work, three widely used IIR identification cases are selected, labeled IIR1, IIR2, and IIR3. The transfer functions of the three IIR models are given in Table 6. For IIR1, the unknown plant is a sixth-order system, and the IIR filter to be designed is fourth order (a reduced-order case). The second case is a third-order plant modeled with a third-order filter. In the third case, the unknown plant is fifth order, and the IIR filter is fourth order.

4.2.2. Simulation Results of IIR System Identification

Table 7 reports the results of solving the three IIR system identification cases with ESMA and the nine other algorithms. The mean (Mean), best (Best), worst (Worst), and median (Median) values and the standard deviation (Std) are all presented. At the end of Table 7, the symbol B/T/W indicates whether the performance of ESMA is significantly better than, tied with, or worse than that of each competitor at the 5% significance level. Moreover, the p values of the Wilcoxon test on the three IIR system identification cases are tabulated in Table 8. Figures 5 and 6 show the solution structures and the mean convergence curves.

As shown in Table 7, ESMA outperforms the compared algorithms in two of the three IIR model identification cases. In the IIR1 and IIR3 cases, ESMA is significantly better than all other algorithms. Although ESMA does not surpass BBO, MFO, and GWO in the IIR2 case, its results are still highly competitive, rank near the top, and significantly outperform those of the other seven competitors. The best results are highlighted in Table 7.

From Table 8, the p values are almost all less than 0.05 in Case 1 and Case 3 and mostly less than 0.05 in Case 2, which shows that ESMA is significantly superior to the other compared methods in most cases. As can be seen from Figure 5, the proposed ESMA has a faster convergence speed than the other methods. For IIR1 and IIR3, ESMA retains strong search ability and obtains the best solution while the other algorithms become trapped in local optima. For IIR2, although ESMA does not rank first, it still achieves an acceptable convergence speed. Moreover, Figure 6 shows that ESMA has the best solution structure in IIR1 and IIR3 and very competitive results in IIR2. Overall, both the solution structure and the convergence speed of ESMA are excellent in most of the three cases.

ESMA not only has superior average convergence speed and precision on IIR1 and IIR3 but also far outperforms the other algorithms in the structure of the solution. In the IIR2 case, although ESMA does not obtain the best solution, it is still very competitive and ranks high compared with the other nine optimization methods.

According to the results and analysis above, compared with the original SMA and the other compared algorithms, ESMA has a better ability to identify unknown IIR systems. This reveals that ESMA possesses strong global optimization capability and efficiency.

5. Conclusion

In this work, we propose an improved SMA variant called ESMA. To improve the performance of SMA, the chaotic initialization strategy, orthogonal learning strategy, and boundary reset strategy are embedded into ESMA. The successful integration of these three key strategies enhances the original SMA's population diversity as well as its global and local search capabilities; it also helps the algorithm avoid stagnation in local optima and improves convergence speed and precision. Nine well-regarded and state-of-the-art optimization methods are compared with the proposed ESMA on the IEEE CEC2014 benchmark function problems. The obtained results show that the performance of ESMA is significantly superior to that of the compared algorithms. In addition, three IIR model identification problems are selected to illustrate the performance of ESMA on real-world engineering optimization problems. The simulation results show that ESMA also has a significant advantage on the IIR model identification problems. Together, the benchmark and IIR model identification results reveal that ESMA not only improves significantly on SMA but also outperforms the other compared algorithms, and that the fusion of the three strategies markedly strengthens SMA. Therefore, the proposed ESMA, with its efficient and outstanding performance, meets the initial expectation of improving SMA.

Although ESMA achieves excellent performance, it still has some shortcomings. For example, on some functions (F4 and F5), its convergence accuracy is not ideal, so there is still room for improving the convergence accuracy. In future work, on the one hand, new strategies should be considered to further enhance the performance of ESMA; on the other hand, the proposed algorithm could be applied to more optimization problems in more fields.

Data Availability

Some or all data, models, or codes generated or used during the study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this study.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant nos. 41772123, 61772365, 61802280, and 61806143.