Abstract

Solving the absolute value equation (AVE) is a nondifferentiable, NP-hard, continuous optimization problem with a wide range of applications. Because its solutions take different forms, it is challenging to design a single efficient algorithm that can solve different AVEs without overcomplicated technical machinery or problem-dependent objectives. Hence, this paper proposes an improved glowworm swarm optimization (GSO) algorithm with an adaptive step size strategy based on the sigmoid function (SIGGSO) for solving AVEs. Seven test AVEs, including multisolution and high-dimensional AVEs, are selected for testing and compared with seven metaheuristic algorithms. The experimental results show that the proposed SIGGSO algorithm has higher solution accuracy and stability than the basic GSO when seeking multiple solutions of AVEs. Moreover, it obtains competitive advantages on multisolution and high-dimensional AVEs compared with other metaheuristic algorithms and provides an effective method for engineering and scientific calculations.

1. Introduction

The absolute value equation (AVE) is a nondifferentiable, NP-hard optimization problem in a continuous solution space. Many practical problems, such as the site selection problem [1] and the knapsack feasibility problem [2], which are nonlinear, nondifferentiable, multivariable, multiparameter optimization problems, are closely related to AVEs. AVEs are equivalent not only to standard and generalized linear complementarity problems but also to bilinear programming problems, which are typical mathematical programming problems with a wide range of applications in several disciplines. Compared with these other problems, the AVE has a simpler structure and is easier to work with. Therefore, an in-depth study of AVEs is also a good step towards solving the related problems.

Many algorithms have been proposed to solve the AVE. Results of traditional algorithms, such as the generalized Newton method [3], the bilinear programming method [4], the multivariate spectral gradient algorithm [5], and artificial neural networks (ANN) [6], have been reported. However, traditional algorithms struggle with objective functions that lack good analytical properties, e.g., continuity, differentiability, and smoothness.

With the vigorous development of metaheuristic algorithms, many scholars have applied them to practical problems; we point to the following literature for a few examples. [7] improved the traditional particle swarm optimization (PSO) algorithm by introducing an information-sharing mechanism and a competition strategy, yielding information-sharing-based PSO (IPSO). IPSO retains the rapid convergence of traditional PSO while enhancing its global search capability, and the experimental results show that it outperforms both traditional PSO and GA on benchmark functions, especially difficult ones. [8] presented a chaotic monarch butterfly optimization (CMBO) algorithm for large-scale 0-1 knapsack problems, introducing twelve well-known one-dimensional chaotic maps to tune the parameters of CMBO and applying Gaussian mutation to perturb a small portion of the solutions with worse fitness; CMBO outperforms the standard MBO and eight other state-of-the-art canonical algorithms. [9] presented two binary variants of the Hunger Games Search Optimization (HGSO) algorithm based on V- and S-shaped transfer functions (BHGSO-V and BHGSO-S) within a wrapper feature selection (FS) model for choosing the best features from a large dataset; the experimental results demonstrate that BHGSO-V can reduce dimensionality and choose the most helpful features for classification problems. For the feature selection problem, [10] proposed the island algorithm (IA) with a Gaussian mutation strategy (IAGM) to find the optimal feature subset; this new variant efficiently addresses the tendency of IA to fall into local optima as the number of iterations increases. The colony predation algorithm (CPA) is a recently proposed algorithm that has already been applied in several areas. For example, [11] combined CPA with a kernel extreme learning machine (KELM) in a framework abbreviated ECPA-KELM, an efficient intelligent method for diagnosing COVID-19 from biochemical indexes; the statistical analysis shows that ECPA-KELM can discriminate and classify the severity of COVID-19 as a possible computer-aided method and effectively provide early warning for the treatment and diagnosis of COVID-19. [12] proposed the novel Harris hawks optimization (HHO) algorithm, inspired by the cooperative behavior and chasing style of Harris's hawks in nature known as the surprise pounce; extensive experiments, statistical results, and comparisons show that HHO provides very promising and occasionally competitive results compared with well-established metaheuristic techniques. Since its introduction, HHO has been widely used in many areas; for instance, [13] proposed a satellite image segmentation technique based on dynamic HHO with a mutation mechanism (DHHO/M) and thresholding. Compared with the original HHO, the dynamic control parameter strategy and mutation operator used in DHHO/M avoid falling into local optima and efficiently enhance the search capability. Experiments on various satellite images illustrate that DHHO/M is superior in three aspects: fitness function evaluation, image segmentation effect, and statistical tests.

For solving AVEs, several metaheuristic algorithms, including the genetic algorithm (GA), particle swarm optimization (PSO), differential evolution (DE), and harmony search (HS), have been applied. The optimum correction of the absolute value equation has been studied using GA [14], with computational results that demonstrate the algorithm's effectiveness; however, the authors did not give sufficient examples, verifying only the case of high-dimensional AVEs. PSO with exponentially decreasing inertia weight (EDIWPSO) [15] has been discussed in depth: by dynamically changing the inertia weight, PSO can more easily escape a local optimum and progress towards a global optimum. The authors gave a series of test AVEs with a unique solution or multiple solutions to verify the effectiveness of EDIWPSO, but the numerical experiments contain only a few test examples and no high-dimensional AVEs. Moreover, even for AVEs with multiple solutions, EDIWPSO can capture the multiple solutions only by running the algorithm several times. An improved adaptive differential evolution (IADE) algorithm [16] was proposed for solving AVEs; it combines global and local search ability using a quadratic adaptive mutation operation and a crossover operation. Numerical results show that the improved algorithm quickly finds solutions of the test AVEs, but it faces the same issue of insufficient test examples and comparisons. An improved harmony search algorithm with chaos (HSCH) [17] has been applied to AVEs and shows better optimization capability than the basic harmony search algorithm (HS) and the improved harmony search algorithm with a differential mutation operator (HSDE). The authors verified the performance of HSCH on three test AVEs, but their tests lack higher-dimensional AVEs, AVEs with multiple solutions, and comparisons with other metaheuristic algorithms.

Although many metaheuristic algorithms have been employed to solve optimization problems, including AVEs, the existing metaheuristic algorithms still provide low-precision solutions to AVEs and lack the ability to attain multiple solutions of an AVE straightforwardly. The algorithms constructed to solve AVEs described above therefore lack the strong generalization ability required to work well. Hence, designing an efficient algorithm with powerful universality that addresses different forms of AVEs effectively is a critical issue.

Compared to the above algorithms for solving AVEs, the glowworm swarm optimization (GSO) algorithm has a natural advantage when solving multimodal optimization problems. Although GSO shares certain characteristics with the other metaheuristic algorithms compared in this study, there are several differences. GSO can detect the multiple peaks of multimodal functions simultaneously and in parallel, a task that cannot be accomplished directly by the original versions of the compared metaheuristic algorithms, which are generally used to find a single global optimum of an optimization problem. However, when solving AVEs, multiple solutions generally exist because of the absolute value term $|x|$, and one important consideration for algorithms solving AVEs is to locate as many solutions as possible. This capability is what separates GSO from the other metaheuristic algorithms, and it is the motivation for improving the basic GSO and designing SIGGSO to solve AVE problems. Furthermore, GSO is not subject to conditions such as individual failures, the addition of noise, or the restriction to differentiable objective functions, which may cause other methods to lose their search ability [18]. These advantages render GSO more adaptable to practical optimization problems such as AVEs, which are multimodal functions under certain conditions. However, because the basic GSO has trouble escaping local optima, we modify it to improve its ability to solve multisolution and high-dimensional AVEs. We illustrate this in Section 5.

Notably, the present study attempts to address a clear scientific gap with the following contributions:
(i) Because the sigmoid function strikes a good balance between linear and nonlinear behavior, an adaptive step size strategy derived from the sigmoid function is designed and applied to GSO in this paper.
(ii) Compared with the fixed step size strategy of the basic GSO, the introduced adaptive step size strategy gives GSO a strong ability to jump out of local optima.
(iii) The proposed improved GSO outperforms the basic GSO in solving several test AVEs and obtains competitive advantages on multisolution and high-dimensional AVEs compared with other metaheuristic algorithms such as PSO, GA, HHO, DE, and HS. This improved GSO can provide an effective method for engineering and scientific calculations.

The rest of the paper is organized as follows: in Section 2, background information and the existence and uniqueness theory of AVE’s solutions are provided. Section 3 describes the basic GSO. Section 4 describes the proposed SIGGSO in detail. The experimental design is presented in Section 5. In Section 6, conclusions and future research work are provided.

2. Absolute Value Equation

The general form of the AVE is

$Ax - |x| = b, \quad (1)$

where $A \in \mathbb{R}^{n \times n}$, $b \in \mathbb{R}^n$, and $|x|$ denotes the vector whose entries are the absolute values of the components of $x$. It is an important subclass of the absolute value matrix equation [19]. Subsequently, Mangasarian and Meyer [4] reported certain theoretical results on the existence and uniqueness of AVE solutions, which are listed below:

Lemma 1. If $A \in \mathbb{R}^{n \times n}$ and all the singular values of $A$ exceed one, then there exists a unique solution for the AVE (1) for any $b \in \mathbb{R}^n$.

Lemma 2. For $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^n$ satisfying an additional condition given in [4], there exists a unique solution for the AVE.

Lemma 3. If all the singular values of $A$ are less than one and $b < 0$, then there exist $2^n$ nonidentical solutions for the AVE; these solutions have different sign patterns and no zero component.

For example, when $n = 2$, there are four solutions for the AVE, with sign patterns $(+,+)$, $(+,-)$, $(-,+)$, and $(-,-)$, respectively.
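As a concrete check of Lemma 3, the following MATLAB fragment (the instance is hypothetical and chosen so that, on our reading of the lemma's assumptions, $\|A\| < 1$ and $b < 0$) recovers one solution per sign pattern: on the orthant with diagonal sign matrix $D$ we have $|x| = Dx$, so each candidate solves the linear system $(A - D)x = b$.

% Hypothetical 2-D instance for Lemma 3: ||A|| < 1 and b < 0.
A = [0.1 0; 0 0.1];  b = [-1; -1];
signs = [1 1; 1 -1; -1 1; -1 -1];        % the four sign patterns
for k = 1:4
    D = diag(signs(k,:));
    x = (A - D) \ b;                      % solve on the orthant where |x| = D*x
    fprintf('x = (%7.4f, %7.4f), residual = %.1e\n', ...
            x(1), x(2), norm(A*x - abs(x) - b));
end

All four residuals are zero up to machine precision, and the signs of the four computed solutions match the four patterns above.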

Since the GSO we consider is generally suitable for solving unconstrained optimization problems, we transform the AVE into an unconstrained optimization problem as follows:

Theorem 4. The AVE can be equivalently transformed into the following unconstrained optimization problem:

$\min_{x \in \mathbb{R}^n} f(x) = \left\| Ax - |x| - b \right\|, \quad (2)$

where $\|\cdot\|$ represents the Euclidean norm; if the optimal value of $f$ is zero, then the corresponding minimizer $x^*$ is a solution of the AVE.

Proof. Suppose that $x^*$ is the optimal solution of problem (2); according to the positive definiteness of the Euclidean norm, $f(x^*) \ge 0$, and $f(x^*) = 0$ if and only if $Ax^* - |x^*| = b$.

Hence, solving the AVE is equivalent to finding the optimal solution of problem (2) when $f$ reaches the optimal value zero.
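As a minimal illustration of Theorem 4, the objective can be written in one line of MATLAB; the instance data below are hypothetical and for illustration only:

% Fitness from Theorem 4: f(x) = ||A*x - |x| - b|| (Euclidean norm),
% which is zero exactly at a solution of the AVE.
A = [8 -3; 2 6];  b = [5; -4];   % hypothetical 2-D instance
f = @(x) norm(A*x - abs(x) - b);
f([1; -0.5])                     % fitness value of a trial point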

3. Basic GSO

3.1. Basic Concepts of GSO

In 2005, Krishnanand and Ghose [20] proposed a new swarm intelligence optimization algorithm called the artificial glowworm swarm optimization (GSO) algorithm. After several years of development, GSO has shown good prospects for application to optimization in task scheduling, vehicle routing problems, and building design [21–23].

Each iteration of the GSO execution process includes five phases: the glowworm deployment phase, luciferin update phase, movement probability calculation phase, location update phase, and neighborhood range update phase, which are described below.

3.2. Mathematical Model of GSO

In this section, we introduce the mathematical model of GSO proposed by Krishnanand and Ghose [20]. For the reader’s convenience, we first provide a list of the notation we will use, shown in Table 1.

3.2.1. Glowworm Deployment Phase

$N$ glowworms (solution vectors) are randomly placed in the feasible domain of the problem and labeled $1, 2, \ldots, N$, respectively, where each glowworm $i$ is considered to be an $n$-dimensional vector $x_i = (x_{i1}, x_{i2}, \ldots, x_{in})$. Each glowworm is initialized with luciferin level $\ell_0$, local decision radius $r_0$, step size $s$, threshold $n_t$ on the number of glowworms contained in the local decision domain, luciferin decay factor $\rho$, fitness enhancement factor $\gamma$, domain change rate $\beta$, sensor range $r_s$ of the glowworms (threshold of the local decision domain), and iteration number $t = 1$.

3.2.2. Luciferin Update Phase

The luciferin level of each glowworm is equal to that of the previous iteration, minus a certain proportion of the luciferin level that decays with time, plus a certain extracted proportion of the glowworm's current fitness, as described by the following equation:

$\ell_i(t) = (1 - \rho)\,\ell_i(t-1) + \gamma J_i(t), \quad (3)$

where $\ell_i(t)$ is the luciferin level of glowworm $i$ at iteration $t$, $\rho \in (0,1)$ is the luciferin decay factor, $\gamma$ is the fitness enhancement factor, and $J_i(t)$ indicates the fitness of glowworm $i$ at iteration $t$, i.e., the corresponding value of the objective function.

3.2.3. Movement Probability-Calculating Phase

Each glowworm decides its movement direction according to the luciferin levels of the glowworms in its local decision domain. For each glowworm $i$, the probability of moving toward a neighbor $j$ is given by

$p_{ij}(t) = \dfrac{\ell_j(t) - \ell_i(t)}{\sum_{k \in N_i(t)} \left( \ell_k(t) - \ell_i(t) \right)}, \quad (4)$

where $j \in N_i(t)$, $N_i(t) = \{\, j : d_{ij}(t) < r_d^i(t) \text{ and } \ell_i(t) < \ell_j(t) \,\}$ denotes the set of all neighboring glowworms of glowworm $i$ at iteration $t$, $d_{ij}(t)$ is the Euclidean distance between glowworms $i$ and $j$, $r_d^i(t)$ denotes the adaptive local decision domain of glowworm $i$ at iteration $t$, and $0 < r_d^i(t) \le r_s$.

3.2.4. Location Update Phase

Glowworm $i$ selects a glowworm $j$ with the probability $p_{ij}(t)$ given by (4) and performs a location update, moving with a certain step size $s$ toward the glowworm with the maximum luciferin level in its local decision domain; the movement process can be represented as

$x_i(t+1) = x_i(t) + s \cdot \dfrac{x_j(t) - x_i(t)}{\left\| x_j(t) - x_i(t) \right\|}. \quad (5)$

3.2.5. Neighborhood Range Update Phase

Each glowworm uses an adaptive local decision radius, which changes at each iteration according to the number of neighboring glowworms (the local decision radius increases when the number of neighbors is smaller, and vice versa). At each iteration, the following rule is applied:

$r_d^i(t+1) = \min\left\{ r_s,\ \max\left\{ 0,\ r_d^i(t) + \beta \left( n_t - |N_i(t)| \right) \right\} \right\}. \quad (6)$

A variant of this neighborhood range update rule was first introduced in [18]; its mathematical model replaces the neighbor count with the local glowworm density, as follows:

$r_d^i(t+1) = \min\left\{ r_s,\ \max\left\{ 0,\ r_d^i(t) + \beta \left( n_t - \varrho_i(t) \right) \right\} \right\}, \quad (7)$

where $\varrho_i(t)$ is the glowworm density in the local decision domain of glowworm $i$ at iteration $t$ and $\beta$ is a constant representing the domain change rate.
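To fix ideas, the following compact MATLAB fragment sketches one GSO iteration over models (3)-(7). It assumes the positions X (an N-by-n matrix), luciferin L, radii rd, fitness J, and the constants rho, gamma, beta, nt, rs, and s are already defined; it is an illustrative sketch, not the authors' implementation:

L = (1 - rho)*L + gamma*J;                       % luciferin update, model (3)
for i = 1:N
    d  = sqrt(sum((X - X(i,:)).^2, 2));          % distances to all glowworms
    Ni = find(d < rd(i) & L > L(i));             % brighter neighbors in range
    if ~isempty(Ni)
        p = (L(Ni) - L(i)) / sum(L(Ni) - L(i));  % movement probabilities (4)
        j = Ni(find(cumsum(p) >= rand, 1));      % roulette-wheel selection
        X(i,:) = X(i,:) + s*(X(j,:) - X(i,:))/norm(X(j,:) - X(i,:));  % (5)
    end
    rd(i) = min(rs, max(0, rd(i) + beta*(nt - numel(Ni))));  % range update (6)
end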

The GSO flowchart is shown in Figure 1.

4. Improved GSO Based on the Sigmoid Function

4.1. Basic Idea for GSO Improvement

In a neural network (NN), the sigmoid function is derived from the activation function used to limit the output amplitude of a neuron. Because it suppresses the output signal to a permissible range, it is also known as the suppression function. We refer to [24–26] for some application examples of the sigmoid function. In general, the normal output range of a neuron can be expressed as the closed unit interval $[0, 1]$. The value range of the sigmoid function is the continuous interval from 0 to 1, and the function is differentiable. For these reasons, the sigmoid function has a wide range of applications in neural networks as a suppression function.

The sigmoid function is a strictly increasing function that shows a good balance between linear and nonlinear behavior. An example of the sigmoid function is the logistic function:

$f(t) = \dfrac{1}{1 + e^{-\lambda t}}, \quad (8)$

where $\lambda$ is the tilt parameter, which can be modified to change the degree of tilt.

To utilize the sigmoid function to construct an adaptively decreasing step size, we need to modify it so that it becomes strictly monotonically decreasing while its value remains within the range $(0, 1)$. We therefore set a large initial step size $s$ during GSO execution and then multiply it by the constructed function. Based on this analytical approach, we change the sigmoid function into a strictly decreasing function $g(t)$ of the iteration number $t$, denoted model (9), where $\lambda$ and $\varepsilon$ are undetermined parameters. Therefore, the fixed step size location update model (5) is changed into an adaptive variable step size model, as shown below:

$x_i(t+1) = x_i(t) + s\, g(t) \cdot \dfrac{x_j(t) - x_i(t)}{\left\| x_j(t) - x_i(t) \right\|}. \quad (10)$
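As a sketch only, one decreasing sigmoid of the kind described above and the resulting adaptive step can be coded as follows; the exact expression of model (9) is specific to this paper, so the form and constants below are assumptions:

% Hedged sketch of an adaptively decreasing step size; g is one plausible
% strictly decreasing sigmoid, not necessarily the paper's exact model (9).
T = 1000;  lambda = 0.05;  epsilon = 10;  s0 = 0.9;   % assumed values
g = @(t) 1 ./ (1 + exp(lambda*t - epsilon));          % decreasing in t, range (0,1)
step = @(t) s0 * g(t);                                % adaptive step used in (10)
plot(0:T, step(0:T));  xlabel('iteration t');  ylabel('adaptive step size');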

As detailed in the numerical experiments (Section 5), the performance of GSO with the adaptively decreasing step size strategy (9) for solving AVEs is good. We call this improved GSO with the adaptive variable step size model (10) SIGGSO; it comprises models (3), (4), (10), and (7).

To ensure that the improved adaptively decreasing step size model (10) can be applied effectively within GSO for solving AVEs, the selection of the values of the parameters $\lambda$ and $\varepsilon$ is critical.

Here, we consider AVE1 given in Section 5 for testing the set values of $\lambda$ and $\varepsilon$. The mean value and standard deviation of the fitness when the optimal solution is obtained are listed in Table 2 for different values of $\lambda$ and $\varepsilon$. The curves of strategy (9) with different values of $\lambda$ and $\varepsilon$ are displayed in Figure 2.
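For a rough sense of how such curves vary with $\lambda$ and $\varepsilon$, the following MATLAB fragment sketches Figure 2-style curves under the same assumed sigmoid form used above (again an assumption, not the paper's exact model (9)):

t = 0:1000;  hold on
for params = [0.02 0.05 0.10; 10 10 15]   % columns are [lambda; epsilon] pairs
    plot(t, 1 ./ (1 + exp(params(1)*t - params(2))));
end
hold off
legend('\lambda=0.02, \epsilon=10', '\lambda=0.05, \epsilon=10', '\lambda=0.10, \epsilon=15');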

Table 2 shows that when $\lambda$ is fixed, the results improve as $\varepsilon$ is increased; with a suitable $\varepsilon$, SIGGSO solves AVE1 with very high accuracy, the minimum fitness is zero, and the algorithm is stable. When $\varepsilon$ is fixed, the experimental results improve as $\lambda$ is decreased; with a suitable $\lambda$, SIGGSO again solves AVE1 with very high accuracy, the minimum fitness is zero, and the algorithm is stable. Figures 2(a) and 2(b) show that as $t$ increases, $g(t)$ decreases, first declining rapidly and then gradually stabilizing. This is in line with our basic idea for adaptive step size improvement, namely, that during the early iterations the step size changes relatively quickly, so that the glowworms rapidly converge near the solutions, whereas in the later stages the step size changes little, so that the glowworms can be fine-tuned near the solutions, thus rendering the solution more accurate. Based on this analysis, we fix $\lambda$ and $\varepsilon$ at the values chosen above for SIGGSO in the numerical experiments and use model (10) with these fixed parameter values to update the locations of the glowworms.

The pseudocode of the SIGGSO algorithm is depicted in Pseudocode 1, and the flowchart of SIGGSO is shown in Figure 3.

Adaptive variable step-size GSO based on the sigmoid function (SIGGSO)
Set the dimension of the AVE solution as n, the number of glowworms as N, and the maximum iteration number as T; initialize ℓ0, r0, s, ρ, γ, β, n_t, r_s;
Let x_i(t) be the location of glowworm i at iteration t;
for i = 1 to N do
    ℓ_i(0) = ℓ0;  r_d^i(0) = r0;
end for
t = 1;
while t ≤ T do                                   % Main loop
    for i = 1 to N do                            % Luciferin update phase
        ℓ_i(t) = (1 − ρ) ℓ_i(t−1) + γ J_i(t);
    end for
    for i = 1 to N do                            % Move phase
        N_i(t) = { j : d_ij(t) < r_d^i(t) and ℓ_i(t) < ℓ_j(t) };
        for each glowworm j ∈ N_i(t) do
            p_ij(t) = (ℓ_j(t) − ℓ_i(t)) / Σ_{k∈N_i(t)} (ℓ_k(t) − ℓ_i(t));
        end for
        select neighbor j according to p_ij(t);  % Use the roulette method
        s(t) = s · g(t);                         % Adaptive step size, model (9)
        x_i(t+1) = x_i(t) + s(t) · (x_j(t) − x_i(t)) / ‖x_j(t) − x_i(t)‖;
        r_d^i(t+1) = min{ r_s, max{ 0, r_d^i(t) + β(n_t − |N_i(t)|) } };
    end for
    t = t + 1;
end while
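For readers who prefer runnable code, the following self-contained MATLAB script sketches the whole of Pseudocode 1 on a small hypothetical AVE; the instance (A, b), the sigmoid form g(t), and all parameter values are illustrative assumptions rather than the tuned settings of Section 5:

% SIGGSO sketch on a hypothetical 2-D AVE: A*x - |x| = b.
A = [8 -3; 2 6];  b = [5; -4];
n = 2;  N = 50;  T = 1000;
rho = 0.4;  gamma = 0.6;  beta = 0.08;    % assumed GSO constants
nt = 5;  s0 = 0.9;  rs = 3;
lambda = 0.05;  epsilon = 10;             % assumed step-size parameters
f  = @(x) norm(A*x - abs(x) - b);         % fitness from Theorem 4
X  = -10 + 20*rand(N, n);                 % random deployment in [-10,10]^n
L  = 5*ones(N, 1);                        % initial luciferin
rd = rs*ones(N, 1);                       % initial local decision radii
for t = 1:T
    J = zeros(N, 1);
    for i = 1:N, J(i) = -f(X(i,:)'); end  % brighter glowworm = smaller f
    L = (1 - rho)*L + gamma*J;            % luciferin update, model (3)
    g = 1/(1 + exp(lambda*t - epsilon));  % assumed decreasing sigmoid, (9)
    for i = 1:N
        d  = sqrt(sum((X - X(i,:)).^2, 2));
        Ni = find(d < rd(i) & L > L(i));  % brighter neighbors in range
        if ~isempty(Ni)
            p = (L(Ni) - L(i)) / sum(L(Ni) - L(i));       % model (4)
            j = Ni(find(cumsum(p) >= rand, 1));           % roulette method
            X(i,:) = X(i,:) + s0*g*(X(j,:) - X(i,:)) / ...
                     norm(X(j,:) - X(i,:));               % model (10)
        end
        rd(i) = min(rs, max(0, rd(i) + beta*(nt - numel(Ni))));
    end
end
[bestF, k] = min(arrayfun(@(i) f(X(i,:)'), (1:N)'));      % best residual found
fprintf('best fitness %.3e at x = (%.4f, %.4f)\n', bestF, X(k,1), X(k,2));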
4.2. Time Complexity Analysis of SIGGSO

In this section, we will use big-$O$ notation for the time complexity analysis.

4.2.1. Time Complexity Analysis for Population Size

Pseudocode 1 shows that the SIGGSO algorithm has an outer loop and two inner loops, and the second inner loop has another inner loop.

Remark 5. $ab$ represents the product of the two scalars $a$ and $b$.

Table 3 shows that the total time complexity of SIGGSO is the sum of the costs of the outer loop and the inner loops.

According to the addition rule, only the highest-order term of the time complexity is considered, and according to the multiplication rule, $O(cN)$ is equivalent to $O(N)$ ($c$ is a constant). Therefore, the time complexity of SIGGSO is $O(T \cdot N \cdot |N_i(t)|)$. As $|N_i(t)|$ changes for each glowworm $i$ at each iteration $t$, its range is $[0, N-1]$. Finally, the time complexity is between $O(T \cdot N)$ and $O(T \cdot N^2)$.

4.2.2. Time Complexity Analysis for $n$-Dimensional AVEs

We assume that both the maximum iteration number $T$ and the population size $N$ are constants when analyzing the time complexity with respect to the AVE dimension $n$.

From Table 4, applying the addition and multiplication rules as before, we obtain the time complexity of SIGGSO with respect to the dimension $n$; since the dominant per-glowworm cost is the fitness evaluation $\|Ax - |x| - b\|$, which requires $O(n^2)$ operations, the overall complexity is polynomial in $n$.

5. Numerical Experiments

In this section, we first give seven test AVEs, including multisolution and high-dimensional ones, and analyze their solution characteristics. The parameter setting details are given in Section 5.3. The comparisons of SIGGSO with the basic GSO and with the other metaheuristic algorithms are described in Sections 5.4 and 5.5. For these comparisons, we mainly consider the mean and standard deviation of the fitness value as metrics. Furthermore, the well-known Wilcoxon signed-rank test is used to assess the significance of the differences between the compared results.

5.1. Test AVEs

In AVE4, AVE5, AVE6, and AVE7, $n$ is the size of the problem, i.e., the dimension of the solution. Uniformly distributed random numbers on $[0, 1]$ are used to generate the data: the coefficient matrix is an $n$-order square matrix whose elements are generated from such random numbers, the right-hand side is an $n$-dimensional column vector generated likewise, $I$ is the $n$-order identity matrix, and $e$ is an $n$-dimensional column vector whose elements are all unity. The superscript $T$ denotes the matrix transpose.

The above AVEs were employed to test and compare the performance of SIGGSO against the other metaheuristic algorithms, all of which were programmed in MATLAB 2018a. All experiments were performed on a PC with an Intel Core i5-7400 processor and 8 GB of RAM. For a quick overview, the compared algorithms are listed in Table 5.

5.2. Characteristics of the Test AVEs

For AVE1, AVE4, AVE5, and AVE6, we can easily verify that the singular values of the coefficient matrix all exceed one; hence, Lemma 1 tells us that these AVEs have unique solutions. The condition of Lemma 2 applies to AVE7; hence, a unique solution to AVE7 exists. The data of AVE2 and AVE3 satisfy Lemma 3; hence, these AVEs have $2^n$ solutions. Furthermore, as AVE2 is a two-dimensional AVE, it has four solutions; the dimension of AVE3 is three, so it has eight solutions. Based on Lemma 3, the sign patterns of AVE2 are $(+,+)$, $(+,-)$, $(-,+)$, and $(-,-)$; analogously, AVE3 has the eight possible three-component sign patterns.
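The Lemma 1 condition is easy to check numerically; for any concrete coefficient matrix A (a hypothetical instance is used below), one line of MATLAB suffices:

A = [8 -3; 2 6];                 % placeholder coefficient matrix
if min(svd(A)) > 1               % all singular values exceed one
    disp('Lemma 1 applies: the AVE has a unique solution for any b.');
end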

5.3. Parameter Setting

Here are the parameter setting details of the compared algorithms:
(a) For SIGGSO, the fixed parameter settings, which are not specifically tuned for each AVE, are the same as those in Krishnanand and Ghose [27, 28]; we list them in Table 6. A full factorial analysis carried out in Krishnanand and Ghose [18] shows that the choice of the sensor range has some influence on the performance of the algorithm in terms of the total number of peaks captured, and they suggested setting the initial decision radius equal to the sensor range. Based on extensive numerical experiments on the test AVEs, we determined appropriate values of the remaining parameters, as shown in Table 7.
(b) For IADE, EDIWPSO, HSCH, HHO, and GA, all the parameter settings were the same as those in their respective papers (discussed in Section 1).

Remark 6. The low-dimensional AVEs are AVE1, AVE2, and AVE3. The high-dimensional AVEs are AVE4, AVE5, AVE6, and AVE7, whose dimensions equal or exceed 100.

Remark 7. For all the algorithms compared in this study, the initial population was generated randomly in the solution space, the maximum number of iterations was 1000, and the population size was 50.

Remark 8. For distinct expressions, AVE4, AVE5, AVE6, and AVE7 are denoted as "AVE name_dimension of solution."

Remark 9. We consider $f(x)$ in Theorem 4 to be the fitness function of all test AVEs. The theoretical optimal values of the corresponding unconstrained optimization problems are all zero.

5.4. Advantage of SIGGSO in Solving Multisolution AVEs

Through the analysis of the solution characteristics of AVE2 and AVE3 in Section 5.2, we obtained four solutions of AVE2 and eight solutions of AVE3 by executing SIGGSO. The four solutions of AVE2 are

The eight solutions of AVE3 are

The following figures show the process of capturing the AVE solutions directly obtained by GSO and SIGGSO. To clearly demonstrate the location-changing process of the glowworms, we adjusted certain parameters in this subsection for all the AVEs used in the simulation and, additionally, for AVE4, AVE5, AVE6, and AVE7.

Figures 4–9 indicate that, under the same number of iterations, when solving AVE2–AVE7, SIGGSO has a strong ability to leave local optima and then converge to the global optima compared with GSO. Furthermore, Figures 4 and 5 reflect that SIGGSO can simultaneously locate as many solutions of an AVE as possible, which cannot be achieved directly by the original versions of the compared metaheuristic algorithms.

5.5. Comparison of the Performances of GSO, SIGGSO, IADE, EDIWPSO, HSCH, HHO, and GA in Solving AVEs

We executed each of these algorithms 20 times independently and then compared the mean value and standard deviation of the fitness values on the different test AVEs, as shown in Table 8. Table 9 shows the statistical results for the data of Table 8 using the left-sided Wilcoxon signed-rank test. Table 10 shows the mean value and standard deviation of the execution times of the seven compared algorithms.

Table 8 indicates the following: (a) when solving low-dimensional, multisolution AVEs, SIGGSO, IADE, and EDIWPSO exhibit excellent solution accuracy and stability, whereas the performance of HSCH and GA is unsatisfactory, and the performance of GSO and HHO is the worst. In fact, HSCH and GA did not perform well across the whole set of test AVEs. Moreover, SIGGSO outperforms all the other compared algorithms on AVE2. (b) When solving high-dimensional AVEs, EDIWPSO, HSCH, and GA are far inferior to SIGGSO and IADE in all aspects of our comparison, and the performance of GSO is slightly inferior to SIGGSO on all high-dimensional AVEs. Furthermore, IADE outperforms SIGGSO on AVE4_100, AVE4_200, AVE7_100, and AVE7_200. However, IADE performs poorly on all 500-dimensional AVEs compared with SIGGSO, which means that IADE fails to perform well on higher-dimensional AVEs. HHO outperforms SIGGSO on 50% of the test AVEs, namely AVE4_100, AVE4_200, AVE4_500, AVE7_100, AVE7_200, and AVE7_500; on the high-dimensional AVE5 and AVE6, however, HHO did not perform as well as SIGGSO.

For a more convincing statistical analysis, the Wilcoxon signed-rank test (WSRT) is adopted to perform pairwise comparisons between SIGGSO and the rest of the algorithms. The hypotheses for SIGGSO proposed in this study are:

H0. The mean and standard deviation of the fitness value obtained by SIGGSO are greater than those of another algorithm.

H1. The mean and standard deviation of the fitness value obtained by SIGGSO are less than or equal to those of another algorithm.

We use the left-sided WSRT function in MATLAB for the above hypothesis test. Table 9 gives the statistical results on all test AVEs.
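For reference, the left-sided test can be invoked with the signrank function from the Statistics and Machine Learning Toolbox; the paired samples below are placeholders:

fitSIGGSO = [0 0 1e-8 2e-9];           % hypothetical paired fitness results
fitOther  = [0.3 0.1 2e-3 5e-2];
p = signrank(fitSIGGSO, fitOther, 'tail', 'left')   % small p favors SIGGSO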

According to the $p$ values, Table 9 shows that SIGGSO achieves a significant improvement over GSO, EDIWPSO, HSCH, HHO, and GA at a significance level of 0.05. For the comparison between SIGGSO and IADE, the $p$ value is 0.2307 (>0.05), which means that the difference between these two algorithms cannot be deemed significant. Meanwhile, for the comparison between IADE and SIGGSO, the $p$ value is 0.7770 (>0.2307), which means that we prefer to accept the H1 hypothesis for SIGGSO vs. IADE; i.e., SIGGSO possesses competitive advantages compared with IADE.

Table 10 shows that SIGGSO always requires more execution time than IADE, EDIWPSO, and GA on all the compared AVEs. Although EDIWPSO and GA require far less time than SIGGSO, they fail to solve the AVEs well, having lower solution accuracy and weaker stability than SIGGSO. Owing to its superiority in execution time over SIGGSO, IADE would be preferable for solving AVEs once its performance on higher-dimensional and multipeak AVEs is improved. Compared with SIGGSO, the execution time of HSCH is unacceptable on higher-dimensional AVEs; for example, HSCH required almost 19, 18.9, 19.5, and 21.4 seconds to solve AVE4_500, AVE5_500, AVE6_500, and AVE7_500, respectively, nearly 3.5 times the time required by SIGGSO. HHO required less time than SIGGSO but also did not perform well on most test AVEs. Based on the previous analysis, SIGGSO trades execution time for solution accuracy and the capacity to solve high-dimensional AVEs.

Figure 10 displays the fitness curves of one iterative process on the compared AVEs, as obtained by the compared algorithms.

As shown in Figures 10(a)–10(c), SIGGSO requires more iterations to reach the global optima than the other metaheuristic algorithms. However, when solving high-dimensional AVEs, including AVE5_200, AVE5_500, AVE6_100, AVE6_200, and AVE6_500, SIGGSO converges significantly faster, i.e., with fewer iterations, than the other algorithms. Moreover, the ability of SIGGSO to leave local optima is stronger than that of the other algorithms; for instance, Figures 10(i)–10(l) show that the ability of IADE to leave local optima is weaker.

All figures show that, when solving either multisolution or high-dimensional AVEs, GA and HSCH fluctuate considerably during their early iterations and cannot leave local optima, and their solution accuracy is the worst among the compared algorithms.

One point worth considering is that SIGGSO requires slightly fewer iterations than EDIWPSO on most test AVEs. For example, Figure 10(k) shows that SIGGSO requires about 120 iterations to converge to a global optimum, whereas EDIWPSO needs more than 200 iterations. This shows that the adaptive model (9) designed in this paper is a sensible adaptive step size strategy.

Considering that a robust algorithm is needed to solve AVEs, SIGGSO is an effective technique to this end, especially for multisolution or high-dimensional AVEs. This is because SIGGSO couples GSO with an effective adaptive step size technique, and it indicates that the chosen strategy is useful for solving AVEs.

6. Conclusion

The AVE is an NP-hard problem whose solutions take several forms: there are both low-dimensional and high-dimensional AVEs, and both single- and multisolution AVEs. This study verified the advantage of GSO in solving multisolution AVEs compared with the other metaheuristic algorithms. As the basic GSO has relatively poor solution accuracy and struggles to solve high-dimensional AVEs, an improved GSO based on the sigmoid function, called SIGGSO, was proposed in this study. The sigmoid function was used to reconstruct the adaptive step size model in GSO. This improvement enhances the convergence rate of GSO during the early iterations and improves its ability to capture the global optima during the later stages. Through numerical experiments, it was established that SIGGSO has higher solution accuracy and better stability than the compared algorithms and, when solving high-dimensional AVEs, converges to the global optimum within fewer iterations. The proposed SIGGSO algorithm can be applied to linear complementarity, bilinear programming, and concave minimization problems, as well as to other continuous optimization problems.

Finally, we suggest several potential directions for future research. We intend to focus on improving the speed of the algorithm on higher-dimensional AVEs, which is a limitation of SIGGSO, and we hope to find other NN activation functions that may further improve the performance of GSO.

Data Availability

The data that support the findings of this study are available from the first author upon reasonable request.

Conflicts of Interest

There are no conflicts of interest to declare.

Acknowledgments

This research was supported by the 71st General Grant of China Postdoctoral Science Foundation (Project No. 2022M712680), General Research Project of Basic Science (Natural Science) in Jiangsu Province (Project No. 22KJB110027), General Project of Philosophy and Social Science Research in Jiangsu Universities (Project No. 2021SJA1079), and Research Initiation Foundation of Xuzhou Medical University (Project No. D2019046).