Abstract

In this paper, a novel swarm-based metaheuristic algorithm called tuna swarm optimization (TSO) is proposed. The main inspiration for TSO comes from the cooperative foraging behavior of tuna swarms. The work mimics two foraging behaviors of tuna swarms, spiral foraging and parabolic foraging, to develop an effective metaheuristic algorithm. The performance of TSO is evaluated by comparison with other metaheuristics on a set of benchmark functions and several real engineering problems. Sensitivity, scalability, robustness, and convergence analyses are combined with the Wilcoxon rank-sum test and the Friedman test. The simulation results show that TSO performs better than the comparison algorithms.

1. Introduction

Real-world optimization problems have become more challenging, which requires more efficient solution methods. Scholars have studied various approaches to solve these complex and difficult real-world problems. Some researchers solve these optimization problems using traditional methods such as quasi-Newton, conjugate gradient, and sequential quadratic programming methods. However, owing to the nonlinear, nonconvex characteristics of most real-world optimization problems and the involvement of multiple decision variables and complex constraints, these traditional algorithms have difficulty solving them effectively [1, 2]. Metaheuristic algorithms have the advantages of not relying on a problem model, not requiring gradient information, and having strong search capability and wide applicability, and they can achieve a good balance between solution quality and computational cost [3]. Therefore, metaheuristic algorithms have been proposed to solve real-world optimization problems, such as image segmentation [4, 5], feature selection [6, 7], mission planning [8, 9], parameter optimization [10, 11], job shop scheduling [12, 13], etc.

Metaheuristic algorithms are usually classified into three categories [14]: evolution-based algorithms, physics-based algorithms, and swarm-based algorithms. Evolution-based algorithms are inspired by the laws of evolution in nature. Genetic algorithm (GA) [15], inspired by Darwin's theory of natural selection, is a well-known evolution-based algorithm. With the popularity of GA, several other widely used evolution-based algorithms have been proposed, including differential evolution (DE) [16], genetic programming (GP) [17], evolutionary strategies (ES) [18], and evolutionary programming (EP) [19]. In addition, several new evolution-based algorithms have been proposed, such as artificial algae algorithm (AAA) [20], biogeography-based optimization (BBO) [21], and monkey king evolutionary (MKE) [22]. Physics-based algorithms are inspired by various laws of physics. One of the most famous algorithms of this category is simulated annealing (SA) [23]. SA is inspired by the thermodynamic process in which a material is heated up and then cooled slowly. Other physics-based algorithms include gravitational search algorithm (GSA) [24], nuclear reaction optimization (NRO) [25], water cycle algorithm (WCA) [26], and sine cosine algorithm (SCA) [27]. Swarm-based algorithms are inspired by the social behavior of different species in natural groups. Particle swarm optimization (PSO) [28] and ant colony optimization (ACO) [29] are two typical swarm-based algorithms. PSO and ACO mimic the aggregation behavior of bird flocks and the foraging behavior of ant colonies, respectively. Other algorithms of this category include grey wolf optimizer (GWO) [30], monarch butterfly optimization (MBO) [31], elephant herding optimization (EHO) [32], moth search algorithm (MSA) [33], manta ray foraging optimization (MRFO) [34], earthworm optimization algorithm (EOA) [35], etc.
With the development of metaheuristics, a type of human-based metaheuristic algorithm is also emerging. These algorithms are inspired by the characteristics of human activity. Teaching-learning-based optimization (TLBO) [36], inspired by traditional teaching methods, is a typical example of this category among metaheuristic algorithms. Other human-based metaheuristics include: social evolution and learning optimization (SELO) [37], group teaching optimization algorithm (GTOA) [38], heap-based optimizer (HBO) [39], political optimizer (PO) [40], etc.

A common feature of all these metaheuristic algorithms is that they rely on exploration and exploitation of the search space to find the optimal solution [41, 42]. Exploration means that the algorithm searches for promising regions in a wide search space, while exploitation is the further search for the best solution within those promising regions. The balance between the two search behaviors affects the quality of the solution: when exploration dominates, exploitation declines, and vice versa. Therefore, balancing exploration and exploitation is a major challenge for metaheuristics. Although new algorithms are constantly being developed, the no free lunch (NFL) theorem [43] states that no particular algorithm can solve all optimization problems perfectly. The NFL theorem has motivated researchers to develop effective metaheuristic algorithms for optimization problems in various fields.

In this paper, a novel swarm-based metaheuristic called tuna swarm optimization (TSO) is presented. It is inspired by two types of swarm foraging behavior of tuna. TSO is evaluated on 23 benchmark functions and 3 engineering design problems. The test results reveal that the proposed method significantly outperforms several popular and recent metaheuristics. This paper is structured as follows: Section 2 describes the inspiration for TSO and builds the corresponding mathematical model. A benchmark function set and three engineering design problems are employed to examine the performance of TSO in Sections 3 and 4, respectively. Section 5 concludes the overall work and provides an outlook for the future.

2. Tuna Swarm Optimization

2.1. Inspiration

Tuna, scientifically named Thunnini, are marine carnivorous fish. There are many species of tuna, and their sizes vary greatly. Tuna are top marine predators, feeding on a variety of midwater and surface fish. Tuna are continuous swimmers with a unique and efficient swimming style (known as thunniform swimming), in which the body stays rigid while the long, thin tail swings rapidly. Although a single tuna swims very fast, it still cannot match the quick reactions of nimble small fish. Therefore, tuna hunt in groups, using their collective intelligence to find and attack prey. These creatures have evolved a variety of effective and intelligent foraging strategies.

The first strategy is spiral foraging. When tuna are feeding, they swim by forming a spiral formation to drive their prey into shallow water where they can be attacked more easily.

The second strategy is parabolic foraging. Each tuna swims after the previous individual, forming a parabolic shape to enclose its prey.

Tuna forage successfully using the above two strategies. In this paper, a new swarm-based metaheuristic optimization algorithm, namely tuna swarm optimization, is proposed based on modeling these natural foraging behaviors.

2.2. Mathematical Model

In this section, the mathematical model of the proposed algorithm is described in detail.

2.2.1. Initialization

Similar to most swarm-based metaheuristics, TSO starts the optimization process by generating the initial population uniformly at random in the search space:

X_i^init = rand · (ub − lb) + lb,  i = 1, 2, …, NP,    (1)

where X_i^init is the i-th initial individual, ub and lb are the upper and lower boundaries of the search space, NP is the number of tuna in the population, and rand is a uniformly distributed random vector ranging from 0 to 1.
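For concreteness, the initialization step in equation (1) can be sketched in Python with NumPy (the function name and the use of a seeded generator are illustrative choices, not part of the algorithm's specification):

```python
import numpy as np

def initialize_population(np_size, dim, lb, ub, rng=None):
    """Generate NP tuna uniformly at random inside [lb, ub]^dim (equation (1))."""
    rng = np.random.default_rng(rng)
    # Each row is one tuna; rand is a uniform random matrix in [0, 1).
    return rng.random((np_size, dim)) * (ub - lb) + lb
```

Calling `initialize_population(50, 30, -100.0, 100.0)` produces a 50 × 30 population inside [−100, 100], matching the population size used in the experiments later in the paper.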

2.2.2. Spiral Foraging

When sardines, herring, and other small schooling fish encounter predators, the entire school forms a dense formation and constantly changes swimming direction, making it difficult for a predator to lock onto a target. At this time, the tuna group chases the prey by forming a tight spiral formation. Although most fish in the school have little sense of direction, when a small group of fish swims firmly in a certain direction, the nearby fish adjust their direction one after another, finally forming a large group with the same goal that starts to hunt. In addition to spiraling after their prey, tuna also exchange information with each other: each tuna follows the previous fish, enabling information sharing among neighboring tuna. Based on the above principles, the mathematical model of the spiral foraging strategy is as follows:

X_i^(t+1) = α1 · (X_best^t + β · |X_best^t − X_i^t|) + α2 · X_i^t,        i = 1,
X_i^(t+1) = α1 · (X_best^t + β · |X_best^t − X_i^t|) + α2 · X_(i−1)^t,    i = 2, 3, …, NP,    (2)

α1 = a + (1 − a) · t / t_max,    (3)

α2 = (1 − a) − (1 − a) · t / t_max,    (4)

β = e^(b·l) · cos(2πb),    (5)

l = e^(3·cos(((t_max + 1/t) − 1)·π)),    (6)

where X_i^(t+1) is the i-th individual at the (t+1)-th iteration, X_best^t is the current optimal individual (food), α1 and α2 are weight coefficients that control the tendency of individuals to move towards the optimal individual and the previous individual, a is a constant used to determine the extent to which the tuna follow the optimal individual and the previous individual in the initial phase, t denotes the current iteration number, t_max is the maximum number of iterations, and b is a random number uniformly distributed between 0 and 1.

When all tuna forage spirally around the food, they have good exploitation ability in the search space around the food. However, when the optimal individual has not found food, blindly following it is not conducive to group foraging. Therefore, we consider generating a random coordinate in the search space as the reference point for the spiral search. This allows each individual to search a wider space and gives TSO global exploration ability. The specific mathematical model is as follows:

X_i^(t+1) = α1 · (X_rand^t + β · |X_rand^t − X_i^t|) + α2 · X_i^t,        i = 1,
X_i^(t+1) = α1 · (X_rand^t + β · |X_rand^t − X_i^t|) + α2 · X_(i−1)^t,    i = 2, 3, …, NP,    (7)

where X_rand^t is a reference point randomly generated in the search space.

In particular, metaheuristic algorithms usually perform extensive global exploration in the early stage and then gradually transition to precise local exploitation. Therefore, TSO changes the reference point of spiral foraging from a random coordinate to the optimal individual as the iterations increase. In summary, the final mathematical model of the spiral foraging strategy is as follows:

X_i^(t+1) = equation (7) (random reference point),          if t / t_max < rand,
X_i^(t+1) = equation (2) (optimal individual as reference), if t / t_max ≥ rand.    (8)
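Under the same notation, one spiral-foraging step can be sketched in Python. This is a sketch under stated assumptions: the names are illustrative, and the 1/(t + 1) term replaces 1/t in the l expression to avoid division by zero at t = 0.

```python
import numpy as np

def spiral_update(pop, best, t, t_max, lb, ub, a=0.7, rng=None):
    """One spiral-foraging step (a sketch of equations (2)-(8))."""
    rng = np.random.default_rng(rng)
    n, dim = pop.shape
    alpha1 = a + (1 - a) * t / t_max        # weight toward the reference point
    alpha2 = (1 - a) - (1 - a) * t / t_max  # weight toward the preceding tuna
    new = np.empty_like(pop)
    for i in range(n):
        b = rng.random()
        l = np.exp(3 * np.cos(((t_max + 1 / (t + 1)) - 1) * np.pi))
        beta = np.exp(b * l) * np.cos(2 * np.pi * b)
        # Early iterations mostly spiral around a random point (exploration),
        # later ones around the best individual (exploitation).
        if rng.random() < t / t_max:
            ref = best
        else:
            ref = rng.random(dim) * (ub - lb) + lb
        prev = pop[i] if i == 0 else new[i - 1]  # each tuna follows the previous one
        new[i] = alpha1 * (ref + beta * np.abs(ref - pop[i])) + alpha2 * prev
    return new
```

Note that each tuna after the first follows the already-updated position of its predecessor, which realizes the information sharing among neighbors described above.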

2.2.3. Parabolic Foraging

In addition to feeding by forming a spiral formation, tuna also feed cooperatively by forming a parabola. Tuna form a parabolic formation with the food as the reference point. In addition, tuna search for food around their own positions. The two approaches are performed simultaneously, assuming a selection probability of 50% for each. The specific mathematical model is as follows:

X_i^(t+1) = X_best^t + rand · (X_best^t − X_i^t) + TF · p² · (X_best^t − X_i^t),    if rand < 0.5,
X_i^(t+1) = TF · p² · X_i^t,                                                        if rand ≥ 0.5,    (9)

p = (1 − t / t_max)^(t / t_max),

where TF is a random number with a value of 1 or −1.
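A parabolic-foraging step from equation (9) can be sketched similarly (illustrative names; the random sign TF is realized with `rng.choice([-1.0, 1.0])`):

```python
import numpy as np

def parabolic_update(pop, best, t, t_max, rng=None):
    """One parabolic-foraging step (a sketch of equation (9))."""
    rng = np.random.default_rng(rng)
    p = (1 - t / t_max) ** (t / t_max)  # decays as the iterations proceed
    new = np.empty_like(pop)
    for i in range(len(pop)):
        tf = rng.choice([-1.0, 1.0])    # TF: random sign, 1 or -1
        if rng.random() < 0.5:
            # Chase the food along a parabolic trajectory.
            new[i] = (best + rng.random() * (best - pop[i])
                      + tf * p**2 * (best - pop[i]))
        else:
            # Search around the tuna's own position.
            new[i] = tf * p**2 * pop[i]
    return new
```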

Tuna hunt cooperatively through the two foraging strategies described above. In the optimization process of TSO, the population is first randomly generated in the search space. In each iteration, each individual randomly chooses one of the two foraging strategies to execute or, with probability z, regenerates its position in the search space. The value of the parameter z is discussed in the parameter-setting experiments. During the entire optimization process, all individuals of TSO are continuously updated and evaluated until the termination condition is met, and then the optimal individual and the corresponding fitness value are returned. The TSO pseudocode is shown in Algorithm 1. The detailed process of TSO is shown in Figure 1.

Input: the population size NP and the maximum iteration t_max
 Output: the location of the food (the best individual) and its fitness value
 Initialize the random population of tunas X_i (i = 1, 2, …, NP)
 Assign the free parameters a and z
 While (t < t_max)
  Calculate the fitness values of the tunas
  Update X_best
  For (each tuna) do
   Update α1, α2, p
   If (rand < z) then
    Update the position using equation (1)
   Else if (rand ≥ z) then
    If (rand < 0.5) then
     If (t/t_max < rand) then
      Update the position using equation (7)
     Else if (t/t_max ≥ rand) then
      Update the position using equation (2)
    Else if (rand ≥ 0.5) then
     Update the position using equation (9)
  End for
  t = t + 1
 End while
 Return the best individual X_best and its fitness value f(X_best).
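Pulling the pieces together, Algorithm 1 can be sketched end-to-end in Python. This is a minimal sketch, not the authors' implementation: the defaults a = 0.7 and z = 0.05, the 1/(t + 1) guard in the l term, and the clipping of positions to the bounds are our own assumptions.

```python
import numpy as np

def tso(obj, dim, lb, ub, np_size=50, t_max=1000, a=0.7, z=0.05, rng=None):
    """Minimal sketch of the TSO loop in Algorithm 1 (illustrative names)."""
    rng = np.random.default_rng(rng)
    pop = rng.random((np_size, dim)) * (ub - lb) + lb        # equation (1)
    fit = np.apply_along_axis(obj, 1, pop)
    best, best_fit = pop[fit.argmin()].copy(), fit.min()
    for t in range(t_max):
        alpha1 = a + (1 - a) * t / t_max
        alpha2 = (1 - a) - (1 - a) * t / t_max
        p = (1 - t / t_max) ** (t / t_max)
        new = np.empty_like(pop)
        for i in range(np_size):
            if rng.random() < z:
                # Rare random reinitialization keeps diversity.
                new[i] = rng.random(dim) * (ub - lb) + lb
            elif rng.random() < 0.5:
                # Spiral foraging: random reference early, best individual late.
                b = rng.random()
                l = np.exp(3 * np.cos(((t_max + 1 / (t + 1)) - 1) * np.pi))
                beta = np.exp(b * l) * np.cos(2 * np.pi * b)
                if rng.random() < t / t_max:
                    ref = best
                else:
                    ref = rng.random(dim) * (ub - lb) + lb
                prev = pop[i] if i == 0 else new[i - 1]
                new[i] = alpha1 * (ref + beta * np.abs(ref - pop[i])) + alpha2 * prev
            else:
                # Parabolic foraging.
                tf = rng.choice([-1.0, 1.0])
                if rng.random() < 0.5:
                    new[i] = (best + rng.random() * (best - pop[i])
                              + tf * p**2 * (best - pop[i]))
                else:
                    new[i] = tf * p**2 * pop[i]
        pop = np.clip(new, lb, ub)                           # added safeguard
        fit = np.apply_along_axis(obj, 1, pop)
        if fit.min() < best_fit:
            best_fit = fit.min()
            best = pop[fit.argmin()].copy()
    return best, best_fit

# Example: minimize a 5-dimensional sphere function.
def sphere(x):
    return float((x ** 2).sum())

best, best_fit = tso(sphere, dim=5, lb=-100.0, ub=100.0,
                     np_size=30, t_max=200, rng=0)
```

On the sphere function the sketch converges quickly toward the origin, which is consistent with the exploitation behavior reported below for the unimodal functions.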

3. Numerical Experiment and Discussion

3.1. Benchmark Function Set and Compared Algorithms

In this section, a set of well-known benchmark functions is employed to evaluate the performance of the proposed TSO. The set includes 7 unimodal functions, 6 multimodal functions, and 10 fixed-dimension multimodal functions. The unimodal functions F1–F7 have only one global optimum and are therefore often employed to evaluate the local exploitation capability of an algorithm. Besides the global optimum, the multimodal functions F8–F23 also have multiple local optima and are therefore used to challenge the global exploration and local-optimum-avoidance capabilities of an algorithm. The mathematical formulas and characteristics of these functions are shown in Table 1, and a three-dimensional visualization is given in Figure 2.

3.2. Compared Algorithms and Experimental Setup

The results of the proposed TSO are compared with seven well-regarded and recent metaheuristics: particle swarm optimization (PSO) [28], grey wolf optimizer (GWO) [30], whale optimization algorithm (WOA) [44], and salp swarm algorithm (SSA) [45], which are among the most frequently used algorithms in the optimization field, and Harris hawks optimization (HHO) [46], equilibrium optimizer (EO) [47], and tunicate swarm algorithm (TSA) [48], which are three recently proposed algorithms.

All algorithms were implemented in MATLAB R2016b on a computer running Windows 10 Professional (64-bit) with 16 GB RAM. The population size and the maximum number of iterations for all optimizers were set to 50 and 1000, respectively. All results were recorded and compared based on the performance of each optimizer over 30 independent runs. It is well known that parameter settings have a large impact on an algorithm's performance; for a fair comparison, the parameters of all compared algorithms were set to the values used in their original articles. Table 2 lists the parameters used by each algorithm.

3.3. Analysis of TSO for Variable-Dimensional Functions

Table 3 presents the results of TSO and the comparison algorithms for solving F1–F13 with Dim = 30. In addition, the performance of TSO is evaluated on the same functions in higher dimensions, which helps assess its ability to solve high-dimensional problems. Tables 4–6 show the results of TSO and the comparison algorithms on F1–F13 with dimensions of 100, 500, and 1000.

As shown by the results for the unimodal functions F1–F7 in Tables 3–6, TSO achieves the best results on most of the functions, significantly outperforming almost all of the comparison algorithms. In addition, TSO still outperforms the comparison algorithms when dealing with high-dimensional problems. Moreover, the results obtained by TSO fluctuate little as the dimensionality increases, which can also be observed in the convergence curves in Figure 3. Specifically, TSO performs best on F1–F5 when Dim = 30. In particular, TSO consistently obtains the theoretical optimal solution on F1 and F3. On F7, HHO is the best optimizer, with TSO second. TSO performs relatively poorly on F6. For the high-dimensional functions, TSO and HHO rank in the top two: TSO gives the most satisfactory results on F1–F4, while HHO performs best on F5–F7, with TSO ranking just behind it. Overall, TSO exhibits the best exploitation ability among all the tested algorithms on the unimodal functions across different dimensions.

The results for solving the multimodal functions F8–F13 in different dimensions are also given in Tables 3–6. The analysis shows that TSO performs best in all dimensions when solving F8–F11 and ranks just behind HHO on F12 and F13. Notably, TSO stably obtains the theoretical optimal solution on F9–F11. As the convergence curves show, TSO's performance does not degrade much as the dimensionality increases, demonstrating its superior performance on high-dimensional multimodal functions.

3.4. Analysis of TSO for Fixed Dimensional Functions

The test results of TSO on the fixed-dimension functions are shown in Table 7. The means in the table show that TSO is highly competitive on these functions, performing best on eight of the ten; on the remaining two functions, it ranks second and third, respectively. In order to analyze the distribution characteristics of the solutions for the fixed-dimension functions, box plots of F14–F23 are drawn based on the results of 30 runs, as shown in Figure 4. It can be observed that TSO outperforms the comparison algorithms on most functions in terms of maximum, minimum, and median values, and its solutions are more concentrated; thus, TSO performs better than the other algorithms.

3.5. Wall-Clock Time Analysis of TSO

Computational efficiency is also an important measure of algorithm performance. Table 8 records the average computational time consumed by each algorithm over 30 independent runs on each function. It can be seen that TSO is inexpensive, taking longer than only WOA and TSA. Although TSO takes slightly more time than these two, it performs better than both; it also takes less time than the remaining comparison algorithms while performing better, so TSO offers a clear efficiency advantage. Figure 5 illustrates the ranking of the computational time consumption of each algorithm, where WOA, TSA, and TSO visibly rank in the top three.

3.6. Parameter Sensitivity Analysis

This section analyses the values of the two control parameters of TSO. The first parameter is z, which controls the probability of randomly regenerating individuals. The second parameter is a, which controls the extent to which each individual follows the optimal individual and its neighboring individual. The 13 variable-dimension functions (F1–F13) and 10 fixed-dimension functions (F14–F23) are used to analyze the effect of these parameter values on TSO's performance. Each parameter takes values from a predefined candidate set, giving 99 parameter combinations in total. Each combination solves each test function 30 times independently, so a total of 68,310 results are obtained (99 combinations × 23 functions × 30 runs). Owing to the large amount of data, no itemized comparison of the experimental results is presented; instead, the differences are reflected by ranking the results under the different parameter settings with the Friedman test.

The Friedman test results for the unimodal functions F1–F7, the multimodal functions F8–F23, and all functions F1–F23 are given in Tables 9–11, respectively. From the results in Table 9, it is clear that the smaller the value of z, the better the TSO performance, and the larger the value of a, the better the results obtained by TSO. This is because a smaller z lowers the probability of randomly generating new individuals, while a larger a increases the degree to which each individual follows the optimal individual; both effects improve exploitation ability and accelerate convergence. For the multimodal functions F8–F23, Table 10 yields almost the opposite conclusion. The rankings over all functions are given in Table 11. The results show that TSO achieves its best overall performance at a compromise setting of z and a, which is adopted in the subsequent experiments.

3.7. Statistical Analysis of TSO

This section further analyses the differences between TSO and the other algorithms statistically, using the Wilcoxon rank-sum test and the Friedman test. The Wilcoxon rank-sum test is a pairwise test that checks for significant differences between two algorithms. The results of the test between TSO and each algorithm at a significance level of 0.05 are given in Tables 12–16, where the symbols "+/=/−" indicate that TSO performs better than, similar to, or worse than the comparison algorithm. Table 17 summarizes, across different dimensions and functions, the number of cases in which TSO is better than, similar to, and worse than each comparison algorithm. TSO outperforms the other algorithms in most cases, achieving results of 32/15/15, 42/13/7, 62/0/0, 61/1/0, 61/1/0, 51/6/5, and 55/7/0, confirming the significant superiority of TSO compared to the other algorithms.
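As a concrete illustration of the pairwise test used here, a normal-approximation Wilcoxon rank-sum statistic can be sketched as follows. This is a simplified sketch (no tie correction, large-sample normal approximation); `rank_sum_test` is an illustrative name, not the function used by the authors.

```python
from math import erf
import numpy as np

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.

    Suitable for comparing two sets of 30 independent run results;
    ties are not corrected for, for brevity.
    """
    n1, n2 = len(x), len(y)
    all_vals = np.concatenate([x, y])
    # Double argsort (stable) assigns ranks 1..n1+n2.
    ranks = all_vals.argsort(kind="stable").argsort(kind="stable") + 1.0
    w = ranks[:n1].sum()                                  # rank sum of sample x
    mu = n1 * (n1 + n2 + 1) / 2.0                         # mean of W under H0
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)       # std of W under H0
    zstat = (w - mu) / sigma
    # Two-sided p-value from the standard normal tail.
    pval = 2 * (1 - 0.5 * (1 + erf(abs(zstat) / np.sqrt(2))))
    return zstat, pval
```

A clearly better optimizer (uniformly smaller errors) yields p far below 0.05, while two identical result sets do not reject the null hypothesis.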

Table 18 shows the statistics of F1–F13 in different dimensions and the fixed dimensional functions F14–F23. The statistics show that TSO ranks first in all cases. Therefore, it can be considered that TSO has the best performance compared to other algorithms.

4. TSO for Engineering Design Problems

This section uses three engineering design problems to assess TSO's ability to solve real-world problems. These problems include the pressure vessel design problem, the tension/compression spring design problem, and the welded beam design problem. TSO uses the same number of iterations (1000) and populations (50) in solving these engineering design problems. Each problem is run 30 times independently, and the statistical results are compared with other algorithms in the literature.

4.1. Pressure Vessel Design

The pressure vessel design problem shown in Figure 6 is a well-known benchmark design problem whose goal is to reduce the total cost, including forming cost, material cost, and welding cost. There are four design variables: the vessel thickness Ts (x1), the head thickness Th (x2), the inner radius R (x3), and the length of the cylindrical section L (x4). The problem is described as follows:

min f(x) = 0.6224·x1·x3·x4 + 1.7781·x2·x3² + 3.1661·x1²·x4 + 19.84·x1²·x3

Subject to

g1(x) = −x1 + 0.0193·x3 ≤ 0,
g2(x) = −x2 + 0.00954·x3 ≤ 0,
g3(x) = −π·x3²·x4 − (4/3)·π·x3³ + 1,296,000 ≤ 0,
g4(x) = x4 − 240 ≤ 0.

The results of TSO for solving this problem are compared with those of other algorithms such as DDSCA, ISCA, MBA, CPSO, TEO, hHHO-SCA, HPSO, MVO, and AFA, as shown in Table 19. The results show that the TSO solution is superior to the solutions provided by the comparison algorithms, with an optimal parameter vector of [0.7782, 0.3846, 40.3196, 199.9999] corresponding to a minimum cost of 5885.3327.
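For reference, the standard pressure vessel cost function and constraints can be evaluated directly. The sketch below (illustrative names, standard formulation) reproduces the reported cost of about 5885.33 at TSO's solution, up to the rounding of the published digits:

```python
import math

def pressure_vessel(x):
    """Pressure vessel cost and constraint values (standard formulation).

    x = [Ts, Th, R, L]; feasible designs have all g_i <= 0.
    """
    ts, th, r, l = x
    cost = (0.6224 * ts * r * l + 1.7781 * th * r**2
            + 3.1661 * ts**2 * l + 19.84 * ts**2 * r)
    g = [-ts + 0.0193 * r,                                     # shell thickness
         -th + 0.00954 * r,                                    # head thickness
         -math.pi * r**2 * l - (4.0 / 3.0) * math.pi * r**3
         + 1_296_000,                                          # volume requirement
         l - 240.0]                                            # length limit
    return cost, g
```

Evaluating the reported solution `[0.7782, 0.3846, 40.3196, 199.9999]` gives a cost within rounding error of 5885.33; the first three constraints are active there, so tiny violations at the last published digit are expected.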

4.2. Tension/Compression Spring Design

The tension/compression spring design problem is a mechanical engineering design optimization problem. As shown in Figure 7, the goal of this problem is to minimize the weight of the spring. It includes four nonlinear inequality constraints and three continuous variables: the wire diameter w (x1), the mean coil diameter d (x2), and the number of active coils L (x3). The problem can be described as follows:

min f(x) = (x3 + 2)·x2·x1²

Subject to

g1(x) = 1 − x2³·x3 / (71785·x1⁴) ≤ 0,
g2(x) = (4·x2² − x1·x2) / (12566·(x2·x1³ − x1⁴)) + 1 / (5108·x1²) − 1 ≤ 0,
g3(x) = 1 − 140.45·x1 / (x2²·x3) ≤ 0,
g4(x) = (x1 + x2) / 1.5 − 1 ≤ 0.

The solution of TSO is compared with other methods given in the literature, including GA3, CPSO, CDE, DDSCA, GSA, hHHO-SCA, AEO, and MVO. Table 20 shows the parameters and costs corresponding to the optimal solution of each algorithm. As can be seen from Table 20, TSO is the best algorithm for solving this problem, obtaining the lowest spring weight among the compared methods.
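The standard spring formulation can be checked numerically. The sketch below (illustrative names) evaluates a near-optimal design frequently reported in the literature for this problem; the test point is an assumption taken from that literature, not TSO's output:

```python
def spring(x):
    """Tension/compression spring weight and constraints (standard formulation).

    x = [d, D, n]: wire diameter, mean coil diameter, number of active coils.
    Feasible designs have all g_i <= 0.
    """
    d, D, n = x
    weight = (n + 2) * D * d**2
    g = [1 - D**3 * n / (71785 * d**4),                        # shear stress
         (4 * D**2 - d * D) / (12566 * (D * d**3 - d**4))
         + 1 / (5108 * d**2) - 1,                              # surge frequency
         1 - 140.45 * d / (D**2 * n),                          # deflection
         (d + D) / 1.5 - 1]                                    # outer diameter
    return weight, g
```

At the widely reported near-optimal design [0.051689, 0.356718, 11.288966] the weight evaluates to about 0.012665, with the first two constraints active.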

4.3. Welded Beam Design

The welded beam design problem is a classical structural optimization problem. As shown in Figure 8, the objective of this design problem is to minimize the fabrication cost of the welded beam. The optimization variables include the weld thickness h (x1), the length of the welded joint l (x2), the beam height t (x3), and the beam thickness b (x4). The mathematical model is as follows:

min f(x) = 1.10471·x1²·x2 + 0.04811·x3·x4·(14.0 + x2)

Subject to

g1(x) = τ(x) − τ_max ≤ 0,
g2(x) = σ(x) − σ_max ≤ 0,
g3(x) = x1 − x4 ≤ 0,
g4(x) = 0.10471·x1² + 0.04811·x3·x4·(14.0 + x2) − 5.0 ≤ 0,
g5(x) = 0.125 − x1 ≤ 0,
g6(x) = δ(x) − δ_max ≤ 0,
g7(x) = P − Pc(x) ≤ 0,

where

τ(x) = sqrt(τ'² + 2·τ'·τ''·x2 / (2R) + τ''²),
τ' = P / (√2·x1·x2),  τ'' = M·R / J,
M = P·(L + x2/2),  R = sqrt(x2²/4 + ((x1 + x3)/2)²),
J = 2·{√2·x1·x2·[x2²/12 + ((x1 + x3)/2)²]},
σ(x) = 6·P·L / (x4·x3²),  δ(x) = 4·P·L³ / (E·x3³·x4),
Pc(x) = (4.013·E·sqrt(x3²·x4⁶/36) / L²)·(1 − (x3/(2L))·sqrt(E/(4G))),

with P = 6000 lb, L = 14 in, E = 30 × 10⁶ psi, G = 12 × 10⁶ psi, τ_max = 13,600 psi, σ_max = 30,000 psi, and δ_max = 0.25 in.

This problem has been solved by different algorithms such as DDSCA, HGA, MGWO-III, IAPSO, TEO, hHHO-SCA, HPSO, CPSO, and WCA. Table 21 summarizes the results of the above algorithms and compares them with the best result of TSO. The results show that TSO provides a lower-cost design than the other algorithms, generating the best solution at design variables of 0.205729, 3.470490, 9.036626, and 0.205729 with a minimum cost of 1.724854.
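The welded beam model can likewise be evaluated directly. The sketch below (illustrative names, standard formulation and constants) reproduces the reported minimum cost of about 1.7249 at TSO's design variables, with the constraints satisfied to within the rounding of the published digits:

```python
import math

def welded_beam(x):
    """Welded beam cost and constraints (standard formulation).

    x = [h, l, t, b]; feasible designs have all g_i <= 0.
    """
    h, l, t, b = x
    P, L, E, G = 6000.0, 14.0, 30e6, 12e6
    tau_max, sigma_max, delta_max = 13600.0, 30000.0, 0.25
    cost = 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)
    # Weld shear stress: primary plus torsional component.
    tau1 = P / (math.sqrt(2) * h * l)
    M = P * (L + l / 2.0)
    R = math.sqrt(l**2 / 4.0 + ((h + t) / 2.0) ** 2)
    J = 2.0 * (math.sqrt(2) * h * l * (l**2 / 12.0 + ((h + t) / 2.0) ** 2))
    tau2 = M * R / J
    tau = math.sqrt(tau1**2 + 2 * tau1 * tau2 * l / (2 * R) + tau2**2)
    sigma = 6 * P * L / (b * t**2)               # bending stress in the beam
    delta = 4 * P * L**3 / (E * t**3 * b)        # end deflection
    pc = (4.013 * E * math.sqrt(t**2 * b**6 / 36.0) / L**2
          * (1 - t / (2 * L) * math.sqrt(E / (4 * G))))  # buckling load
    g = [tau - tau_max, sigma - sigma_max, h - b,
         0.10471 * h**2 + 0.04811 * t * b * (14.0 + l) - 5.0,
         0.125 - h, delta - delta_max, P - pc]
    return cost, g
```

At the reported design, the shear stress, bending stress, and buckling constraints are all active, which is typical of the known optimum for this problem.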

5. Conclusions

This work presents a novel swarm-based metaheuristic algorithm: tuna swarm optimization. The algorithm is inspired by the cooperative foraging mechanisms of tuna, namely spiral foraging and parabolic foraging. The method has few adjustable parameters and is easy to implement. TSO was comprehensively evaluated on a set of benchmark functions in different dimensions and compared with other state-of-the-art algorithms. The results show that TSO is superior to the comparison algorithms. In addition, the pressure vessel design, tension/compression spring design, and welded beam design problems were investigated. The statistical results show that TSO has high potential for solving real-world optimization problems compared with the reported methods. A major factor in TSO's success is the balance of exploitation and exploration achieved through the two foraging strategies. Meanwhile, its simple iterative steps bring low time costs, which is another of TSO's strengths. However, while TSO performs excellently on most functions, there is still room for improvement on a small fraction of them. This could be achieved by further enhancing TSO's ability to escape local optima, using methods such as hybridization with other algorithms or adaptive parameters.

For future work, binary and multiobjective versions of TSO can be developed for discrete problems and multiobjective optimization problems. Moreover, TSO will be applied to solve UAV mission planning problems such as trajectory planning problems, target allocation problems, etc. A further interesting direction would be to investigate the performance of different constraint handling methods in solving constrained optimization problems.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

Andi Tang and Lei Xie made major contributions to this work, including the conception and the code.

Acknowledgments

The authors acknowledge funding from the National Natural Science Foundation of China (No. 62101590) and the Science Foundation of Shanxi Province, China (2020JQ-481 and 2021JM-224).