Abstract

In this paper, a new optimization algorithm called the search and rescue optimization algorithm (SAR) is proposed for solving single-objective continuous optimization problems. SAR is inspired by the explorations carried out by humans during search and rescue operations. The performance of SAR was evaluated on fifty-five optimization functions including a set of classic benchmark functions and a set of modern CEC 2013 benchmark functions from the literature. The obtained results were compared with twelve optimization algorithms including well-known optimization algorithms, recent variants of GA, DE, CMA-ES, and PSO, and recent metaheuristic algorithms. The Wilcoxon signed-rank test was used for some of the comparisons, and the convergence behavior of SAR was investigated. The statistical results indicated SAR is highly competitive with the compared algorithms. Also, in order to evaluate the application of SAR on real-world optimization problems, it was applied to three engineering design problems, and the results revealed that SAR is able to find more accurate solutions with fewer function evaluations in comparison with the other existing algorithms. Thus, the proposed algorithm can be considered an efficient optimization method for real-world optimization problems.

1. Introduction

In our world, there are many optimization problems for which different optimization algorithms are used. These algorithms can be classified into deterministic and stochastic optimization algorithms. Deterministic algorithms always produce the same output for a given input and are often used as local search algorithms. Unlike deterministic algorithms, stochastic algorithms have random components and may produce different outputs for the same input. Many metaheuristic algorithms implement some form of stochastic optimization [1]. In recent decades, many metaheuristic algorithms have been proposed to solve optimization problems. The genetic algorithm (GA) [2, 3], particle swarm optimization (PSO) [4, 5], and ant colony optimization (ACO) [6, 7] are among the most widely used metaheuristic algorithms. Some features of these algorithms include simple implementation, flexibility, and the capability of avoiding local optima. Most metaheuristic algorithms are inspired by physical or natural phenomena, e.g., the movement of animals searching for food sources. Consequently, these algorithms are easily understandable and reproducible as software programs for various optimization problems, and they are able to find optimal solutions regardless of the physical nature of the problem. Unlike many other optimization methods, metaheuristic algorithms can, owing to their random nature, find globally optimal solutions for problems that have many local optima. These reasons have led to the extensive use of such algorithms in solving various optimization problems.

In recent years, researchers have carried out extensive studies on metaheuristic algorithms such as harmony search (HS) [8, 9], artificial bee colony (ABC) [10, 11], cuckoo search (CS) [12], the imperialist competitive algorithm (ICA) [13], teaching-learning-based optimization (TLBO) [14], the backtracking search optimization algorithm (BSA) [15], the firefly algorithm (FA) [16], Yin-Yang-pair optimization (YYPO) [17], and the squirrel search algorithm (SSA) [18]. Besides, many metaheuristic algorithms have been enhanced to solve real-world optimization problems; for example, a decomposition-based multiobjective firefly algorithm was developed for RFID network planning [19], and a novel diffusion particle swarm optimization was proposed for optimizing sink placement [20]. Based on the "no free lunch" (NFL) theorem [21, 22], no optimization algorithm works well on all optimization problems: an algorithm may achieve very good results on one set of optimization problems while being unsuitable for others. Therefore, a given metaheuristic algorithm might be good for a series of optimization problems but not for others.

Metaheuristic algorithms can be categorized according to their nature into different groups such as evolution-based, swarm-based, physics-based, and human-based algorithms.
(i) Evolution-based algorithms are developed based on evolution techniques. The GA, biogeography-based optimizer (BBO) [23, 24], and differential evolution (DE) algorithm [25, 26] are examples of this group. For example, the genetic algorithm is inspired by evolution theory.
(ii) In nature, many living beings live socially and search for a variety of goals, such as hunting and finding food sources, as groups, using different search strategies [27]. Some metaheuristic algorithms solve optimization problems by modelling this social behavior of living organisms. These metaheuristics are called population-based swarm intelligence (SI) or swarm-based algorithms. Algorithms such as PSO, ABC, ACO, FA, CS, krill herd (KH) [28], simplified dolphin echolocation (SDE) [29], and the grey wolf optimizer (GWO) [30] belong to this group. For example, PSO is inspired by the movement of organisms in a bird flock or fish school searching for food sources.
(iii) Physics-based algorithms are inspired by physical phenomena. Algorithms like Big Bang-Big Crunch (BB-BC) [31], colliding bodies optimization (CBO) [32], the gravitational search algorithm (GSA) [33, 34], star graph [35], water wave optimization (WWO) [36], and ray optimization [37] belong to this group. For example, WWO is inspired by the refraction and breaking of water surface waves.
(iv) Human-based algorithms are based on human behavior, like tabu search (TS) [38, 39], human mental search (HMS) [40], ICA, and TLBO. For example, TS and TLBO are inspired by human memory and by the way humans learn and teach in a classroom, respectively.

In this paper, a new metaheuristic algorithm is introduced that is based on how humans search during search and rescue operations. Humans' search methods have evolved over thousands of years, yet no existing algorithm uses human behavior during this type of search to solve optimization problems. This encouraged the authors to propose a new metaheuristic based on these features; the algorithm is categorized as a human-based algorithm. This article is organized as follows: after the introduction in Section 1, Section 2 gives an introduction to search and rescue operations. Section 3 presents the proposed algorithm. In Section 4, the comparative tests and benchmark functions used to compare the algorithms are introduced. Section 5 presents the results and discussion, and finally, Section 6 presents the conclusions of this paper.

2. Search and Rescue Operations

Like other living creatures, human beings search for different purposes in groups. A search can be done for a variety of goals such as hunting, finding food sources, or finding lost people. One type of group search is the "search and rescue operation." Search is a systematic operation using available personnel and facilities to locate persons in distress, and rescue is an operation to retrieve persons in distress and deliver them to a safe place [41]. One of the world's earliest search and rescue efforts ensued following the 1656 wreck of the Dutch merchant ship Vergulde Draeck off the west coast of Australia [42].

Search and rescue operations are divided into several types such as mountain rescue, ground search and rescue, urban search and rescue, air-sea rescue, and combat search [43]. In the United States, institutions such as the American Society for Testing and Materials (ASTM) and the National Fire Protection Association (NFPA) provide codes for search and rescue operations [44]. Search and rescue operations are sometimes conducted to find specific people who are lost, and the codes also provide requirements for searches that increase the chances of a successful find. In the following, the procedure for finding lost people is described along with the main concepts of this operation. Trained humans can identify the clues and traces of lost people. The found clues have different values relative to each other and provide different information about the lost people; for example, some clues indicate the likelihood of the presence of lost people at that location. Each group member evaluates clues based on his/her training and communicates the information about found clues to the other members through communication equipment. Finally, the members search based on the importance of these clues and the information that can be obtained from them. Typically, group members search around the clues or along directions created by connecting the clues [45]. Therefore, human searches in search and rescue operations are divided into two phases: social and individual. In the social phase, the group members search based on the positions and quality of the found clues, in areas where they are likely to obtain better clues. In the individual phase, the search is done regardless of the positions and quality of the clues found by others. Clues can be divided into the following two categories:
(1) Hold clue: a member of the search group is present at the clue and searches around it.
(2) Abandoned clue: group members have found the clue, but no one is at that position. In other words, the human who found the clue has left it to find better clues, but the information about that clue is available to the group members.

In Figure 1, points A and B are the locations of a group member (human) and a clue, respectively. Path 1, Path 2, and Path 3 are three assumed paths that the lost person has likely passed through, and the arrows show the movement directions. In the social phase, the human at position A selects the search direction based on the position of clue B. Since searching around better clues increases the probability of finding the lost person, the area that has better clues along the direction AB is selected: if there are better clues in area 1 compared to area 2, area 1 is chosen; otherwise, area 2 is selected to continue the search. In the case depicted in Figure 1, both Path 1 and Path 2 pass through points A and B, so if the lost person has passed along Path 1 or Path 2, this simple strategy increases the chances of finding better clues in the social phase. In the individual phase, the human at point A searches around the best clue found so far; this search is done in an area such as area 3. If the lost person has passed along Path 3, the probability of finding him/her is higher in the individual phase than in the social phase. When a new location is searched in one of these two phases and it has better clues than the previous location (position A), that location becomes the new position of the group member.

3. A Search and Rescue Optimization Algorithm Proposal

In this section, the mathematical model of the proposed algorithm for solving a maximization problem is described. In SAR, the humans' positions correspond to the solutions of the optimization problem, and the quality of the clues found at these positions corresponds to the objective function values of these solutions. The flowchart of SAR is shown in Figure 2.

3.1. Clues

The group members gather clue information during the search. They leave clues behind whenever they find better clues at other positions, but the information about the left clues is still used to improve the search. In the proposed model, the positions of the left clues are stored in the memory matrix (matrix M), whereas the humans' positions are stored in the position matrix (matrix X). The dimensions of the matrix M are equal to those of the matrix X; both are N × D matrices, where D is the dimension of the problem and N is the number of humans. The clue matrix (matrix C) contains the positions of the found clues and is formed from the two matrices X and M. Equation (1) shows how C is created. All new solutions in the social and individual phases are created based on the clue matrix, so it is an important part of SAR. The matrices X, M, and C are updated in each human search phase:

C = [X; M] =
[ X_{11} ... X_{1D}
  ...
  X_{N1} ... X_{ND}
  M_{11} ... M_{1D}
  ...
  M_{N1} ... M_{ND} ],   (1)

where M and X are the memory and humans' position matrices, respectively, X_{N1} is the value of the 1st dimension for the Nth human, and M_{1D} is the value of the Dth dimension for the 1st memorized clue. The two phases of the human search, the "social phase" and the "individual phase," are modelled as follows.
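As a minimal illustration (not the authors' reference implementation), the clue matrix of equation (1) can be assembled by stacking the position and memory matrices; the NumPy arrays X and M below are hypothetical placeholders.

```python
import numpy as np

# Sketch of equation (1): the clue matrix C stacks the humans' positions X (N x D)
# on top of the memorized left-clue positions M (N x D), giving a 2N x D matrix.
N, D = 5, 3                              # illustrative sizes
rng = np.random.default_rng(0)
X = rng.uniform(-10.0, 10.0, (N, D))     # humans' position matrix
M = rng.uniform(-10.0, 10.0, (N, D))     # memory matrix of left clues

C = np.vstack((X, M))                    # rows 0..N-1 come from X, rows N..2N-1 from M
print(C.shape)                           # (10, 3)
```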

3.2. Social Phase

Considering the explanations given in the previous section and taking into account a random clue among the found clues, the search direction is obtained using the following equation:

SD_i = (X_i − C_k),   (2)

where X_i, C_k, and SD_i are the position of the ith human, the position of the kth clue, and the search direction of the ith human, respectively. k is a random integer ranging between 1 and 2N, chosen such that k ≠ i.

It is important to point out that humans normally search in such a way that all desired areas are covered and no location is searched repeatedly. Therefore, the search should be done in a manner that limits the movement of the group members toward each other. To this end, not all dimensions of X_i should be changed when moving along the direction given by equation (2). To apply this restriction, the binomial crossover operator has been used. Also, as explained in the previous section, if the considered clue is better than the clue related to the current position (the objective function value of solution B is greater than that of solution A in Figure 1), an area around the position of that clue along the direction SD_i is searched (area 1 in Figure 1); otherwise, the search continues around the current position along the direction SD_i (area 2 in Figure 1). Finally, the following equation is used for the social phase:

X'_{i,j} = C_{k,j} + r1 × (X_{i,j} − C_{k,j}),   if f(C_k) > f(X_i) and (r2 < SE or j = jrand),
X'_{i,j} = X_{i,j} + r1 × (X_{i,j} − C_{k,j}),   if f(C_k) ≤ f(X_i) and (r2 < SE or j = jrand),
X'_{i,j} = X_{i,j},                              otherwise,   (3)

where X'_{i,j} is the new value of the jth dimension for the ith human; C_{k,j} is the value of the jth dimension for the kth found clue; f(C_k) and f(X_i) are the objective function values of the solutions C_k and X_i, respectively; r1 is a random number with a uniform distribution in the range [−1, 1]; r2 is a uniformly distributed random number in the range [0, 1] that is redrawn for each dimension, whereas r1 is fixed for all dimensions; jrand is a random integer ranging between 1 and D which ensures that at least one dimension of X'_i differs from X_i; and SE is an algorithm parameter ranging between 0 and 1. Equation (3) is applied to every dimension to obtain the new position of the ith human.
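The sketch below illustrates how the social phase of equations (2) and (3) could be coded for a maximization problem, using 0-based indices; the helper name social_phase, its interface, and the repeated evaluation of f are our own assumptions based on the description above, not the authors' code.

```python
import numpy as np

def social_phase(X, C, f, i, SE, rng):
    """One social-phase move for human i (maximization). X: (N, D) positions,
    C: (2N, D) clue matrix, f: objective function, SE: social effect parameter."""
    N, D = X.shape
    k = rng.integers(0, 2 * N)                   # random clue index, k != i
    while k == i:
        k = rng.integers(0, 2 * N)
    SD = X[i] - C[k]                             # search direction, equation (2)
    r1 = rng.uniform(-1.0, 1.0)                  # fixed for all dimensions
    jrand = rng.integers(0, D)                   # guarantees at least one changed dimension
    base = C[k] if f(C[k]) > f(X[i]) else X[i]   # search around the better of clue k and X_i
    X_new = X[i].copy()
    for j in range(D):
        r2 = rng.uniform(0.0, 1.0)               # redrawn for every dimension
        if r2 < SE or j == jrand:
            X_new[j] = base[j] + r1 * SD[j]      # equation (3)
    return X_new
```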

3.3. Individual Phase

In the individual phase, humans search around their current positions, and the idea of connecting different clues used in the social phase is applied to this search as well. Contrary to the social phase, all dimensions of X_i change in the individual phase. The new position of the ith human is obtained by the following equation:

X'_i = X_i + r3 × (C_k − C_m),   (4)

where k and m are random integers ranging between 1 and 2N. To prevent the search direction from being defined by the human's own position, k and m are chosen in such a way that i ≠ k ≠ m. r3 is a random number with a uniform distribution ranging between 0 and 1.
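A corresponding sketch of the individual phase in equation (4) is given below; the helper name individual_phase and its interface are assumptions for illustration.

```python
import numpy as np

def individual_phase(X, C, i, rng):
    """One individual-phase move for human i: step from X[i] along the direction
    between two randomly chosen clues k and m with i != k != m (equation (4))."""
    N = X.shape[0]
    candidates = [t for t in range(2 * N) if t != i]
    k, m = rng.choice(candidates, size=2, replace=False)   # distinct and both != i
    r3 = rng.uniform(0.0, 1.0)                             # one factor for all dimensions
    return X[i] + r3 * (C[k] - C[m])                       # every dimension changes
```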

3.4. Boundary Control

In all metaheuristic algorithms, the solutions should remain inside the solution space, and if they leave the allowable space, they must be modified. Thus, if the new position of a human is outside the solution space, the following equation is used to modify it:

X'_{i,j} = (X_{i,j} + X_j^max) / 2,   if X'_{i,j} > X_j^max,
X'_{i,j} = (X_{i,j} + X_j^min) / 2,   if X'_{i,j} < X_j^min,   (5)

where X_j^max and X_j^min are the maximum and minimum threshold values for the jth dimension, respectively.
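The boundary rule of equation (5), as reconstructed above, could be implemented as follows; the helper name clip_to_bounds and the vectorized form (bound vectors x_min and x_max of length D) are assumptions.

```python
import numpy as np

def clip_to_bounds(x_new, x_old, x_min, x_max):
    """Repair violated dimensions by the midpoint between the previous position
    and the violated bound (equation (5))."""
    x_new = np.asarray(x_new, dtype=float).copy()
    over, under = x_new > x_max, x_new < x_min
    x_new[over] = (x_old[over] + x_max[over]) / 2.0     # upper bound violated
    x_new[under] = (x_old[under] + x_min[under]) / 2.0  # lower bound violated
    return x_new
```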

3.5. Updating Information and Positions

In each iteration, the group members search according to these two phases, and after each phase, if the objective function value of the new position X'_i is greater than that of the previous one, f(X_i), the previous position X_i is stored in a random position of the memory matrix M using equation (6) and the new position is accepted using equation (7). Otherwise, the new position is discarded and the memory is not updated:

M_n = X_i,    if f(X'_i) > f(X_i),   (6)
X_i = X'_i,   if f(X'_i) > f(X_i),   (7)

where M_n is the position of the nth stored clue in the memory matrix and n is a random integer ranging between 1 and N. Using this type of memory updating increases the diversity of the algorithm and its ability to find the global optimum.
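A sketch of the acceptance and memory update of equations (6) and (7) for a maximization problem is shown below; the function accept_and_memorize and the stored fitness arrays fit_X and fit_M are illustrative assumptions.

```python
import numpy as np

def accept_and_memorize(X, M, fit_X, fit_M, x_new, f_new, i, rng):
    """If the new position improves on X[i], store the old position (and its fitness)
    in a random memory row (equation (6)) and accept the new position (equation (7))."""
    if f_new > fit_X[i]:
        n = rng.integers(0, M.shape[0])            # random memory slot
        M[n], fit_M[n] = X[i].copy(), fit_X[i]
        X[i], fit_X[i] = x_new, f_new
        return True                                # improvement found
    return False                                   # memory unchanged, position kept
```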

3.6. Abandoning Clues

In search and rescue operations, time is a very important factor because the lost people may be injured, and any delay of the search and rescue teams may result in their deaths. Therefore, these operations must be done in such a way that the largest space is searched in the shortest possible time. Thus, if a human cannot find better clues after a certain number of searches around his/her current position, he/she leaves the current position and goes to a new one. To model this behavior, the unsuccessful search number (USN) is initially set to 0 for each human. Whenever a human finds a better clue in the first or second phase of the search, the USN is reset to 0 for that human; otherwise, it is increased by 1, as presented in the following equation:

USN_i = 0,           if the ith human finds a better clue,
USN_i = USN_i + 1,   otherwise,   (8)

where USN_i indicates the number of times the ith human has not been able to find better clues. When the USN of a human becomes greater than the maximum unsuccessful search number (MU), he/she goes to a random position in the search space using equation (9), and USN_i is reset to 0 for that human:

X_{i,j} = X_j^min + r4 × (X_j^max − X_j^min),   j = 1, ..., D,   (9)

where r4 is a random number with a uniform distribution ranging between 0 and 1, drawn independently for each dimension.
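The restart rule of equations (8) and (9) could look like the sketch below; update_usn_and_restart, the USN array, and the improved flag are hypothetical names used only for illustration.

```python
import numpy as np

def update_usn_and_restart(X, USN, i, improved, MU, x_min, x_max, rng):
    """Count unsuccessful searches (equation (8)) and relocate human i to a
    uniformly random point once the counter exceeds MU (equation (9))."""
    USN[i] = 0 if improved else USN[i] + 1
    if USN[i] > MU:
        r4 = rng.uniform(0.0, 1.0, size=X.shape[1])   # drawn independently per dimension
        X[i] = x_min + r4 * (x_max - x_min)           # random restart in the search space
        USN[i] = 0
```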

3.7. Control Parameters of SAR

SAR has two control parameters: SE (social effect) and MU (maximum unsuccessful search number). The SE is used to control the effect of group members on each other in the social phase. This parameter ranges within [0, 1]. Greater values of the SE increase the convergence rate but decrease the global search ability of the algorithm. The MU parameter indicates the maximum number of unsuccessful searches before leaving a clue. It ranges within [0, 2 × Tmax], where 2 × Tmax is the maximum number of searches done by each human and Tmax is the maximum number of iterations. For the largest values of the MU, humans never leave the clues. On the one hand, small values of this parameter cause a group member to stop searching around the current clue and move to other locations before he/she has searched around it completely. On the other hand, large values of this parameter increase the number of searches around one clue and reduce the chance of searching other regions. The MU is directly related to the dimension of the problem: as the search space grows, the maximum number of unsuccessful searches should be increased as well.

For all the following tests, the value of the SE was set to 0.05 and the value of the MU was obtained by equation (10):

MU = 70 × D.   (10)

The analysis of the SAR parameters presented in Section 5.6 has shown that these values of the SE and MU are suitable for solving single-objective continuous optimization problems.

3.8. Pseudocode of Search and Rescue Optimization Algorithm (SAR)

The pseudocode of this algorithm is presented in Algorithm 1 for solving a maximization problem. Position sorting is performed only once before the iterations begin.

(1)Begin:
(2) Randomly initialize a population of 2N solutions uniformly distributed in the range [X_j^min, X_j^max], j = 1, …, D
(3) Sort the solutions in the decreasing order and find the best position (Xbest)
(4) Use the first half of the sorted solutions for human position matrix (X) and the others for memory matrix (M)
(5) Define the algorithm parameters (SE, MU) and set USNi = 0 where i = 1, …, N
(6)While stop criterion is not satisfied do
(7)  For i = 1 to N do
Social phase
(8)   Select a random clue C_k from the clue matrix C = [X; M] with k ≠ i
(9)   Compute the search direction SD_i = (X_i − C_k) using equation (2)
(10)   Generate r1 uniformly in [−1, 1]
(11)   Select a random integer jrand in [1, D]
(12)   For j = 1 to D do
(13)     Generate r2 uniformly in [0, 1]
(14)     Compute the new position X'_{i,j} using equation (3)
(15)   End For
(16)   Apply the boundary control of equation (5) to X'_i and evaluate f(X'_i)
(17)   If f(X'_i) > f(X_i) then update the memory M and the position X_i using equations (6) and (7) and set USN_i = 0
(18)   Else set USN_i = USN_i + 1 according to equation (8)
Individual phase
(19)   Select random integers k and m in [1, 2N] with i ≠ k ≠ m
(20)   Generate r3 uniformly in [0, 1]
(21)   For j = 1 to D do
(22)     Compute the new position X'_{i,j} = X_{i,j} + r3 × (C_{k,j} − C_{m,j}) using equation (4)
(23)   End For
(24)   Apply the boundary control of equation (5) to X'_i and evaluate f(X'_i)
(25)   If f(X'_i) > f(X_i) then update the memory M and the position X_i using equations (6) and (7) and set USN_i = 0
(26)   Else set USN_i = USN_i + 1 according to equation (8)
(27)   If USN_i > MU do
(28)    For j = 1 to D do
(29)     X_{i,j} = X_j^min + r4 × (X_j^max − X_j^min) (equation (9))
(30)    End for
(31)    Set USN_i = 0
(32)   End If
(33)  End for
(34)  Find the current best position and update Xbest
(35)End while
(36) Return Xbest
(37)End
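To show how the pieces of Algorithm 1 fit together, a compact Python sketch of the whole loop is given below. It is a reading aid based on the reconstruction above, not the authors' MATLAB implementation; the parameter defaults, helper names, and NFE-based stopping rule are assumptions.

```python
import numpy as np

def sar(f, x_min, x_max, N=10, SE=0.05, MU=None, max_nfe=20000, seed=0):
    """Minimal sketch of Algorithm 1 for a maximization problem. f is the objective
    to maximize; x_min and x_max are bound vectors of length D."""
    rng = np.random.default_rng(seed)
    x_min, x_max = np.asarray(x_min, dtype=float), np.asarray(x_max, dtype=float)
    D = x_min.size
    MU = 70 * D if MU is None else MU                      # equation (10)

    # steps (2)-(5): 2N random solutions; the better half becomes X, the rest the memory M
    P = x_min + rng.uniform(0, 1, (2 * N, D)) * (x_max - x_min)
    fit = np.array([f(p) for p in P]); nfe = 2 * N
    order = np.argsort(-fit)                               # decreasing order (maximization)
    X, fit_X = P[order[:N]].copy(), fit[order[:N]].copy()
    M, fit_M = P[order[N:]].copy(), fit[order[N:]].copy()
    USN = np.zeros(N, dtype=int)

    def clue(k):                                           # row k of the clue matrix C = [X; M]
        return (X[k], fit_X[k]) if k < N else (M[k - N], fit_M[k - N])

    def bounded(x_new, x_old):                             # equation (5)
        x_new = np.where(x_new > x_max, (x_old + x_max) / 2, x_new)
        return np.where(x_new < x_min, (x_old + x_min) / 2, x_new)

    while nfe < max_nfe:
        for i in range(N):
            for phase in ("social", "individual"):
                if phase == "social":                      # equations (2) and (3)
                    k = rng.choice([t for t in range(2 * N) if t != i])
                    Ck, fCk = clue(k)
                    SD = X[i] - Ck
                    base = Ck if fCk > fit_X[i] else X[i]
                    r1 = rng.uniform(-1, 1)
                    mask = rng.uniform(0, 1, D) < SE
                    mask[rng.integers(0, D)] = True        # jrand: at least one dimension changes
                    X_new = np.where(mask, base + r1 * SD, X[i])
                else:                                      # equation (4)
                    k, m = rng.choice([t for t in range(2 * N) if t != i], 2, replace=False)
                    X_new = X[i] + rng.uniform(0, 1) * (clue(k)[0] - clue(m)[0])
                X_new = bounded(X_new, X[i])
                f_new = f(X_new); nfe += 1
                if f_new > fit_X[i]:                       # equations (6) and (7)
                    n = rng.integers(0, N)
                    M[n], fit_M[n] = X[i].copy(), fit_X[i]
                    X[i], fit_X[i] = X_new, f_new
                    USN[i] = 0
                else:                                      # equation (8)
                    USN[i] += 1
                if USN[i] > MU:                            # equation (9)
                    X[i] = x_min + rng.uniform(0, 1, D) * (x_max - x_min)
                    fit_X[i] = f(X[i]); nfe += 1
                    USN[i] = 0
    best = int(np.argmax(fit_X))
    return X[best], fit_X[best]

# usage sketch: maximize the negative sphere function in 5 dimensions
if __name__ == "__main__":
    sol, val = sar(lambda x: -float(np.sum(x ** 2)), [-100] * 5, [100] * 5, max_nfe=50000)
    print(val)
```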
3.9. Conceptual Comparison of SAR with Other Metaheuristic Algorithms

In each iteration of SAR, two new solutions are generated in two phases for each search agent (particle), and the searching concepts of these phases are different. In the social phase, new solutions are generated based on the locations and objective function values of other solutions, and each particle can move toward the other particles. In the individual phase, new solutions are generated around the current solutions, and a particle does not move toward the other particles. In this algorithm, a new solution is accepted only if it is better than the current solution, and the replaced solution is stored in an archive (the memory), which is then used to generate new solutions.

SAR, like PSO, ABC, DE, GSA, and TLBO, is a population-based optimization algorithm. The similarities and differences between SAR and these algorithms are explained as follows.

3.9.1. SAR versus PSO

Similarity. Both algorithms utilize social and individual information as well as memory to generate new solutions.

Difference. In PSO, a combination of the global best position (social information) and the personal best position (individual information) is used to generate a new solution, whereas SAR uses social and individual information separately to generate two new solutions. Besides, the global best solution is not used by SAR in these phases. PSO accepts all new solutions, but SAR accepts only new solutions that are better than the current ones. Unlike PSO, SAR abandons unimproved solutions. Also, SAR considers objective function values when generating new solutions in the social phase. Finally, the memory update mechanisms of the two methods are different.

3.9.2. SAR versus ABC

Similarity. Both algorithms consider objective function values to produce a new solution. They accept a new solution only if it is better than the current solution, and both abandon unimproved solutions.

Difference. ABC does not have any kind of memory. It selects solutions by the roulette wheel mechanism and generates new solutions by changing only one dimension of the selected solutions, whereas SAR selects clues randomly and uses them to generate new solutions by changing some or all dimensions of the current solutions. Thus, they use different strategies to generate new solutions, and SAR produces two new solutions for each agent in each iteration.

3.9.3. SAR versus DE

Similarity. Both accept only new solutions that are better than the current solutions. The crossover mechanism used in the social phase of SAR is similar to the crossover mechanism of DE.

Difference. DE does not consider previous solutions when generating new ones; it is a memoryless algorithm. Although both algorithms use the same crossover mechanism, the equations applied to generate new solutions are different. DE does not abandon unimproved solutions and does not consider objective function values when producing new solutions. Also, DE produces only one new solution for each agent in each iteration.

3.9.4. SAR versus TLBO

Similarity. Both include two searching phases and consider objective function values to generate new solutions. They accept only new solutions that are better than the current solutions.

Difference. Previous solutions are not used by TLBO; it is a memoryless algorithm. Unlike TLBO, SAR abandons unimproved solutions after a certain number of unsuccessful objective function evaluations. Moreover, SAR and TLBO obtain new solutions using entirely different strategies.

Furthermore, in comparison with the above algorithms, the boundary control strategy of SAR is different.

3.10. Computational Complexity

In this section, the computational complexity of SAR is discussed. The population initialization process requires O(2 × n × d) time, where n and d denote the number of humans and the dimension of the problem, respectively. The proposed algorithm requires O(2n log(2n)) time to sort the population in the initialization phase. The complexity of the social and individual phases is O(n × d) in the worst case. The complexity of the clue-abandoning process is O(n) in the best case and O(n × d) in the worst case. Thus, the computational complexity of SAR is as follows:

O(SAR) = O(2 × n × d + 2n log(2n)) + O(Maxit × (n × d + n × d + n × d)),   (11)

where Maxit is the maximum number of iterations. According to the above discussion, the total computational complexity of SAR is O(Maxit × n × d).

4. Numerical Results

To evaluate the performance of SAR, four tests were considered. Classic benchmark functions were used in tests 1 and 2, modern benchmark functions were used in test 3, and some structural engineering design problems were used in test 4. All runs were executed on a 64-bit computer with 32 GB of RAM and an Intel i7 (3.4 GHz) CPU running Windows 10.

4.1. Test 1

As discussed in the Introduction, metaheuristic algorithms can be divided into four groups. One well-known algorithm was selected from each group to compare with the proposed algorithm: artificial bee colony (ABC), the gravitational search algorithm (GSA), differential evolution (DE), and teaching-learning-based optimization (TLBO) were considered as the swarm-based, physics-based, evolution-based, and human-based algorithms, respectively. The population sizes and control parameters of these algorithms are given in Table 1, as suggested in [46], [33], [25], and [47]. TLBO has no control parameter.

In the first test, the performance of SAR was compared with that of ABC, GSA, DE, and TLBO on 27 benchmark functions. These functions, used by various researchers [16, 48–50], are presented in Table 2. In this table, type, D, range, and fmin represent the type of the benchmark function, the dimension of the problem, the range of the variables, and the optimal value of the function, respectively. Also, X_j^max and X_j^min are the maximum and minimum threshold values of dimension j, respectively. In the type column, U, M, S, and N refer to unimodal, multimodal, separable, and nonseparable functions, respectively. Thirteen benchmark functions are unimodal, while 14 are multimodal; moreover, there are 11 separable functions and 16 nonseparable functions. Since the locations of the global minima of some of these classic functions are symmetrical, some algorithms may exploit this feature and show an unrealistically good performance. Therefore, the symmetrical global minima are shifted using the function T defined in the last row of Table 2, which generates nonsymmetrical numbers.

In test 1, the maximum number of function evaluations (NFE) was set to 4 × 10^3 × D for all algorithms, where D is the dimension of the benchmark function as specified in Table 2. All algorithms were independently executed 51 times. The algorithms were stopped when the number of objective function evaluations exceeded the NFE (4 × 10^3 × D) or when the error (distance between the objective function value of the best found solution and that of the global optimum solution) fell below 10^−8.

4.2. Test 2

The algorithms and benchmark functions considered for this test are the same as those in the first test. The difference between the first and second tests lies in the NFE. For all of these benchmark functions (except f13), the NFE is equal to 2 × 10^4 × D, which is five times greater than the value used in the first test; for f13, the NFE is set to 3 × 10^4 × D. The purpose of the second test is to examine the ability of the algorithms to find the global minima; therefore, a high value of the NFE is used.

As in the first test, all the algorithms were independently run 51 times and were stopped when the error value (distance between the objective function value of the best found solution and that of the global optimum solution) was less than 10^−8 or when the number of function evaluations reached its maximum. The population sizes and parameters of the algorithms were set the same as in test 1.

4.3. Test 3

In this test, the 28 benchmark functions of the CEC 2013 Competition on Single-Objective Real-Parameter Numerical Optimization are used to compare SAR with state-of-the-art optimization algorithms. All of these functions are minimization problems; their details can be found in [51]. These functions cover various types of optimization problems and are divided into three classes: unimodal (C1–C5), basic multimodal (C6–C20), and composition (C21–C28) benchmark functions. The composition functions are created by combining different basic functions; consequently, they are multimodal, nonseparable, and asymmetrical. The algorithm was independently run 51 times and stopped when the number of function evaluations reached 100,000 or when the error value from the global optimum was less than 10^−8. The control parameters of SAR were the same as in the previous two tests. The problem dimension was 10, and the variables ranged within [−100, 100]. To verify the performance of SAR on these problems, it was compared with the following nine optimization algorithms, all of which have been verified on the CEC 2013 benchmark functions:
(i) Artificial bee colony (ABC)
(ii) A CMA-ES super-fit scheme for the resampled inheritance search (CMA-RIS) [52]
(iii) Adaptive monogamous pairs genetic algorithm (AMopGA) [53]
(iv) Grey wolf optimizer (GWO)
(v) Yin-Yang-pair optimization (YYPO) [17]
(vi) Reflected adaptive differential evolution with two external archives (RJADE) [54]
(vii) Self-adaptive differential evolution (SaDE) [55]
(viii) Self-adaptive heterogeneous PSO (fk-PSO) [56]
(ix) Standard Particle Swarm Optimisation 2011 (SPSO) [57]

These algorithms include a CMA-ES variant (CMA-RIS), a GA variant (AMopGA), two DE variants (SaDE and RJADE), two PSO variants (fk-PSO and SPSO), two recent metaheuristic algorithms (GWO and YYPO), and ABC. Different studies show that these variants perform better than their basic versions. The results of ABC, GWO, and YYPO are reported in [17]; for the other algorithms, the results reported in the original studies were used.

4.4. Test 4

In order to evaluate the performance of SAR for solving real-world engineering optimization problems, three engineering design problems were utilized. The penalty function approach was employed to handle the constraints of these problems. For all the engineering optimization problems, the control parameters of SAR were set the same as those in the previous tests and the population size of SAR was equal to 10. SAR was independently run 50 times. These engineering problems are introduced as follows.
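The paper states that a penalty function approach was employed but does not spell out its form; the sketch below shows one common static-penalty wrapper (the coefficient 1e6 and the helper name penalized are assumptions) that turns a constrained minimization problem into the unconstrained form that an algorithm such as SAR can handle (after negating the objective for maximization).

```python
def penalized(obj, constraints, penalty=1e6):
    """Wrap a minimization objective with a static penalty for constraints of the
    form g(x) <= 0. This is an illustrative assumption, not the paper's exact scheme."""
    def wrapped(x):
        violation = sum(max(0.0, g(x)) for g in constraints)   # total constraint violation
        return obj(x) + penalty * violation
    return wrapped
```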

4.4.1. I-Beam Design

The I-beam design problem was first proposed by Gold and Krishnamurty [58]. The objective is to minimize the vertical deflection of an I-beam. The vertical deflection, defined by equation (12), depends on the design load (P), the length of the beam (L), and the modulus of elasticity (E):

f = P L^3 / (48 E I),   (12)

where I is the moment of inertia of the cross section. P, L, and E are 600 kN, 200 cm, and 20000 kN/cm^2, respectively. This problem has four design variables (the flange width b, section height h, web thickness t_w, and flange thickness t_f) and two constraints: the cross-sectional area (constraint g1) and the bending stress of the beam (constraint g2). The maximum cross-sectional area is 300 cm^2, and the allowable bending stress of the beam is 56 kN/cm^2. The mathematical model of the problem is expressed as follows:

minimize f(b, h, t_w, t_f) = 5000 / (t_w (h − 2 t_f)^3 / 12 + b t_f^3 / 6 + 2 b t_f ((h − t_f) / 2)^2),
subject to g1: cross-sectional area ≤ 300 cm^2 and g2: bending stress ≤ 56 kN/cm^2,   (13)

with bounds 10 ≤ h ≤ 80, 10 ≤ b ≤ 50, 0.9 ≤ t_w ≤ 5, and 0.9 ≤ t_f ≤ 5.
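A hedged sketch of how the I-beam objective and area constraint could be coded, following the formulation reconstructed above; the variable ordering x = (b, h, t_w, t_f) is an assumption, and the bending-stress expression is omitted because its exact form is not reproduced here.

```python
def ibeam_deflection(x):
    """Vertical deflection f = 5000 / I for the I-beam problem; x = (b, h, tw, tf) in cm."""
    b, h, tw, tf = x
    inertia = tw * (h - 2 * tf) ** 3 / 12 + b * tf ** 3 / 6 \
        + 2 * b * tf * ((h - tf) / 2) ** 2
    return 5000.0 / inertia

def ibeam_area_violation(x):
    """g1 in g(x) <= 0 form: cross-sectional area (two flanges plus web) limited to 300 cm^2;
    this geometric expression is our assumption of how the constraint is written."""
    b, h, tw, tf = x
    return 2 * b * tf + tw * (h - 2 * tf) - 300.0

# example evaluation at a hypothetical design point
x = (50.0, 80.0, 0.9, 2.3)
print(ibeam_deflection(x), ibeam_area_violation(x))
```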

4.4.2. Cantilever Beam Design

The cantilever beam design problem was first presented by Fleury and Braibant [59]. The beam consists of five hollow square cross sections. The thickness of these sections is fixed, and their heights are the design variables. A vertical force is applied at the free end of the beam, while the other end is rigidly supported.

The cantilever beam design problem has five design variables and one constraint. The goal is to minimize the weight of the beam. This problem can be stated as follows:

minimize f(x) = 0.0624 (x1 + x2 + x3 + x4 + x5),
subject to g(x) = 61/x1^3 + 37/x2^3 + 19/x3^3 + 7/x4^3 + 1/x5^3 − 1 ≤ 0,   (14)

with bounds 0.01 ≤ x_i ≤ 100, i = 1, ..., 5.
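A sketch of the cantilever beam formulation given above, written in g(x) <= 0 form so that it fits the penalty wrapper sketched earlier; the function names are assumptions.

```python
def cantilever_weight(x):
    """Weight objective of the cantilever beam problem, x = (x1, ..., x5)."""
    return 0.0624 * sum(x)

def cantilever_violation(x):
    """Single constraint in g(x) <= 0 form."""
    x1, x2, x3, x4, x5 = x
    return 61 / x1**3 + 37 / x2**3 + 19 / x3**3 + 7 / x4**3 + 1 / x5**3 - 1.0

# example evaluation at a hypothetical design point
x = (6.0, 5.3, 4.5, 3.5, 2.15)
print(cantilever_weight(x), cantilever_violation(x))
```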

4.4.3. Spatial 25-Bar Truss Structure Design

The spatial 25-bar truss design was widely used in structural design optimization. Many optimization methods were applied to solve this well-known optimization problem. The elastic modulus and the material density of all members are 10^4 ksi and 0.1 lb/in^3, respectively.

The minimum and maximum cross-sectional areas of them are 0.01 in^2 and 3.4 in^2, respectively. This truss is subjected to the two loading conditions presented in Table 3.

Because of the symmetry of the structure, the 25 members of the truss are divided into 8 groups, as follows: (1) A1, (2) A2–A5, (3) A6–A9, (4) A10-A11, (5) A12-A13, (6) A14–A17, (7) A18–A21, and (8) A22–A25. The displacements of the nodes in both directions are limited to ± 0.35 in, and the allowable stress for each group is shown in Table 4.

5. Results and Discussion

5.1. Performance of SAR on Classic Benchmark Functions

The means and variances of the errors (distance between the best objective function value found and the optimal value of the function) obtained by the algorithms in the first test are shown in Table 5. Errors less than 10^−8 are considered 0.

The Wilcoxon signed-rank test was used for pairwise comparisons, and the results of the 51 runs of SAR were compared with those of the other algorithms [60]. In the Wilcoxon signed-rank test, the superiority of one of the two algorithms is assessed using a hypothesis test. Two hypotheses are defined: the null hypothesis (H0) and the alternative hypothesis (H1). The null hypothesis states that there is no difference between the two algorithms, whereas the alternative hypothesis states that there is a difference. To determine whether one algorithm is superior to the other, the p value is used: the smaller the p value, the more likely the two algorithms are different (alternative hypothesis). The significance level α is used to decide between the hypotheses; in this paper, α is equal to 0.05. If the p value is less than 0.05, the two algorithms are statistically different at the 95% confidence level. The Wilcoxon signed-rank test results for the first test are shown in Table 6. When h is 0, there is no difference between the two algorithms; +1 for h indicates a significant superiority of the first algorithm over the second one, and −1 for h implies a significant superiority of the second algorithm over the first one.
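As a concrete illustration (not the authors' script), the pairwise comparison described above can be reproduced with SciPy's Wilcoxon signed-rank test; the error vectors err_sar and err_other below are hypothetical placeholders for the 51 run results of two algorithms on one function, and the sign convention for h is our own reading of the table description.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical final-error vectors of two algorithms over 51 independent runs on one function.
rng = np.random.default_rng(1)
err_sar = np.abs(rng.normal(0.0, 1e-6, 51))
err_other = np.abs(rng.normal(0.0, 1e-3, 51))

stat, p_value = wilcoxon(err_sar, err_other)   # two-sided test by default
alpha = 0.05
if p_value >= alpha:
    h = 0                                       # no significant difference
else:
    # +1: first algorithm (SAR) significantly better (smaller errors); -1: otherwise
    h = 1 if np.median(err_sar - err_other) < 0 else -1
print(p_value, h)
```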

According to the data in Table 5, SAR and ABC found the solution of all five unimodal separable (US) benchmark functions in all runs, whereas TLBO, DE, and GSA failed to find the global minimum of two, two, and one of these functions, respectively. DE and SAR showed almost the same performance on the 8 UN functions and found the global minimum in all runs except for three functions (f8, f10, and f13), on which SAR performed better than DE. TLBO performed close to these two algorithms but additionally failed to find the minimum of f11. Both ABC and GSA failed to find the minimum of any UN function and had a lower convergence rate on these functions than the other algorithms. SAR had the best performance on the MS functions and found the global minima of four of the six functions; like the other algorithms, it failed to find the global minima of f18 and f19. Among the other four algorithms, ABC had the lowest mean error. On the 8 MN functions, SAR performed better than the other algorithms; it failed to find the global minimum only for f22, but its errors were smaller than those of the others.

In Table 6, SAR is compared with the other algorithms using the two-sided Wilcoxon signed-rank test. In the last row of this table, the superiority of the first algorithm (SAR) over the second one is indicated by the sign (+), a tie is indicated by the sign (=), and the superiority of the second algorithm over the first one is indicated by the sign (−). According to the data in Table 6, SAR outperformed ABC in 13 functions, they had similar performance in 13 functions, and ABC outperformed SAR only in f13. TLBO and GSA were not better than SAR in any function. DE outperformed SAR in f8, while SAR outperformed DE in 9 functions, and they had similar performance in 17 functions. Accordingly, SAR outperformed ABC, DE, GSA, and TLBO on the classic benchmark functions.

5.2. Global Search Ability

In the second test, the ability of the algorithms to find the global minimum has been investigated, and for this purpose, the maximum number of function evaluations (NFE) was increased. In Table 7, the success percentages of the algorithms in finding the global minimum for 27 benchmark functions along with the average percentages are shown for 4 benchmark function types (unimodal, multimodal, separable, and nonseparable functions).

The smallest NFE among the 5 algorithms is highlighted in bold for each function. ABC was not able to find the minimum value of the UN functions in any run except for f11, which indicates the slower convergence rate of this algorithm on these types of problems. Moreover, as Table 7 shows, ABC had the lowest success rate for such problems among all the algorithms, while it was the second best after SAR on the multimodal functions and also performed very well on the separable problems. DE had the best performance on the nonseparable functions after SAR. Regarding the average success percentage in finding the global minimum, SAR performed better than ABC, DE, GSA, and TLBO. SAR was able to find the global minimum of the multimodal functions with an average success rate of 100%, indicating its high global search ability and its capability of avoiding local minima.

5.3. Convergence Rate Analysis

In Table 7, in addition to the success percentages, the average numbers of function evaluations to reach stopping conditions are presented for the compared algorithms. Based on this table, SAR had the fastest convergence rate for 13 functions among all studied algorithms. In Figure 3, the convergence curves of SAR for some of the benchmark functions are shown. These functions include unimodal and separable (f2, f3, and f4), unimodal and nonseparable (f6, f7, and f10), multimodal and separable (f15, f17, and f19), and multimodal and nonseparable (f20, f22, and f25) functions. These curves were obtained by averaging 51 independent runs for all the algorithms.

Also, the Wilcoxon signed-rank test was used to check the convergence rate of the algorithms more precisely. In this test, the numbers of function evaluations required by SAR in 51 runs were compared with those of the other algorithms using the two-sided Wilcoxon signed-rank test (α = 0.05), and the results are presented in Table 8. The comparison procedure is similar to that in Table 6; +1 for h indicates that the first algorithm has a faster convergence rate than the second one.

According to the data in Table 8, SAR had a faster convergence rate than ABC in 26 functions, while ABC was faster than SAR only in f1; therefore, it can be concluded that SAR converges faster than ABC. Also, the convergence rate of SAR was faster than that of GSA in all 27 functions. The convergence rate of SAR was faster than that of DE in 10 functions, slower in 13 functions, and equal in 4 functions: on the unimodal functions, DE converged faster than SAR, but SAR converged faster on the multimodal functions. In comparison with TLBO, the convergence rate of SAR was faster, equal, and slower in 15, 4, and 11 functions, respectively.

According to the data in Table 8 and Figure 3, the convergence rate of SAR was significantly higher than that of GSA and ABC. Also, SAR had a faster convergence rate than TLBO in more than half of the functions. The convergence rate of DE was almost equal to that of SAR, but SAR converged faster than DE on the multimodal functions.

5.4. Performance of SAR on CEC 2013 Benchmark Function Set

The means of the errors obtained by the ABC, CMA-RIS, AMopGA, GWO, YYPO, RJADE, SaDE, fk-PSO, SPSO, and SAR algorithms in the third test are reported in Tables 9–11. Errors less than 10^−8 are considered 0. Moreover, by comparing these 10 algorithms, the rank of each algorithm is determined for each function and is provided in these tables. The lowest rank (1) is assigned to the algorithm with the lowest mean error among the compared algorithms. The best mean error is highlighted in bold in these tables.

In Table 9, the results obtained by the algorithms for the unimodal functions are shown. These functions are well suited for comparing the exploitation ability of the algorithms. As Table 9 shows, SAR and CMA-RIS found the global minimum in all runs for 4 of these 5 functions and are better than the other algorithms. The mean error of CMA-RIS is lower than that of SAR on the C3 function; therefore, CMA-RIS is the best algorithm for the unimodal functions. The mean ranks and the overall ranks are shown in Table 12. Regarding the data in this table, after CMA-RIS, SAR is the second best of the 10 compared algorithms on the CEC 2013 unimodal functions. The DE variants rank third, and there is a large difference between their mean rank (2.8) and those of SAR and CMA-RIS (mean ranks 1.2 and 1.4, respectively). Hence, SAR is efficient for unimodal functions and has a high exploitation capability.

Table 10 presents the results of the algorithms for the basic multimodal functions. These functions are well suited for comparing the exploration ability of the algorithms and their ability to find the global optimum. SaDE and AMopGA found the best results for 4 of the 15 basic multimodal functions. According to the data in Table 12, SaDE was the best algorithm with a mean rank of 3.4, and SAR was ranked second with a mean rank of 3.53; the performance of SaDE is only slightly better than that of SAR. After them, fk-PSO, with a mean rank of 3.93, was the third best of the 10 compared algorithms on the basic multimodal functions. These results confirm the high exploration ability of SAR in comparison with the other algorithms.

The results obtained by the 10 compared algorithms on the 8 composition functions are shown in Table 11. Among all the compared algorithms, SAR obtained the best results for the C22 and C26 functions and ABC for the C21, C25, and C28 functions. From Table 12, it can be seen that SAR, with a mean rank of 3.25, had the best performance on the composition functions. After SAR, ABC was the second best algorithm with a mean rank of 3.88, and CMA-RIS was next with a mean rank of 4. These results show that SAR outperformed the compared algorithms on these kinds of optimization problems.

In Table 12, the mean and overall ranks of the compared algorithms on the unimodal, basic multimodal, and composition benchmark functions of CEC 2013 are reported, together with their mean and overall ranks over all the benchmark functions. It can be seen that the mean rank of SAR is the lowest, making it the best algorithm among all the studied algorithms on the CEC 2013 benchmark function set. After SAR, SaDE performs better than the others, slightly outperforming CMA-RIS, and RJADE is the fourth best algorithm in this test. The performance of GWO is the worst for all kinds of benchmark functions, and it is ranked 10th among the 10 studied algorithms.

In Tables 9–12, the algorithms were compared and ranked in the group. In Table 13, the mean of errors obtained by each of these 9 algorithms was compared with that of SAR.

According to the data in Table 13, SAR was significantly better than ABC, GWO, YYPO, AMopGA, RJADE, and SPSO. After SAR, SaDE, CMA-RIS, and RJADE achieved the best performance, respectively. SaDE and CMA-RIS each had a lower mean error than SAR in 9 functions, while SAR had a lower mean error than them in 15 functions, and the mean errors were equal in 4 functions.

5.5. Application of SAR on Engineering Optimization Problems

SAR was applied to three engineering design problems, and the obtained results were compared with those of other algorithms documented in the literature. These problems were defined in the previous section. SAR and all the compared algorithms satisfied the constraints of these problems. In the following tables, "Std.," "NFE," and "NA" denote standard deviation, number of function evaluations, and not available, respectively, and the best results are highlighted in bold.

5.5.1. I-Beam Design

This problem has also been solved by several other methods, including cuckoo search (CS) [61], the adaptive response surface method (ARSM) [62], improved ARSM (IARSM) [62], and symbiotic organisms search (SOS) [63]. The results obtained by SAR and these algorithms are presented in Table 14.

The optimal designs found by SAR and SOS are the same and better than those by the others. The average and standard deviation of SAR are slightly better than those of SOS. The convergence curve of SAR for the I-beam design problem is shown in Figure 4.

5.5.2. Cantilever Beam Design

Table 15 compares the optimal designs found by SAR and the other algorithms, including the method of moving asymptotes (MMA) [64], generalized convex approximation I and II (GCA(I) and GCA(II)) [64], symbiotic organisms search (SOS) [63], cuckoo search (CS) [61], the flower pollination algorithm (FPA) [65], and the modified firefly algorithm (MFA) [66]. According to the data in this table, the lowest cantilever beam weight was found by SAR. Moreover, SAR required only 10000 function evaluations to find the optimum design, which is fewer than the other algorithms required; hence, SAR had the fastest convergence rate among all the compared algorithms. Also, the average and standard deviation of SAR are better than those of SOS.

The convergence curve of SAR for the cantilever beam design problem is shown in Figure 5. These comparative results show SAR outperforms all the studied algorithms for this design problem.

5.5.3. Spatial 25-Bar Truss Structure Design

SAR was compared with different methods including the artificial bee colony algorithm with an adaptive penalty function approach (ABC-AP) [67], self-adaptive harmony search algorithm (SAHS) [68], teaching-learning-based optimization algorithm (TLBO) [69], multistage particle swarm optimization (MSPSO) [70], hybrid particle swallow swarm optimization (HPSSO) [71], water evaporation optimization (WEO) [72], and culture algorithm (CA) [73]. Table 16 presents the optimization results of these methods.

Regarding the data in this table, SAR found the best design (545.0365 lb) and required fewer structural analyses than the other algorithms. The average and standard deviation of SAR are 545.0391 lb and 0.0064 lb, respectively, which are significantly better than those of the other algorithms. These results clearly indicate that SAR outperforms the other algorithms. Figure 6 shows the convergence curve of SAR for the spatial 25-bar truss design problem.

5.6. Analysis of SAR Parameters

In this section, the effect of the SAR parameters on solving the benchmark problems is investigated. Two classic (f13 and f18) and four modern (C3, C10, C22, and C28) benchmark functions were utilized for this test. f13 and C3 are unimodal functions, while f18, C10, C22, and C28 are multimodal functions. For the classic and modern functions, the NFE was set to 2 × 10^4 × D and 10^5, respectively. The population size of SAR was 20, the algorithm was independently executed 20 times for each configuration, and errors less than 10^−8 were considered zero. The mean errors obtained by SAR for different values of the SE are reported in Table 17. To evaluate the effect of the SE on the performance of SAR, the MU was set to a large value so that unimproved clues are never abandoned; hence, the effect of the MU was removed. According to the data in Table 17, the optimum value of the SE depends on the nature of the problem. High values of the SE (more than 0.8) lead to unsatisfactory performance of SAR. Furthermore, SE values between 0.6 and 0.8 are suitable for unimodal functions, and SE values lower than 0.1 are suitable for multimodal functions. Since solving multimodal functions is generally more difficult than solving unimodal ones, the SE was set to 0.05 for all the studied problems in this paper.

Table 18 shows the mean errors obtained by SAR with fixed SE = 0.05 for different values of the MU. From this table, it can be seen that lower values of this parameter decrease the convergence rate of the algorithm but increase the chance of avoiding local minima. According to the results, the best value of the MU for the studied problems is 70 × D; however, SAR is not very sensitive to this parameter. The results also indicate that the effect of the SE on the performance of SAR is greater than that of the MU; thus, the SE is the key parameter of the proposed algorithm.

6. Conclusion

A new metaheuristic optimization algorithm called the search and rescue optimization algorithm (SAR) was introduced here for solving single-objective optimization problems. SAR was inspired by the explorations carried out by humans during search and rescue operations. The proposed algorithm consists of two phases including the social phase and the individual phase, and the implementation of it is relatively simple. The results of the tests done in this paper have shown that combining these two phases along with the use of memory leads to a balance between exploration and exploitation processes in SAR.

The performance of SAR was compared with that of twelve different optimization algorithms through fifty-five continuous benchmark problems including a set of 27 classic benchmark problems and a set of 28 modern CEC 2013 benchmark functions. The compared algorithms included recent variants of DE (SaDE and RJADE), GA (AMopGA), PSO (SPSO and fk-PSO), and CMA-ES (CMA-RIS) algorithms and some recent metaheuristic algorithms (ABC, GSA, TLBO, GWO, and YYPO). The Wilcoxon signed-rank test was used for some of the comparisons, and the convergence behavior of SAR was investigated. The statistical results indicated SAR is suitable for global optimization and highly competitive with the other algorithms. The proposed algorithm performed better than most of the compared algorithms in terms of finding the global optimum and convergence rate in many cases. Besides, to verify the application of the proposed algorithm on real-world optimization problems, SAR was tested on three engineering design problems including the I-beam design, the cantilever beam, and the spatial 25-bar truss structure design. The obtained results revealed that the proposed algorithm can find more accurate solutions with fewer function evaluations in comparison with the other existing algorithms. In future works, the performance of SAR on other types of optimization problems such as combinatorial and large-scale optimization problems will be investigated.

Data Availability

The data supporting this study are from previously reported studies and datasets, which have been properly cited in this paper. All problems were completely defined in this paper, except CEC 2013 instances. The codes of CEC 2013 benchmark instances and the algorithms implemented for this study were downloaded from the following links: The CEC 2013 benchmark data used to support the findings of this study have been deposited in the CEC2013matlab repository at http://web.mysites.ntu.edu.sg/epnsugan/PublicSite/Shared%20Documents/CEC2013/ce_c13matlab.zip. The MATLAB source code of SAR used to support the findings of this study are available from Amir Shabani upon request (email: [email protected]). The algorithms used to compare the proposed algorithm are available from the following links: ABC: http://mf.erciyes.edu.tr/abc/; DE: https://ch.mathworks.com/matlabcentral/fileexchange/52897-differentialevolution-de; GSA: http://www.mathworks.com/matlabcentral/fileexchange/27756-gravitationalsearch-algorithm-gsa; and TLBO: https://ch.mathworks.com/matlabcentral/fileexchange/52863-teaching-learningbased-optimization-tlbo.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This study was partially supported by the Spanish Research Project (nos. TIN2016-80856-R and TIN2015-65515-C4-1-R).