Abstract
To address the shortcomings of the manta ray foraging optimization (MRFO) algorithm, such as slow convergence and difficulty escaping local optima, an improved manta ray foraging algorithm based on Latin hypercube sampling and group learning is proposed. Firstly, the Latin hypercube sampling (LHS) method is introduced to initialize the population. It divides the search space evenly so that the initial population covers the whole search space, maintaining the diversity of the initial population. Secondly, in the exploration stage of cyclone foraging, the Levy flight strategy is introduced to avoid premature convergence. Before the somersault foraging stage, an adaptive t-distribution mutation operator is introduced to update the population, increasing its diversity and avoiding local optima. Finally, the updated population is divided into a leader group and a follower group according to fitness. The follower group learns from the leader group, and the leaders learn from each other through differential evolution, further improving population quality and search accuracy. Fifteen standard test functions are selected for comparative tests in low and high dimensions. The test results show that the improved algorithm effectively improves the convergence speed and optimization accuracy of the original algorithm. Moreover, the improved algorithm is applied to wireless sensor network (WSN) coverage optimization. The experimental results show that the improved algorithm increases the network coverage by about 3% compared with the original algorithm and makes the optimized node distribution more reasonable.
1. Introduction
With the advancement of intelligent information technology, the scale and complexity of data are increasing. Traditional numerical optimization methods struggle to solve complex optimization problems, and their computational cost keeps rising. In recent years, swarm-based intelligent optimization algorithms have been favored by many researchers because of their simplicity and efficiency [1]. Swarm intelligence algorithms can effectively solve many complex optimization problems in engineering and are mainly used in network optimization [2], feature selection [3], image processing [4], automatic control [5], and other fields. Recently proposed swarm intelligence optimization algorithms include the butterfly optimization algorithm (BOA) [6], whale optimization algorithm (WOA) [7], sine cosine algorithm (SCA) [8], sparrow search algorithm (SSA) [9], marine predator algorithm (MPA) [10], African vultures optimization algorithm (AVOA) [11], manta ray foraging optimization (MRFO) algorithm [12], and so on.
The MRFO algorithm is a swarm intelligence optimization algorithm proposed by Weiguo Zhao et al. in 2020, inspired by the intelligent behaviors of manta rays. The foraging (optimization) process is divided into three stages, namely chain foraging, cyclone foraging, and somersault foraging. Compared with some classical intelligent algorithms and most of the above algorithms, it has higher convergence accuracy and faster optimization speed. Despite these advantages, MRFO still suffers from premature convergence and falling into local optima. To solve these problems, many researchers have improved the basic MRFO algorithm. Davut Izci et al. [13] introduced an opposition-based learning strategy into the population initialization, which improves the quality of the population to a certain extent, but the convergence accuracy still needs improvement. Biqi Sheng et al. [14] proposed a balanced manta ray foraging optimization (BMRFO) algorithm, which introduces the Levy flight strategy in the cyclone foraging stage and improves the flip factor. Although BMRFO's ability to jump out of local optima is improved, its convergence speed is not significantly better. Oguz [15] introduced chaotic maps into the foraging behavior of MRFO, which improves the optimization performance of the algorithm, but the improvement is limited.
To better solve these problems and improve the optimization accuracy and convergence speed of the MRFO algorithm, this paper combines the Latin hypercube sampling (LHS) method with a group learning strategy and introduces the Levy flight and adaptive t-distribution disturbance strategies. On this basis, an improved MRFO algorithm based on LHS and group learning (LGMRFO) is proposed. To verify the performance of the LGMRFO algorithm, 15 general test functions and 9 CEC2017 test suite functions are selected for low-dimensional and high-dimensional comparison tests.
Adaptive adjustment and deployment of sensor nodes in a WSN can distribute them more evenly in the detection area and achieve higher coverage, so as to rationally allocate network space resources and better complete the tasks of environmental awareness and information acquisition. This is of great significance for improving network survivability and reliability and for saving network construction costs. Generally, area coverage is the main evaluation criterion. Coordinates of sensor nodes are optimized through an optimization algorithm, using as few sensor nodes as possible to meet the area coverage requirement and reduce node redundancy. Therefore, to improve the poor coverage caused by unreasonable deployment of WSN nodes, LGMRFO is applied to the coverage optimization problem of WSN. The experimental results further verify the effectiveness of the algorithm.
The rest of this paper is organized as follows. Section "Related Works" introduces some intelligent optimization algorithms. Section "MRFO" describes the MRFO algorithm in detail. Section "LGMRFO" describes the improved strategies for MRFO in this work. The performance of LGMRFO is evaluated by optimizing 24 test functions in section "Numerical Simulation Analysis". Section "Coverage Optimization of WSN Based on LGMRFO" presents the simulations and performance evaluation of LGMRFO for WSN coverage. Finally, section "Conclusion" summarizes this paper.
2. Related Works
Based on their source of inspiration, intelligence optimization algorithms can be divided into four classes [16]: (a) physics-based, (b) math-based, (c) human-based, and (d) swarm-based. Physics-based methods tend to perceive the landscape as a physical phenomenon and move the search agents using formulae borrowed from physical rules or theories. The Archimedes optimization algorithm [17] is devised with inspiration from Archimedes' principle, a law of physics. The equilibrium optimizer (EO) [18] is inspired by control volume mass balance models used to estimate both dynamic and equilibrium states. Atomic orbital search (AOS) [19] is based on some principles of quantum mechanics and the quantum-based atomic model. Transient search optimization (TSO) [20] is inspired by the transient behavior of switched electrical circuits that include storage elements.
Math-based algorithms are based solely on mathematical equations and are not inspired by a specific natural phenomenon. The Runge-Kutta optimizer (RUN) [21] is designed according to the mathematical foundations of the Runge-Kutta method. The gradient-based optimizer (GBO) [22] is inspired by the gradient-based Newton's method. The golden sine algorithm (Gold-SA) [23] is inspired by the sine trigonometric function. The arithmetic optimization algorithm (AOA) [24] utilizes the distribution behavior of the main arithmetic operators in mathematics. Weighted mean of vectors (INFO) [25] is an efficient optimization algorithm built on the weighted mean of solution vectors.
Inspired by the social behaviors of human beings, many optimization algorithms have been proposed. The political optimizer (PO) [26] is inspired by the multi-phased process of politics. The group teaching optimization algorithm (GTOA) [27] simulates the impact of teachers on learners' output in the classroom. Queuing search (QS) [28] is inspired by human activities in queuing. Student psychology based optimization (SPBO) [29] is inspired by the psychology of students who put in extra effort to improve their performance in examinations and become the best student in the class.
Swarm-based approaches imitate the social behavior and communication within a group of animals, plants, or other living things. These approaches have gained increasing popularity in terms of both application and new algorithm development. Some recently proposed algorithms in this class are the slime mould algorithm (SMA) [30], hunger games search (HGS) [31], Harris hawks optimization (HHO) [32], moth search algorithm (MSA) [33], monarch butterfly optimization (MBO) [34], golden eagle optimizer (GEO) [35], and tuna swarm optimization (TSO) [36].
Among these four classes, swarm-based algorithms show certain superiority over the other three. Manta ray foraging optimization (MRFO), with few adjustable parameters, is easy to implement, which makes it very promising for applications in many engineering fields. This paper therefore improves the MRFO algorithm; the improved version, based on Latin hypercube sampling and group learning, is named LGMRFO. MRFO falls into the fourth class of optimization algorithms, as it originates from the swarm behavior of manta rays (a kind of sea animal).
3. MRFO
MRFO updates the individual position by three foraging behaviors, including chain foraging, cyclone foraging, and somersault foraging. The mathematical models are described below.
3.1. Chain Foraging
Manta rays line up head-to-tail and form a foraging chain. In each iteration, each individual is updated using the best solution found so far and the solution in front of it. The mathematical model of chain foraging is represented as follows:

x_i^d(t+1) = x_i^d(t) + r·(x_best^d(t) − x_i^d(t)) + α·(x_best^d(t) − x_i^d(t)), i = 1
x_i^d(t+1) = x_i^d(t) + r·(x_{i−1}^d(t) − x_i^d(t)) + α·(x_best^d(t) − x_i^d(t)), i = 2, …, N (1)

α = 2r·√|log(r)| (2)

where x_i^d(t) is the position of the ith individual in the dth dimension at the tth iteration, r is a random vector within the range [0, 1], α is a weight coefficient, x_best^d(t) is the plankton with high concentration (the best solution found so far), and N denotes the population size.
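As a concrete illustration, the chain-foraging update of equation (1) can be sketched in numpy-based Python. This is a non-authoritative sketch: the weight coefficient α = 2r·√|log r| follows common MRFO implementations, and the function and variable names are ours.

```python
import numpy as np

def chain_foraging(X, x_best, rng):
    """One chain-foraging step (equation (1)): each manta ray moves towards
    the best solution found so far and towards the individual in front of it.

    X      : (N, d) array of population positions
    x_best : (d,) best solution found so far
    """
    N, d = X.shape
    X_new = np.empty_like(X)
    for i in range(N):
        r = rng.random(d)                              # random vector in [0, 1]
        alpha = 2.0 * r * np.sqrt(np.abs(np.log(r)))   # weight coefficient
        prev = x_best if i == 0 else X[i - 1]          # the individual in front
        X_new[i] = X[i] + r * (prev - X[i]) + alpha * (x_best - X[i])
    return X_new
```

Note that if the whole population already sits on the best solution, the update leaves it unchanged, which is consistent with the model.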
3.2. Cyclone Foraging
When manta rays find plankton in deep water, they form a long foraging chain and swim towards the food along a spiral. In the cyclone foraging behavior, in addition to spiraling towards the food, each manta ray also swims towards the one in front of it. The mathematical model of the exploitation stage of cyclone foraging can be calculated by the following formula:

x_i^d(t+1) = x_best^d(t) + r·(x_best^d(t) − x_i^d(t)) + β·(x_best^d(t) − x_i^d(t)), i = 1
x_i^d(t+1) = x_best^d(t) + r·(x_{i−1}^d(t) − x_i^d(t)) + β·(x_best^d(t) − x_i^d(t)), i = 2, …, N (3)

β = 2·exp(r1·(T − t + 1)/T)·sin(2π·r1) (4)

where β is a weight factor, T is the maximum number of iterations, and r1 is a random number in [0, 1].
In equation (3), MRFO focuses on local exploitation. In addition, by taking a random position in the search space as the reference position, this behavior can also be used to improve the exploration mechanism of the algorithm. The mathematical model is as follows:

x_rand^d = Lb^d + r·(Ub^d − Lb^d)
x_i^d(t+1) = x_rand^d + r·(x_rand^d − x_i^d(t)) + β·(x_rand^d − x_i^d(t)), i = 1
x_i^d(t+1) = x_rand^d + r·(x_{i−1}^d(t) − x_i^d(t)) + β·(x_rand^d − x_i^d(t)), i = 2, …, N (5)

where x_rand^d is a random position produced in the search space, and Lb^d and Ub^d are the lower and upper limits of the dth dimension, respectively.
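Both cyclone-foraging branches can be sketched together. In this sketch the switch on t/T between a random reference position (exploration) and the best solution (exploitation) follows the description above, while the form of the weight factor β and all names are our assumptions based on common MRFO implementations.

```python
import numpy as np

def cyclone_foraging(X, x_best, t, T, lb, ub, rng):
    """One cyclone-foraging step (equation (3) and the exploration variant).

    When t/T < rand, a random position is the reference (exploration);
    otherwise the best solution found so far is the reference (exploitation).
    """
    N, d = X.shape
    r1 = rng.random(d)
    beta = 2.0 * np.exp(r1 * (T - t + 1) / T) * np.sin(2.0 * np.pi * r1)
    if t / T < rng.random():
        ref = lb + rng.random(d) * (ub - lb)   # random reference position
    else:
        ref = x_best                           # best-so-far reference
    X_new = np.empty_like(X)
    for i in range(N):
        r = rng.random(d)
        prev = ref if i == 0 else X[i - 1]     # also follow the one in front
        X_new[i] = ref + r * (prev - X[i]) + beta * (ref - X[i])
    return X_new
```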
3.3. Somersault Foraging
In this foraging behavior, the position of the food is regarded as a pivot. Each individual swims to and fro around the pivot and somersaults to a new position. The mathematical model is as follows:

x_i^d(t+1) = x_i^d(t) + S·(r2·x_best^d − r3·x_i^d(t)), i = 1, …, N (6)

where S is the somersault factor that decides the somersault range of manta rays, and r2 and r3 are two random numbers in [0, 1].
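The somersault update of equation (6) is simple enough to sketch in a few lines; the default S = 2 used below is the somersault-factor value commonly seen in MRFO implementations, an assumption of this sketch.

```python
import numpy as np

def somersault_foraging(X, x_best, rng, S=2.0):
    """Somersault step (equation (6)): each individual somersaults around the
    food position x_best, which acts as a pivot."""
    N, d = X.shape
    r2 = rng.random((N, d))   # random numbers r2 in [0, 1]
    r3 = rng.random((N, d))   # random numbers r3 in [0, 1]
    return X + S * (r2 * x_best - r3 * X)
```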
MRFO balances global exploration and local exploitation by controlling the ratio t/T, where t is the current iteration number and T is the maximum number of iterations. When t/T < rand, a random position in the search space is selected as the reference position, which favors global exploration. When t/T ≥ rand, the optimal individual is taken as the reference point, which focuses on the local exploitation ability of the algorithm.
4. LGMRFO
In order to improve the performance of MRFO, this paper improves it in three aspects. Firstly, the LHS method is used to initialize the population to enhance population diversity. Secondly, in the exploration stage of cyclone foraging, the Levy flight strategy is introduced to accelerate convergence, and before somersault foraging an adaptive t-distribution mutation operator is added to update the population positions and avoid falling into local optima. Finally, a group learning strategy is designed to improve the optimization accuracy of the algorithm.
4.1. LHS Method Population Initialization Strategy
In the basic MRFO, the initial population is generated randomly. A population generated this way is often unevenly distributed and may even contain overlapping individuals, which reduces the optimization performance of the algorithm to a certain extent. The LHS method is a multidimensional stratified sampling technique proposed by McKay et al. [37], and it has the following advantages over simple random sampling:
(1) The sampling points generated by LHS achieve full space coverage and can be evenly distributed in the search space;
(2) LHS has better robustness and stability.
Therefore, in order to enhance the diversity of the initial population and improve the performance, we adopt the LHS method to initialize the population.
Assuming that N initial individuals are generated in the d-dimensional space, the specific steps to initialize the population with the LHS method are as follows:
Step 1. Determine the population size N and dimension d.
Step 2. Determine the interval of variable x as [lb, ub], where lb and ub are the lower and upper bounds of x, respectively.
Step 3. Divide the interval of each dimension into N equal sub-intervals.
Step 4. Randomly select one point in each sub-interval of each dimension.
Step 5. Combine the extracted points of each dimension to form the initial population.
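Steps 1-5 above can be sketched as follows: for each dimension, one point is drawn per sub-interval, and the per-dimension points are combined in shuffled order. The function and parameter names are illustrative.

```python
import numpy as np

def lhs_init(N, d, lb, ub, rng):
    """Latin hypercube initialization of an (N, d) population in [lb, ub].

    Each dimension of [0, 1) is split into N equal sub-intervals, one point is
    drawn uniformly inside each sub-interval, and the points of the dimensions
    are combined in random order, so every stratum is sampled exactly once.
    """
    X = np.empty((N, d))
    for j in range(d):
        points = (np.arange(N) + rng.random(N)) / N   # one point per stratum
        rng.shuffle(points)                           # random pairing of strata
        X[:, j] = lb + points * (ub - lb)             # scale to [lb, ub]
    return X
```

Unlike plain random initialization, every column of the result contains exactly one sample per stratum, which is what gives LHS its even coverage of the search space.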
Figures 1 and 2 show sample point maps generated by the LHS method and the simple random sampling method, respectively, with a sample size of 20 and a dimension of 2. It can be seen that the sample points generated by the LHS method are more evenly distributed in the search space. Therefore, using the LHS method to initialize the population of the MRFO algorithm makes the population positions evenly distributed in the search space and enhances population diversity, improving the convergence performance of the algorithm.
4.2. Mutation Strategy
4.2.1. Levy Flight
In some cases, due to the random selection of individuals in each iteration, premature convergence may occur, which increases the running time; therefore, additional mechanisms can be used to improve the MRFO algorithm. This paper uses the Levy flight mechanism [38] for local disturbance. The mechanism is based on random-walk behavior, and its mathematical model is as follows:

Levy(λ) = 0.01 × u/|v|^(1/λ) (7)

where u and v come from normal distributions, i.e.,

u ~ N(0, σ_u²), v ~ N(0, σ_v²) (8)

The values of σ_u and σ_v are as follows:

σ_u = [Γ(1 + λ)·sin(πλ/2)/(Γ((1 + λ)/2)·λ·2^((λ−1)/2))]^(1/λ), σ_v = 1 (9)

where Γ is the standard gamma function.
The position update formula of the cyclone foraging exploration stage with the Levy flight strategy added is as follows:

x_i^d(t+1) = x_rand^d + Levy(λ) ⊗ (x_rand^d − x_i^d(t)) + β·(x_rand^d − x_i^d(t)) (10)

where ⊗ denotes point-to-point multiplication.
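A Levy-distributed step of this kind is usually generated with Mantegna's algorithm. The sketch below uses the common choice λ = 1.5 and the 0.01 scaling factor seen in many implementations; both values, and the names, are assumptions of this sketch.

```python
import math
import numpy as np

def levy_step(d, rng, lam=1.5):
    """Generate a d-dimensional Levy-flight step via Mantegna's algorithm."""
    sigma_u = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
               / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0.0, sigma_u, d)   # u ~ N(0, sigma_u^2)
    v = rng.normal(0.0, 1.0, d)      # v ~ N(0, 1), i.e. sigma_v = 1
    return 0.01 * u / np.abs(v) ** (1 / lam)
```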
4.2.2. Adaptive tDistribution
The t-distribution is also called Student's distribution [39], and its shape is closely related to its degrees of freedom. In order to enhance the diversity of the population and avoid falling into local optima, this paper introduces the adaptive t-distribution strategy to disturb the manta ray population before the somersault foraging behavior. The calculation formula is as follows:

x_i^new = x_i + x_i·t(iter) (11)

where x_i is the original individual, x_i^new is the new individual after mutation, and t(iter) is a random number drawn from the t-distribution with the current iteration number iter as its degrees of freedom.
In the early stage of the iteration, the degrees of freedom are small (the number of iterations is small), and the t-distribution is similar to the Cauchy distribution. At this time the update step is larger, which expands an individual's search field and improves the global exploration ability. In the middle and later iterations, the degrees of freedom gradually increase, and the t-distribution behaves like a Gaussian distribution. At this time the update step is smaller, which helps the algorithm search around the current individual's neighbourhood, giving better local exploitation ability.
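The adaptive t-distribution mutation of equation (11) maps directly onto numpy's standard_t sampler, with the current iteration number as the degrees of freedom. This is a sketch; the greedy acceptance step described later is left to the caller.

```python
import numpy as np

def t_mutation(X, t, rng):
    """Adaptive t-distribution mutation (equation (11)).

    With few iterations (small degrees of freedom) the step is Cauchy-like and
    wide; with many iterations it approaches a Gaussian and stays local.
    """
    step = rng.standard_t(df=t, size=X.shape)
    return X + X * step
```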
4.3. Group Learning Strategy
In the process of algorithm evolution, some individuals may reach near-optimal positions, while the fitness of others may worsen. To overcome this defect, inspired by the salp swarm algorithm (SSA) [40], individuals in poor positions should learn foraging skills from individuals in good positions. Based on this idea, a group learning strategy is proposed. The population after somersault foraging is evenly divided into two groups according to fitness. The group with better fitness is called the leader group, and the group with poorer fitness is called the follower group.
4.3.1. Leader Group Learning Strategy
The differential evolution (DE) algorithm [41] works well on complex optimization problems. In this paper, a differential evolution strategy is used to generate new leader-group individuals, and a greedy strategy is used to select the better individual. The specific mathematical model is as follows:

v_i(t) = x_best(t) + F·(x_p(t) − x_q(t)) (12)

where v_i(t) is the new individual produced by mutation; x_best(t) is the optimal individual; F is the scaling factor, F ∈ (0, 1]; and x_p(t) and x_q(t) are two different leaders randomly selected from the leader group, both different from the current individual. The new individual generated by this strategy is compared with the original individual, and the individual with better fitness is selected as the current individual.
Compared with mutating whole individuals at random, this strategy is more selective, which can effectively enhance local exploitation and improve the convergence accuracy of the algorithm.
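The leader-group update of equation (12), together with the greedy selection described above, can be sketched as follows. The fitness callback, the scaling factor F, and minimization (rather than maximization) are assumptions of this sketch.

```python
import numpy as np

def leader_learning(L, x_best, fitness, F, rng):
    """DE-style leader update with greedy selection (equation (12)).

    L       : (M, d) leader group (M >= 3)
    x_best  : (d,) best individual found so far
    fitness : callable mapping a (d,) vector to a scalar (minimized)
    F       : scaling factor in (0, 1]
    """
    M, _ = L.shape
    L_new = L.copy()
    for i in range(M):
        # pick two distinct leaders, both different from the current individual
        p, q = rng.choice(np.delete(np.arange(M), i), size=2, replace=False)
        v = x_best + F * (L[p] - L[q])        # mutant vector
        if fitness(v) < fitness(L[i]):        # greedy selection
            L_new[i] = v
    return L_new
```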
4.3.2. Follower Group Learning Strategy
Each follower in the follower group learns from the average of two leaders. The mathematical model is described as follows:

x_f,i^new(t) = x_f,i(t) + r·((x_l,p(t) + x_l,q(t))/2 − x_f,i(t)) (13)

where x_f,i^new(t) refers to the new individual generated after the ith individual of the follower group learns from the leader group, r is a random number in [0, 1], and x_l,p(t) and x_l,q(t) are two leaders randomly selected from the leader group. The new follower individual is compared with the original follower individual, and the individual with the better fitness value is selected as the current follower individual.
By learning from the leader group, the follower group can greatly improve the fitness, realize the conversion from follower to leader, and then improve the convergence speed of the algorithm.
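The follower update is harder to pin down from the text alone. The sketch below is one hedged reading of equation (13), in which each follower moves towards the average of two randomly chosen leaders and the better individual is kept greedily; the formula and all names here are our assumptions.

```python
import numpy as np

def follower_learning(Fo, L, fitness, rng):
    """Follower update with greedy selection (a hedged reading of eq. (13)).

    Fo      : (K, d) follower group
    L       : (M, d) leader group (M >= 2)
    fitness : callable mapping a (d,) vector to a scalar (minimized)
    """
    K, d = Fo.shape
    Fo_new = Fo.copy()
    for i in range(K):
        p, q = rng.choice(len(L), size=2, replace=False)
        target = (L[p] + L[q]) / 2.0                     # average of two leaders
        cand = Fo[i] + rng.random(d) * (target - Fo[i])  # move towards it
        if fitness(cand) < fitness(Fo[i]):               # keep the better one
            Fo_new[i] = cand
    return Fo_new
```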
4.4. LGMRFO Algorithm Implementation Steps
The specific implementation steps of the LGMRFO algorithm are as follows:
Step 1. Set the relevant parameters: population size N, variable dimension D, and maximum number of iterations T, and initialize the population positions by the LHS method.
Step 2. Calculate the fitness value of each individual, and record the initial optimal individual position and its fitness value.
Step 3. Enter the iteration process. When rand ≥ 0.5, chain foraging is performed and the individual position is updated according to equation (1). Otherwise, cyclone foraging is performed: when t/T < rand, the individual enters the exploration stage, the Levy flight strategy is introduced, and the position is updated according to equation (10); when t/T ≥ rand, the individual enters the exploitation stage and the position is updated according to equation (3).
Step 4. Before somersault foraging, the adaptive t-distribution strategy is applied: the individual position is updated according to equation (11), and the better of the old and new individuals is kept greedily.
Step 5. Perform somersault foraging according to equation (6).
Step 6. Apply the group learning strategy: divide the updated population into a leader group and a follower group according to fitness, and generate new individuals by equations (12) and (13), respectively. If the fitness improves after learning, the current individual position is updated; otherwise it is kept.
Step 7. Update the global optimal position and its fitness value.
Step 8. Judge whether the termination condition is met. If so, the algorithm stops; otherwise, go to Step 3.
The pseudocode of LGMRFO is shown in Algorithm 1.

4.5. Time Complexity of the LGMRFO
The overall time complexity of MRFO is given as

O(MRFO) = O(T × n × d)

where T is the maximum number of iterations, n is the number of individuals, and d is the number of variables.
The LGMRFO algorithm proposed in this paper only adds computation in the adaptive t-distribution mutation and the group learning strategy, each of which also costs O(T × n × d). Therefore, the overall time complexity of LGMRFO is given as

O(LGMRFO) = O(T × n × d) + O(T × n × d) + O(T × n × d) = O(T × n × d)
This shows that the time complexity of LGMRFO is consistent with that of MRFO.
5. Numerical Simulation Analysis
To evaluate the effectiveness of the proposed LGMRFO algorithm, 24 test functions are chosen, and five algorithms, namely BOA, WOA, SCA, SSA (sparrow search algorithm), and MRFO, are experimentally compared on finding the minimum of each function. Refer to the corresponding original literature for the specific parameter settings.
5.1. Test Functions
(1) General functions: Single-peaked functions have only one global optimum and no local extreme points. Test functions F1 to F4 are multidimensional single-peaked functions, F5 to F12 are high-dimensional multi-peaked functions, and F13 to F15 are fixed-dimension multi-peaked functions. Multi-peaked functions have numerous local extremum points and are used to observe an algorithm's ability to jump out of local extrema in different dimensions. Table 1 gives the precise function details.
(2) CEC2017 test suite functions: To further test the performance of LGMRFO, some CEC2017 test suite functions [42] are selected, namely CF2, CF4, CF7, CF8, CF10, CF15, CF17, CF20, and CF24, with D = 30 and range [−100, 100].

5.2. Results Evaluation of General Functions
Simulation and comparison experiments with the six algorithms were conducted in the Matlab R2018a environment. To reduce chance errors, each benchmark function was run 30 times independently, and the best value, the worst value, the average value, and the standard deviation were used as evaluation indexes; the population size was set to 30 and the maximum number of iterations to 500. The best outcomes are highlighted in bold. Functions F13 to F15 (fixed-dimension multi-peaked functions) and functions F1 to F12 at D = 50 (low-dimensional) and D = 500 (high-dimensional) are used to test and assess the algorithms.
The iterative convergence curves of the six algorithms under the four single-peaked test functions, the eight multi-peaked test functions (at 500 dimensions), and the three fixed-dimension test functions are plotted in Figure 3; due to the article length constraint, only these curves are shown.
5.2.1. Multipeak Function Test with Fixed Dimensions
Table 2 shows the test results for the three fixed-dimension multi-peaked functions F13 to F15. Table 2 and Figures 3(e) and 3(f) show that LGMRFO converges faster and has better optimization-seeking accuracy than the other algorithms; its standard deviation is also the smallest, and since the standard deviation reflects the stability of the solutions, LGMRFO is the most stable.
5.2.2. Evaluation of LowDimensional Functions
Table 3 compares the algorithms' test results in 50 dimensions. As shown in Table 3, both LGMRFO and basic MRFO reach the theoretical optimum in the single-peaked low-dimensional function tests with a standard deviation of 0. This indicates that LGMRFO's optimization-seeking ability is more stable than that of the other algorithms, and its convergence speed is significantly faster than the other intelligent algorithms, including MRFO, showing that the improvement strategies have markedly improved MRFO's convergence performance. LGMRFO also obtains higher-accuracy solutions in the multi-peaked low-dimensional function tests, especially for functions F5, F6, F7, F8, and F10. Although the average solutions of functions F9, F11, and F12 do not reach the theoretical optimum, LGMRFO's overall convergence performance on them ranks 2nd, 1st, and 2nd, respectively, and it still has a better chance of escaping local optima than the other algorithms. Except for functions F9 and F12, LGMRFO has the smallest standard deviation, hence its robustness is higher in terms of stability.
5.2.3. Evaluation of HighDimensional Functions
Table 3 also compares the algorithms on the 500-dimensional function tests. Compared with the low-dimensional results, LGMRFO achieves better relative outcomes in both search accuracy and convergence speed, even though increasing the dimensionality from low to high affects the convergence performance of any algorithm. Both LGMRFO and basic MRFO reach the theoretical optimum with a standard deviation of 0 on the single-peaked high-dimensional test functions, indicating that both are stable, and Figures 3(a)-3(d) show that LGMRFO converges faster than the other algorithms, demonstrating the superiority of the improved strategy. The convergence results of the other compared algorithms are worse than on the low-dimensional functions, and their standard deviations are also higher, indicating weaker robustness on single-peaked high-dimensional functions. In the multi-peaked high-dimensional tests, LGMRFO ranks first except on function F9, and it ranks first on F12, where it only ranked second in low dimensions, indicating that the improvement strategy is more capable in higher dimensions. In terms of convergence performance on high-dimensional functions, LGMRFO still outperforms the other five algorithms.
5.3. Results Evaluation of CEC2017 Test Suite Functions
Table 4 compares the algorithms' results on some CEC2017 test suite functions, and Figure 4 shows the corresponding average convergence curves. LGMRFO achieves the best results on CF2, CF4, CF7, CF8, CF10, CF17, and CF20. This shows that the overall performance of LGMRFO is strong and that it can transition smoothly between exploration and exploitation.
5.4. Wilcoxon Rank Sum Test
The Wilcoxon rank sum test [43] is a nonparametric statistical test used to determine whether the LGMRFO method is significantly different from the others. Each of the five compared algorithms was run 30 times independently on the 15 test functions and the 9 CEC2017 functions, and the Wilcoxon rank sum test was applied to these samples to determine the significant difference between each compared algorithm's results and LGMRFO's results for the 50-dimensional, 500-dimensional, fixed-dimension, and 9 CEC2017 functions, respectively. Tables 5-7 show the test outcomes.
The null hypothesis is rejected when p < 0.05, indicating that the two algorithms are statistically different, whereas p > 0.05 implies that the two algorithms provide equivalent search results [44]. "NaN" indicates that the associated algorithm finds the theoretical optimal solution, so the hypothesis test is not applicable. In the 50-dimensional case, the LGMRFO algorithm performs significantly better than the other examined algorithms, with the exception of MRFO, whereas in the 500-dimensional case LGMRFO performs even more significantly better than in the 50-dimensional one. On the 9 CEC2017 functions, 42 of the 45 p values are less than 0.05, comprising 93.3% of the total. This shows that LGMRFO has statistical advantages over the other competitive algorithms. In conclusion, LGMRFO outperforms MRFO, SSA, SCA, WOA, and BOA by a statistically significant margin.
6. Coverage Optimization of WSN Based on LGMRFO
6.1. WSN Node Coverage Model
There are two basic types of WSN node coverage models: the Boolean measurement model and the probabilistic measurement model [45]. In this research, network coverage is calculated using the more common Boolean model.
Assume that N isomorphic sensor nodes are randomly distributed in a square WSN monitoring region with side length L. Let the set of nodes be S = {s1, s2, …, sN}, with node si located at coordinates (xi, yi) and each node having sensing radius r. To simplify the calculation, the area is discretized into m × n target grid points to be covered, and the set of target points is denoted as P = {p1, p2, …, p(m×n)}. The distance between sensor node si and target point pj, located at (xj, yj), is defined as

d(si, pj) = √((xi − xj)² + (yi − yj)²) (14)
The target point pj is covered if there is a node whose distance from it is less than or equal to the sensing radius r. According to the Boolean model, the probability that sensor node si detects target point pj is defined as

p(si, pj) = 1 if d(si, pj) ≤ r, and 0 otherwise (15)
When the target point can be sensed by more than one sensor, the joint sensing probability of target point pj is defined as

p(S, pj) = 1 − ∏(i=1..N) (1 − p(si, pj)) (16)
The area network coverage is calculated by dividing the sum of the joint sensing probabilities of the target points covered by the node set by the total number of target points in the area:

Cov = Σ(j=1..m×n) p(S, pj) / (m × n) (17)
As a result, the WSN coverage optimization problem, that is, covering the target grid points of the monitoring area with N sensor nodes by means of an optimization technique, can be turned into a single-objective optimization problem that maximizes equation (17), i.e., max Cov.
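The Boolean coverage model of equations (14)-(17) reduces to a short vectorized computation. In this sketch the grid resolution and all names are illustrative assumptions.

```python
import numpy as np

def coverage_rate(nodes, r, L, grid=50):
    """Boolean-model area coverage (equations (14)-(17)).

    nodes : (N, 2) sensor coordinates inside an L x L square
    r     : sensing radius; the area is discretized into grid x grid points.
    """
    xs = np.linspace(0.0, L, grid)
    gx, gy = np.meshgrid(xs, xs)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)       # target grid points
    # distance from every target point to every node (equation (14))
    dist = np.linalg.norm(pts[:, None, :] - nodes[None, :, :], axis=2)
    covered = (dist <= r).any(axis=1)   # Boolean and joint sensing, (15)-(16)
    return covered.mean()               # covered points / all points, (17)
```

Coverage optimization then amounts to maximizing this value over the 2N node coordinates.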
6.2. Analysis of Simulation
Two sets of experiments are used in this work to verify the efficiency of LGMRFO on WSN coverage optimization. As stated in Table 8, the experimental settings have been set.
Figure 5 shows the sensor area coverage after algorithm optimization. Figures 5(a) and 5(b) show the distributions with 30 nodes, where MRFO achieves a coverage of 82.43% and LGMRFO 84.78%. As shown in Figure 5(a), there are still coverage blind spots in the monitoring region and node overlap is evident, whereas the nodes optimized by LGMRFO in Figure 5(b) are more uniformly distributed. Figures 5(c) and 5(d) show the coverage results when 35 nodes are deployed. After MRFO optimization the coverage rate is 89.43%, yet coverage blind patches remain near the edge of the monitoring area; after LGMRFO optimization the coverage rate is 92.62% and the node overlap area is greatly reduced.
Table 9 shows the coverage of MRFO and LGMRFO, each run independently 20 times with 500 iterations per run. As can be seen from Table 9, both the initial and final coverage of LGMRFO are higher than those of MRFO, indicating that the LHS initialization strategy and the improved position updates increase the algorithm's search accuracy.
The average coverage iteration curves are given in Figure 6. With 30 nodes (Figure 6(a)), LGMRFO's coverage in the middle of the iteration is slightly lower than MRFO's, because MRFO converges too quickly and matures prematurely. After about 300 iterations LGMRFO gradually surpasses MRFO, suggesting that MRFO has entered a local optimum while LGMRFO jumps out of it and its optimization accuracy keeps improving, demonstrating that the group learning strategy is effective: the population fitness (node distribution) improves and the coverage rate continues to rise. When 35 nodes are deployed, which corresponds to a higher individual dimension, LGMRFO's coverage exceeds MRFO's throughout, and both its convergence speed and final coverage are clearly better than MRFO's average optimization result.
In summary, by comparing the experimental results of deploying different numbers of nodes, LGMRFO achieves higher average network coverage under the same conditions, and the node layout is more reasonable, resulting in fewer coverage blind areas and overlapping areas, proving the effectiveness of the improved strategy.
7. Conclusion
To overcome the inadequacies of the manta ray foraging optimization method in terms of optimization accuracy, this work proposes an improved manta ray foraging optimization algorithm (LGMRFO). Firstly, to improve the quality of the initial population, the LHS method is used to homogenize the distribution of population positions. Secondly, the Levy flight and adaptive t-distribution mutation strategies are applied in the cyclone foraging exploration phase and before the somersault foraging behavior, respectively, improving the algorithm's ability to jump out of local optima. Thirdly, a group learning strategy is applied to the updated population. On 24 typical test functions, the LGMRFO algorithm is compared with five other algorithms, and the significance of the improvements is validated using the Wilcoxon rank sum test. The experiments show that LGMRFO greatly enhances convergence speed, optimization-seeking accuracy, and global optimization capability. Finally, LGMRFO is compared with MRFO on the WSN coverage optimization problem, and the experimental findings support the usefulness of the proposed improvement strategies.
As future challenges, different applications other than WSN coverage optimization of LGMRFO can be explored and its capabilities in dealing with difficult test problems can be examined. Besides, new configurations of this algorithm can be considered as other researchers may have different viewpoints on the presented methodology.
Data Availability
The data used to support the study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.