Computational Intelligence and Neuroscience

Research Article | Open Access

Volume 2018 | Article ID 4231647 | 19 pages | https://doi.org/10.1155/2018/4231647

A Modified Sine-Cosine Algorithm Based on Neighborhood Search and Greedy Levy Mutation

Academic Editor: Paulo Moura Oliveira
Received: 26 Feb 2018
Revised: 26 Apr 2018
Accepted: 30 Apr 2018
Published: 04 Jul 2018

Abstract

To address the deficiencies of the basic sine-cosine algorithm in global optimization problems, namely its low solution precision and slow convergence speed, a new improved sine-cosine algorithm is proposed in this paper. The improvement involves three optimization strategies. Firstly, an exponentially decreasing conversion parameter and a linearly decreasing inertia weight are adopted to balance the global exploration and local exploitation abilities of the algorithm. Secondly, random individuals near the optimal individual replace the optimal individual of the primary algorithm in guiding the search, which allows the algorithm to jump out of the local optimum more easily and effectively increases the search range. Finally, a greedy Levy mutation strategy is applied to the optimal individual to enhance the local exploitation ability of the algorithm. The experimental results show that the proposed algorithm can effectively avoid falling into the local optimum, and it has faster convergence speed and higher optimization accuracy.

1. Introduction

Many problems in engineering practice and scientific research come down to global optimization problems. Traditional methods, which rely purely on exact mathematical models, perform unsatisfactorily on such optimization problems: the problems need to be continuous and differentiable when traditional methods are applied, and these methods lack global optimization ability for multimodal, strongly nonlinear, and dynamically changing problems [1]. Accordingly, many scholars have begun to explore new solution methods. Swarm intelligence optimization algorithms are a class of global optimization algorithms designed by simulating the cooperative behavior mechanisms of gregarious organisms in nature. Compared with traditional optimization methods, swarm intelligence optimization algorithms are characterized by simple principles and few adjustable parameters, do not require gradient information about the problem, and have strong global optimization ability. They are therefore widely used in engineering fields such as function optimization [2–4], combinatorial optimization [5], neural network training [6, 7], and image processing. At present, many swarm intelligence optimization algorithms have been proposed [2, 8–15], such as particle swarm optimization (PSO) [8], differential evolution (DE) [9, 10], the artificial bee colony algorithm (ABC) [2, 11], cuckoo search (CS) [12, 13], and the flower pollination algorithm (FPA) [14, 15].

The sine-cosine algorithm (SCA) is a swarm intelligence optimization algorithm proposed by Mirjalili in 2016 [16]. The algorithm has attracted attention and study from many scholars due to its simple implementation and few parameter settings; its optimization search is realized through simple variation of sine and cosine function values. It has been successfully applied to parameter optimization of support vector regression [17], short-term hydrothermal scheduling [18], and other engineering fields. However, as with other swarm intelligence algorithms, SCA suffers from low optimization precision and slow convergence speed. In the last two years, many scholars have put forward improved sine-cosine algorithms from different perspectives to overcome these disadvantages. Elaziz et al. [19] proposed a sine-cosine algorithm based on the opposition method, which obtains more accurate solutions. Nenavath et al. [20] adopted a hybrid algorithm combining differential evolution with the sine-cosine algorithm to solve global optimization and target tracking problems; the hybrid has faster convergence speed and better ability to find the optimal solution than the basic sine-cosine and differential evolution algorithms. Reddy et al. [21] applied a new binary variant of the sine-cosine algorithm to solve the profit-based unit commitment (PBUC) problem. Sindhu et al. [22] improved the sine-cosine algorithm with an elitism strategy and a new updating mechanism, which improved classification accuracy in feature or attribute selection. Kumar et al. [23] proposed a new sine-cosine optimization algorithm with hybrid Cauchy and Gaussian mutations to track the maximum power point (MPP) quickly and efficiently. Mahdad et al. [24] presented a sine-cosine algorithm coordinated with an interactive process to improve power system security, aimed at loading margin stability and faults at specified important branches. Bureerat et al. [25] adopted an adaptive differential sine-cosine algorithm to solve the structural damage detection problem. Turgut et al. [26] combined the backtracking search algorithm (BSA) and the sine-cosine algorithm (SCA) to obtain the optimal design of a shell-and-tube evaporator. Attia et al. [27] embedded Levy flight into the original sine-cosine algorithm to increase its local search ability and avoid its being trapped in local optima. Tawhid et al. [28] used elite nondominated sorting to obtain different nondominated grades and applied the crowding distance method to maintain the diversity of the optimal solution set, putting forward a multiobjective SCA algorithm. Issa et al. [29] presented an enhanced version of SCA (ASCA-PSO) by embedding the particle swarm optimization algorithm in SCA. The ASCA-PSO algorithm makes full use of the exploitation ability of particle swarm optimization in the search space, which is stronger than that of SCA; in tests on several functions, the search performance of ASCA-PSO was found to be clearly superior to that of SCA and other recently proposed basic metaheuristic algorithms. Rizk-Allah et al. [30] proposed a multiorthogonal sine-cosine algorithm (MOSCA) based on a multiorthogonal search strategy (MOSS) for engineering design problems. The MOSCA algorithm eliminates the disadvantages of the basic SCA, namely its lack of exploitation ability and its tendency to become trapped in local optima.

In this paper, a modified sine-cosine algorithm (MSCA) based on neighborhood search and greedy Levy mutation is proposed in order to better balance the global exploration and local exploitation abilities. The improved algorithm makes improvements in the following three aspects. Firstly, both a linearly decreasing inertia weight and an exponentially decreasing conversion parameter are used to balance the global exploration and local exploitation abilities, which achieves a smooth transition of the algorithm from global exploration to local exploitation. Secondly, the guidance of random individuals near the optimal solution is fully used to allow the algorithm to jump out of the local optimum easily, which effectively prevents premature convergence and increases the diversity of the population. Thirdly, a greedy Levy mutation strategy is applied to the optimal individual to enhance the local exploitation ability of the algorithm. Compared with other swarm intelligence algorithms, the improved sine-cosine algorithm has better performance in terms of search precision, convergence speed, and stability.

2. Basic Sine-Cosine Algorithm

In the basic sine-cosine algorithm, the simple variation of sine and cosine function values is used to carry out the optimization search. Suppose the population size is $n$, the dimension of the search space is $d$, and the $i$th individual in the population is $X_i = (X_i^1, X_i^2, \dots, X_i^d)$. In each iteration, the update of $X_i^j$ is given by the following equation:

$$X_i^j(t+1)=\begin{cases}X_i^j(t)+r_1\sin(r_2)\left|r_3 P^j(t)-X_i^j(t)\right|, & r_4<0.5\\[2pt]X_i^j(t)+r_1\cos(r_2)\left|r_3 P^j(t)-X_i^j(t)\right|, & r_4\ge 0.5\end{cases}\tag{1}$$

where $t$ is the current iteration, $P^j(t)$ is the $j$th dimension value of the optimal individual at iteration $t$, and $X_i^j(t)$ is the $j$th dimension value of individual $i$ at iteration $t$. $r_1$, $r_2$, $r_3$, and $r_4$ are random numbers: $r_2$ obeys a uniform distribution on $[0, 2\pi]$, $r_3$ obeys a uniform distribution on $[0, 2]$, and $r_4$ obeys a uniform distribution on $[0, 1]$.

In (1), $r_1\sin(r_2)$ or $r_1\cos(r_2)$ jointly determines the global exploration and local exploitation behavior of the algorithm. When the value of $r_1\sin(r_2)$ or $r_1\cos(r_2)$ is greater than 1 or less than -1, the algorithm conducts a global exploration search; when the value is within the range $[-1, 1]$, the algorithm conducts a local exploitation search. Since the value of $\sin(r_2)$ or $\cos(r_2)$ is within the range $[-1, 1]$, the control parameter $r_1$ plays a crucial role, controlling the transition of the algorithm from global exploration to local exploitation. In the basic algorithm, the control parameter $r_1$ adopts the linearly decreasing method of (2) to guide the transition from global exploration to local exploitation:

$$r_1 = a - t\,\frac{a}{T}\tag{2}$$

where $a$ is a constant, $t$ is the current iteration, and $T$ is the maximum number of iterations.
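
For concreteness, the following is a minimal Python (NumPy) sketch of the basic SCA loop described by (1) and (2); the function name and its parameters are illustrative, not from the original paper.

import numpy as np

def sca(fitness, dim, lower, upper, n=100, n_iter=1000, a=2.0, seed=0):
    """Minimal basic sine-cosine algorithm (SCA), following Eqs. (1)-(2)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lower, upper, size=(n, dim))      # initial population
    fit = np.apply_along_axis(fitness, 1, X)
    best = X[np.argmin(fit)].copy()                   # optimal individual P
    best_fit = fit.min()
    for t in range(1, n_iter + 1):
        r1 = a - t * a / n_iter                       # Eq. (2): linear decrease
        for i in range(n):
            r2 = rng.uniform(0.0, 2.0 * np.pi, dim)
            r3 = rng.uniform(0.0, 2.0, dim)
            r4 = rng.uniform(0.0, 1.0, dim)
            step = np.abs(r3 * best - X[i])
            # Eq. (1): sine branch when r4 < 0.5, cosine branch otherwise
            X[i] = np.where(r4 < 0.5,
                            X[i] + r1 * np.sin(r2) * step,
                            X[i] + r1 * np.cos(r2) * step)
            X[i] = np.clip(X[i], lower, upper)        # cross-border processing
            f = fitness(X[i])
            if f < best_fit:                          # keep the best solution
                best, best_fit = X[i].copy(), f
    return best, best_fit

For instance, calling sca(lambda x: float(np.sum(x**2)), dim=30, lower=-100.0, upper=100.0) would minimize the Sphere benchmark.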

3. Modified Sine-Cosine Algorithm

3.1. Exponentially Decreasing Conversion Parameter

The parameter setting is crucial to the search performance of the basic sine-cosine algorithm, in which the control parameter $r_1$ controls the transition of the algorithm from global exploration to local exploitation. A larger value improves the global search ability of the algorithm, and a smaller value enhances its local exploitation ability. Therefore, $r_1$ is designed as the linearly decreasing method of (2) in the basic algorithm to balance the global exploration and local exploitation abilities. In the literature [31], an experimental contrast analysis is made of the linearly decreasing, parabolically decreasing, and exponentially decreasing methods in the basic algorithm, and the exponentially decreasing method is found to be superior to the other two in search performance. At the same time, the inertia weight of the current position remains unchanged in the iterative process of the basic algorithm, which may easily cause the population individuals to oscillate in the later period of the search. In this paper, both a linearly decreasing inertia weight and an exponentially decreasing conversion parameter are used on the basis of (1), which better balances the global exploration and local exploitation abilities of the algorithm. The update mode of the individuals is as follows:

$$X_i^j(t+1)=\begin{cases}\omega(t)\,X_i^j(t)+r_1\sin(r_2)\left|r_3 P^j(t)-X_i^j(t)\right|, & r_4<0.5\\[2pt]\omega(t)\,X_i^j(t)+r_1\cos(r_2)\left|r_3 P^j(t)-X_i^j(t)\right|, & r_4\ge 0.5\end{cases}\tag{3}$$

$$r_1 = a\,e^{-t/T}\tag{4}$$

$$\omega(t) = \omega_{\max}-(\omega_{\max}-\omega_{\min})\,\frac{t}{T}\tag{5}$$

where $t$ is the current iteration, $T$ is the maximum number of iterations, $P^j(t)$ is the $j$th dimension value of the optimal individual at iteration $t$, $X_i^j(t)$ is the $j$th dimension value of individual $i$ at the current iteration, and $\omega_{\max}$ and $\omega_{\min}$ are the maximum and minimum inertia weights, respectively.

It can be seen from (3) that the population individuals work together through the inertia weight and conversion parameter . The value of and is large in the early iterations, which is conducive to the global exploration of the algorithm. The values of and are small in later iterations, which are conducive to the local development of the algorithm so as to improve the searching precision and convergence speed of the algorithm.
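
As a quick illustration, the two schedules of (4) and (5) can be computed as follows (a sketch; the parameter values are assumptions, not taken from the paper).

import numpy as np

def conversion_parameter(t, n_iter, a=2.0):
    """Eq. (4): exponentially decreasing conversion parameter r1."""
    return a * np.exp(-t / n_iter)

def inertia_weight(t, n_iter, w_max=0.9, w_min=0.4):
    """Eq. (5): linearly decreasing inertia weight."""
    return w_max - (w_max - w_min) * t / n_iter

t = np.arange(1, 5001)
r1 = conversion_parameter(t, 5000)   # decays smoothly from ~a toward a/e
w = inertia_weight(t, 5000)          # decays linearly from 0.9 toward 0.4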

3.2. The Neighborhood Search of the Optimal Individual

In the basic sine-cosine algorithm, the search directions of the new individuals are determined solely by the optimal individual of the population during the updating process. Once the global optimal individual falls into a local optimum, the whole algorithm easily suffers premature convergence. Therefore, in order to reduce the possibility of the algorithm getting into a local optimum, the guiding role of better individuals that may exist near the optimal solution should be used. In this paper, random individuals near the optimal solution are used to replace the current optimal individual in guiding the search, so as to improve the possibility of the algorithm jumping out of a local optimum. The sine-cosine strategy with neighborhood search of the optimal individual is

$$X_i^j(t+1)=\begin{cases}\omega(t)\,X_i^j(t)+r_1\sin(r_2)\left|r_3\,(1+\lambda\varepsilon)\,P^j(t)-X_i^j(t)\right|, & r_4<0.5\\[2pt]\omega(t)\,X_i^j(t)+r_1\cos(r_2)\left|r_3\,(1+\lambda\varepsilon)\,P^j(t)-X_i^j(t)\right|, & r_4\ge 0.5\end{cases}\tag{6}$$

where $\lambda$ is a uniformly distributed random number within $(-1, 1)$ and $\varepsilon$ is the disturbance coefficient. The other parameters are in line with (3).

In the neighborhood search of the optimal individual, the current optimal individual $P^j(t)$ is taken as the center and $\varepsilon P^j(t)$ as the step size, so the algorithm searches within the interval between $(1-\varepsilon)P^j(t)$ and $(1+\varepsilon)P^j(t)$. This effectively expands the search orientation and increases the probability of the algorithm jumping out of a local optimum.
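
A minimal Python sketch of this replacement, assuming the multiplicative perturbation form of (6) reconstructed above:

import numpy as np

def neighborhood_guide(best, eps, rng):
    """Replace the optimal individual P by a random neighbor (1 + lambda*eps)*P,
    with lambda ~ U(-1, 1) drawn per dimension, as in Eq. (6)."""
    lam = rng.uniform(-1.0, 1.0, size=best.shape)
    return (1.0 + lam * eps) * best

rng = np.random.default_rng(0)
P = np.array([1.5, -2.0, 0.3])
P_hat = neighborhood_guide(P, eps=0.01, rng=rng)  # perturbed guide used in the update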

3.3. Greedy Levy Mutation

In the basic sine-cosine algorithm, the optimal individual leads the search direction of the whole population. However, the optimal individual lacks experiential knowledge and self-learning ability, so it can hardly be improved effectively and may drag the search into the domain of a local optimum. In order to further prevent the basic sine-cosine algorithm from getting into a local optimum and to eliminate the defect of low efficiency in the later period, a greedy Levy mutation strategy is proposed for the optimal individual. Through the mutation operation, the population can jump away from the previously found optimal position, which preserves the diversity of the population. The mutation method is as follows:

$$P_{\text{new}}^j(t) = P^j(t) + \eta_j \cdot L_j\tag{7}$$

where $L_j$ is a random number that obeys the Levy distribution, $\eta_j$ is the coefficient of self-adapting variation, and $P^j(t)$ is the $j$th dimension value of the optimal individual at iteration $t$. The mutated individual is accepted greedily, that is, only if it improves the fitness. (The basic SCA procedure that this strategy modifies is summarized in Algorithm 1.)

Set the initial parameters, including the total population size n, the maximum number of
generations N_iter, the control parameter a, et al.
Generate a population X_i (i = 1, 2, ..., n).
Calculate the fitness and find the best solution P of the population.
for t = 1 : N_iter
    Update r1 according to Eq. (2).
    for i = 1 : n
        for j = 1 : d
            Generate random numbers r2, r3, and r4.
            if r4 < 0.5
                X_i^j(t+1) = X_i^j(t) + r1·sin(r2)·|r3·P^j(t) - X_i^j(t)|
            else
                X_i^j(t+1) = X_i^j(t) + r1·cos(r2)·|r3·P^j(t) - X_i^j(t)|
            end if
        end for
        Cross-border processing for X_i(t+1).
        Calculate the fitness f(X_i(t+1)).
        if f(X_i(t+1)) < f(P) then P = X_i(t+1)
        end if
    end for
end for
Output the best solution P.

Algorithm 1: The basic sine-cosine algorithm (SCA).

3.3.1. Random Number Generated According to the Levy Distribution

Levy flight is characterized by frequent short-distance moves and occasional long-distance jumps, which is suitable for describing the activity patterns of many gregarious organisms. In this paper, this characteristic of Levy flight is used to form a mutation mechanism. The mechanism ensures that the proposed algorithm searches sufficiently near the area of the optimal individual while retaining a certain amount of mutation, which improves the global search ability of the algorithm. As the integral of the probability density function of the Levy distribution is difficult to compute, it has been proved that the Mantegna algorithm can be used to achieve an equivalent calculation [32]. That is,

$$L = \frac{\mu}{|\nu|^{1/\beta}}\tag{8}$$

where $\mu \sim N(0, \sigma_\mu^2)$ and $\nu \sim N(0, \sigma_\nu^2)$, and $\sigma_\mu$ and $\sigma_\nu$ can be calculated based on

$$\sigma_\mu = \left[\frac{\Gamma(1+\beta)\sin(\pi\beta/2)}{\Gamma\!\left(\frac{1+\beta}{2}\right)\beta\,2^{(\beta-1)/2}}\right]^{1/\beta},\qquad \sigma_\nu = 1\tag{9}$$

where $\Gamma$ is the standard Gamma function.
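
A self-contained Python sketch of Mantegna's method for drawing Levy-distributed step sizes follows; the stability index beta = 1.5 is a common choice, assumed here rather than taken from the paper.

import numpy as np
from math import gamma, sin, pi

def levy_mantegna(size, beta=1.5, rng=None):
    """Draw Levy-stable step sizes via Mantegna's algorithm, Eqs. (8)-(9)."""
    rng = rng or np.random.default_rng()
    sigma_mu = (gamma(1 + beta) * sin(pi * beta / 2)
                / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu = rng.normal(0.0, sigma_mu, size)   # numerator sample, sigma_mu from Eq. (9)
    nu = rng.normal(0.0, 1.0, size)        # denominator sample, sigma_nu = 1
    return mu / np.abs(nu) ** (1 / beta)   # Eq. (8)

steps = levy_mantegna(5, rng=np.random.default_rng(42))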

3.3.2. Coefficient of Self-Adapting Variation

The swarm intelligence optimization algorithm is generally divided into two stages in the iterative process, namely, global exploration at the earlier stage and local exploitation at the later stage. Therefore, in order to obtain a large variation for global disturbance at the earlier stage and to shrink the variation range to accelerate local search at the later stage, the proposed algorithm uses a self-adapting mutation strategy. The self-adapting variation control coefficient $\eta_j$ is given by

$$\eta_j = \delta\,e^{-t/T}\,\frac{d_j}{d_j^{\max}}\tag{10}$$

$$d_j = \left|P^j(t) - \frac{1}{n}\sum_{i=1}^{n}X_i^j(t)\right|\tag{11}$$

$$d_j^{\max} = \max_{1\le i\le n} X_i^j(t) - \min_{1\le i\le n} X_i^j(t)\tag{12}$$

where $t$ is the current iteration, $T$ is the maximum iteration, $\delta$ is the coefficient, $d_j$ is the difference between the $j$th dimension value of the current optimal individual and the $j$th dimension average value of the population individuals, and $d_j^{\max}$ is the maximum distance of the $j$th dimension in the population.

From (10)–(12), it can be seen that the coefficient $\eta_j$ accounts for both the iterative process and the population diversity. The iterative part is controlled by the factor $e^{-t/T}$, and the diversity is adjusted by the factor $d_j/d_j^{\max}$. In the early iterations, the individuals have poor fitness and large diversity, so a large coefficient causes sufficient disturbance to the population and enhances the global search ability. As the iterations go on, the individuals in the population perform better and the coefficient gradually decreases, which ensures that the algorithm converges to the optimal value smoothly and reduces search oscillation around the optimal value. The solution method is shown in Algorithm 2.

Set the parameters δ and β.
Obtain the best individual P and its fitness value f(P).
for j = 1 : d
    Compute the jth dimension average value of the population;
    Calculate the value of d_j according to Eq. (11).
    Calculate the value of d_j^max according to Eq. (12).
    Calculate the value of η_j according to Eq. (10).
    Calculate the Levy random number L_j according to Eq. (8).
    P_new = P; P_new^j = P^j + η_j·L_j;
    if f(P_new) < f(P) then P = P_new
    end
end

Algorithm 2: The greedy Levy mutation of the optimal individual.
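
Putting the pieces of Algorithm 2 together, the following is a hedged Python sketch of the per-dimension greedy Levy mutation; it follows the reconstruction of Eqs. (10)-(12) above and reuses the levy_mantegna helper from Section 3.3.1, with delta as an assumed default.

import numpy as np

def greedy_levy_mutation(X, best, best_fit, fitness, t, n_iter,
                         delta=0.01, rng=None):
    """Per-dimension greedy Levy mutation of the optimal individual
    (sketch of Algorithm 2; levy_mantegna is defined in Section 3.3.1)."""
    rng = rng or np.random.default_rng()
    mean = X.mean(axis=0)                             # population mean, per dimension
    d = np.abs(best - mean)                           # Eq. (11)
    d_max = X.max(axis=0) - X.min(axis=0)             # Eq. (12)
    eta = delta * np.exp(-t / n_iter) * d / np.maximum(d_max, 1e-12)  # Eq. (10)
    for j in range(best.size):
        trial = best.copy()
        trial[j] = best[j] + eta[j] * levy_mantegna(1, rng=rng)[0]    # Eq. (7)
        f = fitness(trial)
        if f < best_fit:                              # greedy acceptance
            best, best_fit = trial, f
    return best, best_fit
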
3.4. The Modified Sine-Cosine Algorithm Based on the Greedy Levy Mutation

The procedure of the improved sine-cosine algorithm based on neighborhood search and greedy Levy mutation is shown in Algorithm 3.

Set the initial parameters, including the total population size n, the maximum number of
generations N_iter, the control parameter a, ω_max, ω_min, ε, δ, et al.
Generate a population X_i (i = 1, 2, ..., n).
Calculate the fitness and find the best solution P of the population.
for t = 1 : N_iter
    Calculate the value of r1 according to Eq. (4).
    Calculate the value of ω(t) according to Eq. (5).
    for i = 1 : n
        for j = 1 : d
            Generate random numbers r2, r3, r4, and λ.
            if r4 < 0.5
                X_i^j(t+1) = ω(t)·X_i^j(t) + r1·sin(r2)·|r3·(1+λε)·P^j(t) - X_i^j(t)|
            else
                X_i^j(t+1) = ω(t)·X_i^j(t) + r1·cos(r2)·|r3·(1+λε)·P^j(t) - X_i^j(t)|
            end if
        end for
        Cross-border processing for X_i(t+1).
        Calculate the fitness f(X_i(t+1)).
        if f(X_i(t+1)) < f(P) then P = X_i(t+1)
        end if
    end for
    Perform the greedy Levy mutation for the optimal individual (described in Algorithm 2).
end for
Output the best solution P.

Algorithm 3: The modified sine-cosine algorithm (MSCA).

For the basic SCA algorithm, the time complexity of creating the initial population is $O(n \cdot d)$, the time complexity of performing the sine and cosine operations is $O(N_{iter} \cdot n \cdot d)$, and that of the cross-border processing is $O(N_{iter} \cdot n \cdot d)$. So the time complexity of the basic SCA algorithm is $O(n \cdot d) + O(N_{iter} \cdot n \cdot d) = O(N_{iter} \cdot n \cdot d)$. In the MSCA algorithm, the time complexity of creating the initial population is $O(n \cdot d)$, and the time complexity of calculating $r_1$ and $\omega(t)$ is $O(N_{iter})$. The time complexity of performing the sine and cosine operations based on the neighborhood search is $O(N_{iter} \cdot n \cdot d)$. The time complexity of the cross-border processing is $O(N_{iter} \cdot n \cdot d)$, and the time complexity of the greedy Levy mutation operation is $O(N_{iter} \cdot d)$. Therefore, the time complexity of the MSCA algorithm is $O(N_{iter} \cdot n \cdot d)$. Obviously, the time complexity of the MSCA algorithm is higher than that of the standard SCA algorithm, while both are of the same order of magnitude.
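
Combining the three strategies, a compact end-to-end Python sketch of MSCA is given below. It is a sketch under the reconstructed equations above, not a definitive implementation: it reuses the greedy_levy_mutation helper (and, through it, levy_mantegna) from Section 3.3, and all default parameter values are assumptions.

import numpy as np

def msca(fitness, dim, lower, upper, n=100, n_iter=5000, a=2.0,
         w_max=0.9, w_min=0.4, eps=0.01, delta=0.01, seed=0):
    """End-to-end sketch of the modified sine-cosine algorithm (Algorithm 3)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lower, upper, size=(n, dim))       # initial population
    fit = np.apply_along_axis(fitness, 1, X)
    best, best_fit = X[np.argmin(fit)].copy(), fit.min()
    for t in range(1, n_iter + 1):
        r1 = a * np.exp(-t / n_iter)                   # Eq. (4): exponential decrease
        w = w_max - (w_max - w_min) * t / n_iter       # Eq. (5): linear decrease
        for i in range(n):
            r2 = rng.uniform(0.0, 2.0 * np.pi, dim)
            r3 = rng.uniform(0.0, 2.0, dim)
            r4 = rng.uniform(0.0, 1.0, dim)
            lam = rng.uniform(-1.0, 1.0, dim)
            guide = (1.0 + lam * eps) * best           # neighborhood search, Eq. (6)
            step = np.abs(r3 * guide - X[i])
            X[i] = np.where(r4 < 0.5,
                            w * X[i] + r1 * np.sin(r2) * step,
                            w * X[i] + r1 * np.cos(r2) * step)
            X[i] = np.clip(X[i], lower, upper)         # cross-border processing
            f = fitness(X[i])
            if f < best_fit:
                best, best_fit = X[i].copy(), f
        # greedy Levy mutation of the optimal individual (Algorithm 2 sketch)
        best, best_fit = greedy_levy_mutation(X, best, best_fit, fitness,
                                              t, n_iter, delta, rng)
    return best, best_fit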

4. Experimental Simulation

In order to verify the performance of MSCA, experiments are conducted from the following three aspects: (1) contrast experiments between MSCA and particle swarm optimization (PSO) [8], differential evolution (DE) [9], the bat algorithm (BA) [33, 34], teaching-learning-based optimization (TLBO) [35, 36], the grey wolf optimizer (GWO) [37], and the basic SCA algorithm; (2) analysis of the effectiveness of the three improvement strategies; (3) analysis of the disturbance coefficient in the optimal-individual neighborhood search strategy and the variation coefficient in the greedy mutation strategy, respectively, together with a discussion of the effectiveness of the algorithm, so that specific reference values for these parameters can be determined.

4.1. Test Function and Experimental Platform
4.1.1. Experimental Platform

In order to provide a comprehensive and fair test environment, the simulation experiments are conducted on a machine with the Windows 10 operating system, an Intel(R) Core(TM) i5-4210U CPU (quad core) with a clock frequency of 2.4 GHz, and 4 GB of RAM, using MATLAB 2016b as the programming tool.

4.1.2. Benchmark Functions

In order to validate the performance of the presented algorithm, 20 benchmark test functions from the literature [38, 39], which have been widely used in such tests, are selected as experimental subjects. The selected benchmark functions can be categorized into three types: unimodal high-dimensional functions ($f_1$–$f_7$), multimodal high-dimensional functions ($f_8$–$f_{13}$), and multimodal low-dimensional functions ($f_{14}$–$f_{20}$). $f_1$–$f_7$ are unimodal high-dimensional functions; they can be used to investigate the optimization precision of an algorithm, since algorithms can hardly converge exactly to their global optimal points. $f_8$–$f_{13}$ are multimodal high-dimensional functions with several local extreme points, which can be used to test the global search performance of an algorithm and its ability to avoid premature convergence. $f_{14}$–$f_{20}$ are multimodal low-dimensional functions. As the optimal value of most test functions is zero, some test functions with nonzero optimal values are also selected. The function name, dimension, search range, and theoretical optimal value are shown in Table 1.


Table 1: The benchmark test functions.

No. | Name | Dimension | Scope | Optimum
f1 | Sphere Model | 30 | [-100, 100] | 0
f2 | Schwefel's Problem 2.22 | 30 | [-10, 10] | 0
f3 | Schwefel's Problem 1.2 | 30 | [-100, 100] | 0
f4 | Schwefel's Problem 2.21 | 30 | [-100, 100] | 0
f5 | Generalized Rosenbrock's Function | 30 | [-30, 30] | 0
f6 | Step Function | 30 | [-100, 100] | 0
f7 | Quartic Function, i.e., Noise | 30 | [-1.28, 1.28] | 0
f8 | Generalized Schwefel's Problem 2.26 | 30 | [-500, 500] | -418.9829n
f9 | Generalized Rastrigin's Function | 30 | [-5.12, 5.12] | 0
f10 | Ackley's Function | 30 | [-32, 32] | 0
f11 | Generalized Griewank Function | 30 | [-600, 600] | 0
f12 | Generalized Penalized Function | 30 | [-50, 50] | 0
f13 | Generalized Penalized Function | 30 | [-50, 50] | 0
f14 | Shekel's Foxholes Function | 2 | [-65.536, 65.536] | 0.998
f15 | Kowalik's Function | 4 | [-5, 5] | 0.0003075
f16 | Six-Hump Camel-Back Function | 2 | [-5, 5] | -1.0316285
f17 | Branin Function | 2 | [-5, 10] × [0, 15] | 0.398
f18 | Goldstein-Price Function | 2 | [-2, 2] | 3
f19 | Hartman's Function | 3 | [0, 1] | -3.8628
f20 | Hartman's Function | 6 | [0, 1] | -3.32
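
As an illustration of how such benchmarks look in code, two of the twenty functions are sketched below from their standard definitions.

import numpy as np

def sphere(x):
    """f1, Sphere Model: global minimum 0 at x = 0."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """f9, Generalized Rastrigin's Function: global minimum 0 at x = 0."""
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))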

4.2. Contrastive Analysis of Sine-Cosine Algorithm Based on Greedy Levy Mutation

In order to evaluate the performance of the proposed algorithm, six algorithms are selected for comparison in the experiment: PSO, DE, BA, TLBO, GWO, and SCA. The contrast algorithms use the same parameters as in their original literature; the parameter settings are shown in Table 2. The parameters of the MSCA algorithm are set as follows: the population size is 100, the maximum inertia weight is 0.9, the minimum inertia weight is 0.4, and the two strategy-specific coefficients are set to 30 and 0.01; the other parameters are consistent with the basic SCA. For each test function, the number of iterations is 5000, and each algorithm is run independently 20 times. The performance of each algorithm is measured by four indexes: the optimal value, the average value, the worst value, and the variance. The statistical results are shown in Tables 3–5.
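
A small sketch of the evaluation protocol described above (20 independent runs, four summary statistics); the msca and sphere functions are the sketches from earlier sections, and the per-run seed is an assumption made to keep runs independent.

import numpy as np

def evaluate(algorithm, runs=20, **kwargs):
    """Run an optimizer independently `runs` times and report the four indexes:
    optimal (best), average, worst, and variance of the returned fitness values."""
    results = np.array([algorithm(seed=r, **kwargs)[1] for r in range(runs)])
    return results.min(), results.mean(), results.max(), results.var()

# Example: summary statistics of the MSCA sketch on the Sphere function f1.
stats = evaluate(msca, runs=20, fitness=sphere, dim=30,
                 lower=-100, upper=100, n_iter=5000)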


Table 2: Parameter settings of the contrast algorithms.

Algorithms | Parameters
PSO | the population size is 100, c1 = 1.49445, c2 = 1.49445, …
DE | the population size is 100, pCR = 0.2, …